Smart glasses offer a virtual keyboard for typing text
K-Glass, the augmented-reality (AR) smart glasses first developed by the Korea Advanced Institute of Science and Technology (KAIST) in 2014, with a second version released in 2015, are back with an even stronger model. The latest version, which KAIST researchers are calling K-Glass 3, allows users to text a message or type in keywords for Internet surfing by offering a virtual keyboard for text, and even one for a piano.
Currently, most wearable head-mounted displays (HMDs) suffer from a lack of rich user interfaces, short battery life, and heavy weight. Some HMDs, such as Google Glass, use a touch panel and voice commands as an interface, but they are considered merely an extension of smartphones and are not optimised for wearable smart glasses. Recently, gaze recognition was proposed for HMDs, including K-Glass 2, but gaze alone is insufficient for a natural UI and UX such as gesture recognition, because of its limited interactivity and lengthy gaze-calibration time, which can take several minutes.
As a solution, Professor Hoi-Jun Yoo and his team from the Electrical Engineering Department recently developed K-Glass 3 with a low-power natural UI and UX processor that enables convenient typing and screen pointing on HMDs with bare hands. The processor comprises a pre-processing core that implements stereo vision, seven deep-learning cores that accelerate real-time scene recognition to within 33 milliseconds, and a rendering engine for the display.
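To give a concrete picture of how such a three-stage, frame-by-frame flow could be organised, the Python sketch below mirrors the pipeline described above. The function names, their placeholder bodies, and the 33-millisecond budget check are illustrative assumptions for this article, not the actual K-Glass 3 firmware.

```python
import time

FRAME_BUDGET_S = 0.033  # ~33 ms per frame, the real-time target cited for scene recognition

def preprocess_stereo(left_img, right_img):
    """Placeholder for the pre-processing core: derive a depth map from a stereo pair."""
    return None  # sketch only

def recognise_scene(depth_map):
    """Placeholder for the seven deep-learning cores: recognise gestures / scene content."""
    return None  # sketch only

def render_overlay(result):
    """Placeholder for the rendering engine: draw the AR overlay, e.g. a virtual keyboard."""
    pass  # sketch only

def process_frame(left_img, right_img):
    start = time.monotonic()
    depth = preprocess_stereo(left_img, right_img)   # stage 1: stereo vision
    result = recognise_scene(depth)                  # stage 2: deep-learning recognition
    render_overlay(result)                           # stage 3: display
    elapsed = time.monotonic() - start
    if elapsed > FRAME_BUDGET_S:
        print(f"Frame over budget: {elapsed * 1000:.1f} ms")
```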
The stereo-vision camera, located on the front of K-Glass 3, works in a manner similar to 3D sensing in human vision. The camera's two lenses, displaced horizontally from one another much like the left and right eyes, capture the same objects or scenes and combine the two different images to extract spatial depth information, which is necessary to reconstruct 3D environments. The camera's vision algorithm consumes only 20 milliwatts of power on average, allowing the Glass to operate for more than 24 hours without interruption.
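As an illustration of the underlying principle (not of the K-Glass 3 implementation itself), the sketch below uses OpenCV's standard block-matching algorithm to turn a horizontally displaced image pair into a disparity map, from which depth follows. The file names, focal length, and baseline are assumed values for the example.

```python
import cv2
import numpy as np

# Load a rectified stereo pair (file names are illustrative).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Classic block matching: for each pixel, estimate how far a small patch has
# shifted horizontally between the two views (the disparity).
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth is inversely proportional to disparity: depth = f * B / d,
# where f is the focal length in pixels and B the baseline between the lenses.
focal_px, baseline_m = 700.0, 0.06   # assumed values for illustration
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```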
The research team adopted deep-learning multi-core technology dedicated to mobile devices to recognise users' gestures from the depth information. This technology has greatly improved the Glass's recognition accuracy for images and speech, while shortening the time needed to process and analyse data. In addition, the Glass's multi-core processor is smart enough to go idle when it detects no motion from the user, and it executes the complex deep-learning algorithms with minimal power when needed, achieving high performance.
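One way to picture this idle behaviour is a simple motion gate: the expensive recognition step runs only when consecutive depth frames differ enough to suggest the user is actually moving. The Python sketch below is a hedged illustration of that idea; the threshold value and the dummy gesture classifier are placeholders, not details of the actual chip.

```python
import numpy as np

MOTION_THRESHOLD = 5.0  # mean absolute frame difference; assumed value for the sketch

def run_gesture_network(depth_frame):
    """Placeholder for the deep-learning gesture classifier."""
    return "keypress"  # dummy result

def process_stream(depth_frames):
    previous = None
    for frame in depth_frames:
        if previous is not None:
            motion = np.abs(frame.astype(np.float32) - previous).mean()
            if motion < MOTION_THRESHOLD:
                previous = frame
                continue          # no user motion: stay idle, skip the expensive network
        gesture = run_gesture_network(frame)   # motion detected: spend power on recognition
        print("Recognised:", gesture)
        previous = frame
```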
Professor Yoo said, "We have succeeded in fabricating a low-power multi-core processor that consumes only 126.1 milliwatts of power with a high efficiency rate. It is essential to develop a smaller, lighter, and low-power processor if we want smart glasses and wearable devices to become part of everyday life. K-Glass 3's more intuitive UI and convenient UX permit users to enjoy enhanced AR experiences such as a keyboard or a better, more responsive mouse."
Along with the research team, UX Factory, a Korean UI and UX developer, participated in the K-Glass 3 project.