Gesture-based computer interfaces

Many tech developers tend to take on bigger and bigger challenges after they have cracked their last one, because that is the only way to ensure that technology keeps evolving and pushing the limits of what we think is possible. Microsoft is doing exactly that: it is developing a gesture-based computer interface that will translate our hand gestures into their renderings on the screen.

This technology goes by several names, but most often it is called either a gesture-based interface or gesture recognition, since that is essentially what the program does: it recognizes gestures, whether hand gestures or the nowadays ever so popular face gestures, and interprets them on some kind of output device. This interpretation usually happens through mathematical algorithms, but there is still a lot of work to do in this field, since recognizing and identifying hand and face gestures is hard even for humans, let alone a computer.
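To make that a little more concrete, here is a minimal sketch of the rule-based end of such algorithms: classifying a hand pose from a handful of 3-D landmark positions. The landmark layout, sample coordinates and thresholds are illustrative assumptions, not taken from any particular product's API.

```python
import math

# Hypothetical input: wrist plus five fingertip positions in metres,
# roughly as a depth camera or hand tracker might report them.
hand = {
    "wrist":  (0.00, 0.00, 0.40),
    "thumb":  (0.03, 0.05, 0.38),
    "index":  (0.02, 0.11, 0.37),
    "middle": (0.00, 0.12, 0.37),
    "ring":   (-0.02, 0.11, 0.37),
    "pinky":  (-0.04, 0.09, 0.38),
}

def classify(hand, extend_threshold=0.08, pinch_threshold=0.03):
    """Very rough rule-based classifier: pinch, open palm, or fist."""
    wrist = hand["wrist"]
    fingers = ["index", "middle", "ring", "pinky"]
    # A finger counts as "extended" if its tip is far enough from the wrist.
    extended = sum(math.dist(hand[f], wrist) > extend_threshold for f in fingers)
    if math.dist(hand["thumb"], hand["index"]) < pinch_threshold:
        return "pinch"
    if extended >= 3:
        return "open palm"
    return "fist"

print(classify(hand))  # -> "open palm" for the sample landmarks above
```

Real systems such as DeepHand replace these hand-written rules with learned models, but the input (tracked points on the hand) and the output (a recognized gesture) stay the same in spirit.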

Progress towards this technology can be seen in DeepHand, a system developed at Indiana-based Purdue University, which uses a depth-sensing camera to read the movements a human hand makes and then, with the help of a deep-learning network, translates them into accurate renderings in a virtual-reality environment. A more consumer-friendly take on this interface is the Leap Motion Controller, a motion sensor that connects to software on your computer and lets people control their computers with their movements instead of their keyboard and mouse. On top of that, Leap Motion has released several apps that can be used with the controller, allowing you to experience a gesture-based interface for yourself.
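As a rough illustration of what "controlling the computer with movements instead of a mouse" involves, the sketch below maps a tracked palm position to screen coordinates. The interaction box, the simulated sensor readings and the function names are assumptions made for this example, not the Leap Motion SDK.

```python
SCREEN_W, SCREEN_H = 1920, 1080

# Hypothetical "interaction box" above the sensor, in millimetres.
BOX_X = (-120, 120)   # left/right range of the palm
BOX_Y = (100, 340)    # height range of the palm

def palm_to_cursor(palm_x_mm, palm_y_mm):
    """Linearly map a palm position inside the box to screen pixels."""
    nx = (palm_x_mm - BOX_X[0]) / (BOX_X[1] - BOX_X[0])
    ny = (palm_y_mm - BOX_Y[0]) / (BOX_Y[1] - BOX_Y[0])
    nx = min(max(nx, 0.0), 1.0)   # clamp to the box
    ny = min(max(ny, 0.0), 1.0)
    # Higher hand = higher on screen, so the vertical axis is flipped.
    return int(nx * (SCREEN_W - 1)), int((1.0 - ny) * (SCREEN_H - 1))

# Simulated frames: the palm sweeps from low-left to high-right.
for palm in [(-120, 100), (0, 220), (120, 340)]:
    print(palm, "->", palm_to_cursor(*palm))
```

A real controller streams many such frames per second, and the accompanying software adds smoothing and gesture detection on top of this basic mapping.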

Microsoft, too, is currently working on hand-tracking, haptic-technology and gesture-input interface systems, so that in the future we will be able to interact with virtual objects on our computers just as we interact with objects in the real world. It could be exactly what people are looking for to improve how we use computers and other similar devices. Imagine not needing a keyboard or a mouse at all, yet still being able to use the computer just as well with only our hands. The other great thing about gesture recognition is that it will also make it possible for our technology to anticipate our needs and respond to us better, so we can have a better experience using any computer-like device.
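Purely as an illustration of that kind of interaction, the following sketch shows how a gesture-input system might let a pinching hand pick up and carry a virtual object. The tracker output, names, coordinates and thresholds are all hypothetical; a real system would get the pinch state and hand position from its tracking hardware.

```python
import math

GRAB_RADIUS = 0.05  # metres: how close the hand must be to pick an object up

class VirtualObject:
    def __init__(self, name, position):
        self.name = name
        self.position = position  # (x, y, z) in metres
        self.held = False

def update(obj, hand_position, pinching):
    """Move the object with the hand while it is pinched within reach."""
    if pinching and (obj.held or math.dist(obj.position, hand_position) < GRAB_RADIUS):
        obj.held = True
        obj.position = hand_position  # the object follows the hand
    else:
        obj.held = False

cube = VirtualObject("cube", (0.10, 0.20, 0.40))
frames = [
    ((0.09, 0.21, 0.40), True),   # pinch next to the cube -> grab it
    ((0.00, 0.30, 0.35), True),   # still pinching -> the cube follows the hand
    ((0.00, 0.30, 0.35), False),  # release -> the cube stays where it was dropped
]
for hand_position, pinching in frames:
    update(cube, hand_position, pinching)
    print(cube.name, cube.position, "held" if cube.held else "released")
```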