Gesture recognition technology, such as the Microsoft Kinect, is an emerging field that is poised to play a large part in the future of tech. Traditionally, human/machine interaction has been achieved through input devices such as keypads, keyboards, joysticks, and the computer mouse, all of which are built from arrays of pushbuttons or basic potentiometer technology. More recently, we have seen a transition to touch screens and other touch-based interfaces.
If current trends continue, we will increasingly use gestures to control and interact with computers and other technology.
The Microsoft Kinect kickstarted this trend when it was released with the intent of making video games more interactive. Hackers quickly got hold of these sensors, however, and began to expose their potential for practical applications. We have seen interactive walls and rooms, virtual fitting technologies, gesture-controlled computer programs, and much more. More recently, the Leap Motion controller was introduced, promising much greater precision in gesture recognition.
Both of these technologies can make a room or nearby area interactive, but WiSee, an emerging gesture recognition technology, aims to extend recognition beyond a single room and through walls.
The device, created by computer scientists at the University of Washington, uses a modified standard wireless router, together with other nearby WiFi-enabled devices, to track Doppler shifts created by human movement. The modified router receives signals from all WiFi-capable devices in the area, and the software detects frequency shifts within those incoming signals. These Doppler frequency shifts are very small, only a few hertz, but the researchers have developed algorithms sensitive enough to detect them.
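To get a feel for why the shifts are only a few hertz, a back-of-the-envelope sketch helps. A signal reflected off a moving body is shifted by roughly twice the one-way Doppler amount, f_d = 2·v·f_c/c. The carrier frequency and gesture speed below are illustrative assumptions, not figures from the WiSee team:

```python
def doppler_shift_hz(velocity_mps, carrier_hz, c=3.0e8):
    """Approximate Doppler shift for a signal reflected off a moving body.

    The reflection doubles the shift relative to a one-way path:
    f_d = 2 * v * f_c / c.
    """
    return 2.0 * velocity_mps * carrier_hz / c

# An assumed slow hand gesture (~0.25 m/s) on a 2.4 GHz Wi-Fi channel:
shift = doppler_shift_hz(0.25, 2.4e9)
print(round(shift, 1))  # 4.0 -- a few hertz, consistent with the article
```

Against a 2.4 GHz carrier, that is a shift of only a few parts per billion, which is why detecting it requires dedicated algorithms rather than off-the-shelf spectrum analysis.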
As a safeguard, the software has been designed to require a specific gesture sequence before it begins accepting commands. Someone dancing in a room or falling down will therefore not accidentally turn off the lights or turn on the music. The prototype has been tested with a set of nine different gestures in two different types of environments.
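The activation-sequence idea can be sketched as a simple gate in front of the gesture classifier: recognized gestures are discarded until a specific preamble appears, after which commands pass through. This is an illustrative sketch only; the gesture names and sequence length are made up, not WiSee's actual protocol:

```python
class GestureGate:
    """Ignore classified gestures until a wake-up preamble is seen."""

    def __init__(self, preamble):
        self.preamble = list(preamble)
        self.buffer = []       # sliding window of recent gestures
        self.active = False

    def observe(self, gesture):
        """Feed one classified gesture; return it only once the gate is open."""
        if self.active:
            return gesture     # pass commands through after activation
        self.buffer.append(gesture)
        self.buffer = self.buffer[-len(self.preamble):]
        if self.buffer == self.preamble:
            self.active = True # preamble matched; accept commands from now on
        return None

# Hypothetical wake-up sequence of two gestures:
gate = GestureGate(["push", "pull"])
results = [gate.observe(g) for g in ["wave", "push", "pull", "swipe"]]
print(results)  # [None, None, None, 'swipe']
```

Incidental motion ("wave") and the preamble itself produce no commands; only gestures after the matched sequence are acted on.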
According to the team, the system has demonstrated an accuracy of 94 percent. Additionally, each person's movement within a room is tracked individually, so multiple people can be present without confusing the system. The system still has much development to undergo.