A collaboration between researchers from Microsoft Research Asia and the Institute of Computing Technology (CAS) has produced a sign-language communicator that uses the Kinect sensor.
The idea is nothing new to Microsoft's researchers, as the company filed patents related to sign language early in the Kinect's development. The researchers' system allows users who know sign language and those who don't to interact with one another by translating sign into a wide variety of languages and back again.
The system has two modes. In Translation Mode, it translates American Sign Language (it is capable of handling other sign languages as well) into text or speech for non-signing users. In Communication Mode, it translates verbal language into sign through an onscreen avatar: the hearing user types sentences on a keyboard, and the system renders them as sign, text, or both. Users on both ends do not have to wait for one another to finish before responding, as the translation happens almost in real time, which is incredible, to say the least.
The research team developed specialized algorithms that track a deaf or hard-of-hearing user's hands while signing using "3D motion-trajectory alignment"; once the motions have been analyzed, they are matched against words, phrases, or even whole sentences. The system works surprisingly well, and tests have shown that hearing-impaired and hearing users can communicate at a natural pace.
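The article does not spell out how the trajectory alignment works, but matching a tracked 3D hand path against stored sign templates is commonly done with dynamic time warping (DTW), which tolerates differences in signing speed. The sketch below is a minimal, hypothetical illustration of that idea, not the researchers' actual algorithm: the trajectories, template dictionary, and sign labels are all invented for the example.

```python
import math

def dtw_distance(traj_a, traj_b):
    """Dynamic time warping distance between two 3D point sequences.

    Allows one trajectory to be locally stretched or compressed in time,
    so a slow and a fast performance of the same sign still align well.
    """
    n, m = len(traj_a), len(traj_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(traj_a[i - 1], traj_b[j - 1])  # Euclidean step cost
            cost[i][j] = d + min(cost[i - 1][j],      # skip a point in traj_a
                                 cost[i][j - 1],      # skip a point in traj_b
                                 cost[i - 1][j - 1])  # match both points
    return cost[n][m]

def classify_sign(observed, templates):
    """Return the label of the template trajectory that best aligns with `observed`."""
    return min(templates, key=lambda label: dtw_distance(observed, templates[label]))

# Hypothetical templates: idealized hand paths (x, y, z) for two made-up signs.
templates = {
    "hello":  [(0.0, 0.0, 0.0), (0.1, 0.2, 0.0), (0.2, 0.4, 0.0)],
    "thanks": [(0.0, 0.0, 0.0), (0.0, -0.2, 0.1), (0.0, -0.4, 0.2)],
}

# A noisy observed trajectory, as a skeletal tracker might report it.
observed = [(0.0, 0.05, 0.0), (0.12, 0.22, 0.0), (0.21, 0.38, 0.0)]
print(classify_sign(observed, templates))  # prints "hello"
```

A real system would match against thousands of templates per joint and feed the per-sign scores into a language model to resolve phrases and sentences, but the core alignment step looks much like this.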
The system's initial success was attributed not only to the researchers themselves, but also to teachers and hearing-impaired students from Beijing Union University, who provided "real-world" sign-language data. The system would benefit a tremendous number of users worldwide who don't have translators readily available. Companies and businesses would also benefit greatly from the new technology, as a greater pool of talent would become available for positions that were previously out of reach for deaf candidates because of communication barriers.
The language translation project was built using the first-generation Kinect sensor. It will be interesting to see what the researchers can do with the Kinect 2 sensor, whose tracking and audio capabilities go well beyond its predecessor's. The only questions now are how much it will cost and when it will become available; chances are it will be affordable enough to use in our homes and arrive sometime in the near future.