The good news is that Microchip introduced a very cool 3D gesture IC. The bad news is that I didn’t actually get to see it work because our meeting got derailed by Hurricane Sandy (if you live in New Jersey where I do, it was a hurricane, not a “superstorm”). But I did see it in action via video and got a complete download from the Microchip technical folks. And I have to say, it’s pretty cool (if you know me or are a long-time reader, you know I don’t get excited by these kinds of things too often).
Microchip’s GestIC technology, which enables intuitive, gesture-based, non-contact user interfaces, is the heart of a new line of controllers, starting with the MGC3130. Microchip claims it’s the first electrical-field (E-field)-based, configurable 3D gesture controller, offering low-power, precise, and fast hand-position tracking with free-space gesture recognition. Thanks to a 150-µW active sensing state, the part can be always on, even in battery-powered applications.
GestIC achieves its high gesture-recognition rates through an on-chip library of intuitive, natural human gestures called the Colibri Suite. It combines a stochastic Hidden Markov model with x/y/z hand-position vectors to deliver a reliable set of recognized 3D hand and finger gestures that designers can employ directly in their products. Examples include wake-up on approach, position tracking, and flick, circle, and symbol gestures, covering functions such as on/off, open application, point, click, zoom, scroll, and free-space mouse-over. The library helps designers get their products to market quickly and reduces development risk: they simply match their system commands to Microchip's predetermined gestures.
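To make the idea concrete, here's a minimal sketch of what "matching system commands to predetermined gestures" might look like on the host side. The gesture names follow the examples above; the dispatch function and command names are purely illustrative assumptions, not Microchip's actual Colibri Suite API.

```python
# Hypothetical host-side dispatcher: map gesture tokens (as the article
# describes them) to application commands. The table and function names
# are illustrative only -- they are not Microchip's real GestIC interface.

GESTURE_COMMANDS = {
    "approach":   "wake_up",       # wake-up on approach
    "flick_west": "scroll_left",   # flick gestures -> scrolling
    "flick_east": "scroll_right",
    "circle_cw":  "zoom_in",       # circle gestures -> zoom
    "circle_ccw": "zoom_out",
}

def dispatch(gesture: str) -> str:
    """Return the system command bound to a recognized gesture,
    or 'ignore' for gestures the application hasn't mapped."""
    return GESTURE_COMMANDS.get(gesture, "ignore")
```

The point of the fixed gesture vocabulary is visible here: the application never sees raw E-field data, only a small set of named events it binds to its own commands.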
With Windows 8 in mind (where’d that Start button go?), it’s clear where a product like this can be used. I’m excited to get my hands on one of their dev boards (called the Sabrewing MGC3130 single-zone eval kit) so I can start playing around with it myself.
Nice article and cool video, Rich. It will be interesting to see how this technology will be applied. The first thing that comes to mind is game technology. My guess is that adoption will begin with young folks.
A new PC HMI is something I have wanted for quite some time now. I've tried touchpads, touchscreens, and Kinect, and I'm usually let down. My goal is to do more work faster. I thought Windows 8 might be the first step in that direction. So far, I have not been impressed with the OS.
I want to do real work like CAD, art, video editing with this HMI, not just casual use.
I hope that GestIC can help with what I want. With other touchpads, I found my arm/hand would get more fatigued than with general mouse use.
My dream of a Minority Report/Ironman design interface still seems elusive.
Good question about right- and left-handed use, Chuck. I would imagine it could be set up so the user can select left-handed or right-handed operation. With games, this technology could be developed so a player could use both hands.
I'm torn with regards to touchscreens. They are neat, but I HATE fingerprints on my displays, so owning a smartphone kills me half of the time. :) I would much prefer using a stylus like on my old Palm Pilot. (Maybe I should pick up one of those capacitive screen stylus devices for my smartphone....)
It will be interesting to see how this can be used as an interface to new devices. Maybe some industrial HMI systems?
What drives me crazy about my smartphone is text entry and editing. Something that is so quick and easy to do with a full keyboard and mouse is sometimes painful to do on my smartphone.
Like Cabe, I want to speed up my real work (mostly technical writing, schematic generation, and mechanical sketch generation) ... I have little interest in gaming. As long as alternative tools require me to take my hands off the keyboard, as a mouse or trackball does, I don't see how they could speed things up much. My hope is that someday I'll be able to, say, select a line of text and move it without having to take my hands off the keyboard. Toward that end, I recently bought Dragon to explore its capabilities at such tasks. I've often wanted to rig a foot-operated positioning device to my desktop, with the other foot used to click. These gesture-sensing methods seem to be aimed at the gee-whiz gaming crowd ... which, of course, is where the $$$ are. Oh well ...
I'm normally not an early adopter and only recently traded my "vanilla" cell phone for a Galaxy S3. I was excited to see that I could make sketches to memorialize sudden inspirations as line drawings. I was equally disappointed to learn that the S3 screen is capacitive and requires a "stylus" as big as a pencil eraser to work! This makes a very, very poor imitation of a pencil and paper "sketchpad".
In some ways, Analog Bill, the pencil and sketch pad has not been improved upon. Yet anything you're working on that requires data works infinitely better digitally than with pencil and pad. As an example, as a journalist, Word and Google have managed to save me at least 15 hours per week, year in and year out.
Agreed, Rob. Data is an important ingredient. I've had the same experience with word processing that you described. Today, I'm still a two-fingered typist (albeit a pretty fast two-fingered typist), and I can't even imagine how many hours I'd lose every week if I had to change every one of my typing mistakes with an eraser.