If you think professional engineering and design tools are immune to being influenced by the consumer world, think again. Many interactive simulation and user interface innovations that are cropping up in hardcore CAD and CAE packages are taking their cues from the gaming world.
Now, input devices are moving in the same direction. Leap Motion, a new motion-control company, is the latest to tackle the challenge of bringing accurate 3D motion control to traditional computer interaction, whether for basic computing tasks like navigation or for more creative work like precise virtual 2D and 3D drawing.
In much the same vein as Microsoft's Kinect add-on for the Xbox, which lets people game via hand and full-body movements, the Leap attaches to a computer via a USB port and turns it into a gesture-recognition device.
The device's patented software analyzes images from built-in cameras that track the individual movements of all 10 of a user's fingers, in addition to tracking objects like a pen. The startup team is touting its accuracy compared to other available offerings (think Kinect), claiming precision down to one hundredth of a millimeter, a level well suited for touch-free natural gesture controls like pinch-to-zoom.
The Leap accurately senses the individual movements of all 10 of a user's fingers and tracks objects like a pen, enabling a 3D workspace that recognizes intuitive gestures. (Source: Leap Motion)
Founders David Holz and Michael Buckwald envision the Leap filling a gap between what they say is easy to do in the real world but hard to do digitally. Traditional input devices like mice and keyboards turn actions that are highly intuitive in the real world, such as drawing a picture or manipulating a 3D object, into highly technical tasks. However, existing motion-sensing technology is still pretty crude, they said. That's where the Leap comes in.
"Like molding a piece of clay or creating a 3D model -- that inspired us to create the Leap and fundamentally change how people work with their computers," said Buckwald, Leap Motion's CEO, in a press release.
While CAD users and engineers aren't the sole target audience for this device, company officials see big potential for any professional working with 3D images and in need of large-scale 3D visualization, including scientists or energy experts working with oil exploration maps.
Once calibrated, the Leap creates a three-dimensional interaction space of four cubic feet to control the computer or device with precise gestures. Users can interact with their software tools via a mix of swipes, flicks, and pinches to move 3D models, sketch concepts, or make changes.
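To make the idea of a calibrated interaction space concrete, here is a minimal sketch of how a position sensed inside such a volume might be mapped to a 2D cursor location. The volume bounds, screen size, and function names below are all assumptions for illustration; they are not taken from the Leap SDK.

```python
# Hypothetical mapping from a motion sensor's interaction volume to screen pixels.
# The volume bounds and screen resolution here are invented for illustration.

INTERACTION_MM = {"x": (-200, 200), "y": (100, 400)}  # assumed sensing range in mm
SCREEN_PX = (1920, 1080)

def to_screen(point_mm):
    """Normalize an (x, y, z) position in mm to an (px_x, px_y) screen pixel.

    x maps to horizontal; y (height above the device) maps to vertical.
    z (depth) is ignored for simple 2D cursor control.
    """
    x, y, _ = point_mm
    x0, x1 = INTERACTION_MM["x"]
    y0, y1 = INTERACTION_MM["y"]
    # Clamp to [0, 1] so positions outside the volume pin to the screen edge.
    nx = min(max((x - x0) / (x1 - x0), 0.0), 1.0)
    ny = min(max((y - y0) / (y1 - y0), 0.0), 1.0)
    # Screen origin is top-left, so invert the vertical axis.
    return (round(nx * (SCREEN_PX[0] - 1)), round((1 - ny) * (SCREEN_PX[1] - 1)))
```

A hand hovering at the center of the assumed volume, for example, would land at the center of the screen, while positions outside the volume clamp to the nearest edge rather than sending the cursor off-screen.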
Of course, applications like CAD and sketch tools have to support the Leap device in order to enable this new world of gesture-based interaction. To that end, Leap Motion is offering a software development kit and encouraging developers to build native apps or port existing tools over to support the device.
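To sketch what developer support might involve, here is a minimal, hypothetical example of classifying a tracking frame and mapping a pinch-spread motion to a zoom factor. The frame format, function names, and threshold are all invented for illustration and do not come from the actual Leap SDK.

```python
import math

PINCH_THRESHOLD_MM = 20.0  # assumed: fingertips closer than this count as a pinch

def distance(p, q):
    """Euclidean distance between two 3D points given as (x, y, z) in mm."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def classify_gesture(frame):
    """Classify a single tracking frame.

    `frame` is a hypothetical dict: {"fingertips": [(x, y, z), ...]} in mm,
    with the thumb and index tips first. Returns "pinch" when those two tips
    are nearly touching, else "idle".
    """
    tips = frame.get("fingertips", [])
    if len(tips) >= 2 and distance(tips[0], tips[1]) < PINCH_THRESHOLD_MM:
        return "pinch"
    return "idle"

def zoom_factor(prev_frame, frame):
    """Map a pinch-spread motion across two frames to a zoom factor (>1 zooms in)."""
    d0 = distance(*prev_frame["fingertips"][:2])
    d1 = distance(*frame["fingertips"][:2])
    return d1 / d0 if d0 else 1.0
```

An application would run logic like this against each frame the sensor delivers, feeding the resulting zoom factor into its own view controls; the real SDK would supply the frames and finger positions.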
Here's the best news: the Leap is pretty cheap. Although it is not yet available, those interested in taking it for a spin can pre-order the device for $69.99.
Gesture interpretation for robot movements--now that's a cool idea and one I'm sure has to be well underway in research labs. I don't think gesture movements are that unfamiliar to users any more. Between the new generation of smart phones (not just Apple's) and other commonplace electronic devices, more and more users are getting familiar with them. And for those up-and-coming engineers born and bred on consoles like the Xbox and Wii, this kind of interaction will be expected.
There's also the risk of unintended consequences. How many laptop users have brushed the touchpad and sent their cursor elsewhere, strewing their typing across several locations?
The article says Leap will track all ten fingers. No scratching while working! And I wonder what finger gesture will mean "CANCEL / Forget about it"?
In all seriousness, gestures can be quite difficult compared to holding something tangible with a small amount of weight. This was compared to the Kinect, and I found Wii bowling, with the minimal weight of the Wii controller in my hand, more realistic than the similar Kinect game on the Xbox.
This is amazingly cool. It's yet another example of how gaming leads the way in electronics. For years, graphics chips have trickled down from gaming to less expensive products, giving us applications such as 3D navigation. Who says gaming is for kids?
What a great idea. I hope they work with Macs. It's not typing that I find wearing so much as all the touchpad/mousing for web surfing. Aside from computer users, the other application possibility that comes to my mind is robot control. If this UI can interpret human movements, why can't it be adapted to do the same with robots? I've been wondering about gesture control/interpretation for robots ever since the Kinect debuted. So far, I've seen research where a Kinect tells a robot about its environment (http://www.designnews.com/document.asp?doc_id=240288), but what about the other direction?
I hear what you both are saying about there not being a direct link to CAD at this time. But I think those are the operative words--at this time. I think there's a lot in this device that portends how input devices will evolve, incorporating many of the gestures and capabilities that are becoming commonplace on our personal devices. Perhaps they'll call it the "consumerization of CAD." In any event, it's all about this company convincing CAD and design tool vendors to leverage its toolkit to take advantage of the new interaction paradigm. But this device, the Kinect, and what we see on the gaming front are no doubt going to heavily influence CAD and design tool development moving forward.
I was impressed. It is like bringing the big Hollywood screen to the little desktop screen.
However, I am not sure how to integrate it into my CAD package, or how beneficial it would be in actual drawing conditions, as the sensitivity may not handle changes of a few tenths of a degree or millimeter.