If you think professional engineering and design tools are immune to being influenced by the consumer world, think again. Many interactive simulation and user interface innovations that are cropping up in hardcore CAD and CAE packages are taking their cues from the gaming world.
Now input devices are following suit. Leap Motion, a new motion-control company, is the latest to tackle the challenge of bringing accurate 3D motion control to traditional computer interaction, whether for basic computing tasks like navigation or more creative work like precise virtual 2D and 3D drawing.
In much the same vein as Microsoft's Kinect Xbox add-on that lets people game via hand and full-on body movements, the Leap attaches to a computer via a USB port and turns it into a gesture-recognition device.
The device's patented software analyzes images from its built-in cameras, tracking the individual movements of all 10 of a user's fingers as well as objects like a pen. The startup team is touting its accuracy compared to other available offerings (think Kinect), claiming precision down to one hundredth of a millimeter -- a level well suited to touch-free natural gesture controls like pinch-to-zoom.
The Leap accurately senses the individual movements of all 10 of a user's fingers and tracks objects like a pen, enabling a 3D workspace that recognizes intuitive gestures. (Source: Leap Motion)
Founders David Holz and Michael Buckwald envision the Leap filling a gap between what is easy to do in the real world but hard to do digitally. Traditional input devices like mice and keyboards turn actions that are highly intuitive in the real world, like drawing a picture or manipulating 3D objects, into highly technical tasks. Existing motion-sensing technology, however, is still pretty crude, they said. That's where the Leap comes in.
"Like molding a piece of clay or creating a 3D model -- that inspired us to create the Leap and fundamentally change how people work with their computers," said Buckwald, Leap Motion's CEO, in a press release.
While CAD users and engineers aren't the sole target audience for this device, company officials see big potential for any professional working with 3D images and in need of large-scale 3D visualization, including scientists or energy experts working with oil exploration maps.
Once calibrated, the Leap creates a three-dimensional interaction space of four cubic feet to control the computer or device with precise gestures. Users can interact with their software tools via a mix of swipes, flicks, and pinches to move 3D models, sketch concepts, or make changes.
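A gesture like pinch-to-zoom can be understood as simple geometry on tracked fingertip positions. The snippet below is an illustrative Python sketch, not the real Leap Motion API; all function names are hypothetical, and the 0.01 mm threshold merely mirrors the precision figure quoted above.

```python
# Hypothetical sketch: mapping the distance between two tracked
# fingertips (in mm) to a zoom factor, the way a pinch-to-zoom
# gesture might be interpreted. These names are illustrative only,
# not part of any real Leap Motion API.
import math

def fingertip_distance(a, b):
    """Euclidean distance between two 3D fingertip positions (mm)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def zoom_factor(prev_thumb, prev_index, thumb, index, min_move_mm=0.01):
    """Return a multiplicative zoom factor from two successive frames.

    min_move_mm reflects the claimed 1/100 mm tracking precision:
    changes smaller than that are treated as noise and ignored.
    """
    d0 = fingertip_distance(prev_thumb, prev_index)
    d1 = fingertip_distance(thumb, index)
    if d0 == 0 or abs(d1 - d0) < min_move_mm:
        return 1.0  # no meaningful pinch movement
    return d1 / d0

# Fingers spread from 40 mm to 80 mm apart -> zoom in by 2x.
print(zoom_factor((0, 0, 0), (40, 0, 0), (0, 0, 0), (80, 0, 0)))  # 2.0
```

In practice a real application would smooth this ratio over several frames rather than reacting to each pair, but the core mapping is just the ratio of fingertip distances.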
Of course, applications like CAD and sketch tools have to support the Leap device in order to enable this new world of gesture-based interaction. To that end, Leap Motion is offering a software development kit and encouraging developers to build native apps or port existing tools over to support the device.
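As a rough illustration of what an app built against such an SDK might look like, here is a self-contained Python sketch that detects a horizontal swipe from a stream of per-frame fingertip positions. `FrameSource` is a stand-in stub for demonstration, not the actual Leap Motion development kit.

```python
# Illustrative sketch of how an application might consume per-frame
# hand data from a motion-control SDK. FrameSource is a canned stub,
# NOT the real Leap Motion SDK; real integration would use the
# vendor's development kit instead.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Frame:
    fingertips: List[Tuple[float, float, float]]  # (x, y, z) in mm

class FrameSource:
    """Stub that yields a few canned frames for demonstration."""
    def frames(self):
        yield Frame([(0.0, 100.0, 0.0)])
        yield Frame([(5.0, 100.0, 0.0)])
        yield Frame([(12.0, 100.0, 0.0)])

def detect_swipe(frames, min_travel_mm=10.0):
    """Report a horizontal swipe once a fingertip travels far enough."""
    start_x = None
    for frame in frames:
        if not frame.fingertips:
            continue
        x = frame.fingertips[0][0]
        if start_x is None:
            start_x = x
        elif abs(x - start_x) >= min_travel_mm:
            return "swipe-right" if x > start_x else "swipe-left"
    return None

print(detect_swipe(FrameSource().frames()))  # swipe-right
```

The point of the sketch is the shape of the integration: the device delivers a stream of frames, and the application layers gesture recognition on top of the raw positions it receives.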
Here's the best news. The Leap is pretty cheap. As it is not available yet, those interested in taking it for a spin can pre-order the device for $69.99.
There's also the risk of unintended consequences. How many laptop users have brushed the touchpad, sent their cursor elsewhere, and strewn their typing across several locations?
The article says Leap will track all ten fingers. No scratching while working! And I wonder what finger gesture will mean "CANCEL / Forget about it"?
In all seriousness, gestures can be quite difficult compared to holding something tangible with a little weight. On the Kinect comparison: I found Wii bowling, with the minimal Wii controller in my hand, more realistic than the similar Kinect game on the Xbox.
I was impressed. It is like bringing big-screen Hollywood to the little desktop screen.
However, I am not sure how to integrate it into my CAD package, or how beneficial it would be in actual drawing conditions; the sensitivity may not handle changes of a few tenths of a degree or a millimeter.
I hear what you both are saying about there not being a direct link to CAD at this time. But I think that's the operative phrase--at this time. There's a lot in this device that portends how input devices will evolve, incorporating many of the gestures and capabilities that are becoming commonplace on our personal devices. Perhaps they'll call it the "consumerization of CAD." In any event, it's all about this company convincing CAD and design tool vendors to leverage its toolkit to take advantage of the new interaction paradigm. But this device, the Kinect, and what we see on the gaming front are, no doubt, going to heavily influence CAD and design tool development moving forward.
Beth, I mentioned this thing last month, and I agree with what you are saying -- I was saying the same thing. It will be useful once its use is shown and people acknowledge it. This kind of input is already being incorporated directly into TVs (no external device needed). This seems to be the future of input, and it will be integrated into everything before you know it. No more remotes -- just talk and wave your hands.
Thanks Charles. I saw a show the other day talking about this very thing. Two guys -- one for and one against... well, one thinking "never." The anti guy was saying how noise is a problem for talking to the TV (or whatever device). The pro guy was saying that in a few years they will have so many samples of people talking in noisy environments that it will no longer be an issue. I agree with him. It won't be an issue eventually. This is first-gen stuff -- like the PS1. Wait till the third-gen stuff; they will have fixed all the problems by then.
What a great idea. I hope they work with Macs. It's not typing that I find wearing so much as all the touchpad/mousing for web surfing. Aside from computer users, the other application possibility that comes to mind is robot control. If this UI can interpret human movements, why can't it be adapted to do the same for robots? I've been wondering about gesture control/interpretation for robots since the Kinect debuted. So far, I've seen research where a Kinect tells a robot about its environment http://www.designnews.com/document.asp?doc_id=240288 but what about the other direction?
Gesture interpretation for robot movements--now that's a cool idea, and one I'm sure is well underway in research labs. I don't think gesture movements are that unfamiliar to users any more. Between the new generation of smartphones (not just Apple's) and other commonplace electronic devices, more and more users are getting familiar. And for those up-and-coming engineers born and bred on consoles like the Xbox and Wii, this kind of interaction will be expected.
This is amazingly cool. It's yet another example of how gaming leads the way in electronics. For years, graphics chips have trickled down from gaming to less expensive products, giving us applications such as 3D navigation. Who says gaming is for kids?
"Here's the best news. The Leap is pretty cheap. As it is not available yet, those interested in taking it for a spin can pre-order the device for $69.99." Turns out that you pre-order and get charged when the item is shipped - projected for early 2013!!!!
The robot makers long ago eliminated the error-prone method of sensing what the programmer intended as a robotic motion. The result was direct control of all six axes. Why in the world waste time and effort making it easy for the untalented, untrained, and unskilled person to program a robot for some task? Or is this an effort to put robots in every household, in the name of increased profits? There are thousands of folks who are totally unable to think out the results of their actions, and what we really do not need is those folks programming robots, which can move much faster than people but are also capable of making the same wrong move repeatedly. What the developers are doing is attempting to blast the lid off of Pandora's Box, again.
Not only that, but even though it is first-gen, there are a lot of people working on it. Look at the Xbox Kinect. It didn't take a genius to decide that we want that stuff on a PC -- hence the Leap. It will just keep getting better. Heck, the Leap is cheap compared to the Kinect -- no surprise there. Not to mention all the TV guys incorporating the same tech -- no box needed. I want a new TV... lol
@CadmanLT: I think you're right about this being the future and second generation and third generation only advancing things further. Look at the post on research around robots and human gestures. They're using a lot of the same concepts.
I was just reading how Jelly Bean has a semi-Siri built in. That means it does not need the Internet to work. At first I thought, ohh OK, big deal. Then I just heard (and no, I don't have an iPhone) that if you request a song from Siri while offline, it doesn't work, even if the song is on your phone. Bummer. Then what Jelly Bean does makes all the more sense.
I mean, no, it can't do weather or directions offline, but it can at least do music for you. I also have a question: tell me that Siri can call your friends for you, even offline... I mean, come on? If it doesn't, then I'm glad I saved my money.