There's also the risk of unintentional consequences. How many laptop users have brushed the touchpad and sent the cursor elsewhere, strewing their typing across several locations?
The article says Leap will track all ten fingers. No scratching while working! And I wonder what finger gesture will mean "CANCEL / Forget about it"?
In all seriousness, gestures in empty air can be quite difficult compared to holding something tangible with a small amount of weight. Compare the Kinect: I found Wii bowling more realistic with the minimal Wii controller in my hand than the similar game on the Xbox with Kinect.
I was impressed. It is like bringing big-screen Hollywood to the little desktop screen.
However, I am not sure how to integrate it into my CAD package. I am not sure how beneficial it would be in actual drawing conditions, as the sensitivity may not handle changes of a few tenths of a degree or a millimeter.
I hear what you both are saying about there not being a direct link to CAD at this time. But I think those are the operative words: at this time. I think there's a lot in this device that portends how input devices will evolve, incorporating many of the gestures and capabilities that are becoming commonplace on our personal devices. Perhaps they'll call it the "consumerization of CAD." In any event, it's all about this company convincing CAD and design tool vendors to leverage their toolkit to take advantage of the new interaction paradigm. But this device, the Kinect, and what we see on the gaming front are, no doubt, going to heavily influence CAD and design tool development moving forward.
Beth, I mentioned this thing last month. I agree with what you are saying; I was saying the same thing. It will be useful once its use is shown and people acknowledge it. This kind of input is already being incorporated directly into TVs (no external device needed). This seems to be the future of input, and it will be integrated into everything before you know it. No more remotes, just talk and wave your hands.
Thanks, Charles. I saw a show the other day talking about this very thing. Two guys, one for and one against...well, one thinking "never." The anti guy was saying how noise is a problem for talking to the TV (or whatever device). The pro guy was saying that in a few years they will have so many samples of people talking in noisy environments that it will no longer be an issue. I agree with him. It won't be an issue eventually. This is first-gen stuff, like the PS1. Wait till the third-gen stuff; they will have fixed all the problems by then.
What a great idea. I hope they work with Macs. It's not typing that I find wearing so much as all the touchpad/mousing for web surfing. Aside from computer users, the other application possibility that comes to my mind is robot control. If this UI can interpret human movements, why can't it be adapted to do the same for robots? I've been wondering about gesture control/interpretation for robots ever since the Kinect debuted. So far, I've seen research where a Kinect tells a robot about its environment http://www.designnews.com/document.asp?doc_id=240288 but what about the other direction?
Gesture interpretation for robot movements--now that's a cool idea and one I'm sure has to be well underway in research labs. I don't think gesture movements are that unfamiliar to users anymore. Between the new generation of smartphones (not just Apple) and other commonplace electronic devices, more and more users are getting familiar with them. And for those up-and-coming engineers born and bred on consoles like the Xbox and Wii, this kind of interaction will be expected.
This is amazingly cool. It's yet another example of how gaming leads the way in electronics. For years, graphics chips have trickled down from gaming to less expensive products, giving us applications such as 3D navigation. Who says gaming is for kids?
"Here's the best news. The Leap is pretty cheap. As it is not available yet, those interested in taking it for a spin can pre-order the device for $69.99." Turns out that you pre-order and get charged when the item ships - projected for early 2013!
The robot makers long ago eliminated the error-prone method of sensing what the programmer intended as a robotic motion. The result was direct control of all six axes. Why in the world waste time and effort making it easy for the untalented, untrained, and unskilled person to program a robot for some task? Or is this an effort to put robots in every household, in the name of increased profits? There are thousands of folks who are totally unable to think out the results of their actions, and what we really do not need is those folks programming robots, which can move much faster than people but are also capable of making the same wrong move repeatedly. What the developers are doing is attempting to blast the lid off of Pandora's box, again.
Not only that, but even though it is first-gen, there are a lot of people working on it. Look at the Xbox Kinect. It didn't take a genius to decide that we want that stuff on a PC...hence the Leap. It will just keep getting better. Heck, the Leap is cheap compared to the Kinect...no surprise there. Not to mention all the TV guys incorporating the same tech...no box needed. I want a new TV...lol
@CadmanLT: I think you're right about this being the future, with second and third generations only advancing things further. Look at the post on research around robots and human gestures. They're using a lot of the same concepts.
I was just reading how Jelly Bean has a semi-Siri built in. That means it does not need the internet to work. At first I thought, oh, OK, big deal. Then I just heard (and no, I don't have an iPhone) that if you request a song from Siri and you aren't online, it doesn't work (even if the song is on your phone). Bummer. Then what Jelly Bean does makes all the more sense.
I mean, no, it can't do weather or directions offline, but it can at least do music for you. I also have a question: tell me that Siri can call your friends for you...even offline...I mean, come on? If it doesn't, then I'm glad I saved my money.
For industrial control applications, or even a simple assembly line, a machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That’s where the “smart” machine comes in. The smart machine is one that has some simple (or in some cases complex) processing capability that lets it adapt to changing conditions. Such machines are suited for a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This discussion will examine what’s possible with smart machines, and what tradeoffs need to be made to implement such a solution.