The conception of the technology is a bit rushed; people are still struggling to adapt to touch-sensitive screens, so this new gesture interface might have to wait until people are fed up with touch. More to the point, I don't see any necessity for the technology, especially in workplaces where conservatism calls the shots.
If the gesture sensing is strictly a fast depth sensor plus position detection, then it will be wrong very often for some folks who don't match the model, like me. For us, it will be wrong most of the time.
As for collecting and mining the data, I am certain that a herd of weasels is already working on that part, in order to sell marketing information. But that will need context added to the selections in order to be really effective, and at that point there would be no privacy left. The only solutions are to lie constantly or to disconnect. So now we have a moral question: "is it right to mislead internet snoops?" That could be an interesting challenge.
Does anyone else think there is a huge potential for undesired surveillance with this kind of technology? Could this not be used to build huge databases to be mined for marketing, or other, more nefarious, purposes?
As a side note, the Kinect is not a gesture-recognition technology per se; it is a glorified depth-sensing camera. The gesture recognition is in the computer vision algorithms running over the depth-annotated images.
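To make the distinction concrete: the camera only delivers per-pixel depth, and everything called "gesture recognition" is software interpreting that data. Here is a toy sketch of such a layer, classifying a tracked hand trajectory (as might be extracted from depth frames) as a horizontal swipe. The function name and thresholds are illustrative assumptions, not Kinect SDK calls.

```python
# Toy gesture classifier: the "recognition" happens here in software,
# downstream of whatever sensor produced the hand positions.
# All names and thresholds are hypothetical, chosen for illustration.

def classify_swipe(trajectory, min_distance=0.3):
    """trajectory: list of (x, y) hand positions in metres, one per frame.

    Returns "swipe-left", "swipe-right", or "none".
    """
    if len(trajectory) < 2:
        return "none"
    dx = trajectory[-1][0] - trajectory[0][0]
    dy = trajectory[-1][1] - trajectory[0][1]
    # Only call it a swipe if horizontal motion is large enough
    # and clearly dominates vertical motion.
    if abs(dx) >= min_distance and abs(dx) > 2 * abs(dy):
        return "swipe-right" if dx > 0 else "swipe-left"
    return "none"

print(classify_swipe([(0.0, 0.0), (0.2, 0.02), (0.5, 0.05)]))  # swipe-right
print(classify_swipe([(0.0, 0.0), (0.05, 0.3)]))               # none
```

Note how brittle even this tiny model is: anyone whose motion falls outside the hard-coded thresholds simply isn't recognized, which is exactly the "doesn't match the model" complaint raised above.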
The whole concept of computer gesture recognition smells a lot like a few programmers wanting to show off how smart they are and how they can create something really new. But the actual value added will not be much, and the harm done by forcing a lot of people to think even more like the minions of Mr. Gates will take a while to become apparent. We will have computers getting things wrong even more often, and in no time things will be produced that have no method of control except for some gestures invented by a bunch of programmers.
So is it OK to do something just to prove that it can be done, and show folks how smart you are, without delivering any actual value? In simpler words: "just because you can, does that mean that you should?"
@TJ McDermott - Well said. Adopting such technology will take a while, and expect a lot of initial resistance from working people. Most of the stuff made in the lab just gets replaced by the next stuff made in the lab; very little of it ever reaches widespread usage.
@Charles - I think that's debatable. There are too many people who still haven't had any interaction with touch- and gesture-based input. It'll be quite some time before keyboards and mice are completely taken out of the equation.
This is fascinating stuff, and it makes real the things we've seen in sci-fi movies like Minority Report. That said, I am a bit concerned about the departure from mechanical interaction to gesture recognition; I think there are undiscovered implications of this. I hope I'm wrong, though.
The problem I see is not with the technology, but with the time it takes to bring something like this to market. Before it can make inroads, someone will have "taken it to the next level" again, and the cycle starts anew.
One cannot stand still, but the pace is so fast now cool ideas stay just that - ideas.
For industrial control applications, or even a simple assembly line, a machine can go almost 24/7 without a break. But what happens when the task is a little more complex? That's where the "smart" machine comes in: a machine with some simple (or in some cases complex) processing capability that lets it adapt to changing conditions. Such machines suit a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This discussion will examine what's possible with smart machines, and what tradeoffs need to be made to implement such a solution.
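The "adapt to changing conditions" idea can be sketched as a tiny feedback rule: instead of running at a fixed rate, the machine adjusts its line speed based on an observed defect rate. The function name, thresholds, and step size below are illustrative assumptions, not any real industrial API.

```python
# Toy sketch of a "smart machine" feedback rule: back off when defects
# exceed a target rate, speed up when conditions are comfortably good.
# All parameters are hypothetical, chosen only to illustrate the idea.

def adapt_speed(speed, defect_rate, target=0.02, step=0.1):
    """Return a new line speed (0.1..1.0) given the current defect rate."""
    if defect_rate > target:
        return max(0.1, round(speed - step, 3))  # back off, never fully stop
    if defect_rate < target / 2:
        return min(1.0, round(speed + step, 3))  # conditions good: speed up
    return speed                                  # within tolerance: hold

# Simulate a run where quality dips and then recovers.
speed = 1.0
for rate in [0.05, 0.04, 0.015, 0.005]:
    speed = adapt_speed(speed, rate)
print(speed)  # 0.9: slowed twice, held once, sped up once
```

A "dumb" machine runs the same loop with a constant speed; the tradeoff the discussion alludes to is that the extra sensing and processing cost buys this ability to respond without a human in the loop.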