@Beth: I'll admit, at first this knob movement felt strange and awkward. However, my left hand quickly got up the learning curve and became used to the required motions. By the end of the week, I found myself subconsciously reaching for the knob to rotate and zoom the model with my left hand instead of using the standard mouse picks with my right hand. Overall, I liked the idea of using two hands to manipulate and create CAD models (instead of mostly using just one).
These are really neat, useful, and not to mention fun. I wish I worked for a company that would spring for $75 mice like the one I purchased, because it has 19 buttons and completely adjustable tracking resolution. I would have my stereoscopic glasses at work except the monitors are 60 Hz refresh. But wait... I am using a $7,000 workstation with 12 Xeon cores at nearly 3 GHz each, and it came with a $10 two-button mouse/keyboard combo. I would love to see 3D devices in the workplace, but I am afraid it's the employee who will be purchasing these things. Until things get a lot cheaper, it's the massively multiplayer online gaming mouse for me. Heck, it even has that wow factor where the buttons illuminate and dim.
I think that as people get more accustomed to these new movements, it just becomes a more natural way of interacting with the computer. I've tried to use my daughter's laptop (which is my old MacBook) and I immediately get stymied, because the gesturing and pinching movements supported by my new MacBook, which now don't seem strange to me at all, don't work on her system.
Steven Spielberg must have gotten a glimpse of his buddy Steve Jobs's early work on gesture interfaces for the iPhone and subsequent iPad. On a serious note, it's pretty crazy that what was considered out there 10 years ago is now pretty mainstream. All you have to do is hand an iPhone to a four-year-old and, right off the bat, they intuitively know how to size and scroll through screens with gestures and pinch movements.
Beth, I was going to say that a 3D input device is going to be limited by a 2D visualization, but InfiniteZ seems to have the answer to that, with what looks like it will be the "holotank" of science fiction.
I think gesture recognition capabilities of input devices will be as revolutionary as the mouse was. The touchpad has already completely changed how I surf the web and work in my computer's OS. Gesture recognition will also be kinder to our fingers, wrists and tendons.
Steven Spielberg had it right. If you remember the futuristic 2002 movie, "Minority Report," Tom Cruise interacted with his computer in 3D fashion, mostly by pinching, drawing and waving his arms. I think this kind of technology is inevitable.
In a bid to boost the viability of lithium-based electric car batteries, a team at Lawrence Berkeley National Laboratory has developed a chemistry that could possibly double an EV’s driving range while cutting its battery cost in half.
Using Siemens NX software, a team of engineering students from the University of Michigan built an electric vehicle and raced in the 2013 Bridgestone World Solar Challenge. One of those students blogged for Design News throughout the race.
Robots that walk have come a long way from simple barebones walking machines or pairs of legs without an upper body and head. Much of the research these days focuses on making more humanoid robots. But they are not all created equal.
For industrial control applications, or even a simple assembly line, a machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That's where the "smart" machine comes in. The smart machine is one that has some simple (or in some cases complex) processing capability that lets it adapt to changing conditions. Such machines are suited for a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This discussion will examine what's possible with smart machines, and what tradeoffs need to be made to implement such a solution.
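To make the "adapt to changing conditions" idea concrete, here is a minimal sketch, not taken from any specific controller or product: a hypothetical `adapt_feed_rate` routine that nudges a machine's feed rate toward a target load reading and clamps it to a safe operating range. All names and numbers are illustrative assumptions.

```python
# Hypothetical sketch of the "smart machine" idea: instead of running a
# fixed program, the controller adjusts a process parameter (feed rate)
# in response to a changing sensor reading (measured load).

def adapt_feed_rate(feed_rate, measured_load, target_load, gain=0.1,
                    min_rate=10.0, max_rate=100.0):
    """Take one simple proportional step toward the target load."""
    error = target_load - measured_load
    new_rate = feed_rate + gain * error
    # Clamp to the machine's safe operating envelope.
    return max(min_rate, min(max_rate, new_rate))

# Example: as tool wear raises the measured load above the target of 75,
# the controller progressively backs off the feed rate.
rate = 50.0
for load in (80.0, 90.0, 95.0):  # simulated load sensor readings
    rate = adapt_feed_rate(rate, load, target_load=75.0)
print(rate)  # 46.0 after the three corrective steps
```

A real smart machine would layer far more on top of this (filtering, safety interlocks, scheduling), but the tradeoff the blurb alludes to is already visible here: the adaptive loop buys flexibility at the cost of extra sensing and processing that a dumb 24/7 machine does not need.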