I believe current touch screens (iPads and even the biggest tablets available) are too small to serve as efficient "next generation" human input devices. A mouse still seems the best way to turn human input (small, precise motions) into commands for the computer.
To go beyond this, I think we'll begin to see larger input systems (beyond tablets). Microsoft blazed the way with Kinect. Imagine, if you will, a Kinect vision system watching hand gestures, combined with haptic-feedback gloves that would give some tactile feel to the 3D model you're manipulating with your hands. Maybe not surgical-precision fine feel, but at least the feel that you're manipulating an object the size of a basketball in front of the screens on your desk.
The tricky part will be disengaging your hands from the 3D model to initiate commands. The mental image I have is of your hands stuck to a sticky ball, unable to release it.
Changes like these to the user interface come along when design teams finally realize that tailoring the product to user needs and ease of use raises the product's value. Apple is a master of this. The iPod, iPhone, and iPad all existed in different forms before Apple. But before Apple, users didn't care much for those early products.
Facebook beat MySpace for the same reason. While cool technology wins the first wave, the second wave is usually won by companies that address the users' needs.
An iPad may not be the best device for designing a component, but it is a fantastic tool for showing a design to potential customers and management. The ability to zoom, pan, and rotate on a tablet is remarkable.
I remember when 100 MHz Pentiums became available and game software was readily available with graphics that blew away most CAD programs on the market. Now CAD software can display graphics with stunning effectiveness.
Good point, Tim, about the iPad and field applications in engineering.
To your other point, not only are CAD programs getting richer in graphics capabilities, they are also borrowing lots of technology from the gaming world, so we're starting to see photorealism and animation as standard parts of CAD platforms. This allows engineers to visualize how a particular mechanism might move within a design to check for parts interferences, for example, or to see how a particular part of a machine might operate from an ergonomics standpoint. All pretty amazing stuff!
Robots that walk have come a long way from simple barebones walking machines or pairs of legs without an upper body and head. Much of the research these days focuses on making more humanoid robots. But they are not all created equal.
The IEEE Computer Society has named the top 10 trends for 2014. You can expect the convergence of cloud computing and mobile devices, advances in health care data and devices, as well as privacy issues in social media to make the headlines. And 3D printing came out of nowhere to make a big splash.
For industrial control applications, or even a simple assembly line, a machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That's where the "smart" machine comes in. The smart machine is one that has some simple (or, in some cases, complex) processing capability that lets it adapt to changing conditions. Such machines are suited for a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This discussion will examine what's possible with smart machines, and what tradeoffs need to be made to implement such a solution.
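As a rough illustration of what "adapting to changing conditions" might look like, here is a minimal Python sketch of a hypothetical conveyor controller that backs off when a simulated load sensor reads high and recovers when it reads low. The sensor function, thresholds, and speed values are all invented for illustration; a real machine would use its own I/O and control logic.

```python
import random

# Hypothetical thresholds and speeds -- illustrative values only.
HIGH_LOAD = 0.8                   # sensor reading above which we slow the line
LOW_LOAD = 0.3                    # sensor reading below which we speed it up
MIN_SPEED, MAX_SPEED = 0.2, 1.0   # normalized conveyor speed

def read_load_sensor():
    """Stand-in for a real load/jam sensor; returns a value in [0, 1]."""
    return random.random()

def adjust_speed(speed, load):
    """Simple rule-based adaptation: back off under high load, recover under low load."""
    if load > HIGH_LOAD:
        speed = max(MIN_SPEED, speed - 0.1)
    elif load < LOW_LOAD:
        speed = min(MAX_SPEED, speed + 0.1)
    return speed

if __name__ == "__main__":
    speed = MAX_SPEED
    for step in range(10):
        load = read_load_sensor()
        speed = adjust_speed(speed, load)
        print(f"step {step}: load={load:.2f} speed={speed:.2f}")
```

Even this toy loop shows the basic tradeoff: the "smarter" the adaptation rule, the more sensing and processing the machine needs on board.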