I agree, plasticmaster. Watching nature will likely provide much of the new development in robotics. My dad, who worked in aerospace, said you can see the dynamics of flight just by putting your hand outside the window when you're in a fast-moving car. That says it all.
We've got to start somewhere. Using the building blocks currently at our disposal provides the information we need to proceed to the next logical step. We did it with airplanes (studying birds and bird flight). Vision using amplified piezoelectric actuators is a fantastic step forward in developing the next stage of robotics, whether at the microscopic scale or a much larger one.
This is an interesting development, though it's helpful to understand that vision encompasses more than eye movement. There is the ability to accommodate wide variations in ambient light, to change focus on-the-fly smoothly, and to use the internal image-processing "firmware" of the brain to re-map conflicting or confusing imagery into a rational construct. Eyes are pretty amazing organs that must balance the interplay of a lot of variables to create what we call vision.
Good point, Asupnekar. There seems to be a proclivity for mimicking human movement and capabilities with robots. Yet other natural examples -- like your fish-vision example -- are likely to be superior to human capabilities.
Actually, it depends on what you are doing. The greatest advancement of science, wealth, and welfare in history has come since the digital revolution. If you want to get philosophical, the universe is inherently mathematical. By applying digital techniques we have made tremendous advances. Frankly, there are lots of things we want done that are better done by computers than by natural methods. Nature tends to be very inefficient, using more resources to do a task than is strictly necessary. Natural language, for example, is terribly inefficient as far as information content.
Even in the area of accumulating and using knowledge, we have advanced more in the digital age, which encompasses the last sixty years or so, than in all of previous human history. I don't see this as "unnatural". We got here by using our natural talents and intelligence, but there is something about thinking digitally and mathematically that has given our knowledge a whole new dimension.
I was actually thinking that there are a lot of weaknesses in trying to copy the motion of the human eye. The two main things that popped into my mind were the relative lack of peripheral vision (compared with other animals) and the fact that it still needs to be mounted on a "neck" to see much of the field.
Yes, I also fully agree with you both (Rob and Naperlou). In this case we can take the best example from nature, i.e., the fish. If we consider the motion of a fish's eye, it covers almost end to end in all directions, and if this current invention matches that, then I feel that's a big achievement.
Are they robots or androids? We're not exactly sure. Each talking, gesturing Geminoid looks exactly like a real individual, starting with their creator, professor Hiroshi Ishiguro of Osaka University in Japan.
For industrial control applications, or even a simple assembly line, a machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That's where the "smart" machine comes in. The smart machine is one that has some simple (or, in some cases, complex) processing capability that lets it adapt to changing conditions. Such machines are suited for a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This discussion will examine what's possible with smart machines, and what tradeoffs need to be made to implement such a solution.