@wahaufler: Is it feasible/possible/easy/common to use iPhone/Android smartphones and tablets to prototype such embedded systems? Where the CPU power is not enough, could you interface with a separate external FPGA or similar device for the vision computations? Or is the data bandwidth too much for USB or Bluetooth?

It is indeed increasingly feasible to use smartphones and tablets as embedded vision platforms. However, I am doubtful that it will be feasible to add processing power beyond what is built in; the bandwidth concern is real, since uncompressed 1080p video at 30 frames/s is roughly 1920 × 1080 × 1.5 bytes × 30 ≈ 93 MB/s, far beyond Bluetooth and well above the ~35 MB/s that USB 2.0 delivers in practice. The good news is that the application processors in these devices contain many processing engines beyond the CPU. For example, they all include GPUs, and GPUs can often be used as programmable coprocessors for vision.
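One common way to exploit the built-in GPU as a vision coprocessor is OpenCV's transparent OpenCL path, where operations on cv::UMat are dispatched to the GPU when a suitable device is present. The sketch below is illustrative only: it assumes an OpenCV build with OpenCL enabled, and the input file name "frame.png" is a placeholder.

    // Sketch: running a simple vision pipeline on the device GPU via
    // OpenCV's transparent OpenCL support (cv::UMat). Assumes OpenCV was
    // built with OpenCL; "frame.png" is a placeholder input.
    #include <opencv2/core.hpp>
    #include <opencv2/core/ocl.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>
    #include <iostream>

    int main() {
        // Check whether an OpenCL-capable device (e.g., the mobile GPU)
        // is available to act as a coprocessor.
        std::cout << "OpenCL available: " << cv::ocl::haveOpenCL() << "\n";

        cv::Mat frame = cv::imread("frame.png");
        if (frame.empty()) return 1;

        // Moving data into a UMat lets subsequent operations run on the
        // GPU when possible, falling back to the CPU otherwise.
        cv::UMat gpuFrame, gray, edges;
        frame.copyTo(gpuFrame);
        cv::cvtColor(gpuFrame, gray, cv::COLOR_BGR2GRAY);
        cv::Canny(gray, edges, 50.0, 150.0);

        // Results stay on the device until explicitly read back.
        cv::Mat result = edges.getMat(cv::ACCESS_READ);
        std::cout << "Edge map: " << result.cols << "x" << result.rows << "\n";
        return 0;
    }

The point is architectural: the heavy pixel processing stays on the device, so no high-bandwidth link to an external accelerator is needed.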
Are they robots or androids? We're not exactly sure. Each talking, gesturing Geminoid looks exactly like a real individual, starting with their creator, professor Hiroshi Ishiguro of Osaka University in Japan.
For industrial control applications, or even a simple assembly line, a fixed-function machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That’s where the “smart” machine comes in. A smart machine has some processing capability, simple or complex, that lets it adapt to changing conditions. Such machines suit a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, and consumer goods. This discussion will examine what’s possible with smart machines, and what tradeoffs need to be made to implement such a solution; a toy sketch of the adaptation idea follows.
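To make the adaptation idea concrete, here is a minimal, purely illustrative sketch: a controller that adjusts line speed based on a sensed defect rate rather than executing a fixed program. The sensor function, thresholds, and speeds are all invented for the example, not taken from the article.

    #include <iostream>
    #include <random>

    // Hypothetical sensor: in a real smart machine this reading would come
    // from an inspection camera or similar; here it is simulated noise.
    double readDefectRate(std::mt19937& rng) {
        std::uniform_real_distribution<double> dist(0.0, 0.04);
        return dist(rng);
    }

    int main() {
        std::mt19937 rng(42);
        double lineSpeed = 100.0;  // units per minute (illustrative)
        for (int cycle = 0; cycle < 10; ++cycle) {
            double defects = readDefectRate(rng);
            // The "smart" part: adapt behavior to changing conditions
            // instead of running the same sequence regardless.
            if (defects > 0.02)       lineSpeed *= 0.90;  // quality slipping: slow down
            else if (defects < 0.005) lineSpeed *= 1.05;  // running clean: speed up
            std::cout << "cycle " << cycle << ": defect rate " << defects
                      << ", line speed " << lineSpeed << "\n";
        }
        return 0;
    }

A fixed-function machine would run the loop body unconditionally; the conditional adjustment is what distinguishes the smart machine in this framing.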