@MazianLab Jeff, how do you integrate embedded vision and system-on-a-chip? That's a very broad question. I'll address one aspect: Vision applications typically comprise a series of processing steps. The front-end steps (nearest the sensor) process extremely high data rates but use relatively simple algorithms. These steps are typically implemented on some sort of highly parallel programmable processor, such as an FPGA, GPU, or DSP. Later algorithm steps work on reduced data rates (e.g., features rather than pixels) but use much more complex algorithms. These steps can often run efficiently on general-purpose CPUs. So an SoC that does embedded vision will usually combine one or more programmable parallel processing engines (they have to be programmable because the algorithms tend to change quickly) with a general-purpose CPU.
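A minimal sketch of that front-end/back-end split, in Python. The stage names, the threshold value, and the centroid computation are all illustrative assumptions, not any real SoC's API; the point is only that the first stage touches every pixel with simple logic, while the second stage runs more involved logic on far fewer data items.

```python
def front_end(frame):
    """High-data-rate, simple per-pixel step (the kind of work an
    FPGA/GPU/DSP would do): threshold every pixel and emit the
    coordinates of bright ones as 'features'."""
    return [(r, c) for r, row in enumerate(frame)
                   for c, v in enumerate(row) if v > 128]

def back_end(features):
    """Lower-data-rate, more complex step (fine on a general-purpose
    CPU): here, just compute the centroid of the feature points."""
    if not features:
        return None
    n = len(features)
    return (sum(r for r, _ in features) / n,
            sum(c for _, c in features) / n)

frame = [[0, 200,  0],
         [0, 255,  0],
         [0, 180, 90]]
features = front_end(frame)   # pixels -> features: data rate drops here
result = back_end(features)   # features -> answer
print(result)                 # prints (1.0, 1.0) for this frame
```

Note where the data rate collapses: a frame of nine pixels becomes three feature points before the "CPU" stage ever sees it. In a real system that reduction is orders of magnitude larger, which is exactly why the two stages want such different hardware.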
In industrial control applications, even on a simple assembly line, a machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That's where the "smart" machine comes in: a machine with some processing capability, simple or in some cases complex, that lets it adapt to changing conditions. Such machines suit a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, and consumer goods. This discussion will examine what's possible with smart machines, and what tradeoffs need to be made to implement such a solution.