One approach being used is to distribute the processing as well. Many of the high-level functions do not need to be performed at the camera level, but putting the "low-level" image functions in the camera reduces the amount of data that must be transmitted. The systems you mention can perform feature extraction at the camera level and then communicate that higher-level information to a centralized system (or a hierarchy of processors) to provide the system function.
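To make that split concrete, here is a toy Python sketch of the idea (all names are hypothetical, and the "feature extraction" is deliberately trivial): the camera-side code reduces a raw frame to a few summary numbers, and only those numbers travel to the central system for the higher-level decision.

```python
# Hypothetical edge/central split: the "camera" sends compact features,
# not raw pixels, and the central system makes the high-level decision.

def extract_features(frame):
    # Toy "low-level" processing at the camera: per-row brightness averages.
    return [sum(row) / len(row) for row in frame]

def central_decision(features, threshold=128):
    # Higher-level logic runs centrally, on the compact features only.
    mean = sum(features) / len(features)
    return "bright" if mean > threshold else "dark"

# A tiny fake 4x4 grayscale frame (pixel values 0-255).
frame = [[200, 210, 190, 205],
         [195, 200, 210, 198],
         [205, 199, 201, 202],
         [198, 203, 197, 200]]

features = extract_features(frame)   # 4 numbers cross the link, not 16 pixels
decision = central_decision(features)
```

In a real deployment the features would be edge detections, blob descriptors, or object classifications, but the bandwidth argument is the same: the data volume sent upstream scales with the feature count, not the pixel count.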
Image analysis and machine vision should be an open-source field. I don't want to re-invent the wheel in this area. I have a few applications I would love to use such tech in, but I have shied away from the task because of the daunting work it entails.
Agreed, the camera should just take pictures. The high-level work should be handled by more powerful computer systems.
For industrial control applications, or even a simple assembly line, a machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That's where the "smart" machine comes in: one with some simple (or, in some cases, complex) processing capability that lets it adapt to changing conditions. Such machines suit a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, and consumer goods. This discussion will examine what's possible with smart machines, and what tradeoffs must be made to implement such a solution.