Image analysis and machine vision should be an open-source field. I don't want to re-invent the wheel in this area. I have a few applications I would love to use such tech in, but I have shied away from the task due to the daunting work it entails.
Agreed, the camera should just take pictures. The high-level work should be handled by more powerful computer systems.
One approach that is being used is distributing the processing as well. Many of the high-level functions do not need to be performed at the camera level. Putting the "low-level" image functions in the camera reduces the amount of data that needs to be transmitted. The systems you speak of are capable of performing feature extraction at the camera level and then communicating that higher-level information to a centralized system (or a hierarchy of processors) to provide the overall system function.
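A minimal sketch of that idea, under assumed parameters (a 64x64 grayscale frame and a 16-bin intensity histogram as the "low-level" feature): the camera reduces each raw frame to a compact feature vector and transmits only that, so the centralized system never sees the full pixel data.

```python
def extract_features(frame, bins=16):
    """Camera-side 'low-level' step: reduce a grayscale frame
    (2-D list of 0-255 values) to an intensity histogram."""
    hist = [0] * bins
    bin_width = 256 // bins
    for row in frame:
        for pixel in row:
            hist[pixel // bin_width] += 1
    return hist

# Simulated 64x64 frame: a bright 32x32 square on a dark background.
frame = [[200 if 16 <= r < 48 and 16 <= c < 48 else 30
          for c in range(64)] for r in range(64)]

features = extract_features(frame)

raw_bytes = 64 * 64                 # bytes to transmit the raw frame
feature_bytes = len(features) * 4   # ~4 bytes per histogram count

print(f"raw frame: {raw_bytes} bytes, features: {feature_bytes} bytes")
```

Here the centralized system (or a hierarchy of processors) would receive only the 16-number feature vector from each camera rather than 4096 pixels, which is the data-reduction tradeoff described above. Real systems would use richer features (edges, blobs, descriptors), but the split between camera-level extraction and centralized interpretation is the same.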
The legacy endpoint devices that control our critical infrastructure (utility systems, water treatment plants, military networks, industrial control systems, etc.) are some of the most vulnerable devices on the Internet.
For industrial control applications, or even a simple assembly line, a machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That’s where the “smart” machine comes in. The smart machine is one that has some simple (or, in some cases, complex) processing capability, so it can adapt to changing conditions. Such machines are suited to a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This radio show will explore what’s possible with smart machines, and what tradeoffs need to be made to implement such a solution.