Image analysis and machine vision should be an open-source field. I don't want to re-invent the wheel in this area. I have a few applications I would love to use such tech in, but I have shied away from the task due to the daunting work it entails.
Agreed, the camera should just take pictures. The high-level work should be handled by more powerful computer systems.
One approach being used is to distribute the processing as well. Many of the high-level functions do not need to be performed at the camera level, but putting the "low level" image functions in the camera reduces the amount of data that needs to be transmitted. The systems you speak of are capable of performing feature extraction at the camera level and then communicating that higher-level information to a centralized system (or a hierarchy of processors) to provide the system function.
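The trade-off described above can be sketched in a few lines of Python. This is an illustrative toy, not any specific product's API: a "camera node" reduces a raw grayscale frame to a handful of summary features (the function names and the crude edge measure are my own assumptions), and only those features would be transmitted upstream instead of the full frame.

```python
# Toy sketch of edge-side feature extraction: the camera computes a small
# feature vector locally so only a few values, not every pixel, must be
# sent to the central system.

def extract_features(frame):
    """Reduce a 2D grayscale frame (list of rows) to summary features."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    # Crude edge measure: count large horizontal intensity jumps.
    edges = sum(
        1
        for row in frame
        for a, b in zip(row, row[1:])
        if abs(a - b) > 50
    )
    return {"mean": mean, "edge_count": edges}

def payload_sizes(frame):
    """Compare values transmitted raw vs. as extracted features."""
    raw = len(frame) * len(frame[0])          # one value per pixel
    features = len(extract_features(frame))   # a few summary values
    return raw, features

# A toy 4x4 frame: dark left half, bright right half (one edge per row).
frame = [[10, 10, 200, 200]] * 4
print(extract_features(frame))   # {'mean': 105.0, 'edge_count': 4}
print(payload_sizes(frame))      # (16, 2): 16 pixels vs. 2 feature values
```

Even in this toy case the transmitted payload drops from 16 values to 2; at real resolutions and frame rates, that reduction is what makes the hierarchical camera-to-central-system design practical.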
Altair has released an update of its HyperWorks computer-aided engineering simulation suite that includes new features focusing on four key areas of product design: performance optimization, lightweight design, lead-time reduction, and new technologies.
At IMTS last week, Stratasys introduced two new multi-material PolyJet 3D printers, plus a new UV-resistant material for its FDM production 3D printers. They can be used for making jigs and fixtures, as well as prototypes and small runs of production parts.
In a line of ultra-futuristic projects, DARPA is developing a brain microchip that will help heal the bodies and minds of soldiers. A final product is far off, but preliminary chips are already being tested.
Focus on Fundamentals consists of 45-minute online classes that cover a host of technologies. You learn without leaving the comfort of your desk. All classes are taught by subject-matter experts and all are archived, so if you can't attend live, attend at your convenience.