Machine vision systems for product identification, measurement, inspection, and robot guidance are declining in price and improving in performance. At the low end of the price spectrum, the vision capabilities of competing categories of systems are overlapping (see "Technology Up" sidebar). But engineers who pick a system based on initial price alone could end up paying a higher overall cost of ownership.
Cost drivers depend on which type of vision system is chosen, and there are two main types. Smart cameras carry image-processing circuitry (DSPs, FPGAs, or microprocessors) and software on board. They cost in the range of a few thousand dollars and are generally compact and easy to set up and program. But their small size leaves limited room for processing power and memory, which restricts flexibility.
In contrast, embedded-processor or PC-based systems link several cameras for central processing and synchronization of image data. These systems have more room for memory and for processing tasks such as image acquisition and data buffering on high-throughput frame-grabber cards. Programming them is more complex, but it allows the system to be scaled up and updated to handle different tasks or to image different objects. Depending on the application, however, software costs for the same system can vary widely, from roughly $2,000 to $10,000.
But the lines between smart cameras and centralized vision systems are beginning to blur. Eden Prairie, MN-based PPT Vision recently announced a hybrid system: its engineers embedded a 1,000-MIPS IBM PowerPC® chip within a smart camera (see "New in Viewing" sidebar on page 92).
Keeping Overall Cost Down
Several factors that engineers need to take into consideration can drive up overall cost. They include:
Changes in position and orientation of objects viewed. Remedies include fixing object location and orientation on the production line or using software tools that can accommodate the expected variation. Mechanically, side rails on a line and bowl feeders that align parts can do the job, but these wear and require maintenance, and such an arrangement may not be flexible enough to handle different parts. In software, a basic blob (binary large object) tool can handle imaging if the parts are simple and their orientation is fairly consistent (see the blob-tool sketch after this list). But imaging complex, shiny parts in highly variable positions on a belt that is no longer uniformly clean could require more complex, and costly, geometric pattern-matching software. A simpler solution may be to adjust the speed of the line to provide more time between imaged parts: the more processing time available, the greater the chance of finding the object.
Lighting. More costly, uniform lighting yields consistent imaging results; non-uniform illumination can produce visual variations caused by the finish on the object. Although LED lights may have higher initial costs, their lifetimes on the order of 50,000 hours can actually provide savings by eliminating the maintenance and line shutdowns needed every few thousand hours to replace fluorescent or incandescent lamps. Even with good lighting, objects may show little change in contrast, so engineers should make sure the system they select has good image-processing filter software (see the contrast-enhancement sketch after this list).
Image blur. Motion of an object and focusing on "deep" objects with a large variation in distance from the lens both produce blur. Engineers can cut the motion effect with a camera that has a fast enough electronic shutter or by strobe-lighting the part; a rough estimate of how much blur a given shutter allows appears in the sketch after this list. Flashing strobes can, however, present a seizure risk to people nearby and could increase workmen's compensation costs. Proper lens selection can provide enough depth of field to remove 3D focus blurring. For objects of great depth, a telecentric lens may be needed to eliminate perspective distortion: the effect of closer features on an object appearing proportionally larger than more distant ones, even with both in focus.
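For the position-and-orientation factor, the blob approach can be sketched in a few lines. The following is a minimal illustration only, assuming the open-source OpenCV library, a hypothetical image file name, and a part that images brighter than the belt; a production tool would add calibration, filtering, and error handling.

import cv2

# Hypothetical input: a grayscale image of a part on the belt
img = cv2.imread("part_on_belt.png", cv2.IMREAD_GRAYSCALE)

# Otsu threshold to separate a bright part from a darker belt (assumed contrast)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Treat the largest connected region as the part and report its centroid and angle
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
part = max(contours, key=cv2.contourArea)        # assumes the part is in view
m = cv2.moments(part)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
(_, _), (_, _), angle = cv2.minAreaRect(part)    # bounding-box orientation, degrees
print(f"Part centroid: ({cx:.1f}, {cy:.1f}), orientation: {angle:.1f} deg")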
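For the lighting factor, the kind of image-processing filter mentioned above can be as simple as local contrast stretching. This sketch again assumes OpenCV and a hypothetical low-contrast input image; the clip limit and tile size are illustrative values, not recommendations.

import cv2

# Hypothetical low-contrast grayscale image of the part
img = cv2.imread("low_contrast_part.png", cv2.IMREAD_GRAYSCALE)

# Contrast-limited adaptive histogram equalization stretches local contrast
# without amplifying noise as aggressively as global equalization would
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # illustrative settings
enhanced = clahe.apply(img)
cv2.imwrite("enhanced_part.png", enhanced)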
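For the image-blur factor, a quick back-of-the-envelope calculation shows whether a given electronic shutter is fast enough to freeze belt motion. The belt speed, exposure, field of view, and resolution below are assumed numbers chosen only for illustration.

# Assumed numbers for illustration only
belt_speed_mm_s = 500.0        # belt speed
exposure_s = 1.0 / 10_000      # 1/10,000-second electronic shutter
field_of_view_mm = 100.0       # horizontal field of view
sensor_pixels = 640            # horizontal sensor resolution

mm_per_pixel = field_of_view_mm / sensor_pixels
blur_mm = belt_speed_mm_s * exposure_s
blur_pixels = blur_mm / mm_per_pixel
print(f"Motion blur about {blur_mm:.3f} mm, or {blur_pixels:.2f} pixels")
# Roughly a third of a pixel here, so this shutter freezes the motion adequately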
In addition, engineers should select a system with a range of tools and flexibility, rather than opting for the least expensive system that just meets requirements, advises John Salls, consultant and president of Applied Vision Systems, which helps set up and program vision equipment. This approach allows for what he calls Vision Creep: a system designed for one application can be adapted to another economically.
A Grip on Costs: Parts handling is a popular application for vision-based robot guidance. Benefits to the user include higher throughput and reduced workmen's compensation claims, cutting total cost of ownership. In this vision system, developed by Shafi Inc. for Daimler-Chrysler, six smart cameras mounted above a robotic station inspect racks containing 18 truck-bed parts. Robots place the racks onto a large turntable that rotates 180 degrees. The camera software uses histogram (pixel grayscale) and edge-detection tools to determine the presence of a securing bar and to ensure that the correct part is placed in the designated rack position.
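A rough sense of how histogram and edge-detection checks like those described above might be combined is sketched below. The file name, region-of-interest coordinates, and thresholds are hypothetical; the logic of the actual Shafi system is not detailed in this article.

import cv2
import numpy as np

# Hypothetical image and region of interest where the securing bar should appear
img = cv2.imread("rack_station.png", cv2.IMREAD_GRAYSCALE)
roi = img[200:260, 400:700]

# Histogram check: a dark bar against a lighter rack raises the share of dark pixels
dark_fraction = np.count_nonzero(roi < 80) / roi.size

# Edge check: the bar's straight sides add strong edges inside the region
edges = cv2.Canny(roi, 50, 150)
edge_density = np.count_nonzero(edges) / edges.size

# Thresholds are placeholders; a real system would tune them against known-good images
bar_present = dark_fraction > 0.3 and edge_density > 0.05
print(f"dark fraction {dark_fraction:.2f}, edge density {edge_density:.2f}, bar present: {bar_present}")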
Engineers will face an even greater choice of vision systems in the future, says Himanshu Shah, senior analyst at Dedham, MA-based ARC Advisory Group, a manufacturing consultancy and analysis firm. Specifically, he notes, lower costs will allow vision systems with greater processing capability to compete with presence/absence sensors in some applications.
Nello Zuech, president of Vision Systems International consultancy, predicts that vision technology will find its way into non-industrial applications such as automotive. He points to one system already in use that mounts a camera in a truck cab to detect lane markers and warn the driver when the vehicle crosses them without the turn signal on.
Vision in the form of biometric imaging may also make its way into security systems including identity verification, access control, and keyless home entry. Zuech even sees systems being consolidated onto a single ASIC.
New in Viewing: Rugged and Intelligent
Here's a sampling of some of the newest vision products.