Cost is an interesting question--and a good one. The answer isn't simple, because you first have to decide which costs to compare: the cost of the equipment? The cost of the software for the increased number and complexity of the 3D algorithms? The cost of the processing power to run that software? And processing power may be effectively free, if you're using smart cameras, or at least relatively cheap with today's multicore processors on a PC.
If the latter, do you include cost of programming the multithreaded cores? How about the cost of humans to analyze all that data somewhere in the process?
The cost of 3D cameras vs. 2D cameras is also highly variable. It depends on how 3D is achieved--stereo? laser triangulation?--and on the 2D cameras themselves: linescan or array? Smart or not? How many ports? A standard that allows direct PC connections, or one that requires frame grabbers--and if so, how many frame grabbers per camera?
Or are you using a standalone vision system with an integrated controller and varying port counts, camera configurations and processor options?
Your description of the automotive inspections using multiple cameras is not familiar to me, but it immediately made me think of computer-aided verification (CAV), in which the actual resultant component (in this case a complex tube) is laser scanned and the output data (a point cloud) is overlaid against the original CAD database. It's a very effective geometry-verification technique, and my peers at Product Development Technologies have used it routinely for over 15 years. I wonder if it's a viable alternative to the multiple-camera process you are describing?
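The core of that CAV-style check can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual pipeline: I assume the scan arrives as an N×3 point cloud, that the CAD surface has been sampled into reference points, and I approximate point-to-surface deviation by nearest-reference-point distance (a common simplification; real tools compute true point-to-surface distance after registration). The function name `deviation_report` and the toy data are my own.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_report(scan_points, cad_points, tolerance=0.5):
    """Compare a scanned point cloud against points sampled from a CAD model.

    scan_points : (N, 3) array from the laser scanner
    cad_points  : (M, 3) array sampled from the CAD surface
    tolerance   : maximum allowed deviation (same units, e.g. mm)
    Returns (per-point deviations, overall pass/fail).
    """
    tree = cKDTree(cad_points)                 # index the CAD reference points
    deviations, _ = tree.query(scan_points)    # nearest-CAD distance per scan point
    return deviations, bool(np.all(deviations <= tolerance))

# Toy example: a scanned "tube" centerline that matches CAD except one point.
cad = np.stack([np.linspace(0, 10, 101),
                np.zeros(101), np.zeros(101)], axis=1)
scan = cad.copy()
scan[50] += [0.0, 1.0, 0.0]                    # one point bent 1 mm off the model

deviations, passed = deviation_report(scan, cad, tolerance=0.5)
```

The out-of-tolerance point shows up immediately in `deviations`, which is exactly the kind of whole-geometry verification that's hard to assemble from multiple fixed 2D camera views.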
Images are always compute-intensive, but I would imagine that 3D images need mountains of processing power to get the job done right. How much processing is needed for these applications, and is it cost effective?
I would guess we'll see some major advances over the next few years in refining the algorithms that make sense of the patterns coming from 3D machine vision systems. Not only should that analysis flag problems and trigger fixes, but it also needs to be fed back into MES and PLM systems in a closed loop, so engineers have critical data at their fingertips when refining designs and addressing long-term quality problems.
Good point Rob. Yes, the automated eye is more consistent--and gets bored less often--than the human eye.
That said, it took years and years to train automated eyes to see as well as the human eye--and to respond almost as quickly to complex patterns--and to develop analysis programs that can make sense of those patterns. And by complex, I mean determining which objects are really on that pallet or which components are on that board, not finding sub-micron defects (that technology is an entirely different order of magnitude in data volume and complexity).
The main reason companies change from visual inspection to automated machine vision inspection, though, is cost. It's just too expensive to pay humans to do the inspection at the rate it's needed to approach zero defects, inspection's holy grail.
I would imagine the automated eye can be programmed to detect defects more accurately than the human eye. I understand that in some of these applications at auto plants, the camera can "see" into places that operators can't. I also understand that automated cameras have a more consistent attention span than operators.
Since automotive manufacturing and inspection are highly automated operations, images are processed and analyzed by the software, then instructions are sent to pass/fail a part, and if it passes, to shuttle it off to the next stage. Process monitoring is sometimes part of a machine vision system/network, and those instructions can stop the line if there's a problem with a particular station.
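That pass/fail-then-route flow, with a process monitor that can halt the line, reduces to simple control logic. The sketch below is purely illustrative--the function `dispatch`, the station IDs, and the fail-limit policy are assumptions, not any real system's API:

```python
from collections import Counter

def dispatch(results, station_fail_limit=3):
    """Route parts based on vision-system verdicts and monitor stations.

    results : iterable of (station_id, passed) tuples from the vision software
    Returns (routing decisions, station that tripped the line stop, or None).
    """
    fails = Counter()
    routed = []
    for station, passed in results:
        # Pass -> shuttle to the next stage; fail -> divert to the reject bin.
        routed.append((station, "next_stage" if passed else "reject_bin"))
        if not passed:
            fails[station] += 1
            if fails[station] >= station_fail_limit:
                return routed, station   # process monitor stops the line here
    return routed, None

routed, stopped = dispatch([("A", True), ("B", False), ("B", False), ("B", False)])
```

Here three consecutive failures at station "B" trip the monitor, which is the "stop the line if there's a problem with a particular station" behavior described above.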
This is pretty simple inspection, compared to wafer defects, for example. But in most automated machine vision systems today, operators are not looking at everything that goes by.
Alex makes a good point about software. Do these 3D cameras provide images that are viewed by plant operators? Or are they programmed to send a warning or alert that something is out of spec? I would guess the latter, since some of what the camera sees may not trigger a response from an operator looking at the image.
Beth also makes a good point about iPads being used in conjunction with inspecting plant operations. iPhones are also getting into the action. A number of automation suppliers have developed iPhone apps for plant operations.
It's true that some portable devices are finding their way into machine vision applications on the factory floor. However, the cameras on consumer devices such as the iPad, smartphones, and most laptops have such low resolution--and their processors usually can't handle highly complex image-processing software--that the most these devices can do is confirm presence/absence of relatively large objects (is there a board on the line at this stage? is there a box of a certain size on that pallet?). But when it comes to smaller objects, more detailed inspection, or more analysis, more specialized hardware and software are required.
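To give a sense of how little computation such a presence/absence check needs, here is a minimal sketch: treat the frame as a 2-D array of gray levels and ask whether enough pixels exceed a brightness threshold. The function name and thresholds are my own assumptions, chosen only to illustrate the idea:

```python
import numpy as np

def object_present(gray, threshold=128, min_fraction=0.05):
    """True if at least min_fraction of pixels are brighter than threshold.

    gray : 2-D uint8 array of gray levels (0-255), e.g. one camera frame
    """
    return bool((gray > threshold).mean() >= min_fraction)

empty_belt = np.zeros((48, 64), dtype=np.uint8)   # dark frame, nothing present
with_box = empty_belt.copy()
with_box[10:30, 20:50] = 200                      # bright rectangle = "box"
```

A check this crude runs easily on a phone, but it tells you nothing about defects, dimensions, or orientation--which is exactly why finer inspection needs the specialized hardware and software mentioned above.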
Also, there's already an established class of portable and sometimes handheld machine vision devices for simpler inspection: the vision sensor or barcode-plus reader.