Seems like a logical move to deploy 3D cameras for factory floor inspections. Another new tool gaining mileage for monitoring quality is the iPad. I've heard about companies outfitting production workers with tablets so they can roam the factory floor and inspect for problems, using the built-in camera and location tracking to pinpoint trouble spots and send real-time images of faulty equipment back for evaluation and troubleshooting.
I'd be interested to hear what development work, if any, is required on the software side to create programs that can analyze the 3D data coming off the camera and identify defects as parts pass under it, as well as how 3D compares quantitatively with 2D. As you allude to, probably the biggest positive impact is that 3D cuts down the inspection time needed to verify that complex assemblies are defect-free.
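To make the question concrete, here's a minimal sketch of what that analysis software might look like, assuming the 3D camera delivers a depth map as a 2D array of millimeter values (the array size, tolerance, and "golden reference" approach here are all illustrative assumptions, not any vendor's actual API):

```python
import numpy as np

def find_defects(depth_map, reference, tolerance_mm=0.5):
    """Compare a measured depth map against a golden reference surface.

    depth_map, reference: 2D numpy arrays of depth values in mm.
    Returns a boolean mask of pixels that deviate by more than
    tolerance_mm, plus the count of flagged pixels.
    """
    deviation = np.abs(depth_map - reference)
    defect_mask = deviation > tolerance_mm
    return defect_mask, int(defect_mask.sum())

# Hypothetical example: a flat 100x100 reference surface at 50 mm,
# and a scanned part with a 2 mm dent in a 5x5 region.
reference = np.full((100, 100), 50.0)
scan = reference.copy()
scan[40:45, 40:45] -= 2.0  # simulated dent
mask, count = find_defects(scan, reference, tolerance_mm=0.5)
print(count)  # 25 pixels flagged
```

The real work, of course, is everything this toy skips: registering the scan to the reference, handling sensor noise, and turning a pixel mask into a pass/fail decision at line speed.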
It's true that some portable devices are finding their way into machine vision applications on the factory floor. However, the cameras on consumer devices such as the iPad, smartphones, and most laptops are such low-res--and the processors can't usually handle highly complex image processing software--that the most these devices can do is confirm the presence or absence of relatively large objects (Is there a board on the line at this stage? Is there a box of a certain size on that pallet?). But when it comes to smaller objects, more detailed inspection, or more analysis, more specialized hardware and software are required.
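Those presence/absence checks really can be that simple--something on the order of counting dark pixels against a threshold. A toy sketch (thresholds and frame sizes here are made up; real vision sensors do this in firmware):

```python
import numpy as np

def object_present(gray_frame, dark_threshold=80, min_pixels=500):
    """Crude presence check: is a dark object occupying the frame?

    gray_frame: 2D array of 0-255 grayscale values. Returns True if
    enough pixels are darker than dark_threshold.
    """
    dark_count = (gray_frame < dark_threshold).sum()
    return dark_count >= min_pixels

# Hypothetical frames: an empty (bright) belt vs. a belt with a dark board.
empty = np.full((120, 160), 200, dtype=np.uint8)
with_board = empty.copy()
with_board[30:90, 40:120] = 30  # dark rectangle standing in for a board
print(object_present(empty))       # False
print(object_present(with_board))  # True
```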
Also, there's already an established class of portable and sometimes handheld machine vision devices for simpler inspection: the vision sensor or barcode-plus reader.
Alex makes a good point about software. Do these 3D cameras provide images that are viewed by plant operators? Or are they programmed to send a warning or alert that something is out of spec? I would guess the latter, since some of what the camera sees may not trigger a response from an operator looking at the image.
Beth also makes a good point about iPads being used to inspect plant operations. iPhones are also getting in on the action. A number of automation suppliers have developed iPhone apps for plant operations.
Since automotive manufacturing and inspection are highly automated operations, images are processed and analyzed by the software, then instructions are sent to pass/fail a part, and if it passes, to shuttle it off to the next stage. Process monitoring is sometimes part of a machine vision system/network, and those instructions can stop the line if there's a problem with a particular station.
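That pass/fail-and-route flow can be sketched roughly like this (every name here is illustrative--the decision logic in a real system lives in the vision controller or PLC, not a Python script):

```python
from dataclasses import dataclass

@dataclass
class InspectionResult:
    part_id: str
    defect_count: int

def route_part(result, max_defects=0, station_fail_streak=0, stop_after=3):
    """Decide what the line should do with one inspected part.

    Returns 'pass' (shuttle part to the next stage), 'fail' (reject the
    part), or 'stop_line' (repeated failures suggest a problem at the
    station itself, so process monitoring halts it).
    """
    if result.defect_count <= max_defects:
        return "pass"
    if station_fail_streak + 1 >= stop_after:
        return "stop_line"
    return "fail"

print(route_part(InspectionResult("A1", 0)))                         # pass
print(route_part(InspectionResult("A2", 3)))                         # fail
print(route_part(InspectionResult("A3", 3), station_fail_streak=2))  # stop_line
```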
This is pretty simple inspection, compared to wafer defects, for example. But in most automated machine vision systems today, operators are not looking at everything that goes by.
I would imagine the automated eye can be programmed to detect defects more accurately than the human eye. I understand that in some of these applications at auto plants, the camera can "see" into places that operators can't. I also understand that automated cameras have a more consistent attention span than operators.
Good point Rob. Yes, the automated eye is more consistent--and gets bored less often--than the human eye.
That said, it took years and years to train automated eyes to see complex patterns as well, and respond almost as quickly, as the human eye can, and to develop analysis programs that make sense of those patterns. And by complex, I mean identifying which objects are really on that pallet or which components are on that board, not sub-micron defects (that technology is an entirely different order of magnitude in data volume and complexity).
The main reason companies change from visual inspection to automated machine vision inspection, though, is cost. It's just too expensive to pay humans to do the inspection at the rate it's needed to approach zero defects, inspection's holy grail.
I would guess we'll see some major advancements over the next few years in refining the algorithms that make sense of the patterns coming from 3D machine vision systems. Not only should that analysis flag problems and trigger fixes, but it also needs to be fed back into MES and PLM systems in a closed-loop manner so engineers have critical data at their fingertips when refining designs and addressing long-term quality problems.
Images are always compute-intensive, but I would imagine that 3D images need mountains of processing power to get the job done right. How much processing is needed for these applications, and is it cost-effective?