What factories do--or don't do--with all the data they collect via machine vision is a can of worms, according to what a lot of people told me off the record when I was covering the subject for T&MW. That data can help monitor processes, equipment, and product quality, and support the kinds of predictions Alex suggests, but it often goes unused for lack of the money, time, and/or know-how to integrate it into a plant's SRP and QC systems.
That's an observant question, Chuck. 3D machine vision requires multiple cameras and GigE is good at handling data from multiple cameras, so that would make a good fit. But I don't think GigE--over 1 Gbps or 10 Gbps backbones--is currently being positioned that way, although perhaps it should be.
Good point, Bill, and I think the challenge here will fall on manufacturing and automation engineers to work with their software counterparts to create what I'd call predictive diagnostic and QC systems, which can make use of that data (not just more data for data's sake, which can't be analyzed, as you say). The objective would be an almost artificial-intelligence-like program--or, more properly, software that over time builds up a database of patterns from which it can analyze and predict future outcomes, such as potential near-term failures, as well as suggest tweaks to improve or maintain production quality.
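To make the "database of patterns" idea concrete, here is a minimal sketch in Python. It learns a rolling baseline from recent sensor readings and flags values that drift far from it--the class name, window size, and sigma threshold are all illustrative assumptions, not from any real product:

```python
from collections import deque
from statistics import mean, stdev

class PatternMonitor:
    """Toy sketch of a predictive-diagnostic pattern database:
    learn a baseline from recent readings, flag big deviations.
    All names and thresholds here are illustrative."""

    def __init__(self, window=50, sigma=3.0):
        self.history = deque(maxlen=window)  # rolling "database" of readings
        self.sigma = sigma

    def observe(self, value):
        """Return True if the reading looks anomalous vs. the baseline."""
        if len(self.history) >= 10:
            mu, sd = mean(self.history), stdev(self.history)
            anomalous = sd > 0 and abs(value - mu) > self.sigma * sd
        else:
            anomalous = False  # not enough data to judge yet
        self.history.append(value)
        return anomalous

monitor = PatternMonitor()
readings = [10.0 + 0.1 * (i % 5) for i in range(40)] + [25.0]  # spike at end
flags = [monitor.observe(r) for r in readings]
print(flags[-1])  # the final spike is flagged as anomalous
```

A real system would of course use richer features (vibration spectra, temperature trends) and a learned model rather than a simple z-score, but the shape--accumulate history, compare new data against it, predict trouble early--is the same.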
Looks like we are again bumping up against the limits of informatics. At some point it stops being a question of how fast and how much data can be transferred, and becomes a question of where we store it and how quickly we can analyze it. Advances in cognitive algorithms are concerned with "attention"--noticing anomalies on a product inspection line or movement in a normally still scene. We have a great interplay between the development of hardware and software solutions. With GigE 2.0, it looks like it's software's turn to make a move.
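As a toy illustration of that "attention" primitive--noticing movement in a normally still scene--the sketch below computes the fraction of pixels that changed between two grayscale frames. It's a hypothetical pure-Python example; a production system would use NumPy or OpenCV on real camera frames:

```python
def motion_score(prev, curr, threshold=10):
    """Toy 'attention' primitive: fraction of pixels that changed
    noticeably between two grayscale frames (lists of pixel rows).
    Illustrative sketch only; real systems use NumPy/OpenCV."""
    changed = total = 0
    for row_a, row_b in zip(prev, curr):
        for a, b in zip(row_a, row_b):
            total += 1
            if abs(a - b) > threshold:
                changed += 1
    return changed / total

still = [[100] * 4 for _ in range(4)]   # a 4x4 "static" frame
moved = [row[:] for row in still]
moved[1][1] = 200                        # one pixel changed a lot
print(motion_score(still, moved))        # 0.0625 -> 1 of 16 pixels changed
```

The point is that attention lets software discard the vast majority of frames (score near zero) and spend storage and analysis only on the interesting ones.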
Beth, a machine vision system with multiple cameras is served well by any speed of GigE backbone and its multipoint-to-multipoint capabilities. Whether you need to ratchet that up to 10 GigE depends on the nature of the data and/or the speed of the transfer.
Beth, good question. Many say there aren't really any downsides, and higher price is definitely not one of them. Ethernet's ubiquity throughout the enterprise means that most components--network interface cards, cables--are generally quite low cost. Some critics say that although GigE takes away the frame grabber (image capture card), it puts back in the NIC (network interface card). Although this is technically true, NICs cost a lot less than frame grabbers. To date, the main concerns about using GigE as a backbone for the real-time networks high-speed vision requires have centered on CPU loading and latency--both potential sources of slowed data transfers. Enthusiasts say that CPU loading has been fixed with filters and drivers, and latency has turned out not to be a problem in 1 Gbps GigE networks. Whether all this will translate to an order-of-magnitude speed increase is not yet known.
Even in cases where you have only one of the two (multiple cameras or multi-point distribution), you can see significant cost savings. Camera Link medium cameras are plentiful in the market, well understood, and come in a variety of performance classes. But they suffer from a costly interface--cables, repeaters, and frame grabbers aren't inexpensive. It's beneficial to convert the CL interface into something like 10 GigE.
Camera manufacturers are starting to take a look at offering the same camera (same sensor, same electronics), but with a 10 GigE interface built right in.
But yes, if you have one or two cameras that need to be connected to a PC a pace away, then there are other alternatives. But medical, military, and high-value quality inspection applications don't tend to fit this mold.
Thanks for clarifying, John. So what you're saying is that for the bulk of applications, where multiple cameras need to distribute images to multiple end points, 10 GigE can make a big difference. For the fewer applications where there is closer proximity, it could be overkill from a price standpoint.
Truchard will be presented the award at the 2014 Golden Mousetrap Awards ceremony during the co-located events Pacific Design & Manufacturing, MD&M West, WestPack, PLASTEC West, Electronics West, ATX West, and AeroCon.
In a bid to boost the viability of lithium-based electric car batteries, a team at Lawrence Berkeley National Laboratory has developed a chemistry that could possibly double an EV’s driving range while cutting its battery cost in half.
For industrial control applications, or even a simple assembly line, a machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That's where the "smart" machine comes in. The smart machine is one that has some simple (or, in some cases, complex) processing capability that lets it adapt to changing conditions. Such machines are suited for a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This discussion will examine what's possible with smart machines, and what tradeoffs need to be made to implement such a solution.
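What "adapting to changing conditions" might look like, at its very simplest, is a feedback rule: the machine watches a quality metric and adjusts its own operating point. Here is a hypothetical sketch--the function, parameter names, and numbers are all illustrative assumptions:

```python
def adjust_feed_rate(rate, defect_rate, target=0.01, step=0.05):
    """Toy adaptation rule for a 'smart' machine: slow down when the
    measured defect rate exceeds the target, speed back up when
    quality recovers. All names and values are illustrative."""
    if defect_rate > target:
        return max(0.5, rate - step)   # back off, but not below half speed
    return min(1.0, rate + step)       # creep back toward full speed

rate = 1.0  # start at full speed
for observed_defect_rate in [0.005, 0.02, 0.03, 0.008, 0.004]:
    rate = adjust_feed_rate(rate, observed_defect_rate)
print(rate)  # back at full speed after quality recovers
```

Real smart machines layer far more sophistication on top--sensor fusion, model-based prediction, safety interlocks--but the tradeoff is the same: more processing on the machine buys adaptability at the cost of complexity.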