What factories do--or don't do--with all the data they collect via machine vision is a can of worms, according to what many people told me off the record when I was covering the subject for T&MW. That data can help monitor processes, equipment, and product quality, and support the kinds of predictions Alex suggests, but it often goes unused for lack of the money, time, and/or know-how to integrate it into a plant's ERP and QC systems.
That's an observant question, Chuck. 3D machine vision requires multiple cameras and GigE is good at handling data from multiple cameras, so that would make a good fit. But I don't think GigE--over 1 Gbps or 10 Gbps backbones--is currently being positioned that way, although perhaps it should be.
Good point, Bill, and I think the challenge here will fall to manufacturing and automation engineers to work with their software counterparts to create what I'd call predictive diagnostic and QC systems, which can make use of that data (not just more data for data's sake, which can't be analyzed, as you say). The objective would be an almost artificial-intelligence-like program--or, more properly, software that over time builds up a database of patterns from which it can analyze and predict future outcomes, such as potential near-term failures, as well as tweaks to improve or maintain production quality.
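To make that "database of patterns" idea concrete, here's a toy Python sketch (the class name, window sizes, and sensor readings are all made up for illustration): the monitor learns a rolling baseline of normal readings and flags a value that deviates sharply from it, a crude stand-in for predicting a near-term failure.

```python
from collections import deque
import statistics

class PredictiveMonitor:
    """Toy predictive-diagnostics monitor: builds up a history of normal
    readings and flags values that deviate sharply from the learned
    pattern (a stand-in for spotting a developing failure)."""

    def __init__(self, window=50, threshold=3.0):
        self.history = deque(maxlen=window)  # rolling "database of patterns"
        self.threshold = threshold           # std-devs that count as anomalous

    def observe(self, value):
        """Record a reading; return True if it looks anomalous."""
        if len(self.history) >= 10:  # need some baseline first
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        else:
            anomalous = False
        self.history.append(value)
        return anomalous

monitor = PredictiveMonitor()
readings = [20.0, 20.1, 19.9, 20.2, 20.0, 19.8, 20.1, 20.0, 19.9, 20.1,
            20.0, 20.2, 35.0]  # last reading: a sudden temperature spike
flags = [monitor.observe(r) for r in readings]
print(flags[-1])  # -> True: only the spike is flagged
```

A production system would obviously use richer models over many signals, but the shape is the same: accumulate patterns, then score new data against them.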
Looks like we are again bumping up against the limits of informatics. At some point it stops being a question of how fast and how much data can be transferred, and becomes a question of where we store it and how quickly we can analyze it. Advances in cognitive algorithms are concerned with "attention"--noticing anomalies on a product inspection line or movement in a normally still scene. We have a great interplay between the development of hardware and software solutions. With GigE 2.0, it looks like it's software's turn to make a move.
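That "attention" idea can be sketched in a few lines of Python (the 4x4 toy frames and the threshold are invented for illustration; a real system would operate on camera frames): only frames that change noticeably from a reference scene are passed downstream for full analysis, so a still scene generates almost no data to store or crunch.

```python
def frame_delta(frame_a, frame_b):
    """Mean absolute pixel difference between two equal-sized frames."""
    n = len(frame_a) * len(frame_a[0])
    return sum(abs(a - b) for row_a, row_b in zip(frame_a, frame_b)
               for a, b in zip(row_a, row_b)) / n

def attention_filter(frames, reference, threshold=5.0):
    """Yield only frames whose change from the reference exceeds threshold."""
    for i, frame in enumerate(frames):
        if frame_delta(frame, reference) > threshold:
            yield i, frame

# A normally still scene, with movement appearing in frame 2:
still = [[10] * 4 for _ in range(4)]
frames = [still, still,
          [[10] * 4, [10] * 4, [90, 90, 10, 10], [10] * 4],
          still]
interesting = [i for i, _ in attention_filter(frames, still)]
print(interesting)  # -> [2]
```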
Beth, a machine vision system with multiple cameras is served well by any speed of GigE backbone and its multipoint-to-multipoint capabilities. Whether you need to ratchet that up to 10 GigE depends on the nature of the data and/or the speed of the transfer.
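As a rough illustration of that trade-off, here's a back-of-the-envelope sizing sketch in Python (the camera specs and the 80% utilization ceiling are hypothetical assumptions, not figures from the discussion): aggregate raw video bandwidth against link capacity tells you when 1 GigE runs out of headroom.

```python
def camera_bandwidth_bps(width, height, bytes_per_pixel, fps):
    """Raw (uncompressed) video bandwidth in bits per second."""
    return width * height * bytes_per_pixel * 8 * fps

def link_fits(cameras, link_gbps, utilization=0.8):
    """True if the aggregate stream fits the link at a safe utilization."""
    total = sum(camera_bandwidth_bps(*c) for c in cameras)
    return total <= link_gbps * 1e9 * utilization

# Hypothetical rig: four 1920x1080, 8-bit mono cameras at 30 fps
# (~0.5 Gbps each, ~2 Gbps aggregate).
rig = [(1920, 1080, 1, 30)] * 4
print(link_fits(rig, 1))   # -> False: too much for a 1 GigE backbone
print(link_fits(rig, 10))  # -> True: fits comfortably on 10 GigE
```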
Beth, good question. Many say there aren't really any downsides, and higher price is definitely not one of them. Ethernet's ubiquity throughout the enterprise means that most components--network interface cards, cables--are generally quite low cost. Some critics note that although GigE takes away the frame grabber (image capture card), it puts back in the NIC (network interface card). Although this is technically true, NICs cost a lot less than frame grabbers. To date, the main initial concerns about using GigE as a backbone for real-time networks, such as those high-speed vision requires, have been CPU loading and latency--both potential sources of slowed data transfers. Enthusiasts say that CPU loading has been addressed with filter drivers, and latency has turned out not to be a problem in 1 Gbps GigE networks. Whether this will all translate well to an order-of-magnitude speed increase is not yet known.
Even in cases where you have only one of the two (multiple cameras or multipoint distribution), you can see significant cost savings. Camera Link medium cameras are plentiful in the market, well understood, and available in a variety of performance classes. But they suffer from a costly interface--cables, repeaters, and frame grabbers aren't inexpensive. It can be well worth converting the Camera Link interface into something like 10 GigE.
Camera manufacturers are starting to take a look at offering the same camera (same sensor, same electronics), but with a 10 GigE interface built right in.
But yes, if you have one or two cameras that need to be connected to a PC a pace away, then there are other alternatives. Medical, military, and high-value quality-inspection applications, however, don't tend to fit this mold.
Thanks for clarifying, John. So what you're saying is that for the bulk of applications, where multiple cameras need to distribute images to multiple end points, 10 GigE can make a big difference. For the fewer applications where everything is in close proximity, it could be overkill from a price standpoint.