"What the heck is embedded vision?" you may be asking when you see the title of our next Continuing Education Center course, Fundamentals of Embedded Computer Vision: Creating Machines That See. That's a good question, and Jeff Bier is the one to answer it. As someone who's reported on machine vision and has interviewed Jeff before, I'm especially looking forward to this course.
Until recently, because of its cost, embedded computer vision was found mostly in low-volume applications like machine vision. There it typically consists of visible-light (and sometimes infrared) cameras feeding various types of inspection systems, mounted on robots or fixed in place on the manufacturing floor, the assembly line, or the warehouse.
But then one of those magic moments happened in electronics. CMOS image sensors got cheaper, smaller, and much more powerful, and cameras started appearing everywhere -- for example, in tablet PCs, the iPhone, and driver safety systems. Those high-volume apps drove prices down even further. Meanwhile, embedded processors with enough performance to deal with vision applications reached price and power consumption levels low enough for consumer apps.
So last year Jeff, an expert on embedded processors and a prescient guy who's good at reading the electronics industry tea leaves, founded the Embedded Vision Alliance. The EVA is an industry partnership that works to inspire and empower designers to create more capable and responsive products by integrating vision capabilities into them.
During the week-long course, he will use case studies and demonstrations to illustrate what embedded vision is all about and why you should consider including it in your design. Jeff has divided the course into an introduction, a day on image sensors, another on processors, and one on vision algorithms and tools. The last day looks at even more complex algorithms, like face detection and object tracking. It also shows how to set up your own vision algorithm development environment using OpenCV, the free, open-source vision software library.
Ann, this is an interesting contrast to the article from August 13, by Al Presher, titled "Blurring the Lines of Control". In that article, the machine control system integrated image processing with other functions in a centralized computer. As I commented then, this runs counter to the industry trend, mentioned in this article, toward distributing the intelligence. For example, a modern automobile has probably 50 or more processors. Generally there is even a processor for the temperature gauge. It is cheaper to do that than to integrate the software on a centralized machine. In addition, control functions are generally more accurate when the processing is near the device.
I am looking forward to Jeff's course as well. It should be interesting.
Thanks @Ann! Embedded vision is a very timely topic. With inexpensive vision sensors being placed in all sorts of devices, we are moving beyond "how can we do this?" and into "how can we use this?" ...and soon after, "why do we need this?". One of the largest immediate impacts will be its disruptive effect on police sketch artists and eyewitnesses. With every action digitally recorded, we will move quickly away from the "he said," "she said" impasses of yesteryear.
On the positive employment side, the demand for "video-photo-shoppers" will skyrocket, both for vanity's sake and for those underworld characters that have enough money to alter the photographic record.
Lou, glad you will be joining us. In the time I spent reporting on machine vision, I discovered that there really isn't a single MV industry anymore, and within it there are multiple trends, sometimes apparently opposite ones. While some vision system/production system designers are shifting more toward distributed control, others are moving toward centralized control. And these differences exist not only between application clusters within traditional industrial machine vision, but within them, too. It all depends on what the vision system is being required to do and what constraints there are on it. At a larger scale, the formation of the EVA made it clear that things are even more complex when you go outside industrial machine vision and look at other uses of embedded computer vision.
williamlweaver, glad you will be joining us. I think that's an intriguing point about all the photoshopping of the (soon-to-be) massive amounts of online photos. Might make a good question for our lecturer.
For industrial control applications, or even a simple assembly line, a machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That's where the "smart" machine comes in. A smart machine is one with some simple (or, in some cases, complex) processing capability that lets it adapt to changing conditions. Such machines are suited for a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This radio show will explore what's possible with smart machines, and what tradeoffs need to be made to implement such a solution.