Heterogeneous Multicore & the Future of Vision Systems

DN Staff

December 7, 2012

It's tricky writing about the future of vision systems because their performance has largely been overpromised and underdelivered. Setting unrealistic expectations for what vision systems are truly capable of can hinder the growth of the industry. Rather than getting into the easily imagined future that science fiction continually depicts, the focus here is on the processing technology that will improve the vision systems of today and turn them into the advanced vision systems of tomorrow. This processing technology is known as heterogeneous multicore.

False positives
To understand the need for multicore processors, it is necessary to look at what is driving the processing demands of vision systems. In many applications, the barrier to adopting vision systems is achieving high accuracy in real-world conditions. There are many great demonstrations of what a vision system can do, but building a system that works outside the lab environment has halted progress on many fronts. Face recognition is a good example: it works well in the lab, but a look at the list of conditions needed to get the best results shows that it performs best under conditions fairly removed from the real world.


In the field of video security, making sure a security system can execute its proper duties (sound alarms, record events, etc.) when someone is breaching it is relatively easy. What is extremely difficult is triggering only when someone is actually breaching the system and not generating a false positive when there is no real incident. These false positives are holding vision systems back, and they are not an easy problem to solve. Many factors influence the performance of a vision system, including lighting, weather, background, level of activity in the environment, distance, and camera angle, to name just a few.
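To get a feel for why false positives matter so much, the quick sketch below works through the arithmetic for a single camera analyzed at video rate; the per-frame false-positive probability used here is a purely illustrative assumption, not a figure from any real system.

```c
#include <stdio.h>

/* Illustration (hypothetical numbers): even a very small per-frame
 * false-positive probability adds up quickly at video frame rates. */
int main(void)
{
    double fp_per_frame = 1e-5;   /* assumed false-positive probability per analyzed frame */
    double fps = 30.0;            /* typical video frame rate */
    double seconds_per_day = 24.0 * 60.0 * 60.0;

    double frames_per_day = fps * seconds_per_day;
    double expected_false_alarms = fp_per_frame * frames_per_day;

    printf("Frames analyzed per day:       %.0f\n", frames_per_day);
    printf("Expected false alarms per day: %.1f\n", expected_false_alarms);
    return 0;
}
```

At 30 frames per second, a camera produces roughly 2.6 million frames a day, so even a one-in-100,000 error rate would generate dozens of nuisance alarms every day from a single camera.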

There are several different ways to improve the accuracy of a vision system:

  • Increasing the amount of information;

  • Improving the way the information is used;

  • Overlapping approaches.

All three of these improvements are pushing the need for heterogeneous multicore processors.
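As a rough illustration of the third item, overlapping approaches: if two detection methods with roughly independent errors must both agree before an alarm is raised, the combined false-positive rate is approximately the product of the individual rates. The numbers in the sketch below are purely illustrative, and running two methods on every frame also multiplies the processing load, which is part of what pushes systems toward more capable processors.

```c
#include <stdio.h>

/* Sketch: when two detection methods with roughly independent errors must
 * both fire before an alarm is raised, the combined false-positive
 * probability is approximately the product of the individual ones.
 * All rates below are illustrative assumptions. */
int main(void)
{
    double fp_a = 0.01;   /* assumed false-positive rate of method A */
    double fp_b = 0.02;   /* assumed false-positive rate of method B */

    double fp_combined = fp_a * fp_b;   /* both must agree (independence assumed) */

    printf("Method A alone:   %.4f\n", fp_a);
    printf("Method B alone:   %.4f\n", fp_b);
    printf("A AND B combined: %.6f\n", fp_combined);
    return 0;
}
```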

Increasing information
Perhaps the easiest way to improve a vision system is to increase what the system actually sees. Increasing the resolution of the sensor(s) feeding the system is an ongoing trend. While some applications can get all the information they need from a CIF or QVGA image, the broader trend is toward higher-resolution, multi-megapixel sensors. With each step from CIF to D1 to 720p to 1080p and beyond, the amount of data at least doubles, mapping directly to a similar increase in required processing capability.
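For a sense of scale, the sketch below computes the pixel counts for the resolutions named above (using the NTSC 720 x 480 definition of D1, an assumption on my part) along with the ratio from one step to the next; the raw data per frame grows by these same ratios.

```c
#include <stdio.h>

/* Pixel counts for the video resolutions mentioned above, showing how
 * the raw data per frame at least doubles at each step. */
int main(void)
{
    struct { const char *name; int w, h; } res[] = {
        { "CIF",    352,  288 },
        { "D1",     720,  480 },   /* NTSC D1 assumed */
        { "720p",  1280,  720 },
        { "1080p", 1920, 1080 },
    };

    for (int i = 0; i < 4; i++) {
        long pixels = (long)res[i].w * res[i].h;
        printf("%-6s %4d x %-4d = %8ld pixels", res[i].name, res[i].w, res[i].h, pixels);
        if (i > 0) {
            long prev = (long)res[i - 1].w * res[i - 1].h;
            printf("  (%.1fx the previous step)", (double)pixels / prev);
        }
        printf("\n");
    }
    return 0;
}
```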

Resolution is just one dimension increasing the amount of data. Other dimensions are:

  • Temporal: increasing frame rates;

  • Color: from grayscale to full color;

  • Number of vision inputs: from mono to stereo to multi-view vision;

  • Mode of inputs: from vision-only to multi-modal that can combine audio input with video input.

As vision systems' inputs expand along each of these dimensions, the demands on computation will only continue to increase.
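To see how these dimensions compound, the back-of-the-envelope sketch below multiplies an assumed resolution, frame rate, color depth, and camera count into a raw input data rate. Every parameter value here is an illustrative assumption rather than a requirement of any particular system.

```c
#include <stdio.h>

/* Back-of-the-envelope estimate of raw input data rate when several of the
 * dimensions above are increased at once. All values are illustrative. */
int main(void)
{
    int width  = 1920, height = 1080;   /* resolution: 1080p                        */
    int fps    = 30;                    /* temporal: frames per second              */
    int bytes_per_pixel = 2;            /* color: e.g. YUV 4:2:2 vs. 1 for grayscale */
    int cameras = 4;                    /* number of vision inputs: multi-view       */

    double bytes_per_sec = (double)width * height * bytes_per_pixel * fps * cameras;

    printf("Raw input: %.1f MB/s (%.2f GB/s)\n",
           bytes_per_sec / 1e6, bytes_per_sec / 1e9);
    return 0;
}
```

With these assumed values, the raw input alone approaches half a gigabyte per second before any vision processing is applied to it.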

Improving information usage
Once you have the image, you can improve a vision system by extracting better information from it. This is the art of vision processing: taking image data and returning useful information. Newer and more advanced algorithms are continuously being developed that perform vision functions such as color analysis, motion estimation, feature detection, shape calculation, object detection, object tracking, pattern matching, and event detection, and that turn out information such as size, shape, 3D position, orientation, motion, surface properties, classification, and identification.
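As a minimal, purely illustrative example of one of the lower-level functions listed above, the sketch below does motion (change) detection by differencing two grayscale frames; it is not any particular product's algorithm, and a real system would follow it with filtering, connected-component analysis, tracking, and classification.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Count pixels whose brightness changed by more than a threshold between
 * two consecutive grayscale frames: a simple form of motion detection. */
static size_t count_changed_pixels(const unsigned char *prev,
                                   const unsigned char *curr,
                                   size_t num_pixels,
                                   int threshold)
{
    size_t changed = 0;
    for (size_t i = 0; i < num_pixels; i++) {
        int diff = abs((int)curr[i] - (int)prev[i]);
        if (diff > threshold)
            changed++;
    }
    return changed;
}

int main(void)
{
    enum { W = 8, H = 8 };                 /* tiny synthetic frames for illustration */
    unsigned char prev[W * H], curr[W * H];

    memset(prev, 50, sizeof prev);         /* flat background frame                  */
    memcpy(curr, prev, sizeof curr);
    curr[10] = 200;                        /* a small bright "object" appears        */
    curr[11] = 200;

    size_t changed = count_changed_pixels(prev, curr, W * H, 30);
    printf("Changed pixels: %zu of %d\n", changed, W * H);
    return 0;
}
```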
