Embedded vision is about to take off, enabled by tiny smart camera development modules like the SmartVue, which combines an image sensor with a high-powered programmable image processor. (Source: CogniVue)
This is a great new development. It is interesting that it uses the ARM architecture. Chalk up another one for ARM. It also opens up new applications. The vendor often lists target applications, but, as the author mentions, the form factor and other specs will get engineers thinking about other apps. I already have some ideas.
I agree Naperlou. There is a wide range of applications for a smart camera this small. In manufacturing alone, these cameras could help with track and trace as well as data collection and verification.
Actually, the chip has two ARM9 processors; one associated with the image-processing components, and one "on the side" for what I assume are general-purpose operations. Cognivue provides a development kit and a software development suite of tools, but the company's Web site doesn't supply more than a one-page summary of the tools available for developers. Still, that second ARM9 processor looks like a good way to customize the chip to many applications. The chip has many unused I/O pins and internal peripherals, too. ARM has designed an excellent debugging and trace section for processor licensees. I'd like to know if the Cognivue chip makes them available for the embedded ARM9 processors. Looks like the software "kit" includes an RTOS.
Jon - Yes, you are correct that the CV2201 Image Cognition Processor has two ARM9s, but the real performance comes from programming the parallel processing engine (APEX). From a software standpoint, we provide developers with an SDK; APEX tools (compiler and simulator for those looking to develop their own proprietary algorithmic functions executing on the APEX); toolkits, including a Video/Audio Player-Recorder toolkit, an Image Processing Toolkit (pre-optimized kernels, primitives, and algorithmic components executing on APEX for advanced image cognition applications), and a camera calibration toolkit; and complete applications. We're in field trials now with an aftermarket automotive smart backup camera application - a single camera doing dewarp, perspective correction, object detection, distance estimation, and graphic overlay - rendering the data to the driver-side display in real time to prevent backover accidents. It's another application that is taking off in a big way with automotive OEMs and aftermarket suppliers. Re ARM debugging - we support the Lauterbach Trace32 JTAG debugger in addition to the Amontec JTAGkey2 and Segger J-Link debuggers.
Sounds like a very powerful device. Nice to see more advanced activity in both intelligent vision and embedded vision technologies. From my perspective, people who want to apply vision don't want to get bogged down in coding algorithms; they just want to use them to accomplish something. Placing everything--hardware and software--in an easy-to-use package should give designers a quick start. Nicely done.
Engineers from the auto industry will take a hard look at this technology, if they aren't looking already. Lane keeping, adaptive cruise control, collision avoidance, rear-view assist, traffic sign recognition, and blind spot detection are only a few of the applications that might use this. It's said that middle- and upper-class vehicles could soon contain as many as 15 cameras apiece.
Charles, you are spot on. In fact, CogniVue has demonstrations for the following driver assistance systems: lane departure warning, forward collision avoidance, and blind spot detection.
Readers can check out our video demos on YouTube at the following link:
Spot on, Chuck. I also see potential applications in perimeter protection and in airport and city center security. Most of us know about London's 10,000 cameras (or whatever the specific number is), which monitor activity to keep an eye on crime and terrorist threats. For perimeter and airports, the TSA stuff we see isn't where the cutting-edge research activity is. Here's a piece I did a couple of years ago about some interesting IBM stuff. (Who knew IBM was into perimeter and airport protection?)
From someone in the automotive camera business: yes, this is 'old hat'. One inch square was the old standard. The new form factor that we are designing to is an 18-20mm sided cube. Automotive smart cameras tend to have a module with the imager on it; video goes parallel out of the imager chip into a DSP on the next board of the camera. Usually, the DSP is located very near the camera to avoid signaling issues. In the case of a front-view smart camera, the lens peeks out of the windshield above the rear-view mirror, and the DSP is on a circuit board directly above it. I think most car cameras, rear view or forward view (usually a smart type), are based on 1/3" imagers. Now we are moving on to 1/4", 1/5", and smaller. This is where the German car camera market is currently.
These cameras are and will be for the high-end market: Audi, BMW, etc. One of the basic differences that I see between machine vision and automotive is that, with forward-looking smart cameras, the requirement is that the camera be spatially accurate. With machine vision, the camera must be accurate enough to do the job in 2D. With high-end automotive vision applications heading toward dynamic collision avoidance (moving car vs. moving object), the modern camera must work with scene recognition, the 3D brother of 2D pattern recognition. So one camera, using multiple frames of video, will generate a moving 3D 'map' of the scene ahead; two cameras are not required, which simplifies the calibration and hardware required. Scene recognition for automotive applications is a new frontier; obviously, the robot industry has been working on it awhile. The autonomous vehicle competition was very interesting. Next, the car will have to figure out if the object is okay to run over or apply the brakes, determining that the car behind has time to brake as well! ;^) And that joke alludes to the newest systems for autos that allow a top or adjustable 360-degree view of the car on the dash display.
Thanks for the detailed reply, Craig. Your description of automotive 3D sounds like it's the type where multiple 2D cameras create stereo images that make up 3D images. There are some other methods used in industrial MV that are more complex and costly. And 2D is not always sufficient for MV, which is why there's more 3D happening there.
The reason I guessed that the small cameras you had described were for high end cars is because you said the one in the story was old news. But several other sources I found, as well as comments here from the manufacturer, described possible use of the cube camera in the story for automotive apps, meaning mainstream ones. Anyway, thanks for all the input.
Just wanted to address your comment on stereo vision for cameras. To be spatially aware, all you need is two pictures taken from different positions. To get that, you can have two cameras spaced at a known distance and compare synced frames. Or the computer looks at two successive frames and, knowing the speed of the car, calculates all of the distances in the part of the scene critical to the function running - typically a function like collision avoidance.
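The two approaches Craig describes share the same geometry, so here's a minimal sketch. The depth formula Z = f*B/d for a rectified stereo pair is standard pinhole-camera math; the focal length, baseline, and disparity values are illustrative, not from this thread, and treating the distance driven between frames as the baseline is a simplification of true motion stereo.

```python
# Depth from disparity for a rectified stereo pair (pinhole model).
# The same formula applies whether the baseline comes from two cameras
# or from one camera on a car moving at known speed.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Z = f * B / d (focal length in pixels, baseline in meters)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Case 1: two cameras 0.30 m apart, 800 px focal length, 8 px disparity
z_stereo = depth_from_disparity(800, 0.30, 8)      # -> 30.0 m

# Case 2: single camera, frames 1/30 s apart at 15 m/s (54 km/h);
# the car's own motion supplies the baseline between the two views
baseline_m = 15.0 * (1.0 / 30.0)                   # 0.5 m traveled per frame
z_motion = depth_from_disparity(800, baseline_m, 8)  # -> 50.0 m
```

The key point is that resolution in depth falls off as disparity shrinks, which is why the known-speed trick matters: a faster car gives a longer baseline and better long-range accuracy.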
As all our parts get smaller, 3D MV should see a real upswing in usage. Machine placement tolerances are getting so low (5 µm) that 3D will be required to compensate for temperature variations in the parts and equipment.
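To see why a 5 µm budget forces thermal compensation, a quick back-of-the-envelope check using the standard linear expansion formula dL = alpha * L * dT is enough. The aluminum coefficient is the textbook value; the fixture span and temperature drift are assumed for illustration.

```python
# Linear thermal expansion: dL = alpha * L * dT.
ALPHA_AL = 23e-6        # 1/K, textbook coefficient for aluminum
span_m = 0.100          # 100 mm fixture span (assumed)
delta_T = 2.0           # modest 2 K drift on the factory floor (assumed)

growth_m = ALPHA_AL * span_m * delta_T
growth_um = growth_m * 1e6   # -> 4.6 µm, nearly the entire 5 µm budget
```

Even a couple of degrees of drift over a 100 mm span consumes almost the whole placement tolerance, which is why measuring the part in 3D beats trusting nominal dimensions.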
Right, that's stereo 3D, using two or more 2D cameras. There are other methods for achieving 3D in machine vision, though, done with a single camera using, for example, image triangulation as I mention briefly here:
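As a rough illustration of the single-camera approach mentioned above, here is a minimal laser-triangulation sketch: a laser line strikes the part, the camera views it from a known angle, and the height of the surface shifts the spot laterally. The geometry below is simplified and all numbers are assumed.

```python
import math

def height_from_shift(shift_m, view_angle_deg):
    """Surface height change implied by the observed lateral shift of a
    laser spot, for a camera viewing view_angle_deg off the laser axis
    (simplified geometry, shift already converted to object space)."""
    return shift_m / math.tan(math.radians(view_angle_deg))

# A 0.5 mm spot shift seen at a 45 degree viewing angle
h = height_from_shift(0.0005, 45.0)   # -> 0.0005 m, i.e. 0.5 mm of height
```

Sweeping the laser line across the part then builds a full 3D profile from a single camera, with no stereo pair or synchronization needed.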
Seems like a nice development package, but I wonder why they chose the 7690 imager instead of a more capable one like the OmniVision 5642. I've used the 7690 and its image quality is marginal at best, whereas the 5642 is razor sharp. Perhaps the ARM processor couldn't process anything better than VGA, but the 7690's built-in optics are subpar.
btwolfe - Just to clarify, the SmartVue development camera module uses the OV7962, a wide-dynamic-range VGA sensor (not the 7690). The CV2201 Image Cognition Processor in the camera (the brains, so to speak) is sensor-agnostic and can interface to a number of different sensors, including megapixel.
Tina - The only difference I see between the 7962 and the 7690 is the MIPI interface and support for 50/60Hz illumination compensation. Regardless, I think they would have gotten more mileage from a better imager. Perhaps a future rev of the cube design? Incidentally, I only noticed this because I'm working on a similar compact imager concept, except that my system does passive stereo processing to generate depth information. Of course, it wouldn't be the same small form factor, but the all-in-one concept is the same. It's good to see products like this come to market.
Tina, thanks for all the input on the SmartVue camera, especially from the app development perspective. My experience accords with Jon's: vision system engineers are interested less and less in coding and more and more in faster, easier app development.
The differences in requirements between cameras for automotive applications and machine vision, for inspection or gaging, are large. Watching for a car in a blind spot, keeping an eye on the lane edge marker, or checking the position of the right-side passenger is much easier than inspecting a part for proper threads or correct dimensions. Determining part orientation is a demanding application as well. My point being that the two application types are very different and, as a result, comparisons between them are of marginal value.