Product News
Automation & Control
Cube Camera Eyes Embedded Vision
2/7/2012

Embedded vision is about to take off, enabled by tiny smart camera development modules like the SmartVue, which combines an image sensor with a high-powered programmable image processor.
(Source: CogniVue)


naperlou
User Rank
Blogger
Opens up new applications
naperlou   2/7/2012 8:54:36 AM
NO RATINGS
This is a great new development. It is interesting that it uses the ARM architecture. Chalk up another one for ARM. It also opens up new applications. The vendor often lists target applications, but, as the author mentions, the form factor and other specs will get engineers thinking about other apps. I already have some ideas.

Rob Spiegel
User Rank
Blogger
Re: Opens up new applications
Rob Spiegel   2/7/2012 11:30:37 AM
NO RATINGS
I agree, Naperlou. There is a wide range of applications for a smart camera this small. In manufacturing alone, these cameras could help with track and trace as well as data collection and verification.

Ann R. Thryft
User Rank
Blogger
Re: Opens up new applications
Ann R. Thryft   2/7/2012 12:33:34 PM
NO RATINGS

I also thought it was interesting that this camera uses an ARM processor, something you don't see a lot of in camera platforms. naperlou, are you willing to share some of those ideas for apps?


Jon Titus
User Rank
Blogger
Two ARM9 Processors
Jon Titus   2/7/2012 1:09:08 PM
NO RATINGS
Actually, the chip has two ARM9 processors: one associated with the image-processing components, and one "on the side" for what I assume are general-purpose operations. CogniVue provides a development kit and a suite of software development tools, but the company's Web site doesn't supply more than a one-page summary of the tools available for developers. Still, that second ARM9 processor looks like a good way to customize the chip for many applications. The chip has many unused I/O pins and internal peripherals, too. ARM has designed an excellent debugging and trace section for its processor licensees. I'd like to know if the CogniVue chip makes them available for the embedded ARM9 processors. Looks like the software "kit" includes an RTOS.

Tina Jeffrey
User Rank
Iron
Re: Two ARM9 Processors
Tina Jeffrey   2/7/2012 4:43:41 PM
NO RATINGS
Jon - Yes, you are correct that the CV2201 Image Cognition Processor has two ARM9s, but the real performance comes from programming the parallel processing engine (APEX). From a software standpoint, we provide developers with an SDK; APEX tools (a compiler and simulator for those looking to develop their own proprietary algorithmic functions executing on the APEX); toolkits, including a video/audio player-recorder toolkit, an image processing toolkit (pre-optimized kernels, primitives, and algorithmic components executing on APEX for advanced image cognition applications), and a camera calibration toolkit; and complete applications. We're in field trials now with an aftermarket automotive smart backup camera application - a single camera doing dewarp, perspective correction, object detection, distance estimation, and graphic overlay - rendering the data to the driver-side display in real time to prevent backover accidents. It's another application that is taking off in a big way with automotive OEMs and aftermarket suppliers.
Re ARM debugging - we support Lauterbach Trace32 JTAG debugger in addition to Amontec JTAGkey2 and Segger J-Link debuggers.
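
For readers who want a feel for the front end of the backup-camera pipeline Tina describes, here is a minimal sketch of the dewarp and perspective-correction steps. It uses generic OpenCV calls rather than the CogniVue SDK or APEX toolkits, and every calibration number and file name below is a placeholder.

# Hedged illustration only: generic OpenCV, not the CogniVue SDK or APEX API.
import cv2
import numpy as np

# Hypothetical intrinsics and distortion for a wide-angle VGA sensor.
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.30, 0.08, 0.0, 0.0])

frame = cv2.imread("rear_view.png")          # one captured VGA frame (placeholder)

# 1. Dewarp: undo the wide-angle lens distortion.
undistorted = cv2.undistort(frame, K, D)

# 2. Perspective correction: map a trapezoid on the road plane to a
#    top-down rectangle (corner points would come from calibration).
src = np.float32([[120, 300], [520, 300], [630, 470], [10, 470]])
dst = np.float32([[0, 0], [400, 0], [400, 300], [0, 300]])
H = cv2.getPerspectiveTransform(src, dst)
birds_eye = cv2.warpPerspective(undistorted, H, (400, 300))

cv2.imwrite("birds_eye.png", birds_eye)

Object detection, distance estimation, and the graphic overlay would run on the corrected image downstream of this stage.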

Jon Titus
User Rank
Blogger
Re: Two ARM9 Processors
Jon Titus   2/7/2012 4:59:33 PM
NO RATINGS
Sounds like a very powerful device. Nice to see more advanced activity in both intelligent vision and embedded vision technologies. From my perspective, people who want to apply vision don't want to get bogged down in coding algorithms; they just want to use them to accomplish something. Placing everything--hardware and software--in an easy-to-use package should give designers a quick start. Nicely done.

Charles Murray
User Rank
Blogger
Automotive applications
Charles Murray   2/7/2012 7:15:48 PM
NO RATINGS
Engineers from the auto industry will take a hard look at this technology, if they aren't looking already. Lane keeping, adaptive cruise control, collision avoidance, rear-view assist, traffic sign recognition, and blind spot detection are only a few of the applications that might use this. It's said that mid-range and high-end vehicles could soon contain as many as 15 cameras apiece.

Tina Jeffrey
User Rank
Iron
Re: Automotive applications
Tina Jeffrey   2/7/2012 8:12:51 PM
NO RATINGS
Charles, you are spot on. In fact, CogniVue has demonstrations for the following driver assistance systems: lane departure warning, forward collision avoidance, and blind spot detection. Readers can check out our video demos on YouTube at the following link: http://www.youtube.com/user/cognivue/videos

Charles Murray
User Rank
Blogger
Re: Automotive applications
Charles Murray   2/8/2012 8:12:36 PM
NO RATINGS
The blind spot detection video is interesting, Tina, in that it incorporates the processing inside the camera module. Where was it previously?

Tina Jeffrey
User Rank
Iron
Re: Automotive applications
Tina Jeffrey   2/8/2012 9:40:41 PM
NO RATINGS
Charles, my understanding is that existing systems do the processing inside or near the camera as well. These systems are, however, still larger than a CogniVue-based blind spot detection (BSD) solution.

Alexander Wolfe
User Rank
Blogger
Re: Automotive applications
Alexander Wolfe   2/11/2012 1:33:21 PM
NO RATINGS
Spot on, Chuck. I also see potential applications in perimeter protection and in airport and city center security. Most of us know about London's 10,000 cameras (or whatever the specific number is), which monitor activity to keep an eye on crime and terrorist threats. For perimeters and airports, the TSA stuff we see isn't where the cutting-edge research activity is. Here's a piece I did a couple of years ago about some interesting IBM stuff. (Who knew IBM was into perimeter and airport protection?)

Craig
User Rank
Iron
Re: Automotive applications
Craig   2/15/2012 4:40:17 PM
NO RATINGS
From someone in the automotive camera business: yes, this is 'old hat'. One inch square was the old standard. The new form factor that we are designing to is an 18-20mm-sided cube. Automotive smart cameras tend to have a module with the imager on it; video goes out of the imager chip over a parallel interface into a DSP on the next board of the camera. Usually the DSP is located very near the imager to avoid signaling issues. In the case of a front-view smart camera, the lens peeks out of the windshield above the rear-view mirror, and the DSP is on a circuit board directly above it. I think most car cameras, rear view or forward view (usually a smart type), are based on 1/3" imagers. Now we are moving on to 1/4", 1/5", and smaller. This is where the German car camera market is currently.

Ann R. Thryft
User Rank
Blogger
Re: Automotive applications
Ann R. Thryft   2/16/2012 1:02:37 PM
NO RATINGS

Craig, thanks for the info. A question: were all the specs you discussed from the German car market? If so, were they for high end cars or for more mainstream vehicles?


Craig
User Rank
Iron
Re: Automotive applications
Craig   2/16/2012 1:49:45 PM
NO RATINGS
Hi Ann,

These cameras are and will be for the high-end market: Audi, BMW, etc. One of the basic differences that I see between machine vision and automotive is that, with the forward-looking smart cameras, the requirement is that the camera be spatially accurate. With machine vision, the camera must be accurate enough to do the job in 2D. With high-end automotive vision applications heading toward dynamic collision avoidance (moving car vs. moving object), the modern camera must work with scene recognition, the 3D brother of 2D pattern recognition. So one camera, using multiple frames of video, will generate a moving 3D 'map' of the scene ahead; two cameras are not required, which simplifies the calibration and hardware required. Scene recognition for automotive applications is a new frontier; obviously the robotics industry has been working on it for a while. The autonomous vehicle competition was very interesting. Next the car will have to figure out whether the object is okay to run over or whether to apply the brakes, after determining that the car behind has time to brake as well!  ;^)  And that joke alludes to the newest systems for autos that allow a top-down or adjustable 360-degree view of the car on the dash display.
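
The single-camera "moving 3D map" Craig describes is essentially motion stereo: two successive frames from a moving camera are treated as a stereo pair. Below is a minimal sketch of that idea using generic OpenCV feature matching and triangulation; it is not drawn from any automotive supplier's code, and the intrinsics and file names are assumptions.

# Hedged sketch of single-camera motion stereo; all values are placeholders.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])            # hypothetical camera intrinsics

img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Match features between the two successive frames.
orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
p2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Recover the camera motion between frames (rotation R, translation t, up to scale).
E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)

# Triangulate the matches into a sparse 3D point cloud of the scene ahead.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4 = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
pts3d = (pts4[:3] / pts4[3]).T              # N x 3 points, up to scale

In practice the point cloud would be tracked over many frames and scaled by the known vehicle speed; this sketch only shows the two-frame core of the idea.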

Ann R. Thryft
User Rank
Blogger
Re: Automotive applications
Ann R. Thryft   2/16/2012 2:55:49 PM
NO RATINGS

Thanks for the detailed reply, Craig. Your description of automotive 3D sounds like it's the type where multiple 2D cameras create stereo images that make up 3D images. There are some other methods used in industrial MV that are more complex and costly. And 2D is not always sufficient for MV, which is why there's more 3D happening there.

The reason I guessed that the small cameras you described were for high-end cars is that you said the one in the story was old news. But several other sources I found, as well as comments here from the manufacturer, described possible use of the cube camera in the story for automotive apps, meaning mainstream ones. Anyway, thanks for all the input.


Craig
User Rank
Iron
Re: Automotive applications
Craig   2/16/2012 4:50:11 PM
NO RATINGS
Hi Ann,

Just wanted to address your comment on stereo vision for cameras. To be spatially aware, all you need is two pictures taken from different positions. To get that, you can have two cameras spaced at a known distance and compare synced frames. OR, the computer looks at two successive frames and, knowing the speed of the car, calculates all of the distances in the part of the scene that is critical to the function running, typically a function like collision avoidance.

As all our parts get smaller, 3D MV should see a real upswing in usage. Machine placement tolerances are getting so tight (5um) that 3D will be required to compensate for temperature variations in the parts and equipment.
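
As a rough illustration of the geometry Craig is describing, here is a short worked example of recovering range from the pixel shift of a point seen from two positions. The focal length, baseline, speed, and pixel values are all made up for the sake of the arithmetic.

# Hedged worked example; all numbers are illustrative, not from a real camera.
focal_px = 700.0                  # assumed focal length in pixels

# Case 1: two cameras with a known baseline, comparing synced frames.
baseline_m = 0.12                 # 12 cm between the two lenses
disparity_px = 14.0               # horizontal shift of the matched point
depth_m = focal_px * baseline_m / disparity_px
print(f"stereo pair: {depth_m:.1f} m")        # -> 6.0 m

# Case 2: one camera, two successive frames, known vehicle speed.
# For motion roughly along the optical axis, a point r pixels from the
# focus of expansion that moves outward by d pixels is roughly Z = r*B/d,
# where B is the distance travelled between frames.
speed_mps = 13.9                  # ~50 km/h
frame_dt = 1.0 / 30.0             # 30 frames per second
baseline_m = speed_mps * frame_dt # ~0.46 m travelled between frames
r_px = 180.0                      # pixel distance of the point from the FOE
flow_px = 4.0                     # outward motion of the point between frames
depth_m = r_px * baseline_m / flow_px
print(f"motion stereo: {depth_m:.1f} m")      # -> ~20.8 m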

Ann R. Thryft
User Rank
Blogger
Re: Automotive applications
Ann R. Thryft   2/17/2012 12:27:44 PM
NO RATINGS

Right, that's stereo 3D, using two or more 2D cameras. There are other methods for achieving 3D in machine vision, though, done with a single camera using, for example, image triangulation as I mention briefly here:

http://www.designnews.com/author.asp?section_id=1386&doc_id=235514

or CT-assisted, as mentioned briefly here:

http://www.designnews.com/author.asp?section_id=1386&doc_id=235400

and as Chuck Murray explores more thoroughly in an upcoming online feature, which is already out in this month's DN print edition.
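
For readers curious about the single-camera triangulation Ann mentions, here is a rough sketch of the classic laser-line version: a laser projected at a known angle, with range recovered from where the spot lands in the image. The geometry and numbers below are illustrative assumptions, not taken from either linked article.

# Hedged example of single-camera laser triangulation; all values assumed.
import math

baseline_m = 0.10                 # camera-to-laser separation
theta = math.radians(60.0)        # laser angle relative to the baseline
focal_px = 800.0                  # focal length in pixels
spot_x_px = 120.0                 # image offset of the laser spot from the optical axis

# Range to the lit point: Z = b*f / (x + f*cot(theta))
depth_m = baseline_m * focal_px / (spot_x_px + focal_px / math.tan(theta))
print(f"range: {depth_m:.3f} m")  # -> ~0.137 m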


vimalkumarp
User Rank
Gold
Cube Camera Eyes Embedded Vision
vimalkumarp   2/7/2012 11:52:59 PM
NO RATINGS
Charles, you are absolutely right in predicting the possibilities of the cube camera in the automotive industry. This product will definitely make an impact in many domains.

btwolfe
User Rank
Gold
Wrong camera choice?
btwolfe   2/8/2012 9:27:43 AM
NO RATINGS
Seems like a nice development package, but I wonder why they chose the 7690 imager instead of a more capable one like the OmniVision 5642. I've used the 7690 and its image quality is marginal at best, whereas the 5642 is razor sharp. Perhaps the ARM processor couldn't process anything better than VGA, but the 7690's built-in optics are subpar.

Tina Jeffrey
User Rank
Iron
Re: Wrong camera choice?
Tina Jeffrey   2/8/2012 10:59:24 AM
NO RATINGS
btwolfe - Just to clarify, the SmartVue development camera module uses the OV7962, a wide-dynamic-range VGA sensor (not the 7690). The CV2201 Image Cognition Processor in the camera (the brains, so to speak) is sensor-agnostic and can interface to a number of different sensors, including megapixel sensors.

btwolfe
User Rank
Gold
Re: Wrong camera choice?
btwolfe   2/8/2012 12:25:48 PM
NO RATINGS
Tina - The only difference I see between the 7962 and the 7690 is the MIPI interface and support for 50/60Hz illumination compensation. Regardless, I think they would have gotten more mileage from a better imager. Perhaps a future rev of the cube design? Incidentally, I only noticed this because I'm working on a similar compact imager concept, except that my system does passive stereo processing to generate depth information. Of course, it wouldn't be the same small form factor, but the all-in-one concept is the same. It's good to see products like this come to market.
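
Passive stereo of the kind btwolfe mentions is usually built around disparity matching on a rectified image pair. The sketch below shows that idea with generic OpenCV block matching; it is not a description of btwolfe's design, and the file names, tuning values, and calibration numbers are placeholders.

# Hedged sketch of passive stereo depth via block matching; values are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)   # rectified right image

# Block matcher: numDisparities must be a multiple of 16, blockSize odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Convert disparity to metric depth with assumed calibration values.
focal_px = 700.0
baseline_m = 0.06
depth_m = np.zeros_like(disparity)
valid = disparity > 0
depth_m[valid] = focal_px * baseline_m / disparity[valid]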

Ann R. Thryft
User Rank
Blogger
Re: Wrong camera choice?
Ann R. Thryft   2/9/2012 1:47:03 PM
NO RATINGS

Tina, thanks for all the input on the SmartVue camera, especially from the app development perspective. My experience accords with Jon's: in vision systems, engineers are less and less interested in coding and more and more interested in faster, easier app development.


William K.
User Rank
Platinum
Cube camera with internal processing
William K.   2/15/2012 11:28:02 PM
NO RATINGS
The differences in requirements between cameras for automotive applications and machine vision, for inspection or gaging, are large. Watching for a car in a blind spot, keeping an eye on the lane edge marker, or checking the position of the right-side passenger is much easier than inspecting a part for proper threads or correct dimensions. Determining part orientation is a demanding application as well. My point is that the two applications are very different, and as a result comparisons between the two types are of marginal value.
