Comments

hello all from Edmonton, Alberta.

Iron

Good comments and discussion by attendees and lecturer -- thanks for archiving! ... Also, there should be no Green Orion Slave Girls ... just sayin' ...

Dave: Open the pod bay doors Hal ...

Hal: I'm sorry Dave, I can't do that ...

 

 

For those of you interested in learning more about embedded vision, I recommend the website of the Embedded Vision Alliance, www.embedded-vision.com, which contains extensive free educational materials. 

For those who want to do some easy and fun hands-on experiments with embedded vision first-hand, try the BDTI OpenCV Executable Demo Package (for Windows), available at www.embedded-vision.com/platinum-members/bdti/embedded-vision-training/downloads/pages/introduction-computer-vision-using-op

And for those who want to start developing their own vision algorithms and applications using OpenCV, the  BDTI Quick-Start OpenCV Kit (which runs under the VMware player on Windows, Mac, or Linux) makes it easy to get started: www.embedded-vision.com/platinum-members/bdti/embedded-vision-training/downloads/pages/OpenCVVMWareImage

Blogger

Really great presentation, Michael. Thank you very much, it was a great summary of common vision gotchas.

Iron

Good presentation.  Again, a lot of information; this one presentation alone could use a bunch of follow-up courses.

Iron

It is amazing what you can do with an FPGA and a good optical sensor.

Iron

I'm sure there are better and faster implementations of these algorithms out there that are proprietary.

Iron

There are ASICs that have been developed to specialize in this type of image processing.

Iron

good session, thanks

Iron

My understanding is that professional level three-chip cameras are great for high quality requirements, but may be too expensive for most embedded imaging applications.  They are worth considering, though, to determine if the resulting quality difference is worth it.

Then there are special cameras mounted on constantly moving platforms -- satellites or aircraft -- that use a 'pushbroom' array of sensors instead of a matrix on a chip, to continuously image a ribbon path over a planet's surface.  (Studied such for my Space Systems Engineering Masters degree).

Ah, I should close my chatting at this time.  Thank you, all.

Iron

Thank you Michael and all.

Thanks, Alaskaman66.  I thought it might exist (the algorithm).  I also read with interest about a relatively new kind of DoD radar mounted on a helicopter that supposedly could see human bodies (combatants) through brush.  Acronym was "FOREST," something like that.  Very expensive, and very limited availability of course.  Radar image processing of such a radar signal would be interesting.  Then there is the topic of "Sensor Fusion".

Iron

clintpatton: I am wondering, what are the differences between DIP and ISP? Does a book cover that?

Thanks for the presentation, Michael. Uncovered a lot of stuff going on "under the hood." (or bonnet)

Studio color cameras get around the entire bit-averaging overhead by using a set of dichroic filters that separate the incident light image into red, green, and blue colors. This triple image is then applied to three (B&W) sensor chips. Wouldn't this be a better solution for a machine vision camera? And don't forget that the color separation could probably be something other than RGB, or just two colors, or more than three. How would one determine from the application what the best pre-image conditioning should be?

ok I will look into that, yes that's the one I just found on amazon, 3rd edition, thanks!

Iron

Authors Rafael C. Gonzalez and Richard E. Woods

noor811:  I got the book used by many universities called Digital Image Processing.  I have found it good so far

ok thanks for the presentation and chat

Iron

My current project is to interface a Digilent Atlys Xilinx FPGA board with an Aptina image sensor that Digilent also supplies. First I would like to put frame data into a DDR3 frame buffer, then display out from the frame buffer to an HDMI monitor.  My previous attempt with the Spartan-3A DSP 1800 starter board and an Aptina/Micron image sensor headboard failed, since the sensor did not want to talk back (no ACK signal coming from the I2C interface when analyzed with a logic analyzer). I assume the problem could be the wiring and the voltage requirements of the headboard; I should have better luck with the Vmod sensor by Digilent.

Iron

I need to sign off now, but thanks again for listening and for all your comments. I enjoyed it.

Blogger

Wahaufler: Removing fixed images from scenes... This sounds like the methods used for moving-target detection in look-down/shoot-down radar, so I think the signal processing algorithms already exist. A simple version is used in automatic asteroid tracking, where non-moving point sources in three consecutive images are tossed out, and the only thing the computer identifies is whatever has moved.

noor811: Believe it or don't the Wikipedia articles on these topics are actually pretty good, giving the various formulae to convert between the various color spaces.

 

Iron
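[Editorial note] The frame-differencing idea described above -- discarding anything that doesn't change between consecutive images -- can be sketched in a few lines of Python. This is an illustration, not material from the webinar; the nested lists stand in for grayscale frames, and the threshold value is an arbitrary assumption:

```python
def moving_pixels(frames, threshold=10):
    """Return a mask marking pixels that changed between any pair of
    consecutive frames; static background pixels stay masked out."""
    h, w = len(frames[0]), len(frames[0][0])
    mask = [[False] * w for _ in range(h)]
    for prev, curr in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                if abs(curr[y][x] - prev[y][x]) > threshold:
                    mask[y][x] = True
    return mask

# Three 1x4 grayscale "frames": only the last pixel moves.
frames = [[[50, 50, 50, 10]],
          [[50, 50, 50, 90]],
          [[50, 50, 50, 10]]]
print(moving_pixels(frames))  # [[False, False, False, True]]
```

The asteroid-tracking version described above is the same test run over three frames instead of two, keeping only positions flagged as changed.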

What are some good references (online or books) that explain ISP blocks such as Bayer to RGB converter and/or RGB to YUV converter, enough to implement an algorithm myself? A great book that I found on Amazon Kindle: Design for Embedded Image Processing on FPGAs

Iron
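[Editorial note] On the RGB-to-YUV block asked about above: the commonly used BT.601 conversion is a fixed matrix multiply per pixel. A minimal sketch with full-range coefficients (a real ISP block would add clamping and fixed-point rounding):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr conversion for one 8-bit pixel."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

# White maps to full luma and neutral chroma (approximately 255, 128, 128).
print(rgb_to_ycbcr(255, 255, 255))
```

The Bayer-to-RGB step is harder to summarize; the book mentioned above (Design for Embedded Image Processing on FPGAs) covers demosaic architectures in depth.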

I thought I read about a camera that provided access to the RAW data before the ISP; a tap as it were.  Anybody know of such?
wahaufler

Yes, most DSLRs will let you do this. Also, if your code runs embedded on a main application processor, you might get a tap to RAW data

Blogger

wahaufler: Many of the camera chips can provide "raw" data if you can figure out their data sheets. Omnivision cameras certainly can.

Iron

In the Star Trek (TOS) pilot "The Cage", when Vina plays the Green Orion Slave Girl, the dailies kept coming back with her "not green", no matter how green they made her makeup. They couldn't figure out what was going wrong until they talked to the film processing lab. It turns out the lab thought something was going wrong in the photography or lighting so they were color-correcting Vina back to something approaching normal flesh tones. Whoops! ;-)

Iron
That's good to hear. Automatic white balance accuracy is always a challenge, so it's good to hear humans don't always get it right either!

 

Blogger

I thought I read about a camera that provided access to the RAW data before the ISP; a tap as it were.  Anybody know of such?

Iron

wahaufler: You bring up a good point: For many embedded vision applications, "automatic anything" is often the wrong choice. We often have full control over lighting, exposure, focus, etc. and anything the camera attempts to do to "automate" these is probably wrong.

Iron

Understood, thanks for answering my question :)

Iron

What about lossless compression algorithms, like Tiff or bitmap? Can one go back to the original raw image from such an image type?
Alaskaman66

These do not lose any data, but they are almost always applied after the data has gone through the main ISP, which converts the RAW data to RGB or YUV data. You can't go all the way back to RAW.

Blogger

Interesting story about an "Automatic White Balance" failure:

In the Star Trek (TOS) pilot "The Cage", when Vina plays the Green Orion Slave Girl, the dailies kept coming back with her "not green", no matter how green they made her makeup. They couldn't figure out what was going wrong until they talked to the film processing lab. It turns out the lab thought something was going wrong in the photography or lighting so they were color-correcting Vina back to something approaching normal flesh tones. Whoops! ;-)

Iron

@Michael, what's a good kernel size for the ISP pipeline, and does the size stay the same for every ISP block? For example 3x3, 5x5, 8x8?

That's a complicated question, and it depends on the algorithm, the sensor resolution and the optics. For example, a good Bayer demosaic needs at least 7x7, a sharpening kernel probably 3x3 or 5x5, a spatial NR kernel could be much bigger -- our biggest is 128x64! It depends on the characteristic scale of the feature being processed. Obviously with high megapixel sensors, a given feature requires bigger kernels.

Blogger
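[Editorial note] The kernel sizes discussed above all drop into the same generic 2D convolution; only the kernel changes. A naive Python sketch (clamp-to-edge border handling is one of several possible choices, assumed here for simplicity):

```python
def convolve2d(img, kernel):
    """Naive 2D convolution with clamp-to-edge borders."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    sy = min(max(y + ky - oy, 0), h - 1)
                    sx = min(max(x + kx - ox, 0), w - 1)
                    acc += img[sy][sx] * kernel[ky][kx]
            out[y][x] = acc
    return out

# A 3x3 sharpening kernel; a 5x5 or 7x7 drops into the same loop.
sharpen = [[0, -1, 0],
           [-1, 5, -1],
           [0, -1, 0]]
flat = [[10] * 5 for _ in range(5)]
print(convolve2d(flat, sharpen)[2][2])  # flat region is unchanged: 10.0
```

A real ISP would implement this as a line-buffered hardware pipeline rather than nested loops, but the data access pattern is the same.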

I'm going to open a can o' worms here.  As one interested in the problem of capturing images of elusive creatures -- that's right, sasquatches -- my mind goes to the problem of focusing on a furry, and thus already somewhat fuzzy, body behind branches, which may fool a camera's auto-focus into losing focus on the object of interest in the background.  I suppose avoiding the use of auto-focus in this context is the lesson here, but is there any other kind of special processing that might be useful here, say to avoid losing detail on the fur in the noise reduction processing?  I played with the idea of stitching images across time or space to somehow remove the foreground branches and leaves.  This would require a special multi-camera setup, I suppose.  Not likely in this context.

Provided for your serious thinking of a potentially silly topic.

Iron

Xilinx's and probably Altera's ISPs are expensive; I would like to create my own.

Iron

@Michael Tusch  What kind of DRC algorithms are usually used in an ISP?
moulay

It depends on the camera's sophistication. The general term is "tone mapping", and this can mean fixed gamma, content-adaptive gamma, histogram modification, or very complex algorithms which try to mimic the human vision system, such as "Retinex" and our own one, which we call Iridix.

Blogger
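[Editorial note] Of the tone-mapping options listed above, fixed gamma is the simplest to sketch. The gamma value below is an arbitrary assumption; content-adaptive schemes would derive the curve per frame:

```python
def apply_gamma(pixels, gamma=2.2):
    """Fixed-gamma tone mapping on 8-bit values: normalize to [0, 1],
    raise to 1/gamma to brighten midtones, rescale to [0, 255]."""
    return [round(255 * (p / 255) ** (1 / gamma)) for p in pixels]

# Black and white are fixed points; midtones are lifted.
print(apply_gamma([0, 64, 128, 255]))
```

Histogram modification and Retinex-style operators replace the single power curve with a curve (or a locally varying set of curves) derived from image content.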

Very good exposition. Any references for HDL models of algorithms for FPGA implementation?
lvruffino

Both Altera and Xilinx offer pretty good basic ISPs with some of their development kits -- I don't have the exact reference, but they are described on their respective sites. 

Blogger

@Michael, whats a good kernel size for the ISP pipeline, does the size stay the same for every ISP block? for example 3x3, 5x5, 8x8 ?

 

Iron

I heard you expand/define ISP as Image Signal Pipeline whereas the slide defined it as Image Signal Processors.  I suppose either is valid and interchangeable?  Sorry if this was already asked.

Great Presentation!  Thanks!
wahaufler

It's actually interchangeable. I think it is the Processor which has a Pipeline inside it, meaning a sequence of transforms applied to a data stream.

Blogger

What about correction for optical, mechanical, and pixel vignetting effects?  In analog days this kind of shading error was corrected with scaled parabola signals added to the baseband video at horizontal and vertical rates.  Gradient shading errors were corrected with scaled sawtooth signals added to the baseband video at horizontal and vertical rates.  Feature extraction such as that required for OCR might benefit from shading correction.  OCR binarization algorithms attempt to deal with adjacent pixel levels changed due to desired feature brightness, dealing with both noise and background brightness changes (perhaps due to background artwork).  I haven't seen anything about shading correction outside of telecine/live studio video camera environments.

What, if anything, is done either in the camera itself or in digital post processing to correct shading errors or, indeed, is shading correction even desirable in computer vision processing?

Thank you for a very informative presentation.

Silver
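[Editorial note] Digital lens-shading correction, the modern counterpart of the parabola signals described above, typically multiplies each pixel by a gain that grows with distance from the optical center. A sketch with a simple radial polynomial (the coefficient k is an arbitrary assumption; real pipelines calibrate a per-lens gain mesh):

```python
def shading_correct(img, k=0.5):
    """Boost each pixel by 1 + k*r^2, where r is the normalized
    distance from the image center (r = 1 at the corners)."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    rmax2 = cy * cy + cx * cx
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r2 = ((y - cy) ** 2 + (x - cx) ** 2) / rmax2
            out[y][x] = img[y][x] * (1 + k * r2)
    return out

# On a flat gray frame: center untouched, corners get the full 1.5x gain.
img = [[100.0] * 5 for _ in range(5)]
out = shading_correct(img)
print(out[2][2], out[0][0])  # 100.0 150.0
```

Whether correction is desirable for computer vision depends on the algorithm: binarization and feature matching usually benefit, while radiometric measurement needs the calibrated gain map rather than a cosmetic fit.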

Are there different sets of algorithms that would produce a given image protocol, say a Tiff image?
Alaskaman66

This is really independent of the algorithm -- it's just a file format. The native data inside a TIFF is either RGB or YCbCr, the same as JPEG, H.264, etc.

Blogger

Are some colors easier to work with than others?

jack212

There are some characteristic colors, like skin tones, grass, etc., which can be used as cues. But actually the most useful thing is to see what the AWB algorithm is doing as it searches for the correct colors. This info is handy for vision algorithms.

 

Blogger
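[Editorial note] The AWB search mentioned above usually starts from a scene statistic. The classic "gray world" assumption -- that the scene averages to neutral -- gives per-channel gains in a few lines (a deliberately naive sketch; production AWB adds illuminant constraints and the color cues mentioned above):

```python
def gray_world_gains(pixels):
    """Per-channel gains that pull the scene average to neutral gray.
    pixels: list of (r, g, b) tuples."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3
    return [gray / a for a in avg]

# A scene with a red cast: the red channel gets a gain < 1, blue > 1.
gains = gray_world_gains([(200, 100, 60), (180, 120, 80)])
print([round(g, 3) for g in gains])
```

Watching these gains over time is exactly the kind of AWB-state information a downstream vision algorithm can exploit.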

I enjoyed this presentation. I missed Day 1 and Day 2, but so far this is definitely my favorite Digikey class.

Iron

@Michael Tusch  What kind of DRC algorithms are usually used in an ISP?

 

Iron

@jack212 - Black and white are easiest.

Iron

Very good exposition. Any references for HDL models of algorithms for FPGA implementation?

Iron

Why "Image Signal Processors"? Is there really any "signal" processing involved - image is digitized before any processing is done...
caa028

I have to agree with you: it's all digital! I guess it's an analogy with DSPs

Blogger

I heard you expand/define ISP as Image Signal Pipeline whereas the slide defined it as Image Signal Processors.  I suppose either is valid and interchangeable?  Sorry if this was already asked.

Great Presentation!  Thanks!

Iron

Are some colors easier to work with than others?

Iron

@Atlant, demosaic...gotcha!

Very Informative.  Thank you

Iron

Thank you Michael, Very interesting

Iron

Michael, Chuck and Digikey

Thanks for a great presentation!

Iron

Great. thanks everyone for joining and hope it was useful

Blogger

@Atlant, although of course you might want to cluster multiple physical pixels together to construct each produced pixel, for example to improve a camera's low-light performance, or to use digital zoom techniques in lieu of an optical zoom. This is what Nokia's PureView technology does, for example: www-dot-embedded-vision-dot-com/news/2012/03/05/nokias-808-pureview-technical-article-and-several-videos-you

Thank you Dr. Tusch

Iron

dipert: If your demosaicing algorithm (for each pixel) is R, (G1 + G2)/2, B, then if you're dealing with the image at the RGB level, you *CAN'T* get the individual values for the G1 and G2 pixels; they were lost when you averaged the two green pixels.

Iron
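[Editorial note] The irreversibility point above can be shown directly: once the two greens in a Bayer cell are averaged, different sensor readouts map to the same RGB output, so G1 and G2 cannot be recovered. A sketch (a single 2x2 RGGB cell assumed):

```python
def demosaic_cell(cell):
    """Collapse one RGGB Bayer cell [[R, G1], [G2, B]] to a single
    (R, G, B) pixel by averaging the two green samples."""
    r, g1 = cell[0]
    g2, b = cell[1]
    return (r, (g1 + g2) / 2, b)

# Two different sensor readouts, same RGB result: G1 and G2 are gone.
print(demosaic_cell([[100, 80], [120, 90]]))   # (100, 100.0, 90)
print(demosaic_cell([[100, 120], [80, 90]]))   # (100, 100.0, 90)
```

This is why "un-demosaicing" back to RAW only works when each output color was copied from exactly one sensor pixel, as discussed below.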

Thanks Michael and Chuck.

Iron

thanks! very interesting presentation

Iron

Thanks for the insights.

Iron

thanks for the webinar Michael

 

Iron

Great content, Michael. Gives me ideas on what to look for in new cameras being evaluated.

Excellent lecture! Thank you Michael.

Iron

For the next webinars: The JW Player audio plugin seems to have a problem, in that it stops streaming, and has to be manually started. Any suggestions on whether this is a JW Player issue or a browser issue?

@Atlant, I don't understand your point. The "native" color (portion of the visible spectrum) for each pixel is the color of the filter above it. It's not common (though I suppose there may be reasons for it in some cases) for further interpolation of that particular color for that particular pixel to occur. So if you throw away the remainder of the data for each pixel, which by definition has been interpolated, you can reconstruct the original RAW data.

dipert_bdti: That's only true if your Bayer filter somehow allocated only one raw pixel to each produced pixel. But if, for example, your produced pixels include a red pixel, a blue pixel, and the average of the two diagonal green pixels, you *CAN'T* recover the data from the two individual green pixels; it was lost forever.

Iron

@Alaskaman66, see here for more info on Bayer pattern sensors, as well as the Foveon alternative: www-dot-embedded-vision-dot-com/platinum-members/bdti/embedded-vision-training/documents/pages/selecting-and-designing-image-sensor-

Thanks for holding this class

 

Iron

@Alaskaman66, with a conventional Bayer pattern sensor, each pixel natively captures only one of the three primary colors (red, blue, or most commonly green). Interpolation algorithms from nearby pixels' captured data are used to calculate an approximation of the "missing" data at each pixel, thereby transforming RAW into a lossless BMP or TIFF equivalent (and assuming that no other image processing takes place, such as dynamic range expansion or compression, etc). I suppose that if you knew the original Bayer pattern, you could throw away the interpolated "missing" data and resurrect the original RAW...

audio seems steady now

Iron

jon: Me too, just very recently.

Iron

my audio is continuously dropping out

Iron

what are the hot topics in image processing these days?

Iron

@caa028: Why "Image Signal Processors"? Is there really any "signal" processing involved - image is digitized before any processing is done...
The ISP applies algorithms to sequences of pixels, very similar to how more familiar digital signal processing algorithms (like audio filtering) work.

 

Blogger

Compression always changes data

Is DRC done by time gating individual pixels?

caa028: Nobody says that "signal" processing has to be done in the analog domain.

Iron

@caa028: Yes, ISP = Image Signal Processor
From Aptina's web site: "Digital image signal processors (ISPs) and SOCs use algorithms, or well-defined step-by-step instructions, to adjust the raw data an image sensor collects so that the processed image or video is more visually pleasing than the original. Put another way, SOCs and ISPs make the image look more like what the mind's eye sees, eliminating image blemishes, compensating for poor lighting conditions, or even correcting for a shaky hand or for bad focus." (see www-dot-aptina-dot-com/products/image_processors_soc/ (change each "-dot" to ".").

Blogger

Why "Image Signal Processors"? Is there really any "signal" processing involved - image is digitized before any processing is done...

Iron

Are there different sets of algorithms that would produce a given image protocol, say a Tiff image?

Alaskaman: You can decompress losslessly, but earlier steps in the image pipeline have already thrown data away. E.g., even demosaicing (may) lose a certain amount of data irrecoverably because it (may) average neighboring pixels.

Iron

What about lossless compression algorithms, like Tiff or bitmap? Can one go back to the original raw image from such an image type?

I got audio to work... Needed to update flash.

Iron

too many ambiguous abbreviations... ISP - image signal processor/internet service provider/...

Iron

Gatineau, Quebec.  A little closer to the mic please!

NSL22, Be sure the player's speaker is not muted.  Try turning up its volume

Gold

Made it on time today...

Iron

Hello from Newport Beach, CA

Iron

NSL22: Audio's up, but somewhat muffled.

Iron

Audio is loud and clear...

Iron

Atlant: No, we're fine. Got about 250 inches so far this year, with another big storm on the way.

I hear you Atlant!  Plenty of snow in my yard! from outside Boston

Iron

Hi all - Audio is live!

Hello from Barcelona

Iron

Hey Alaskaman, do you need some spare snow? We've got some extra today here in New England.

Iron

Hello from Panama City, FL

Iron

Great day in sunny Valdez, but lots of wind last night.

 

Hello from Placentia CA.

Iron

Good day everyone, from Ottawa ON

Iron

Hello from Fort Worth, TX

Iron

Oh, and Hi from Beautiful Roxborough PA.

Iron

Hello from Edmonton, AB

Iron

The Lego Kiosk is really neat.

Iron

The spring weather in Aurora Ontario is too kool.

Iron

Another heart-rate-from-video application for iOS-based hardware is Cardiio (www-dot-cardiio-dot-com). And there are any number of conceptually similar apps (such as Azumio's Instant Heart Rate, see www-dot-embedded-vision-dot-com/news/2011/07/28/azumio-successfully-takes-pulse-investors-and-phone-users-alike) which leverage a smartphone's camera and flash for close-quarters analysis

Sure would be fun to visit the salvage yard and part out a 2013 Cadillac XTS and all of its electronics, on a sunny summer afternoon!!   :)

Hello from Eugene, Oregon

Hi Everyone. In case anyone missed it yesterday, I'll take this opportunity to respond to a question from Monday's session. Several people asked for references on the example applications I described on slide 7 in Monday's presentation. Please note that there are multiple commercially available products in each of these categories. I'm including just one example of each here. (Please note that I have mangled the URLs since valid URLs are apparently rejected by the chat software. Replace the "-dot-" with a simple "." In each instance and you're good to go.)

Heart rate from video: www-dot-vitalsignscamera-dot-com/

Pill identification: www-dot-pillidscan-dot-com

LEGO augmented reality kiosk: www-dot-metaio-dot-com/kiosk/lego/

Surgical robot for hair transplantation: restorationrobotics-dot-com

Automotive safety (2013 Cadillac XTS, one example of many): www-dot-youtube-dot-com/watch?v=AsHQ5ORXinE

Automatic panorama image stitching: www-dot-cs-dot-bath-dot-ac-dot-uk/brown/autostitch/autostitch.html

 

Blogger

In from chilly (but sunny) Chicago.

Iron

Hello from Cloudy SE Lake Simcoe Ontario Canada.

Iron

Howdy folks!!  :)  {The sun is bright and the wind brisk.  It's a great day to be kickin'!!}

Good evening from Iasi, Romania

Iron

Hello from Lake Oswego OR

 

Iron

Hello from Sun City, El Paso TX

 

Iron

Good morning from beautiful Chicago -- sunny and 17 degF.

Blogger

Hello from the Windy City

Iron

Hello from Binghamton, NY

Iron

Sunny & 10 degF in Minneapolis today...

Iron

Hi Michael and everyone, good morning.

Iron

Good morning from Scottsdale, AZ

Iron

Good Morning from Tennessee

Iron

Morning from North Pole, Alaska

Iron

Good Morning All; Spring Forward!

Iron

Good morning, everyone

Iron

It's a beautiful day in the neighborhood...

Good morning from Mobile, AL

Good morning from Portland Oregon

Iron

Reviewing day 3 slides - good caveats against wasting resources on futile efforts at data necromancy.

See you tomorrow.

Slides now!

Iron

Why ask the question "Is it ok to share data with Digi-Key?" if the only answer is yes?

Why not just say info will be shared with Digi-Key? And maybe say what info.

Iron

