Comments

hello all from Edmonton, Alberta

Iron

Sophisticated, Complex and Challenging ... Resources, Flexibility and Speed ... soon, we will have a robot from Berkeley that folds a towel ... an entered apprentice as it were ... more to come in the next ten to twenty years ... RoboCop and Star Wars to follow ...

For those of you interested in learning more about embedded vision, I recommend the website of the Embedded Vision Alliance, www.embedded-vision.com, which contains extensive free educational materials. 

For those who want to do some easy and fun hands-on experiments with embedded vision first-hand, try the BDTI OpenCV Executable Demo Package (for Windows), available at www.embedded-vision.com/platinum-members/bdti/embedded-vision-training/downloads/pages/introduction-computer-vision-using-op

And for those who want to start developing their own vision algorithms and applications using OpenCV, the  BDTI Quick-Start OpenCV Kit (which runs under the VMware player on Windows, Mac, or Linux) makes it easy to get started: www.embedded-vision.com/platinum-members/bdti/embedded-vision-training/downloads/pages/OpenCVVMWareImage

Blogger
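
For readers who want a taste of what such a demo package does before installing anything, here is a minimal standalone sketch in Python/OpenCV. It is not part of the BDTI kit; it assumes the opencv-python package is installed and that a local image file named sample.jpg exists (an illustrative name):

# Minimal OpenCV sketch: load an image, find edges, show the result.
import cv2

img = cv2.imread("sample.jpg")                      # BGR image as a NumPy array
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

cv2.imshow("input", img)
cv2.imshow("edges", edges)
cv2.waitKey(0)
cv2.destroyAllWindows()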

Great lecture, thanks Jose

Iron

This is just whetting the appetite and begs for more in-depth information on this topic.

Iron

These solutions can be implemented in hardware and software, depending on your abilities and speed requirements.

Iron

A lot of information in this presentation, in the form of algorithms and other forms of data processing.

Iron

Thank you, José. Great lecture.

 

 

Iron

Thanks a lot for your GREAT presentation!

Iron

@Alaskaman: on depth-of-field focus. I briefly mentioned in my talk that the other major alternatives to mosaic subsampling are the co-sited Foveon approach and the Lytro approach, also known as a light-field or plenoptic camera, where you retain control of depth of field from the same picture. Give them a look if you are interested.

Iron

Dear Jeff Bier, Thanks for your kind help! :-)

Iron

By The Way, the Embedded Vision Alliance you mention is a fabulous resource. Thanks.

Back in the day, I maintained starlight PTZ cameras for Alyeska Pipeline. Of course, security also had video cameras everywhere, and process control used cameras for everything from boiler fireboxes to pipe examination. It became apparent that one of the Achilles' heels in all these systems was the bunch of neurons at the end of the chain: people would miss the most obvious things happening. What finally stood out was an evolving collection of software that responded to video changes automatically. A security camera can archive hours of video, but software can find the instances where a person enters the FOV. This is a machine vision application! I think THE most important role will be performed by post-processing software modules, all using the same image sequence input.
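
A rough sketch of that "software watches the video" idea, using background subtraction to flag frames where something enters the field of view. This is only an illustration in Python/OpenCV; the file name security.mp4 and the 1% activity threshold are made-up assumptions:

# Flag frames with motion using a background-subtraction model.
import cv2

cap = cv2.VideoCapture("security.mp4")
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=25, detectShadows=False)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                 # 255 where the pixel differs from the background model
    moving = cv2.countNonZero(mask)
    if moving > 0.01 * mask.size:          # illustrative threshold: >1% of pixels changed
        print("activity at frame", frame_idx)
    frame_idx += 1

cap.release()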

@Alaskaman66: on sensors optimized for particular applications.

Great question, and indeed there are infrared sensors and other types used for particular applications, but in the end it comes down to whether a market is large enough for sensor manufacturers to create such specialized sensors. In the meantime, as I alluded to in my talk, you can create specialized hardware/software blocks that take advantage of your application. For example, extracting edges and background information directly and early in the sensor pipeline in order to do better object segmentation. Many of these blocks are available as IP cores.

Iron
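
As an illustration of the kind of early edge-extraction block described above, here is a software model of what such a pipeline stage computes, sketched in Python/OpenCV. The input file name frame.png and the threshold of 100 are illustrative; an FPGA IP core would implement the same arithmetic in hardware:

# Software sketch of an early edge-extraction stage using Sobel gradients.
import cv2
import numpy as np

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)    # horizontal gradient
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)    # vertical gradient
mag = cv2.magnitude(gx, gy)                        # gradient magnitude per pixel

# Keep only strong edges as a 1-bit map, the kind of reduced representation
# a downstream segmentation stage could consume.
edge_map = (mag > 100).astype(np.uint8) * 255
cv2.imwrite("edge_map.png", edge_map)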

@Anatoliy1086: embedded vision kits available. I encourage you to go to www.xilinx.com and look at the "Applications" section, where you will find several kits available. They are not specifically labelled 'embedded vision' but are arranged by industry. For example, in the Broadcast industry you would look at the RTVE (Real Time Video Engine), and in the Industrial section at the IVK (Industrial Video Kit). Thanks!

Iron

Thank you Jose for your presentation.

Iron

Thanks for the presentation, Jose.  Learned a lot.

Iron

Thank you Jose and all participants

This is a great presentation. Thanks everyone.

@JimmyL: at this time there is no MIPI core available directly from Xilinx. Sorry :(

Iron

Many thanks dipert_bdti.

Silver

jralvarez: Thanks!

Iron

@jbswindle, no problem. Check out, then, the various Apical articles on the site, specifically those dealing with dynamic range processing (Apical will also be presenting tomorrow, and you can re-ask your question then)

@atlantl: on licensing IP cores: IP components are licensed not only by Xilinx but also by its partners. Whether there is a fee depends on the particular function. Typically, all the embedded IP is provided as part of the Xilinx design tools and is ready to use to build embedded systems. A Xilinx IP core (LogiCORE) is supported across multiple families and clearly specified in the documentation and the tools.

Iron

Alaskaman: That's probably the "Lytro" camera.

Iron

jbswindle: The "Lens correction" that several of us referred to is exactly what you're speaking of: correcting for the non-uniform sensor illumination that comes out of a real lens.

Iron

Speaking of lens issues, I seem to remember someone came up with a method of addressing depth-of-field focus problems in digital processing. Anyone see that?

Thanks dipert_bdti.  That page, though quite interesting, seems to deal with optical distortion such as fish-eye lens distortion.  I'm talking about undesired attenuated video levels, not undesired pixel displacement.

Silver

@btwolfe: on the 'pipelined' bus. Indeed, Xilinx has adopted AXI4 as its interconnect infrastructure. It is not technically a 'bus' but a system of interconnection that promotes interoperability across all IP cores / processing blocks. AXI is an industry standard and therefore accessible to all (i.e., non-proprietary).

Iron

Perhaps one should work backward from the application: let's look at facial recognition software. What information does it need? Can the software be implemented earlier in the image processing chain? If we can toss out some of the "bells and whistles" at the sensor/image end, I would bet the bandwidth, data-handling requirements, and power needs would drop substantially. Of course, the acquired working images might be unrecognizable to the human eye.
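
A rough back-of-the-envelope illustration of that idea in Python/OpenCV: drop the color channels and half the resolution before running a stock face detector, then compare how much data each stage carries. The file name and scale factor are illustrative assumptions, not recommendations:

# Detect faces on a reduced (grayscale, half-resolution) image and compare data sizes.
import cv2

frame = cv2.imread("frame.jpg")                     # illustrative input
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)      # toss the color channels
small = cv2.resize(gray, None, fx=0.5, fy=0.5)      # and half the resolution

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(small, scaleFactor=1.1, minNeighbors=5)

print("bytes per frame:", small.size, "vs", frame.size)   # rough bandwidth comparison
for (x, y, w, h) in faces:
    print("face at", x * 2, y * 2, "size", w * 2, "x", h * 2)  # map back to full resolution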

@hdw5d6: on pipeline approaches. I'm afraid I do not understand the question... sorry. The good thing about implementation using an 'all-programmable' approach in an FPGA is that you can do almost anything with your data formats. In most 'popular' approaches, there is always an IP core (pre-packaged core) that will do the job. Also, there are open-source cores that, while not fully verified/validated on a specific device, provide good starting points.

Iron

@jbswindle, regarding lens aberration correction, check this out: www.embedded-vision.com/platinum-members/bdti/embedded-vision-training/documents/pages/lens-distortion-correction

I asked about shading correction for its potential contribution to feature extraction such as that required for OCR. OCR binarization algorithms attempt to deal with adjacent pixel-level changes due to both noise and background brightness changes (perhaps due to background artwork). I haven't seen anything about shading correction outside of telecine/live studio video camera environments.
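
One common digital-domain answer to exactly this OCR problem is a locally adaptive threshold, where the binarization decision tracks the local background brightness instead of one global level. A minimal sketch in Python/OpenCV, with an illustrative file name and window size:

# Binarize a scanned page with a locally adaptive (Gaussian-weighted) threshold.
import cv2

gray = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)

# Each pixel is compared against the Gaussian-weighted mean of its 31x31 neighborhood,
# minus a small constant, so slow background brightness changes do not upset the result.
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 10)
cv2.imwrite("page_binarized.png", binary)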

 

Thanks for your presentation.

Silver

Thanks very much Jose!

Iron

@jbswindle: on optical correction.

Great question. Due to the time limitation, I did not go into lenses, but you are correct. Lens aberrations are very important to address, especially in applications where you are forced to use small lenses. These two-dimensional aberrations can be corrected digitally. What you want to look for is an IP core that does general de-warping, or one dedicated to a specific correction such as vignetting or lens shading correction (especially in the corners).

Iron
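
For readers who want to see what such a lens-shading correction amounts to numerically, here is a minimal software sketch in Python/OpenCV and NumPy. Real IP cores use calibrated, often per-channel gain tables; the simple radial falloff model and the 0.4 strength below are assumptions for illustration only:

# Toy vignetting correction: boost pixel values toward the corners with a radial gain map.
import cv2
import numpy as np

img = cv2.imread("shaded.jpg").astype(np.float32)   # illustrative input name
h, w = img.shape[:2]

yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(xx - w / 2, yy - h / 2) / np.hypot(w / 2, h / 2)   # 0 at center, 1 at corner
gain = 1.0 / np.clip(1.0 - 0.4 * r**2, 0.2, 1.0)                # assumed falloff strength 0.4

corrected = np.clip(img * gain[..., None], 0, 255).astype(np.uint8)
cv2.imwrite("corrected.jpg", corrected)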

@pixclautotech: FPGA voltages, mounting

These vary with the interfaces you need. But typically for devices like Xilinx Series 7, the choices are 1.2V, 1.35V, 1.5V, 1.8V, 2.5V, 3.3V and are available in a variety of package options.

Iron

interesting session

Iron

Atlant: Unfortunately, we do seem to be at the mercy of the market..

 

How are basic vision IP blocks licensed? I'd like to get an idea of what it costs to get started on a relatively simple machine vision prototype.

Iron

Alaskaman: One opportunity would be a monochrome camera (with all the resolution of the current cameras but no Bayer color filter in front of the sensor). Unfortunately, you and I would be buying hundreds or thousands but the mobile phone camera vendors buy millions of full-color cameras.

Iron

@jbswindle: on integrated camera ISP.

Many sensors are provided with an integrated image signal processing (ISP) module. In some systems, if you have access to the raw data, or if you need to implement a specific algorithm, then an external processing device (an FPGA) can be used. It really depends on your application.

Iron

@jralvarez: same question as Anatoliy1086, but specifically a development kit for starting FPGA vision programming

Iron

Many of the technical characteristics of sensors and video processing are tied to the peculiarities of human vision: tri-stimulus response ratios, gamma correction, color gamut spaces, frame rates, etc. Obviously, sensor manufacturers cater to the "human use" market. Why not go back to square one and design the sensor system optimized for machine vision? For example, if speed (frame rate) is necessary for a fast production line, maybe one doesn't need to bother with high resolution or color. Anyone out there using such an approach?

test (seems that some posts are lost)

 

 

Iron

I'll take this opportunity to respond to a question from yesterday's session. Several people asked for references on the example applications I described on slide 7 in Monday's presentation. Please note that there are multiple commercially available products in each of these categories. I'm including one example of each here. (Please note that I have mangled the URLs since valid URLs are apparently rejected by the chat software. Replace the "-dot-" with a simple "." in each instance and you're good to go.)

Heart rate from video: www-dot-vitalsignscamera-dot-com/

Pill identification: www-dot-pillidscan-dot-com

LEGO augmented reality kiosk: www-dot-metaio-dot-com/kiosk/lego/

Surgical robot for hair transplantation: restorationrobotics-dot-com

Automotive safety (2013 Cadillac XTS, one example of many): www-dot-youtube-dot-com/watch?v=AsHQ5ORXinE

Automatic panorama image stitching: www-dot-cs-dot-bath-dot-ac-dot-uk/brown/autostitch/autostitch.html

Blogger

Are there any Xilinx embedded vision kits available? If yes, what are those?

Are the Xilinx "IP Core" components licensed for a fee or for free to Xilinx users? Which FPGA families do they work with?

Iron

Thank you, Jose. Very nice overview.

Iron

are there any microcontrollers besides the STM32F4 series which include digital image sensor interfaces?

Iron

Haven't seen a MIPI core.

Gold

Thanks! Does Xilinx Zynq have an IP core for MIPI CSI?

Iron

(This chat system loses posts during times of high activity!)

Iron

Hello everyone. Thanks for attending. First on the question of IP cores: these are logic cores that simplify your design and are available from FPGA vendors or from third-party partners. Sometimes, they are bundled in design suites

Iron

Xilinx offers a pipeline image "bus" just for this purpose. It's called an AXI stream

Gold

BGA is very common for high I/O count FPGAs.

Iron

Nice overview; thanks!

Iron

Good Info. Faster next time please. ;-)

 

Iron

Thanks a lot Jose, it's a great lecture.

Iron

Thanks for the interesting talk!

Iron

Are there pipeline approaches to processing that are bitstream oriented? This would be more focused on feature extraction than actual video storage.

Iron

Thanks, Jose! Fascinating discussion.

Iron

Thanks Jose.  Nice job.

Iron

Thank you Jose & Check

Iron

Thanks Jose - nice overview.

Iron

very good presentation

Thank you José

Xylon provides a "Perspective Transformation and Lens Correction Image Processor" IP core for Xilinx devices

http://www.logicbricks.com/Products/logiVIEW.aspx

Iron

There is no typical, as the voltages vary depending on the product line and the I/O settings you select when configuring your FPGA.

Gold

What voltage do FPGAs typically require? 5 VDC, 3.3 VDC? What about chip mounting, e.g., BGA?

Yes, both FPGA vendors provide this IP, both free and paid.

Gold

jbswindle: Some vendors call it "Lens correction".

Iron

Do Altera or Xilinx provide these IP cores?

Iron

Haven't seen it referenced.

Silver

Optical correction, etc are also available. Some cameras, e.g., OmniVision, have these functions built in so you don't have to do it in the FPGA.

Gold

jbswindle: Some camera modules will do those corrections for you in their DSPs. (Well, if you can figure out their data sheets!)

Iron

What about correction for optical, mechanical, and pixel vignetting effects? In analog days this kind of shading error was corrected with scaled parabola signals added to the baseband video at horizontal and vertical rates. Gradient shading errors were corrected with scaled sawtooth addition at horizontal and vertical rates. What, if anything, is done in the digital domain to correct shading errors?
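
One digital-domain counterpart of those analog parabola/sawtooth corrections is flat-field correction: capture a reference frame of a uniformly lit target, then divide each image by the normalized reference. A minimal sketch in Python/OpenCV and NumPy, with illustrative file names:

# Flat-field (shading) correction using a captured reference of a uniform target.
import cv2
import numpy as np

flat = cv2.imread("flat_field.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

flat = cv2.GaussianBlur(flat, (51, 51), 0)       # smooth away noise in the reference
gain = flat.mean() / np.maximum(flat, 1.0)       # per-pixel gain, normalized to the mean level

corrected = np.clip(img * gain, 0, 255).astype(np.uint8)
cv2.imwrite("scene_corrected.png", corrected)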

vignetting

Silver

And there's your answer...

Gold

pixclautotech: Xilinx (or your favorite FPGA vendor) will license you FPGA modules they've already developed.

Iron

Yes, that is what an IP core is. It's a pre-defined design that implements a specific function.

Gold

DurhamHusker: Their player does that :-(.

Iron

What's an "IP Core"? A design implementation file for an off-the-shelf FPGA?

Had to restart audio again, just now.

Audio perfect for me!

Iron

Same audio problem here (Lévis, QC)

Iron

FWIW, I had to restart the player a few times during yesterday's presentation, also.

I also had to restart the player a few times.

Iron

@caa028: Yes. I had to restart the player a few times.

@vsrollins - thx. must be my connection then...

Iron

Yes, it comes and goes

I'm also email-forwarding the questions to José as you post them, so he's sure to not overlook them. Keep 'em coming!

@caa028. Nope. Sorry.

Iron

Anyone else experiencing audio interruptions (hiccups)?

Iron

Good question. Don't forget to ask Jose again during the chat, Grouby!

Blogger

Do the Xilinx FPGA devices (Spartan-6 or Zynq-7000) support the following electrical standards, used by image sensor interfaces?

- Aptina HiSPI

- MIPI CSI-4

Iron

A bit late for the live session today...

Iron

I guess a humming noise is coming from Jose's mobile phone.

What type of market is there for the 4K format?

mahmood: There is some background hum, audible during quiet moments.

Iron

@mahmood, I'm not personally hearing any egregious background noise

Audio is good now.

 

Iron

Some small projects in concept stage.

Iron

is there noise in the audio, or is it just my connection?

Iron

PC peripheral for medication identification.

A digital night vision monocular; currently we are using a Xilinx FPGA, but I am thinking of migrating to Zynq because I want to add some image processing techniques to my device.

 

Iron

No current work, just studying vision imaging

Not currently pursuing image products but am hopeful.  Have done some in the past.

Iron

Working on USB endoscopes, smart USB display panel with I/O and image sensor.

Working on a tiny video recorder/transmitter for UAVs

Iron

I'm interested in low cost sensor for weld inspection.

Iron

Nothing in embedded vision for the moment

Iron

No project for the moment

nothing with vision yet

Iron

Some background noise from Jose

(Some background hum, though.)

Iron

Hello from Centennial, Colorado

Good volume for José.

Iron

Ahh! Success with the audio, new computer and all!

Iron

Audio is up and running

it is starting, i hear the audio

 

Iron
Audio is up, here we go!

Iron

Hello from Pasadena, Ca

 

Iron

Hello from Vancouver, BC, Canada

Iron

Hello from Sacramento

Iron

Hello from Waterloo, Ontario

Iron

Hello from Montana

Gold

Hello from Placentia CA.

 

Iron

Hi from Panama City, FL

Iron

Good day from Phoenix

Iron

Hello from Wichita, KS (Learjet), recently from Pittsburgh, PA, previously from Houston, TX (Boeing).

The wandering SW Engineer.

Iron

Hello from Binghamton, NY

Iron

Hello from Troy, OH

Iron

Hello from sunny St. Louis!

Iron

Hello from sunny AZ

Iron

Hello from Ottawa ON

Iron

Good Morning from Sunny and WINDY Valdez.

 

Greetings from California

Iron

Hello dipert_bdti

Just now the slides got downloaded.

I tried thrice and then got it. Thanks.

Iron

It's a shame the website doesn't offer a "test audio" player; today I'm on a different computer and would hate to be surprised to discover at 14:00 that the audio player is incompatible with this computer, this browser, or my browser's particular plug-ins.

Iron

@cvl, what is the download problem you're having? The download worked fine for me. They're in PDF (Adobe Acrobat) format

Hello from Albuquerque.

Iron

I am not able to download the slides.

Please assist me.

Iron

The webinar will begin in ~25 minutes

Hello Sir,

How much time is left to start the webinar?

Thanks.

 

Iron

Hello from Chicago

Iron

Good afternoon everyone from snowy Toronto

Iron

Hello from Columbus, Ohio

Iron

hello from Maryland!

Iron

Good afternoon from snowy, slushy Boston!

Iron

Hi from San Jose

 

Iron

Please feel free to post questions and desired presentation topics to this chat, and I'll pass them along to José so that he can accordingly tailor his delivery of the material

Hello from Edmonton, AB

Iron

Greetings from the snowy SE Lake Simcoe area of Ontario, Canada.

Iron

Greetings from Snowy Boston!  Yes, it snowed again!  8-10 inches this morning!

Iron

Good evening, cghaba.

Blogger

Still Cold with one day to spring.

Iron

Hello from Chicago, where it's sunny and 19 degrees F (-6 C).

Blogger

Good evening from Iasi, Romania

Iron

Hello from Tennessee

Iron

Hello from Toronto.

Iron

Hello from sunny - spring already - Atlanta

Iron

Good morning, everyone

Iron

Greetings from Chicago!

Iron

it's a wonderful day

Iron

Good morning from Portland Oregon

Iron

Good Morning from GA

Iron

Good morning from Canada

Iron

Would be interesting if the presenter touches on open source, instructional FPGA IP, parameterizable IP, and high level system modeling for IP & FPGA selection.

Good morning from Mobile, AL

Downloading tomorrow's slide deck

Iron

