For anyone still in the archives: Digi-Key offers a really affordable FPGA developer's kit. The DE0-Nano has an Altera Cyclone IV with a lot of interesting features to experiment with. It sells for just over $90!
This presentation was higher level than expected (or needed). National Instruments has a great knowledge base in its LabVIEW vision support. The particulars of using configurable FPGA applications, image processing algorithms, and application experience would fit well with NI.
I would have preferred Xilinx to present the FPGA details, with NI covering the above topics, which might use FPGAs but are application oriented and focused on vision-related designs and implementations.
(Side note) LabVIEW support is usually very good ...
The highly integrated OV3640 incorporates an extremely advanced Image Signal Processor (ISP) with new features such as an advanced image stabilization/anti-shake (AS) engine that requires no external components. An embedded microcontroller supports the internal auto focus (AF) engine and the programmable general purpose I/O modules enable external auto focus control.
automatic image control functions:
automatic exposure control (AEC), automatic white balance (AWB), automatic band filter (ABF), automatic 50/60 Hz luminance detection, and automatic black level calibration (ABLC)
image quality controls: color saturation, hue, gamma, sharpness (edge enhancement), lens correction, defective pixel canceling, and noise canceling
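To make one of those controls concrete, here is a minimal sketch of "gray world" automatic white balance, a common textbook AWB approach (this is purely illustrative, not the OV3640's actual ISP algorithm; the function name and pixel representation are invented for the example):

```python
# Illustrative "gray world" AWB sketch: scale the R and B channels so
# that every channel's mean matches the green channel's mean.
# Pixels are (r, g, b) tuples in 0..255; not real ISP code.

def gray_world_awb(pixels):
    n = len(pixels)
    r_avg = sum(p[0] for p in pixels) / n
    g_avg = sum(p[1] for p in pixels) / n
    b_avg = sum(p[2] for p in pixels) / n
    r_gain = g_avg / r_avg  # shrink an overly red channel
    b_gain = g_avg / b_avg  # boost an overly dim blue channel
    return [(min(255, int(r * r_gain)), g, min(255, int(b * b_gain)))
            for r, g, b in pixels]

# A reddish-tinted patch: red reads high, blue reads low.
tinted = [(200, 100, 50)] * 4
balanced = gray_world_awb(tinted)
print(balanced[0])  # -> (100, 100, 100): channels pulled to the green mean
```

Real ISPs do this per region with outlier rejection, but the core idea is this simple per-channel gain.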
Digi-Key has many image sensor, camera, and CPLD/FPGA development kits and related support products on their website. One interesting image sensor / camera / microcontroller is a 3-megapixel CMOS device under $10:
The Lego Mindstorms has many 'vision' applications documented online in text and on YouTube. A new version will be released in Autumn 2013. It's a good educational sensor and mechatronics platform ... it can use LabVIEW programming (and many other application languages).
For those of you interested in learning more about embedded vision, I recommend the website of the Embedded Vision Alliance, www.embedded-vision.com, which contains extensive free educational materials.
For those who want to do some easy and fun hands-on experiments with embedded vision, try the BDTI OpenCV Executable Demo Package (for Windows), available at www.embedded-vision.com/platinum-members/bdti/embedded-vision-training/downloads/pages/introduction-computer-vision-using-op
And for those who want to start developing their own vision algorithms and applications using OpenCV, the BDTI Quick-Start OpenCV Kit (which runs under the VMware player on Windows, Mac, or Linux) makes it easy to get started: www.embedded-vision.com/platinum-members/bdti/embedded-vision-training/downloads/pages/OpenCVVMWareImage
Is there a particular processor architecture that lends itself to image processing?
It depends on what kind of image processing you are doing. Some algorithms are highly repetitive and can take advantage of SIMD execution (e.g., SSE). Some algorithms (convolution kernels, for example) can take advantage of the special multiply/accumulate structures in a DSP. Some algorithms (like randomly searching through an image or following an edge) require quick access to large portions of the image, so the size of the L3 cache is important (it really helps performance if you can store the entire image on-chip).
So, I guess it really kind of depends.
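The kernel case above is worth seeing in code: the inner loop of a convolution is nothing but multiply/accumulate operations, which is exactly what DSP MAC units (and FPGA DSP slices) accelerate. A hedged sketch in plain Python (function names and border handling are my own simplifications):

```python
# Why convolution maps well to DSP multiply/accumulate (MAC) hardware:
# the hot loop is one multiply and one add per kernel tap.

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to a grayscale image (list of row lists).
    Border pixels are skipped for brevity."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    # One MAC per tap -- the operation a DSP pipelines.
                    acc += image[y + ky - 1][x + kx - 1] * kernel[ky][kx]
            out[y][x] = acc
    return out

# The identity kernel leaves interior pixels unchanged.
identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(convolve3x3(img, identity)[1][1])  # -> 5
```

Nine MACs per output pixel, millions of pixels per frame: that regularity is what makes the algorithm a good fit for DSPs and FPGAs alike.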
Yep -- better late than never. Thanks for coming in!
Is there any advantage to using an FPGA over ASICs? My understanding is that the trade-off point between the two is at about 1,000,000 units to pay off the development costs.
Low-quantity cost and time-to-market are two advantages of FPGAs over custom ASICs. I think you've got the price trade-off point about right, although it could vary by a few factors of 10, depending on your volumes and the technology of ASIC. As far as time-to-market, you can begin prototyping with an FPGA as soon as you can get an evaluation board. Conversely, with an ASIC it will be only AFTER all your development that you can see how it works in hardware.
Also, since FPGAs are reprogrammable, they are significantly lower-risk than an ASIC (the cost of making a mistake in an FPGA is just a few hours; a mistake on an ASIC costs $$$ and months of time).
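The break-even point mentioned above is simple arithmetic: the ASIC wins once its per-unit savings repay the one-time engineering (NRE) cost. A back-of-the-envelope sketch with made-up numbers (real NRE and unit costs vary enormously by volume and process node, as noted above):

```python
# Hypothetical FPGA-vs-ASIC break-even; all dollar figures invented.
asic_nre = 2_000_000   # one-time ASIC development cost ($)
asic_unit = 5.0        # per-unit ASIC cost ($)
fpga_unit = 25.0       # per-unit FPGA cost ($)

# The ASIC pays off once per-unit savings cover the NRE:
break_even_units = asic_nre / (fpga_unit - asic_unit)
print(int(break_even_units))  # -> 100000 units with these numbers
```

With different (equally plausible) inputs the answer swings by factors of ten in either direction, which is consistent with the "about 1,000,000 units, give or take" rule of thumb above.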
Software from Xilinx is pricey for hobbyists, and dev kits usually come with an evaluation period of 30 days or so.
From my experience, you can get a version of the Xilinx or Altera software that supports simple FPGAs for free (no expiration); they make their money on the corporations that buy the bigger FPGAs or get site licenses for advanced features.
I remember an FPGA expert in my previous project group wanted to try this kind of porting out, but the project was cancelled by management before I could find out how usable such an approach is at the moment (or whether it produces even more work).
Unfortunately, I guess that I cannot make it. I've got a load of work to do at the moment which I probably will not finish until May.
But I already marked the event in my calendar, so I can attend online at least. Thank you for the hint!
@tscheffe: Can you recommend any hardware/software for one to introduce/discuss/teach vision technology to elementary school children, or Boy Scouts pursuing a merit badge?
To "introduce" kids to computer vision, try the BDTI OpenCV Executable Demo Package, available for download from the Embedded Vision Alliance website. bit-dot-ly/YIz9EH
Kids can play with the sliders and get immediate feedback on how it alters the algorithm.
It's not too different from Instagram: it lets kids play with parameters and view an immediate response. They do not need to understand what the parameters do inside the algorithm, just the effect the parameters have on the result.
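In that "move a slider, see the effect" spirit, here is about the smallest possible example: a binary threshold where one parameter visibly changes the result (the function name and values are just illustrative, not part of the BDTI demo):

```python
# A one-parameter demo in the spirit of the slider-based OpenCV demos:
# moving the threshold changes which pixels survive.

def threshold(pixels, t):
    """Binarize a row of grayscale values against threshold t."""
    return [255 if p >= t else 0 for p in pixels]

row = [10, 80, 130, 200]
print(threshold(row, 100))  # -> [0, 0, 255, 255]
print(threshold(row, 150))  # -> [0, 0, 0, 255]  (slider moved up)
```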
National Instruments doesn't make FPGAs, although we use FPGAs in our products. If you're looking to buy a rad-hard FPGA, I think that Xilinx is the only player on the market (does anyone else know differently? I haven't looked thoroughly.)
@ADiewi: As OpenCV uses C code internally and also has a C interface, has anyone heard of or ever tried out C-to-HDL converters? (I know there would still be a lot of work to do with the generated HDL code ;) )
There will be a presentation on this topic at the Embedded Vision Summit on April 25th in San Jose. If you can't make the event in person, the presentations will be available in video form after the event. See www-dot-embedded-vision-dot-com/embedded-vision-summit
As OpenCV uses C code internally and also has a C interface, has anyone heard of or ever tried out C-to-HDL converters? (I know there would still be a lot of work to do with the generated HDL code ;) )
You already show your maturity by pointing out that there's still a lot of work to do with the generated HDL. Since software is instruction-centric while hardware is data-flow-centric, the C-to-gates process is obtuse and buggy at best. At worst, it's just totally impossible. (There's no straightforward way to implement a linked list or a heap or a dynamic array in hardware.)
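A rough software analogy for that data-flow style (not real HDL, just a sketch): hardware-friendly code looks like a pipeline where each stage streams values to the next, like logic stages separated by registers or FIFOs. Python generators mimic the shape; the stage names here are invented:

```python
# Data-flow pipeline sketch: each stage consumes a stream and yields a
# stream, the way hardware pipeline stages pass samples through FIFOs.

def source(samples):
    for s in samples:
        yield s

def offset(stream, k):
    # Stage 1: fixed add -- would be a single registered adder in HDL.
    for s in stream:
        yield s + k

def clamp(stream, lo, hi):
    # Stage 2: saturation -- a comparator plus mux in HDL.
    for s in stream:
        yield max(lo, min(hi, s))

pipeline = clamp(offset(source([10, 250, 120]), 20), 0, 255)
print(list(pipeline))  # -> [30, 255, 140]
```

C that decomposes into fixed stages like this converts to gates reasonably well; C built on pointer-chasing structures (linked lists, heaps) is exactly the part that doesn't.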
There are some high-level tools that can help. Being a National Instruments employee, I've used and really like LabVIEW FPGA, but then again I'm biased, so take that for what it's worth.
Daniel, do you use Xilinx? What do you think of the tools with regards to using their IP for imager interfacing and image processing?
I've used both Altera and Xilinx FPGAs. There are lots of other smaller players in the FPGA market (Lattice, Actel, Achronix, for example) as well. I've never actually used Xilinx or Altera image processing IP, but my overall feel is that they are close to equivalent -- they tend to be building blocks for creating an FPGA-based video-conferencing system. As I mentioned in class, vision applications tend to be really varied and interesting and exciting -- it's tough to define a common set of blocks for what you want to do.
Can you recommend any hardware/software for one to introduce/discuss/teach vision technology to elementary school children, or Boy Scouts pursuing a merit badge?
I'd recommend using a webcam, and looking for some free software (afraid I don't know which ones to recommend). You can do a lot with not very much. If you want repeatability or reliability, you can get much better quality, but for teaching kids and helping them get excited, you don't need much.
I would try developing some circuits with FPGAs through development boards from Altera and Xilinx. Altera's website has some good beginner training, and you can find other training materials on other websites. Get some VHDL or Verilog design books to help. All Programmable Planet has some good resources too.
Thank you Daniel! Is there any open source IP core for MIPI CSI-2 camera input?
I don't know of any. However, MIPI is a very simple protocol for hardware to interact with. If you can find implementation requirements, you can develop your own pretty easily (it's much less complicated than USB, but not quite as simple as serial).
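To illustrate how simple the protocol's framing is, here is a hedged sketch of parsing a CSI-2 long-packet header (a toy parser, not spec-complete; check the MIPI CSI-2 specification before relying on the field layout). The 4-byte header is a Data Identifier byte (2-bit virtual channel plus 6-bit data type), a 16-bit little-endian word count, and an 8-bit ECC, whose checking is omitted here:

```python
# Toy MIPI CSI-2 long-packet header parser (illustrative only;
# ECC verification and lane de-interleaving are omitted).

def parse_csi2_header(header):
    di, wc_lo, wc_hi, ecc = header
    return {
        "virtual_channel": (di >> 6) & 0x3,   # upper 2 bits of DI
        "data_type": di & 0x3F,               # lower 6 bits of DI
        "word_count": wc_lo | (wc_hi << 8),   # payload length in bytes
        "ecc": ecc,                           # not checked in this sketch
    }

# Example: data type 0x2A (RAW8), virtual channel 0, 640-byte payload.
hdr = parse_csi2_header(bytes([0x2A, 0x80, 0x02, 0x00]))
print(hdr["data_type"], hdr["word_count"])  # -> 42 640
```

Four header bytes plus a payload CRC is most of the packet layer, which is why a home-grown receiver is feasible once the physical-layer signaling is handled.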
Can you recommend some good tutorials on FPGA technology and programming?
I'd recommend Googling for college classes -- you can find quite a bit, including on MIT's OpenCourseWare and others. Digilent and others have some good (cheap and functional) FPGA eval boards. FPGA vendors also have their own boards.
Also, see my earlier post about some recommended readings for FPGA beginners.
Vendors like Altera and Xilinx have pretty good image processing toolkits. But, I've found that there's no complete "OpenCV for FPGA" that has almost everything you'll ever need. Most image processing applications also have a significant portion of home grown FPGA code added to standard blocks.
Thanks Daniel and Chuck. Is there a way of getting FPGA experience as a hobbyist? Otherwise, how can one convince a potential employer to offer a position working with FPGAs without previous FPGA experience? I'm coming from a senior level; entry level would be a different situation.
Audience members, for additional information on the topics that Daniel is discussing today, please see: www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/wilding-interview
@mark.browne, regarding your GPU question, check out this presentation on OpenCL, which enables GPGPU applications such as embedded vision in an industry-standard API fashion: www.embedded-vision.com/platinum-members/bdti/embedded-vision-training/videos/pages/july-2012-embedded-vision-alliance-membe
I've recently found "The Design Warrior's Guide to FPGAs" by Clive Maxfield to be a really good overview of FPGA technology (not vision specific). "Design for Embedded Image Processing on FPGAs" by Donald G. Bailey is a more detailed look at FPGA implementations of image processing algorithms.
I am interested in FPGA development kits for embedded vision, and I would like to know the criteria for choosing an FPGA-based dev kit. I do not have any experience with embedded vision. Could you provide some references (dev kits, books, etc.)?
Both Xilinx and Altera (together, they control 80% of the FPGA market) have really good vision development kits for FPGA.
However, if you're new to embedded vision, I'd recommend just playing on your CPU, or get a cheap microprocessor-based kit.
As far as guidance on choosing an FPGA dev kit: it kind of depends on what you care about. You may not know that 'til you've experimented a bit. Buying something cheap and playing around with it can teach you a lot.
You are covering FPGAs now -- any comments on using GPU / Nvidia CUDA technology to do the image processing?
I'll only briefly cover GPU technology. Certainly it is a very interesting technology, and many people have used it successfully to speed up image processing. Both FPGAs and GPUs have a radically different interface from CPUs (in terms of how you program them and how you get data on and off the processing unit).
My presentation today will focus mainly on FPGAs and how they work for embedded vision -- I think that GPUs work well as a coprocessor, although there are definitely a wide variety of industry and research engineers that have very strong feelings on the FPGA vs GPU discussion.
@khitoshi, you can download the slides from the link "Today's Slide Deck", just above the chat text entry window on this page. The audio player widget will automatically appear on this page when the live presentation begins, at 11 AM PDT (California Time) today.
Focus on Fundamentals consists of 45-minute on-line classes that cover a host of technologies. You learn without leaving the comfort of your desk. All classes are taught by subject-matter experts and all are archived. So if you can't attend live, attend at your convenience.