Comments

hello all from Edmonton, Alberta

Iron

The presentation alluded to some vision concepts: edge detection, feature extraction, etc.  I would have liked to see some of the general concepts laid out in a few slides with examples and a walk-through, to set up the introduction for the more detailed discussions to come.
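To make those two concepts concrete, here is a minimal sketch assuming OpenCV's Python bindings, with "frame.jpg" as a placeholder input image (both assumptions, not from the presentation): Canny for edge detection, ORB for feature extraction.

import cv2

# Load a test image as grayscale; "frame.jpg" is a placeholder path.
img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
assert img is not None, "place a test image at frame.jpg"

# Edge detection: Canny keeps gradient pixels between the two
# hysteresis thresholds (100 weak, 200 strong) and links them.
edges = cv2.Canny(img, 100, 200)

# Feature extraction: ORB detects keypoints and computes binary
# descriptors that later stages can match or classify.
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)

print(f"{len(keypoints)} keypoints; edge map shape {edges.shape}")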

Good intro ... Great application topic ... looking forward to the coming presentations.  Thanks for archiving these sessions. 

 

@Alaskaman66: Do you know what the state of the art is for ASL sign language recognition? Anything for lip reading or language type detection?


Maybe a motion sensing glove worn by the 'signer', converting motions to letters and words would be a good application for you to consider.

For those of you interested in learning more about embedded vision, I recommend the website of the Embedded Vision Alliance, www.embedded-vision.com, which contains extensive free educational materials. 

For those who want to do some easy and fun hands-on experiments with embedded vision first-hand, try the BDTI OpenCV Executable Demo Package (for Windows), available at www.embedded-vision.com/platinum-members/bdti/embedded-vision-training/downloads/pages/introduction-computer-vision-using-op

And for those who want to start developing their own vision algorithms and applications using OpenCV, the  BDTI Quick-Start OpenCV Kit (which runs under the VMware player on Windows, Mac, or Linux) makes it easy to get started: www.embedded-vision.com/platinum-members/bdti/embedded-vision-training/downloads/pages/OpenCVVMWareImage

Blogger

Really great presentation, thanks Jeff!

Iron

Hello from Windsor, UK

Iron

Thanks Jeff, Chuck, Digi-Key, et al.

 

Iron

Thanks Jeff, interesting presentation

Iron

Just finished the recorded audio (the portion that I missed live). Very interesting lecture... Thank you Jeff!

Iron

For those who are looking for the app names in slide 7, why don't you clip those images out and do a Google image search?

Iron

What would be an ideal technology for a simple embedded vision system, and what for an advanced system (GPU, CPU, FPGA, etc.), considering current market needs?

 

Thanks, everyone, for your interest and excellent questions.  If I've missed any burning questions, feel free to email me via the Embedded Vision Alliance website, info@embedded-vision.com.  I hope you'll attend the sessions for the rest of the week -- we have world-class experts giving those presentations.

Blogger

@Ahmed:  Could you please let us know the name of the apps in Slide 7, or should we do a search for them?
I have posted the info on the first one.  I don't have the others handy, but I will get them and provide them later in the week.

 

Blogger

@Fuzz: I hope there is some info on open source solutions
There are lots of good open source resources.  The BDTI Quick-Start OpenCV Kit, available free on the Embedded Vision Alliance web site, is a great starting point.

Blogger

@wahaufler: Is it feasible/possible/easy/common to use iPhone/Android smartphones and tablets to prototype such embedded systems?  Where the CPU power is not enough, interfacing with a separate external FPGA or some such device for the vision computations?  Or is the data bandwidth too much for USB or Bluetooth?
It is indeed increasingly feasible to use smartphones and tablets as embedded vision platforms.  However, I am doubtful that it will be feasible to add processing power beyond what is built in.  The good news is that the application processors in these devices have lots of processors beyond the CPU.  E.g., they all have GPUs, and GPUs can often be used as programmable coprocessors for vision.

 

Blogger

I look forward to the rest of the week's presentations.

Iron

Thank you for the presentation Jeff!

Iron

Artificial intelligence will play a big role in camera vision capabilities in the future.  Distinguishing between facial expressions and other things moving in the background and foreground will be a big accomplishment.

Iron

@Alaskaman66  Terahertz sensors are big monsters in size and cost over $200K. Hyperspectral cameras are mostly linear scanners; the 2D ones, like the Telops Hypercam, cost in the $500K range and are also big in size. For now, of course.

Iron

@fwjava: Are there any low cost cameras that are good for performing low light/no light vision (mostly object tracking)?
I saw a DIY-type video online recently where a guy took a run-of-the-mill webcam and removed a plastic IR filter, creating a low-cost IR-sensitive camera.  And, of course, in some applications, artificial illumination can be used.

Blogger

@Alaskaman66: And what about sensors outside the normal range of (human) visible light, which is roughly 400 to 700 nm. Say Terahertz range. Or millimeter microwave? Hyperspectral applications? Seems like the surface is just being scratched.
It seems that this is fairly exotic technology today, but exotic technologies have a way of becoming pedestrian technologies over time.

 

Blogger

@AZ@designnews: Thank you Jeff for the illuminating presentation. Would you mind repeating the name of the embedded-vision company you mentioned in the presentation that analyzes facial expressions?
The company with the emotion-reading technology is called Affectiva.  Their technology is based on research done at MIT in the research group of Professor Rosalind Picard.

 

Blogger

@Alaskaman66  For capturing the temperature data you will need costly infrared thermal cameras operating in MWIR and LWIR wavelengths. Maybe the detection can be done in the NIR spectrum (like the ones used in night vision cameras). I saw a paper about something similar for pedestrian detection in a high-end car (a Cadillac or Buick, I think) a couple of years ago in Vision Systems Design magazine.

Iron

Are there any low cost cameras that are good for performing low light/no light vision (mostly object tracking)?

Iron

For safety, HDR sensors are quite the right choice. With the correct lenses, they can even detect near-IR light.

There are also publications that propose on-chip logarithmic light-intensity measurement, meaning you do not have to "do" HDR over multiple exposure times. (But this will take a while ;) )

Iron

And what about sensors outside the normal range of (human) visible light, which is roughly 400 to 700 nm. Say Terahertz range. Or millimeter microwave? Hyperspectral applications? Seems like the surface is just being scratched.

@Alaskaman66: Here's another one... Someone mentioned IR detection. A vision application that identifies (warm) animals about to step onto the road would be very helpful. Alaska has hundreds of car-moose collisions each year.

I know!  I was born in Alaska and my dad still tells the story of his encounter with a mama moose and her calf. :-)  This is another example of where multiple sensor types used together can be really helpful.  And there are in fact vision systems today that do exactly that: combine inputs from IR and visible-spectrum cameras.

 

Blogger

@Alaskaman66
Your comment reminds me of a short film (which I recently saw at the Banff Mountain World Film Festival here in Denver) about wildlife crossings on the Trans-Canada Highway...

Thank you Jeff for the illuminating presentation. Would you mind repeating the name of the embedded-vision company you mentioned in the presentation that analyzes facial expressions?

@moulay: Are there benchmarks available for typical embedded processors used in image processing/machine vision applications? For example, benchmarks for OpenCV on some embedded processors?

There's nothing yet that's widely used, unfortunately.

Blogger

@Alaskaman66: Jeff, are all machine vision systems taking basic scanned image formats as input? How about staring focal planes?
Great question.  To keep things brief today I spoke only about conventional image sensors.  But in reality, vision systems use many different kinds of sensors, depending on what they're trying to do.  E.g., if you see one of the Google self-driving cars (which I tend to see regularly here in the San Jose area), you'll see a rotating laser scanner on the roof.  I don't know specifically of anyone using staring focal planes, but I wouldn't be surprised.

 

Blogger

IR should be easy to implement in Alaska since everything else is cold up there anyway... :-)

Iron

@wahaufler

Another challenge of vision applications for space is the huge expense and low tolerance for errors.  "Prettiness" ranks pretty low when a mistake could cost billions of dollars.  But I think you're right -- even with modern algorithms, if the lighting is really bad (e.g. non-controllable, low, or non-uniform), you sometimes have to be okay with "ugly" markings to get the system to work.

 

Blogger

Here's another one... Someone mentioned IR detection. A vision application that identifies (warm) animals about to step onto the road would be very helpful. Alaska has hundreds of car-moose collisions each year.

@jbswindle: Cost/capability continues to amaze me.  A cheap NTSC color camera with little better than QVGA resolution sold in 1970 (Sony DXC-5000) for 30,000 inflation-adjusted dollars.  In a recent teardown report I noticed the HD color camera included with a particular cell phone was estimated at $5.  Then there's processing cost: a cheap 16-bit minicomputer with no secondary storage and perhaps 8K of magnetic core RAM in 1970 cost about 21,000 inflation-adjusted dollars.
We've come a long way, baby.
Exactly!  When you can get a capable camera for $5 and a suitable processor for $10 (which you can do today, in high volume), you can start thinking about putting embedded vision into even very cost-sensitive products, like toys.

 

Blogger

jbswindle: Moore's Law is still in effect for a few more years. ;-)

Iron

@Ron Clyde
Still hoping for those application names...


 

 

Blogger

Jeff, are all machine vision systems taking basic scanned image formats as input? How about staring focal planes?

Cost/capability continues to amaze me.  A cheap NTSC color camera with little better than QVGA resolution sold in 1970 (Sony DXC-5000) for 30,000 inflation-adjusted dollars.  In a recent teardown report I noticed the HD color camera included with a particular cell phone was estimated at $5.  Then there's processing cost: a cheap 16-bit minicomputer with no secondary storage and perhaps 8K of magnetic core RAM in 1970 cost about 21,000 inflation-adjusted dollars.

  We've come a long way, baby.

Silver

The app in the upper right of slide 7 is a dedicated Lego kiosk, found in retail settings

Thanks, all. As a small business it is hard to get going when we're only using tiny quantities, but this presentation is proof that embedded cameras are becoming more established. I just hope someday adding a camera to a design becomes as easy as adding an accelerometer.

Iron

@Alaskaman66: Do you know what the state of the art is for ASL sign language recognition? Anything for lip reading or language type detection?

I have seen some published research on both lip reading and sign language reading, but I am not up to date  on the latest, sorry.

 

Blogger

Still hoping for those application names...

Iron

@akoby: The main problem I am having is finding good information/sources for purchasing camera modules... it is much harder to get started when you need to sign an NDA to get a pinout
I feel your pain... we have experienced this problem first hand.  The challenge here is that the sensor and camera module suppliers are mostly geared up to support a few ultra-high-volume customers -- and only those customers.  

 

Blogger

"buring" -> "buying"

Iron

I was involved in developing the GUI for the Advanced Space Vision System by Neptec, used aboard the Space Shuttle and ISS.  This required all payloads to have big ugly black and white spots (like measles) mounted at precisely known locations for tracking, say, an ISS assembly module's attitude and location, to avoid collisions with existing space station elements.  More modern algorithms probably do not require such visual targets, but the lighting conditions are very extreme and challenging in space, and such targets might still be needed in such situations.  Do you think so or not?

Iron

Thanks, Jeff. I'm going to listen to the archived audio over again. Do you know what the state of the art is for ASL sign language recognition? Anything for lip reading or language type detection?

akoby: The real trick comes later! Unless you're Apple (etc.) buring millions of units per year, the sensor vendors really aren't interested in spending time on helping you learn about all of the "tricks" inside their designs. You'll spend a lot of time trying to discover how these sensors *REALLY* work (as compared to how the data sheets and app notes claim they work).

Iron

@jeff

Thanx for the presentation!!

I am satisfied with your answer. Thank you. Any suggested readings?

Iron

@MazianLab
Jeff, how do you integrate embedded vision and system-on-a-chip?
That's a very broad question. I'll address one aspect:  Vision applications typically comprise a series of processing steps.  The front-end steps (nearest the sensor) process extremely high data rates, but use relatively simple algorithms.  These steps are typically implemented on some sort of highly parallel programmable processor, such as an FPGA, GPU, or DSP.  Later algorithm steps work on reduced data rates (e.g., features rather than pixels) but use much more complex algorithms.  These steps can often run efficiently on general-purpose CPUs.  So an SoC that does embedded vision will usually have a combination of one or more programmable parallel processing engines (they have to be programmable because the algorithms tend to change quickly) and a general-purpose CPU.
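As a rough illustration of that split, here is a minimal sketch in Python with OpenCV (the library choice, the synthetic frame, and the 500-pixel area threshold are all assumptions): a simple, regular, per-pixel front end of the kind that maps well onto a parallel engine, feeding a branching, per-feature back end of the kind a CPU handles well.

import cv2
import numpy as np

def front_end(frame_gray):
    # Pixel-level stage: simple, regular operations at the full data rate.
    blurred = cv2.GaussianBlur(frame_gray, (5, 5), 0)
    return cv2.Canny(blurred, 50, 150)

def back_end(edges):
    # Feature-level stage: irregular, branching logic on far less data.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # The 500-pixel minimum area is an arbitrary example threshold.
    return [c for c in contours if cv2.contourArea(c) > 500]

frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in frame
objects = back_end(front_end(frame))
print(f"{len(objects)} candidate objects")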

 

Blogger

@akoby

Try talking with distributors such as Arrow Electronics or Avnet.  They are very helpful when looking for information about modules and other products.

Iron

@Jeff: You're right. There are a lot of advances in processing equipment, which make vision more and more interesting, as it also fits nicely with our human perception.

It's currently (like always) a question of weighing what is important: resolution, frame rate...

Iron

Are there benchmarks available for typical embedded processors used in image processing/machine vision applications? For example, benchmarks for OpenCV on some embedded processors?

Iron

@ADiewi
2 notes (from my experience in image processing):  For blind spot object detection additional sensors are needed, because vision-only systems have too high error rates
You're right:  For safety-critical systems operating in difficult environments (including darkness), vision is typically used along with other sensor types, such as radar, to create a highly reliable system.

 

 

 

Blogger

The main problem I am having is finding good information/sources for purchasing camera modules... it is much harder to get started when you need to sign an NDA to get a pinout

Iron

Could you please let us know the name of the apps in Slide 7, or should we do a search for them?

Thanks

Iron

Thanks! Nice presentation.

@Fuzz: Try OpenCV, it's very up-to-date and fast

Iron

@ADiewi: Stitching is currently only possible for images or video post-processing as it is computationally highly expensive (even with cutting-edge algorithms)
For blind spot object detection additional sensors are needed, because vision-only systems have too high error rates
Embedded vision is typically quite computationally expensive. But we're fortunate that we now have processors that can deliver tens of billions of operations per second for tens of dollars and a few watts -- which is why we are now starting to see sophisticated vision functions in cars, smartphones, etc.

Blogger

Jeff, the hair transplant application is amazing. Wish I could try it out.

 

Iron

I hope there is some info on open source solutions

Iron

@Atlant: Of course, stitching became really fast, but these algorithms are often limited. A video frame rate of, let's say, 25 fps, which means a processing time of 40 ms per frame, can almost never be reached. If the stitching on your devices takes maybe 800 ms, it's very hard to notice.

But of course it also depends on resolution etc. A wide topic ;)
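For anyone who wants to check that frame-budget arithmetic on their own machine, here is a minimal sketch using OpenCV's cv2.Stitcher (a real API; the two input filenames are placeholders): time one stitch of an overlapping image pair against the 40 ms available per frame at 25 fps.

import time
import cv2

# Placeholder filenames for an overlapping image pair.
left = cv2.imread("left.jpg")
right = cv2.imread("right.jpg")
assert left is not None and right is not None

stitcher = cv2.Stitcher_create()
t0 = time.perf_counter()
status, pano = stitcher.stitch([left, right])
elapsed_ms = (time.perf_counter() - t0) * 1000.0

budget_ms = 1000.0 / 25.0  # 40 ms per frame at 25 fps
print(f"status={status}: stitch took {elapsed_ms:.0f} ms "
      f"against a {budget_ms:.0f} ms real-time budget")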

Iron

Is it feasible/possible/easy/common to use iPhone/Android smartphones and tablets to prototype such embedded systems?  Where the CPU power is not enough, interfacing with a separate external FPGA or some such device for the vision computations?  Or is the data bandwidth too much for USB or Bluetooth?

Thanks, great presentation! 

Iron

Recorded audio is available now

Iron

Will this course also cover IR or night-vision applications?

Iron

Thanks, looking forward to the next presentations.

 

Iron

@raulamit: Doubts on the first application. How can a front camera provide better resolution than our eye?
Many engineers who see this app running are skeptical at first, but the fact is that it works, and there are peer-reviewed published papers describing the underlying techniques.  Try the app and you will see that it does actually work.

Blogger

Thanks! Very much appreciated. Embedded vision really is an exciting field with huge applications. Looking forward to more lectures.

Blast, missed the presentation.  Hopefully tomorrow.

Iron

Dear MikeH, yes they will be available online. You can also find a video archive (and PDF file download) of the presentations from last September's premier summit on the Alliance website, in the Embedded Vision Academy area

Jeff and Chuck, thanks for the excellent information and presentation!

Iron

Thank you for the overview.

Iron

Thanks for the introduction, Jeff. Looking forward to tomorrow's presentation!

Iron

Thanks for the informative introduction.  I'm looking forward to the remaining lectures.

Iron

excellent presentation Jeff

 

Iron

Thank you both for the presentation.

Iron

Thanks Jeff & Chuck

Iron

Many thanks, Jeff and Chuck, for an excellent lecture!

Iron

Interesting presentation!

 

Iron

Waiting for the recorded audio to show up...

Iron

Thanks for the great talk Jeff !

Iron

Thank you Jeff, the information is very exciting!

Iron

Thanks Jeff.  Nice job.

Iron

Jeff,

Thanks for the intro.

Iron

Thanks ... see you tomorrow!

Iron

Excellent presentation... thanks!

Iron

Looking fwd to the Cognivue presentation.

Will the presentations from the Embedded Vision Summit be posted on the Internet?

Iron

ADiewi: I don't understand your point about stitching. Both my Nikon and my iPhone "stitch" essentially in real time (rather than needing, e.g., Photoshop).

Iron

Late for the live session today...

Iron

2 notes (from my experience in image processing):

Stitching is currently only possible for images or video post-processing as it is computationally highly expensive (even with cutting-edge algorithms)

For blind spot object detection additional sensors are needed, because vision-only systems have too high error rates

Iron

As I wrote just last night in an article I'm working on:

Electronic systems are also adept at detecting and accentuating minute image-to-image variations that the human visual system is unable to perceive, whether due to insufficient sensitivity or inadequate attention. As research at MIT and elsewhere has showcased, for example, it's possible to accurately measure a person's pulse rate simply by placing a camera in front of him or her and logging the minute facial color changes over time that are reflective of capillary blood flow. Similarly, embedded vision systems can precisely assess respiration rate by measuring the periodic rise and fall of the subject's chest.

This same embedded vision precision can find use in providing early-warning indication of neurological disorders such as ALS (Amyotrophic Lateral Sclerosis, also known as Lou Gehrig's disease) and Parkinson's disease. Minute trembling, so slight that it may not yet even be perceptible to the subject, is less likely to escape the perceptive gaze of an embedded vision-enabled piece of medical gear. Slight gait aberrations are another possible neurological abnormality warning sign.

Subtle color shifts over time are probably easier for a computer to detect.
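Here is a minimal sketch of that pulse-measurement idea, using only numpy; the synthetic frames (flat gray with a faint 1.2 Hz green-channel wobble) stand in for real face crops and are purely an assumption for demonstration.

import numpy as np

fps = 30.0
t = np.arange(0, 10, 1.0 / fps)  # ten seconds of "video"
# Stand-in frames: flat gray plus a faint 1.2 Hz (72 bpm)
# green-channel oscillation mimicking capillary blood flow.
frames = [np.full((64, 64, 3), 128.0)
          + np.array([0.0, 0.5 * np.sin(2 * np.pi * 1.2 * ti), 0.0])
          for ti in t]

# One sample per frame: mean green-channel brightness of the face ROI.
signal = np.array([f[:, :, 1].mean() for f in frames])
signal -= signal.mean()  # remove the DC component

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

# Search only plausible heart rates: 0.8-3.0 Hz, i.e. 48-180 bpm.
band = (freqs >= 0.8) & (freqs <= 3.0)
pulse_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated pulse: {pulse_hz * 60:.0f} beats per minute")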

Iron

Never underestimate the power of vanity to drive tech. Cool!

Gold

Just tried Vital Signs. It worked great.

Iron

MIT has done lots of work on calculating pulse rate (and other minute frame-to-frame changes)... Cardiio is another commercially available software program for iOS that implements the technique (along with the Philips software that Jeff mentioned). web.mit.edu/newsoffice/2010/pulse-camera-1004.html

raul: Not spatial resolution but color detection. And the camera can pay attention for a long time, whereas our attention wanders.

Iron

Jeff, do you have the name of the apps?

Iron

Better color resolution

Gold

I'm assuming Goldilocks is right.

Blogger

Doubts on the first application. How can a front camera provide better resolution than our eye?

Iron

It is working now! I am using IE and the voice is very good!

All good here on both IE and Firefox (latest versions with all updates on both).

Iron

Jeff, how do you integrate embedded vision and system-on-a-chip?

Iron

Reminiscent of Goldilocks and the Three Bears...too quiet, too loud, just right...;-)

Be sure you have your computer volume up and use the green bar slider on the audio player.

Gold

Volume's good.  Chrome on ubuntu.

Iron

Chuck is quite a bit louder than Jeff.

Iron

OK in Gatineau, Quebec

 

Here sound is perfect.

Thanks

Iron

TomBee:

Same problem with IE.  Overmodulation too.

 

Iron

audio is perfect for me

Iron

It's just right for me

Iron

it is too loud now

Iron

Atlant:

How should I try to keep the 'play' on? I tried both firefox and chrome!

Volume is fine for me on chrome on the Mac

Iron

Jeff is too quiet to hear properly.

I am using Chrome and have both the audio controls to max.

Iron

audio is perfect for me

Iron

Omar:

Make sure the "Play" button stays in "Play"; this player tends to pop back to "paused" all by itself.

Iron

The audio keeps stopping by itself randomly; sometimes it lasts only a few seconds. I have to keep restarting it.

Iron

Folks, I'm using Firefox 19 (for Mac OS X, in my particular case) and the audio's working fine for me. You'll need to have the Flash Player installed in order to access the audio player plugin. Try a different browser if you're having audio problems?

Have you downloaded the latest Adobe Flash Player?

Iron

After lots of clicks of the play button, the audio is now working.

Iron

I pressed the play button, but I cannot hear anything!

 

your content seems to be blocked from my computer

 

Iron

click on play on the audio player

Iron

ceaxyz: please download slides from... 

Special Educational Materials

Refresh the page and you will see the audio player right below the headline.

 

Iron

right under the headline

Iron

I must be missing something, no audio. Where do you click on the webpage to listen in?

Gold

Speak quieter please! J/K

Iron

Audio's good in Maine.

 

Iron

hello and good afternoon 

 

Iron

Hard to hear...please speak louder.  Thx.

Iron

Please get closer to the mike, Jeff!

Iron

hello from Timisoara, Romania

Iron

Good morning from sunny Austin, Texas

Blogger

Audio up, here we go!!

Iron

Hello from Cape Cod

 

Iron

Hello from Troy, OH

 

Iron

Good afternoon everyone from stormy Toronto!

Iron

Hello from Houston, Texas

Gold

hello everyone!

 

Iron

Hello from Placentia CA.

 

Iron

Hi everyone from Sweden

Iron

Gatineau, Quebec, on-line

Hello from El Dorado Hills

Alex cohen

 

Iron

Hello from Lilburn GA

 

Iron

Hello from Albuquerque.

Iron

Howdy from Fort Worth !

Iron

Greetings from Chicago!

Iron

Greetings pgminin from Italy!

Blogger

Thank you Charles!   (It's great to live in the Ozarks!!)   :)

Greetings from Italy!

Iron

Hello to Atlanta and the Ozarks.

Blogger

The presentation has not started yet.  It will begin in approximately 35 minutes.

Blogger

If you haven't already, download the Powerpoint from above. To the right of Jeff's photo, there's a heading that reads "special educational materials." Right click on the link that says "Today's Slide Deck," save it to your desktop, then open it when the session starts.

Blogger

Hello from rainy Atlanta...

Iron

Overcast and 28 degrees F in Chicago.

Blogger

Session starts in 40 minutes.

Blogger

Hello Sir/Ma'am,

Has the session started?

Iron

Greetings from Chilly Boston!

Iron

Hello from Montana.

Gold

It seems like my message was sent twice?? I did not hit any key here. A website bug? :)

Iron

Hi from Levis, Qc (Canada).

As asked by Mr. Jeff Bier, I will post my question prior to the presentation. I am working in machine vision, mainly developing PC-based systems. In our team we have already worked with some embedded systems like smart cameras (CPU- and DSP-based). We currently use GPGPU for high-performance processing and I am wondering if there are embedded GPU technologies available in the market? (The only one I came across was from GE Intelligent Platforms that targets the defense sector)

Iron

Strictly speaking, OpenGL CAN (clumsily) be used to do limited GPGPU stuff. But yes...I meant OpenCL ;-)

@dipert_bdti, Thanks, I was about to ask if it was OpenCL instead of OpenGL :)

Iron

OpenCL, not OpenGL. My apologies ;-)

The most common graphics core providers in the ARM ecosystem are Imagination Technologies (PowerVR), Qualcomm (Adreno), Vivante (used by both Freescale and Marvell, among others) and ARM's own Mali. All four claim varying degrees of OpenGL compatibility. And as Jeff notes, the AMD/ATI, Intel and Nvidia GPUs found (along with PowerVR) in the x86 world also support OpenGL (along with DirectX Compute for Windows O/Ss). In some cases, this takes the form of partial-to-full CPU-based software emulation in lieu of GPU-based hardware acceleration.

Morning!

Starbucks looks a long ways away!

Iron

@moulay: I am working in machine vision, mainly developing PC-based systems. In our team we have already worked with some embedded systems like smart cameras (CPU- and DSP-based). We currently use GPGPU for high-performance processing and I am wondering if there are embedded GPU technologies available in the market? (The only one I came across was from GE Intelligent Platforms that targets the defense sector)
Great question.  Yes, there are more and more embedded processors that incorporate general-purpose GPU capabilities (graphics processing units that can be programmed to perform other parallel processing tasks).  For example, Freescale's i.MX6 series of ARM-based processors includes a GPU that supports OpenCL programming.  And AMD's "APU" processors combine an x86 CPU plus GPU with OpenCL support.  Here's an AMD white paper describing a smart camera based on these processors: http://www.amd.com/us/Documents/AMDXIMEACaseStudy.pdf
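One concrete way to try GPU offload from a high-level language is OpenCV's transparent API; cv2.UMat and the cv2.ocl module are real OpenCV features, and the sketch below (the frame contents are synthetic) dispatches work to an OpenCL device, such as an embedded GPU, when one is available, falling back to the CPU otherwise.

import cv2
import numpy as np

print("OpenCL available:", cv2.ocl.haveOpenCL())
cv2.ocl.setUseOpenCL(True)

frame = np.random.randint(0, 255, (1080, 1920), dtype=np.uint8)
gpu_frame = cv2.UMat(frame)  # upload to the OpenCL device

# These calls run on the GPU when OpenCL is active; the API is
# identical either way, which is the point of the transparent API.
blurred = cv2.GaussianBlur(gpu_frame, (7, 7), 0)
edges = cv2.Canny(blurred, 50, 150)

result = edges.get()  # download back into a numpy array
print("edge map shape:", result.shape)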

Blogger

@tomchu, Shouldn't that be "A everyone"

Hey everyone, from Toronto.

Iron

From the Embedded Vision Alliance website home page, go to "Embedded Vision Academy" (menu item at the top of each page) => Provider => BDTI to find the materials that Jeff is referencing (and others). You'll need to register on the site (button in the top right corner of any site page), if you're not already a registered user, prior to accessing Academy content. And then feel free to continue exploring, as you wait for the webinar to begin in a couple of hours' time!

Hello from Binghamton, NY.

Iron

Hi from Levis, Qc (Canada).

As asked by M. Jeff Bier, I will post my question prior to the presentation. I am working in machine vision, mainly developing PC based systems. In our team we have already worked with some embedded systems like smart cameras (CPU and DSP based). We currently use GPGPU for high performance processing and I am wondering if there are GPU embedded technologies available in the market? (The only one I came across was from GE Intelligent Platforms that targets the defense sector)

Iron

Thanks, I can listen to it from home then.

 

Iron

Of course it's snowing in Minneapolis - it's Monday.

 

@mahmood  It'd be 9:00 PM Jordanian time if it's now 7:12 PM.

Iron

Hi all.  28 degF and snowing in Minneapolis today.

Iron

Hello from Aurora Ontario. Light flurries with cloudy weather today to get us in the mood for spring.

Iron

I know that, but I don't know what time that is in Jordanian time.

Iron

Hello from Reston, VA

Iron

The class starts at 11 AM California time (PDT), 2 PM New York time (EDT), 6 PM London time (GMT).

Blogger

It starts in a little less than 2 hours from now.

 

Iron

Good Morning from overcast (but at least not snowing) New Jersey

Iron

It is 7:08 pm here; does the class start in one hour's time?

 

Iron

Hello from Chicago

Iron

Good morning from Texas

Iron

Hello all from Jordan

 

Iron

And from there a search for "OpenCV demo" -- with no quotes -- finds the executable demo.

Now people can run it before the class.

Iron

Greetings from Fresno, CA

Iron

Good Afternoon, everyone

Iron

Old and sneaky beats young and smart any day of the week. :-)

Iron

Let's try this...  Site: embedded-vision.com

Iron

Hello from sunny SE Lake Simcoe area of Ontario, Canada. i.e. out back of beyond

Iron

Hi Everyone.  I'll be presenting today's session, and I'm looking forward to it!  Feel free to post your questions prior to the session start, and I'll do my best to address them before or during the session.  Also, if you want to do some warm-up prior to today's session, there are some excellent free resources on the Embedded Vision Alliance web site.  I particularly recommend the Embedded Vision Academy area of the site, where you will find great technical articles and free downloads, such as the BDTI OpenCV Executable Demo Package and the BDTI Quick-Start OpenCV Kit.  (All require free registration on the Embedded Vision Alliance web site.)  The chat system doesn't seem to allow me to post URLs, so let me suggest that the easiest way to find the site is to do a Google search on "Embedded Vision Alliance."

Blogger

GOOD MORNING from SUNNY Boston! 

Iron

Sorry, I am having some trouble with the text chat, let me try that again...

@bartholemew: Slides aren't online yet as I write this - it would be interesting if the material covers when and whether the data reduction algorithms should mimic the operation of biological systems (e.g. human/animal eye) vs. preprocessing the raw dataset in a machine-only paradigm, i.e. one where the biological system approach might end up abstracting away or losing information from raw sensor data that a machine-only paradigm could make good use of.

The slides are now available for download. Virtually all of the vision systems that I've seen do not try to mimic biological processes, but that is a very intriguing angle, and some people are pursuing it. E.g., Eutecus is a company that takes a biologically inspired approach.  On their web site they have a white paper describing their approach.  (I'd paste the link but the system doesn't seem to allow that.)

 

Blogger

Good morning, everyone

@bartholemew: Slides aren't online yet as I write this - it would be interesting if the material covers when and whether the data reduction algorithms should mimic the operation of biological systems (e.g. human/animal eye) vs. preprocessing the raw dataset in a machine-only paradigm, i.e. one where the biological system approach might end up abstracting away or losing information from raw sensor data that a machine-only paradigm could make good use of.


Blogger

Hi everyone, good morning from Vancouver, Canada.

Iron

Hello from Scottsdale AZ

Iron

Good Morning from GA

Iron

@Alaskaman66  Not sure what you are asking?  You should be able to download and save to your desktop

Iron

@Lawson,  Yea, I love Portland too.  Great cross section of things to do.

Iron

@mharkins Mmmmm, Portland.  Love that place!

Iron

Good morning from Portland Oregon

Iron

Good Morning from Milwaukee!

Iron

Good Morning from Valdez. I notice the slides are in PDF format. Can they be renamed and saved?

Good morning from Millington Michigan.

Iron

Good morning from Mobile, AL

Hello Sir/Madam,

Is the March 18th class a webinar?

Iron

The slides are good, different.

Iron

@chkarak33   The archived version of the course will be available right after the live session, and you can listen at any time.

Iron

This lecture series looks pretty interesting from reading the topic summaries.   Slides aren't online yet as I write this - it would be interesting if the material covers when and whether the data reduction algorithms should mimic the operation of biological systems (e.g. human/animal eye) vs. preprocessing the raw dataset in a machine-only paradigm, i.e. one where the biological system approach might end up abstracting away or losing information from raw sensor data that a machine-only paradigm could make good use of.

Thanks for the information! But what if I cannot be online on Monday, March 18 at 2:00 p.m. EDT?
Can I participate in the classroom?

Iron

