@MazianLab Jeff, how do you integrate embedded vision and system-on-a-chip? That's a very broad question. I'll address one aspect: Vision applications typically comprise a series of processing steps. The front-end steps (nearest the sensor) process extremely high data rates but use relatively simple algorithms. These steps are typically implemented on some sort of highly parallel programmable processor, such as an FPGA, GPU, or DSP. Later algorithm steps work on reduced data rates (e.g., features rather than pixels) but use much more complex algorithms. These steps can often run efficiently on general-purpose CPUs. So an SoC that does embedded vision will usually have a combination of one or more programmable parallel processing engines (they have to be programmable because the algorithms tend to change quickly) and a general-purpose CPU.
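To make the split concrete, here's a minimal sketch (in NumPy, purely illustrative — the function names and thresholds are my own assumptions, not from any particular SoC vendor) of the two-stage pattern described above: a data-parallel front end that touches every pixel, and a lighter back end that reasons over the extracted features.

```python
# Hypothetical sketch of the front-end/back-end split in a vision pipeline.
# front_end(): pixel-rate, simple, data-parallel work -- the kind of stage
#   that would be offloaded to an FPGA/GPU/DSP on a real SoC.
# back_end(): feature-rate, branchier decision logic -- the kind of stage
#   that runs comfortably on a general-purpose CPU.
import numpy as np

def front_end(frame: np.ndarray, threshold: float = 50.0) -> np.ndarray:
    """Visit every pixel: gradient magnitude + threshold -> sparse edge points.

    Each operation is simple and independent per pixel, so this maps
    naturally onto a highly parallel engine.
    """
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > threshold)   # reduce pixels to a feature list
    return np.column_stack([ys, xs])       # (N, 2) array of edge coordinates

def back_end(features: np.ndarray, min_edges: int = 10) -> bool:
    """Operate on the (much smaller) feature list, not raw pixels.

    Placeholder decision rule: report "object present" if enough edge
    points were found. Real back ends run far more complex algorithms,
    but at a similarly reduced data rate.
    """
    return len(features) >= min_edges

# Synthetic frame: a bright square on a dark background.
frame = np.zeros((64, 64))
frame[20:40, 20:40] = 200.0
feats = front_end(frame)       # many pixels in, a short list of features out
print(back_end(feats))
```

The data-rate reduction is the whole point: the front end consumes the full 64×64 pixel array, while the back end sees only the handful of edge coordinates along the square's border.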
As energy efficiency becomes an ever-greater concern for makers of electronic devices, researchers are coming up with new ways to harvest energy from sound vibrations, footsteps, and even electromagnetic fields in the air.
The government wants to study your brain, and DARPA wants to use similar information to give robots true autonomy beyond any artificial intelligence developed to date. Sound like science fiction? It's not.
A quick look at the merger of two powerhouse 3D printing OEMs and the new leader in rapid prototyping solutions, Stratasys. 3D printing is now driving an industrial revolution, giving engineers the opportunity to fully maximize their design capabilities, reduce their time-to-market, and functionally test prototypes more cheaply, quickly, and easily. Bruce Bradshaw, Director of Marketing in North America, will explore the large product offering and variety of materials that help CAD designers articulate their product designs with actual, physical prototypes. This broadcast will dive deep into technical information, including application-specific stories from real-world customers and their experiences with 3D printing. 3D Printing is