You are covering FPGAs now -- any comments on using GPU / Nvidia CUDA technology to do the image processing?
I'll cover GPU technology only briefly. It is certainly a very interesting technology, and many people have used it successfully to speed up image processing. Both FPGAs and GPUs have a radically different interface than CPUs, in terms of how you program them and how you get data on and off the processing unit.
My presentation today will focus mainly on FPGAs and how they work for embedded vision. I think GPUs work well as a coprocessor, although many industry and research engineers have very strong feelings on the FPGA-versus-GPU debate.
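To make the coprocessor point concrete, here is a minimal sketch (not from the presentation) of the kind of per-pixel image operation typically offloaded to a GPU or FPGA: a 3x3 box blur, where every output pixel depends only on a small, fixed neighborhood. Because each pixel's result is independent, the loop body maps naturally onto a GPU's thousands of threads or an FPGA's pipelined logic; the function name and plain-Python form are illustrative only.

```python
def box_blur(img):
    """3x3 box blur over a 2D list of pixel values.

    Each interior pixel becomes the average of itself and its 8
    neighbors. The per-pixel work is independent, which is exactly
    the data-parallel pattern that accelerators exploit -- though on
    a real GPU you would also pay to copy the image to and from
    device memory, the interface cost mentioned above.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # borders copied unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(
                img[y + dy][x + dx]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
            ) / 9.0
    return out
```

On a CPU this runs serially; an accelerated version would launch one thread (or pipeline stage) per pixel, which is why the data-transfer and programming-model differences dominate the FPGA-versus-GPU tradeoff.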
A new service lets engineers and orthopedic surgeons design and 3D print highly accurate, patient-specific metal orthopedic implants -- without owning a 3D printer. Using free, downloadable software, users can import ASCII and binary .STL files, design the implant, and send an encrypted design file to a third-party manufacturer.
For industrial control applications, or even a simple assembly line, a machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That's where the "smart" machine comes in. A smart machine has some processing capability -- simple in some cases, complex in others -- that lets it adapt to changing conditions. Such machines suit a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, and consumer goods. This discussion will examine what's possible with smart machines, and what tradeoffs must be made to implement such a solution.