AnandY, Cabe, yes I agree. I recently led a roundtable at COFES (the Congress on the Future of Engineering Software) on software development for embedded systems. One new insight came from a participant involved in some non-consumer applications of heterogeneous compute systems: these environments seemed to be pushing software developers back towards low-level coding (assembler/machine code). This was not a deliberate strategy; it was just that individuals trying to solve synchronization problems seemed to want to work at that level.

I see this as a natural engineering reaction, something along the lines of "it's a difficult problem, let me get my hands on everything that might be relevant", and, in this case, the lower-level software tools require you to look at (and handle) more of the workings of the electronics.

However, I believe that higher-level tools will eventually help solve these problems. As teams gain experience, more issues (including synchronization between co-operating processors) will be defined or solved at the architecture level. If the architecture defines the solution, then a software engineer will be able to use high-level languages and models and implement according to the architecture. And if the software implements the architecture, then the lower-level synchronization 'must' be OK (provided the architecture is right).

Of course, architecture-level solutions tend to be quite general, so someone wanting better performance in a special case may find themselves trading development effort and re-usability against product price/performance.

Sorry for the long post; there's plenty to do to deliver on the potential of heterogeneous compute!
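To make the "solve synchronization at a higher level" point concrete, here is a minimal sketch (names and workload are hypothetical, and a library primitive stands in for whatever the architecture would actually prescribe): co-operating workers coordinated with a ready-made barrier instead of hand-rolled low-level synchronization.

```python
# Illustrative sketch only: a high-level synchronization primitive
# (threading.Barrier) coordinates co-operating workers, in place of
# hand-coded low-level synchronization. Workload is made up.
import threading

N_WORKERS = 4
results = [0] * N_WORKERS
barrier = threading.Barrier(N_WORKERS)

def worker(i):
    # Phase 1: each worker writes its partial result.
    results[i] = i * i
    # The barrier guarantees every partial result is written
    # before any worker proceeds to the next phase.
    barrier.wait()
    # Phase 2: all workers can now safely read each other's results.

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [0, 1, 4, 9]
```

The trade-off described above shows up here too: the barrier is general and easy to reason about, while a hand-tuned low-level scheme might be faster for one specific case.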
It makes sense to combine GPU and DSP processors alongside CPUs within a smartphone or a tablet (especially the latter), since both kinds of mobile device are graphics-intensive. Giving them a dedicated graphics processor alongside an independent CPU gives them more overall processing power: the graphics work is offloaded from the CPU, making it more efficient. This idea is not so far removed from the main principles of parallel computing, if you really break it down.
Creating any software for use in heterogeneous computing will, at least at this stage, require that the software lean more on CPU processing and less on the GPU. Of course, as mentioned in the article above, the extent to which the software relies on either processor will depend on the specific tasks it needs to complete. But given that most of the software the market needs right now is not very specialized, adopting a CPU-centric approach to heterogeneous computing will give the software more power and versatility when it comes to executing complex tasks.
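One way to picture a CPU-centric approach is a dispatcher that runs everything on the CPU by default and offloads only work that clearly benefits. This is a rough sketch, not a real driver API: the function names and the threshold are invented for illustration, and the accelerator path is a stand-in.

```python
# Minimal sketch (all names hypothetical): CPU-centric dispatch that
# offloads to an accelerator only when a task is large and data-parallel
# enough to justify the transfer overhead.

OFFLOAD_THRESHOLD = 1_000_000  # assumed cutoff; tuned per platform in practice

def run_on_cpu(data):
    # General-purpose path: handles any task shape.
    return [x * 2 for x in data]

def run_on_accelerator(data):
    # Stand-in for a GPU/DSP kernel launch; here it just mimics the result.
    return [x * 2 for x in data]

def dispatch(data, parallelizable):
    """CPU-centric policy: offload only clearly profitable work."""
    if parallelizable and len(data) >= OFFLOAD_THRESHOLD:
        return run_on_accelerator(data)
    return run_on_cpu(data)

print(dispatch([1, 2, 3], parallelizable=True))  # small task stays on the CPU
```

The point of the sketch is the policy, not the arithmetic: the CPU remains the default and the GPU is an opportunistic accelerator, which matches the CPU-centric stance above.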
Just when you thought mobile technology couldn’t get any more personal, Procter & Gamble has come up with a way to put your mobile where your mouth is, in the form of a Bluetooth 4.0 connected toothbrush.
Some of the grab bag of plastic and rubber materials featured in this new product slideshow are aimed at lighting or automotive applications. The rest serve a wide variety of industries, including aerospace, oil & gas, RF and radar, building materials, and more.
Focus on Fundamentals consists of 45-minute online classes that cover a host of technologies. You learn without leaving the comfort of your desk. All classes are taught by subject-matter experts and are archived, so if you can't attend live, you can attend at your convenience.