The IRS is coming! In fact, it is already here, and unfortunately, embedded systems engineers have been caught off guard and snoozing. As you might have guessed, I'm not talking about the Internal Revenue Service but about Intelligent Real-time Systems (IRS). Intelligent Real-time Systems are microcontroller-based devices that can learn from data by running a resident artificial intelligence (AI) algorithm.
There have always been two different ways that teams could leverage artificial intelligence in their products. The first, and the most realistic for the last decade, has been to execute the AI algorithms in the cloud. The cloud has provided a unique platform where processing power seems limitless when compared to the processing available on a microcontroller. Machine learning (ML) algorithms could be provided with data and trained to recognize patterns that would otherwise have been nearly impossible for a developer to program (think handwriting character recognition).
Systems that use machine learning in the cloud can still use a real-time embedded system to collect data, but that data is then sent to the cloud for processing, and any response is relayed back to the embedded system. As the reader can imagine, this is hardly a real-time or deterministic operation. Using the cloud, though, has worked and will continue to work for many applications for the foreseeable future.
The second approach, which has generally been out of reach for most systems, is to process the data and execute the machine learning algorithm directly on the microcontroller. This is a far more interesting solution because it removes the latency that would otherwise exist if the data had to be processed in the cloud. The potential for businesses here is huge for several reasons:
- No longer requiring an internet connection which could reduce bill of material (BOM) costs and system complexity
- Decrease in operating costs for cloud services and data processing plans
- Offline product differentiation
- Reduction in processing latencies and energy consumption
- Improved product reliability and potentially security
- The use of machine learning in deterministic, real-time systems
As of this writing, the buzz around AI for microcontrollers has only been around for about a year. The push for intelligence at the edge seems like a better fit for application processors, which have far more horsepower than a microcontroller. So how close are we really to having intelligent real-time systems?
The answer will vary greatly depending on who you talk to and what your end application is, but let me provide a few quick examples of what is currently available to developers.
First, Arm has released CMSIS-NN, a C library designed for running low-level, optimized neural network algorithms on Cortex-M processors. It allows developers to design and train their high-level machine learning models and then deploy them onto a microcontroller. CMSIS-NN can be considered the foundation required to run machine learning efficiently and locally, without the cloud.
Second, several products are already available that leverage CMSIS-NN within a resource-constrained environment. A great example is OpenMV, a camera module based on an STM32 microcontroller that provides local processing for capabilities such as:
- Face detection
- Eye detection
- Color tracking
- Video recording
Machine vision is a leading intelligence capability that many real-time embedded systems will require.
Finally, several silicon manufacturers are putting the infrastructure in place to make machine learning more accessible on microcontrollers. In January 2019, STMicroelectronics announced the availability of an STM32CubeMX AI extension that provides a neural network toolbox for the STM32 family. These types of tools will not just make AI accessible on microcontrollers; they will quickly drive a revolution in the types of systems that can be developed and may very well challenge the business models that many companies currently use for their products.
As we have seen, intelligence is quickly making its way from the cloud to the edge. While some of us may have been hoping that intelligence at the edge is just a marketing fad that will fade away, the technical facts show that these capabilities will be available soon if they aren't already. Now is the time to start understanding these technologies and how they can be integrated into your roadmap before it's too late.
Jacob Beningo is an embedded software consultant who currently works with clients in more than a dozen countries to dramatically transform their businesses by improving product quality, cost and time to market. He has published more than 200 articles on embedded software development techniques, is a sought-after speaker and technical trainer, and holds three degrees, including a Master of Engineering from the University of Michigan. Feel free to contact him at [email protected], or at his website, and sign up for his monthly Embedded Bytes Newsletter.