How to Build Better Applications with Edge AI
Edge AI computes locally, often on a real-time operating system, which excels in providing fast responses.
August 16, 2024
At a Glance
- Edge AI has been solving problems for years, including the advanced driver assistance system in your car.
- Unlike the generative AI craze in data centers, edge AI is lower power and runs near the endpoint.
- The world of edge AI is currently focused on machine learning, a subset of AI.
Edge AI deploys artificial intelligence (AI) algorithms directly on local devices, such as smartphones, IoT devices, and driver assistance systems. As an alternative to centralized cloud servers, edge AI processes data where it is generated, which speeds real-time decision-making and reduces latency — critical in applications like autonomous vehicles, industrial automation, and smart cities. By processing data at the edge, edge AI reduces the need for constant internet connectivity and minimizes data transmission costs, while enhancing privacy and security.
Edge AI is particularly beneficial when immediate responses are required and bandwidth is limited. Edge AI enables efficient, responsive, and secure operations, which makes it well suited for future AI-driven technology. The ability to analyze and act on data locally can drive significant advances in electronics, aerospace, healthcare, and manufacturing.
We caught up with Shawn Luke, technical marketing engineer at DigiKey, to get insight into the value of building applications with edge AI.
What are the advantages of building applications by using Edge AI?
Shawn Luke: Edge AI is well suited for finding patterns. The patterns could be sensing that a human is present, that someone just spoke a “wake word” for a smart home assistant, or that a motor is starting to wobble. Unlike the generative AI craze, which largely runs in data centers, edge AI is lower power and runs near the endpoint (including the sensors). Edge AI has been solving problems for years, including the advanced driver assistance system (ADAS) in your car, which beeps when you’re drifting out of your lane. For the smart home assistant, wake words like “Alexa” or “Hey Siri” are models that run at the edge (without sending your voice to the cloud). The wake word wakes up the device and lets it know it’s time to dispatch further commands.
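The always-on, local pattern-spotting Luke describes can be sketched in a few lines. A real wake-word engine runs a trained neural network on audio features; in this self-contained sketch, a simple energy threshold stands in for the model, but the loop structure — scan windows of sensor data locally, trigger only on a hit, send nothing to a server — is the same.

```python
# Minimal sketch of an always-on local detector loop. A real
# wake-word engine runs a trained neural network on audio features;
# here a simple energy threshold stands in for the model so the
# example is self-contained.

def window_energy(samples):
    """Mean squared amplitude of one audio window."""
    return sum(s * s for s in samples) / len(samples)

def detect_events(stream, window_size=4, threshold=0.5):
    """Scan a sample stream and return start indices of windows that
    trip the detector -- all computed locally, nothing sent upstream."""
    hits = []
    for start in range(0, len(stream) - window_size + 1, window_size):
        window = stream[start:start + window_size]
        if window_energy(window) > threshold:
            hits.append(start)
    return hits

# Quiet background with one loud burst (a stand-in for a spoken wake word).
stream = [0.01, -0.02, 0.01, 0.0, 0.9, -0.8, 0.85, -0.9, 0.02, 0.01, -0.01, 0.0]
print(detect_events(stream))  # -> [4]: the burst occupies the second window
```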
What do you see as the greatest value of using edge AI?
Luke: I think the greatest value of edge AI is the speed it can provide for critical applications. Unlike cloud or data center AI, edge AI does not send data over network links and then hope for a reasonable response time. Rather, edge AI does its computation locally, often on a real-time operating system (RTOS), which excels at providing fast responses. For situations like running machine vision on a factory line, where a flagged product must be diverted within a second, edge AI is well equipped. Likewise, you wouldn’t want signals from your car to depend on the response times of the network or servers in the cloud.
What’s at the core of edge AI?
Luke: The world of edge AI is currently focused on machine learning (ML), a subset of artificial intelligence (AI). The ML models are loaded with training data, and the parameters are adjusted to get the intended outputs.
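“Loading training data and adjusting parameters” can be made concrete with a toy example. Production edge models are neural networks trained with frameworks like TensorFlow, but the principle below — nudge a parameter until the model’s outputs match the training data — is the same idea at its smallest scale.

```python
# Minimal sketch of training as parameter adjustment: gradient
# descent fitting y = w * x on a toy data set. Real edge models are
# neural networks with millions of parameters, but each one is
# adjusted by the same nudge-to-reduce-error loop shown here.

def train(data, lr=0.1, epochs=50):
    w = 0.0  # the single adjustable parameter
    for _ in range(epochs):
        for x, y in data:
            error = w * x - y   # how far the output is from the target
            w -= lr * error * x # nudge the parameter to shrink the error
    return w

# Training data generated by the "true" relationship y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)
print(round(w, 3))  # -> 2.0 (the parameter converges to the true slope)
```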
DigiKey is well positioned to assist in edge AI implementations, as they generally run on microcontrollers, FPGAs, and single-board computers (SBCs), all of which DigiKey sells through partnerships with top suppliers. Additionally, DigiKey has put together a dedicated page on edge AI that houses top videos, articles, products, and further answers.
Explain the use of edge AI for applications in automotive ADAS.
Luke: Automotive ADAS is a time-critical application. ADAS systems have trained machine learning models that run safety features to assist the driver. Some of those features include collision warning, blind spot detection and movement detection (when the automobile is in reverse).
The block diagram and related components for ADAS can be seen in the accompanying figure.
Explain the use of edge AI for applications in patient monitoring.
Luke: Edge AI implementations for patient monitoring are based on anomaly detection using medical sensors. One example could be a bed that re-adjusts the incline when a patient starts to snore to improve the patient’s quality of sleep.
Explain the use of edge AI for applications in predictive maintenance.
Luke: Predictive maintenance can help an organization proactively life-cycle equipment before it burns out. Edge AI does anomaly detection to sense when motors are starting to become unbalanced or wobbling. Servicing a motor before it burns out can save money by avoiding unnecessary downtime.
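The wobble detection Luke describes is, at its core, flagging readings that drift from a healthy baseline. The sketch below uses a simple z-score test on raw vibration amplitudes to illustrate that idea; real deployments typically run a trained model on spectral features of the vibration signal rather than a bare statistical threshold.

```python
import statistics

# Minimal sketch of anomaly detection for predictive maintenance:
# flag vibration readings that deviate sharply from a baseline
# learned while the motor was healthy. Real systems usually run a
# trained model on spectral features instead of a raw z-score.

def fit_baseline(healthy_readings):
    """Learn normal vibration statistics from a healthy motor."""
    return statistics.mean(healthy_readings), statistics.stdev(healthy_readings)

def is_anomalous(reading, baseline, z_limit=3.0):
    """True when a reading sits more than z_limit standard deviations
    from the healthy mean -- time to schedule service."""
    mean, stdev = baseline
    return abs(reading - mean) > z_limit * stdev

healthy = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.02, 0.98]
baseline = fit_baseline(healthy)
print(is_anomalous(1.03, baseline))  # -> False: normal reading
print(is_anomalous(2.5, baseline))   # -> True: the motor is wobbling
```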
How are edge AI models generated for ML software solutions and software as a service (SaaS) solutions?
Luke: There are three likely paths to getting the right software for your edge AI hardware:
- Get a pre-flashed device with the ML model already on it. This is more common with a dedicated AI sensor, such as a people counter.
- Use a software as a service (SaaS) solution like Edge Impulse to upload your data set and get the model for your target device.
- Download software libraries, like TensorFlow, to generate your ML model.
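Whichever path above produces the model, it usually must be shrunk to fit the target device. A common step is post-training quantization: mapping float32 weights onto 8-bit integers. The sketch below shows the core affine-quantization arithmetic — it is not the TensorFlow Lite converter API, just an illustration of what such a converter does to each weight tensor.

```python
# Minimal sketch of post-training quantization, the kind of step a
# model converter performs when preparing a trained model for an
# edge device. This is NOT the TensorFlow Lite API -- just the core
# idea: map float32 weights onto int8 to shrink the model ~4x.

def quantize(weights):
    """Affine-map floats to int8; return the values plus the scale
    and zero point needed to recover approximate floats later."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0
    zero_point = round(-lo / scale) - 128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights on-device at inference time."""
    return [(v - zero_point) * scale for v in q]

weights = [-0.51, 0.0, 0.27, 0.98]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Each restored weight is within one quantization step of the original.
print(all(abs(a - b) <= scale for a, b in zip(weights, restored)))  # -> True
```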
Explain the use of AI in sensors, whether audio or visual sensors.
Luke: Some AI sensors come preloaded with an ML model ready to run. For example, this SparkFun eval board for sensing people is preprogrammed to detect faces and return info over the Qwiic I2C interface. Some AI sensors, like the Nicla Vision from Arduino or this OpenMV Cam H7 from Seeed Technology, are more open-ended and need to have the ML model trained on what they are looking for (defects, objects, etc.).
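“Return info over I2C” typically means the host reads a small, fixed-layout result packet from the sensor and unpacks it. The byte layout below is hypothetical — invented for illustration, not the actual protocol of the SparkFun board or any other product — but it shows the general pattern of consuming structured detections from an AI sensor.

```python
import struct

# Sketch of consuming structured results from an AI sensor over I2C.
# The byte layout is HYPOTHETICAL -- invented for illustration, not
# any board's real protocol -- but the pattern is typical: read a
# small packet, then unpack a count followed by per-detection fields.

def parse_detections(packet):
    """Hypothetical layout: 1 byte face count, then per face four
    unsigned bytes (x, y, width, height)."""
    count = packet[0]
    faces = []
    for i in range(count):
        x, y, w, h = struct.unpack_from("BBBB", packet, 1 + 4 * i)
        faces.append({"x": x, "y": y, "w": w, "h": h})
    return faces

# Bytes as they might arrive from a bus read (two detected faces).
packet = bytes([2, 10, 20, 32, 32, 100, 60, 28, 28])
for face in parse_detections(packet):
    print(face)
```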