Edge AI is finally starting to get the attention of the technical trade press. It has been a real thing for a while, particularly in autonomous driving and wearables, but other applications are now beginning to draw notice as well.
A couple of recent demonstrations by startups have also driven some press. One showed a limited visual-recognition neural network capable of detecting the presence of people from a small camera, all on a chip powered by a single solar cell. Another claims gesture recognition on a Cortex®-M0 that performs as well as a deep learning model requiring 10x the memory footprint.
Cool Demos for PR, Not Products
Cool as they are, what both of these examples leave out is any indication of actual accuracy, or of real-life applications that would use the machine learning models demonstrated. The results make for good PR but are perhaps not fully deployable products that meet a robust set of requirements.
It turns out that it is much, much easier to build cool demonstrations on constrained hardware if you can exclude the difficult parts of the problem and not worry about hitting the accuracy thresholds needed for viable deployment.
In 2019, Reality AI (now Renesas) unveiled an Edge AI anomaly detection demonstration built on the NXP i.MX RT hardware platform. This board from NXP is intended for companies developing real solutions to real problems, allowing them to develop, prototype, and deploy on a single board, or to drive specifications for a custom board/SoC for inclusion in a mass-market product.
Real Problems, Real Solutions, Reality AI
The Reality AI demo shows an actual solution to an actual problem: a predictive maintenance application on a piece of rotating equipment.
The demo monitors an industrial fan with an in-line filter using a single i.MX RT board running Edge AI inference software generated automatically from test data by Reality AI Tools®. That software identifies whether the fan is idle, coming up to speed, running normally, experiencing a blockage, or running in an unbalanced condition.
Simultaneously, it delivers an anomaly score indicating how closely the current condition of the fan resembles "normal" and raises an alert if a previously unseen anomaly is detected (e.g., a damaged fan blade, a bearing problem, etc.).
It also delivers a real-time prediction of the remaining useful life of the in-line filter, indicating how long before the filter needs to be replaced.
All on the same board. All with production-level accuracy.
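To make the shape of such a multi-output pipeline concrete, here is a minimal Python sketch of the same idea: classify the machine state, score how far the current behavior sits from "normal," and map that into a remaining-useful-life estimate. All of the names, weights, and thresholds below are hypothetical stand-ins; the actual on-target code is generated by Reality AI Tools and runs as compiled firmware on the i.MX RT, not as Python.

```python
import numpy as np

# Hypothetical pre-trained artifacts; in the real product these are
# produced by Reality AI Tools and compiled for the embedded target.
STATES = ["idle", "spin-up", "normal", "blocked", "unbalanced"]

rng = np.random.default_rng(0)
W = rng.normal(size=(len(STATES), 8))    # stand-in classifier weights
b = rng.normal(size=len(STATES))         # stand-in classifier biases
mu, sigma = np.zeros(8), np.ones(8)      # feature statistics of "normal"

def features(window: np.ndarray) -> np.ndarray:
    """Toy spectral features: mean energy in 8 frequency bands of one window."""
    spectrum = np.abs(np.fft.rfft(window))
    return np.array([band.mean() for band in np.array_split(spectrum, 8)])

def infer(window: np.ndarray):
    x = features(window)
    state = STATES[int(np.argmax(W @ x + b))]           # condition label
    anomaly = float(np.linalg.norm((x - mu) / sigma))   # distance from "normal"
    rul_hours = max(0.0, 500.0 - 50.0 * anomaly)        # toy remaining-life proxy
    return state, anomaly, rul_hours

# One 256-sample vibration window, e.g. from an accelerometer
print(infer(rng.normal(size=256)))
```

The point is structural: a single feature-extraction step feeds the classifier, the anomaly score, and the remaining-useful-life estimate at once, which is what makes running all three outputs on one small board practical.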
These are not demonstrations contrived to generate favorable publicity. They are real solutions to the kinds of real engineering problems that our customers use Reality AI software to solve every day, running on real hardware they might actually deploy.
Quick, Automatic & Explainable AI
The software used in the demonstration was created with Reality AI Tools, the company's cloud-based system for feature discovery and automatic machine learning model creation.
In Reality AI Tools, an engineer can apply powerful search algorithms to their data that explore tens of thousands of candidate feature spaces and identify those that support the most accurate machine learning models. They can then automatically generate machine learning models on those feature spaces, let the algorithm tune parameters and hyperparameters automatically, and export the code to the Linux or embedded chipset target of their choice.
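As a rough illustration of what a feature-space search looks like in miniature, the sketch below scores a few candidate feature transforms by cross-validated accuracy and keeps the best one. The data, the transforms, and their names are all synthetic toys; Reality AI Tools searches tens of thousands of spaces and tunes hyperparameters automatically, which this sketch does not attempt.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: 200 vibration windows, 2 machine conditions,
# with a faint tone added to the "faulty" class.
rng = np.random.default_rng(1)
X_raw = rng.normal(size=(200, 256))
y = rng.integers(0, 2, size=200)
X_raw[y == 1] += 0.5 * np.sin(np.linspace(0, 40 * np.pi, 256))

# Candidate feature spaces, analogous in spirit (though not in scale)
# to the spaces a feature-discovery search would explore.
def time_stats(X):  return np.c_[X.mean(1), X.std(1), np.abs(X).max(1)]
def band_energy(X): return np.abs(np.fft.rfft(X))[:, :32]
def diff_stats(X):  return np.diff(X, axis=1).std(1, keepdims=True)

candidates = {"time_stats": time_stats,
              "band_energy": band_energy,
              "diff_stats": diff_stats}

# Score each candidate space by cross-validated accuracy; keep the best.
scores = {name: cross_val_score(LogisticRegression(max_iter=1000),
                                f(X_raw), y, cv=5).mean()
          for name, f in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> best feature space:", best)
```

Here the frequency-domain space wins because the fault signature is a tone; the value of automating the search is that the engineer does not have to guess that in advance.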
Most importantly, engineers can use Reality AI Tools to explore those feature sets and understand exactly what the machine learning model is doing and what it bases its decisions on. They can generate a time-frequency heatmap showing which parts of the spectrum and which time structures matter most for model performance, and use that information to explain how their machine learning solution works and why.
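The concept behind a time-frequency importance map can be approximated in a few lines. The sketch below compares spectrograms of a synthetic "normal" recording and one with a fault that appears partway through, then locates the time-frequency cell where they diverge most. The heatmaps in Reality AI Tools are derived from the trained model rather than a simple power ratio, so treat this purely as an illustration of the idea.

```python
import numpy as np
from scipy.signal import spectrogram

# Two synthetic fan recordings: "normal", and one where a 120 Hz tone
# appears after t = 2 s, standing in for a developing fault.
fs = 1024
t = np.arange(4096) / fs
rng = np.random.default_rng(2)
normal = rng.normal(size=t.size)
faulty = normal + np.where(t > 2.0, np.sin(2 * np.pi * 120 * t), 0.0)

# Time-frequency decomposition of each recording.
f, ts, S_norm = spectrogram(normal, fs=fs, nperseg=256)
_, _, S_fault = spectrogram(faulty, fs=fs, nperseg=256)

# Crude divergence map in dB: where do the two conditions differ most?
heat = 10 * np.log10((S_fault + 1e-12) / (S_norm + 1e-12))

# Report the dominant cell, the kind of structure a time-frequency
# heatmap would surface visually for the engineer.
i, j = np.unravel_index(np.abs(heat).argmax(), heat.shape)
print(f"largest divergence at {f[i]:.0f} Hz, t = {ts[j]:.2f} s")
```

Run on this synthetic data, the map lights up around 120 Hz after the two-second mark, which is exactly the kind of "what is the model looking at, and when" answer that makes a solution explainable.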
Reality AI Tools delivers solutions that are quick, automatic, and fully explainable.