
New MPU Platform for Vision AI Applications Delivers Performance, Power Efficiency, and Customer Ease of Use to the Network Edge

Mohammed Dogar
Vice President & Head of Global Business Development & Ecosystem
Published: April 4, 2024

Vision AI is among the fastest-growing embedded artificial intelligence disciplines, joining AI-enhanced voice tools and real-time analytics as a means of quickly gathering, processing, and training models on massive amounts of data. ITR Economics forecasts that the vision AI market will grow from $500 million in 2020 to $1.3 billion in 2025, a compound annual growth rate of 22 percent.

Demand for embedded vision AI solutions is being driven by the industry's continued shift away from heavy reliance on cloud-connected communications in favor of AI solutions at the network edge. Edge-based AI systems allow end users to make informed decisions at a scale and speed never before possible, but they must do so while delivering optimized processing speed, energy consumption, and customer ease of use.

As an embedded technology solutions leader committed to bringing the benefits of AI to a broad range of customers, Renesas recently unveiled our latest vision-based AI solution, the RZ/V2H microprocessor (MPU) platform. Developed for industrial, home, office, and smart city applications, this new approach to robotics automation enables designers to embed vision sensing systems quickly and easily at the edge or endpoint without the cost, latency, and power penalties of cloud-based solutions.  

Advancing Vision AI Processing Performance

The new Renesas quad-core RZ/V2H MPU is a single platform that accelerates multi-image processing and improves the accuracy of automated factory equipment, robotics controls, transportation systems, and a host of other end applications by supporting up to four cameras, or six when the included USB port is used.
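As a rough illustration of what a multi-camera capture loop might look like on an embedded Linux target, the sketch below uses OpenCV's standard VideoCapture interface with assumed device indices and resolutions; it is not RZ/V2H-specific code.

```python
# Hypothetical sketch: polling several camera streams with OpenCV.
# The camera indices and resolution are assumptions, not RZ/V2H-specific values.
import cv2

CAMERA_INDICES = [0, 1, 2, 3]  # e.g. /dev/video0 .. /dev/video3


def open_cameras(indices):
    """Open each camera and return only the ones that respond."""
    captures = []
    for idx in indices:
        cap = cv2.VideoCapture(idx)
        if cap.isOpened():
            cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
            cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
            captures.append((idx, cap))
    return captures


def grab_frames(captures):
    """Read one frame from every open camera; skip any that fail."""
    frames = {}
    for idx, cap in captures:
        ok, frame = cap.read()
        if ok:
            frames[idx] = frame
    return frames


if __name__ == "__main__":
    cams = open_cameras(CAMERA_INDICES)
    frames = grab_frames(cams)
    print(f"Captured frames from {len(frames)} of {len(CAMERA_INDICES)} cameras")
    for _, cap in cams:
        cap.release()
```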

From a raw performance perspective, the RZ/V2H platform includes Renesas' third-generation dynamically reconfigurable processor (DRP). The proprietary DRP-AI3 accelerator delivers a tenfold performance improvement over previous models, enabling the new MPU platform to reach a processing speed of 80 trillion operations per second (TOPS), up from 0.5 to 1.0 TOPS for previous-generation MPUs.

Renesas also leaned on its proprietary DRP technology to develop the OpenCV Accelerator, which speeds up OpenCV, the open-source, industry-standard library for computer vision processing. The combination of the DRP-AI3 and the OpenCV Accelerator enhances both AI computing and image-processing algorithms, processing data 16 times faster than a conventional CPU.
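For context, the snippet below is a representative OpenCV pre-processing chain of the kind such an accelerator is designed to offload. Every call is a standard OpenCV function; nothing here is specific to the Renesas build, and the frame sizes are illustrative assumptions.

```python
# Illustrative only: a typical OpenCV pre-processing chain (resize, color
# conversion, blur, edge detection). These are standard OpenCV calls that
# run on any build; acceleration on a given platform is transparent to the code.
import cv2
import numpy as np


def preprocess(frame: np.ndarray) -> np.ndarray:
    """Prepare a camera frame for downstream AI inference."""
    resized = cv2.resize(frame, (640, 480))
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)
    edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
    return edges


if __name__ == "__main__":
    dummy = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in for a captured frame
    print(preprocess(dummy).shape)
```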

RZ/V2H MPU with Integrated AI Accelerator

Power Efficient Design Eliminates Fans and Heat Sinks in AI Vision Systems

Thanks to the DRP-AI3 accelerator’s advanced design, the new MPU platform increases power efficiency to 10 TOPS/W, a 10X energy savings over earlier solutions. This extremely power-efficient design removes the need for fans and heat sinks required by competing solutions, saving significant space, cost, and design time for AI applications operating at the power-sensitive network edge.

Renesas achieved this breakthrough with a novel hardware and software approach that coordinates the AI accelerator and the main processor to quickly process a variety of algorithms. The DRP-AI3 accelerator's other power-saving innovations center on AI model light-weighting: quantization, which reduces neural network weights to lower bit-width representations, and pruning, which skips calculations by zeroing out weight values to improve machine learning efficiency.
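To make those two techniques concrete, here is a minimal NumPy-only sketch of symmetric INT8 quantization and magnitude-based pruning. Real toolchains, including vendor model converters, apply these during model preparation; treat this purely as an illustration of the underlying arithmetic, with the sparsity target chosen arbitrarily.

```python
# Conceptual illustration of model light-weighting: INT8 quantization and
# magnitude pruning, written with NumPy only.
import numpy as np


def quantize_int8(weights: np.ndarray):
    """Map float32 weights to INT8 using a single symmetric scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale  # recover approximate values with q.astype(np.float32) * scale


def prune_by_magnitude(weights: np.ndarray, sparsity: float = 0.9):
    """Zero out the smallest-magnitude weights so those multiplies can be skipped."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)


if __name__ == "__main__":
    w = np.random.randn(1000).astype(np.float32)
    q, s = quantize_int8(w)
    p = prune_by_magnitude(w, sparsity=0.9)
    print("int8 range:", q.min(), q.max(), "| zeroed weights:", int(np.sum(p == 0)))
```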

Renesas' Vision AI MPU Platform Makes Customers' Lives Easier

To promote customer ease of use, Renesas has released the RZ/V2H evaluation board along with an AI Applications library of pre-trained models and an AI SDK. Together, these tools help engineers evaluate applications easily and earlier in the design process, even if they lack extensive AI expertise. Renesas has also prepared more than 50 application examples that can be downloaded and used free of charge across multiple end uses. With another 50 application examples soon to be released, designers can draw on a broad array of potential use cases, including the following (a generic sketch of one such application loop appears after the list):

  • Defect Inspection: Monitors factory production to detect visual faults in products
  • Touchless Industrial Controls: Replaces physical controls with hand gestures
  • Crop Defense: Alerts farmers to stray or wild animals before they damage crops
  • Elevator Use: Enables touchless controls and passenger counting
  • Parking Reservations: Tracks parking spot vacancies in real time
  • Smart POS: Optimizes retail check-out
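As a generic sketch of how a use case such as defect inspection might be structured, the example below uses OpenCV's DNN module as a stand-in inference runtime. The model path, input size, class index, and threshold are placeholder assumptions, and this is not the Renesas AI SDK API.

```python
# Hypothetical defect-inspection loop using OpenCV's DNN module as a generic
# stand-in for an AI runtime. Model path and threshold are placeholders.
import cv2

MODEL_PATH = "defect_classifier.onnx"   # placeholder ONNX model
INPUT_SIZE = (224, 224)                  # assumed model input resolution
DEFECT_THRESHOLD = 0.5                   # assumed decision threshold


def inspect_frame(net, frame) -> bool:
    """Return True if the frame is classified as defective."""
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1.0 / 255,
                                 size=INPUT_SIZE, swapRB=True)
    net.setInput(blob)
    scores = net.forward().flatten()
    return float(scores[0]) > DEFECT_THRESHOLD  # assume index 0 = "defect"


if __name__ == "__main__":
    net = cv2.dnn.readNetFromONNX(MODEL_PATH)
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok and inspect_frame(net, frame):
        print("Defect detected")
    cap.release()
```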

In the future, we foresee vision AI being supplemented by generative AI at the edge, which will bring a new level of design complexity depending on specific data execution needs and desired performance levels. Today, generative AI is still a costly and power-hungry option used predominantly for processing massive data sets, but in time we believe the two will work in concert to facilitate flexible, scalable, cost-effective decision-making.

Together, the two could enable more complex image processing and even the integration of embedded vision systems with other AI processing models. In any case, the trend is clear: AI is coming to the network edge.
