Kurt Dekoski
Business Development Manager
Published: October 4, 2024

With the growing use of autonomous vehicle capabilities, there is less reliance on drivers and vehicle operators, which means products with higher levels of autonomy will need to replicate all the activities a driver or operator would perform. Vehicles and robots will have to use all the senses a human driver or operator uses: not just vision and touch, but also sound and smell.

According to SAE:

  • Level 2 Autonomy indicates that the driver must… monitor the environment.
  • Level 3 Autonomy indicates that the driver is not required to monitor the environment.
  • And when we reach Level 4 Autonomy, the vehicle is capable of… monitoring the environment on its own.

Reality AI software from Renesas is already helping to monitor that environment today: not just the environment outside the product or vehicle for advanced driving and operational capabilities, but also the product itself, to facilitate preventive and predictive maintenance. With the advanced signal processing techniques available to analyze the different signals generated from the environment, Reality AI tools can facilitate a higher level of autonomy and make the vehicle or product safer.

For the various industries driving toward autonomy, Renesas offers the Seeing with Sound (SWS) solution, which augments the current ADAS/AD sensor suite with sound as a sensed modality. When we drive, we rely not only on our sense of sight but also on hearing and even vibration to navigate our surroundings. We hear many sounds from the environment around us whose source we may not be able to see immediately, or at all.

With SWS, the vehicle's microphones or vibro-acoustic sensors, combined with a Renesas MCU like the RH850/U2A, listen for emergency vehicles, other vehicles on the road, pedestrians, and even cyclists.

[Image: Automotive Seeing with Sound (SWS)]

The SWS application is the perfect solution to a problem recently reported in an article from the New York Times, in which a robot vehicle passed in front of an emergency vehicle with its sirens on, delaying its arrival at an emergency. The robot vehicle had a clear line of sight to the oncoming fire truck and didn't recognize it, but it certainly would have heard the sirens if our SWS solution had been implemented. The audible environment around that vehicle would have provided the information that the visual environment missed, delivering a more complete picture of the total environment.
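To make the idea concrete, here is a minimal Python sketch of one way passive audio could flag a siren. This is an illustration only, not the Renesas SWS algorithm: the sampling rate, frame size, siren band, and sweep threshold are all assumptions chosen for the example. It tracks the dominant tone in short audio frames and flags the characteristic up-and-down frequency sweep of a wail siren.

```python
import numpy as np

SAMPLE_RATE = 16_000          # Hz, assumed microphone sampling rate
FRAME_LEN = 1024              # samples per analysis frame (~64 ms)
SIREN_BAND = (500.0, 1800.0)  # Hz, rough wail-siren range (assumption)

def dominant_frequency(frame: np.ndarray) -> float:
    """Frequency (Hz) of the strongest spectral peak in one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    return freqs[np.argmax(spectrum)]

def looks_like_siren(frames: list) -> bool:
    """Heuristic: dominant tone stays in the siren band and sweeps up and down."""
    peaks = np.array([dominant_frequency(f) for f in frames])
    in_band = (peaks > SIREN_BAND[0]) & (peaks < SIREN_BAND[1])
    if in_band.mean() < 0.8:                # most frames must carry an in-band tone
        return False
    return np.ptp(peaks[in_band]) > 200.0   # a wail sweeps by hundreds of Hz

# Example: synthesize one second of a sweeping "wail" and test the detector.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
f_inst = 900 + 300 * np.sin(2 * np.pi * 1.0 * t)           # 600-1200 Hz sweep
tone = np.sin(2 * np.pi * np.cumsum(f_inst) / SAMPLE_RATE)  # integrate for phase
frames = [tone[i:i + FRAME_LEN] for i in range(0, len(tone) - FRAME_LEN, FRAME_LEN)]
print(looks_like_siren(frames))  # True for this synthetic siren-like sweep
```

A production system would, of course, recognize many more sound types and fuse the result with the rest of the sensor suite; the point here is only that the relevant information is present in inexpensive passive audio.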

The Seeing with Sound solution from Renesas allows a vehicle to hear its surroundings and make decisions based on the noise of interest, like the sound of emergency sirens. Sometimes we overlook how much we rely on our sense of hearing when we drive. Even in current vehicles, distracted driving shifts the focus away from the road, and with the passive and active noise cancellation techniques being deployed, it is more difficult for human drivers to recognize all the inputs from the environment. Including SWS in vehicles will make them safer and less reliant on alert drivers. Regulations such as Move Over laws for emergency vehicles and automatic emergency braking requirements are driving the implementation of technologies like SWS.

As industries continue to trend toward autonomy, the use of AI for condition-based monitoring of vehicle or product operational status, and for predictive and preventive maintenance, will continue to rise. As drivers and operators, we typically monitor a product or vehicle's health and schedule required maintenance, or move the vehicle to a safe location to get a better look. Reality AI Tools from Renesas can analyze data from sensors such as accelerometers, gyros, IMUs, and microphones, and even pressure and temperature sensors, to develop ML models that use frequency/magnitude or time-domain signal metrics to calculate feature sets based on mathematical, statistical, and logarithmic formulations. These models generate results like State of Health and Remaining Useful Life metrics for applications ranging from filters to water and washer pumps to tire wear.
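To give a feel for what such a feature set looks like, here is a minimal Python sketch of generic time- and frequency-domain metrics computed from a vibration signal. The feature choices, names, and the fault example are illustrative assumptions, not the actual Reality AI feature set.

```python
import numpy as np

def vibration_features(signal: np.ndarray, sample_rate: float) -> dict:
    """Illustrative time- and frequency-domain condition-monitoring features.

    Generic statistical/spectral metrics only; not the Reality AI feature set.
    """
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    rms = np.sqrt(np.mean(signal ** 2))
    centered = signal - signal.mean()
    return {
        "rms": rms,                                     # overall vibration energy
        "crest_factor": np.max(np.abs(signal)) / rms,   # spikiness vs. energy
        "kurtosis": np.mean(centered ** 4) / centered.std() ** 4,  # impulsiveness
        "peak_freq_hz": freqs[np.argmax(spectrum)],     # strongest spectral line
        "log_energy": np.log(np.sum(spectrum ** 2) + 1e-12),  # logarithmic metric
    }

# Example: a healthy-looking 60 Hz hum vs. the same hum with impulsive faults.
rate = 8_000.0
t = np.arange(int(rate)) / rate
healthy = np.sin(2 * np.pi * 60 * t)
faulty = healthy.copy()
faulty[::400] += 5.0                    # periodic impacts raise kurtosis sharply
print(vibration_features(healthy, rate)["kurtosis"])  # ~1.5 for a pure sine
print(vibration_features(faulty, rate)["kurtosis"])   # noticeably higher
```

In a condition-monitoring pipeline, features like these would feed a model trained against known healthy and degraded units to produce the State of Health and Remaining Useful Life outputs described above.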

Consider all the different variables we monitor when we drive, even anomalies like small impacts, bumps, dings, and scratches, which can be felt as well as heard. Reality AI Tools software can analyze these different signals to classify what they may represent. What if a stone or rock impacts your windshield? After the initial shock, a driver can determine what precautions should be taken. How will the car of the future know that the impact occurred? There are many situations where the inclusion of sound, or even touch and feel, can improve autonomous capabilities. As industries strive for autonomy, how will autonomous products adjust to different conditions to keep drivers and passengers safe and comfortable?
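As a simple illustration of the first step, noticing that an impact occurred at all, here is a hypothetical Python sketch of a short-time energy detector over accelerometer or contact-microphone samples. The window size and threshold ratio are arbitrary assumptions, and classifying what the impact was (stone strike, door ding, pothole) would be a separate ML model on top, of the kind Reality AI Tools is designed to build.

```python
import numpy as np

def detect_impacts(signal: np.ndarray, sample_rate: float,
                   win_ms: float = 5.0, ratio: float = 8.0) -> list:
    """Flag short transients whose windowed energy jumps far above background.

    Returns times (seconds) of windows that look like impacts. The window
    size and ratio threshold are illustrative assumptions, not tuned values.
    """
    win = max(1, int(sample_rate * win_ms / 1000.0))
    n_windows = len(signal) // win
    chunks = signal[: n_windows * win].reshape(n_windows, win)
    energy = np.mean(chunks ** 2, axis=1)       # short-time energy per window
    background = np.median(energy) + 1e-12      # robust noise-floor estimate
    hits = np.nonzero(energy > ratio * background)[0]
    return [i * win / sample_rate for i in hits]

# Example: road noise with one sharp "stone strike" at t = 0.5 s.
rate = 16_000.0
rng = np.random.default_rng(0)
signal = 0.05 * rng.standard_normal(int(rate))  # background road noise
strike = int(0.5 * rate)
signal[strike:strike + 40] += 2.0 * rng.standard_normal(40)  # brief transient
print(detect_impacts(signal, rate))             # ~[0.5]
```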

Visit the Automotive Sound Recognition (SWS) page or read the "Seeing with Sound: AI-Based Detection of Participants in Automotive Environment from Passive Audio" white paper on deploying passive audio sensing to learn more. If you're ready to see SWS in action, request a demo today.
