In keeping with DubHacks' theme of social change, we decided to build a driving aid for deaf and hard-of-hearing drivers, as no such product currently exists. According to AAA Senior Driving, many seniors are unable to hear high-pitched sounds such as emergency sirens on the road, which puts all road users at risk. In addition, numerous published articles have highlighted the risk that deaf individuals take on when driving, as well as the need for a deaf driving aid. Inspired by console FPS video games, where players receive haptic feedback and visual directional indicators when fired upon, we realized we could design a device to fill this need. On further thought, we agreed that such a device would also be incredibly useful to road users in general, such as those who drive with loud music or are easily distracted. These concerns and publications reinforced our desire to create eDrive, in hopes of not only improving road safety but also promoting equity and access to driving for more people.

What it does

eDrive uses microphone modules at the four corners of a car to detect alert sounds, such as police and ambulance sirens, using an algorithm that samples background noise and checks for known sound signatures. When an emergency siren or other loud honk is picked up, the driver is notified with (1) a gentle vibration pulse integrated in the steering wheel, (2) an LED lighting up to show which direction the sound is coming from, and (3) an LED matrix display showing what type of siren/honk it is (e.g., a cross for an ambulance, an officer's badge for police, an exclamation mark for an unspecified honk).
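The three alert channels can be sketched as a simple mapping. This is an illustrative sketch only: the class and method names, the icon strings, and the four-quadrant direction scheme below are our assumptions for the example, not the actual firmware.

```java
// Sketch of the alert mapping: hazard type -> LED matrix icon, and sound
// bearing -> which of the four corner direction LEDs to light.
// All names and the quadrant scheme here are illustrative.
public class AlertMapper {

    public enum Hazard { AMBULANCE, POLICE, UNKNOWN_HONK }

    // Icon drawn on the LED matrix, matching the scheme described above.
    public static String matrixIcon(Hazard h) {
        switch (h) {
            case AMBULANCE: return "cross";
            case POLICE:    return "badge";
            default:        return "exclamation";
        }
    }

    // Map a bearing (degrees clockwise from the car's front) to one of four
    // corner LEDs: 0 = front-right, 1 = rear-right, 2 = rear-left, 3 = front-left.
    public static int directionLed(double bearingDegrees) {
        double norm = ((bearingDegrees % 360) + 360) % 360; // wrap into [0, 360)
        return (int) (norm / 90);
    }
}
```

For example, a siren bearing of 45 degrees lights the front-right LED, while -10 degrees (just left of straight ahead) lights the front-left one.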

How we built it

The Arduino continuously reads the sound input and uses a simple moving average to detect spikes that are not mere noise and could indicate a road hazard. The Arduino then sends a packet of input (time and displacement) to a Java program over serial. The Java program parses the packet and applies a Fourier transform to obtain an output of (frequency and amplitude). The Java program then runs a hashing function to map each frequency and amplitude to a color, which is saved as an image. The image is sent to a custom-trained Clarifai model to detect different sound sources; currently, our model is only able to differentiate a police car from an ambulance. The result is sent back to the Arduino over serial to display which hazard is present and which direction it is coming from.
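The Java side of this pipeline (transform, then color hash) can be sketched as follows. This is a minimal illustration under assumptions, not our production code: the naive DFT, the `dominantFrequencyHz` helper, and the `toRgb` frequency-to-color packing are simplified stand-ins for the actual implementation.

```java
// Minimal sketch of the Java-side pipeline: DFT -> dominant frequency -> color hash.
// Class and method names are illustrative, not the actual project code.
public class SirenFft {

    // Naive O(N^2) DFT returning the magnitude of each frequency bin.
    static double[] dftMagnitudes(double[] samples) {
        int n = samples.length;
        double[] mags = new double[n];
        for (int k = 0; k < n; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += samples[t] * Math.cos(angle);
                im -= samples[t] * Math.sin(angle);
            }
            mags[k] = Math.hypot(re, im);
        }
        return mags;
    }

    // Strongest frequency in Hz, searching only the non-mirrored bins 1..N/2.
    static double dominantFrequencyHz(double[] samples, double sampleRateHz) {
        double[] mags = dftMagnitudes(samples);
        int best = 1;
        for (int k = 2; k <= samples.length / 2; k++) {
            if (mags[k] > mags[best]) best = k;
        }
        return best * sampleRateHz / samples.length;
    }

    // Toy "hash" from (frequency, amplitude) to an RGB pixel: red from pitch,
    // green from loudness. The real mapping is tuned for the Clarifai model.
    static int toRgb(double freqHz, double amplitude) {
        int r = (int) Math.min(255, freqHz / 10);
        int g = (int) Math.min(255, amplitude * 255);
        return (r << 16) | (g << 8);
    }

    public static void main(String[] args) {
        // Synthetic 700 Hz tone sampled at 4 kHz for 400 samples.
        double[] s = new double[400];
        for (int i = 0; i < s.length; i++) s[i] = Math.sin(2 * Math.PI * 700 * i / 4000.0);
        System.out.println("dominant: " + dominantFrequencyHz(s, 4000.0) + " Hz"); // prints 700.0
    }
}
```

Feeding the detector a pure 700 Hz tone, the dominant bin (70 of 400 at a 10 Hz bin spacing) maps back to exactly 700 Hz.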

Challenges we ran into

On the hardware side:

  • Transferring sound input packets from the Arduino to the Java applet. Because of the safety-critical nature of this project, we needed to transfer input packets from the Arduino to the Java service quickly enough for fast evaluation. This was also important because the data points had to be spaced closely enough to preserve the sound wave's properties.
  • Calibrating sensitivity. We needed to calibrate the sensitivity so as not to overwhelm the Java program with a continuous stream of input. Instead, we created a trigger that sends input to the Java program only when there is an abrupt change in input, using a simple moving average to filter out data anomalies.

On the software side:

  • Applying Fourier theory. A Fourier transform requires a lot of computation and is often performed in an analogue medium. We needed to run the transform on digital input, and a suitable Java library was not easy to find.
  • Machine learning. The hashing function needs to convert the frequency-amplitude graph into an image that differs between distinct sound inputs.
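The moving-average trigger described above can be sketched like this. We show it in Java for brevity (the real trigger runs on the Arduino), and the window size and threshold factor in the usage note are illustrative values, not our calibrated ones.

```java
// Sketch of the sensitivity trigger: keep a moving average of recent sound
// levels and only fire when a new sample is well above that baseline.
// Class name, window size, and threshold factor are illustrative.
public class SpikeDetector {
    private final double[] window;   // circular buffer of recent baseline samples
    private int idx = 0, count = 0;
    private double sum = 0;
    private final double factor;     // how far above the average counts as a spike

    public SpikeDetector(int windowSize, double factor) {
        this.window = new double[windowSize];
        this.factor = factor;
    }

    // Returns true if the sample is a spike worth forwarding to the Java service.
    // Spikes are NOT folded into the baseline, so a long siren cannot raise it.
    public boolean add(double sample) {
        boolean spike = count == window.length && sample > (sum / count) * factor;
        if (!spike) {
            if (count == window.length) sum -= window[idx];
            else count++;
            window[idx] = sample;
            sum += sample;
            idx = (idx + 1) % window.length;
        }
        return spike;
    }
}
```

With a window of 8 samples and a factor of 2, eight readings around 100 establish the baseline, a reading of 500 fires the trigger, and a reading of 110 does not.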

Accomplishments that we're proud of

We are proud that we managed to finish a complicated product within 24 hours, and of the positive impact it could have on the world. We completed all of our goals in addition to our stretch goals, even after facing large engineering and design challenges. We are also proud to have worked so effectively and efficiently as a team of students from various educational backgrounds and levels.

What we learned

We all learned a lot from each other. Our team comprised University of Washington students from a variety of academic backgrounds and levels. Everyone contributed significantly to the project and understood what was going on in every part of it. In other words, while some of us were unfamiliar with certain concepts in hardware, electrical engineering, software engineering, or machine learning, at least one person on the team knew them and took the time to teach everyone.

What's next for eDrive

Sirens for police cars, ambulances, and other emergency vehicles are not the same globally. Therefore, adding a location component that selects the locally used tones would help make eDrive a universally usable product. In terms of UX/UI, finding a better display mechanism that could be embedded in a real car dashboard, or offered as a car add-on, would benefit drivers. Perfecting the image rendition of the sound waves would be another milestone in this project. We plan to add another dimension (a y-coordinate) derived from time: by chopping the input passed to the Fourier transform and the hashing function into snippets, the rendered image would also capture how the frequencies in the sound wave change over time. This would let the model recognize frequency change, allowing for better model training with the Clarifai machine learning API.
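The planned time-frequency rendering amounts to framing the signal before the transform, so each frame's spectrum becomes one column of the image. A hypothetical sketch, with illustrative frame and hop sizes:

```java
// Sketch of the planned change: chop the signal into (possibly overlapping)
// frames so each column of the rendered image is the spectrum of one snippet,
// giving the image a time axis. Names and sizes here are illustrative.
import java.util.ArrayList;
import java.util.List;

public class Framer {
    public static List<double[]> frames(double[] samples, int frameLen, int hop) {
        List<double[]> out = new ArrayList<>();
        for (int start = 0; start + frameLen <= samples.length; start += hop) {
            double[] frame = new double[frameLen];
            System.arraycopy(samples, start, frame, 0, frameLen);
            out.add(frame);
        }
        return out;
    }
}
```

For instance, 100 samples split into 40-sample frames with a hop of 20 yields four half-overlapping snippets, each of which would be transformed and hashed into one column of the image.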
