With more and more automobile companies aiming for Level 4 autonomy, there is still a lot to be done. In the meantime, driver safety at lower levels of autonomy is critical, and it is our top priority. Numerous accidents occur because drivers become inattentive through drowsiness. So we asked: why not build a safety system, integrated with the car's infotainment system, that keeps the driver's attention on the road and also improves their mental and emotional state before and during driving, making the ride safer and more comfortable? Besides, who doesn't like personalized entertainment?

What it does

Our software detects driver drowsiness by constantly monitoring the Eye Aspect Ratio (EAR). If the EAR drops below a threshold and stays there for a certain duration, a voice alert is sent to the in-car infotainment system, and the frequency and intensity of the alerts increase until the driver responds. Once the driver is attentive again after the voice and sound alerts, the infotainment system recommends nearby places to rejuvenate: cafes, restaurants, rest areas, and hotels along the route, each with its distance from the current location. In addition, the software continuously monitors the driver's face to detect emotions and, based on them, recommends songs that can be added to the playlist queue, personalizing the ride for the driver and co-passengers.
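The EAR-based trigger described above can be sketched as follows. This is a minimal illustration, not our production code: the standard EAR formula uses six eye landmarks (p1..p6), and the threshold and frame-count values here are illustrative placeholders, not the ones we tuned.

```python
import math

def ear(eye):
    """Eye Aspect Ratio from 6 (x, y) landmarks (p1..p6), ordered as in
    the common 68-point facial-landmark layout: vertical openings over
    twice the horizontal width."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Illustrative values: trigger once EAR stays below the threshold for
# enough consecutive frames (~1.6 s at 30 fps).
EAR_THRESHOLD = 0.25
CONSEC_FRAMES = 48

def drowsy(ear_values):
    """Return True if EAR stays under the threshold for CONSEC_FRAMES
    consecutive frames anywhere in the sequence."""
    streak = 0
    for v in ear_values:
        streak = streak + 1 if v < EAR_THRESHOLD else 0
        if streak >= CONSEC_FRAMES:
            return True
    return False
```

A nearly closed eye yields small vertical distances and hence a small EAR, which is what sustains the low-EAR streak and fires the alert.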

How we built it

The software focuses on two main tasks: drowsiness detection and emotion detection. We use OpenCV and a pre-trained ResNet-18 model to estimate the Eye Aspect Ratio (EAR) from a continuous video feed of the driver's face, from which drowsiness is detected. The car's GPS coordinates are read through Ford's SmartDeviceLink (SDL) API and passed to the Google Maps Places API, which returns the three nearest rest stops on the route along with their distances from the current location. The Google Cloud Vision API's pre-trained face detection is used to detect emotions on the same video feed. We integrated all of these components on Ford's SDL backend API.

Once the drowsiness detector triggers, voice alerts play until the driver is awake. When the detector confirms the driver is alert again, it signals the Places API call, and the distances to the rest stops are displayed on the infotainment console. Meanwhile, the driver's emotions are continuously monitored and songs are recommended accordingly. For initial tests we generated three playlists for three emotions (Joy, Sad, and Neutral) using Spotify's API; the playlists are curated so that when the driver is detected to be sad, the suggestions lift their mood rather than let them dwell on it. The playlists appear on the car's infotainment console and are shuffled and played on the driver's command.
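The emotion-to-playlist mapping above can be sketched as a simple lookup. The playlist URIs below are hypothetical placeholders; in our system the actual playlists come from Spotify's API, and the key design choice is that "sad" maps to an uplifting mix rather than to sad songs.

```python
# Hypothetical playlist URIs for illustration only; real IDs would be
# fetched from Spotify. Note "sad" maps to an uplifting mix by design.
PLAYLISTS = {
    "joy": "spotify:playlist:joy_mix",
    "sad": "spotify:playlist:uplifting_mix",
    "neutral": "spotify:playlist:neutral_mix",
}

def pick_playlist(emotion):
    """Map a detected emotion label to a playlist URI; unknown labels
    fall back to the neutral mix."""
    return PLAYLISTS.get(emotion.lower(), PLAYLISTS["neutral"])
```

For example, a "Sad" detection from the face-detection step would queue the uplifting mix, and any label outside the three trained emotions degrades gracefully to the neutral playlist.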

Challenges we ran into

  • Setting up and working with the Android SDK for Ford's SDL API.
  • Integrating the Google Cloud Vision API, Google Maps Places API, and Spotify API with Ford's SDL API
  • Building the backend stack in Java
  • Prioritizing function calls, e.g. running drowsiness detection at higher priority than emotion detection
  • Limited API buffer capacity
  • Cross-platform communication
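The prioritization challenge above (drowsiness checks must never wait behind emotion checks) can be illustrated with a priority queue. This is only a sketch of the idea, not our actual Java/SDL scheduling code; the task names and priority values are assumptions for illustration.

```python
import heapq

class TaskQueue:
    """Tiny priority dispatcher: lower priority number runs first, and
    the insertion counter keeps ordering stable for equal priorities."""
    def __init__(self):
        self._heap = []
        self._counter = 0

    def push(self, priority, name):
        heapq.heappush(self._heap, (priority, self._counter, name))
        self._counter += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

# Even though the emotion check arrives first, the safety-critical
# drowsiness check (priority 0) is dispatched ahead of it.
q = TaskQueue()
q.push(1, "emotion")
q.push(0, "drowsiness")
```

With this scheme, a backlog of queued frames never delays the safety-critical path behind the comfort features.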

Accomplishments that we're proud of

  • Built a working model of our envisioned goal.
  • Integration of several APIs
  • Cross-platform integration

What we learned

  • Using the Google Cloud APIs, Ford's SDL API, and the Spotify API
  • Android Development using Java
  • JSON file exchange

What's next for IDEAS

  • Distracted-driver detection (not limited to drowsiness)
  • IoT-based home management through the car's system
  • Complete voice control over car features
