DBSE Monitor: Drowsiness, Blind Spot and Emotions monitor.

Drowsiness, emotion, and attention monitor for driving. It also detects objects in the blind spot via computer vision on the NVIDIA Jetson Nano.

Follow this link for direct instructions on how to run our demos for the three applications (there are also very cool individual video demos for each):

https://github.com/altaga/DBSE-monitor#laptop-test

Remember that this is an embedded solution, so for the complete experience you'll have to build your own; you can find build instructions on our GitHub: https://github.com/altaga/DBSE-monitor

Inspiration and Introduction

We will be tackling the problem of drowsiness when performing tasks such as driving or operating heavy machinery, as well as the blind spot when driving, with some extra features on the side.

But let's take this on from the beginning: first we have to state what the statistics show us.

Road injury is the 8th leading cause of death worldwide:

That is more than most cancers and on par with diabetes. This is a huge area of opportunity; let's face it, autonomy still has a long way to go.

A big cause is distraction and tiredness, or what we call "drowsiness".

The Centers for Disease Control and Prevention (CDC) says that 35% of American drivers sleep less than the recommended minimum of seven hours a day. Lack of sleep mainly affects attention when performing any task, and in the long term it can permanently affect health.

According to a report by the WHO (World Health Organization) (2), falling asleep while driving is one of the leading causes of traffic accidents. Up to 24% of accidents are caused by falling asleep, and according to the DMV USA (Department of Motor Vehicles) (3) and the NHTSA (National Highway Traffic Safety Administration) (4), 20% of accidents are related to drowsiness, putting it at the same level as accidents due to alcohol consumption, sometimes with even worse consequences.

Also, the NHTSA mentions that being angry or in an altered state of mind can lead to more dangerous and aggressive driving (5), endangering the life of the driver due to these psychological disorders.

Solution and What it does

We created a system that is able to detect a person's "drowsiness level", with the aim of notifying users about their state and whether they are fit to drive.

At the same time, it measures the driver's attention and detects whether they are falling asleep at the wheel. If it positively detects that state (the driver is getting drowsy or distracted), a powerful alarm sounds with the objective of waking them.

Additionally, it detects small vehicles and motorcycles in the automobile's blind spots.

In turn, the system has an accelerometer that triggers a call to emergency services if the car has an accident, so the emergency can be attended to quickly.

Because an altered psychological state can lead to dangerous and aggressive driving, we use PyTorch to analyze the driver's facial features, determine their emotional state, and play music that can generate a positive response.

How we built it

This is the connection diagram of the system:

The brain of the project is the Jetson Nano. It takes care of running both of the PyTorch-powered computer vision applications, using a plethora of libraries to perform certain tasks. The two webcams serve as the main sensors for computer vision; PyTorch then performs the AI needed to identify faces and eyes for one application and objects for the other, and sends the proper information through MQTT to emit a sound or show an image on the display. As extra features we added geolocation and crash detection with SMS notifications, done through Twilio with an accelerometer.
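To illustrate that messaging layer, here is a minimal sketch of how the CV processes could publish alerts over MQTT with paho-mqtt (1.x-style client API). The broker address and topic names are assumptions for illustration, not the exact ones used in the repo:

```python
# Minimal sketch of the MQTT messaging between the CV processes and the
# alert outputs (display/speaker). Topic names are illustrative.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()               # paho-mqtt 1.x style client
client.connect("localhost", 1883)    # assume a broker on the Jetson Nano

def publish_blindspot(detected: bool, label: str):
    # The display process would subscribe to this topic and draw an
    # icon on the OLED screen when a vehicle is detected.
    payload = json.dumps({"detected": detected, "label": label})
    client.publish("dbse/blindspot", payload)

def publish_drowsiness(alert: bool):
    # The audio process would subscribe here and sound the alarm.
    client.publish("dbse/drowsiness", json.dumps({"alert": alert}))

publish_blindspot(True, "motorcycle")
```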

Notice how, depending on the task at hand, we perform different CV analyses and use different algorithms and libraries, with of course different responses or actions.

The first step was naturally to create the three computer vision applications and run them on a laptop, or any PC for that matter, before moving to an embedded computer, namely the Jetson Nano:

Performing eye detection after a face is detected:
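For reference, here is a minimal sketch of this face-then-eye detection step using OpenCV's bundled Haar cascades (the cascade files ship with opencv-python); the capture-loop details are illustrative:

```python
# Detect a face, then search for eyes only inside the face region.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        # If no eyes are found for several consecutive frames, the
        # driver's eyes are likely closed and the alarm should fire.
        eyes = eye_cascade.detectMultiScale(roi)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("eyes", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```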

And then testing Object detection for the Blind spot notifications on the OLED screen:
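As a hedged sketch of this step, the snippet below runs a pretrained torchvision detector and flags vehicles in the blind-spot frame; the actual model used in the repo may differ, and the confidence threshold is an assumption:

```python
# Flag cars, motorcycles, buses, and trucks in a blind-spot frame
# using a pretrained COCO detector from torchvision.
import cv2
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
VEHICLES = {3, 4, 6, 8}  # COCO ids: car, motorcycle, bus, truck

def blind_spot_check(frame_bgr, threshold=0.6):
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([tensor])[0]
    # Return True when any vehicle is detected with enough confidence,
    # which would trigger the warning icon on the OLED screen.
    for label, score in zip(out["labels"], out["scores"]):
        if int(label) in VEHICLES and float(score) >= threshold:
            return True
    return False
```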

After creating the applications, it was time to make some hardware and connect everything:

This is the mini-display for the object detection through the blind spot.

The accelerometer for crash detection.
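To illustrate the crash-detection logic, here is a sketch that watches the total acceleration and sends an SMS through Twilio when a spike suggests a collision. read_acceleration() is a hypothetical stand-in for the real accelerometer driver, and the threshold, credentials, and phone numbers are placeholders:

```python
# Watch acceleration magnitude; notify via Twilio SMS on a spike.
import math
import time
from twilio.rest import Client

CRASH_G = 4.0  # assumed crash threshold, in g
client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def read_acceleration():
    # Hypothetical stand-in for the real accelerometer driver;
    # should return (x, y, z) acceleration in g.
    return (0.0, 0.0, 1.0)

while True:
    x, y, z = read_acceleration()
    magnitude = math.sqrt(x * x + y * y + z * z)
    if magnitude > CRASH_G:
        client.messages.create(
            body="Possible crash detected, last known location: ...",
            from_="+15550000000",  # placeholder numbers
            to="+15551111111")
        break
    time.sleep(0.05)
```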

Now, here is how we perform the emotion detection:

The emotion monitor uses the following libraries:

- OpenCV: image processing, Haar cascades implementation, and face detection.
- PyTorch: emotion detection.
- VLC: music player.

The emotion detection algorithm is as follows:

First, detect that there is the face of a person behind the wheel:

Once we have detected the face, we crop it out of the image so that we can use it as input for our convolutional PyTorch network.

The model is designed to detect the emotion of the face; this emotion is saved in a variable to be used by our song player.
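A minimal sketch of that inference step, assuming a common FER-style 48x48 grayscale input and an illustrative label set (the repo's exact network and classes may differ):

```python
# Crop the detected face, resize to the network's input size, and
# take the argmax class as the detected emotion.
import cv2
import torch

EMOTIONS = ["angry", "sad", "neutral", "happy"]  # illustrative labels

def detect_emotion(model, frame_gray, face_box):
    x, y, w, h = face_box
    face = frame_gray[y:y + h, x:x + w]
    face = cv2.resize(face, (48, 48)).astype("float32") / 255.0
    tensor = torch.from_numpy(face).unsqueeze(0).unsqueeze(0)  # NCHW
    with torch.no_grad():
        logits = model(tensor)
    return EMOTIONS[int(logits.argmax(dim=1))]
```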

According to the detected emotion, we randomly play a song from one of our playlists (see the sketch after the list below):

- If the person is angry, we play a song that generates calm.
- If the person is sad, we play a song to make them happy.
- If the person is neutral or happy, we play some of their favorite songs.

Note: if the detected emotion has not changed, the playlist continues without changing the song.
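A sketch of that playlist logic with python-vlc; the emotion labels and file paths are placeholders:

```python
# Keep the current playlist playing while the emotion is stable;
# switch to a new random song only when the emotion changes.
import random
import vlc

PLAYLISTS = {
    "angry": ["calm1.mp3", "calm2.mp3"],       # placeholder paths
    "sad": ["cheerful1.mp3", "cheerful2.mp3"],
    "neutral": ["favorite1.mp3", "favorite2.mp3"],
    "happy": ["favorite1.mp3", "favorite2.mp3"],
}

current_emotion = None
player = None

def on_emotion(emotion):
    global current_emotion, player
    if emotion == current_emotion:
        return  # same emotion: let the current song keep playing
    current_emotion = emotion
    if player is not None:
        player.stop()
    player = vlc.MediaPlayer(random.choice(PLAYLISTS[emotion]))
    player.play()
```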

The finished prototype.

Because it is primarily an IoT-enabled device, some of the features, like the proximity indicator and the crash detector, are not possible to test remotely without fabricating your own.

Having said that, the PyTorch-based computer vision drowsiness and attention detector that tracks eyes and faces works on any device, alarm included! If you will be running it on a laptop, our GitHub provides instructions, as you need quite a few libraries. Here is the link; just run the code and it works (follow the GitHub instructions):

https://github.com/altaga/DBSE-monitor#laptop-test

You can find step-by-step documentation on how to build your own fully enabled DBSE monitor on our GitHub: https://github.com/altaga/DBSE-monitor

Challenges we ran into

At first we wanted to run PyTorch and do the whole CV application on a Raspberry Pi 3, which is much more available and an easier platform to use. It was probably too much processing for the Raspberry Pi 3, as it wasn't able to run everything we demanded, so we upgraded to an embedded computer specialized for ML and CV applications with an onboard GPU: the NVIDIA Jetson Nano. With it we were able to run everything and more.

Later we had a little problem with focus on certain cameras, so we had to experiment with several webcams we had available to find one that didn't need to refocus. The one we decided on is the one shown in the video. Despite its age and relatively low resolution, it was the right one for the job, as it maintained focus on one plane instead of actively switching.

What we learned and What's next for DBSE Monitor.

I would consider the product finished, as it only needs a few additional touches on the industrial engineering side to become a commercial product, and perhaps a bit of electrical engineering to use only the components we need. This is the culmination of a past project that we have completely polished to reach these heights. It has the potential to become a commercially available option for smart cities, as the transition to autonomous or even smart vehicles will take a while in most cities.

That middle ground between analog, primarily mechanical private transport and a more "smart" vehicle is a huge opportunity, as the transition will take several years and most people cannot yet afford it. Thank you for reading.
