Inspiration

Suicide rates in the US have been rising slowly for as long as statistics have been recorded, and the global pandemic has only exacerbated this mental health crisis. Suicide is currently the 10th leading cause of death in the country, and the rank jumps drastically for adolescents and young adults: it is the second leading cause of death for ages 10-34. It is especially prevalent in university environments, where we seem to receive news of student suicides multiple times every year. We wanted to tackle this issue, so we designed an autonomous system, positioned on the roofs of tall buildings, that notifies authorities and mental health aid if somebody attempts to climb the rails and jump. Although jumping isn’t the leading method of suicide, “jumping from tall buildings or high bridges seems to be reserved for those who are determined to die”. We hope our system will be able to detect jumpers and alert help, potentially saving lives.

What it does

Our system, positioned on the rooftop of a tall building, has a distance sensor aimed slightly past the railing, where it picks up any activity beyond the railing, such as leaning or reaching. Once the sensor detects activity, the Raspberry Pi takes a picture with the connected camera. The picture is first run through Google’s Vision API, which detects whether there is a person in the image; this verifies that no birds, squirrels, or plants trigger the response. If a person is detected, the image is run through OpenPose, which uses a series of joint points to determine the position of the person’s body. If the body appears to be in a climbing, unnatural, or unrecognizable position, an automated message is played through the speaker, alerting the person that help is on its way and discouraging them from continuing, while a message about the situation is sent to authorities and mental health professionals. If any of these conditions is not met, nobody is in danger and the system continues to run normally.
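A minimal sketch of this trigger-and-verify loop is below. The helper names (capture_photo, classify_pose, play_warning), the pose labels, and the SNS topic ARN are hypothetical placeholders; the Vision call uses object localization to check for a “Person” annotation.

```python
import boto3
from google.cloud import vision

vision_client = vision.ImageAnnotatorClient()
sns = boto3.client("sns")

# Hypothetical placeholder ARN for the topic that alerts responders.
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:umatter-alerts"

def contains_person(image_bytes):
    """Ask the Vision API whether a person appears in the image,
    filtering out birds, squirrels, and plants."""
    image = vision.Image(content=image_bytes)
    response = vision_client.object_localization(image=image)
    return any(obj.name == "Person"
               for obj in response.localized_object_annotations)

def handle_sensor_trigger(capture_photo, classify_pose, play_warning):
    """One pass of the pipeline, run whenever the ultrasonic sensor
    detects activity past the railing."""
    image_bytes = capture_photo()          # snapshot from the USB camera
    if not contains_person(image_bytes):
        return                             # bird/squirrel/plant: do nothing
    pose = classify_pose(image_bytes)      # OpenPose joint-point analysis
    if pose in ("climbing", "unrecognizable"):
        play_warning()                     # speaker: help is on its way
        sns.publish(TopicArn=ALERT_TOPIC_ARN,
                    Message="Possible jumper detected; pose flagged as unsafe.")
```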

How we built it

Our main hardware component was a Raspberry Pi, which we connected to two peripheral devices: an ultrasonic sensor and a USB camera. Python was the only language we actively coded in, alongside technologies such as OpenPose, AWS SNS, the Google Cloud Vision API, and OpenCV. We installed Ubuntu on the Raspberry Pi as our operating system and used it to coordinate the peripherals and connect to AWS and the Vision API.
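For the sensor side, a reading loop along these lines is a reasonable sketch, assuming an HC-SR04-style ultrasonic sensor (the GPIO pin numbers and the trigger threshold are illustrative, not our exact wiring):

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24            # placeholder BCM pins; depends on wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    """Send a 10-microsecond trigger pulse and time the echo."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    pulse_start = pulse_end = time.time()
    while GPIO.input(ECHO) == 0:   # wait for the echo to start
        pulse_start = time.time()
    while GPIO.input(ECHO) == 1:   # wait for the echo to end
        pulse_end = time.time()

    # Sound travels ~34300 cm/s; halve the time for the round trip.
    return (pulse_end - pulse_start) * 34300 / 2

BASELINE_CM = 200                  # illustrative open-air distance
while True:
    if read_distance_cm() < BASELINE_CM * 0.8:
        print("activity past the railing")   # hand off to the camera pipeline
    time.sleep(0.2)
```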

Challenges we ran into

While building our system, we ran into problems with our peripherals and with OpenPose. Working with the camera and speaker was tricky. OpenPose works best with high-quality, well-lit pictures, and our old cameras were taking either low-quality or poorly lit pictures, which resulted in very inconsistent OpenPose output. Switching to a slightly higher-quality camera fixed some of these issues. Getting the Raspberry Pi to play audio through the speaker instead of the connected monitor was also challenging; it required downloading a command-line MP3 player (see the sketch below).
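As a sketch of that speaker workaround: mpg123 is one command-line MP3 player that can be pointed at a specific ALSA device so audio goes to the USB speaker rather than the monitor. The device name below is an assumption and depends on the hardware.

```python
import subprocess

def play_warning(audio_path="warning.mp3"):
    """Play the automated warning message through the USB speaker.
    "-a hw:1,0" forces a specific ALSA output device (placeholder
    name; the right device varies by setup)."""
    subprocess.run(["mpg123", "-a", "hw:1,0", audio_path], check=True)
```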

Accomplishments that we're proud of

We’re most proud of what we learned and of successfully working with both a camera and open-source machine learning models. It was our first time going from source all the way to an end result while working with unfamiliar hardware. We all gained new skills throughout the process and are proud of what we accomplished.

What we learned

This was every team member’s first time working with a Raspberry Pi, so we had to learn how to set it up and work with it. We also had to research human body recognition systems: the Google Vision API to detect human presence, and OpenPose, a trained convolutional neural network, to locate different parts of the body. In addition, we learned how to use OpenCV to interact with our camera and manipulate images. On the hardware side, the camera-to-Pi and sensor-to-Pi interactions were new to us, and this was the first time we had worked with raw camera output and large amounts of image data. Finally, a few of us learned how to code in Python for the first time. (P.S. We also learned vim.)
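The OpenCV capture step can be as small as this sketch; the brightness/contrast bump is illustrative, since dim rooftop shots were the part that confused OpenPose:

```python
import cv2

def capture_photo(device_index=0, path="frame.jpg"):
    """Grab one frame from the USB camera and save it as a JPEG."""
    cam = cv2.VideoCapture(device_index)
    ok, frame = cam.read()
    cam.release()
    if not ok:
        raise RuntimeError("could not read from camera")
    # Illustrative brightness/contrast adjustment; real values
    # would need tuning for the rooftop lighting.
    frame = cv2.convertScaleAbs(frame, alpha=1.2, beta=25)
    cv2.imwrite(path, frame)
    return path
```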

What's next for UMatter;

The next step would be to create communication between the Raspberry Pi and a client on the front end. When notified, mental health professionals would be able to watch a live stream from the camera. This feature would also allow an on-call counselor to help the person considering suicide in real time, by sending audio from the counselor’s microphone to the speaker on the Raspberry Pi (sketched below).
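Since this feature isn’t built yet, the following is only one possible shape for the audio leg: the Pi listens on a socket for raw PCM audio and pipes it into ALSA’s aplay. The port and audio format are assumptions.

```python
import socket
import subprocess

HOST, PORT = "0.0.0.0", 5005        # placeholder port

def relay_counselor_audio():
    """Receive raw 16-bit mono PCM from the counselor's client and
    pipe it straight into aplay so it plays on the rooftop speaker."""
    player = subprocess.Popen(
        ["aplay", "-f", "S16_LE", "-r", "16000", "-c", "1"],
        stdin=subprocess.PIPE,
    )
    with socket.create_server((HOST, PORT)) as server:
        conn, _ = server.accept()
        with conn:
            while chunk := conn.recv(4096):
                player.stdin.write(chunk)
    player.stdin.close()
    player.wait()
```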

Built With

Python, Raspberry Pi, Ubuntu, OpenCV, OpenPose, Google Cloud Vision API, AWS SNS
