The Inspiration

Earlier this year, my grandma fell, and it wasn't good: it took her two weeks to get back on her feet because she was scared. Then, a few weeks ago, my friend's grandma fell too while she was taking laundry out to dry, and since she lives alone, no one was there. It took her 45 minutes after the fall to stand up and find the chunky button the elderly have to press to seek help. There are newer solutions like the Apple Watch, which has an accelerometer to detect falls, but for the elderly, the technology itself is an obstacle, and they may not be wearing the device at all times.

Research

In the US, over 13.3 million elderly people live alone, which puts them at higher risk of injury and loneliness. Research from the Administration on Aging has shown that falling is the leading cause of injury-related visits to emergency rooms and the primary cause of accidental deaths among seniors 65 and older. For seniors who live alone, falls can be especially dangerous if no one is aware of the fall or around to help. It doesn't have to be this way.

SafetyNet

SafetyNet is a fall detection monitoring system for seniors. Unlike existing solutions that require seniors to wear chunky necklaces or wrist devices, SafetyNet uses a single camera and machine learning to detect whether someone has fallen.

SafetyNet is also connected to a mobile app that notifies loved ones when a senior has fallen. Loved ones can monitor the senior's activity from their mobile device through features such as movement recordings, live video streams, and live audio transcripts, powered by our own computer vision model combined with MediaPipe, Twilio, and Google Cloud's NLP and speech toolkits.

We developed our own in-house model with a straightforward yet powerful approach to fall detection: we use MediaPipe for pose estimation and pass the body-part coordinates to a multi-layer perceptron trained on a fall detection dataset.
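A minimal sketch of that pipeline, under some assumptions, looks like the snippet below: it reads webcam frames, extracts the 33 MediaPipe pose landmarks, and feeds them to a classifier. The model file name (fall_mlp.pkl), the use of a scikit-learn MLP, the feature layout, and the label convention are illustrative placeholders, not our exact training setup.

```python
# Sketch of the pose-to-classifier fall detection loop.
# "fall_mlp.pkl", camera index 0, and the label convention are placeholders.
import cv2
import joblib
import mediapipe as mp
import numpy as np

mp_pose = mp.solutions.pose
clf = joblib.load("fall_mlp.pkl")  # hypothetical pre-trained multi-layer perceptron


def frame_to_features(results):
    """Flatten the 33 MediaPipe pose landmarks into one feature vector."""
    if results.pose_landmarks is None:
        return None
    return np.array(
        [[lm.x, lm.y, lm.z, lm.visibility] for lm in results.pose_landmarks.landmark]
    ).flatten()


cap = cv2.VideoCapture(0)
with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input, OpenCV captures BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        features = frame_to_features(results)
        if features is not None:
            # In this sketch, class 1 means "fall detected"
            if clf.predict(features.reshape(1, -1))[0] == 1:
                print("Fall detected - starting check-in flow")
cap.release()
```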

When a fall is detected, we use Google Cloud Text-to-Speech to ask whether the person needs help, then use GCP's speech recognition combined with GCP's sentiment analysis to understand their intent. If the sentiment is negative, we contact 911 using Twilio and use GCP's Text-to-Speech to share the person's details with the dispatcher.
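The decision logic of that check-in flow could be sketched roughly as follows: transcribe the senior's spoken reply, score its sentiment, and place an outbound Twilio call if the reply is missing or negative. The phone numbers, credentials, sample rate, and the sentiment threshold are placeholders for illustration, not our production values.

```python
# Hedged sketch of the post-fall check-in flow: transcribe the reply,
# score its sentiment, and escalate via Twilio if it signals distress.
from google.cloud import language_v1, speech
from twilio.rest import Client


def transcribe_reply(wav_bytes: bytes) -> str:
    """Transcribe a short LINEAR16 recording with GCP Speech-to-Text."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(
        config=config, audio=speech.RecognitionAudio(content=wav_bytes)
    )
    return " ".join(r.alternatives[0].transcript for r in response.results)


def reply_sentiment(text: str) -> float:
    """Score the reply with GCP Natural Language sentiment analysis (-1 to 1)."""
    client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    return client.analyze_sentiment(request={"document": doc}).document_sentiment.score


def escalate(details: str) -> None:
    """Place an outbound Twilio call that reads out the person's details."""
    twilio = Client("TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN")  # placeholder creds
    twilio.calls.create(
        to="+1XXXXXXXXXX",      # placeholder dispatch / emergency contact number
        from_="+1YYYYYYYYYY",   # placeholder Twilio number
        twiml=f"<Response><Say>{details}</Say></Response>",
    )


def handle_fall(reply_audio: bytes, details: str) -> None:
    text = transcribe_reply(reply_audio)
    # No reply at all, or clearly negative sentiment, triggers escalation
    if not text or reply_sentiment(text) < -0.25:
        escalate(details)
```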

The Team

We are team Magic4, a multidisciplinary international team spanning three continents and time zones:

Amine - back-end dev from Glasgow, Scotland

Tele - back-end dev from Lagos, Nigeria

Janice - UX designer from Toronto, Canada

James - full-stack dev from Vancouver, Canada
