Inspiration
We care deeply about our elders and want them to stay as healthy as possible, but we often fail to notice symptoms of sickness before it is too late. On top of that, nursing homes and elder care facilities are often short-staffed, which leaves room for a better monitoring system so that we can help our elders as quickly as possible. Furthermore, cameras are often already set up in the public areas of nursing homes, and we believe that footage could provide valuable insight into the health of our elders. Our goal is to catch as many diseases as early as possible to give our elders the best chance of survival.
What it does
Our project, Elder Care AI, helps elders in nursing homes by analyzing footage and reporting possible symptoms of sickness so that a professional can be called in quickly to support them. You submit a facial scan of each elder so that their behavior in the footage is properly mapped to them. You can then view footage from a specific day to see a detailed report of the events that occur in it, as well as a health summary of the elder.
How we built it
Our project has two components. The first is the frontend of our website, built with React, TypeScript, and Vite, plus a little Three.js. The frontend can show multiple camera angles of the footage to select from, with a side pane that opens to display the timeline of events.
The second is our backend Django server, which analyzes the footage in two stages: it first classifies each person in the footage against their facial scan using DeepFace (a lightweight facial recognition framework) combined with OpenCV, then pipes the annotated footage to TwelveLabs, which generates a detailed timeline of events for each person. The server then sends the timeline back to the frontend for you to view.
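The two-stage flow above can be sketched as a small orchestration function. This is a simplified illustration, not the actual server code: `match_face` and `index_video` are hypothetical callables standing in for the DeepFace lookup and the TwelveLabs indexing call.

```python
def analyze_footage(frames, match_face, index_video):
    """Run the two-stage pipeline described above.

    Stage 1: tag each sampled frame with the resident it shows.
    Stage 2: hand the tagged footage to the video-understanding
    service, which returns a per-person timeline of events.

    `frames` is a list of (frame_index, image) pairs; `match_face`
    and `index_video` stand in for the DeepFace and TwelveLabs calls.
    """
    tagged = []
    for idx, image in frames:
        person = match_face(image)  # DeepFace classification in the real pipeline
        if person is not None:
            tagged.append((idx, person))
    # The tagged footage is indexed and a timeline comes back for the frontend.
    return index_video(tagged)
```

Keeping the two external services behind plain callables like this also makes the pipeline easy to test with stubs before wiring in the real APIs.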
Challenges we ran into
DeepFace is designed for still images, but we are analyzing video, so we had to combine it with OpenCV to classify the footage. We did this by sampling frames at a fixed interval (every 10th frame) rather than processing every frame. Furthermore, finding videos of elders was quite hard, so we used ourselves in the demos of this project.
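The sampling step above amounts to a simple index filter. A minimal sketch, assuming a hypothetical helper name `sample_every`; in the real pipeline, each selected frame index would be read with OpenCV and passed to DeepFace:

```python
def sample_every(total_frames, interval=10):
    """Return the frame indices to run face recognition on.

    Classifying every frame of video is wasteful, so the pipeline
    described above runs DeepFace only on every `interval`-th frame
    (every 10th in our case) and carries the label forward between
    sampled frames.
    """
    return list(range(0, total_frames, interval))


# For a 35-frame clip, only 4 frames reach the classifier:
sample_every(35)  # → [0, 10, 20, 30]
```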
Accomplishments that we're proud of
Our backend pipeline is quite complex, with two main stages of processing and analysis that must complete before the footage results properly get back to the frontend.
What we learned
We learned about facial recognition in images and video and how to use the TwelveLabs API to help reason about video events.
Built With
- cursor
- deepface
- django
- opencv
- react
- twelvelabs
- typescript
- vite