People with Alzheimer's suffer from chronic memory loss: they are sometimes unable to recognise even their loved ones, and can find themselves not knowing where they are or what they should be doing. This project provides a simple-to-use system that addresses all three of these problems.
What it does
While the system could be extended in many directions, it currently has three main features:
- A panic button that alerts a selected phone number with the system's location
- Facial recognition that announces out loud who the camera is looking at
- A location detector that says out loud where the system is
All of these are connected to a web app that can broadcast messages to the system, add new people to the facial recognition database, and track the system's location in near real time.
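At its core, the panic-button path boils down to formatting an SMS with the device's last GPS fix and sending it to the selected contact. A minimal sketch (the function name, message wording, and fix shape are our assumptions, not the project's actual code; the real send goes through the Nexmo client with credentials):

```javascript
// Build the panic SMS body from a GPS fix (assumed shape:
// { lat, lon } in decimal degrees).
function buildPanicAlert(fix) {
  const mapsLink = `https://maps.google.com/?q=${fix.lat},${fix.lon}`;
  return `Panic button pressed! Current location: ${mapsLink}`;
}

// Sending the alert would then be a single call with the nexmo
// npm package, roughly:
//   nexmo.message.sendSms(FROM_NUMBER, contactNumber, buildPanicAlert(fix));
```

Embedding a maps link in the SMS means the contact can open the location with one tap rather than copying coordinates by hand.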
How we built it
We used a Raspberry Pi to host a Node.js server, which serves the web app and makes calls to the Google Speech and Nexmo APIs. Facial recognition runs offline on the Pi itself, with the speaker, camera, and microphone connected directly to it.
Challenges we ran into
Deploying some components in React, especially the design components, ate up a lot of time. Optimising the API calls and making some of them real-time was also very difficult.
Accomplishments that we're proud of
In the end, we got it working and far exceeded our initial sketch. Integrating all the discrete components was difficult, but very rewarding.
What we learned
We learned that using React purely for its own sake is a mistake, though it has a lot of powerful use cases. We also got hands-on experience with the Nexmo API and with wiring up an external webcam and microphone.