Inspiration

From watching close family members live with forms of dementia to forgetting where we left our dorm room keys, our team knows firsthand that forgetfulness is a real problem. Human memory simply is not sufficient to retain all the intricacies of daily life. To combat this, we built a necklace with a companion app that continuously monitors a live video feed, a live audio feed, and the user's location.

What it does

There are three main components to the project: the video feed, the audio feed, and the user's location. The video feed is captured with a webcam and processed by our computer vision stack, which identifies the objects in each frame in real time. If a frame has a high enough information density to be considered "new information", it is stored in the cloud as a memory along with its timestamp. The live audio feed goes through a similar filter: segments that are mostly static or background noise are discarded. Likewise, whenever there is a location delta, the change is recorded. The whole pipeline is data intensive, so everything is stored on a cloud server through Firebase.
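As a rough illustration of the storage step, here is a minimal Swift sketch of the "only keep new information" idea. It assumes Cloud Firestore (the write-up only says Firebase) and a simple label-set heuristic with a made-up threshold; the class and field names are ours, not the production code.

```swift
import Foundation
import FirebaseFirestore

// Sketch: decide whether a frame's detections count as "new information"
// and, if so, store them as a memory document with a timestamp.
final class MemoryStore {
    private let db = Firestore.firestore()
    private var lastStoredLabels: Set<String> = []

    func recordFrame(labels: [String]) {
        let current = Set(labels)
        // Assumed heuristic: a frame is "new" if it introduces at least two
        // objects that were not present in the last stored frame.
        guard current.subtracting(lastStoredLabels).count >= 2 else { return }
        lastStoredLabels = current

        // Write the memory to Firestore under an auto-generated document ID.
        db.collection("memories").document().setData([
            "type": "video",
            "labels": Array(current),
            "timestamp": Timestamp(date: Date())
        ])
    }
}
```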

The user can then query this data to pull up memories from specific moments.
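The write-up does not describe the query layer itself, but assuming the Firestore schema from the sketch above, a time-window query could look roughly like this:

```swift
import Foundation
import FirebaseFirestore

// Sketch: fetch every memory recorded inside a time window, ordered by time.
func fetchMemories(from start: Date, to end: Date,
                   completion: @escaping ([[String: Any]]) -> Void) {
    Firestore.firestore().collection("memories")
        .whereField("timestamp", isGreaterThanOrEqualTo: Timestamp(date: start))
        .whereField("timestamp", isLessThanOrEqualTo: Timestamp(date: end))
        .order(by: "timestamp")
        .getDocuments { snapshot, _ in
            completion(snapshot?.documents.map { $0.data() } ?? [])
        }
}
```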

How we built it

The computer vision stack is built around YOLOv3 (You Only Look Once), an object detection algorithm published in April 2018. Its speed is what allows us to process the camera data and identify objects in real time.
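The write-up does not say how the network is deployed; as one hedged possibility, here is a sketch that runs a YOLOv3 model converted to Core ML through Apple's Vision framework (the `YOLOv3` class name is the Xcode-generated model wrapper, which is an assumption on our part):

```swift
import Vision
import CoreML
import CoreGraphics

// Sketch: run a Core ML-converted YOLOv3 model on a single frame and return
// the top class label of each detected object.
func detectObjects(in image: CGImage, completion: @escaping ([String]) -> Void) {
    guard let mlModel = try? YOLOv3(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: mlModel) else {
        completion([])
        return
    }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Each observation carries a bounding box plus ranked class labels.
        let detections = request.results as? [VNRecognizedObjectObservation] ?? []
        completion(detections.compactMap { $0.labels.first?.identifier })
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    do {
        try handler.perform([request])
    } catch {
        completion([])
    }
}
```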

The audio portion of the project uses Swift's built-in Speech framework to capture and transcribe the audio, then relies on a Python backend to parse the resulting text and apply our natural language processing algorithms.
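A minimal sketch of the capture side using the Speech framework's SFSpeechRecognizer is shown below; authorization handling is omitted, and `sendToBackend` is a hypothetical stand-in for the call to the Python service.

```swift
import Speech
import AVFoundation

// Sketch: stream microphone audio into the Speech framework and hand each
// finalized transcript off for further processing.
final class TranscriptRecorder {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private let audioEngine = AVAudioEngine()
    private var task: SFSpeechRecognitionTask?

    func start() throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.shouldReportPartialResults = false

        // Tap the microphone and append buffers to the recognition request.
        let inputNode = audioEngine.inputNode
        let format = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        task = recognizer?.recognitionTask(with: request) { result, _ in
            if let result = result, result.isFinal {
                sendToBackend(result.bestTranscription.formattedString)
            }
        }
    }
}

// Hypothetical stand-in for the HTTP call to the Python NLP backend.
func sendToBackend(_ transcript: String) {
    print("Transcript:", transcript)
}
```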

The phone's location is obtained through the CoreLocation framework.
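As a sketch of how the location-delta rule could work with CoreLocation (the 25-meter threshold is assumed, not taken from the actual app):

```swift
import CoreLocation

// Sketch: only treat a location update as a "delta" worth recording when the
// phone has moved more than a threshold distance since the last saved point.
final class LocationTracker: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private var lastRecorded: CLLocation?
    private let minimumDelta: CLLocationDistance = 25 // meters (assumed)

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyNearestTenMeters
    }

    func start() {
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let latest = locations.last else { return }
        if let previous = lastRecorded, latest.distance(from: previous) < minimumDelta {
            return // no meaningful delta, skip
        }
        lastRecorded = latest
        recordLocationMemory(latest)
    }
}

// Hypothetical helper: in the app this would write the coordinate and a
// timestamp to Firebase alongside the video and audio memories.
func recordLocationMemory(_ location: CLLocation) {
    print("Recorded:", location.coordinate.latitude, location.coordinate.longitude)
}
```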

What's next for Recall

  • Create face-to-name mappings
  • Answer and infer over more open-ended questions
  • Shrink the form factor into a lifestyle product