We developed MemoryEye, a mobile app that uses computer vision to help users with Alzheimer's remember their prized belongings, and more importantly, their loved ones.
MemoryEye allows users to write their own stories by documenting their best memories, then use the camera as a tool to relive them. Because seniors are the most likely to suffer from Alzheimer's, we designed and implemented an intentionally simple, intuitive interface that offers the most straightforward user experience possible. With the number of Alzheimer's patients expected to spike to 14 million by 2050, MemoryEye has the potential to save countless memories, maybe even some that we are part of.
The future of tech is accessible, and we wanted to leverage the resources offered by HackMIT to take our first step in that direction. Since the hackathon's theme is hacking for a reason, we decided to hack for a cause that affects both our team members and many others: Alzheimer's disease. And with Alzheimer's Awareness Month (November) right around the corner, we thought that HackMIT 2019 was the perfect opportunity to build something cool and start a discussion around this disease.
What it does
MemoryEye enables the user to easily create a memory by taking pictures of an object or person they want to remember and adding a description of that memory. When the user later comes across that person or object and can't remember exactly who or what they are, they take a new picture, and the app uses computer vision to pull up the memory they recorded earlier so they can relive it and remember again.
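In code, this flow boils down to a record that links a vision-model tag to the user's photos and written story. The types below (`Memory`, `MemoryStore`) are an illustrative sketch, not the app's actual data model:

```swift
import Foundation

// A single memory: the tag the vision model is trained on,
// the user's written story, and references to their photos.
struct Memory {
    let tag: String            // matches a classifier tag name
    let title: String
    let story: String          // the user's written description
    let photoFilenames: [String]
}

// Minimal in-memory store: when the classifier returns a tag
// at recognition time, look the original memory back up.
final class MemoryStore {
    private var byTag: [String: Memory] = [:]

    func save(_ memory: Memory) {
        byTag[memory.tag] = memory
    }

    func memory(forTag tag: String) -> Memory? {
        byTag[tag]
    }
}
```

When the classifier returns its top tag for a new photo, the app can call `memory(forTag:)` to retrieve the stored story and display it.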
How we built it
Once we had a general idea of how to solve the problem, we split our team into front-end and back-end sub-teams. We then established a list of required goals, as well as a list of reach goals, that we worked on for the rest of the weekend. The result was a native iOS app written in Swift with Xcode, using Microsoft Azure's Custom Vision API for the computer vision functionality.
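On the back end, classifying a photo comes down to one HTTP call to Custom Vision's prediction endpoint. Below is a hedged sketch of how such a request could be assembled in Swift; the URL shape follows Azure's v3.0 Prediction REST API, and the endpoint, project ID, iteration name, and key shown in the usage note are placeholders, not our real credentials:

```swift
import Foundation

// Builds the POST request for Custom Vision's image-classification
// prediction endpoint. The photo is sent as raw bytes, authenticated
// with the "Prediction-Key" header.
func makePredictionRequest(endpoint: String,
                           projectID: String,
                           iteration: String,
                           predictionKey: String,
                           imageData: Data) -> URLRequest {
    let url = URL(string:
        "\(endpoint)/customvision/v3.0/Prediction/\(projectID)/classify/iterations/\(iteration)/image")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue(predictionKey, forHTTPHeaderField: "Prediction-Key")
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
    request.httpBody = imageData   // the captured photo's raw bytes
    return request
}
```

The app would then send this request with `URLSession`, decode the JSON `predictions` array from the response, and keep the tag with the highest probability.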
Challenges we ran into
We wanted to challenge ourselves by using a new tech stack for HackMIT 2019. For most of us, it was our first time developing a native iOS app, so getting up to speed on iOS programming practices was an important challenge for all of us.
One specific challenge we overcame was figuring out how to give the Custom Vision model enough pictures to train properly without requiring the user to provide an unreasonable number of photos of their memory. Our solution was to ask the user for four pictures, then apply image transformations to synthesize new training samples from them. This not only solves our problem of needing more data, but also increases the variety of the data to cover different angles, producing a more robust classifier.
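The augmentation idea can be illustrated on a raw pixel grid: from one captured photo we derive extra training samples via simple geometric transforms. This is a simplified sketch, not our production code (which would operate on `UIImage`/`CGImage`), and the flip and rotation below stand in for whatever transformations a real pipeline applies:

```swift
import Foundation

typealias PixelGrid = [[Int]]   // grayscale pixels, row-major

// Mirror the image left-to-right.
func flippedHorizontally(_ image: PixelGrid) -> PixelGrid {
    image.map { Array($0.reversed()) }
}

// Rotate the image 90° clockwise: column i of the input,
// read bottom-to-top, becomes row i of the output.
func rotated90(_ image: PixelGrid) -> PixelGrid {
    guard let firstRow = image.first else { return [] }
    return (0..<firstRow.count).map { col in
        image.reversed().map { $0[col] }
    }
}

// From one captured photo, derive extra training samples.
func augmented(_ image: PixelGrid) -> [PixelGrid] {
    [image, flippedHorizontally(image), rotated90(image)]
}
```

Each of the user's four photos yields several variants this way, so the classifier trains on a larger, more varied set than the user actually had to capture.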
Accomplishments that we're proud of
Although it wasn't an easy road, we came out of this project with more knowledge of and experience in iOS development. We're also extremely proud to have built something that can have a positive impact on the people around us.
What we learned
- iOS development
- Azure AI tools
- Hacking for a reason
- Hacking in a limited amount of time
- Pizza is good even at 4am.
What's next for MemoryEye
We genuinely loved building MemoryEye, and we would like to see it grow. Features such as live memory recognition (without requiring the user to shoot photos or videos), sharing memories with loved ones, or even integrating the tool into AR/smart glasses are extremely exciting to us.
Regardless of the hackathon's outcome, we hope MemoryEye starts a discussion about assistive technologies in medicine, and more specifically in Alzheimer's research. We believe in the product we delivered for this hackathon, but we believe even more in the idea it conveys and the incredibly exciting possibilities it introduces.