Inspiration

Dementia, a class of neurodegenerative disorders that cause progressive impairments in memory, thinking, and behavior, is becoming more prevalent worldwide, and it results in a progressive, irreversible loss of neurons and brain function. According to the World Alzheimer Report, more than 46 million people live with dementia worldwide, a number expected to exceed 130 million by 2050. Some of our team members have watched family members develop the disease and seen their quality of life decline. As dementia progresses, people typically have difficulty keeping track of faces, objects, and finances, and the Covid-19 pandemic compounds the problem: in virtual and socially distanced settings, people with dementia may begin forgetting the people around them. Our shared passion for helping others, together with our interest in cloud solutions built on deep learning, inspired us to support people with neurodegenerative disorders while also tracking the disorder's onset.

What it does

Re-mind is a mobile application designed specifically for people with dementia. It opens with a facial-recognition login built on Azure's Face API, falling back to a traditional username-and-password login when no face is detected. We also included an account-creation flow for new users.
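For illustration, here is a minimal sketch of how the login step could call the Azure Face REST API from our Python layer. The endpoint, key, and the detect-then-verify flow against a stored face ID are assumptions; the app could equally use the person-group identify workflow.

```python
from typing import Optional
import requests

ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"  # hypothetical endpoint
KEY = "<face-api-key>"  # hypothetical key

def detect_face_id(image_bytes: bytes) -> Optional[str]:
    """Return the faceId Azure assigns to the first detected face, or None."""
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        data=image_bytes,
    )
    resp.raise_for_status()
    faces = resp.json()
    return faces[0]["faceId"] if faces else None

def faces_match(face_id: str, enrolled_face_id: str) -> bool:
    """Verify a freshly detected face against the user's enrolled face."""
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/verify",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"faceId1": face_id, "faceId2": enrolled_face_id},
    )
    resp.raise_for_status()
    return resp.json()["isIdentical"]
```

If no face is detected (detect_face_id returns None), the app falls back to the username-and-password screen.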

Once authenticated, users can scan people and objects with their phone's camera and later query those scans on dedicated pages, along with the time and geolocation of each capture. Each image is classified as either a face or an item and saved to the corresponding page, where the user can assign it a name and a description.
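A sketch of what a saved scan record might look like; the field names and shape are assumptions, not the exact schema used in the app.

```python
from datetime import datetime, timezone

def build_scan_record(kind: str, image_ref: str, lat: float, lon: float) -> dict:
    """kind is 'face' or 'item'; name and description are filled in later by the user."""
    return {
        "kind": kind,
        "image": image_ref,              # storage reference to the captured photo
        "name": None,                    # assigned later on the faces/items page
        "description": None,
        "scannedAt": datetime.now(timezone.utc).isoformat(),
        "location": {"lat": lat, "lon": lon},
    }
```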

The application also tracks neurodegenerative onset by logging how often users look up information about objects and people they have already saved. A logging page shows the date and time of the last interaction and a line plot of how many times the same person has been looked up over time.
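The aggregation behind that plot can be sketched as follows; the log format and field names are assumptions (the real log lives in our database), but the idea is simply to count lookups per day for a given person or object.

```python
from collections import Counter
from datetime import datetime

def interactions_per_day(log: list, subject_id: str) -> dict:
    """Count how many times a given face/object was queried on each day;
    this per-day series is what the tracking page plots."""
    days = [
        datetime.fromisoformat(entry["queriedAt"]).date().isoformat()
        for entry in log
        if entry["subjectId"] == subject_id
    ]
    return dict(sorted(Counter(days).items()))

# Example (hypothetical data):
# interactions_per_day(log, "grandson-marco") -> {"2021-01-16": 3, "2021-01-17": 5}
```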

How we built it

Re-mind was built with the Flutter SDK and the Dart programming language on the front end, and with Azure Cognitive Services, Azure Cosmos DB, Azure Function Apps, and TensorFlow Lite on the back end. We used Python to connect the two.

The Azure Cognitive Services Face API was used to collect face embeddings, which were then stored in an Azure Cosmos DB database (re-mind-db.mongo.cosmos.azure.com). Face data was stored with associated names, and object data with associated locations. We plan to add financial functionality through the Visa/PayPal developer APIs so that facial-recognition login can also protect access to financial accounts.
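Because the database uses Cosmos DB's MongoDB API, the Python layer can persist records with a standard MongoDB client. Below is a minimal sketch; the connection string, database/collection names, and document shape are assumptions.

```python
from pymongo import MongoClient

# Connection string for the Cosmos DB account (re-mind-db.mongo.cosmos.azure.com)
client = MongoClient("<cosmos-db-connection-string>")
faces = client["remind"]["faces"]

def save_face(user_id: str, name: str, embedding: list) -> None:
    """Store a face embedding with its associated name for later lookup."""
    faces.insert_one({
        "userId": user_id,
        "name": name,
        "embedding": embedding,   # vector returned by the face pipeline
    })
```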

Challenges we ran into

For most of the team, this was our first experience with either front-end or back-end development. We wanted to integrate the Visa API for financial tracking, but access required two days' notice, which did not fit our time frame, and coordinating across several time zones was difficult. Along the way we learned to manage cloud infrastructure through REST API calls with better security, and we learned how to quantize state-of-the-art face recognition models such as FaceNet and SSD-MobileNet for TensorFlow Lite on mobile, though time constraints kept us from integrating them fully.
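The quantization experiments followed the standard TensorFlow Lite post-training path, roughly as sketched below; the model file name is a placeholder for a pre-trained FaceNet checkpoint.

```python
import tensorflow as tf

# Load a pre-trained face-embedding model (hypothetical file name).
model = tf.keras.models.load_model("facenet_keras.h5")

# Convert to TensorFlow Lite with dynamic-range quantization to shrink the model
# enough to run on-device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("facenet_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```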

Accomplishments

We developed a working, credible solution to an emerging medical problem: an Azure cloud infrastructure that combines general object tracking with encrypted facial recognition for keeping track of people.

What we learned

Our team learned how to use REST APIs and how to integrate multiple cloud services into a cross-platform application. We all learned about the Azure platform and about securing API interactions, and we strengthened our collaboration and communication skills, since many of us had never worked on a project of this scale. Background research on dementia also taught us a great deal about patients' use cases and specific problems, many of which can be alleviated with technology available today.

What's next for Re-mind

• Test runs on target sample users
• Custom face detection models for improved accuracy with people wearing masks
• TTS accessibility integration
• Phone contact syncing
• Full Visa API/PayPal API implementation
• Multiple calendar sync and alarms
• Emergency contacts/care provider
• Adaptive interfaces for motor-impaired users
