Inspiration

aidn was born out of a need for better voice agents and data digitization models in the healthcare industry. We put ourselves in our users' shoes and worked to mitigate barriers to accessible healthcare at every stage, from the first sign of a symptom to the moment a user is already in an ambulance. The Aid Network is designed to be by your side as an asset that helps you achieve wellness faster and better.

We felt that the inertia behind using medical records to get information was too high: the medical record transfer process is antiquated, slow, and requires a great deal of human effort. We saw a need for a centralized, digital source of healthcare data, secured by facial recognition, that gives healthcare providers and first responders the information they need to maximize quality of care instantaneously.

What it does

aidn has two major tracks of use: long-term, including secure maintenance of medical records and interactive educational Alexa skills, and short-term, which encompasses the aidn Medical Emergency feature. When users sign up, they enter their chosen credentials and an emergency contact, and can take or upload a user photo. Once signed up and logged in, a user can enter their medical history, as well as a customizable emergency message to display in the event of an emergency. Azure Face runs facial recognition on the user photo, so in an emergency any individual with aidn just has to snap a photo of the injured person’s face to 1) send information to their emergency contact and 2) get access to the vital medical information that the person is comfortable sharing. Integration with Amazon Alexa lets users register their medical information through a voice assistant, increasing accessibility, and adds health challenges that the user can request.
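The emergency flow above can be sketched roughly as follows. This is a minimal illustration, not our production code: the function and field names are hypothetical, and the Azure Face lookup is replaced by a stand-in `match_face` callable.

```python
# Minimal sketch of the aidn Medical Emergency flow (hypothetical names).
# In the real app, match_face would be backed by Azure Face identification
# and notify would send an SMS/push message to the emergency contact.

def handle_emergency(photo, users, match_face, notify):
    """Match a snapped photo to a registered user, alert their emergency
    contact, and return only the medical info the user opted to share."""
    user_id = match_face(photo, users)  # stand-in for the Azure Face lookup
    if user_id is None:
        return {"status": "no_match"}
    user = users[user_id]
    # 1) alert the registered emergency contact with the custom message
    notify(user["emergency_contact"], user["emergency_message"])
    # 2) surface only the medical information the user chose to share
    return {"status": "match", "shared_info": user["shared_medical_info"]}

# Usage with stubbed-out dependencies:
users = {
    "u1": {
        "emergency_contact": "+1-555-0100",
        "emergency_message": "I have a medical emergency.",
        "shared_medical_info": {"allergies": ["penicillin"]},
    }
}
sent = []
result = handle_emergency(
    photo=b"...",
    users=users,
    match_face=lambda photo, users: "u1",
    notify=lambda contact, msg: sent.append((contact, msg)),
)
```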

How I built it

One of the first things we did as a team was to sit down and create a task list on Trello for the entire weekend. This let us write down our thoughts and vision for aidn long before any code had been written, and put us all on the same page for the next 36 hours.

### High-Level Design

On account creation, we train our machine learning model on Azure, hashing the incoming face data and saving only the hash, never the user's actual facial data.
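The hash-only storage idea can be sketched like this. It is a simplified stand-in, not our actual pipeline: we assume the recognition service hands back a numeric face embedding, and we show only the step that turns it into a digest safe to persist.

```python
import hashlib
import struct

def hash_face_embedding(embedding):
    """Digest a face embedding so only the hash is stored, never raw
    facial data.  `embedding` stands in for whatever float vector the
    recognition service returns; SHA-256 is one-way, so the stored value
    can serve as an identifier but cannot reconstruct the face."""
    packed = struct.pack(f"{len(embedding)}f", *embedding)
    return hashlib.sha256(packed).hexdigest()

digest = hash_face_embedding([0.12, -0.34, 0.56])
```

Note that a cryptographic hash only supports exact matching of identical embeddings; fuzzy matching of new photos still has to happen inside the recognition service before anything is hashed.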

All data is stored on Firestore to allow easy access for our web-based chatbot, built on Dialogflow. All pictures are stored on Google Cloud Storage, and services for the chatbot are spawned and run on demand on Cloud Run.
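A user record under this layout might look like the sketch below. The field names are hypothetical, but the shape reflects the design above: Firestore holds the profile and the face hash, while the photo itself lives in Cloud Storage and is referenced by URL.

```python
# Hypothetical document shape for a user record in Firestore.
# Only a face hash and a Cloud Storage URL are kept here -- never the
# raw photo bytes or facial data.

def build_user_record(name, face_hash, photo_url,
                      emergency_contact, emergency_message,
                      medical_history):
    """Assemble the Firestore document for one user."""
    return {
        "name": name,
        "face_hash": face_hash,            # output of the hashing step
        "photo_url": photo_url,            # gs:// path into Cloud Storage
        "emergency_contact": emergency_contact,
        "emergency_message": emergency_message,
        "medical_history": medical_history,
    }

record = build_user_record(
    name="Jane Doe",
    face_hash="ab12...",
    photo_url="gs://aidn-photos/jane.jpg",
    emergency_contact="+1-555-0100",
    emergency_message="I have a medical emergency.",
    medical_history={"allergies": ["penicillin"]},
)
```

The actual write would go through the Firestore client, e.g. `firestore.Client().collection("users").document(user_id).set(record)` with the `google-cloud-firestore` library.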

Our Alexa experience lets a user complete a health checkup through any Alexa device anywhere in the world. All data is, of course, synced back to us so we can crunch the numbers, create a diagnosis, and share it with our network of doctors, should you need help from them in the future!
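A checkup intent handler for such a skill could be sketched as below. The intent and slot names (`symptom`) are hypothetical, but the request and response shapes follow the standard Alexa Skills Kit JSON format.

```python
# Sketch of an Alexa intent handler for a health checkup (names are
# hypothetical).  record_checkup stands in for syncing the answer back
# to our datastore.

def handle_checkup_intent(event, record_checkup):
    """Read the reported symptom from the intent's slots, log it, and
    return a spoken confirmation in Alexa's response JSON format."""
    slots = event["request"]["intent"]["slots"]
    symptom = slots["symptom"]["value"]
    record_checkup(event["session"]["user"]["userId"], symptom)
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": f"Got it, I've logged {symptom}. "
                        "Your care network will be notified if "
                        "follow-up is needed.",
            },
            "shouldEndSession": True,
        },
    }

# Usage with a sample Alexa IntentRequest:
event = {
    "session": {"user": {"userId": "user-123"}},
    "request": {
        "type": "IntentRequest",
        "intent": {"name": "HealthCheckupIntent",
                   "slots": {"symptom": {"value": "headache"}}},
    },
}
logged = []
response = handle_checkup_intent(event, lambda uid, s: logged.append((uid, s)))
```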

Challenges I ran into

One of the hardest parts of this project was figuring out Azure Cognitive Services for facial recognition of users from a picture. It took Edward and Subhankar almost 5 straight hours of early-morning debugging just to get an MVP running.

In the end, we persevered and created a facial recognition service that identifies people in a matter of seconds and solves a huge problem - we’re psyched!

Accomplishments that I'm proud of

At many points the team was uncertain about the future of this project because of the sheer magnitude of everything we did - some of us had used AWS before, but beyond that we started from scratch learning the “big 3” cloud providers, and made them work seamlessly with each other.

Our machine learning models are also very impressive - we stress-tested them extensively and are very proud of how we distributed training time across users to keep latency low.

What I learned

Caroline had used React previously, but had never interacted with any of the APIs or other technologies implemented in this project. She learned a lot about how these technologies all get integrated and about implementing APIs end-to-end.

Edward had previously worked with other cloud platform providers, but this was his first time learning Azure. He very much enjoyed using all 3 major cloud providers' technologies in this project.

Panda had no prior experience with cloud providers, but learned Azure for deployment, machine learning, and continuous delivery.

What's next for aidn

This is just the start of aidn’s possibilities. In the future, we can implement a network of verified healthcare professionals within the app, so that in an emergency, instantaneous access to medical information and a video call with a professional are only a tap away. We hope to expand aidn’s impact and bring accessible healthcare to all.
