Inspiration

We want to make healthcare more accessible to everyone. According to data from the AAMC, the United States will face a shortage of more than 100,000 doctors by 2030, and 4 in 10 insured adults say they have difficulty affording their deductible. These affordability challenges lead people to delay or skip care entirely.

What it does

Machine learning is already being used to analyze MRI and CT scans and help doctors determine whether a patient is sick. We brought this technology into preventative healthcare for everyone at home who can't afford, or doesn't have time for, a trip to the doctor, through an algorithm that examines your facial features closely.

When you see your face every day, it's hard to notice changes from a week ago, a month ago, or even a year ago, because your brain adapts to these tiny, gradual shifts. By examining your facial features closely, you can read details about your own health. But how do you know whether a facial change is cause for concern or harmless? For example, our app can detect smile droop, one of the three most important warning signs of a stroke.

Anyone, anywhere, anytime can take pictures of themselves, and Reflection helps you see trends in your health over time. Our algorithm uses machine learning to process the photos and triage your health conditions based on the signs and symptoms shown on your face. Reflection will help you determine whether you're healthy. If you need immediate care, the app will suggest you call 911. Otherwise, we follow up with an interactive survey of questions about your condition to help you better understand what's going on. All of this can happen in the palm of your hand, anywhere, anytime, without a visit to a clinic or hospital.

How we built it

We used Python to build a Flask backend that acts as a REST API our frontend can make requests to. We used Google's Vision API to find facial landmarks and calculate the risk of an illness or disease based on the locations of the landmarks relative to one another.
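As a rough sketch, the endpoint looks something like this (the route name, form field, and response shape are illustrative, not our exact code):

```python
# Sketch: Flask endpoint that forwards an uploaded photo to the
# Google Cloud Vision API and returns the detected facial landmarks.
from flask import Flask, request, jsonify
from google.cloud import vision

app = Flask(__name__)
vision_client = vision.ImageAnnotatorClient()  # needs GOOGLE_APPLICATION_CREDENTIALS set

@app.route("/analyze", methods=["POST"])
def analyze():
    # The iOS app POSTs the selfie as multipart form data ("photo" is a placeholder field name).
    content = request.files["photo"].read()
    response = vision_client.face_detection(image=vision.Image(content=content))
    if not response.face_annotations:
        return jsonify({"error": "no face detected"}), 422
    face = response.face_annotations[0]
    # Each landmark has a type (eye, mouth corner, ...) and a position.
    landmarks = {
        landmark.type_.name: {"x": landmark.position.x, "y": landmark.position.y}
        for landmark in face.landmarks
    }
    return jsonify({"landmarks": landmarks})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```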

In this hackathon, we implemented a stroke risk calculation based on the most obvious yet often overlooked sign of a stroke: a drooping mouth. Using the x and y coordinates of the eyes and mouth, together with k-means clustering, we found that in most stroke cases the line through the centers of the two eyes and the line through the two corners of the mouth form an angle of 25 degrees or more.
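The geometry reduces to comparing the slopes of the two lines. A minimal sketch, assuming the four landmark coordinates have already been extracted from the Vision API response:

```python
import math

def droop_angle(left_eye, right_eye, mouth_left, mouth_right):
    """Angle (degrees) between the line through the two eye centers
    and the line through the two mouth corners. Points are (x, y) pairs."""
    eye_angle = math.atan2(right_eye[1] - left_eye[1], right_eye[0] - left_eye[0])
    mouth_angle = math.atan2(mouth_right[1] - mouth_left[1], mouth_right[0] - mouth_left[0])
    angle = abs(math.degrees(eye_angle - mouth_angle))
    return min(angle, 360 - angle)  # normalize to [0, 180]

# Flag stroke risk when the two lines diverge past the threshold we found.
DROOP_THRESHOLD_DEGREES = 25

def at_risk(landmarks):
    return droop_angle(landmarks["LEFT_EYE"], landmarks["RIGHT_EYE"],
                       landmarks["MOUTH_LEFT"], landmarks["MOUTH_RIGHT"]) >= DROOP_THRESHOLD_DEGREES
```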

In addition to using Google's Vision API, we implemented a convolutional neural network in TensorFlow, trained on a dataset we mined and assembled ourselves, to detect sleep deprivation from the selfies you take.
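The network is a small stack of convolution and pooling layers ending in a sigmoid output for a binary rested/sleep-deprived prediction; the layer sizes in this sketch are illustrative rather than our exact architecture:

```python
import tensorflow as tf

# Small binary classifier: sleep-deprived vs. rested, from a cropped face image.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(sleep-deprived)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```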

We implemented the frontend in Swift 4 and Objective-C using Xcode, building an iOS app that users use to take photos and upload them to Google Firebase and our backend for neural network predictions and facial landmark analysis.

Our entire backend is hosted on Google Compute Engine.

Challenges we ran into

The biggest challenge was finding a dataset for machine learning. We weren't able to find one, so we created our own: we used Selenium to scrape images from the web and OpenCV to detect and crop faces, producing a dataset we could train our model on.
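In outline, the pipeline looked like this (the search URL is a hypothetical placeholder, and the cropping parameters are illustrative):

```python
import cv2
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

# 1. Scrape image URLs from a results page (URL below is a placeholder).
driver = webdriver.Chrome()
driver.get("https://example.com/image-search?q=tired+face")
urls = [img.get_attribute("src") for img in driver.find_elements(By.TAG_NAME, "img")]
driver.quit()

# 2. Download each image and crop the face with OpenCV's bundled Haar cascade.
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
for i, url in enumerate(urls):
    raw = requests.get(url, timeout=10).content
    with open(f"/tmp/raw_{i}.jpg", "wb") as f:
        f.write(raw)
    image = cv2.imread(f"/tmp/raw_{i}.jpg")
    if image is None:
        continue
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for x, y, w, h in detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(image[y:y+h, x:x+w], (128, 128))
        cv2.imwrite(f"dataset/face_{i}.jpg", face)
```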

Some of the other challenges we ran into were deploying everything on the VM and testing it. There were many compatibility issues, including conflicts between classes and functions written by different team members. We also struggled to connect to and make requests against the Google Cloud APIs. Eventually we overcame these issues by working as a team.

Accomplishments that we're proud of

The biggest accomplishment was creating the dataset for training the machine learning model. There is no public dataset like this on Kaggle or anywhere else on the web. We put together 1,000 labeled images and trained our model to a loss of 0.14 and 97% accuracy on the validation set, with 88% accuracy on the test set.
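Training and evaluation followed the standard Keras flow; roughly (the train_ds/val_ds/test_ds names stand in for tf.data pipelines built from our labeled images):

```python
# Continuing the model sketch above; datasets are placeholder pipelines.
model.fit(train_ds, validation_data=val_ds, epochs=20)
test_loss, test_acc = model.evaluate(test_ds)
print(f"test accuracy: {test_acc:.2%}")  # ~88% on our held-out set
```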

Another big accomplishment was brainstorming and researching relationships between facial landmarks in patients at risk of stroke. Using k-means clustering and manual assessment of thousands of images of stroke patients and healthy people, we found a general trend and used it to design our stroke-risk algorithm.
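As a sketch of the clustering step (using scikit-learn, with illustrative angle values; our actual analysis ran over the full image set):

```python
import numpy as np
from sklearn.cluster import KMeans

# One droop angle per labeled image, computed as in the droop_angle()
# sketch above. Values here are illustrative.
angles = np.array([[2.1], [3.4], [27.9], [31.2], [1.0], [26.4]])

# Two clusters: roughly "symmetric" faces vs. "drooping" faces.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(angles)
centers = sorted(c[0] for c in kmeans.cluster_centers_)
threshold = sum(centers) / 2  # midpoint between the two cluster centers
print(f"suggested risk threshold: {threshold:.1f} degrees")
```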

We're also proud of how we came together as a team to solve issues and hit milestones. We worked efficiently both individually and as a team to build an all-around incredible project.

What we learned

The biggest lesson for all of us was staying humble and working together to solve problems and brainstorm ideas.

We also learned how to set up a backend server with Flask and how to make requests to Google's Vision API.

What's next for Reflection?

The possibilities for this app are endless. We want to make it open source, so anyone can upload datasets they've created to triage and predict an illness or disease. Of course, we would need professional doctors to validate these datasets before they are used for machine learning.

We also want to work with hospitals to create datasets and develop more algorithms that find relationships in the human face. People often say the eyes are the window to the soul. Your face is a window too: by looking closely at your facial features, you can read details about your own health.

Lastly, we want to improve the interface to be more user-friendly, powerful, and efficient.

Built With

firebase, flask, google-cloud-vision, google-compute-engine, objective-c, opencv, python, selenium, swift, tensorflow
