Inspiration

With the ongoing crisis, many people are deterred from seeking medical help because visiting hospitals and clinics is unsafe. Additionally, with most hospitals at maximum capacity, it is very difficult for patients to get a spot at their local hospital. Because of this, we decided that instead of the patient going to the doctor, we would create a virtual platform that gives patients access to their local physician right from home. This matters beyond the crisis too, since appointments are often hard to get in the first place: many doctors rarely accept Medicaid appointments because of low reimbursement, and many people around the world lack access to the necessary healthcare altogether. Our platform, by contrast, gives users access to a doctor from wherever they are, so issues can be identified before they worsen. The user records the data usually collected during an in-person physical and uploads it to an app that the doctor accesses through our platform. The app automatically runs deep learning predictions on the patient data to give diagnostic suggestions to the doctor, easing the doctor's workload.

In designing the brand name, we chose Ex Animo because "Ex Animo" means "from the heart" in Latin. Since the heart is the most essential part of the body, our platform provides a full diagnosis for the heart so that patients are able to identify any potential issues early in the process. After deciding on a name, we registered our own domain, www.ex-animo.space, through Domain.com.

What it does

Our platform essentially provides a thorough online physical examination. The patient just follows the instructions shown on the device's LCD screen, which guide them through recording data with the various sensors connected to the Pi and Arduino. The data is uploaded to a Google Cloud Storage bucket, where it is accessed by a web app. Doctors can log into the web app to view their patients' files and identifying information, and can add new patients through a form. Convolutional neural network models that we trained run predictions on the heartbeat and breathing audio clips to aid the doctor, classifying heartbeat irregularities as well as a multitude of lung conditions, such as pneumonia and COPD.

How we built it

The hardware platform was built using a Raspberry Pi and an Arduino. The Raspberry Pi controls a camera and a custom electronic stethoscope that we made. The camera takes photos of the patient for the doctor to see. The stethoscope consists of the head of a normal stethoscope with an electret microphone in the tubing to record sound; the microphone connects to the Pi through a 3.5 mm-to-USB adapter. An LCD screen on the Pi guides the patient through the process. The Arduino reads an MLX90614 infrared temperature sensor over I2C to measure the patient's temperature, and the Arduino and Pi communicate over a serial connection.

The Raspberry Pi uploads the sensor data as files to a Google Cloud Storage bucket. The web app, which the doctor accesses, reads from that bucket and displays links to the files for each patient and examination date. The web app was built using Meteor.js, HTML, and CSS. A script running on a separate server downloads the files from Cloud Storage, runs the machine learning models, and uploads the results back. For machine learning, we used the Python Keras library with a TensorFlow backend, with two public Kaggle datasets as training data for classifying heart and lung diseases from stethoscope audio. The Librosa Python library was used to extract MFCCs (mel-frequency cepstral coefficients), which were fed as the input layer to the convolutional neural networks. The heartbeat and breathing CNNs were each trained for 250 epochs with a batch size of 256 on an RTX 2080 Ti.
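To make the Pi-to-cloud flow concrete, here is a minimal sketch of the capture-and-upload step, assuming the pyserial and google-cloud-storage packages; the serial port, bucket name, and file paths are placeholders rather than our actual configuration:

```python
# Minimal sketch of the Pi-side capture/upload step.
# Assumes the Arduino prints one temperature reading per line over serial.
# The port, bucket name, and paths below are placeholders.
import subprocess
import serial
from google.cloud import storage

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=2)  # Arduino over USB serial
temperature = ser.readline().decode().strip()         # e.g. "36.9"

# Record 20 seconds of stethoscope audio from the USB microphone.
subprocess.run(
    ["arecord", "-D", "plughw:1,0", "-d", "20", "-f", "cd", "heartbeat.wav"],
    check=True,
)

# Upload both readings to a per-patient folder in Cloud Storage.
client = storage.Client()
bucket = client.bucket("ex-animo-patient-data")  # placeholder bucket name
bucket.blob("patient123/exam1/temperature.txt").upload_from_string(temperature)
bucket.blob("patient123/exam1/heartbeat.wav").upload_from_filename("heartbeat.wav")
```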
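On the ML side, the pipeline from audio clip to trained model looked roughly like the sketch below. The number of MFCCs, the fixed frame width, and the layer sizes are illustrative assumptions, not our exact architecture:

```python
# Sketch of the MFCC -> CNN training pipeline. n_mfcc, the frame width,
# and the layer sizes are illustrative, not our exact architecture.
import numpy as np
import librosa
from tensorflow.keras import layers, models

def clip_to_mfcc(path, n_mfcc=40, max_frames=200):
    """Turn an audio file into a fixed-size (n_mfcc, max_frames) MFCC array."""
    y, sr = librosa.load(path)                              # decode the clip
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    mfcc = mfcc[:, :max_frames]                             # truncate long clips
    pad = max_frames - mfcc.shape[1]
    return np.pad(mfcc, ((0, 0), (0, pad)))                 # zero-pad short ones

num_classes = 6  # placeholder; one per heart/lung condition label
# In practice X and y are built by mapping clip_to_mfcc over the Kaggle clips;
# random placeholders here just keep the sketch self-contained.
X = np.random.rand(512, 40, 200, 1).astype("float32")
y = np.eye(num_classes)[np.random.randint(0, num_classes, 512)]

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(40, 200, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),  # regularization against the overfitting we hit
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=250, batch_size=256, validation_split=0.2)
```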

Challenges we ran into

We faced many dependency conflicts on the Raspberry Pi when trying to run our machine learning models, and resolving them would have taken too much of the limited time we had at the hackathon. Instead, we improvised a solution where a separate computer runs the predictions and uploads the results to the cloud. We also ran into several templating issues on the web app, which took some time to resolve, and we had to deal with overfitting in the ML models.
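The workaround was roughly the polling loop sketched below; the bucket name, file layout, and result format are placeholders, and clip_to_mfcc refers to the preprocessing helper from the sketch in the previous section:

```python
# Rough shape of the improvised workaround: a separate machine polls
# Cloud Storage for new audio, runs the CNN, and writes the result back.
# The bucket name, model file, and result format are placeholders.
import json
import time
from google.cloud import storage
from tensorflow.keras.models import load_model

client = storage.Client()
bucket = client.bucket("ex-animo-patient-data")  # placeholder
model = load_model("heartbeat_cnn.h5")           # placeholder model file
seen = set()

while True:
    for blob in client.list_blobs("ex-animo-patient-data"):
        if blob.name.endswith(".wav") and blob.name not in seen:
            blob.download_to_filename("clip.wav")
            # clip_to_mfcc from the earlier sketch; add batch/channel dims.
            features = clip_to_mfcc("clip.wav")[None, :, :, None]
            probs = model.predict(features)[0]
            result = json.dumps({"clip": blob.name, "probs": probs.tolist()})
            bucket.blob(blob.name + ".prediction.json").upload_from_string(result)
            seen.add(blob.name)
    time.sleep(30)  # poll for new uploads every 30 seconds
```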

Accomplishments that we're proud of

  • Created two convolutional neural network (CNN) models that interpret breathing and heartbeat audio to automatically flag potentially irregular breathing and/or heartbeat and classify it into specific diseases.
  • Built a web app for the doctor to view patient files. The UI turned out pretty nice! The doctor can log into their account and add patients, and the back end automatically organizes each patient's files for the doctor to see.
  • Successfully called the Google Cloud Storage API from the Meteor.js server side.
  • Streamlined uploading files from the Raspberry Pi to Cloud Storage, and pulling them from Cloud Storage into the web app for the user.
  • Built the electronic stethoscope using an electret microphone, stethoscope head, and silicone tubing.

What we learned

  • We learned how to write better templates for Meteor.js. Since a doctor would be using the website, we needed to design a UI that was elegant and powerful to use. Through this, we’ve improved our design skills.

  • This was the first time we used Google Cloud Storage (GCS) to upload files, and it was an interesting experience. We also learned how to preprocess audio files with the Librosa library to extract MFCCs, the relevant features of the audio. In learning about MFCCs, we learned about audio processing and the mathematics behind them, including Fourier transforms and the mel scale (summarized below). Additionally, we learned how to use an LCD screen with the Pi.
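For reference, the mel scale maps a frequency f in Hz to mels via m = 2595 · log10(1 + f / 700), so equal steps in m roughly track equal steps in perceived pitch; MFCCs then come from taking the log energies of mel-spaced filter banks over the Fourier spectrum and applying a cosine transform.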

What's next for Ex Animo

We want to add more sensors to more accurately mimic the in-person well-visit experience, and to scale the project by replacing the Pi-and-Arduino setup with a dedicated microcontroller board, which would let us package the hardware in a small form factor and distribute it. We also want to add more functionality to the web app so doctors can interact with their patients more, and expand our database collections to take on more patients and files. Additionally, we want to keep improving our machine learning models to push their accuracy as close to 100% as we can, and to build machine learning models for the photos to identify potential issues with other parts of the body.
