One of the biggest disparities in healthcare is the lack of widespread knowledge about many common health issues. We wanted to create an application that helps close this gap, making room for more universal access to healthcare.
What it does
Our application allows users to create accounts. Data from the signup form, including name, email, and phone number, are stored in a MongoDB database, along with responses to a follow-up questionnaire that asks for the user's demographic information, past health history, and the phone number of their primary care physician.
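As a rough sketch of the step above, the signup form and the follow-up questionnaire can be merged into a single document before it is inserted into MongoDB. The field names here are illustrative, not our exact schema:

```javascript
// Hypothetical shape of the user record we store in MongoDB.
// Signup data and questionnaire answers end up in one document.
function buildUserDocument(signup, questionnaire) {
  return {
    name: signup.name,
    email: signup.email,
    phone: signup.phone,
    demographics: questionnaire.demographics,
    healthHistory: questionnaire.healthHistory,
    physicianPhone: questionnaire.physicianPhone, // used later for the Twilio call
    createdAt: new Date(),
  };
}
```

A document like this would then be passed to the MongoDB driver's `insertOne` on a users collection.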
Users can then upload a recently taken photo to the application. The photo is displayed and analyzed using facial emotion recognition. Based on the most prominent emotion detected in the image, the user is redirected to an additional questionnaire covering symptoms of health issues associated with that emotion. Based on the user's answers, an automated voice call is made to the primary care physician's number the user provided, describing the symptoms the user has been experiencing and the possible health issue they may indicate, and urging the physician to call the user back to schedule an appointment.
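The emotion-to-questionnaire step can be sketched like this: the Face API returns a set of emotion scores, we pick the highest-scoring one, and map it to a questionnaire route. The route names and emotion-to-issue mapping below are hypothetical examples, not our exact pairings:

```javascript
// Illustrative mapping from a detected emotion to a questionnaire route.
const QUESTIONNAIRES = {
  sadness: '/questionnaire/depression',
  anger: '/questionnaire/stress',
  fear: '/questionnaire/anxiety',
  neutral: '/questionnaire/general',
};

// Pick the emotion with the highest score from an object like
// { happiness: 0.1, sadness: 0.8, anger: 0.1 }.
function dominantEmotion(scores) {
  return Object.keys(scores).reduce((best, emotion) =>
    scores[emotion] > scores[best] ? emotion : best
  );
}

// Resolve the questionnaire to redirect the user to, with a fallback.
function questionnaireFor(scores) {
  return QUESTIONNAIRES[dominantEmotion(scores)] || '/questionnaire/general';
}
```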
Watch our video presentation here: https://www.powtoon.com/online-presentation/c1upbuXr3HG/?mode=movie#/
How we built it
We built the frontend using HTML, CSS, and Bootstrap. We built the backend mostly in Node.js, utilizing Express.js and EJS. We used MongoDB to store user-inputted data, the Microsoft Azure Face API to analyze emotions from uploaded images, and the Twilio API to call the user's primary care physician.
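For the Twilio piece, the automated call's speech is described with TwiML, Twilio's XML format for voice instructions. This is a minimal sketch of how such a message might be assembled; the function name and wording are illustrative, and real code should XML-escape the interpolated values:

```javascript
// Build the TwiML that tells Twilio what to say on the automated
// call to the physician. Inputs: patient name, a list of symptom
// strings, and the possible health issue they may indicate.
function buildCallTwiml(patientName, symptoms, possibleIssue) {
  const message =
    `Hello. Your patient ${patientName} has reported the following ` +
    `symptoms: ${symptoms.join(', ')}. These may indicate ${possibleIssue}. ` +
    `Please call them back to schedule an appointment.`;
  return `<Response><Say voice="alice">${message}</Say></Response>`;
}
```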
Challenges we ran into
This was our first time using Node.js in any context, and we had no prior experience with MongoDB, so many problems arose from both. Additionally, we had trouble using the Microsoft Azure Face API to analyze a locally hosted image rather than one already on the Internet. We also ran into issues linking .js files together.
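The local-image problem comes down to how the request is sent: the Face API's detect endpoint accepts either JSON containing a public image URL, or the raw image bytes with a `Content-Type` of `application/octet-stream`. A sketch of building the latter kind of request (endpoint and key are placeholders):

```javascript
// Assemble a Face API detect request for a local image. Instead of
// posting JSON like { url: "https://..." }, we send the image's raw
// bytes (e.g. from fs.readFile) as the body.
function faceDetectRequest(imageBuffer, endpoint, apiKey) {
  return {
    url: `${endpoint}/face/v1.0/detect?returnFaceAttributes=emotion`,
    options: {
      method: 'POST',
      headers: {
        'Ocp-Apim-Subscription-Key': apiKey,
        'Content-Type': 'application/octet-stream', // raw bytes, not JSON
      },
      body: imageBuffer,
    },
  };
}
```

The returned object can be fed to any HTTP client; the response is a JSON array of detected faces, each with per-emotion scores under `faceAttributes.emotion`.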
Accomplishments that we're proud of
What we learned
We basically had to learn everything we used for this project over the span of three days. We ran into lots of issues, but we also learned a lot, especially about Node.js. We hope to use these languages, frameworks, databases, and APIs in the future for other projects and hackathons.
What's next for MySympTracker
In the future, we hope to integrate Microsoft's image processing and object recognition to help characterize other visible conditions such as rashes and burns. We also hope to better visualize the data that are received and stored.