Inspiration

Long waiting queues! Who wouldn't want to speed up processes that take forevvveeerrrrrrrrrr... We asked ourselves:

  • What is the single most annoying thing that everyone has to deal with from time to time?
  • Would speeding up the process make it more efficient for others?
  • How important would it be for the person to make this process speedy?
  • Who would this benefit? Both the person and the organizations/companies related to them?

WE SCREAMED AT THE SAME TIME "ALLHEALTH". Not really, but you get the gist. That's when we came up with our idea.

What it does

Our app is an AI-powered diagnostic tool that collects multi-modal user input—such as images, audio, and text—and processes this data to offer potential health diagnoses and advice. It’s designed to work like a virtual medical assistant, allowing users to input symptoms through a conversational interface and receive quick diagnostic feedback.

We have built-in family functionality to accommodate multiple users on one app. It also helps doctors quickly access data from users, if users wish to share it with particular hospitals, providing invaluable datasets and faster responses from doctors.

How we built it

Frontend

The frontend is built using React, offering a dynamic and responsive user interface where users can interact with a chatbot-like system. Users can submit data such as:

  • Audio files (e.g., for cough detection), via the HTML file input and conversion to base64 format.
  • Image files (e.g., to analyze visual injuries or conditions like rashes and bruises), using a similar image upload mechanism.
  • Text-based responses to questions about symptoms and conditions.

The styled-components library is used to create customizable and adaptive UI elements, such as a body-part selector that highlights affected areas dynamically.

Backend

The backend is powered by Flask, a lightweight Python web framework, which handles:

  • Processing user inputs: audio, images, and text are analyzed using separate AI models.
  • Communication between the React frontend and backend via API routes.
  • Audio processing with Librosa and Pydub. These libraries convert audio files into frequency and decibel information, which are then analyzed to detect abnormalities (e.g., cough sound analysis).
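To make the audio step concrete, here is a minimal sketch of extracting decibel and dominant-frequency information from a waveform. It uses NumPy's FFT as a stand-in for the Librosa/Pydub pipeline described above; the function name and thresholds are illustrative, not the app's actual code.

```python
import numpy as np

def extract_audio_features(waveform: np.ndarray, sample_rate: int):
    """Extract a decibel level and dominant frequency from a mono waveform."""
    # RMS-based decibel level relative to full scale
    rms = np.sqrt(np.mean(waveform ** 2))
    db = 20 * np.log10(rms + 1e-12)

    # Dominant frequency via a real-valued FFT
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate)
    dominant_freq = freqs[int(np.argmax(spectrum))]
    return db, dominant_freq

# Example: one second of a 440 Hz sine at half amplitude
sr = 22050
t = np.arange(sr) / sr
wave = 0.5 * np.sin(2 * np.pi * 440 * t)
db, freq = extract_audio_features(wave, sr)
```

Features like these (energy over time, spectral peaks) are what an abnormality detector such as a cough classifier would consume downstream.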

For image analysis, the app uses a pre-trained Convolutional Neural Network (CNN) model to classify injuries based on visual input. The images are decoded and processed as inputs to the CNN.
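The shape handling around the CNN can be sketched as below. This is a hypothetical minimal PyTorch classifier (the class name, layer sizes, and class count are assumptions, since the actual pre-trained model isn't specified), but the batched input, logits, and softmax confidence scores work the same way with a real pre-trained network.

```python
import torch
import torch.nn as nn

# Hypothetical minimal CNN; the real app uses a larger pre-trained model.
class InjuryClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = InjuryClassifier()
model.eval()
with torch.no_grad():
    # A decoded, resized image batch: (batch, channels, height, width)
    logits = model(torch.rand(1, 3, 224, 224))
    probs = torch.softmax(logits, dim=1)  # per-class confidence scores
```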

The text-based input relies on a combination of sentiment analysis and natural language processing (NLP) models. We use the OpenAI API to analyze the text and match it with relevant diagnostic outcomes.

Data Handling

  • Images: After receiving base64-encoded image data from the frontend, the backend decodes and pre-processes the images, which are passed through the CNN for classification.
  • Audio: The base64-encoded audio files are converted back into waveform data, and features such as decibel levels and frequencies are extracted to detect patterns related to specific ailments (e.g., cough detection).
  • Text: User text inputs are analyzed through sentiment analysis models to evaluate the severity of symptoms, and NLP models match these symptoms with known conditions.

Diagnosis and Confidence Scores

The app provides a list of potential diagnoses with confidence scores based on the user's input data. The backend leverages machine learning models to assess the likelihood of different conditions and offers suggestions for treatment or further medical consultation.
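The decoding step shared by the image and audio paths can be sketched with the standard library alone. The helper name and the data-URL prefix handling are illustrative assumptions; the core is just `base64.b64decode` on the payload the frontend sends.

```python
import base64

def decode_data_url(data_url: str) -> bytes:
    """Decode a base64 payload (as sent by the frontend) into raw bytes."""
    # Strip an optional "data:<mime>;base64," prefix if present
    if data_url.startswith("data:") and "," in data_url:
        data_url = data_url.split(",", 1)[1]
    return base64.b64decode(data_url)

# Round-trip example with a fake image payload
payload = "data:image/png;base64," + base64.b64encode(b"\x89PNG...").decode()
raw = decode_data_url(payload)
```

The resulting bytes are then handed to the image pre-processor or the audio loader to reconstruct the waveform.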

Tech Stack

  • Frontend: React, styled-components, HTML5, CSS3, JavaScript
  • Backend: Flask (Python)
  • Audio processing: Pydub, Librosa
  • Image processing: Pre-trained CNN for classification, PyTorch, TensorFlow
  • Text processing: OpenAI API for NLP and sentiment analysis
  • Data transmission: RESTful APIs using Axios for communication between the React frontend and Flask backend
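On the Flask side, the REST endpoints the frontend talks to via Axios look roughly like this. The route name, JSON field names, and placeholder response are assumptions for illustration; the real routes dispatch the inputs to the audio, image, and text models.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical endpoint; the actual route names are not documented here.
@app.route("/api/diagnose", methods=["POST"])
def diagnose():
    data = request.get_json(force=True)
    text = data.get("text", "")
    # In the real app, inputs are routed to the audio/image/text models;
    # here we return a placeholder diagnosis list with confidence scores.
    return jsonify({
        "diagnoses": [{"condition": "unknown", "confidence": 0.0}],
        "received_chars": len(text),
    })

# The React frontend would POST here with Axios; for a quick local check
# we can use Flask's built-in test client instead of a running server.
client = app.test_client()
resp = client.post("/api/diagnose", json={"text": "persistent cough"})
```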

Challenges we ran into

With this being our first ever hackathon, we faced several challenges, from time management to technical hurdles. One of the biggest obstacles was integrating multiple technologies—such as audio processing with Pydub, image classification with pre-trained models, and using the OpenAI API for NLP—into a cohesive app. We also encountered difficulties in ensuring seamless communication between the React frontend and Flask backend, especially when handling large files like audio and images. Debugging these real-time interactions and ensuring cross-browser compatibility added additional complexity, but we persisted and learned a lot throughout the process.

Accomplishments that we're proud of

This hackathon was the first for both of us, and the fact that, as a duo, we came out with a working product, one we believe has the potential to become something much bigger than a hackathon submission, is an accomplishment in itself. From waking up 13 hours into the competition to finishing our final working demo only 30 minutes before submission, we're proud to announce the release of Doctor Doctor. We're proud to have accomplished a convoluted model integrated into a ChatGPT wrapper with a touch of Fourier on the side... We're proud to have created a front end that is stylish and cool and wow... Lastly, we're proud to have made a tool that may one day contribute to making the world a better place (Ooo, dramatic).

What we learned

Nothing... we're the best (lol jk). In all seriousness, we've learned quite a bit over the last 36 hours:

1) Don't oversleep at the start of the competition (take shifts if need be)
2) Maybe have a team of 4; damn, it was rough with only 2 people
3) Come to the competition with some level of understanding of who's doing what
4) General new technical concepts, such as: random forests; most of React, because wow, LeetCode != building a product from scratch; the fundamentals of working with and storing data to use at a later date; and a lot of CSS concepts I never thought I'd struggle so much with.

Okay, for real we really did learn:

  • We learned how to collaborate as a team and get comfortable with version control on GitHub (HATE MERGE CONFLICTS)
  • Learning how to seamlessly integrate a tech stack while still including ML models was definitely a hassle, but a rewarding one
  • Using React and Flask for the first time to create a project was a tough hurdle, but we passed it
  • Scraping APIs from the most random resources and learning how to use them effectively felt like discovering fire

What's next for Doctor Doctor

Doctor Doctor has the potential to shake up healthcare in a big way, especially for hospitals and underserved regions. Imagine reducing those crazy long waiting times at hospitals by using quick, accurate assessments to figure out who needs urgent care first. Plus, it could bring AI-powered medical knowledge to places that really need it, like countries where access to doctors is limited. This app could help doctors with initial diagnoses, give personalized health advice, and empower people to take control of their health. The idea is to make healthcare more accessible, especially for those who usually get left out, and help ease the load on already overwhelmed systems. It's all about making healthcare faster, smarter, and available to everyone who needs it. Sponsorships from hospitals and pharmaceutical markets could also help make medication more accessible.
