Inspiration
People with chronic illnesses constantly grapple with the stress of not knowing whether a symptom signals greater harm, especially when they see multiple doctors, each with different considerations to weigh. Communication becomes difficult as patients must repeatedly discuss new symptoms or concerns with different doctors. Given these difficulties, and the fact that chronic conditions account for roughly 70% of all deaths in the United States, we wanted to give these patients a way to communicate with doctors and receive potential diagnoses grounded in their medical background. Having a multi-agent LLM address some of a patient's concerns is not only cost-effective compared to paying high medical fees for potentially harmless symptoms, but also more reliable than self-diagnosis through personal research. We set out to build a web app that makes it easy for chronically ill people to stay on top of their condition.
What it does
medmap.ai gives patients and doctors their own accounts for managing medical history and contacting each other in times of need. Patients can store their medical records and demographic information to build a medical background. When they are feeling unwell, they describe their symptoms to medmap.ai as text. medmap.ai uses a multi-agent LLM in which each agent plays the role of a differently specialized doctor, including a cardiologist, a pulmonologist, and a neurologist. A diagnosis decision agent then decides which candidate diagnosis is most relevant to the patient's medical history, a diversity consideration agent accounts for racial and cultural differences, and a diagnosis delivery agent ensures the potential diagnosis is delivered in a patient-friendly way. Together, these agents decide which specialty is needed, give a predictive diagnosis, and alert registered doctors of that specialty to the patient's needs. If a doctor wants to dispute the LLM's diagnosis or follow up, they can then reach out on their own to better assist the patient.
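To make the agent flow concrete, here is a minimal sketch of the pipeline described above. All names are illustrative, and each agent is stubbed with a plain Python function in place of a real LLM call:

```python
from dataclasses import dataclass

@dataclass
class Diagnosis:
    specialty: str
    finding: str

def specialist_agents(symptoms: str) -> list:
    # Each specialist agent (cardiologist, pulmonologist, neurologist)
    # produces its own candidate diagnosis from the symptom description.
    return [
        Diagnosis("cardiology", f"cardiac assessment of: {symptoms}"),
        Diagnosis("pulmonology", f"pulmonary assessment of: {symptoms}"),
        Diagnosis("neurology", f"neurological assessment of: {symptoms}"),
    ]

def decision_agent(candidates: list, history: str) -> Diagnosis:
    # Picks the candidate most relevant to the patient's medical history
    # (here: a naive keyword match standing in for LLM reasoning).
    return next((d for d in candidates if d.specialty in history), candidates[0])

def diversity_agent(d: Diagnosis, demographics: dict) -> Diagnosis:
    # Adjusts the chosen diagnosis for racial and cultural considerations.
    return d

def delivery_agent(d: Diagnosis) -> str:
    # Rephrases the result in patient-friendly language.
    return f"Based on your symptoms, a {d.specialty} specialist may help: {d.finding}"

def run_pipeline(symptoms: str, history: str, demographics: dict) -> str:
    candidates = specialist_agents(symptoms)
    chosen = decision_agent(candidates, history)
    adjusted = diversity_agent(chosen, demographics)
    return delivery_agent(adjusted)
```

The real system replaces each function body with an LLM agent, but the ordering is the same: specialists propose, the decision agent selects, and the diversity and delivery agents refine the final message.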
How we built it
We split the project into four parts that were built individually and then integrated into one web application. We built the frontend with Streamlit, a Python library, creating a website where people can create accounts, fill in their information, describe their symptoms, and get a diagnosis. The backend consisted of two parts: a MongoDB database holding both medical history (as .txt files) and general patient information, and an LLM pipeline built from Google Gemini and Anthropic's Claude agents, orchestrated with CrewAI. The agents use a Retrieval-Augmented Generation (RAG) setup that lets them read data from the .txt files as context at query time rather than as training data. Everything was written in VS Code and pushed to a GitHub repository, with the website hosted locally and the database hosted on MongoDB's Atlas cloud storage.
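As a rough illustration of the data flow, the sketch below shows a MongoDB-style patient document (all field names are assumptions) combined with the contents of uploaded .txt records into the context the agents receive. No database or LLM calls are made here; a dict stands in for file storage:

```python
# Hypothetical patient document as it might be stored in MongoDB Atlas.
patient_doc = {
    "name": "Jane Doe",
    "age": 54,
    "demographics": {"ethnicity": "Hispanic", "sex": "F"},
    "record_files": ["2023_cardiology_visit.txt"],
}

def load_record_texts(doc, file_store):
    # In production these would be read from files referenced in the
    # database; here file_store is a dict standing in for that storage.
    return [file_store[name] for name in doc["record_files"]]

def build_context(doc, file_store, symptoms):
    # RAG-style: retrieved record text is handed to the agents as context
    # at query time, not baked into any model's training data.
    records = "\n".join(load_record_texts(doc, file_store))
    return (f"Patient: {doc['name']}, age {doc['age']}\n"
            f"History:\n{records}\n"
            f"Current symptoms: {symptoms}")
```

The string returned by `build_context` is the kind of prompt context each specialist agent would receive alongside its role instructions.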
Challenges we ran into
Our first major challenge came when designing the multi-agent LLM's architecture. While we had a general idea of getting the agents to talk to each other in search of the best solution, we lacked an exact structure, which slowed down coding that part. Combined with how new these agentic LLM technologies are, this meant that whenever bugs appeared or parts of the documentation were unclear, it took much longer than normal to find the root of an issue and fix it without disrupting the overall workflow. Once we settled on which agents to employ and in which order -- the specialist agents first, each producing its own diagnosis; then the decision agent choosing the most appropriate one; and finally the diversity consideration and diagnosis delivery agents to conclude -- the work became easier and we finished that part of the project.
Another challenge came when integrating the frontend with the LLM. Working across four different devices on several different pieces meant that everyone's environment had different packages installed, many of which conflicted with the code we wrote. We had to carefully reconcile the different versions and learn how each piece of code interacted with the others.
Accomplishments that we're proud of
For a team of first- and second-time hackathon participants, we learned new technologies such as Streamlit and MongoDB completely from scratch. We successfully integrated multiple LLM APIs to build more than a simple wrapper around a single LLM, creating something structurally unique with the multi-agent model and giving it novel real-world usage.
What we learned
We learned how to work with LLM APIs, do full-stack development, use different packages and libraries in our code, build LLM agents, use CrewAI to create a multi-agent LLM, integrate different pieces of code, collaborate on a single GitHub repository by staggering our pushes and pulls, persevere through seemingly impossible bugs, and much more. The entire software development process was laid out for our group of beginners.
What's next for medmap.ai
In the future, we plan to migrate medmap.ai to a cloud-based architecture to store even larger amounts of patient data and to create an even simpler, easier-to-use platform. We also plan to give doctors more tools to communicate their advice back to patients on the platform. We would also like to add more agents, such as a hallucination-detection agent that compares the final diagnosis against the thought processes and reflections of the intermediary agents to ensure everything is in order.