Problem / Inspiration:

Healthcare in America is often a mystery. Unlike almost any other purchase, when we walk into a store we can see the price tag on what we want to buy; with healthcare, the price seems random and we hope for the best. Even understanding our own health insurance can feel like a scavenger hunt: we go to the insurer's website, track down our specific plan, and then try to figure out what the deductible or coinsurance for a dental visit would be, which can be a hassle in itself. This might prompt people to call the company and ask questions, but what if an assistant were readily available instead? This was exactly an exercise I did in my Understanding the U.S. Health Care class, and it was surprisingly difficult. That experience inspired me to create this application so that others could understand their health care plan more easily and accessibly.

Solution:

Med Assist is a chat assistant that can immediately answer questions about your healthcare coverage.

Implementation:

Retrieval-augmented generation (RAG) is a technique that augments an LLM with additional context data, in this case health insurance plan coverage details, so it can answer context-specific questions with up-to-date information. My application takes in PDFs of health insurance plans, which are loaded into a vector store (a database of embeddings). When the user asks a prompt, such as "Does my plan cover dental?", the vector store finds the chunks of text whose embeddings best relate to the prompt. The retrieved chunks are inserted into a "prompt template" as context appended to the user's question, and the completed template is served to the LLM (I used Llama3), which returns a human-friendly message answering the user's question. To connect the frontend and backend, I used Flask to create a simple API; the frontend calls it with requests and receives information back in JSON format.
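The retrieve-then-prompt flow described above can be sketched in plain Python. This is a toy illustration of the mechanism, not the actual LangChain code: the bag-of-words counts stand in for a real embedding model, and the three sample chunks are made up.

```python
# Toy RAG retrieval sketch: embed chunks, find the chunks most similar
# to the question, and stuff them into a prompt template for the LLM.
import math
import re
from collections import Counter

def embed(text):
    # Stand-in "embedding": bag-of-words token counts. A real system
    # would use a learned embedding model instead.
    return Counter(re.findall(r"[a-z0-9%]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical plan chunks standing in for text split out of a PDF.
chunks = [
    "Dental visits are covered at 80% coinsurance after the deductible.",
    "Emergency room visits require a $250 copay.",
    "Vision exams are covered once per year.",
]
store = [(embed(c), c) for c in chunks]  # the "vector store"

def retrieve(question, k=1):
    # Rank stored chunks by similarity to the question; keep the top k.
    ranked = sorted(store, key=lambda e: cosine(e[0], embed(question)), reverse=True)
    return [text for _, text in ranked[:k]]

question = "Does my plan cover dental?"
context = "\n".join(retrieve(question))
prompt = f"Use this plan context to answer:\n{context}\n\nQuestion: {question}"
# `prompt` would then be sent to the LLM (here, Llama3).
```

In the real implementation the loading, splitting, embedding, and similarity search are handled by LangChain components, and the completed prompt is served to Llama3; this sketch only shows why retrieval surfaces the dental chunk for a dental question.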

Expected Impact:

Help users better understand and navigate their health insurance in an easy, accessible way.

Future Plans:

  • With a large database of health insurance plans, we could give users information about a variety of plans and even compare and contrast them, potentially offering a service that helps users pick a healthcare plan.
  • Deploy services on cloud infrastructure so they are scalable and fast, including using a hosted service such as OpenAI's GPT instead of running the LLM locally.

What I learned:

  • I learned how to create an application using Langchain's framework to incorporate an LLM.
  • I learned how retrieval augmented generation works and why it is helpful.

Challenges:

  • Learning, understanding, and implementing a new framework like Langchain
  • Retrieving the correct chunks of context from the vector store for the correct health care plan, since the database holds several different plans. I addressed this by appending an id to each health care plan's metadata, so that when the document was eventually split into chunks, I could still filter for the chunks belonging to a given plan via its metadata.
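The metadata fix above can be sketched as follows. This is a toy illustration with made-up plan texts and a character-based splitter, not the actual code: the point is that the plan id attached before splitting survives on every chunk, so retrieval can filter by plan.

```python
# Toy sketch of per-plan metadata filtering: tag each plan's text with
# a plan_id, propagate that metadata to every chunk, and filter on it
# at retrieval time so plans never contaminate each other's answers.
def split(text, size):
    # Naive fixed-size splitter standing in for a real text splitter.
    return [text[i:i + size] for i in range(0, len(text), size)]

def chunk_plan(plan_id, text, size=40):
    # Attach the plan id to every chunk's metadata during splitting.
    return [{"text": t, "metadata": {"plan_id": plan_id}} for t in split(text, size)]

# Hypothetical plan documents standing in for loaded PDFs.
store = (
    chunk_plan("plan-a", "Plan A covers dental at 80% coinsurance after deductible.")
    + chunk_plan("plan-b", "Plan B excludes dental but covers vision exams yearly.")
)

def retrieve_for_plan(plan_id):
    # Filter chunks by plan first; the real system would then rank the
    # survivors by embedding similarity to the user's question.
    return [c["text"] for c in store if c["metadata"]["plan_id"] == plan_id]
```

In LangChain terms, this corresponds to setting an id on each document's metadata before splitting (splitters carry metadata over to every chunk) and then applying a metadata filter during the vector store's similarity search.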

Technologies Used:

Frontend: NextJS (React framework), Tailwind CSS (utility-first styling)

Backend: Flask (to create a RESTful API), Langchain (framework to work with LLMs), Llama3 (open-source LLM), Python (entire backend)
