Inspiration

When you're scared for someone you love, the last thing you should have to worry about is paperwork, or whether you can afford someone to handle it for you. Yet millions of Americans, already stretched thin, are quietly losing billions in healthcare claims they're entitled to, simply because they don't know the system. We built this for them.

What it does

Three specialized AI agents work in sequence, each with its own role: one builds a complete picture of everything you share, one evaluates how strong your case actually is, and one gives you a clear, step-by-step plan, where to go, what it'll cost, and how to negotiate. It's like having an entire expert team working your case, without the price tag.

How we built it

The backbone is a 3-agent pipeline orchestrated with LangGraph. Each agent has a single, well-defined job: the Memory Agent never gives advice, the Risk Agent never talks to the user, and the Recommendation Agent never touches the memory store directly. Llama 3, served via Ollama, powers all three agents, running entirely on local hardware with no API keys and no data leaving your machine. The backend is a Flask API with PDF text extraction via PyPDF, a shared memory store with a full audit log tracking every create, update, and retrieve event, and Supabase for persistent storage. The frontend is three connected HTML pages (login, upload, and dashboard); the dashboard reads directly from the memory extracted by the agents and populates every element with real data.
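To show what "each agent has a single job" looks like in practice, here's a minimal plain-Python sketch of the sequencing. It stands in for the real LangGraph pipeline: the agent functions, state fields, and decision logic are illustrative stand-ins, not our actual prompts or LLM calls.

```python
# Illustrative sketch of the 3-agent pipeline. In the real system each
# function wraps an LLM call with its own system prompt; here each is a
# stub so the data flow between agents is visible.
def memory_agent(state):
    # Only records what the user shared; never gives advice.
    state["memory"] = {"facts": state["user_input"]}
    return state

def risk_agent(state):
    # Only scores claim strength from the memory; never talks to the user.
    state["risk"] = "strong" if state["memory"]["facts"] else "weak"
    return state

def recommendation_agent(state):
    # Only turns the risk assessment into a plan; never touches the
    # memory store directly.
    state["plan"] = f"Your claim looks {state['risk']}; next step: file an appeal."
    return state

def run_pipeline(user_input):
    state = {"user_input": user_input}
    for agent in (memory_agent, risk_agent, recommendation_agent):
        state = agent(state)  # each agent consumes the previous agent's output
    return state
```

The point of the strict ordering is that each agent only ever sees the state the previous one produced, which is what keeps the role boundaries enforceable.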

Challenges we ran into

One of our biggest challenges was data persistence: getting information entered on one page to actually carry over to the next, and making sure everything synced to the dashboard in real time. It sounds simple, but managing state and keeping the data flow consistent across the whole app was a lot harder than we expected.

Accomplishments that we're proud of

A lot of what we're most proud of isn't what you see on the screen; it's the research underneath it. We spent hours digging into the actual policies, the real numbers, and the fine print to make sure everything we're telling people is accurate and genuinely useful. Most tools like this are built by the same big companies and investment firms that benefit from people not understanding the system. We built this from the other side. As a third-party entity, our only priority is the person sitting in front of the screen.

What we learned

This project was our first real dive into building multi-agent AI systems. The biggest thing we learned was how to work with the Gemini API as a reasoning engine that could be given a specific role, a specific memory, and a specific goal. We learned how to structure system prompts so each agent had a clearly defined job, and how the quality of that definition directly shapes the quality of the output. We used LangChain to create our agents, and it taught us how agents actually function under the hood: how each step in a pipeline takes the output of the last and feeds it in as context, how memory modules work, and how fragile agent decision-making can be if your prompt boundaries aren't precise. Debugging a broken agent taught us more about LLM behavior than any amount of reading would have. On the backend, working with Flask gave us practical experience building a real API layer: handling sessions, managing file uploads, and keeping state consistent across endpoints. Wiring Flask routes to our agent pipeline and making sure data flowed cleanly between them was harder than we expected.

What's next for Insure

Our next step is expanding our recommendation engine to cover more policies and insurance types, so that no matter your situation or provider, our product works for you. We're also refining our agent architecture, pushing each agent toward a narrower, better-defined responsibility so every part of the pipeline does exactly one thing and does it well. Paired with that, we're reworking our memory layer to use a hash map structure, enabling fast access, updates, and deletions so agents can retrieve exactly what they need instantly. Long term, we're building toward a tool that handles every corner of the healthcare claims process end to end, because our goal has always been simple: make sure no one loses what they're owed just because they didn't know how to fight for it.
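The hash-map memory layer described above can be sketched in a few lines: a Python dict gives O(1) average-case create, update, retrieve, and delete, and we keep the audit log from the current design by appending an event on every operation. Class and method names here are illustrative, not our final interface.

```python
# Sketch of a hash-map-backed memory store with a full audit log:
# every create, update, retrieve, and delete is recorded.
import time

class MemoryStore:
    def __init__(self):
        self._data = {}      # dict = O(1) average-case access by key
        self.audit_log = []  # append-only record of every event

    def _log(self, event, key):
        self.audit_log.append({"event": event, "key": key, "ts": time.time()})

    def put(self, key, value):
        self._log("update" if key in self._data else "create", key)
        self._data[key] = value

    def get(self, key):
        self._log("retrieve", key)
        return self._data.get(key)

    def delete(self, key):
        self._log("delete", key)
        self._data.pop(key, None)
```

Because lookups are keyed rather than scanned, each agent can pull exactly the slice of memory it needs without touching the rest.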

Built With

flask, langchain, langgraph, llama-3, ollama, pypdf, python, supabase