Inspiration

After a close relative's recent ER visit was denied coverage by their health insurer, we realized we had no idea how to fight the denial. We were confused. We paid for insurance; covering moments like this is the entire purpose of insurance. Yet they still said no.

When we dug deeper, we realized how common this is. Insurers deny about 1 in every 5 claims, and there is no way 20% of claims are illegitimate. Then we found out why: insurers keep nearly $400 billion a year because only about one in a thousand denials is ever appealed. Appeals are miserable to write: you have to wade through the legal jargon of a 200-page contract to find the exact clause that matches your case, dig up the medical research proving the treatment is legitimate, format the letter to a specific mold, and send it all before a deadline. Specialized lawyers do this at $500 an hour, often for 10 hours. We realized we could do it for free, in 3 minutes.

What it does

Enter Unwritten. After the user logs in via Clerk authentication, we tie their data directly to MongoDB Atlas. The user uploads their insurance denial letter and can optionally add context by voice or typed text. That input is dispatched to Fetch.ai agents that parse the denial, gather evidence, and assemble a final draft, which is then sent directly to the insurance company. Our app goes from PDF to sent appeal in 3 minutes max.

How we built it

Frontend: Our frontend is built with Next.js 14, TypeScript, Tailwind CSS, Framer Motion, and Lenis for smooth scroll and parallax. The UI is designed for zero friction — a patient in crisis shouldn't have to think about navigation. We built an in-app command palette (⌘K) using cmdk to allow fast searching of appeal history. Authentication is handled end-to-end via Clerk, with user sessions tied directly to their MongoDB records.

Backend: Our backend is scaffolded on Hono (TypeScript), running on Cloudflare Workers. It handles routing between the frontend, Cloudinary uploads, MongoDB writes, and agent dispatch. File uploads are piped through Cloudinary's React AI Starter Kit, which manages secure storage and retrieval of denial letter PDFs.

Agent Network: Each agent is a fully registered, discoverable entity on Fetch.ai's Agentverse, built using the uAgents Python framework. The conductor_agent acts as the network backbone — it receives the parsed intake data and fans out tasks to the policy_agent and evidence_agent in parallel, then passes their results to the drafting_agent for final letter assembly. Every agent is addressable via its unique agent:// URI and can be discovered and invoked through ASI:One.
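The fan-out/fan-in flow the conductor_agent runs can be sketched in plain asyncio. This is a simplified illustration of the pattern only: the real system uses uAgents message round-trips between registered agents, and the handler names, payloads, and placeholder results below are our assumptions, not the production code.

```python
import asyncio

# Hypothetical local stand-ins for the real sub-agents. In production each
# of these is a remote uAgents message exchange, not a local coroutine.
async def policy_agent(intake: dict) -> dict:
    await asyncio.sleep(0.01)  # simulate network/model latency
    return {"clause": "Section 4.2: emergency services are covered"}

async def evidence_agent(intake: dict) -> dict:
    await asyncio.sleep(0.02)
    return {"citations": ["[peer-reviewed citation placeholder]"]}

async def drafting_agent(policy: dict, evidence: dict) -> str:
    # Assemble the final letter from both sub-agent results.
    return f"Per {policy['clause']}, supported by {evidence['citations'][0]}, we appeal."

async def conductor(intake: dict) -> str:
    # Fan out to both sub-agents in parallel, join, then hand off to drafting.
    policy, evidence = await asyncio.gather(
        policy_agent(intake), evidence_agent(intake)
    )
    return await drafting_agent(policy, evidence)

letter = asyncio.run(conductor({"claim_id": "A-123"}))
```

The key design point is that drafting never starts until both parallel branches have returned, which `asyncio.gather` models here and the conductor's aggregation layer handles in the real deployment.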

Voice Input: We use OpenAI Whisper to transcribe voice context provided by the user, converting spoken medical history or clarifications into structured text for the intake agent.
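The transcription itself is a single Whisper call; the interesting step is turning the free-form transcript into structured fields for the intake agent. A minimal sketch of that input/output contract, where the field names and regex heuristics are purely illustrative (the real intake agent is LLM-backed, not regex-based):

```python
import re

def structure_context(transcript: str) -> dict:
    """Pull rough structure out of a free-form spoken transcript.

    Illustrative only: in the real pipeline an LLM-backed intake agent
    extracts these fields; regexes here just demonstrate the contract.
    """
    dates = re.findall(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", transcript)
    return {
        "raw_transcript": transcript,
        "mentioned_dates": dates,
        "mentions_emergency": "emergency" in transcript.lower(),
    }

ctx = structure_context("I went to the emergency room on 3/14/2024 with chest pain.")
```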

Data Layer: MongoDB Atlas stores user profiles (linked via Clerk user IDs), appeal metadata, extracted denial information, agent output logs, and final letter drafts. This gives users a persistent history of all their appeals, queryable through the in-app ⌘K search.
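To make this concrete, here is a sketch of what an appeal document and the ⌘K history query might look like. The field names are our illustration, not the exact schema, and the query is shown as a pymongo-style filter dict without a live Atlas connection:

```python
# Hypothetical shape of one appeal document in Atlas (field names illustrative).
appeal_doc = {
    "clerk_user_id": "user_2abc",          # links the record to the Clerk session
    "status": "sent",
    "denial": {
        "insurer": "Acme Health",
        "claim_id": "A-123",
        "reason": "not medically necessary",
    },
    "letter_draft": "...",
}

def history_query(clerk_user_id: str, search_term: str) -> dict:
    """Build the kind of filter the ⌘K search could send to Atlas.

    Case-insensitive substring match across a couple of denial fields,
    always scoped to the logged-in user's Clerk ID.
    """
    return {
        "clerk_user_id": clerk_user_id,
        "$or": [
            {"denial.insurer": {"$regex": search_term, "$options": "i"}},
            {"denial.reason": {"$regex": search_term, "$options": "i"}},
        ],
    }

q = history_query("user_2abc", "acme")
```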

Challenges we ran into

Getting the conductor_agent to reliably orchestrate parallel agent tasks without race conditions was the hardest engineering challenge of the weekend. Each sub-agent (policy_agent and evidence_agent) runs independently and returns at a different time. We had to build a lightweight result-aggregation layer inside the conductor that waits for both to complete before passing the combined output to the drafting_agent, all without a shared message queue.

Prompt-engineering the drafting_agent to produce letters that match real insurance appeal formatting standards (specific salutation conventions, citation numbering, claim reference headers) took far more iteration than expected. We referenced real ERISA appeal letters to build the template structure.

Integrating Cloudinary's upload pipeline with our Hono backend, and keeping it in sync with MongoDB on every upload while maintaining a seamless frontend UX, required careful async handling across three systems simultaneously.
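The queue-free join can be modeled as a small aggregator that buckets results by request ID and fires only once every expected agent has reported. This is our simplified model of the pattern, with hypothetical class and field names rather than the production conductor code:

```python
from typing import Callable

class ResultAggregator:
    """Collect per-request results from independent agents and fire a
    callback once every expected agent has reported. A simplified model
    of the conductor's join step: no shared message queue required."""

    def __init__(self, expected: set, on_complete: Callable[[dict], None]):
        self.expected = expected            # agent names we must hear from
        self.on_complete = on_complete      # called with the combined results
        self.pending = {}                   # request_id -> {agent: result}

    def submit(self, request_id: str, agent: str, result) -> None:
        bucket = self.pending.setdefault(request_id, {})
        bucket[agent] = result
        # Fire (and clean up) only when every expected agent has answered,
        # regardless of arrival order.
        if set(bucket) == self.expected:
            self.on_complete(self.pending.pop(request_id))

completed = []
agg = ResultAggregator({"policy_agent", "evidence_agent"}, completed.append)
agg.submit("req-1", "evidence_agent", ["citation"])   # first arrival: buffered
agg.submit("req-1", "policy_agent", "clause 4.2")     # second arrival: fires
```

Because results are keyed by request ID, concurrent appeals never cross-contaminate, and a late arrival for one request cannot trigger completion for another.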

Accomplishments that we're proud of

- A fully working, end-to-end appeal pipeline that goes from PDF denial letter to sent appeal letter in under 3 minutes.
- Five independently registered, discoverable Agentverse agents that run in a real parallel pipeline, not a mock or stub.
- A UI/UX that doesn't feel like a hackathon project. We wanted a patient in distress to open this and immediately feel like someone was on their side.
- The ⌘K command search, a feature that makes your appeal history feel like a second brain, not a table.

What we learned

We learned that building for someone specific makes everything clearer. Every product decision — the 3-minute promise, the voice input, the letter preview before send — came from asking "would Sarah be able to do this at midnight, stressed, with 10 days left on her appeal window?" We also learned that multi-agent orchestration is genuinely hard. Coordination between autonomous agents that each have their own inputs and outputs requires careful contract design — what each agent expects and what it returns has to be explicit and stable. The conductor_agent pattern we developed is something we'd carry into future agent projects. On the infrastructure side, Cloudinary + MongoDB as a tandem — media storage plus structured metadata — is a pattern we'll use again. They complement each other naturally.
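The "explicit and stable contract" lesson can be made concrete with typed message schemas. A sketch using plain dataclasses, where the class and field names are our illustration (the real agents define their contracts as uAgents message models):

```python
from dataclasses import dataclass

# Illustrative inter-agent contracts: each message type pins down exactly
# what an agent expects and returns, so agents can evolve independently.
@dataclass(frozen=True)
class PolicyRequest:
    claim_id: str
    denial_reason: str

@dataclass(frozen=True)
class PolicyResult:
    claim_id: str          # echoed back so the conductor can correlate replies
    matching_clause: str
    page_reference: int

req = PolicyRequest(claim_id="A-123", denial_reason="not medically necessary")
res = PolicyResult(
    claim_id=req.claim_id,
    matching_clause="4.2 Emergency Services",
    page_reference=87,
)
```

Freezing the dataclasses keeps messages immutable in transit, and echoing `claim_id` in every result is what lets the conductor match asynchronous replies back to the right appeal.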

What's next for Unwritten

- Direct insurer submission routing: integrating with insurer APIs or fax infrastructure so the letter goes straight to the right department without the user having to figure out where to send it.
- Deadline tracking: automatically parsing the appeal deadline from the denial letter and sending the user reminders as it approaches.
- Case outcome tracking: letting users report back on whether their appeal succeeded, and using that data to improve the policy_agent and evidence_agent citation strategies over time.
- Multi-language support: millions of denial recipients are non-English speakers. Whisper already supports this; the letter generation is next.
- Expanded agent network: adding a negotiation_agent that can suggest alternative treatments the insurer is statistically more likely to cover, giving patients a fallback plan alongside their appeal.

For patients like Sarah, and for those who never got the chance — what was once unwritten is finally written.

Prize Track Descriptions

Track: Catalyst for Care Unwritten is healthcare advocacy infrastructure. Insurance denials kill people — not metaphorically. We built a tool that puts the full weight of legal and medical research behind every patient who gets told no, at no cost and in minutes. This is exactly what "revolutionizing the patient experience" means in practice.

Track: Agentverse — Search & Discovery of Agents Every agent in Unwritten — intake_agent, policy_agent, evidence_agent, drafting_agent, and conductor_agent — is independently registered on Fetch.ai's Agentverse and fully discoverable via ASI:One using its unique agent:// address. This isn't a single model with five prompts; it's five autonomous, discoverable agents that communicate via the uAgents messaging protocol. The conductor_agent turns user intent (fight my denial) into a real, parallel, executable multi-agent outcome.

Cloudinary Company Challenge: Cloudinary handles every denial letter that enters Unwritten. We integrated Cloudinary's React AI Starter Kit for secure, production-grade PDF upload, storage, and retrieval. Every file a user uploads is stored in Cloudinary and its metadata is linked to their MongoDB record — giving us a robust, CDN-backed media layer that scales without us managing infrastructure. Cloudinary makes the intake step of our pipeline feel instant and trustworthy, which matters when the document in question is someone's medical denial.

MLH: Best Use of MongoDB Atlas: MongoDB Atlas is the persistence layer for everything in Unwritten. User profiles (keyed to Clerk user IDs), denial letter metadata, structured data extracted by the intake_agent, agent output logs, and final appeal letters are all stored in Atlas. Our ⌘K command search queries Atlas in real time to surface a user's full appeal history. Atlas's flexible document model was critical — every denial letter has a different structure, and a rigid relational schema would have slowed us down considerably.

Fetch.ai Agent Links:

https://agentverse.ai/agents/details/agent1qdae4flz4uzuzcc3434cwr8khsx3dknxfl9ma47qtfn25uwmzlk4sxkf2xw/profile

https://agentverse.ai/agents/details/agent1qf7wkkh8a6z6ej7fyr9lyu9tyxqlzw7tnzqp4uj5n4wee77vcfxxkp0fcam/profile

https://agentverse.ai/agents/details/agent1qw9pdr9jz0qp6js5kfr4tzsgvzam0ww3t4yjud75qxpmjh89mslm6v2v7pc/profile

https://asi1.ai/chat/8783a3c3-635a-4589-946d-48e8045c915d
