Inspiration

We were inspired by watching family members and relatives go through difficult processes to get help, especially when they were debilitated. Seeing our parents spend exorbitant amounts of time becoming a legal representative, and relatives struggle to fill out forms after a medical emergency, moved us to look into solutions.

What it does

This application speeds up the legal side of getting medical care. It targets people who are unable to operate at full mental capacity, or who lack the time and resources to handle paperwork. The app simplifies storing and viewing personal information and records, drafting emails to providers, guiding users through next steps, and auto-completing forms. It also has SMS support, so users who want a more familiar UI or need help on the go can still be served.

How we built it

Project Lantern was built as a privacy-first, multimodal assistant by splitting the UI (Next.js on Vercel) from a Dockerized Express backend targeted for Cloud Run, using Google Gemini for reasoning and Twilio for phone channels. The frontend is deployed on Vercel with serverless API routes. We chose Gemini for its multimodal reasoning: converting freeform documents and audio into structured JSON artifacts (eligibility ratings, pre-filled forms, todo checklists). Storage is an in-memory Map for now, chosen for speed and privacy in the MVP, with a documented migration path to SQLite/Redis/Firestore once persistence or scale is required. Validation included unit tests for parsing, extraction, and the Gemini wrappers; an integration test for /api/sms/webhook using Twilio; and end-to-end demos: upload a sample insurance letter, confirm the structured outputs, and send them through the WhatsApp webhook.
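The in-memory store described above can be sketched roughly like this. All names here (SessionStore, Session, TTL_MS) are illustrative assumptions, not the actual Project Lantern code:

```typescript
// Illustrative in-memory session store: a Map keyed by session id, with a
// TTL so personal data is dropped rather than persisted.

interface Session {
  id: string;
  phone: string;                        // Twilio channel identity
  artifacts: Record<string, unknown>;   // structured JSON from Gemini
  createdAt: number;
}

const TTL_MS = 30 * 60 * 1000; // hypothetical: expire sessions after 30 min

class SessionStore {
  private sessions = new Map<string, Session>();

  create(phone: string): Session {
    const session: Session = {
      id: Math.random().toString(36).slice(2),
      phone,
      artifacts: {},
      createdAt: Date.now(),
    };
    this.sessions.set(session.id, session);
    return session;
  }

  get(id: string): Session | undefined {
    const s = this.sessions.get(id);
    if (s && Date.now() - s.createdAt > TTL_MS) {
      this.sessions.delete(id); // privacy: drop expired data eagerly
      return undefined;
    }
    return s;
  }
}
```

Because callers only see create/get, swapping this class for a SQLite-, Redis-, or Firestore-backed implementation later (the documented migration path) would not change the rest of the backend.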

Challenges we ran into

One of the biggest challenges we experienced while building this was minimizing hallucination in form auto-fill, eligibility scores, and guidance recommendations. We also struggled to extract data from files correctly on the first pass and had to generate new ones. To solve these challenges we wrote more specific prompts, decreased LLM reliance where applicable, and forced strict context adherence.
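One way to force strict context adherence is to post-validate the model's output against the source document, keeping only values that literally appear in it. This is a minimal sketch of that idea; groundedFields is a hypothetical helper, not the project's actual validation code:

```typescript
// Grounding check sketch: drop any model-extracted field whose value does
// not appear verbatim in the source document text, so hallucinated values
// never reach the auto-filled form.

type Extracted = Record<string, string>;

function groundedFields(source: string, extracted: Extracted): Extracted {
  const normalized = source.toLowerCase();
  const kept: Extracted = {};
  for (const [field, value] of Object.entries(extracted)) {
    // Keep only values literally present in the document.
    if (value && normalized.includes(value.toLowerCase())) {
      kept[field] = value;
    }
  }
  return kept;
}
```

A check like this trades recall (legitimate paraphrases get dropped) for precision, which is the right trade-off when the output pre-fills legal and medical forms.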

Accomplishments that we're proud of

We are proud of how we containerized and decoupled the stack by moving the backend into a Dockerized, Cloud Run-ready Express service with modular services/ and routes/ directories, enabling reproducible builds and independent scaling. We also implemented privacy-first in-memory sessions, integrated Google Gemini for multimodal document and voice understanding, added Twilio webhooks for phone channels, and backed the core flows with unit and integration tests for a robust, demo-ready system. We are proud that we fully realized our idea despite this being only our second hackathon.
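The Twilio side of those webhooks boils down to answering each inbound SMS with TwiML, Twilio's XML reply format. A minimal sketch of building that reply (the function names are ours, not the project's):

```typescript
// Twilio expects the SMS webhook to respond with TwiML XML; the <Message>
// element's text is sent back to the user as an SMS reply.

function escapeXml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

function twimlReply(message: string): string {
  return (
    `<?xml version="1.0" encoding="UTF-8"?>` +
    `<Response><Message>${escapeXml(message)}</Message></Response>`
  );
}
```

An integration test against the webhook route can then assert on this XML body, which is how the /api/sms/webhook flow can be verified without sending real messages.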

What we learned

A big takeaway for us this time was fleshing out an idea beyond big ambitions. Because this was an AI hackathon aimed at more innovative ideas, a lot of our initial effort went into developing an idea that would have been very hard to achieve at any real fidelity to the original plan. We also learned how important it is to clearly define the workload split, which keeps git commits and branches cleaner and makes the next "deliverable" for each person more clear.

What's next for Project Lantern

Project Lantern would integrate Firecrawl to improve file generation and extraction. It would also add an EHR lookup service to reduce manual data entry. The last feature would be end-to-end encryption, allowing information to persist while keeping medical data fully secure.

Built With
