💡 Inspiration
Healthcare doesn’t fail inside hospitals — it fails after patients leave.
We were struck by how patients are expected to manage medications, symptoms, and follow-ups on their own, using documents they barely understand. Research shows that patients forget 40–80% of what doctors tell them, and nearly 1 in 5 patients experiences harm after discharge, much of it preventable.
We realized the problem isn’t just medical — it’s about understanding, memory, and access to guidance.
That’s what inspired us to build AfterCare.
🚀 What it does
AfterCare turns medical documents into an interactive, patient-friendly experience.
- Upload a discharge summary, prescription, or report
- The system extracts structured medical information
- Patients can ask questions in natural language
- The system responds with clear, grounded answers based on their own documents
It effectively acts as a post-discharge copilot, helping patients:
- Understand medications
- Interpret test results
- Follow instructions correctly
- Know when to seek help
🛠️ How we built it
We designed a lightweight, end-to-end pipeline:
Document Processing
- Extract text using PyMuPDF
- Convert unstructured medical text → structured JSON using Mistral
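The document-processing step can be sketched roughly as follows. This is a minimal sketch assuming the `pymupdf` (fitz) and `mistralai` v1 SDKs; the prompt wording, model name, and JSON fields are illustrative, not our exact ones:

```python
# Sketch of the extraction step. Assumes the `pymupdf` (fitz) and
# `mistralai` (v1 SDK) packages; prompt and JSON keys are illustrative.
import json

EXTRACTION_PROMPT = (
    "You are a medical document parser. Return ONLY valid JSON with keys: "
    "medications, follow_ups, warnings, diagnoses."
)

def extract_text(pdf_path: str) -> str:
    import fitz  # PyMuPDF; imported lazily so the rest runs without it
    with fitz.open(pdf_path) as doc:
        return "\n".join(page.get_text() for page in doc)

def build_extraction_messages(raw_text: str) -> list[dict]:
    # Chat messages asking the LLM to turn free text into structured JSON.
    return [
        {"role": "system", "content": EXTRACTION_PROMPT},
        {"role": "user", "content": raw_text},
    ]

def structure_document(raw_text: str, client) -> dict:
    # `client` is a mistralai.Mistral instance; JSON mode keeps output parseable.
    resp = client.chat.complete(
        model="mistral-small-latest",
        messages=build_extraction_messages(raw_text),
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)
```

JSON mode matters here: discharge summaries are messy, and forcing structured output is what makes the later retrieval and summary steps reliable.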
Retrieval (RAG)
- Chunk documents into segments
- Generate embeddings using Mistral
- Retrieve relevant context using cosine similarity
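The retrieval step boils down to a chunker plus cosine ranking. A small pure-Python sketch (chunk size, overlap, and `k` are illustrative; in practice the embedding vectors come from Mistral's embeddings endpoint):

```python
import math

def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    # Fixed-size character windows with overlap, so an instruction that
    # straddles a boundary still appears whole in at least one chunk.
    # Requires overlap < size, otherwise the loop never advances.
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def top_k(query_emb, chunks, chunk_embs, k: int = 3) -> list[str]:
    # Brute-force ranking of every chunk against the query embedding;
    # fine at MVP scale, where a vector index would be overkill.
    ranked = sorted(zip(chunks, chunk_embs),
                    key=lambda pair: cosine(query_emb, pair[1]),
                    reverse=True)
    return [chunk for chunk, _ in ranked[:k]]
```

A single discharge summary produces only dozens of chunks, which is why exhaustive cosine search is plenty fast for this use case.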
Q&A System
- Use Mistral LLM for grounded responses
- Combine:
  - Retrieved document chunks
  - Structured patient summary
  - Optional web snippets (DuckDuckGo API)
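Assembled into a single grounded prompt, the combination step might look like this (a sketch; the instruction wording is illustrative):

```python
def build_qa_prompt(question: str, summary: str, chunks: list[str],
                    web_snippets=None) -> str:
    # The grounding instruction comes first: the model must answer from the
    # provided context or admit it doesn't know, which is the main
    # hallucination guard in this pipeline.
    parts = [
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you don't know and suggest asking a "
        "clinician.",
        "Patient summary:\n" + summary,
        "Document excerpts:\n" + "\n\n".join(chunks),
    ]
    if web_snippets:  # optional DuckDuckGo results, clearly separated
        parts.append("Web snippets (general background only):\n"
                     + "\n".join(web_snippets))
    parts.append("Question: " + question)
    return "\n\n".join(parts)
```

Keeping the patient's own documents and the generic web snippets in clearly labeled sections lets the model (and a reviewer) tell which source an answer came from.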
System Architecture
- Frontend: Next.js + Tailwind
- Backend: FastAPI
- Storage: In-memory session store (no DB for MVP)
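The in-memory store behind the FastAPI backend can be as small as a dict keyed by session id. A sketch, with illustrative field names:

```python
import uuid

class SessionStore:
    """In-memory per-session state for the MVP: no database, so everything
    is lost on restart, which is acceptable for a hackathon demo."""

    def __init__(self) -> None:
        self._sessions: dict[str, dict] = {}

    def create(self) -> str:
        # The FastAPI upload endpoint would call this, then fill in the
        # fields produced by the extraction and embedding steps.
        sid = uuid.uuid4().hex
        self._sessions[sid] = {"summary": None, "chunks": [], "embeddings": []}
        return sid

    def get(self, sid: str) -> dict:
        return self._sessions[sid]  # KeyError -> 404 in the route handler
```

Skipping a database meant no migrations, no auth schema, and no deployment dependencies, which is exactly the scope control mentioned below.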
The result is a fast, hackathon-friendly system that prioritizes clarity and reliability over complexity.
⚠️ Challenges we ran into
- Messy medical documents: Real discharge summaries are inconsistent and hard to parse reliably
- Balancing simplicity vs accuracy: Keeping answers understandable without losing correctness
- Grounding responses: Preventing hallucinations while still answering naturally
- Time constraints: Building extraction + RAG + UI in a short hackathon window
- Scope control: Avoiding overbuilding (EHR integration, authentication, etc.)
🏆 Accomplishments that we're proud of
- Built a fully working end-to-end system in hackathon time
- Successfully converted unstructured medical text into structured patient summaries
- Implemented a RAG pipeline grounded in patient-specific data
- Designed a system that prioritizes safety and non-hallucination
- Created a clean, demo-ready UI that clearly communicates value
Most importantly, we didn’t just build a chatbot —
we built something that addresses a real, high-impact healthcare gap.
📚 What we learned
- In healthcare, clarity matters more than intelligence
- RAG is only useful if the input data is clean and structured
- Users trust systems that say “I don’t know” instead of guessing
- The hardest part is not answering questions —
it’s understanding messy real-world data
🔮 What's next for AfterCare
- Add voice interaction (ASR + TTS) for accessibility
- Improve document parsing with OCR and better medical structuring
- Add safety layers for detecting high-risk situations
- Integrate with hospital systems (EHR)
- Add medication adherence tracking and reminders
- Enable doctor-verified summaries
Long-term, we want to build a system that ensures patients don’t just receive care —
they understand and follow it correctly.
Built With
- agents
- ai
- asr
- html5
- nextjs
- python
- rag
- tts
- tts-api.com