Inspiration

We’ve all stared at a prescription bottle, confused by the tiny text, complex medical jargon, and conflicting stickers. For the elderly, non-native speakers, or tired parents, this confusion isn't just annoying—it's dangerous.

We were inspired by the statistic that medication errors cause over 1 million emergency room visits every year. We wanted to build a tool that acts as a "translator" between the doctor's orders and the patient's daily life, making medication safety accessible to everyone, instantly.

What it does

MediscanAI is a smart prescription companion that turns a photo of a medication label into an actionable, easy-to-understand plan.

- Instant Analysis: Users snap a photo of any prescription bottle.
- Jargon Translation: It translates complex instructions (e.g., "Take 1 tablet PO BID") into plain English (e.g., "Take one pill twice a day by mouth").
- FDA Black Box Warnings: We specifically engineered the AI to hunt for and highlight severe FDA "Black Box" safety warnings in a prominent red alert box, ensuring critical risks aren't missed.
- Smart Scheduling: It automatically generates a calendar schedule (.ics) with reminders for doses and specific spacing (e.g., "Every 8 hours").
- Lifestyle Tips: The AI provides context-aware tips, like "Take with food" or "Avoid sunlight," based on the specific drug profile.
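To illustrate the scheduling output, an "every 8 hours" dose maps naturally onto an iCalendar recurrence rule (RFC 5545 `RRULE`). The sketch below is a minimal, hypothetical generator, not the app's actual code; the function name, product ID, and dose count are our own assumptions:

```typescript
// Minimal .ics generator sketch for a recurring dose reminder.
// buildIcs, the PRODID, and the default COUNT are illustrative assumptions.
function buildIcs(
  drug: string,
  firstDose: Date,
  everyHours: number,
  count = 21, // e.g., a 7-day course at 3 doses/day
): string {
  // Convert "2024-01-01T08:00:00.000Z" -> iCalendar UTC form "20240101T080000Z".
  const stamp = (d: Date) =>
    d.toISOString().replace(/[-:]/g, "").replace(/\.\d{3}/, "");

  return [
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//MediscanAI//EN",
    "BEGIN:VEVENT",
    `UID:${drug}-${stamp(firstDose)}@mediscan.example`,
    `DTSTART:${stamp(firstDose)}`,
    `SUMMARY:Take ${drug}`,
    // Hourly frequency with an interval expresses "every N hours".
    `RRULE:FREQ=HOURLY;INTERVAL=${everyHours};COUNT=${count}`,
    "END:VEVENT",
    "END:VCALENDAR",
  ].join("\r\n"); // iCalendar requires CRLF line endings
}
```

A file built this way imports into Google Calendar or Apple Calendar as a single repeating event rather than dozens of one-off reminders.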

How we built it

We built MediscanAI using a modern, AI-first stack:

- Frontend: Built with React (Vite) and Tailwind CSS for a responsive, mobile-first interface.
- Design: We designed the UI in Figma and used the Figma MCP (Model Context Protocol) to translate our visual designs directly into clean React code via Cursor.
- The Brain: We used OpenAI's GPT-4o model. We engineered a robust prompt that takes the image, performs OCR, analyzes the medical context, cross-references FDA data, and outputs a structured JSON object.
- State Management: We used React state to manage the flow from "Camera" to "Analysis" to "Calendar," ensuring a smooth user experience without page reloads.

Challenges we ran into

- Taming the AI Output: Getting the AI to consistently return valid JSON instead of conversational text was tricky. We had to refine our system prompts and strictly define the JSON schema so the app didn't crash when parsing the response.
- The "Figma to Code" Gap: While the Figma MCP gave us great visual components, they were "dumb." Manually wiring the generated UI to our logic layer (api.ts) required a deep understanding of React props and state lifting.
- Extracting FDA Warnings: Initially, the AI would bury important warnings in long paragraphs. We had to create a specific logic branch in our backend prompt to force the AI to "hunt" for Black Box warnings and separate them into their own high-priority field.
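The "taming the AI output" problem usually comes down to tolerant parsing on the client side. Below is a minimal TypeScript sketch of that idea; `extractJson`, the `ScanResult` shape, and the `blackBoxWarning` field name are illustrative assumptions, not the project's actual `api.ts` code:

```typescript
// Sketch: defensively parse model output that may wrap JSON in
// conversational text or ```json fences. All names here are hypothetical.

interface ScanResult {
  drug: string;
  blackBoxWarning?: string; // assumed field for the high-priority red alert
}

function extractJson(raw: string): ScanResult {
  // Happy path: the model returned bare, valid JSON.
  try {
    return JSON.parse(raw) as ScanResult;
  } catch {
    // Fall through to substring extraction below.
  }
  // Fallback: take the outermost {...} span, ignoring chatty preamble
  // and markdown code fences around the object.
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("No JSON object found in model response");
  }
  return JSON.parse(raw.slice(start, end + 1)) as ScanResult;
}
```

In practice this fallback pairs with a strict system prompt (or the API's JSON output mode) so the fallback path is rarely needed, but it keeps the UI from crashing when the model drifts.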

Accomplishments that we're proud of

- Real-Time Analysis: We successfully connected the frontend to the OpenAI API, making the app functional rather than just a mockup.
- The "Black Box" Feature: We are particularly proud of the safety feature that parses FDA warnings. Seeing that red warning box appear correctly for high-risk medications felt like a huge win for patient safety.
- Seamless Design-to-Code: We successfully used the new Figma MCP workflow, which let us iterate on the design roughly 3x faster than traditional hand-coding.

What we learned

- Prompt Engineering is Backend Engineering: We learned that in the age of AI apps, writing a good prompt is just as important as writing a good database query. The quality of our app depends entirely on how well we ask the model to structure its data.
- The Importance of Empathy in UI: Designing for sick or elderly users meant we had to prioritize large text, clear contrast, and very simple language over flashy animations.

What's next for MediscanAI

- Drug Interaction Checking: We want to let users scan multiple bottles so the AI can check whether they are safe to take together.
- Pharmacy Inventory: Integrating with local pharmacy APIs to check stock availability for refills.
- User Accounts: Saving medication history to the cloud so family members and caregivers can monitor adherence remotely.

Built With

React (Vite), Tailwind CSS, TypeScript, Figma (MCP), Cursor, OpenAI GPT-4o