Healthcare literacy is one of the most unequal resources in the world. Whether you're uninsured, facing a language barrier, or held back by limited access to education, knowing what is happening to your own body is a privilege many people don't have. SnapHealth was built to close that gap. Using Claude at its core, the app lets anyone describe their symptoms in a message or upload a photo and receive a clear assessment of what they shared, with no insurance card, copay, or waiting room. The Med Label feature extends this to medication literacy: point your camera at a prescription bottle or describe it in text, and Claude breaks down exactly what you're taking, why it's prescribed, and what to watch for. Both features work as a full conversational chatbot, so users can follow up, clarify, and dig deeper the way they would with a doctor who actually had time to talk.

We built SnapHealth with a deep awareness of the people who would use it and of how to protect them. Claude is never positioned as a replacement for professional care: every response is grounded in the understanding that some situations require immediate human intervention, and the app is transparent about its limitations. Privacy is also non-negotiable. No photos are stored, no health data leaves the device, and everything is deleted the moment a session closes. We wanted to keep the app as open as possible so that anyone, in any situation, could get help from it. That led to the design decision to make both the photo and the text optional: as long as one is present, the user can submit their prompt for analysis, sharing as much information as they are comfortable with and no more.
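That "at least one input" rule can be sketched as a small guard shared by the submit button and the backend. This is a minimal illustration; the function name and parameter shape are our own, not SnapHealth's actual code:

```python
from typing import Optional

def can_submit(symptom_text: Optional[str], image_b64: Optional[str]) -> bool:
    """Allow analysis when the user has provided at least one input.

    Either a free-text symptom description or an uploaded photo
    (base64-encoded) is enough; neither is individually required.
    """
    has_text = bool(symptom_text and symptom_text.strip())
    has_image = bool(image_b64)
    return has_text or has_image
```

The same check can gate the Analyze button on the frontend and reject truly empty submissions at the endpoint.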

For our technical stack, we built the app with a React + Tailwind + Vite frontend and a FastAPI Python backend, with Claude powering the analysis through the Anthropic API. Every request goes through a modular API layer where Claude receives a structured prompt combining the user's optional profile context, their symptoms or medication label, and prior conversation turns to establish context. One of our early challenges was making the photo entirely optional. Early builds required an image to unlock the Analyze button, which immediately excluded anyone who simply needed to describe how they felt, and we felt that was too restrictive for our users. Fixing it required coordinated changes across the React components, the API service layer, and the FastAPI endpoints, which had to handle null image payloads without breaking. A separate class of challenges lived entirely in prompt design: we needed to calibrate Claude to give responses that are useful without overpromising, and to make sure the app always signals clearly when a situation requires immediate professional care rather than a chatbot. To do this effectively we spent a lot of time testing prompts and writing strict rules to minimize the risk of Claude spreading misinformation to our users.
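The structured prompt assembly described above can be sketched roughly as follows. This is a hedged illustration (the function and field names are ours, not the real codebase): it shows how optional profile context, an optional base64 image, and prior turns might be folded into an Anthropic Messages-style payload, with the image block simply omitted when no photo was supplied:

```python
from typing import Optional

def build_messages(
    symptoms: Optional[str],
    image_b64: Optional[str],
    history: list[dict],
    profile: Optional[str] = None,
) -> list[dict]:
    """Assemble a conversation payload for a Claude-style Messages API.

    Prior turns come first so the model has context; the newest user turn
    combines optional profile info, optional symptom/label text, and an
    optional image block (dropped entirely when no photo was uploaded).
    """
    content: list[dict] = []
    if image_b64:  # photo is optional: include an image block only if present
        content.append({
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/jpeg",
                "data": image_b64,
            },
        })
    text_parts = []
    if profile:
        text_parts.append(f"Patient context: {profile}")
    if symptoms:
        text_parts.append(f"Symptoms or label: {symptoms}")
    content.append({"type": "text", "text": "\n".join(text_parts) or "See attached photo."})
    return [*history, {"role": "user", "content": content}]
```

The returned list would then be passed as `messages` to the Anthropic client, alongside a system prompt carrying the safety rules described above; handling a missing `image_b64` here is what lets the FastAPI endpoint accept null image payloads cleanly.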
