Inspiration

In the modern healthcare landscape, a patient's medical history is often fragmented. Data is scattered across different hospital portals, websites, and physical paper folders. We realized there wasn't a unified platform that allowed individuals to centralize this information securely.

The spark for MedRecord came from a simple question: Why can't I own my medical data without trading away my privacy? We wanted to build a tool that allows users to consolidate screenshots, photos of paper records, and PDFs into one structured dashboard, purely through local processing.

What it does

MedRecord is a privacy-first, on-device personal health archive. It functions as a centralized hub for all medical data without ever connecting to a cloud server.

  • Universal Ingestion: Users can take photos of paper records, upload screenshots of lab results, or import PDF files.
  • Local AI Processing: The app uses a local Large Language Model (LLM) and OCR to read, analyze, and classify the data.
  • Data Extraction & Visualization: It extracts specific values (like blood pressure, glucose levels, etc.) and plots them on interactive charts to show trends over time.
  • Zero-Knowledge Privacy: All computation happens locally. We ensure that no medical data ever leaves the device: nothing is uploaded, synced, or logged to any server.
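To make "data extraction" concrete, here is a sketch of the kind of structured record the pipeline produces from a scanned page. The type and field names are illustrative assumptions, not MedRecord's actual schema:

```swift
import Foundation

/// Illustrative shape of one extracted measurement.
/// Field names here are assumptions for the sketch, not app code.
struct HealthReading: Codable, Identifiable {
    var id = UUID()
    let metric: String         // e.g. "glucose", "systolic_bp"
    let value: Double
    let unit: String           // e.g. "mg/dL", "mmHg"
    let recordedAt: Date
    let sourceDocument: String // scanned page or PDF the value came from
}
```

Once readings share this shape, plotting trends over time is just a matter of grouping by `metric` and sorting by `recordedAt`.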

How we built it

We built MedRecord with a "Local-First" architecture to ensure maximum security.

  • Frontend: Built with SwiftUI for a responsive iOS interface and WatchOS for wearable integration.
  • OCR & Vision: We utilized the Apple Vision Framework for text recognition from images and PDFs.
  • Local Intelligence: Instead of making API calls to cloud models like GPT or Claude, we integrated a quantized local LLM (via CoreML) to parse the unstructured OCR text into structured JSON objects.
  • Storage: We used SwiftData for persistent, encrypted local storage of the structured records.
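As a minimal sketch of the OCR stage above, this is roughly how on-device text recognition with the Vision framework looks; the exact request configuration is an assumption about our setup rather than verbatim app code:

```swift
import Vision
import UIKit

/// Run on-device text recognition over a scanned page.
/// Everything stays local; no network calls are involved.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { completion([]); return }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Keep the top candidate for each detected line of text.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    request.recognitionLevel = .accurate   // favor accuracy over speed
    request.usesLanguageCorrection = true  // helps with printed lab reports

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

The recognized lines are then handed to the local LLM, which turns the raw text into structured JSON records.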

Challenges we ran into

The biggest hurdle was the Accuracy vs. Privacy Trade-off. We debated heavily whether to upload data to a powerful cloud model for better accuracy or stick to local processing. Local models often struggle with complex medical handwriting compared to cloud giants.

  • Optimization: We had to iterate extensively on prompt engineering so the local model handled medical terminology correctly.
  • Performance: Running OCR and an LLM simultaneously caused thermal throttling on older devices, requiring us to optimize the inference pipeline to run asynchronously.
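The pipeline restructuring described above can be sketched with Swift concurrency: an actor serializes the heavy stages so OCR and LLM inference never run simultaneously on constrained devices. The `runOCR` and `runLLM` helpers below are placeholders standing in for the real Vision and CoreML calls:

```swift
import Foundation

/// Sketch of the staged pipeline. The actor guarantees the two heavy
/// stages run one at a time, which reduces peak thermal load.
actor InferencePipeline {
    func process(pages: [Data]) async -> [String] {
        var results: [String] = []
        for page in pages {
            // Stage 1: OCR one page.
            let text = await runOCR(page)
            // Yield between heavy stages so the system can schedule UI work.
            await Task.yield()
            // Stage 2: parse the OCR text with the local model.
            let structured = await runLLM(text)
            results.append(structured)
        }
        return results
    }

    // Placeholders for the actual Vision / CoreML calls.
    private func runOCR(_ page: Data) async -> String { "" }
    private func runLLM(_ text: String) async -> String { "" }
}
```

Processing pages sequentially rather than in parallel trades some latency for a pipeline that stays usable on older hardware.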

Accomplishments that we're proud of

  • True Centralization: We successfully built a pipeline that turns a messy pile of physical paper into a searchable, digital database.
  • 100% Offline Capability: We achieved our goal of a fully functional app that requires zero internet connection to process complex medical documents.
  • Apple Watch Sync: Seeing the data extracted from a paper document appear seamlessly on the wrist was a magical moment.

What we learned

  • Medical Data Complexity: We learned that medical data is incredibly non-standardized. Normalizing units (e.g., converting glucose readings between mg/dL and mmol/L) requires rigorous logic.
  • On-Device Limits: We gained deep insight into the limits of current mobile hardware for running LLMs and how to optimize memory usage for heavy tasks.
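Glucose alone illustrates the unit problem: US labs report mg/dL while much of the world uses mmol/L. A hypothetical normalization helper (not MedRecord's actual code) might look like this; the factor 18.016 comes from glucose's molar mass of roughly 180.16 g/mol:

```swift
/// Normalize a glucose reading to mg/dL so values from different labs
/// can be plotted on one axis. Returns nil for unrecognized units.
func glucoseInMgDl(value: Double, unit: String) -> Double? {
    switch unit.lowercased() {
    case "mg/dl":  return value
    case "mmol/l": return value * 18.016  // molar-mass conversion
    default:       return nil             // unknown unit: surface to the user
    }
}
```

Every metric needs its own table of accepted units and conversions, which is why we call this logic "rigorous" rather than a one-line fix.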

What's next for MedRecord

  • Export for Doctors: Generating a professional PDF summary report that patients can hand to their doctors.
  • Drug Interaction Warnings: Using the local AI to cross-reference prescriptions and warn users about potential side effects.
  • Fine-tuned Medical Model: Training a specialized SLM (Small Language Model) specifically for reading medical receipts and lab reports to improve local accuracy.

Built With

  • Swift (primary language)
  • SwiftUI (user interface)
  • CoreML (on-device machine learning models)
  • Vision Framework (OCR & text recognition)
  • SwiftData (local, encrypted database storage)
  • Swift Charts (data visualization & trends)
  • WatchKit (Apple Watch integration)
  • Xcode (development environment)