Inspiration

Medical data is scary. For students, looking at raw physiological signals like an ECG or EEG feels like reading a foreign language without a teacher.

Thousands of open-access medical recordings exist, but they are locked behind complex file formats and outdated software. Students are stuck staring at static textbook images, and researchers waste hours just trying to open a file. We wanted to fix this.

What it does

DeepPulse turns raw medical data into an interactive playground. It instantly loads real patient data, such as ECG, EEG, and blood pressure recordings, and renders it on medical-grade grids.

But it’s not just a viewer. We integrated Google Gemini 3 Flash to create "Pulse," an AI tutor that can actually see what you see.

  • Visual Analysis: The multimodal AI (Gemini 3 Flash) analyzes snapshots of the signals in real time to identify arrhythmias or sleep patterns (see the sketch after this list).
  • Gamified Learning: It generates quizzes on the fly to test your knowledge.
  • Real-Time Context: You can chat with the data, and the AI understands the specific millisecond you are looking at.
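Under the hood, a call to Pulse looks roughly like the sketch below. This is a simplified illustration, not our production code: it assumes the google-generativeai Python SDK, and the model id, path, and function names are placeholders.

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel(
        model_name="gemini-flash-latest",  # placeholder id; substitute the Gemini model you use
        system_instruction="You are Pulse, a friendly tutor for physiological signals.",
    )

    def ask_pulse(png_bytes: bytes, lead: str, t_start: float, t_end: float, question: str) -> str:
        """Send the exact pixels the student sees, plus time context, to Gemini."""
        context = (
            f"Snapshot shows lead {lead} from {t_start:.3f}s to {t_end:.3f}s "
            "on a standard 25 mm/s grid."
        )
        response = model.generate_content([
            {"mime_type": "image/png", "data": png_bytes},  # the visual context
            context + "\n\n" + question,
        ])
        return response.text

The key design choice: the image itself, not a text summary of it, is a first-class input, so the AI answers from the same pixels the student is looking at.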

How we built it

We focused on speed and modularity to handle the heavy data processing.

  • Frontend: We used React 19 and Vite for snappy, zoomable SVG rendering.
  • Backend: A FastAPI (Python) server handles the heavy scientific parsing (sketched below).
  • The Brains: We pipe signal screenshots directly to Gemini 3 Flash.
  • Storage: We use Supabase to store the signal metadata and raw files.
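As a concrete example, the parsing endpoint can stay small. The sketch below assumes PhysioNet-style WFDB records and the wfdb Python package; the route and file layout are illustrative, not our exact code.

    import wfdb
    from fastapi import FastAPI, HTTPException

    app = FastAPI()

    @app.get("/records/{record_id}/signals")
    def get_signals(record_id: str):
        """Parse a WFDB record and return plain JSON the frontend can draw."""
        try:
            record = wfdb.rdrecord(f"data/{record_id}")  # illustrative local path
        except FileNotFoundError:
            raise HTTPException(status_code=404, detail="record not found")
        return {
            "fs": record.fs,                        # sampling rate in Hz
            "leads": record.sig_name,               # e.g. ["I", "II", "V1", ...]
            "units": record.units,                  # e.g. ["mV", "mV", ...]
            "signals": record.p_signal.T.tolist(),  # one array per lead
        }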

Challenges we ran into

LLMs are great at text, but medical signals are visual. Our biggest hurdle was the "Context" problem: capturing exactly what the user is viewing at a given moment, converting it to an image, and feeding it to the AI without lagging the browser.
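One way to make the "moment" explicit is to ship it with the image in a single request. A sketch of that payload shape, with illustrative field names (this is the shape of the idea, not our exact schema):

    import base64
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Snapshot(BaseModel):
        """Everything Pulse needs to know about what the student sees right now."""
        image_b64: str    # PNG of the visible viewport, encoded in the browser
        t_start: float    # left edge of the visible window, in seconds
        t_end: float      # right edge of the visible window, in seconds
        leads: list[str]  # which leads are on screen, e.g. ["II", "V5"]
        question: str     # the student's chat message

    @app.post("/pulse/ask")
    async def pulse_ask(snap: Snapshot) -> dict:
        png_bytes = base64.b64decode(snap.image_b64)
        # ...hand png_bytes plus the time window to Gemini (see the earlier sketch)...
        return {"answer": "..."}

Because the timestamps travel with the pixels, the model can answer "what is happening at 4.2 s?" without the server re-deriving the viewport.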

We also struggled with rendering performance. Drawing 12 leads of ECG data simultaneously is heavy, so we had to heavily optimize our SVG rendering to keep the app smooth.
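For flavor, the classic fix for this kind of problem is min-max decimation: for each horizontal pixel bucket, keep only the bucket's minimum and maximum sample so sharp QRS spikes survive the thinning. A minimal NumPy sketch of the idea (illustrative, not our exact code):

    import numpy as np

    def minmax_decimate(samples: np.ndarray, buckets: int) -> np.ndarray:
        """Thin a 1-D signal to ~2*buckets points while preserving peaks.

        Assumes len(samples) >= buckets.
        """
        n = len(samples) - len(samples) % buckets   # drop the ragged tail
        chunks = samples[:n].reshape(buckets, -1)
        lo, hi = chunks.min(axis=1), chunks.max(axis=1)
        # Interleave min/max pairs; order within a bucket is approximate,
        # but spikes survive where naive striding would drop them.
        return np.column_stack([lo, hi]).ravel()

    # Example: 10 s of a 500 Hz lead (5000 samples) -> 1600 points for an 800 px pane
    lead_ii = np.random.randn(5000)
    path_points = minmax_decimate(lead_ii, buckets=800)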

Accomplishments that we're proud of

We are most proud that the AI actually "sees." We successfully built a pipeline where the AI answers questions based on the visual shape of the waveform, not just text descriptions.

We also made it fast: a raw, complex file becomes a rendered, interactive graph in just a few seconds.

What we learned

Prompt engineering is clinical. You can't just tell the AI to "analyze this." We had to inject deep domain expertise, like specific rules for reading an EEG, into the system prompt to prevent hallucinations.
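To give a flavor of what that looks like, here is a trimmed-down illustration of baking reading rules into the system message. The clinical rules are standard definitions; the wording is illustrative, not our production prompt.

    SYSTEM_PROMPT = """You are Pulse, a tutor for physiological signals.

    Hard rules:
    - Only describe what is visible in the provided snapshot; if a finding
      is not visible, say so instead of guessing.
    - EEG: state the dominant frequency band (delta <4 Hz, theta 4-8 Hz,
      alpha 8-13 Hz, beta >13 Hz) before naming any pattern.
    - ECG: estimate the rate from R-R spacing on the grid before calling
      a rhythm regular or irregular.
    - Never output a diagnosis; describe findings and suggest what a
      clinician would check next.
    """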

We also learned that in medicine, a grid isn't just a background; it's a measuring tool. Getting the aspect ratio and grid spacing exact was crucial for measurements taken on screen to be valid.
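Concretely: standard ECG paper runs at 25 mm/s with 10 mm/mV gain, so each 1 mm small square is 0.04 s wide and 0.1 mV tall. If the on-screen pixels-per-millimeter drifts in either axis, every interval a student measures is silently wrong. A small sketch of the scaling math (function and parameter names are ours for illustration):

    # Standard ECG calibration: 25 mm/s horizontally, 10 mm/mV vertically,
    # so one 1 mm small square = 0.04 s x 0.1 mV.
    MM_PER_SEC = 25.0
    MM_PER_MV = 10.0

    def ecg_scale(px_per_mm: float) -> tuple[float, float]:
        """Pixels per second and per millivolt for a given screen density."""
        return MM_PER_SEC * px_per_mm, MM_PER_MV * px_per_mm

    px_per_sec, px_per_mv = ecg_scale(px_per_mm=4.0)  # e.g. 4 px per mm
    # A 0.12 s QRS complex must span exactly 0.12 * 100 = 12 px here,
    # or every width measurement a student makes on screen is wrong.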

What's next for DeepPulse

Long term, we want to build a dedicated Education Mode, creating curated "playlists" of pathologies for medical schools to use in the classroom.

Built With

  • fastapi
  • gemini
  • python
  • react
  • supabase
  • vite
