Inspiration
You wake up the morning after a long run, and your hip feels tight.
You try to stretch it, but you are not even sure what “it” is. Is it your hip flexor, your glute, your lower back, or something connected to your knee?
So you Google it.
Five minutes later, you are buried in anatomy diagrams, recovery blogs, medical terms, and conflicting advice. What starts as a simple question turns into a confusing search spiral.
Percept was built around that exact problem. Most people do not need more health information thrown at them. They need a clearer way to understand what they are feeling.
Our thesis, framed as a question:
Can we build a multimodal health intelligence system that helps people understand discomfort, patterns, and next steps without replacing human judgment?
Not a symptom checker.
Not a generic wellness chatbot.
Not an AI doctor.
Percept replaces confusing searches and anatomy overload with clear, visual, guided body understanding.
What it does
Percept is a visual health intelligence platform that helps users understand body discomfort without falling into confusing search results or dense anatomy content.
Users describe what they feel in natural language, like “my hip feels tight after running” or “my neck gets tense after studying.” Percept then maps that input onto an interactive 3D body, highlights relevant regions, and explains the connection in plain language.
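Under the hood, that mapping can be a single structured-extraction call. Here is a minimal sketch in TypeScript, assuming the Anthropic SDK from the stack below; the model alias, prompt, and output schema are illustrative, not Percept's actual ones:

```typescript
// Minimal sketch of the symptom-to-region mapping step (illustrative, not Percept's actual code).
import Anthropic from "@anthropic-ai/sdk";

// Hypothetical output shape: which muscle groups to highlight, and why.
interface RegionMapping {
  regions: {
    muscleGroup: string; // e.g. "hip flexors"
    confidence: "high" | "medium" | "low";
    rationale: string; // plain-language seed for the "Why This Muscle?" panel
  }[];
}

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

export async function mapSymptomToRegions(description: string): Promise<RegionMapping> {
  const msg = await client.messages.create({
    model: "claude-3-5-sonnet-latest", // illustrative model alias
    max_tokens: 1024,
    system:
      "Map the user's plain-language discomfort description to likely muscle groups. " +
      'Respond with JSON only: {"regions":[{"muscleGroup","confidence","rationale"}]}. ' +
      "Explain connections in educational terms; never diagnose.",
    messages: [{ role: "user", content: description }],
  });
  const block = msg.content[0];
  if (block.type !== "text") throw new Error("expected a text block");
  return JSON.parse(block.text) as RegionMapping;
}
```

A description like "my hip feels tight after running" comes back as a short list of candidate muscle groups with plain-language rationales, which is what drives the highlighting described next.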
The experience has four main parts:
- Body understanding: the 3D body lights up connected muscle groups and shows where discomfort may be coming from.
- Why This Muscle?: users can click any highlighted region to see why Percept selected it, what nearby areas are connected, and how movement or posture may relate.
- Recovery Plan: users get safe, simple next steps with focus areas, gentle movements, avoid-for-now guidance, check-ins, and safety reminders.
- Reflection and provider context: users can journal, track mood, control what stays private, and share selected context with providers when needed.
Percept does not diagnose or replace care. It helps users understand what they feel, why it may be connected, and what they can do next in a clear, visual way.
How we built it
Percept was built as a full-stack web platform combining interactive body visualization, shared records, AI synthesis, and recovery guidance.
Frontend
The interface was designed around clarity, calmness, and visual understanding.
The main interface includes:
- interactive body view
- clickable muscle explanations
- Recovery Plan tab
- journal surface
- provider dashboard
- living graph
- AI-guided explanation panels
The body interface is central to the product. It is meant to feel like a guided visual companion, not a static anatomy chart.
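A simplified sketch of how that interaction can work in React Three Fiber; the placeholder spheres stand in for geometry that would come from a loaded body model, and all names are illustrative:

```tsx
// Illustrative sketch: clickable, highlightable muscle regions in React Three Fiber.
// Placeholder spheres stand in for geometry from a real body model.
import { Canvas } from "@react-three/fiber";

const REGIONS: { id: string; position: [number, number, number] }[] = [
  { id: "hip-flexors", position: [0.3, -0.5, 0] },
  { id: "glute-medius", position: [-0.3, -0.6, 0] },
];

export function BodyView({
  highlightedIds, // from the AI mapping step
  onRegionSelect, // opens the "Why This Muscle?" panel
}: {
  highlightedIds: string[];
  onRegionSelect: (id: string) => void;
}) {
  return (
    <Canvas camera={{ position: [0, 0, 3] }}>
      <ambientLight intensity={0.8} />
      <directionalLight position={[2, 2, 2]} />
      {REGIONS.map((r) => {
        const highlighted = highlightedIds.includes(r.id);
        return (
          <mesh key={r.id} position={r.position} onClick={() => onRegionSelect(r.id)}>
            <sphereGeometry args={[0.2, 32, 32]} />
            {/* Highlighted regions glow softly; others stay neutral. */}
            <meshStandardMaterial
              color={highlighted ? "#f97316" : "#94a3b8"}
              emissive={highlighted ? "#f97316" : "#000000"}
              emissiveIntensity={highlighted ? 0.6 : 0}
            />
          </mesh>
        );
      })}
    </Canvas>
  );
}
```

Lifting the selection state to the parent keeps the 3D layer purely presentational: the AI mapping decides what glows, and a click simply hands the region id to the explanation panel.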
Backend
The backend manages:
- structured health events
- journal visibility
- provider access
- graph updates
- shared record logic
- patient-controlled privacy
Users control what stays private, what becomes shared, and what gets included in future care conversations.
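At the data layer, that control can be an explicit visibility column that every provider-facing query filters on. A sketch using supabase-js, with hypothetical table and column names; in production the same rule would also be enforced with row-level security:

```typescript
// Illustrative sketch of patient-controlled journal visibility with supabase-js.
// Table and column names are hypothetical.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

type Visibility = "private" | "shared_with_provider";

// Patients write entries with an explicit visibility choice.
export async function addJournalEntry(patientId: string, body: string, visibility: Visibility) {
  const { error } = await supabase
    .from("journal_entries")
    .insert({ patient_id: patientId, body, visibility });
  if (error) throw error;
}

// The provider dashboard only ever queries entries the patient chose to share.
export async function providerVisibleEntries(patientId: string) {
  const { data, error } = await supabase
    .from("journal_entries")
    .select("id, body, created_at")
    .eq("patient_id", patientId)
    .eq("visibility", "shared_with_provider")
    .order("created_at", { ascending: false });
  if (error) throw error;
  return data;
}
```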
AI and reasoning pipeline
Percept uses AI as a synthesis layer, not as a replacement for care.
The system performs:
- natural language symptom interpretation
- body-region and muscle-group mapping
- “Why This Muscle?” explanations
- recovery plan generation
- journal theme extraction
- emotional pattern detection
- longitudinal graph updates
- provider-facing context synthesis
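As one example of these capabilities, the "Why This Muscle?" step can be its own focused call that mirrors the three parts of the panel. A hedged sketch, with hypothetical prompt and output shape:

```typescript
// Illustrative "Why This Muscle?" call; prompt and output shape are hypothetical.
import Anthropic from "@anthropic-ai/sdk";

interface MuscleExplanation {
  whySelected: string; // why this region was highlighted
  connectedAreas: string[]; // nearby regions that commonly relate
  movementNotes: string; // how movement or posture may play a role
}

const client = new Anthropic();

export async function explainMuscle(muscleGroup: string, symptom: string): Promise<MuscleExplanation> {
  const msg = await client.messages.create({
    model: "claude-3-5-sonnet-latest", // illustrative alias
    max_tokens: 512,
    system:
      "Explain, educationally and without diagnosing, why a muscle group may relate to a " +
      'described discomfort. Respond with JSON only: {"whySelected","connectedAreas","movementNotes"}.',
    messages: [{ role: "user", content: `Region: ${muscleGroup}\nSymptom: ${symptom}` }],
  });
  const block = msg.content[0];
  if (block.type !== "text") throw new Error("expected a text block");
  return JSON.parse(block.text) as MuscleExplanation;
}
```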
The pipeline:
- User describes what feels wrong.
- Percept classifies the input.
- Relevant body regions light up.
- Users click a region to understand why.
- Recovery Plan suggests safe next steps.
- Patterns build over time.
- Humans stay in control.
Physical inputs become spatial context.
Journal and session inputs become temporal context.
Recovery Plan turns understanding into guided action.
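Those three sentences suggest a small set of shared data shapes. A sketch of how spatial context, temporal context, and the Recovery Plan output might be modeled; every field name here is illustrative:

```typescript
// Illustrative data shapes for the pipeline; field names are hypothetical.

// Spatial context: what the body map is currently showing and why.
interface SpatialContext {
  highlightedRegions: { muscleGroup: string; rationale: string }[];
  sourceDescription: string; // the user's original words, e.g. "my hip feels tight after running"
}

// Temporal context: journal and session signals accumulated over time.
interface TemporalContext {
  journalThemes: string[]; // e.g. ["tightness after long runs"]
  moodTrend: "improving" | "steady" | "worsening" | "unknown";
  sessions: { date: string; summary: string }[];
}

// Recovery Plan mirrors the product structure: focus areas, gentle movements,
// avoid-for-now guidance, check-ins, and safety reminders.
interface RecoveryPlan {
  focusAreas: string[];
  gentleMovements: { name: string; howTo: string }[];
  avoidForNow: string[];
  checkIns: { afterDays: number; question: string }[];
  safetyReminders: string[]; // e.g. "stop and seek care if pain is sharp or worsening"
}

// The synthesis step combines both contexts into guided, non-diagnostic next steps.
declare function generateRecoveryPlan(
  spatial: SpatialContext,
  temporal: TemporalContext
): Promise<RecoveryPlan>;
```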
Challenges we ran into
- Making a complex health system feel simple, calm, and easy to use.
- Turning the 3D body from a visual element into something useful and explainable.
- Designing safe AI boundaries so Percept supports understanding without diagnosing or replacing care.
- Balancing body mapping, recovery guidance, journaling, privacy, and provider context without overwhelming the user.
Accomplishments that we're proud of
- Built an experience that turns body confusion into clear visual understanding.
- Added Why This Muscle? so users understand why each highlighted region matters.
- Created Recovery Plan to give safe next steps instead of another search spiral.
- Connected body symptoms, reflection, journaling, and provider context into one health arc.
- Kept the human in control through privacy settings, educational language, and safety boundaries.
What we learned
- Health tools should not start with a wall of text.
- Visual explanations make anatomy easier to understand.
- The best AI health features are grounded, limited, and explainable.
- In healthcare, what you choose not to build matters as much as what you ship.
What's next for Percept
Near-term improvements:
- expand the interactive body model
- ground explanations in clinician-reviewed sources
- improve muscle-group and movement mapping
- personalize Recovery Plan
- add voice interaction for hands-free use
- refine privacy controls
Future directions:
- wearable integration for sleep, activity, and recovery signals
- measurement-based care tools
- movement guidance by muscle group
- multi-session progress visualization
- stronger evidence-grounding for AI outputs
The long-term vision is to make Percept a visual health memory layer: a system that helps people understand what they feel, why it may be connected, what they can do next, and how their patterns evolve over time.
Percept is not trying to replace care. It is trying to make care more continuous, visual, and human.
Stack
| Layer | Technology |
|---|---|
| Frontend | Next.js, React, Tailwind CSS, shadcn/ui |
| Visualization | Three.js / React Three Fiber, graph-based UI |
| Backend | Node.js API routes, Supabase, Postgres |
| Realtime | Supabase Realtime / WebSocket-style updates |
| AI Layer | LLM reasoning, structured extraction, pattern synthesis |
| Product Features | 3D body mapping, Why This Muscle?, Recovery Plan, journaling, provider context |
| Data Model | Patient-scoped records, journal visibility controls, structured health events |
| Deployment | Vercel |
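The Realtime row is what keeps the living graph and provider dashboard current without polling. A sketch of the subscription pattern with Supabase Realtime, again with hypothetical table and column names:

```typescript
// Illustrative Supabase Realtime subscription; table and column names are hypothetical.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

// Push new structured health events into the living graph as they are written.
export function subscribeToHealthEvents(patientId: string, onEvent: (row: unknown) => void) {
  const channel = supabase
    .channel(`health-events:${patientId}`)
    .on(
      "postgres_changes",
      {
        event: "INSERT",
        schema: "public",
        table: "health_events",
        filter: `patient_id=eq.${patientId}`,
      },
      (payload) => onEvent(payload.new)
    )
    .subscribe();

  // Caller invokes this when the view unmounts.
  return () => supabase.removeChannel(channel);
}
```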
Built With
- Claude API (Anthropic)
- CSS3
- ElevenLabs API
- Express.js
- JavaScript (ES6+)
- localStorage
- Node.js
- React
- Render (backend)
- Vercel (frontend)
- Vite
- Web Speech API