Inspiration
While brainstorming for this hackathon, a teammate pointed out that I was tired and needed a nap, even as I kept obliviously hacking away on ridiculous amounts of caffeine. I felt grounded. Her reaction to what she saw on my face over Zoom was the reason I finally took a break. In that moment she became Dr Hunch, the catalyst for a great idea.
What it does
It serves as an AI-powered mirror for emotional and physical health, appealing to interoception, often called the eighth sense.
How we built it
HunchDoctor is built on an edge-AI architecture that blends real-time biometrics, vocal prosody, and computer vision.
To allow Hunch to "decide" your emotional state, we used a Weighted Linear Fusion equation. This equation combines data from different sensors (camera and microphone) by assigning them different "weights" based on their reliability at that moment.
Hunch calculates your final signal values ($S_{fused}$) using this formula:
$$S_{fused} = (W_{face} \times S_{face}) + (W_{voice} \times S_{voice})$$
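In code, this fusion step is just a reliability-weighted average of each modality's signal. A minimal sketch (the function and field names here are illustrative, not the project's actual code):

```typescript
// Weighted linear fusion of per-channel signals from two modalities.
// Each modality contributes a signal map plus a reliability weight;
// weights are normalised so they sum to 1 before mixing.
type Signal = Record<string, number>;

function fuse(face: Signal, voice: Signal, wFace: number, wVoice: number): Signal {
  const total = wFace + wVoice;
  const fused: Signal = {};
  for (const key of Object.keys(face)) {
    // S_fused = W_face * S_face + W_voice * S_voice, per channel
    fused[key] = (wFace / total) * face[key] + (wVoice / total) * (voice[key] ?? 0);
  }
  return fused;
}
```

With weights 0.75 (camera) and 0.25 (microphone), a face reading of 0.8 and a voice reading of 0.4 for the same channel fuse to 0.7.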
To "decide" your heart rate without a sensor, the app uses the Plane-Orthogonal-to-Skin (POS) algorithm. This math isolates the red-green-blue light reflected by your blood flow from the background "noise" of your skin color:
$$\alpha = \frac{\sigma(S_1)}{\sigma(S_2)}$$
$$H(t) = S_1(t) + \alpha \cdot S_2(t)$$
(Where $S_1$ and $S_2$ are different color projections, and $\alpha$ is a dynamic tuning factor based on signal variance.)
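The combination step of POS can be sketched in a few lines. This is a simplified illustration assuming the per-frame RGB means have already been temporally normalised; the projection axes below are the standard fixed POS axes, and the helper names are ours, not the app's:

```typescript
// POS combination step: project normalised RGB onto two axes orthogonal
// to the skin-tone direction, then mix them with a variance-based alpha.
function posSignal(rgb: Array<[number, number, number]>): number[] {
  const s1 = rgb.map(([, g, b]) => g - b);            // first colour projection
  const s2 = rgb.map(([r, g, b]) => -2 * r + g + b);  // second colour projection
  const std = (xs: number[]) => {
    const m = xs.reduce((a, x) => a + x, 0) / xs.length;
    return Math.sqrt(xs.reduce((a, x) => a + (x - m) ** 2, 0) / xs.length);
  };
  const alpha = std(s1) / std(s2);            // dynamic tuning factor
  return s1.map((v, i) => v + alpha * s2[i]); // H(t) = S1(t) + alpha * S2(t)
}
```

The resulting H(t) is the pulse waveform; heart rate is then estimated from its dominant frequency.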
Once all signals are fused into the five "tastes" (Sweet, Sour, Bitter, Salty, Umami), Hunch makes its final decision with a simple argmax: it finds the taste with the highest probability value and declares that your "Dominant Signal."
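The final decision is a one-liner over the fused scores. A sketch (the channel names come from the write-up; the function name is ours):

```typescript
// Argmax over the taste channels: return the key with the highest score.
function dominantSignal(tastes: Record<string, number>): string {
  return Object.entries(tastes)
    .reduce((best, cur) => (cur[1] > best[1] ? cur : best))[0];
}
```

For example, scores of { Sweet: 0.1, Sour: 0.3, Bitter: 0.2, Salty: 0.25, Umami: 0.15 } yield "Sour" as the dominant signal.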
Claude is the "Voice of the Doctor" and the "Insight Engine" that makes sense of all the raw data. While Hume (EVI) handles the live conversation, Claude acts as the sophisticated brain that synthesises everything at the end of a scan.
Challenges we ran into
Integrating VitalLens directly was impossible because their API lacks CORS support, forcing us to route all biometric data through a custom Vercel Edge proxy to handle headers and security.
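The core of the workaround is that the browser never calls the VitalLens API directly: an edge function forwards the request server-side (where CORS does not apply) and re-attaches permissive CORS headers on the way back. A minimal sketch of that last step, with a helper name and header values that are our illustration rather than the project's exact configuration:

```typescript
// Re-attach the CORS headers the upstream API omits, so the browser
// will accept the proxied response. Values here are illustrative.
function withCors(upstreamHeaders: Headers): Headers {
  const out = new Headers(upstreamHeaders);
  out.set("Access-Control-Allow-Origin", "*"); // the header the upstream lacked
  out.set("Access-Control-Allow-Headers", "Content-Type, Authorization");
  return out;
}
```

In an edge function, the handler would fetch the upstream API, then return its body with `withCors(upstreamResponse.headers)` applied.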
Calibrating the Hume EVI model took significant trial and error to prevent the AI from becoming over-talkative and to ensure it followed a strict, calming single-question protocol.
Accomplishments that we're proud of
Connecting a diverse group of strangers to build something out of nothing.
What we learned
We learned the power of AI-assisted development, the value of a design-first methodology, and Figma's powerful blend of design and precision.
What's next for HunchDoctor
Continuing development of HunchDoctor so it becomes more robust and responsive, and serves users accurately.
Guided Micro-Interventions: If the compass identifies a high "Bitter" or "Sour" signal combined with an elevated heart rate, the app shouldn't just record it — it should automatically trigger a response (e.g., a 60-second box-breathing session or a specific neural-calming soundscape tailored to your current biometric frequency).
Emotional Weather Patterns: Use data visualisation to reveal trends over time. Are you consistently more "Umami" on Tuesday afternoons? Does your respiratory rate improve after a voice session with Hunch? This turns one-off readings into actionable self-knowledge.
Hunch Profiles: Allow users to choose or have the AI suggest different personas — a "Stoic Hunch" for high-pressure work days, a "Nurturing Hunch" for evening wind-downs, or a "Mirror Hunch" that purely reflects facts without narrative.
Native App / PWA: Since the UI is built with a premium mobile-first aesthetic, moving to a native iOS/Android app or a high-performance PWA is a logical next technical step.
Built With
- anthropic-claude-api-(sonnet-3.7)
- css-(tailwind-4)
- face-api.js
- framer-motion
- functions
- glsl
- hume-ai-(evi)
- javascript
- radix-ui
- react-router-7
- three.js
- typescript
- vercel
- vitallens-api-(rppg-vitals)
- vite