Inspiration

Max works hard but can't get a job because she freezes in interviews. Her friends say "just be confident," but that isn't a button you push. Confidence is small skills: slow down, sit up, stop saying "um." And in a real interview, no one can tap your shoulder to remind you. Until now.

What it does

Tactile Talent is a real-time interview coach that trains confidence through your skin. You talk to an AI interviewer wearing a small haptic device (Woojer). The moment you slip (talking too fast, slouching, breaking eye contact), your body feels a gentle pulse. No words, no screen. Pulse after pulse, your body learns, until the cue fades because you don't need it anymore.

How we built it

  • Input: microphone + Google STT for speech; OpenCV + MediaPipe for pose and gaze; HeyGen LiveAvatar + GPT-4.1-mini as the interviewer.
  • Abstraction: all signals converge on one API, handle_event(name, meta), routed through SSE + JSONL.
  • Output: NumPy-synthesized waveforms mapped per event (slow pulses for TOO_FAST, rising chirp for SLOUCHING, directional tap for LOOKING_AWAY), played through Woojer via stereo USB audio.
  • End-to-end latency: <100 ms.
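The event-to-haptics path above can be sketched roughly like this. Only handle_event(name, meta) and the three event names come from our description; the sample rate, waveform parameters, and helper names are illustrative assumptions, not our exact implementation:

```python
import numpy as np

SR = 48_000  # audio sample rate in Hz (assumption)

def pulses(freq=60, n=3, dur=0.12, gap=0.15):
    """Train of short low-frequency bursts: 'slow down' cue for TOO_FAST."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    burst = np.sin(2 * np.pi * freq * t)
    silence = np.zeros(int(SR * gap))
    return np.concatenate([np.concatenate([burst, silence]) for _ in range(n)])

def chirp(f0=40, f1=120, dur=0.5):
    """Rising chirp for SLOUCHING: frequency sweeps from f0 to f1."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * dur))
    return np.sin(phase)

def directional_tap(side, dur=0.08):
    """Single tap panned hard left/right for LOOKING_AWAY."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    mono = np.sin(2 * np.pi * 80 * t) * np.hanning(len(t))  # windowed tap
    stereo = np.zeros((len(t), 2))
    stereo[:, 0 if side == "left" else 1] = mono
    return stereo

# One table maps event names to waveform synthesizers.
WAVEFORMS = {
    "TOO_FAST": lambda meta: pulses(),
    "SLOUCHING": lambda meta: chirp(),
    "LOOKING_AWAY": lambda meta: directional_tap(meta.get("side", "left")),
}

def handle_event(name, meta):
    """Single entry point every sensor converges on."""
    synth = WAVEFORMS.get(name)
    if synth is None:
        return None  # unknown events are ignored
    return synth(meta)  # in the real system: written to Woojer via stereo USB audio
```

The dictionary dispatch is what makes the abstraction cheap to extend: a new sensor only has to emit a name and metadata, and a new cue is one more entry in the table.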

Challenges we ran into

  • Designing haptics that feel meaningful, not just detectable. Tuning waveforms so a pulse intuitively means "slow down" took far longer than writing the code.
  • Calibrating the vision pipeline. MediaPipe is jittery; we added per-user baselines and rolling windows to stop false alerts.
  • Cross-platform audio. Mac and Windows enumerate audio device indices differently, so we built a diagnostic tool to keep the team in sync.

Accomplishments that we're proud of

  • A working end-to-end MVP in 24 hours: a real closed loop on real hardware.
  • Max actually overcame her fear. She'd stopped applying for jobs; when she tested this on herself, the pulses didn't feel like criticism, they felt like someone on her side.
  • A clean event abstraction any new sensor can plug into.

What we learned

Haptics is a design problem, not a code problem: the waveform carries the meaning. Calibration per user is non-negotiable in CV. A good API upfront saves hours of integration later. And interviews aren't a confidence problem, they're a feedback problem.

What's next for Tactile Talent

Smarter, context-aware cues via multimodal agents. Personalized adaptation that learns each user's baseline. Expanding beyond interviews — public speaking, teaching, therapy. And a thinner, always-on wearable you could actually bring into a real interview. We don't want to tell you to be confident. We want your body to find it.

Built With
