Inspiration
After a physio appointment, most patients are sent home with a printed sheet of exercises - no feedback, no accountability, no motivation. It’s easy to get discouraged, do the movements incorrectly, or give up entirely. We’ve been there.
We wanted to fix this, not by replacing physiotherapists, but by supporting their work in the moments that matter most: when the patient is alone. That’s how Mova was born.
What it does
Mova is a conversational physiotherapy assistant. It creates fully personalised, adaptive physio sessions - tailored to your injury, your stage of recovery, and even your mood that day - and delivers them through dynamic audio, video, and voice-based check-ins.
Users answer a few simple questions, and Mova instantly generates a guided session, adapting to patient feedback in real time using AI and video synthesis. It runs in any browser, with no app install needed.
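As a rough illustration of that flow (a hypothetical sketch, not Mova's actual data model - the types, exercise pool, and scaling factors below are invented for the example), onboarding answers can map to a scaled session plan:

```typescript
// Hypothetical sketch: map onboarding answers to a session plan.
// Exercise names, reps, and scaling values are illustrative only.
type Onboarding = {
  injury: "knee" | "shoulder" | "lower-back";
  stage: "acute" | "recovery" | "maintenance";
  mood: "low" | "ok" | "energised";
};

type Exercise = { name: string; reps: number; restSeconds: number };

function generateSession(answers: Onboarding): Exercise[] {
  // Illustrative exercise pool per injury type.
  const pool: Record<Onboarding["injury"], Exercise[]> = {
    knee: [
      { name: "Quad sets", reps: 10, restSeconds: 30 },
      { name: "Straight-leg raises", reps: 10, restSeconds: 30 },
    ],
    shoulder: [
      { name: "Pendulum swings", reps: 12, restSeconds: 30 },
      { name: "Wall slides", reps: 10, restSeconds: 30 },
    ],
    "lower-back": [
      { name: "Pelvic tilts", reps: 10, restSeconds: 30 },
      { name: "Bird-dog", reps: 8, restSeconds: 45 },
    ],
  };
  // Scale intensity down for early-stage injuries or a low-energy day.
  const scale =
    (answers.stage === "acute" ? 0.6 : answers.stage === "recovery" ? 0.8 : 1) *
    (answers.mood === "low" ? 0.8 : 1);
  return pool[answers.injury].map((ex) => ({
    ...ex,
    reps: Math.max(1, Math.round(ex.reps * scale)),
  }));
}
```

The point of the sketch is the shape of the personalisation: a small set of answers selects and scales content, rather than the user configuring a session by hand.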
How we built it
We built Mova using:
- React & TypeScript (via Bolt.new): For a robust, type-safe, and interactive frontend experience
- ElevenLabs: For realistic, natural AI-generated voices
- Tavus: For the personalised physio video coach
- Zustand: For simple, effective state management across the application
- Supabase: For our backend, user authentication, and data storage
- Netlify: For fast, clean deployment
The flow was designed to feel seamless - from onboarding, to generating a tailored session, to guiding the user through it.
Challenges we ran into
- Time pressure: Coming up with the idea, building it, and fixing bugs while both of us were working full-time was pretty intense.
- Video syncing: Aligning voice, video, and session content smoothly took a lot of iteration.
- Customisation depth: Making sessions feel genuinely tailored without overwhelming the user was a constant balance.
Accomplishments that we're proud of
- We turned a simple idea into a working product in just a few weeks.
- The feedback from early research has been overwhelmingly positive.
- It feels like a real, usable product. Because it is.
What we learned
- Audio-first UX is extremely powerful for physical, movement-based tasks.
- You can ship something genuinely valuable in days if you stay focused and cut ruthlessly.
- Personalisation drives user trust and engagement more than we expected.
What's next for Mova
What started as a hackathon project has become a passion project. We're now refining Mova into a polished MVP, with plans to launch a public beta and explore real-world use cases.
Technically, we plan to integrate an LLM (like Gemini or GPT-4) to dynamically generate entire exercise programs from a simple prompt. We will also explore using computer vision via a user's webcam to provide real-time feedback on their form and posture, ensuring exercises are performed safely and effectively.
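One way the LLM integration could work (a sketch under assumptions, not a committed design - the prompt wording, `ExerciseProgram` schema, and function names are hypothetical): ask the model for a structured JSON program, then validate it before trusting it in a physio context.

```typescript
// Hypothetical sketch: request a structured exercise program from an LLM
// and sanity-check the reply. A real integration would send buildPrompt's
// output via the provider's SDK (e.g. Gemini or GPT-4) and feed the raw
// text reply into parseProgram.
type ProgramExercise = { name: string; sets: number; reps: number };
type ExerciseProgram = { goal: string; exercises: ProgramExercise[] };

function buildPrompt(userRequest: string): string {
  return [
    "You are a physiotherapy assistant.",
    `Create an exercise program for: ${userRequest}`,
    'Reply with JSON only: {"goal": string, "exercises": [{"name": string, "sets": number, "reps": number}]}',
  ].join("\n");
}

// Never trust model output blindly: parse, then reject malformed programs.
function parseProgram(raw: string): ExerciseProgram {
  const parsed = JSON.parse(raw) as ExerciseProgram;
  if (!parsed.goal || !Array.isArray(parsed.exercises)) {
    throw new Error("Malformed program from model");
  }
  for (const ex of parsed.exercises) {
    if (!ex.name || ex.sets < 1 || ex.reps < 1) {
      throw new Error(`Invalid exercise: ${JSON.stringify(ex)}`);
    }
  }
  return parsed;
}
```

Constraining the model to a fixed JSON schema and validating the result keeps generated programs safe to hand to the audio/video session pipeline.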
This has legs, and we’re running with it. And if we get injured running, Mova's there for the physio!
Built With
- bolt.new
- elevenlabs
- javascript
- netlify
- react
- supabase
- tailwind
- tavus
- typescript
- vite
- zustand