Inspiration
Most people do not have easy access to a physical therapist (PT), especially in more rural communities. In the worst case, the closest PT can be several hours away, and if there are more serious injuries that need to be treated, that distance would need to be driven several times. We wanted to create a solution to help alleviate this problem.
What it does
KineTrack helps users improve the form of their physical therapy exercises using their phone and AI. KineTrack opens to a live camera so you can practice movements while the app gives short spoken prompts with advice on how to improve. At the end, the user gets a summary of what they did well and which aspects of the exercises they should focus on. Each recording is saved so you can always look back at your previous form. You can also choose the camera resolution and frame rate to match your device for smoother tracking.
How we built it
iOS App (Swift / SwiftUI)
- Capture & UI: AVFoundation, SwiftUI
- Realtime transport: URLSession (HTTP) + URLSessionWebSocketTask (WebSocket)
- AI client helpers: Gemini requests (see GeminiAPIFunctions.swift)
- TTS client: ElevenLabs REST (or SDK)
AI & Services
- Gemini 2.5 Flash (Vision + Text): posture assessment + coaching text
- ElevenLabs: convert tips to natural speech (streamed back)
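To give a sense of how the coaching text is requested, here is a minimal sketch of building the prompt sent to Gemini. The function name and prompt wording are illustrative assumptions; the project's actual request logic lives in GeminiAPIFunctions.swift on the client and may differ.

```python
def build_coaching_prompt(exercise: str, mistakes: list) -> str:
    """Compose a short prompt asking the vision/text model for one spoken tip.

    Hypothetical helper: the real app assembles its prompt in Swift, so this
    only illustrates the kind of context the model receives per frame batch.
    """
    issues = "; ".join(mistakes) if mistakes else "no obvious issues"
    return (
        f"You are a physical therapy coach. The user is doing a {exercise}. "
        f"Detected issues: {issues}. "
        "Reply with one short, encouraging spoken tip of at most 20 words."
    )
```

The returned string would then be posted to the Gemini API, and the model's reply forwarded to ElevenLabs for speech synthesis.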
Backend (Python / FastAPI)
- Endpoints: ws/analyze
- Pose detection: MediaPipe Pose for keypoint detection
- Exercise analysis: heuristic-based algorithms for detecting exercise mistakes
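The heuristics operate on the keypoints returned by pose detection. As a minimal sketch (the joint names, thresholds, and function names here are illustrative assumptions, not the project's actual rules), a mistake check can reduce to computing a joint angle from three keypoints and comparing it against a threshold:

```python
import math

def joint_angle(a, b, c):
    """Angle at point b, in degrees, formed by segments b->a and b->c.

    Points are (x, y) pairs, e.g. normalized keypoint coordinates.
    """
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

def check_squat_depth(hip, knee, ankle, max_angle=100.0):
    """Hypothetical heuristic: flag a shallow squat when the knee angle
    (hip-knee-ankle) stays above a depth threshold."""
    angle = joint_angle(hip, knee, ankle)
    return {"knee_angle": round(angle, 1), "deep_enough": angle <= max_angle}
```

A result like this can be serialized to JSON and streamed back to the phone over the ws/analyze WebSocket, where the app turns it into a spoken tip.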
Challenges we ran into
Connecting the WebSockets to the frontend was the biggest challenge we ran into. Both sides made several small mistakes in the JSON fields they expected from each other, which took almost three hours to debug.
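Mismatches like these are easiest to catch by validating each message against the agreed schema as soon as it arrives. The field names below are hypothetical stand-ins for the project's actual message shape; the point is that a loud failure on a missing key beats three hours of silent drift:

```python
import json

# Illustrative message shapes for the Swift client <-> FastAPI backend link;
# these field names are assumptions, not the project's real schema.
REQUIRED_FRAME_FIELDS = {"type", "timestamp", "image_b64"}
REQUIRED_FEEDBACK_FIELDS = {"type", "tip", "severity"}

def validate_message(raw):
    """Parse a WebSocket text frame and fail loudly on missing keys."""
    msg = json.loads(raw)
    required = (
        REQUIRED_FRAME_FIELDS
        if msg.get("type") == "frame"
        else REQUIRED_FEEDBACK_FIELDS
    )
    missing = required - msg.keys()
    if missing:
        raise ValueError(f"message missing fields: {sorted(missing)}")
    return msg
```

Running the same check on both ends of the socket turns a schema disagreement into an immediate, named error instead of a silent no-op.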
We also struggled to finalize the system's design. We considered writing everything in Swift to keep the app running fast, but only one person on our team knew Swift well, which would halt any future progress. We opted to have a Python/FastAPI backend instead to handle the pose detection and exercise analysis logic, but it took time to decide on this specific architecture.
Accomplishments that we’re proud of
Kevin: I'm proud of us getting the API keys to work and, above all, of building an iOS app for this hackathon project.
Ryan: I'm proud of getting a proper backend setup in a hackathon. I am also proud of my heuristic algorithms for analyzing the user's exercise mistakes.
Zeedan: I'm proud of contributing to the team and thinking through how the application would function across the backend and frontend. I'm also proud of designing and coding a reactive frontend that seamlessly interacts with the backend, and of everything I've learned about API calls this hackathon; I look forward to using more of them in the future.
What we learned
Kevin: I learned how building an app with Apple's tooling works. I hadn't written Swift or Objective-C before, so it was nice to pick up the workflow and the syntax along the way.
Ryan: I learned how to work with WebSockets and how to connect a non-JavaScript backend to a frontend.
Zeedan: I learned how to work with WebSockets, APIs, Swift, and Apple frameworks. I gained the experience of developing software in a fast-paced environment with other cool people.
What’s next for KineTrack
We want to add more exercises, yoga poses, and stretches to the app, as well as more advanced heuristics/ML for catching mistakes. We also want to implement a recommendation system that suggests which stretches or exercises the user should do based on their injury or fitness level.
Built With
- elevenlabs
- gemini
- mediapipe
- opencv
- python
- swift
- websockets
