Inspiration
In a world of bloated fitness apps requiring subscriptions, logins, and complex setups, we missed the raw, competitive spirit of the schoolyard: "I bet I can do more push-ups than you." We wanted to build the "Snapchat of Fitness", a frictionless, mobile-first web app where you open a link, drop your phone on the floor, and let AI be the referee. No servers watching you, no accounts required, just pure peer-to-peer competition powered by the device in your pocket.
What it does
DailyReps is a Progressive Web App (PWA) that uses on-device computer vision to automatically track and referee push-up challenges.
- AI Rep Counting: Users place their phone on the floor (selfie mode) or lean it against a wall; the app tracks body keypoints to count reps automatically.
- Adaptive Calibration: The app learns the user's specific range of motion during a "countdown calibration" phase to establish a personalized baseline.
- Real-time Feedback: Visual and audio cues guide the user to hit the correct depth.
- Async Battles: After recording, the app generates a challenge link. Friends click the link, see the target score, and the camera immediately opens for them to try to beat it.
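The adaptive calibration described above can be sketched as a small pure function. This is an illustrative reconstruction, not the app's actual code: the function name, the 25% margin, and the threshold scheme are assumptions; the real app derives its baseline from the fused excursion signal during the countdown.

```typescript
// Hypothetical auto-ranging calibrator: observe the excursion signal during
// the countdown phase, then derive personalized "up"/"down" thresholds from
// the user's actual range of motion instead of fixed constants.
interface Thresholds {
  up: number;   // signal must rise above this to count as "up"
  down: number; // signal must drop below this to count as "down"
}

function calibrate(samples: number[], margin = 0.25): Thresholds {
  const min = Math.min(...samples);
  const max = Math.max(...samples);
  const range = max - min;
  return {
    down: min + margin * range, // bottom 25% of the observed range
    up: max - margin * range,   // top 25% of the observed range
  };
}
```

Because the thresholds are fractions of the observed range, they adapt automatically to arm length and camera distance.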
How we built it
We built DailyReps using React 19 and Tailwind CSS for a native-app feel, relying on modern PWA standards for camera access and offline capabilities. For the core detection engine, we used ml5.js (MoveNet Lightning) running entirely in the browser (Edge AI). We chose this for privacy and zero-latency feedback: video streams never leave the user's device.
The "Hybrid Excursion" Algorithm
The most complex part of the build was the math behind the detection. We couldn't simply track the Y-axis, because users place phones in two distinct ways:
- Wall-lean: the user moves up and down the frame (vertical displacement).
- Floor-selfie: the user moves closer to the camera (scale change).
To solve this, we engineered a weighted signal fusion algorithm. We calculate a robust centroid from the Nose, Eyes, and Shoulders, and combine its vertical movement with the "scale" signal (inter-eye distance) into a unified Excursion Metric. This allows the app to work flawlessly whether the phone is vertical against a wall or lying flat on the ground.
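A minimal sketch of the weighted signal fusion described above. The function names, the equal 50/50 weights, and the exact normalization are assumptions for illustration; the writeup does not specify the real coefficients.

```typescript
// Sketch of the "Hybrid Excursion" idea: fuse vertical centroid movement
// (dominant in wall-lean) with scale change from inter-eye distance
// (dominant in floor-selfie) into one excursion signal.
interface Point { x: number; y: number; }

// Robust centroid over the head/shoulder keypoints.
function centroid(points: Point[]): Point {
  const sum = points.reduce(
    (acc, p) => ({ x: acc.x + p.x, y: acc.y + p.y }),
    { x: 0, y: 0 }
  );
  return { x: sum.x / points.length, y: sum.y / points.length };
}

// Excursion relative to a calibrated "up" baseline; larger = deeper rep.
function excursion(
  keypoints: Point[],      // nose + eyes + shoulders
  eyeDist: number,         // current inter-eye pixel distance (proximity proxy)
  baselineY: number,       // centroid Y at the calibrated "up" position
  baselineEyeDist: number, // inter-eye distance at the "up" position
  wY = 0.5,                // weight on vertical motion (assumed)
  wScale = 0.5             // weight on scale change (assumed)
): number {
  // Normalize vertical motion by eye distance so the signal is
  // camera-distance independent.
  const dy = (centroid(keypoints).y - baselineY) / baselineEyeDist;
  // Scale change grows as the user approaches the camera (floor-selfie).
  const dScale = eyeDist / baselineEyeDist - 1;
  return wY * dy + wScale * dScale;
}
```

In a floor-selfie setup the `wScale` term would carry most of the signal; against a wall, the `dy` term dominates.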
Challenges we ran into
- The "Floor Perspective" Problem: Standard pose detection models struggle when a subject is looming over the camera (selfie mode on the floor). The body doesn't move "down" the screen; it simply explodes in size. Our early prototypes failed to register reps in this position, so we had to rewrite our engine to treat proximity (scale) as a primary signal for depth, weighted heavier than vertical movement in that specific context.
- Stale Closures in the React Loop: Synchronizing the high-speed requestAnimationFrame loop (running at 60fps) with React's state model caused "stale closure" bugs, where the AI would count a rep internally but the UI wouldn't update. We implemented a Ref Bridge Architecture, using mutable useRef objects to bridge the gap between the render cycle and the animation loop, ensuring the detection logic always had access to the latest application state without triggering re-renders.
Accomplishments that we're proud of
- Auto-Ranging Thresholds: The app doesn't use fixed values for "up" or "down." It dynamically adapts to the user's arm length and camera distance during the first few seconds of the workout.
- Privacy-First Architecture: By using IndexedDB and local inference, we built a fully functional video social app that requires zero backend infrastructure for the video analysis.
- Cross-Device Consistency: Getting the math to work consistently across iPhones, Androids, and laptops required significant normalization of the coordinate systems.
What we learned
We learned that real-world computer vision is messy. Lighting flickers, loose clothing hides limbs, and users have terrible camera angles. We learned that smoothing algorithms (like the low-pass filter we applied to the signal) are just as important as the AI model itself for preventing "jitter reps."
What's next for DailyReps
V1 validates one question: do people actually record push-ups and challenge friends? If yes, V2 adds:
- Multi-person leaderboards (3+ participants)
- 7-day streak challenges
- Basic AI rep counting (via pose estimation)
- Pro tier ($2.99/month) for unlimited challenges
If the viral coefficient exceeds 1.2:
- Native mobile apps (iOS/Android)
- Push notifications for challenges
- Other exercises (pull-ups, squats)
- Challenge entry fees (winner takes the pot)
Core metric: share rate after completing a challenge. If it exceeds 30%, we have product-market fit.
We are incredibly excited to integrate Gemini 1.5 Flash to evolve DailyReps from a "Counter" into a "Coach."
- Multimodal Form Correction: While MoveNet counts the reps, we plan to send keyframes to the Gemini API to analyze form quality (e.g., "Your hips are sagging" or "Elbows flaring too much").
- AI Trash Talk: We want to use Gemini to generate personalized, playful taunts or encouragement based on the match history between two friends.
- Voice Commands: Enabling hands-free operation using Gemini's audio capabilities, allowing users to yell "Gemini, start the timer!" while in plank position.
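Returning to the detection pipeline for a moment: the smoothing mentioned under "What we learned" can be sketched as a simple exponential low-pass filter. The function name and the smoothing factor are illustrative assumptions, not the app's actual values.

```typescript
// Exponential low-pass filter: blends each new sample with the running
// estimate so single-frame jitter in the keypoint signal can't flip the
// up/down state and produce phantom "jitter reps".
function lowPass(samples: number[], alpha = 0.3): number[] {
  const out: number[] = [];
  let prev = samples[0] ?? 0; // seed with the first sample
  for (const s of samples) {
    prev = alpha * s + (1 - alpha) * prev; // small alpha = heavier smoothing
    out.push(prev);
  }
  return out;
}
```

A one-frame spike in the raw signal is attenuated to a fraction of its height, while a sustained push-up motion still passes through the thresholds after a short lag.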
Built With
- movenet
- pwa
- react
- react-19
- service-workers
- tailwind-css
- tensorflow.js
- typescript
- web-audio-api