Inspiration

The fitness world is full of social-media myths and contradictions. You're told to "train harder," but also to "train smarter." The problem is, nobody actually tells you when to stop. Most of us either leave gains on the table by quitting too early, or grind so hard that we fry our central nervous system and sabotage our recovery for the rest of the week.

We call this the Hypertrophy Paradox: the perfect stimulus for muscle growth looks almost exactly like overtraining. Research shows that the "sweet spot" for growth usually happens at a 20–35% loss in rep velocity. We wanted to build something that could find that window for you, in real-time, so you never have to guess if you’ve done enough.

What it does

NeuroGains is a real-time neural fitness coach. It uses your webcam and biometric data to track not just how many reps you’re doing, but the quality of those reps.

By analyzing your joint angles and neural stability (tracking things like micro-tremors and jitter), the system detects exactly when your muscles are hitting peak fatigue. When you hit that "red zone," NeuroGains alerts you to rack the weight. You get the maximum stimulus with minimum burnout. Everything is then saved to a dashboard that gives you a Daily Readiness score and visualizes your CNS recovery trends.

How we built it

We leaned heavily on MediaPipe Pose to track 33 body landmarks. To turn those coordinates into a workout, we built a four-state machine (Ready → Descending → Bottom → Ascending) that tracks elbow angles using vector math. We even added a 5° hysteresis buffer to make sure the jitter of a heavy lift didn't trigger accidental rep counts.
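
The elbow-angle math and the four-state machine can be sketched roughly like this (a minimal TypeScript sketch: the function names, and the 160°/90° extension and flexion thresholds, are illustrative assumptions; only the four states and the 5° hysteresis come from our design):

```typescript
// A pose landmark in normalized image coordinates (MediaPipe-style).
type Point = { x: number; y: number };

// Angle at the elbow: the angle between the vectors elbow→shoulder
// and elbow→wrist, in degrees.
function jointAngle(shoulder: Point, elbow: Point, wrist: Point): number {
  const a = { x: shoulder.x - elbow.x, y: shoulder.y - elbow.y };
  const b = { x: wrist.x - elbow.x, y: wrist.y - elbow.y };
  const dot = a.x * b.x + a.y * b.y;
  const mag = Math.hypot(a.x, a.y) * Math.hypot(b.x, b.y);
  return (Math.acos(dot / mag) * 180) / Math.PI;
}

type State = "ready" | "descending" | "bottom" | "ascending";

const HYSTERESIS = 5; // degrees of buffer so frame-to-frame jitter can't flip states
const TOP = 160;      // near full extension (assumed threshold)
const BOTTOM = 90;    // deep flexion (assumed threshold)

// Advance the state machine for one frame; returns the new state and rep count.
function step(state: State, angle: number, reps: number): [State, number] {
  switch (state) {
    case "ready":
      return angle < TOP - HYSTERESIS ? ["descending", reps] : [state, reps];
    case "descending":
      return angle < BOTTOM + HYSTERESIS ? ["bottom", reps] : [state, reps];
    case "bottom":
      return angle > BOTTOM + 2 * HYSTERESIS ? ["ascending", reps] : [state, reps];
    case "ascending":
      // Crossing back above full extension completes one rep.
      return angle > TOP ? ["ready", reps + 1] : [state, reps];
  }
}
```

Because every transition requires crossing a threshold plus the hysteresis buffer, a noisy angle oscillating right at 160° can't bounce the machine between states and inflate the rep count.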

The real "magic" happens in the data processing:

  • WebSockets: We stream biometric and pose data instantly into a live Recharts dashboard.
  • The Math: We calculate velocity loss by comparing your current ascent speed against a baseline established in your first two reps.
  • The Stack: We used a PostgreSQL database to handle the session history and built the frontend to translate complex neural signals into simple, color-coded status labels.
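
The velocity-loss math above is simple enough to sketch in a few lines (the function name and the clamping behavior are our illustrative choices; the baseline-from-the-first-two-reps idea is the part described above):

```typescript
// Per-rep velocity loss: fractional drop in ascent speed relative to a
// baseline averaged from the first two reps of the set.
function velocityLoss(repVelocities: number[]): number[] {
  if (repVelocities.length < 2) return repVelocities.map(() => 0);
  const baseline = (repVelocities[0] + repVelocities[1]) / 2;
  // Clamp at 0 so reps faster than baseline don't read as negative loss.
  return repVelocities.map((v) => Math.max(0, (baseline - v) / baseline));
}

// Example: ascent velocities (m/s) across a five-rep set.
velocityLoss([0.50, 0.48, 0.42, 0.38, 0.30]);
// The final rep lands around a 39% loss relative to the 0.49 m/s baseline,
// past the 20–35% window, so the coach would call for racking the weight.
```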

Challenges we ran into

At first, we didn't even know where to start. We had to do a lot of research just to understand the math behind the data. Even once the computer vision started working, we ran into a cold-start problem: people don't always start a set with perfect form, so establishing a baseline for normal speed was tricky. We solved this by using a moving baseline window over the first two reps before "locking in" the calibration.
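
The lock-in behavior looks something like this (a minimal sketch; the class name and API are hypothetical, but the two-rep moving window and the one-way lock match the fix described above):

```typescript
// Cold-start calibration: average velocities over a moving window until
// two reps are recorded, then lock the baseline for the rest of the set.
class BaselineCalibrator {
  private samples: number[] = [];
  private locked: number | null = null;

  addRep(velocity: number): void {
    if (this.locked !== null) return; // calibration already locked in
    this.samples.push(velocity);
    if (this.samples.length >= 2) {
      this.locked =
        this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
    }
  }

  // Before lock-in, fall back to the moving average of whatever we have.
  baseline(): number | null {
    if (this.locked !== null) return this.locked;
    if (this.samples.length === 0) return null;
    return this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
  }
}
```

Locking after two reps means a slow, grinding rep 5 is measured against a fresh-muscle baseline instead of dragging the baseline down with it.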

We also fought a long battle with sensor noise. Distinguishing a shaky muscle from simple camera shake required some heavy lifting with non-linear jitter detection algorithms. We ended up comparing short-window vs. long-window standard deviations to make sure the fatigue we were seeing was actually real.
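
The core of that check can be sketched as follows (assumed window sizes, noise floor, and spike ratio; the short-vs-long standard-deviation comparison is the technique described above). The intuition: sustained fatigue tremor elevates both windows roughly equally, while a one-off camera bump spikes the short window far above the long one.

```typescript
// Population standard deviation of a sample window.
function stdDev(xs: number[]): number {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  const variance = xs.reduce((a, x) => a + (x - mean) ** 2, 0) / xs.length;
  return Math.sqrt(variance);
}

// Flag a tremor only when the recent noise is both above a floor and
// sustained (short-window std comparable to long-window std), rejecting
// transient spikes where the short window dwarfs the long one.
function isSustainedTremor(
  signal: number[],
  shortWin = 15,   // ~0.5 s of frames (assumed)
  longWin = 60,    // ~2 s of frames (assumed)
): boolean {
  if (signal.length < longWin) return false; // not enough history yet
  const shortStd = stdDev(signal.slice(-shortWin));
  const longStd = stdDev(signal.slice(-longWin));
  const NOISE_FLOOR = 0.01; // below this, the joint is effectively steady (assumed)
  const SPIKE_RATIO = 1.5;  // short >> long means a camera bump, not fatigue (assumed)
  return shortStd > NOISE_FLOOR && shortStd / longStd < SPIKE_RATIO;
}
```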

Built With

  • bolt-database-(postgresql)
  • browser
  • react-18
  • framer-motion
  • lucide-react
  • mediapipe-pose
  • react-router-v7
  • recharts
  • tailwind-css
  • typescript
  • vite
  • webcam
  • websockets