AURA
Adaptive Unified Realtime Agent for Human Movement
đź’ˇ Inspiration
Physical injuries are incredibly common. As athletes and runners, we’ve felt how subtle errors in form, repeated thousands of times over time, can quietly accumulate into serious injuries that linger for years or never fully heal. We’ve seen the same story with our parents, who have developed chronic joint pain after decades of barely noticeable misalignments in how they move at work.
This isn’t just anecdotal. Low back pain alone affects over 619 million people globally and is the leading cause of disability worldwide.
Most people only receive feedback on their movement in short bursts — during a training session, a doctor’s visit, or a few weeks of physical therapy. Outside of those windows, they’re largely on their own, relying on vague cues from the internet or trial-and-error. The problem is that movement issues rarely hurt in the moment. Pain shows up much later, making it incredibly hard to trace back to the specific habits and patterns that caused it. Real correction needs to happen while the movement is happening.
We wanted a system that learns how you move and gently steers you in the right direction mid-motion: no delay, no inaccessible or confusing coaching, just tiny course corrections when they matter most.
This is AURA.
🎯 What AURA Does
AURA is a real-time wearable system that prevents injuries by using haptic feedback and adaptive motion intelligence to correct your form before small errors accumulate into long-term damage.
- You calibrate the sensors with a camera
- AURA creates a 3D digital twin of your body, learns your regular movement patterns, and continuously compares them to expert form
It’s designed to fit easily into your everyday routine, quietly correcting subtle, harmful movement patterns before they build into long-term injuries.
From a single demonstration video, AURA learns the motion and then coaches you through it as you move.
- You start with one high-quality demonstration video (yours or online-sourced)
- AURA builds a three-dimensional digital twin of that motion

As you move, AURA:
- Detects which joints are drifting away from the intended pattern
- Speaks short, easy-to-follow voice cues
- Shows a live joint error map in the digital twin view
- Sends a gentle vibration to the limb that needs adjustment
This way, your form is corrected while you’re moving, at the exact moment your body needs the guidance.
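The selection step (finding the one joint that most needs correction so we cue only there) can be sketched as follows. The joint list, angle representation, and threshold are illustrative assumptions, not our exact pipeline:

```python
import numpy as np

# Hypothetical joint set and threshold, for illustration only.
JOINTS = ["left_elbow", "right_elbow", "left_knee", "right_knee"]
ERROR_THRESHOLD_DEG = 12.0  # angular deviation before a cue fires

def worst_deviation(live_angles: np.ndarray, ref_angles: np.ndarray):
    """Return (joint_name, error) for the joint drifting furthest from the
    reference form, or None if every joint is within the threshold."""
    errors = np.abs(live_angles - ref_angles)
    idx = int(np.argmax(errors))
    if errors[idx] < ERROR_THRESHOLD_DEG:
        return None  # form is fine; stay quiet
    return JOINTS[idx], float(errors[idx])

live = np.array([95.0, 88.0, 160.0, 131.0])
ref = np.array([90.0, 90.0, 175.0, 130.0])
print(worst_deviation(live, ref))  # ('left_knee', 15.0)
```

Cueing only the single worst joint is what keeps the guidance from feeling overwhelming.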
🛠️ How We Built It
| Component | Purpose |
|---|---|
| MediaPipe Pose | Tracks body position frame by frame |
| FastAPI with WebSockets | Streams posture information continuously |
| Three.js | Displays the digital twin with heat based feedback color |
| Dynamic Time Warping | Matches variation in movement speed to the reference example |
| Gemini AI | Produces natural and short spoken cues that guide correction |
| ESP32 with IMU and Vibration Motors | Provides a subtle physical cue for direction of adjustment |
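The Dynamic Time Warping row above is what lets a slow rep still match a fast reference. A minimal sketch of the classic O(n·m) formulation on 1-D joint-angle sequences (a simplification; the real pipeline compares full pose vectors):

```python
import numpy as np

def dtw_distance(live: np.ndarray, ref: np.ndarray) -> float:
    """Dynamic-time-warping cost between two 1-D angle sequences,
    tolerant of speed differences between live motion and reference."""
    n, m = len(live), len(ref)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(live[i - 1] - ref[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # live runs slower
                                 cost[i, j - 1],      # live runs faster
                                 cost[i - 1, j - 1])  # in step
    return float(cost[n, m])

# A slowed-down copy of the same motion still aligns with zero cost.
ref = np.array([0.0, 10.0, 20.0, 10.0, 0.0])
slow = np.repeat(ref, 2)  # the same motion at half speed
print(dtw_distance(slow, ref))  # 0.0
```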
The hardware is built as an IoT network of wearable buzzer edges controlled by a central Raspberry Pi. Each edge measures its relative motion, and the Raspberry Pi combines those signals to reconstruct your posture as a 3D digital twin and evaluate your form in real time. During setup, a laptop runs a calibration that maps the relative edge positions to an ideal digital twin and generates a set of gradual learning steps. These parameters are then uploaded to the Raspberry Pi so it can analyze posture and give feedback continuously in real time, independently of the laptop.
Each buzzer edge communicates with the central Raspberry Pi over WiFi using an ESP32 microcontroller and a TCP protocol, and each is powered independently by a battery module. Every edge also carries a 6-axis IMU (LSM6DSO) that measures motion through acceleration and angular velocity. The relative positions derived from the IMUs are then combined to reconstruct the digital twin used for analysis and feedback.
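One way to picture the edge-to-Pi traffic is a small fixed-size binary packet per IMU sample. The field layout, edge IDs, and units below are hypothetical, chosen for illustration rather than taken from our actual protocol:

```python
import struct

# Hypothetical wire format for one IMU sample from a buzzer edge:
# edge id (u8), timestamp in ms (u32), 3-axis accel, 3-axis gyro,
# packed little-endian with no padding -> 29 bytes per sample.
PACKET_FMT = "<BI3f3f"

def encode_sample(edge_id, t_ms, accel, gyro) -> bytes:
    """Pack one sample for sending over the TCP socket."""
    return struct.pack(PACKET_FMT, edge_id, t_ms, *accel, *gyro)

def decode_sample(payload: bytes) -> dict:
    """Unpack a sample on the Raspberry Pi side."""
    edge_id, t_ms, ax, ay, az, gx, gy, gz = struct.unpack(PACKET_FMT, payload)
    return {"edge": edge_id, "t_ms": t_ms,
            "accel": (ax, ay, az), "gyro": (gx, gy, gz)}

pkt = encode_sample(3, 120450, (0.01, -0.02, 9.81), (0.5, 0.0, -0.1))
print(decode_sample(pkt)["accel"][2])  # ~9.81, up to float32 rounding
```

A fixed-size frame keeps parsing trivial on the Pi and avoids any per-message delimiter logic over TCP.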
đźš§ Challenges We Ran Into
Hardware was a real limitation in the beginning. Some of the boards we had did not include WiFi support, and the Princeton campus WiFi was blocking our TCP communication attempts, which stopped our signals entirely. We eventually identified a board with WiFi support and built a communication layer that let signals pass reliably, so we could send live cues during movement.
The IMU initially overloaded when we streamed motion data and triggered vibration at the same time. This caused repeated crashes. We tuned load scheduling and balancing so vibration feedback could occur without interrupting motion sensing.
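The shape of that fix can be sketched with cooperative scheduling: the sensing loop keeps its own cadence while each vibration pulse runs as a short background task instead of blocking the reads. The timings, loop counts, and placeholder calls below are illustrative, not our firmware:

```python
import asyncio

async def pulse_motor(duration_s: float):
    # stand-in for driving the vibration motor
    await asyncio.sleep(duration_s)

async def imu_loop(samples: list, hz: float = 50.0, n: int = 10):
    """Read the IMU at a fixed rate; fire vibration cues without stalling."""
    for i in range(n):
        samples.append(i)  # stand-in for one IMU read
        if i == 4:         # a correction cue fires mid-stream
            asyncio.create_task(pulse_motor(0.05))  # does not block sensing
        await asyncio.sleep(1.0 / hz)

samples = []
asyncio.run(imu_loop(samples))
print(len(samples))  # 10: sensing never stalled for the motor
```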
We also realized that we did not need to physically track every joint. Instead we determine which limb is meaningfully deviating and provide feedback only there. This simplified the design and made the guidance feel clearer and more natural.
🏅 Accomplishments We Are Proud Of
- The complete Adaptive Unified Realtime Agent feedback loop is working
- The physical vibration cue feels like a coach tapping a shoulder rather than a vibration device
- Voice cues arrive at the correct moment rather than constantly
- The system supports sports training, physical therapy routines, lifting technique and posture alignment
- The experience feels calm and supportive rather than overwhelming
📚 What We Learned
People do not need many instructions at once. They need one instruction at the moment it matters. Visual feedback combined with a short voice cue and a light physical signal helps the body correct itself faster with less effort. Timing is the key factor. Subtle guidance works better than continuous correction.
*AURA: Small adjustments create lasting change.*

