# Confidence Starts Here
Built in 24 hours, Presently helps you project more confident body language throughout your speeches.
Using live computer-vision feedback on a Raspberry Pi 3, Presently monitors your posture and gestures in real time. Whether you're rehearsing or presenting live, it guides you to:
- Maintain upright posture
- Minimize nervous gestures
- Make purposeful hand movements
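A minimal sketch of how an upright-posture check could work, assuming the vision model emits (x, y) pixel keypoints for the shoulders; the keypoint format and the 10-degree threshold are illustrative assumptions, not Presently's actual implementation.

```python
import math

def shoulder_tilt_degrees(left_shoulder, right_shoulder):
    """Angle of the shoulder line relative to horizontal, in degrees.

    Inputs are hypothetical (x, y) keypoints from a pose model.
    """
    dx = right_shoulder[0] - left_shoulder[0]
    dy = right_shoulder[1] - left_shoulder[1]
    return abs(math.degrees(math.atan2(dy, dx)))

def is_upright(left_shoulder, right_shoulder, max_tilt=10.0):
    """Posture counts as upright if the shoulders stay near level."""
    return shoulder_tilt_degrees(left_shoulder, right_shoulder) <= max_tilt

# Level shoulders pass; a large vertical offset reads as leaning or slouching.
```

The same pattern extends to other cues (head tilt, hip alignment) by comparing any keypoint pair against a per-cue threshold.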
## Features
- ✅ Real-time body language analysis
- 📊 Visual and audio feedback cues
- 🧠 AI-powered posture correction
- ⚡ Works offline on low-power hardware
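One way the "minimize nervous gestures" feature could be scored is by separating fidgeting from deliberate movement: a hand that oscillates in place has high positional spread but little net travel. The window size and thresholds below are assumptions for illustration.

```python
from statistics import pstdev

def gesture_jitter(xs, ys):
    """Spread of a hand keypoint across recent frames (pixels)."""
    return pstdev(xs) + pstdev(ys)

def is_nervous_gesture(xs, ys, jitter_threshold=15.0, travel_threshold=40.0):
    """Deliberate gestures travel somewhere; fidgeting oscillates in place."""
    travel = abs(xs[-1] - xs[0]) + abs(ys[-1] - ys[0])
    return gesture_jitter(xs, ys) > jitter_threshold and travel < travel_threshold
```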
## How It Works
- Connect a webcam or use the Pi Camera module.
- The system runs computer vision models locally on the Raspberry Pi.
- Your movements are analyzed frame-by-frame.
- Instant feedback is delivered through LEDs, LCD screens, or audio.
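Since the Built With list includes `socket`, the last step might hand cues off to a separate feedback process (driving the LEDs, LCD, or speaker) over a local socket so the per-frame analysis loop never blocks. The message format, port, and cue names here are assumptions, not the project's actual protocol.

```python
import json
import socket

# Hypothetical local endpoint where the feedback process listens.
FEEDBACK_ADDR = ("127.0.0.1", 9999)

def make_cue(upright, nervous):
    """Map the frame's analysis results to a single feedback cue."""
    if not upright:
        return {"cue": "posture", "message": "Straighten up"}
    if nervous:
        return {"cue": "gesture", "message": "Slow your hands"}
    return {"cue": "ok", "message": "Looking confident"}

def send_cue(cue, addr=FEEDBACK_ADDR):
    """Fire-and-forget UDP keeps the frame loop non-blocking on the Pi."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(cue).encode("utf-8"), addr)
```

The feedback process only has to parse one small JSON datagram per frame, so even an LCD or LED driver on the same Pi keeps up easily.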
## Why It Matters
Confident delivery isn't just about your words; it's about how you carry yourself. Presently helps you align your body language with your message.
Project built during a 24-hour hackathon.
## Built With
- opencv
- python
- raspberry-pi
- socket
- tensorflow