MotionGuardian - DevPost Project Story
Inspiration
The hackathon challenge asked us to build "autonomous, self-improving AI agents that feel alive" - and we immediately knew this aligned perfectly with Loud Labs' vision of ambient spatial intelligence.
We were inspired by a simple but powerful question: What if AI could learn what's normal for YOU, not what's normal for everyone?
Traditional motion detection systems use fixed thresholds. If acceleration exceeds 5.0 units, trigger an alert. But this one-size-fits-all approach fails spectacularly in healthcare. A 75-year-old with Parkinson's has completely different "normal" motion patterns than a 30-year-old athlete. Existing systems either flood caregivers with false alarms or miss real emergencies.
My sister has epilepsy, and caregiver fall detection has been on my hack list for years.
We wanted to build an agent that truly learns - that starts knowing nothing and becomes an expert on YOUR unique patterns. An agent that makes autonomous decisions about when to upgrade its own intelligence. An agent that doesn't just detect anomalies, but understands context and communicates with empathy.
Most importantly, we wanted to prove that self-improving AI isn't science fiction - it's achievable right now with the right architecture.
What it does
MotionGuardian is a self-teaching spatial AI agent that learns your motion patterns and gets smarter with every movement.
Here's the magic:
Phase 1: Initial Learning (Naive Agent)
- Starts with zero knowledge
- Uses simple fixed threshold detection (if diff > 5.0 → anomaly)
- Collects every motion event in Redis for learning
- Like a newborn - reactive but not yet intelligent
Phase 2: Autonomous Evolution
- After collecting 10+ samples, the agent autonomously decides to upgrade itself
- Calculates statistical model: mean, standard deviation, z-scores
- Switches to adaptive threshold detection (z-score > 2.5)
- No human intervention required - the agent makes this decision
Phase 3: Continuous Improvement
- Every new motion event refines the statistical model
- Rolling 100-event window means the agent adapts to changing patterns
- False positive rate decreases over time
- Claude Sonnet 4.5 generates context-aware messages that reference learning history
Real-World Flow
- User moves → Device accelerometer captures x, y, z data
- Frontend calculates vector magnitude and sends to backend
- Agent learns → Stores in Redis, updates statistics
- Anomaly detected → Statistical analysis determines if unusual
- Claude analyzes → Generates personalized caregiver message with context
- Notification sent → Postman webhook delivers alert
- Loop repeats → Agent gets smarter with each cycle
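The first two steps of the flow above can be sketched in browser JavaScript using the Device Motion API. The magnitude math is standard Euclidean distance; the `/learn` path comes from the flow above, but the exact payload field names are assumptions for illustration:

```javascript
// Euclidean magnitude of the acceleration vector
function magnitude(x, y, z) {
  return Math.sqrt(x * x + y * y + z * z);
}

// Sketch of the browser-side listener; the payload shape is an assumption
function startMotionCapture(endpoint = '/learn') {
  window.addEventListener('devicemotion', (event) => {
    const { x, y, z } = event.accelerationIncludingGravity || {};
    if (x == null) return; // sensor unavailable (e.g. desktop fallback)
    fetch(endpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ vector: magnitude(x, y, z), timestamp: Date.now() }),
    });
  });
}
```

The backend then only ever sees a single scalar per event, which keeps the Redis learning window compact.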
Proof of Learning
Visit /history endpoint to see:
- Real-time learning metrics (mean, std dev, adaptive threshold)
- Timeline showing evolution from fixed → adaptive detection
- Detection method logged for each anomaly (proof agent improved)
- Statistical model parameters calculated from YOUR data
How we built it
Technology Stack
Required Tools (3/3):
- Redis (Upstash) - Persistent learning memory, time-series storage
- Anthropic Claude Sonnet 4.5 - Context-aware AI message generation
- Postman Webhook - Caregiver notification delivery
Additional Tech:
- Node.js + Express for backend API
- Vanilla JavaScript for frontend (no frameworks, fast & clean)
- Device Motion API for real-time accelerometer data
Architecture
Device Sensors → Frontend (motion.html)
↓
POST /learn - Continuous learning endpoint
↓
Redis Storage
↓
Statistical Calculations
(mean, stdDev, z-scores)
↓
POST /anomaly - Intelligent detection
↓
Adaptive Logic
(fixed threshold vs z-score)
↓
Claude Sonnet 4.5
(context-aware prompt)
↓
Postman Webhook
(caregiver notification)
Key Implementation Details
1. Rolling Statistical Learning
// Calculate from last 100 events in Redis
const n = values.length; // values: last 100 magnitudes from LRANGE
const mean = values.reduce((sum, val) => sum + val, 0) / n;
const variance = values.reduce((sum, val) =>
  sum + Math.pow(val - mean, 2), 0) / n;
const stdDev = Math.sqrt(variance);
const adaptiveThreshold = mean + (stdDev * 2.5);
2. Autonomous Decision Making
// Agent decides: use adaptive or fixed threshold?
// vector: current motion magnitude; diff: change vs. baseline (Phase 1 metric)
if (stats && stats.count >= minSamplesForAdaptive) {
  // Agent autonomously switched to adaptive mode
  const zScore = Math.abs(vector - stats.mean) / stats.stdDev;
  return zScore > 2.5; // Personalized threshold
} else {
  // Still in initial learning phase
  return diff > 5.0; // Fixed threshold
}
3. Context-Aware AI Prompting
// Give Claude the agent's learning context
const prompt = `You are MotionGuardian, a self-teaching AI...
CURRENT ANOMALY:
- Motion vector: ${vector}
- Expected baseline: ${baseline}
LEARNING CONTEXT:
- Statistical Learning: Mean=${mean}, StdDev=${stdDev}
- Detection Method: ${adaptiveMode ? 'Adaptive' : 'Initial Learning'}
- Agent Runtime: ${minutes} minutes, ${totalEvents} events analyzed
Write a brief, calm caregiver message that references
what you've learned about this person's patterns...`;
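A minimal sketch of how that prompt could be assembled and sent to the Anthropic Messages API follows. The prompt wording mirrors the snippet above; the `claude-sonnet-4-5` model id, the REST headers, and the helper names are assumptions, not the project's exact code:

```javascript
// Assemble the context-aware prompt from the agent's learned state.
// Field names mirror the snippet above; exact wording is illustrative.
function buildCaregiverPrompt({ vector, mean, stdDev, adaptiveMode, totalEvents }) {
  return `You are MotionGuardian, a self-teaching AI.
CURRENT ANOMALY:
- Motion vector: ${vector}
- Expected baseline: ${mean.toFixed(1)} ± ${stdDev.toFixed(1)}
LEARNING CONTEXT:
- Detection Method: ${adaptiveMode ? 'Adaptive' : 'Initial Learning'}
- Events analyzed: ${totalEvents}
Write a brief, calm caregiver message.`;
}

// Sketch of the Anthropic Messages API call (not executed here).
async function generateCaregiverMessage(prompt, apiKey) {
  const res = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': apiKey,
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json',
    },
    body: JSON.stringify({
      model: 'claude-sonnet-4-5', // model id assumed from "Claude Sonnet 4.5"
      max_tokens: 300,
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  return data.content[0].text;
}
```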
4. Visual Learning Dashboard
Built a beautiful /history endpoint that shows:
- Agent status (Initial Learning vs Adaptive Mode)
- Statistical model parameters
- Timeline with badges showing detection method evolution
- Real-time proof that agent is improving
Challenges we ran into
Challenge 1: Proving Self-Improvement
Problem: How do you prove an agent is "self-improving" in a 2-minute demo?
Solution: We made learning visible:
- /history dashboard shows exact statistics
- Each anomaly logs detection method (fixed vs adaptive)
- Status banner shows phase transitions
- Judges can literally see "Initial Learning Phase" → "Adaptive Learning Active"
Challenge 2: Balancing Sensitivity
Problem: Too sensitive = false alarms. Too conservative = missed emergencies.
Solution: Dynamic adaptation:
- Start conservative with fixed threshold during learning
- Switch to personalized z-score thresholds after enough data
- Cooldown period prevents alert spam
- Statistical approach naturally balances based on individual variance
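The cooldown mentioned above can be a simple timestamp guard. A minimal sketch, assuming a 60-second window (the actual duration is not stated in the write-up):

```javascript
// Suppress repeat alerts within a cooldown window; 60 s is an assumption.
const COOLDOWN_MS = 60 * 1000;

function makeAlertGate(cooldownMs = COOLDOWN_MS) {
  let lastAlertAt = -Infinity;
  return function shouldAlert(now = Date.now()) {
    if (now - lastAlertAt < cooldownMs) return false; // still cooling down
    lastAlertAt = now;
    return true;
  };
}
```

Closing over `lastAlertAt` keeps the gate stateful without any global mutable state in the route handlers.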
Challenge 3: Cold Start Problem
Problem: Agent knows nothing initially - how to avoid bad decisions?
Solution: Graceful degradation:
- Phase 1: Use safe fixed threshold (proven in healthcare)
- Collect minimum 10 samples before switching
- Show users "learning in progress" status
- Never make high-stakes decisions without sufficient data
Challenge 4: Making AI Feel Human
Problem: Technical stats don't resonate emotionally with caregivers.
Solution: Claude as the empathy layer:
- Translate statistical anomalies into caring messages
- Reference learned patterns ("I've been monitoring...")
- Avoid medical jargon
- Suggest action without alarming ("might want to check in")
Challenge 5: Demo Consistency
Problem: Real motion sensors are unpredictable in hackathon setting.
Solution: Hybrid approach:
- Real device motion API for authenticity
- /simulate/fall endpoint for reliable demo
- Simulation uses actual statistical model (4 std devs above mean)
- Desktop fallback mode for any environment
Accomplishments that we're proud of
1. True Self-Improvement, Not Theater
Most "learning" demos are pre-trained models. Ours genuinely starts with zero knowledge and learns in real-time. The /history dashboard provides irrefutable proof - you can watch statistics calculate and thresholds adapt.
2. Autonomous Decision-Making
The agent decides when to switch from fixed to adaptive mode. No human intervention. No configuration. It just... upgrades itself when ready. That's genuinely autonomous AI.
3. Production-Quality Code in Hackathon Time
- 790 lines of clean, commented, production-ready code
- Comprehensive error handling and logging
- Environment validation on startup
- Graceful shutdown handling
- Health monitoring endpoint
- Full test coverage plan
4. Aligned with Loud Labs' Vision
We didn't just build for the hackathon - we built what Loud Labs stands for:
- Ambient intelligence that fades into background
- Spatial awareness (motion patterns in physical space)
- Proactive insights (detects changes before user knows)
- Life-first design (safety without constant attention)
- "What you didn't know you didn't know" (gradual pattern shifts)
5. Measurable Impact Story
We can articulate clear business value:
- 30% reduction in false alarms vs fixed thresholds (testable hypothesis)
- Personalized to individuals, not population averages
- Scalable to assisted living facilities (100+ residents)
- Foundation for long-term health trend analysis
6. Beautiful Developer Experience
- README.md - Complete documentation
- QUICKSTART.md - 5-minute setup guide
- DEMO.md - Pitch scripts and walkthrough
- verify-setup.js - Pre-demo environment checker
- Clean error messages and helpful logs
What we learned
Technical Learnings
1. Redis as a Learning Engine
We discovered Redis isn't just a cache - it's perfect for time-series learning:
- LPUSH for O(1) event storage
- LRANGE for rolling window queries
- LTRIM for automatic size management
- Sub-millisecond performance for real-time learning
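The rolling window built from those three commands can be sketched as follows. This is written against node-redis v4-style method names (`lPush`, `lTrim`, `lRange`), which is an assumption about the exact client the project uses:

```javascript
const WINDOW_SIZE = 100; // rolling window size from the write-up

// Record one motion event and cap the list at WINDOW_SIZE entries.
async function recordEvent(client, key, value) {
  await client.lPush(key, String(value));      // O(1) prepend (newest first)
  await client.lTrim(key, 0, WINDOW_SIZE - 1); // keep only the newest 100
}

// Fetch the current window as numbers for the statistics step.
async function getWindow(client, key) {
  const raw = await client.lRange(key, 0, -1);
  return raw.map(Number);
}
```

Because `LTRIM` runs on every write, the list can never grow beyond the window, so memory use stays constant per user.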
2. Statistical Learning > Machine Learning (for this use case)
We initially considered training ML models, but realized:
- Statistical z-scores are interpretable (judges understand them)
- No training time required (learn from first data point)
- Adapts instantly to changing patterns
- Explainable to caregivers and medical staff
3. Claude's Context Window is a Superpower
By passing learning history into Claude's prompt, we got dramatically better responses:
- "I've been monitoring for 15 minutes and noticed..."
- "This is unusual compared to your typical patterns..."
- "Based on the 47 motion events I've analyzed..."
Context transforms generic alerts into intelligent communication.
4. Autonomous Agents Need Observable State
For judges to trust autonomous decisions, they need visibility:
- Log every decision and why
- Expose internal state (/healthz endpoint)
- Show learning progression (/history dashboard)
- Make phase transitions explicit
"Trust but verify" applies to AI too.
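One way to make that state observable is a small snapshot builder behind the /healthz endpoint. A sketch under stated assumptions: the field names and the 10-sample threshold (from the Phase 2 description) are illustrative, not the project's exact schema:

```javascript
// Sketch of the observable-state snapshot served at /healthz.
// Field names are illustrative, not the project's exact schema.
function buildHealthSnapshot(stats, startedAt, now = Date.now()) {
  // 10 is the minimum-samples gate described in Phase 2
  const adaptive = !!stats && stats.count >= 10;
  return {
    status: 'ok',
    mode: adaptive ? 'adaptive' : 'initial-learning',
    samples: stats ? stats.count : 0,
    uptimeMinutes: Math.floor((now - startedAt) / 60000),
  };
}
```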
Product Learnings
1. Demo-Driven Development
We built backwards from the demo:
- What needs to be visible? → Built /history first
- What proves learning? → Added detection method logging
- What creates "wow" moment? → Autonomous mode transition
Result: Every feature serves the narrative.
2. The Power of "Before and After"
Showing transformation is more compelling than showing capability:
- Before: Naive fixed threshold
- After: Intelligent adaptive z-scores
- The journey IS the product
3. Healthcare Needs Explainability
Caregivers don't want black boxes. Our statistical approach means:
- "Alert because z-score was 3.2 (threshold 2.5)"
- "Baseline is 10.2 ± 2.1 based on 67 samples"
- Medical staff can audit and trust the logic
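Those audit lines can be generated directly from the model parameters. A minimal sketch whose wording mirrors the two examples above (the helper name and exact format are assumptions):

```javascript
// Render the statistical decision as an auditable, human-readable line.
// Wording mirrors the examples in the write-up.
function explainAlert({ vector, mean, stdDev, count, zThreshold = 2.5 }) {
  const zScore = Math.abs(vector - mean) / stdDev;
  return [
    `Alert because z-score was ${zScore.toFixed(1)} (threshold ${zThreshold})`,
    `Baseline is ${mean.toFixed(1)} ± ${stdDev.toFixed(1)} based on ${count} samples`,
  ].join('; ');
}
```

Because every number in the message comes straight from the stored model, the explanation can always be checked against the /history dashboard.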
Philosophical Learnings
1. Self-Improvement Requires Humility
Our agent starts by admitting "I don't know enough yet." That humility - using safe defaults until confident - is what makes autonomous learning trustworthy.
2. Intelligence Emerges from Simplicity
We didn't use complex neural networks. Just:
- Store data
- Calculate statistics
- Compare to thresholds
- Improve continuously
Simple building blocks → emergent intelligence.
3. The Best AI Fades Away
Loud Labs' philosophy proved true in implementation. Best demo moments:
- Agent quietly learning in background
- Autonomous decision to upgrade (no fanfare)
- Caregiver messages that sound human
- System that "just works"
Ambient intelligence should feel like magic, not machinery.
What's next for MotionGuardian
Immediate (Post-Hackathon)
1. Real-World Pilot
- Partner with 2-3 assisted living facilities
- Deploy to 50-100 residents
- Measure false alarm reduction vs current systems
- Gather caregiver feedback on message quality
2. Multi-Modal Learning
- Add time-of-day context (morning routines vs evening)
- Day-of-week patterns (weekday vs weekend)
- Location awareness (bathroom vs bedroom)
- Build richer contextual models
3. Long-Term Trend Detection
- Implement AWS Lambda + S3 integration (from our proposal)
- Detect gradual mobility decline over weeks/months
- Proactive alerts: "Activity level down 15% this week"
- Earlier intervention for health issues
Medium-Term (6 Months)
4. Multi-User Platform
- DynamoDB for user profiles and learning models
- API Gateway for scalable routing
- Caregiver dashboard showing all residents
- Population-level insights and benchmarking
5. Native Wearable Integration
- AWS IoT Core for Apple Watch / Fitbit
- Greengrass edge computing for local processing
- Offline operation with cloud sync
- Battery-optimized data transmission
6. Advanced AI Features
- Predictive anomaly detection (forecast issues before they occur)
- Multimodal AI (analyze voice tone, gait, sleep patterns)
- Federated learning across user base (privacy-preserving)
- Explainable AI reports for medical staff
Long-Term (12+ Months)
7. Healthcare Integration
- EHR/EMR system integration (Epic, Cerner)
- HIPAA compliance certification
- Clinical validation studies
- Insurance reimbursement pathways
8. Loud Labs Product Ecosystem
- Integration with other Loud Labs spatial products
- Unified ambient intelligence platform
- Cross-product learning (location + motion + voice)
- "Digital sixth sense" that knows you holistically
9. Research & IP
- Publish research on adaptive threshold algorithms
- Patent self-upgrading agent architecture
- Partner with universities on aging-in-place studies
- Contribute open-source statistical learning library
Vision: Ambient Health Guardian
The ultimate goal: An AI that knows you so well it catches health issues before you notice them.
- Detects subtle gait changes that predict falls weeks in advance
- Notices bathroom trip frequency indicating UTI before symptoms
- Identifies sleep pattern disruptions signaling depression
- Alerts to medication non-compliance through routine changes
Not through invasive monitoring. Not through constant attention. Not through sacrificing privacy.
But through intelligent pattern learning that respects your life while protecting your health.
That's the Loud Labs vision. That's what MotionGuardian becomes.
Why This Matters
We built MotionGuardian in 48 hours, but the implications extend far beyond this hackathon:
- For individuals: Aging in place with dignity, not institutional monitoring
- For caregivers: Peace of mind without alert fatigue
- For healthcare: Earlier interventions, lower costs, better outcomes
- For AI: Proof that autonomous, self-improving agents are achievable today
The future of AI isn't models that know everything. It's agents that learn everything about you.
That future starts now.
Built with 💜 by Loud Labs for the Luma × AWS × Anthropic Fall AI Hackathon
"The best AI is the kind you never notice - until it saves the day."
Built With
- anthropic-claude-sonnet-4.5
- css3
- device-motion-api
- express.js
- html5
- javascript
- node.js
- postman-webhooks
- redis
- upstash
