🚀 NanoSensei — Your Personal On-Device Skill Coach Powered by AI
🎯 Inspiration
Learning new skills is still painfully inefficient. Whether you're practicing public speaking, coding, design, fitness form, or a creative skill, most learning apps rely on cloud AI, which means:
- High latency
- Privacy concerns
- Expensive inference
- Poor offline support
But ARM-powered mobile devices today are capable of incredible on-device AI performance. I wanted to build something that shows what that future looks like:
🔥 An AI skill coach that runs locally on your phone and improves instantly based on your behavior — without ever sending your data to the cloud.
That idea became NanoSensei.
🤖 What it Does
NanoSensei is an offline-first AI coaching companion that analyzes micro-behaviors on your mobile device and gives real-time personalized feedback.
Key features
- On-device inference (ExecuTorch, with vector embeddings stored locally in SQLite)
- Voice and motion skill detection via device sensors
- Instant feedback without cloud latency
- Zero data leaves the device: full privacy
- Graviton-powered backend for syncing anonymized session summaries
- Gamified progress scoring using AI embeddings (sketched below)
NanoSensei helps users learn any skill faster — from speaking confidently to improving motor-skill practice.
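To make the scoring idea concrete, here is a minimal Python sketch of how a gamified score could be derived from embeddings: a session embedding is compared against a reference embedding for the target skill, and the cosine similarity is mapped onto a 0-100 score. The function names (such as progress_score) and the toy vectors are illustrative, not the production code.

```python
# Hypothetical sketch of the "gamified progress scoring" idea:
# compare a practice-session embedding against a reference embedding
# for the target skill, then map cosine similarity onto a 0-100 score.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def progress_score(session_emb: list[float], target_emb: list[float]) -> int:
    """Map similarity in [-1, 1] onto a gamified 0-100 score."""
    sim = cosine_similarity(session_emb, target_emb)
    return round((sim + 1.0) / 2.0 * 100)

# Example: a session that closely matches the target skill profile scores high.
print(progress_score([0.9, 0.1, 0.3], [1.0, 0.0, 0.25]))
```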
🛠️ How We Built It
Mobile (On-Device AI)
- Built with React Native + ExecuTorch for mobile inference
- Embedded a lightweight transformer-based skill evaluation model
- Sensor fusion: accelerometer, microphone, and touch-event signals
- Local embeddings stored in SQLite (see the sketch after this list)
- No cloud inference required
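The "local embeddings stored in SQLite" bullet boils down to serializing float vectors into BLOBs. The sketch below is written in Python for readability (the app itself is React Native), and the database file, table, and column names are made up for illustration:

```python
# Conceptual sketch of the on-device embedding store:
# float32 vectors packed into BLOBs in a local SQLite database.
import sqlite3
import struct

def to_blob(vec: list[float]) -> bytes:
    return struct.pack(f"{len(vec)}f", *vec)

def from_blob(blob: bytes) -> list[float]:
    return list(struct.unpack(f"{len(blob) // 4}f", blob))

conn = sqlite3.connect("nanosensei_local.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS session_embeddings ("
    "  id INTEGER PRIMARY KEY,"
    "  skill TEXT NOT NULL,"
    "  embedding BLOB NOT NULL"
    ")"
)

# Store one practice session's embedding, then read it back for scoring.
conn.execute(
    "INSERT INTO session_embeddings (skill, embedding) VALUES (?, ?)",
    ("public_speaking", to_blob([0.12, -0.4, 0.88])),
)
conn.commit()

row = conn.execute(
    "SELECT embedding FROM session_embeddings WHERE skill = ?",
    ("public_speaking",),
).fetchone()
print(from_blob(row[0]))  # approximately [0.12, -0.4, 0.88] (float32)
```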
Backend (ARM-Optimized)
Deployed on AWS Graviton (c7g.xlarge) for its ARM64 price/performance efficiency
- FastAPI backend
- Dockerized for ARM64
- PostgreSQL with SQLModel for session storage
- Simple sync API for user devices
- Architecture tuned for low cost and high throughput
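A minimal sketch of what the sync API could look like, assuming anonymized session summaries as the payload; the model fields and route name are illustrative rather than the exact production schema, and SQLite stands in here for the PostgreSQL instance running on Graviton:

```python
# Minimal sketch of the device-to-backend sync API (illustrative schema).
from typing import Optional

from fastapi import FastAPI
from sqlmodel import Field, Session, SQLModel, create_engine

class SessionSummary(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    device_hash: str   # anonymized device identifier
    skill: str         # e.g. "public_speaking"
    score: int         # gamified 0-100 progress score
    duration_s: float  # length of the practice session in seconds

# SQLite for brevity; the deployment points this at PostgreSQL.
engine = create_engine("sqlite:///sync.db")
SQLModel.metadata.create_all(engine)

app = FastAPI()

@app.post("/sync", response_model=SessionSummary)
def sync_session(summary: SessionSummary) -> SessionSummary:
    """Persist one anonymized session summary sent by a device."""
    with Session(engine) as db:
        db.add(summary)
        db.commit()
        db.refresh(summary)
        return summary
```

Because SQLModel classes double as Pydantic schemas, the same model can serve as both the request body and the database table, which keeps the sync endpoint small.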
DevOps
- Docker Compose (ARM64)
- Secure SSH via PEM
- CI/CD with GitHub Actions (optional future extension)
⚔️ Challenges We Ran Into
- ExecuTorch model conversion required significant tuning
- SQLModel raised errors with the reserved attribute name (metadata); resolved by refactoring the models (see the sketch after this list)
- ARM64 Docker images needed custom builds
- OneDrive Windows path permissions caused SSH key issues
- The container kept restarting until a deep inspection of the logs revealed the cause
- Rebuilding a backend fully compatible with ARM required several iterations
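For reference, the SQLModel naming conflict looks roughly like this: SQLModel (via SQLAlchemy's declarative machinery) reserves the class attribute metadata, so declaring a column attribute with that name fails at class definition time. Renaming the Python attribute while mapping it to an explicitly named column is one way out. The model and field names below are illustrative:

```python
# Sketch of the naming pitfall: "metadata" is reserved on SQLModel classes.
from typing import Optional

from sqlalchemy import JSON, Column
from sqlmodel import Field, SQLModel

# class PracticeSession(SQLModel, table=True):
#     id: Optional[int] = Field(default=None, primary_key=True)
#     metadata: dict  # <- fails: "metadata" is reserved by the declarative API

class PracticeSession(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    # Renamed attribute; the underlying DB column can keep the old name.
    session_metadata: dict = Field(sa_column=Column("metadata", JSON))
```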
🏆 Accomplishments We're Proud Of
- On-device transformer inference fully operational
- Backend deployed 100% on ARM / Graviton, no x86 dependencies
- Clean architecture: mobile → local AI → sync → Graviton backend
- Achieved offline-first skill coaching, no cloud inference
- Optimized Docker image size & startup time
- Built a system that feels like the future of personal learning
📚 What We Learned
- ARM-powered devices can run real AI workloads that used to require servers
- ExecuTorch is a breakthrough for local inference
- ARM Graviton simplifies cost and performance at scale
- Careful naming in SQLModel prevents internal conflicts
- Debugging containers through their logs is essential when they get stuck in restart loops
- How to run end-to-end mobile + backend inference without ever relying on GPUs
🚀 What's Next for NanoSensei
- Add more “skill packs” (speech, design gestures, mindfulness, drawing, fitness form)
- Expand to real-time feedback with 30ms latency
- Add WebGPU support for browser inference
- Build a marketplace of custom skill-coaching AIs
- Enable federated learning across devices
- Offer SDK for other developers to embed NanoSensei in their apps
- Launch on App Store + Play Store in 2026