Inspiration
I wanted to create a practical application that demonstrates the power of on-device AI processing on Arm architecture. Emotion tracking is a fascinating use case that combines computer vision, machine learning, and real-world utility, all while respecting user privacy through local processing.
What it does
EmotiSense AI analyzes your facial expressions in real time to detect four emotions: happy, excited, tired, and neutral. It tracks smile intensity, energy levels, head movement, and drowsiness, all processed locally on your device using Arm-optimized AI. The app provides:
- Real-time emotion detection with confidence scores
- Smile intensity tracking
- Energy level monitoring
- Head movement detection
- Drowsiness alerts with multi-factor calculation
- Eye closure tracking
- 100% private, all processing on-device, no data leaves your phone
- Fully offline, no internet required
How I built it
The app is built with React Native and the Expo SDK, leveraging Google ML Kit Face Detection, which runs TensorFlow Lite models optimized for the Arm architecture. The tech stack includes:
- React Native + Expo SDK 54 for cross-platform mobile development
- Google ML Kit Face Detection (TensorFlow Lite on Arm)
- TypeScript for type-safe code
- React Native Vision Camera for real-time camera access (wired up in the sketch after this list)
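As a concrete example of how those pieces connect, here is a minimal sketch of the camera wiring with a Vision Camera frame processor. The `detectFaces` import stands in for whichever ML Kit face-detection frame-processor plugin the development build registers (its name and return shape are assumptions here), and permission handling is omitted for brevity.

```tsx
import React from 'react';
import { StyleSheet } from 'react-native';
import { Camera, useCameraDevice, useFrameProcessor } from 'react-native-vision-camera';
// Hypothetical ML Kit face-detection frame-processor plugin; name assumed for the sketch.
import { detectFaces } from 'react-native-vision-camera-face-detector';

export function EmotionCamera() {
  const device = useCameraDevice('front');

  // The frame processor runs on the camera thread; ML Kit executes its
  // TensorFlow Lite models on-device, so no frame ever leaves the phone.
  const frameProcessor = useFrameProcessor((frame) => {
    'worklet';
    const faces = detectFaces(frame);
    if (faces.length > 0) {
      // In the real app the face landmarks are forwarded to the detector
      // modules (EyeDetector, SmileDetector, ...) for classification.
      console.log('faces in frame:', faces.length);
    }
  }, []);

  if (device == null) return null; // no front camera available

  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      isActive
      frameProcessor={frameProcessor}
    />
  );
}
```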
The architecture uses a modular detection system with independent detectors:
- EyeDetector: Eye tracking with 5-frame averaging for smooth detection
- HeadMovementDetector: Head pose and motion analysis with 10-frame history
- SmileDetector: Smile intensity analysis
- DrowsinessDetector: Multi-factor drowsiness calculation combining multiple signals
- EmotionClassifier: Central emotion classification logic
The camera captures frames at 5 FPS (one every 200 ms), ML Kit extracts facial features from each frame, and the custom detectors combine those features into an emotion label with a confidence score.
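Below is a rough TypeScript sketch of that flow. The `FaceFrame` fields mirror the probabilities and head angles ML Kit Face Detection reports, and the window sizes match the 5-frame and 10-frame buffers described above; the decision thresholds themselves are illustrative assumptions, not the app's calibrated values.

```typescript
// Shapes mirror what ML Kit Face Detection exposes per frame.
interface FaceFrame {
  smilingProbability: number;      // 0..1
  leftEyeOpenProbability: number;  // 0..1
  rightEyeOpenProbability: number; // 0..1
  headEulerAngleY: number;         // yaw in degrees
}

type Emotion = 'happy' | 'excited' | 'tired' | 'neutral';

// Rolling average over the last N frames for temporal smoothing.
class RollingAverage {
  private values: number[] = [];
  constructor(private readonly size: number) {}
  push(v: number): number {
    this.values.push(v);
    if (this.values.length > this.size) this.values.shift();
    return this.values.reduce((a, b) => a + b, 0) / this.values.length;
  }
}

class EmotionClassifier {
  private readonly eyes = new RollingAverage(5);     // 5-frame eye averaging
  private readonly smile = new RollingAverage(5);
  private readonly headYaw = new RollingAverage(10); // 10-frame head history

  classify(frame: FaceFrame): { emotion: Emotion; confidence: number } {
    const eyeOpen = this.eyes.push(
      (frame.leftEyeOpenProbability + frame.rightEyeOpenProbability) / 2,
    );
    const smile = this.smile.push(frame.smilingProbability);
    const movement = Math.abs(this.headYaw.push(frame.headEulerAngleY));

    // Illustrative decision rules; the real detectors use calibrated thresholds.
    if (smile > 0.7 && movement > 10) return { emotion: 'excited', confidence: smile };
    if (smile > 0.6) return { emotion: 'happy', confidence: smile };
    if (eyeOpen < 0.4) return { emotion: 'tired', confidence: 1 - eyeOpen };
    return { emotion: 'neutral', confidence: 0.5 };
  }
}
```

Smoothing each signal before classifying is what keeps the on-screen emotion from flickering between frames.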
Challenges I ran into
- Real-time performance: Optimizing detection algorithms to run smoothly at 5 FPS on mobile devices required careful tuning of frame averaging and history buffers.
- Expo Go limitations: ML Kit isn't supported in Expo Go, requiring development builds which added complexity to the setup process.
- Multi-factor detection: Combining multiple signals (eyes, smile, head movement) into coherent emotion and drowsiness classifications required extensive testing and calibration (a sketch of the drowsiness scoring follows this list).
- Cross-platform compatibility: Ensuring consistent performance across iOS and Android devices with different Arm processors.
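To make the multi-factor idea concrete, here is an illustrative drowsiness score that blends eye closure over a time window, sustained closure, and head droop. The weights, window length, and thresholds are assumptions for the sketch rather than the app's tuned values.

```typescript
// Illustrative multi-factor drowsiness scoring; all constants are sketch assumptions.
interface DrowsinessSample {
  eyeOpenProbability: number; // averaged left/right eye openness from ML Kit, 0..1
  headPitchDeg: number;       // head pitch; negative when the head droops forward
  timestampMs: number;
}

class DrowsinessDetector {
  private samples: DrowsinessSample[] = [];
  private static readonly WINDOW_MS = 10_000; // look at the last 10 seconds

  addSample(sample: DrowsinessSample): number {
    this.samples.push(sample);
    const cutoff = sample.timestampMs - DrowsinessDetector.WINDOW_MS;
    this.samples = this.samples.filter((s) => s.timestampMs >= cutoff);
    return this.score();
  }

  // Combines three signals into a 0..1 drowsiness score.
  private score(): number {
    if (this.samples.length === 0) return 0;

    // Factor 1: fraction of recent frames with eyes mostly closed (PERCLOS-style).
    const closedFrames = this.samples.filter((s) => s.eyeOpenProbability < 0.3).length;
    const perclos = closedFrames / this.samples.length;

    // Factor 2: longest run of consecutive closed frames (microsleep indicator).
    let longestRun = 0;
    let run = 0;
    for (const s of this.samples) {
      run = s.eyeOpenProbability < 0.3 ? run + 1 : 0;
      longestRun = Math.max(longestRun, run);
    }
    const microsleep = Math.min(longestRun / 10, 1); // ~2 s of closure at 5 FPS

    // Factor 3: sustained forward head droop.
    const avgPitch = this.samples.reduce((a, s) => a + s.headPitchDeg, 0) / this.samples.length;
    const droop = avgPitch < -15 ? 1 : 0;

    // Weighted blend; an alert could fire once the score crosses a threshold (e.g. ~0.6).
    return 0.5 * perclos + 0.3 * microsleep + 0.2 * droop;
  }
}
```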
Accomplishments that I'm proud of
- Achieved 100% on-device processing with zero data collection
- Built a modular, maintainable architecture that's easy to extend
- Created smooth real-time emotion tracking that updates every 200ms
- Optimized for Arm processors with minimal battery impact
- Delivered a fully offline experience that respects user privacy
- Clean, documented codebase that other developers can learn from
What I learned
- Deep dive into TensorFlow Lite optimization for Arm architecture
- Understanding of Google ML Kit Face Detection capabilities and limitations
- Best practices for real-time computer vision on mobile devices
- Balancing accuracy vs. performance in edge computing scenarios
- The importance of frame averaging and temporal smoothing for stable detections
What's next for EmotiSense AI
- Add more emotion categories (sad, angry, surprised, etc.)
- Implement mood tracking over time with local data storage
- Add customizable alerts for specific emotion patterns
- Create wellness insights based on emotion history
- Explore integration with meditation and mindfulness apps
- Add accessibility features for users with special needs
- Optimize further for even lower power consumption
Built With
- arm
- computer-vision
- expo.io
- ml-kit
- react
- react-native
- tensorflow-lite
- typescript
