Inspiration
Valentine’s Day can be exciting for some, but isolating for others. We wanted to build something for people who might spend that day alone, and even more so for those who crave companionship or the feeling of being heard. That is why we present ReLumi, a dynamic, mobile robot that shows care and connection in unique and creative ways.
What it does
ReLumi is a WiFi-enabled robotic companion that translates a variety of user and environmental inputs into expressive motion, thoughtful responses, and animated emotion. Through a web interface hosted on the ESP32, or simply through natural conversation, users can trigger emotional states (sleepy, happy, confused), activate dance routines, and play interactive mini-games like Rock-Paper-Scissors and Tic-Tac-Toe. Each emotion is implemented as a synchronized combination of motor-control sequences and OLED-based facial animations.
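As a rough illustration of that emotion model, each state can be a small record pairing a motor sequence with a list of OLED animation frames. This is a minimal sketch, not ReLumi's actual firmware; all names, speeds, and frame identifiers here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Emotion:
    name: str
    motor_steps: list  # hypothetical (left_speed, right_speed, duration_ms) tuples
    oled_frames: list  # hypothetical OLED animation frame identifiers

# Illustrative emotion table; real values would be tuned on the robot.
EMOTIONS = {
    "sleepy": Emotion("sleepy",
                      motor_steps=[(20, 20, 400), (0, 0, 800)],
                      oled_frames=["eyes_droop_1", "eyes_droop_2", "eyes_closed"]),
    "happy": Emotion("happy",
                     motor_steps=[(80, -80, 300), (-80, 80, 300)],  # wiggle in place
                     oled_frames=["smile_1", "smile_2"]),
    "confused": Emotion("confused",
                        motor_steps=[(40, -40, 150), (0, 0, 500)],
                        oled_frames=["tilt_eyes", "question_mark"]),
}

def play(emotion_name: str):
    """Look up the synchronized (motor steps, display frames) plan for an emotion."""
    e = EMOTIONS[emotion_name]
    return e.motor_steps, e.oled_frames
```

On the robot, the two halves of each plan would be dispatched together so the face and the wheels stay in sync.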
How we built it
ReLumi is built around an ESP32 microcontroller and an ESP32-CAM, chosen for their WiFi communication capabilities. The system architecture includes the ESP32 (main controller) and ESP32-CAM, a TB6612 motor driver, dual yellow hobby motors, an OLED display, and a NiMH battery pack. Additionally, the ESP32-CAM hosts a web server for a live camera feed, allowing ReLumi to visually engage with the user. Through computer vision and the Gemini, ElevenLabs, and AssemblyAI APIs, ReLumi can mimic how a human would see and respond, picking from a wide range of unique, animated facial expressions, responses, and movements. Through MediaPipe, OpenCV, and the integration of minimax algorithms, ReLumi can challenge your brain in fun ways with games such as Tic-Tac-Toe, Rock-Paper-Scissors, and countless word games.
Challenges we ran into
- Connecting the ESP32 to WiFi was a nightmare (special characters prevented connection)
- Time constraints forcing scope prioritization (having to cut down on specific features)
- Handling WiFi requests while maintaining smooth motion
- Integrating live camera streaming without overwhelming the ESP32
- Multithreading the various functionalities to run in parallel for fast, realistic, human-like responses
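The multithreading challenge above boils down to one pattern: each subsystem runs in its own worker thread and pushes events onto a shared queue, so the main loop never blocks on a slow camera frame or API call. This is a minimal sketch with stub workers; the worker names and event payloads are hypothetical stand-ins for the real camera and speech loops.

```python
import queue
import threading

events = queue.Queue()

def camera_worker():
    # In ReLumi this would poll the ESP32-CAM stream; here we emit a stub event.
    events.put(("camera", "face_detected"))

def speech_worker():
    # Stand-in for the speech-to-text loop (e.g. AssemblyAI transcription).
    events.put(("speech", "hello relumi"))

def run_workers():
    """Run all subsystem workers in parallel and collect their events."""
    threads = [threading.Thread(target=f) for f in (camera_worker, speech_worker)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    results = {}
    while not events.empty():
        source, payload = events.get()
        results[source] = payload
    return results
```

The same queue also gives a natural place to drop frames when the unstable ESP32-CAM feed stalls: the consumer just handles whatever events actually arrive.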
Accomplishments that we're proud of
- Successfully integrating motion, display animation, live camera feed analysis, and several API calls into a stable embedded system
- Turning a conceptual idea into a functioning robotic companion within 24 hours
- Successfully multithreading all the processes to run in parallel to SIGNIFICANTLY cut response times
What we learned
- Power distribution is critical when mixing motors and microcontrollers
- Integration complexity increases rapidly as subsystems (WiFi, motors, display, camera) are combined
- The challenges of multithreading with unstable live feeds (ESP32-CAM)
- Be wary of special characters >:(

