Inspiration

Valentine’s Day was yesterday, and while everyone else was posting flowers and dinner dates, our whole team was huddled over code and circuits. We realized we were spending the holiday single… again. So instead of feeling sorry for ourselves, we decided to build something that could spread the love we didn’t get. If Cupid wasn’t showing up for us, we’d engineer our own. That idea became Cupid’s Wingman, a robot designed to help people feel seen and appreciated.

What it does

Cupid’s Wingman is an autonomous AI-powered robot that navigates independently, detects and recognizes faces, and approaches people at a respectful distance. Once engaged, it interacts through expressive animated eyes and subtle head tilting. It uses movement and facial gestures to convey emotion, making people feel seen, acknowledged, and appreciated without saying a single word.

How we built it

Cupid’s Wingman is powered by a Raspberry Pi running OpenCV for real-time object and face detection. The Pi streams camera data over the network to a computer, where the vision processing script identifies faces and determines movement decisions. Once a target is detected, commands are sent to an Arduino, which controls the drive motors to steer the robot toward the person at a safe distance.
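The decision logic in that loop can be sketched as a small function (our illustration, with hypothetical names and thresholds, not the team's exact code): the face's horizontal offset in the frame picks the turn direction, and the bounding-box width serves as a rough distance proxy so the robot stops before crowding anyone.

```python
def steering_command(frame_width, face_x, face_w,
                     center_tolerance=0.1, stop_width_ratio=0.35):
    """Map a detected face's bounding box to a motor command string.

    frame_width      -- width of the camera frame in pixels
    face_x, face_w   -- left edge and width of the face bounding box
    center_tolerance -- fraction of frame width treated as "centered"
    stop_width_ratio -- face width (relative to frame) at which we stop,
                        keeping a respectful distance
    """
    # A wide bounding box means the person is close: stop here.
    if face_w >= stop_width_ratio * frame_width:
        return "STOP"
    # Otherwise steer toward the face's center, or drive straight.
    face_center = face_x + face_w / 2
    offset = (face_center - frame_width / 2) / frame_width
    if offset < -center_tolerance:
        return "LEFT"
    if offset > center_tolerance:
        return "RIGHT"
    return "FORWARD"
```

On the robot, the returned command string would be written over a serial link (e.g. with pySerial) for the Arduino to translate into motor signals; the thresholds above would be tuned experimentally, as described in the calibration challenges below.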

The Raspberry Pi also controls a servo motor that enables the upper assembly to tilt, creating expressive head movements. The robot uses two separate battery systems, one dedicated to powering the microcontrollers and processing units, and another for the motors, ensuring stable power distribution and preventing electrical noise from affecting control signals. A smartphone mounted at the front acts as the robot’s face, displaying animated eyes that bring personality and emotional expression to the interaction.
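The head tilt could be driven from the Pi roughly as follows. This is a minimal sketch assuming a standard hobby servo on a 50 Hz PWM signal (1–2 ms pulse over a 20 ms period); the GPIO pin number and duty-cycle range are our assumptions, not specifics from the build.

```python
def angle_to_duty_cycle(angle_deg, min_duty=2.5, max_duty=12.5):
    """Convert a tilt angle (0-180 degrees) to a PWM duty-cycle percentage.

    At 50 Hz, 2.5% duty is a 0.5 ms pulse and 12.5% is a 2.5 ms pulse,
    a common range for hobby servos; clamp the angle to stay in bounds.
    """
    angle_deg = max(0, min(180, angle_deg))
    return min_duty + (max_duty - min_duty) * angle_deg / 180

# On the Pi this would feed RPi.GPIO's software PWM, roughly:
#   import RPi.GPIO as GPIO
#   GPIO.setmode(GPIO.BCM)
#   GPIO.setup(18, GPIO.OUT)              # pin 18 is an assumption
#   pwm = GPIO.PWM(18, 50)                # 50 Hz servo signal
#   pwm.start(angle_to_duty_cycle(90))    # center the head
#   pwm.ChangeDutyCycle(angle_to_duty_cycle(110))  # small curious tilt
```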

We have both a cardboard prototype and a fully enclosed 3D-printable design modeled from scratch.

Challenges we ran into

SSH & Networking Reliability – Establishing stable communication between the Raspberry Pi and the computer for real-time OpenCV processing involved debugging connection drops, latency, and configuration issues.
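One pattern that helps with connection drops like these is retrying the connect call with exponential backoff instead of letting the vision loop crash. The sketch below is our illustration of that idea, not the team's actual fix:

```python
import time

def connect_with_retry(connect, attempts=5, base_delay=0.5, sleep=time.sleep):
    """Call `connect()` until it succeeds or `attempts` runs out.

    Sleeps base_delay * 2**i between tries (exponential backoff) so a
    briefly unreachable host doesn't get hammered; re-raises the last
    error if every attempt fails. `connect` might wrap socket.create_connection.
    """
    last_err = None
    for i in range(attempts):
        try:
            return connect()
        except OSError as err:
            last_err = err
            sleep(base_delay * (2 ** i))
    raise last_err
```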

Real-Time Vision Calibration – Tuning face detection thresholds and accounting for lighting variations required extensive testing to reduce false positives and improve tracking accuracy.

Movement & Distance Calibration – Converting vision data into smooth motor steering was challenging. We had to fine-tune stopping distance and steering sensitivity to ensure natural, respectful interactions.

Accomplishments that we're proud of

We’re proud of successfully integrating autonomous navigation, face recognition, and expressive interaction into a cohesive design.

What we learned

Learned OpenCV and object recognition

Learned Raspberry Pi and SSH while processing video footage with DroidCam

Learned Arduino controls

Learned rapid prototyping using 3D printing

Learned the importance of rest

What's next for Cupid's Wingman

Next, we are transitioning from our prototype to the fully 3D-printed final model to improve durability, structural stability, and overall design polish. We also plan to optimize our object recognition pipeline by reducing camera resolution for faster processing, implementing HSV color trimming, and creating refined masking techniques to improve detection accuracy and reduce computational load. These improvements will make the robot more efficient, responsive, and scalable for future development.
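The masking step could look something like the sketch below. In the real pipeline this would be OpenCV's `cv2.cvtColor` plus `cv2.inRange` on downscaled frames; here we show the same in-range test in plain NumPy, and the example HSV bounds are placeholder assumptions, not tuned values.

```python
import numpy as np

def hsv_mask(hsv_frame, lower, upper):
    """Return a binary mask (0 or 255) of pixels inside the HSV range.

    Equivalent in spirit to cv2.inRange: a pixel passes only if every
    channel lies within [lower, upper], which lets later stages ignore
    background pixels and cut per-frame computation.
    """
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    inside = np.all((hsv_frame >= lower) & (hsv_frame <= upper), axis=-1)
    return inside.astype(np.uint8) * 255
```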
