Inspiration

We were inspired by the idea of creating an interactive, AI-powered robotic companion that could assist in real-world scenarios, from helping visually impaired individuals to enhancing educational experiences and smart spaces.

What it does

Boogie captures real-time images using a webcam mounted on a buggy robot, performs object detection and facial recognition, synchronizes animated mouth movements with text-to-speech output, and supports user interaction through predefined commands or ChatGPT.

How we built it

Webcam integration for real-time image capture.

AI pipeline using YOLOv10 for object detection and classification.
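The write-up does not show how detections are consumed downstream, but a confidence filter is a typical post-processing step. The sketch below assumes detections have already been extracted from the YOLOv10 output as `(label, confidence, box)` tuples; the threshold value and function names are illustrative, not from the project.

```python
# Hypothetical post-processing for YOLOv10 detections: keep only results
# whose confidence meets a cutoff. Detections are assumed to be
# (label, confidence, (x1, y1, x2, y2)) tuples already extracted from
# the model output; the 0.5 threshold is an assumption.

CONF_THRESHOLD = 0.5

def filter_detections(detections, threshold=CONF_THRESHOLD):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d[1] >= threshold]

detections = [
    ("person", 0.91, (10, 20, 110, 220)),
    ("chair", 0.32, (200, 40, 260, 140)),   # dropped: below threshold
    ("backpack", 0.66, (50, 60, 90, 120)),
]
kept = filter_detections(detections)
```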

Facial recognition to distinguish teammates from strangers by comparing feature vectors.
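The teammate-vs-stranger check by feature-vector comparison could look like the following sketch: cosine similarity against stored teammate embeddings, with anything below a threshold labelled a stranger. The threshold (0.6), the embedding dimensionality, and the names are illustrative assumptions.

```python
# Sketch of feature-vector face matching: compare a face embedding against
# known teammate embeddings by cosine similarity. Threshold and reference
# vectors are illustrative, not the project's actual values.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(face_vec, known, threshold=0.6):
    """Return the best-matching teammate name, or 'stranger'."""
    best_name, best_sim = "stranger", threshold
    for name, ref in known.items():
        sim = cosine_similarity(face_vec, ref)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

known = {"alice": np.array([1.0, 0.0, 0.0]), "bob": np.array([0.0, 1.0, 0.0])}
print(identify(np.array([0.9, 0.1, 0.0]), known))  # close to alice
```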

PyGame application for mouth animation based on the amplitude of text-to-speech output.
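Driving the mouth from TTS amplitude can be sketched as a pure mapping: compute the RMS amplitude of each audio chunk, then pick one of a few mouth sprites for PyGame to blit. The frame count and the "fully open" RMS level are assumptions; the project's actual amplitude extraction is not documented.

```python
# Sketch of amplitude-driven mouth animation: RMS amplitude of a TTS audio
# chunk -> mouth sprite index. Thresholds and frame count are illustrative.
import math

def rms(samples):
    """Root-mean-square amplitude of one audio chunk (floats in [-1, 1])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mouth_frame(amplitude, frames=4):
    """Pick a mouth sprite index: 0 = closed ... frames-1 = wide open."""
    level = min(amplitude / 0.5, 1.0)   # assume 0.5 RMS ~= fully open
    return round(level * (frames - 1))

silence = [0.0] * 256
loud = [0.4 * math.sin(2 * math.pi * i / 32) for i in range(256)]
print(mouth_frame(rms(silence)), mouth_frame(rms(loud)))
```

In the PyGame loop, each audio chunk's frame index would select which mouth image to blit before the next `display.flip()`.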

Central controller that integrates object detection, facial recognition, animation, and chatGPT communication.
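The controller's command handling can be sketched as a simple dispatch: predefined commands route to local handlers, and anything else falls through to ChatGPT. The handler names, command strings, and the fallback stub are assumptions for illustration.

```python
# Minimal sketch of the central controller's dispatch logic: predefined
# commands go to local handlers; free-form input falls back to ChatGPT.
# All names here are hypothetical, not from the project.

def handle_describe_scene():
    return "objects: [detected objects would go here]"

def handle_who_is_there():
    return "faces: [recognized faces would go here]"

COMMANDS = {
    "describe scene": handle_describe_scene,
    "who is there": handle_who_is_there,
}

def dispatch(user_input, ask_chatgpt=lambda text: f"LLM reply to: {text}"):
    handler = COMMANDS.get(user_input.strip().lower())
    return handler() if handler else ask_chatgpt(user_input)

print(dispatch("Describe Scene"))
print(dispatch("tell me a joke"))
```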

OpenAI API and Google services for natural language processing (NLP).
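The ChatGPT side of the NLP integration would assemble a message list in the OpenAI chat format before calling the API. The system prompt below is an assumption, and the network call is commented out so the sketch stays self-contained.

```python
# Sketch of the ChatGPT integration: build a message list in the OpenAI
# chat format. The system prompt is an assumption; the actual API call is
# commented out to keep this snippet self-contained.

def build_messages(user_text, history=()):
    messages = [{"role": "system",
                 "content": "You are Boogie, a helpful robot companion."}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_text})
    return messages

msgs = build_messages("What objects do you see?")
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o-mini", messages=msgs)
```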

Raspberry Pi, Arduino, and LiDAR hardware.

Navigation and mapping built on the LiDAR data.
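A first step in LiDAR-based mapping is converting a polar scan into Cartesian points for a 2-D map, which could look like the sketch below. The angle convention and units are assumptions; the project's actual navigation stack is not documented.

```python
# Sketch of the LiDAR mapping step: convert (angle, distance) returns into
# Cartesian (x, y) points. Degrees and metres are assumed units.
import math

def scan_to_points(scan):
    """scan: iterable of (angle_deg, distance_m) -> list of (x, y) metres."""
    points = []
    for angle_deg, dist in scan:
        if dist <= 0:          # skip invalid / out-of-range returns
            continue
        a = math.radians(angle_deg)
        points.append((dist * math.cos(a), dist * math.sin(a)))
    return points

demo = scan_to_points([(0, 1.0), (90, 2.0), (180, 0.0)])
```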

Challenges we ran into

Accomplishments that we're proud of

What we learned

What's next for Boogie
