Inspiration
Hands-off photography is difficult: without a photographer manually setting up and aiming the camera, static setups like tripods fail to capture or properly center moving subjects and cannot keep pace with dynamic, fast-moving events.
What it does
Our TurboPi Rover autonomously roams while panning its camera in search of human subjects. Once it detects a person or group, it centers them in the frame, snaps a photo, and later emails it to them.
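At its core, the centering step reduces to comparing the detected person's bounding-box center with the frame center and panning toward the offset until the subject is framed. A minimal sketch of that logic (the box format and the deadband value are our illustration, not the actual TurboPi API):

```python
def pan_direction(frame_width, box, deadband=30):
    """Return 'left', 'right', or 'centered' for a detected person box.

    box is (x, y, w, h) in pixels; deadband is the tolerance around the
    frame center within which the subject counts as centered enough to
    snap the photo.
    """
    x, _, w, _ = box
    offset = (x + w / 2) - frame_width / 2
    if offset < -deadband:
        return "left"
    if offset > deadband:
        return "right"
    return "centered"
```

In the real loop, "left"/"right" would drive the camera's pan servo, and "centered" would trigger the shutter.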
How we built it
We began by assembling the Rover's hardware, attaching the motors, camera, and Raspberry Pi. After SSHing into the Raspberry Pi, we used VS Code and VNC to code and test various features like roaming, detection, panning, and photo delivery.
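The photo-delivery step can be done with Python's standard library alone. A hedged sketch of how we'd structure it (the subject line, filename, and SMTP host are placeholders, not our production values):

```python
import smtplib
from email.message import EmailMessage

def build_photo_message(jpeg_bytes, recipient, sender):
    """Package a captured JPEG as an email attachment."""
    msg = EmailMessage()
    msg["Subject"] = "Your photo from Carrier Pigeon"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("Hi! The TurboPi Rover snapped this photo of you.")
    msg.add_attachment(jpeg_bytes, maintype="image", subtype="jpeg",
                       filename="photo.jpg")
    return msg

def send_photo(msg, sender, password, host="smtp.example.com"):
    """Send the message over SMTP with TLS (host is a placeholder)."""
    with smtplib.SMTP_SSL(host, 465) as server:
        server.login(sender, password)
        server.send_message(msg)
```

Separating message construction from sending makes the packaging easy to test without network access.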
Challenges we ran into
After completing the human-detection code, we realized the program was overloading the Raspberry Pi's CPU: frames were dropping, and detection was laggy and unreliable overall. We solved this by offloading the majority of the processing to a laptop, using multi-threading to keep capture and detection running concurrently.
Accomplishments that we're proud of
We're most proud of the way we leveraged the laptop's connection to the Raspberry Pi. This overcame our most significant roadblock: we were unsure whether the Rover alone would even be capable of performing the features we had planned. Multi-threading let us stay true to our original design and achieve smoother, more accurate tracking and recognition than the Raspberry Pi could manage by itself.
What we learned
We learned how to SSH into remote systems and how to use multi-threading to perform multiple operations at the same time. We expanded our understanding of hardware and how to integrate physical and software systems.
What's next for Carrier Pigeon
A feature we would like to implement is recognition of a variety of hand gestures. For example, a thumbs up could invite the Rover in for a picture, while a peace sign could signal it to zoom in for portrait mode. In addition, a "stop" hand signal would tell the robot that the person does not want to be photographed. This could be complemented by a facial recognition feature: if a person has specified that they would not like their photo taken at the event, the Rover would remember their face and avoid photographing them in the future.
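The planned gesture feature could reduce to a mapping from recognized gesture labels to Rover behaviors, with the opt-out signal taking precedence over everything else. A hypothetical sketch (the labels and action names are invented for illustration; a real build would feed this from a hand-pose model):

```python
# Hypothetical gesture-to-behavior dispatch for the planned feature.
GESTURE_ACTIONS = {
    "thumbs_up": "approach_for_photo",
    "peace_sign": "zoom_portrait",
    "stop_palm": "do_not_photograph",
}

def resolve_action(gestures):
    """Pick one action from the gestures detected in a frame.

    The opt-out 'stop' signal always wins; otherwise take the first
    recognized gesture, or keep roaming when nothing is recognized.
    """
    if "stop_palm" in gestures:
        return "do_not_photograph"
    for g in gestures:
        if g in GESTURE_ACTIONS:
            return GESTURE_ACTIONS[g]
    return "keep_roaming"
```

Giving the "stop" gesture unconditional priority keeps the consent behavior simple to reason about as more gestures are added.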