Our Vision

Today, our planet is facing an unprecedented environmental catastrophe, where even small quantities of wasted energy add up to a large impact. Despite that, we have all been guilty of leaving a room without turning off the lights. It is now time for that to change.

To accomplish this, we built a fully automated robot that monitors the status of the light switch in a room. If anybody leaves the room without turning off the lights, it “gently” shoots them with NERF darts to remind them of the importance of turning off the lights, and it continues to follow them around until they comply.

Our Process

We started our process by ideating on how we could help decrease energy waste and incentivize positive actions, while also striking a balance between making something completely serious and something utterly ridiculous.

We split into two teams: one handling image recognition and high-level Roomba control, and another handling low-level Roomba control and the corresponding peripherals (light switch, NERF toy, etc.). We worked smoothly in our two sub-teams for the first day, and when it came time to combine our work, we were able to quickly set up libraries and condense functions appropriately because we had been communicating effectively the entire time.

Key Technologies

NERF Toys

Our project utilized an auto-firing NERF toy (the Vulcan EBF-25), and we based all of our designs around it in conjunction with the Roomba.

iRobot Create 2

The iRobot Create 2 was provided by iRobot, and we used it as the base of our system, both literally and figuratively: it supported our mobile platform, and its API allowed us to control the robot accurately.
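
For illustration, the sketch below shows roughly what driving the Create 2 from Python over its serial Open Interface can look like. This is a minimal sketch rather than our actual control code: the serial port path and speeds are assumptions, and a wrapper library can provide the same commands at a higher level.

```python
# Minimal sketch of driving an iRobot Create 2 over its serial Open Interface.
# Assumes pyserial is installed; the port path and speeds are placeholders.
import struct
import time

import serial

PORT = "/dev/ttyUSB0"  # assumption: adjust to your USB-serial adapter


def drive_direct(ser, right_mm_s, left_mm_s):
    """Drive Direct (opcode 145): signed 16-bit wheel velocities in mm/s."""
    ser.write(struct.pack(">Bhh", 145, right_mm_s, left_mm_s))


with serial.Serial(PORT, baudrate=115200, timeout=1) as ser:
    ser.write(bytes([128]))  # Start: open the Open Interface (Passive mode)
    ser.write(bytes([131]))  # Safe mode: accept drive commands, keep cliff/bump safety
    time.sleep(0.2)

    drive_direct(ser, 150, 150)  # roll forward at ~150 mm/s
    time.sleep(2.0)
    drive_direct(ser, 0, 0)      # stop
```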

Arduinos

Our project involves not just one but two Arduinos controlling various low-level sensors and actuators: the servo motor that fires the NERF toy, the photoresistor that detects human passage, and the light switch itself. Having an easy platform to which we could connect these devices was essential to making our project possible in such a short timespan.

Firmata

Because the low-level behaviors of the Arduinos are so closely connected to the high-level computer vision and state machine logic, we knew we would need to send large amounts of serial data between different devices. The Firmata protocol made that easy, while also enabling us to keep our entire codebase in a single programming language (Python 3) and a single deployment pipeline.
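
As a rough illustration of this pattern, the snippet below drives an Arduino running the StandardFirmata sketch from Python via the pyFirmata library, reading the photoresistor and pulsing the trigger servo. The serial port, pin assignments, and light threshold are placeholders, not our actual wiring.

```python
# Sketch of Python-side Firmata control: read a photoresistor and fire a servo.
# Assumes the Arduino runs StandardFirmata; port, pins, and threshold are placeholders.
import time

from pyfirmata import Arduino, util

board = Arduino("/dev/ttyACM0")         # assumption: Arduino serial port

it = util.Iterator(board)               # background thread keeps analog readings fresh
it.start()

photoresistor = board.get_pin("a:0:i")  # analog pin 0 as input (light level)
trigger_servo = board.get_pin("d:9:s")  # digital pin 9 as a servo output
photoresistor.enable_reporting()
time.sleep(1)                           # give the iterator time to report a value

LIGHT_ON_THRESHOLD = 0.6                # assumption: tune for the room

reading = photoresistor.read()          # normalized 0.0-1.0, or None before the first report
if reading is not None and reading > LIGHT_ON_THRESHOLD:
    trigger_servo.write(90)             # squeeze the NERF trigger
    time.sleep(0.5)
    trigger_servo.write(0)              # release

board.exit()
```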

OpenCV and Machine Learning (Google Cloud Vision)

Our computer vision system is based on OpenCV for image capture and storage. Although we would like to train our own models in the long term, we currently use the Google Cloud Vision API for image recognition, specifically to look for people who may be leaving the lights on as they exit the room.
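
The loop below is a simplified sketch of that capture-and-classify step using OpenCV and the google-cloud-vision client library. The camera index, score threshold, and helper function are illustrative rather than taken directly from our codebase.

```python
# Sketch: grab one frame with OpenCV and ask Cloud Vision whether a person is visible.
# Assumes GOOGLE_APPLICATION_CREDENTIALS is set; camera index and threshold are placeholders.
import cv2
from google.cloud import vision


def person_in_frame(client, frame, min_score=0.5):
    """Encode an OpenCV frame as JPEG and run Cloud Vision object localization."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        return False
    image = vision.Image(content=jpeg.tobytes())
    response = client.object_localization(image=image)
    return any(
        obj.name == "Person" and obj.score >= min_score
        for obj in response.localized_object_annotations
    )


client = vision.ImageAnnotatorClient()
camera = cv2.VideoCapture(0)

grabbed, frame = camera.read()
if grabbed and person_in_frame(client, frame):
    print("Person detected near the door")
camera.release()
```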

MQTT

To transmit data quickly and reliably between the two computers (the one running the switch and the one running the robot), we used the MQTT protocol and the CloudMQTT service. This made it possible to have bidirectional communication despite the extensive network firewalls in place at the school where we were working.
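
The snippet below sketches that publish/subscribe pattern with the paho-mqtt 1.x client. The broker host, credentials, and topic name are placeholders rather than our real CloudMQTT instance.

```python
# Sketch: robot side subscribes to the switch topic; the switch side publishes to it.
# Broker host, credentials, and topic are placeholders, not our CloudMQTT details.
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "your-instance.cloudmqtt.com"  # placeholder
BROKER_PORT = 1883
SWITCH_TOPIC = "lightbot/switch"             # placeholder topic name


def on_message(client, userdata, msg):
    # Robot side: react when the switch computer reports the light state.
    if msg.payload.decode() == "left_on":
        print("Lights left on -- start following the offender")


client = mqtt.Client()
client.username_pw_set("user", "password")   # placeholder credentials
client.on_message = on_message
client.connect(BROKER_HOST, BROKER_PORT)
client.subscribe(SWITCH_TOPIC)
client.loop_start()                          # run the network loop in a background thread

# The switch computer would publish on the same topic when someone leaves:
client.publish(SWITCH_TOPIC, "left_on")

time.sleep(2)
client.loop_stop()
```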

Authorize.Net

Because we were so sure that people would want our product, we implemented a FAKE payment platform to allow people to purchase our amazing idea. We would like to emphasize that no real payment data should be entered into the service, as we do not intend to mass-produce it.

Where We Go From Here

One piece of functionality we developed but did not integrate into the final product was checking the room for additional people when a person leaves with the switch still unflipped. Although we accomplished this in code, the time crunch inherent in a hackathon meant we were unable to hook up an additional camera to the system.

We also want to run the Roomba/NERF system on a smaller computer, but after working for multiple hours with the provided DragonBoard, we were unable to get the system running on it and decided to keep a full computer for the first model of our product.

Finally, we aim to make the code more efficient by training our own system to detect objects and their motion from our cameras while running locally. This would significantly increase our framerate over the current cloud-based pipeline and allow our Roomba to follow the person it is trying to help more reliably.
