Imagine not having to aim properly at the dustbin and just "agar agar" your throw, eliminating that exhausting extra trip to bend over and pick it up. Oh, the shame of the whole room knowing of thy aiming skills.

Serious pitch: Users with disabilities or poor depth perception face many everyday difficulties, and they may not be technically literate enough to keep up with the ever-growing range of software applications. This makes way for effective, instruction-less, automated hardware solutions: ones that demand nothing of the user, but instead serve the user from the outside. We call this "Living-Robotics".

It is an automated rubbish bin built around an ESP32 that takes advantage of OpenCV with NumPy filtering and sectioning to localise a constantly moving object in real time (something as quick as a thrown item, over in a matter of seconds), predicting its trajectory and streamlining the path from:

live video feed --> image processing --(Wi-Fi/router)--> ESP32 --> robot motors
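In this chain the computer is the server and the ESP32 connects to it as a client over Wi-Fi. A minimal sketch of that server side is below; the port number and the single-byte direction commands are illustrative assumptions, not the exact protocol we shipped.

```python
import socket

HOST = "0.0.0.0"   # listen on all interfaces so the ESP32 can reach us over Wi-Fi
PORT = 5005        # assumed port; both sides just have to agree on it

def decide_command():
    # placeholder: in the real system this comes from the balloon/bin sector comparison
    return b"S"    # hypothetical command bytes, e.g. b"L" (left), b"R" (right), b"S" (stop)

def serve_commands():
    """Wait for the ESP32 client to connect, then stream direction commands to it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen(1)
        conn, addr = server.accept()      # blocks until the ESP32 connects
        print("ESP32 connected from", addr)
        with conn:
            while True:
                conn.sendall(decide_command())

if __name__ == "__main__":
    serve_commands()
```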

Latency has always been a fear when implementing real-time functionality, especially when the data being processed is not a simple binary signal (as it is for, say, automated room lights). We decided to face it head-on with this hack.

How we built it: Algorithm for the computer server (a condensed sketch of this loop follows the list):
- We used OpenCV to find the differences between frames in order to detect the balloon's trajectory. This can be done with OpenCV's frame-delta technique after greyscaling the images.
- We used multiprocessing to render the frames on a separate core, to maximise the number of frames per second we get from the camera. Frames captured from the camera are enqueued into a thread-safe queue (multiprocessing.Manager().Queue()) and dequeued on the main process to be processed.
- We processed the frames with NumPy to identify the zone the balloon is in. Using NumPy's .where() function, we get the coordinates of the white pixels in the frame where the balloon is; taking the leftmost and rightmost x-coordinates of those white pixels, the midpoint of the balloon can be estimated.
- We used socket to communicate with the ESP32 over the network. The computer acts as a server and waits for the ESP32 client to connect to it; once connected, the computer sends data based on the location of the balloon.
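The sketch below condenses that vision loop under a few assumptions of ours (camera index 0 and a fixed binary threshold); the exact thresholds and resolutions we tuned on the day are not reproduced here.

```python
import multiprocessing as mp
import cv2
import numpy as np

def capture_frames(queue):
    """Runs in its own process: grab frames as fast as possible and enqueue them."""
    cap = cv2.VideoCapture(0)                      # assumed camera index
    while True:
        ok, frame = cap.read()
        if ok:
            queue.put(frame)

def balloon_midpoint(prev_gray, gray, thresh=25):
    """Frame differencing: threshold the delta, then locate the white (moving) pixels."""
    delta = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(delta, thresh, 255, cv2.THRESH_BINARY)
    ys, xs = np.where(mask == 255)                 # coordinates of all white pixels
    if xs.size == 0:
        return None
    return (int(xs.min()) + int(xs.max())) // 2    # midpoint of leftmost and rightmost x

if __name__ == "__main__":
    manager = mp.Manager()
    frames = manager.Queue()                       # thread/process-safe queue
    mp.Process(target=capture_frames, args=(frames,), daemon=True).start()

    prev_gray = None
    while True:
        frame = frames.get()                       # dequeue on the main process
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            mid_x = balloon_midpoint(prev_gray, gray)
            if mid_x is not None:
                print("balloon midpoint x:", mid_x)   # fed to the sectoring + socket logic
        prev_gray = gray
```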

Challenges we ran into:
- There were issues working with the Arduino hardware: wires came loose because the motors kept shifting, and securing everything was difficult because of how the motor driver was clamped. Not knowing the battery level at any given time also meant the basket would move at slower speeds after the first few uses.
- We faced discrepancies with the Wi-Fi connection. On its own, the basket functions perfectly while wired to the computer; over Wi-Fi, however, one of the motors would be disabled, an issue which baffled us for hours.
- Dividing the camera's view into its respective sectors ran into numerous issues when the basket moved abruptly, causing major fluctuations in the sectoring (a sketch of the sector mapping is below). For instance, when the bin abruptly traverses from computer-defined sector 6 to sector 5, it may be registered as sector 4 instead, sending the wrong correction signal to the bin.
- In terms of the computer-vision algorithm, our detection relied on white pixels to classify and section the screen. However, the wall (the screen we used in this hackathon) had multiple white holes and gaps, which the computer registered as the white spots used to section our screen. This posed a major challenge for us to conquer.
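To make the sectoring discussion concrete, one way to read it is mapping a detected x-coordinate onto equal-width vertical bands across the frame; the sketch below assumes that scheme and a sector count of 6 purely to match the example above, and any jitter in the white-pixel x estimate shifts the resulting sector.

```python
def x_to_sector(x, frame_width, num_sectors=6):
    """Map a pixel x-coordinate to a 1-based sector index across the frame width."""
    sector_width = frame_width / num_sectors
    sector = int(x // sector_width) + 1
    return min(max(sector, 1), num_sectors)   # clamp in case x sits on the right edge

# e.g. on a 640 px wide frame, x = 500 falls into sector 5
print(x_to_sector(500, 640))   # -> 5
```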

Accomplishments that we are proud of: We finally got it working after countless rounds of troubleshooting and fault-finding. We also had a rocky start trying to figure out how to structure the code and connect the ESP32 to the computer. Last but not least, it was a challenge for us, as we did not know each other beforehand and took a leap of faith by working with each other for the first time. I'd say it turned out pretty well.

What we learnt: Through this hackathon, we have come to understand each other's strengths and weaknesses. Some of us, with only beginner's knowledge of programming, were still able to contribute to this project, and the majority of us were participating in a hackathon for the first time. We were able to divide the work to suit each person's familiarity and to accommodate each other's differences. But most importantly, we had fun through these 24 hours. We have learnt how to assemble hardware better. We all have diverse backgrounds in hardware and software, and this project was able to complement each of our strengths to bring our creation to life.

Built with: Python, VS Code, Arduino IDE, ultrasonic sensor, USB camera, L239N motor driver, ESP32 NodeMCU

Its potential and extended scope of use: This has the ability to adapt to many more tasks, such as catching laundry and transporting it to the washing machine. With proper adaptation and interfacing, this is a pioneering first step towards making houses truly automated to serve the disabled.
