Inspiration
Seeing the advances technology has made with machine learning, and people around us building cool things with it, we were motivated to bring machine learning into a project of our own at some point. Coming to this hackathon and seeing sponsor Bobst's challenge around their demo simulator machine pushed us further to work on this project.
What it does
It reads hand gestures from a webcam, detects the finger count, and sends the corresponding signals to the simulator machine. The simulator machine then performs a series of actions based on that finger count. For example, showing one finger starts the machine's feeder, showing three fingers increases the machine speed, and so on.
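The gesture-to-action mapping could be sketched roughly as below. The endpoint path, action names, and payload shape here are assumptions for illustration; the actual Bobst simulator API is not documented in this write-up.

```python
# Hypothetical mapping from detected finger count to a simulator action.
# The real Bobst simulator API (endpoint, action names) differs; this is
# only a sketch of the control flow.
GESTURE_ACTIONS = {
    1: "start_feeder",
    3: "increase_speed",
}

def action_for(finger_count):
    """Return the action name for a detected finger count, or None."""
    return GESTURE_ACTIONS.get(finger_count)

def send_action(base_url, finger_count):
    """POST the mapped action to the simulator (hypothetical endpoint)."""
    import requests  # lazy import so the mapping works without the dependency
    action = action_for(finger_count)
    if action is None:
        return None  # unrecognized gesture: do nothing
    return requests.post(f"{base_url}/actions", json={"action": action})

print(action_for(1))  # start_feeder
print(action_for(2))  # None (no action bound to two fingers)
```

Keeping the mapping in a plain dict makes it easy to rebind gestures to actions without touching the detection code.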
How I built it
We used Python with OpenCV to detect hand gestures based on convex hulls. We identified the convexity defects (the gaps between the hull and the hand contour) and mapped their number to a finger count. We then send requests to the simulator machine's API to trigger the corresponding actions.
Challenges I ran into
Machine learning, being relatively new to us, was as challenging as it was interesting. We were all experimenting with these tech stacks for the first time, so we had to start from scratch, and time management was a limiting factor. Other hurdles included fetching data, rendering images and the live feed, finding appropriate models for the OpenCV cascades, and fitting all the planned use cases into the time we had.
Accomplishments that I'm proud of
Presenting the first working version of our project. Our machine is now able to understand our hand gestures and perform the required actions.
What I learned
We learned more about OpenCV and some of the basic nitty-gritty of machine learning. We also realized how much pre-existing libraries and a very supportive online tech community help when learning new tech stacks.