Inspiration

Our inspiration for this project stemmed from an interest in unconventional methods of input management, especially with the advent of tracking software. A story about a past HTN competitor, who used a unique method to let users interact with their virtual world differently, got us thinking about seamless input through hand gestures: many different combinations of inputs, using a tool that most people already use every day.

What it does

The model places digital nodes on the major joints of your hands and provides their coordinates relative to the screen. Using these points, the program computes ratios and differences between them to determine which fingers are open or closed, and each distinct pattern of open and closed fingers maps to the gesture a player is making.
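As a rough illustration of the open/closed test described above, here is a minimal sketch. It assumes MediaPipe's 21-landmark hand layout (wrist at index 0, fingertip indices 4, 8, 12, 16, 20) given as normalized (x, y) tuples; the gesture names and patterns are illustrative, not our exact mapping.

```python
# Sketch: classify open/closed fingers from 21 MediaPipe-style hand
# landmarks, each an (x, y) tuple normalized to the camera frame.
import math

# For each finger: (middle-joint index, tip index) in MediaPipe's layout.
# The thumb uses its IP joint, since it has no PIP.
FINGERS = {
    "thumb":  (3, 4),
    "index":  (6, 8),
    "middle": (10, 12),
    "ring":   (14, 16),
    "pinky":  (18, 20),
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def open_fingers(landmarks):
    """A finger counts as open when its tip is farther from the wrist
    than its middle joint -- a simple distance-ratio test."""
    wrist = landmarks[0]
    return {
        name for name, (mid, tip) in FINGERS.items()
        if dist(landmarks[tip], wrist) > dist(landmarks[mid], wrist)
    }

def classify(landmarks):
    """Map a finger pattern to a named gesture (illustrative names)."""
    fingers = open_fingers(landmarks)
    if not fingers:
        return "fist"
    if fingers == {"index", "middle"}:
        return "peace"
    if len(fingers) == 5:
        return "open_palm"
    return "unknown"
```

Because the test uses ratios of distances rather than raw pixel positions, it stays stable as the hand moves closer to or farther from the camera.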

How we built it

We used MediaPipe and OpenCV for the machine learning model that tracks hands. The interpretation of gestures was custom, deduced mathematically from the distances and ratios between points on the hand. We used Aseprite for the assets and Godot for assembling the games and the tracking software.

Challenges we ran into

A major challenge we ran into was converting a .py file to a .exe file. Despite the apparent simplicity of the process, the dependencies led to two hours of errors and failures. We also hit many unexpected errors that prevented us from porting our finished tracking software into Godot, the game engine we used. To circumvent this, we took a parallel approach: the MediaPipe program detects and deduces the hand gestures, writing the data to a separate file that the engine reads in real time. Despite the lengthy, technical pipeline from camera input to on-screen character actions, the program had barely noticeable latency, allowing for accurate gameplay.
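The file hand-off described above can be sketched as follows. This is an illustrative assumption, not our exact protocol: the file name, JSON format, and gesture names are hypothetical, and in our project the reader side lived in Godot's GDScript rather than Python. The key idea is replacing the file atomically so the engine never reads a half-written update.

```python
# Sketch of a one-file hand-off between the Python tracker (writer)
# and the game engine (reader). Path and payload are illustrative.
import json
import os
import tempfile

def publish_gesture(gesture, path="gesture.json"):
    """Write the latest gesture, then atomically swap it into place so
    a concurrent reader sees either the old or the new file, never a
    partial one."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump({"gesture": gesture}, f)
    os.replace(tmp, path)  # atomic rename on the same filesystem

def read_gesture(path="gesture.json"):
    """Reader side (in our build, equivalent logic ran in GDScript,
    polled once per frame)."""
    try:
        with open(path) as f:
            return json.load(f)["gesture"]
    except (FileNotFoundError, json.JSONDecodeError, KeyError):
        return None
```

Polling a tiny local file once per frame is crude compared to sockets or pipes, but it kept the two processes fully decoupled, which is why the latency stayed barely noticeable.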

Accomplishments that we're proud of

Successfully putting the model to practical use through critical and creative thinking was a very gratifying experience, as was learning more about game design and creation by making our own assets and putting everything together.

What we learned

We learned the basics of using ML models in conjunction with libraries like OpenCV for reading and writing data.

What's next for Pipedream

In the future, we may explore more paths for models not restricted to hand movement, potentially extending to face and full-body tracking as well. It would also be interesting to explore practical applications of this technology beyond entertainment, such as everyday convenience and services.

Built With

aseprite, godot, mediapipe, opencv, python
