Inspiration

We were inspired by several AI vision projects we found on YouTube, including some created by the YouTuber Michael Reeves. We decided to focus on mapping hand gestures to keyboard inputs because we were thinking about how we could make our own experience at hackathons and similar events easier.

What it does

Short Signs uses AI to read a hand sign that the user shows and stores it. The user can then assign a keybind or a website URL to that hand sign, so that while Short Signs is running, making the same hand sign again either triggers the keybind or opens the website. These keybinds can also be changed or deleted.
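
As a rough illustration of how a matched gesture could fire its bound action: the write-up doesn’t name the exact libraries, so `pyautogui` for synthesising key presses and the standard-library `webbrowser` module are our assumptions here, and the binding shape is hypothetical.

```python
import webbrowser

import pyautogui  # assumption: a common choice for synthesising key presses

# In the real app these bindings would be loaded from a JSON file between
# runs; the shape shown here is illustrative only.
bindings = {
    "thumbs_up": {"type": "keybind", "keys": ["ctrl", "c"]},
    "peace": {"type": "url", "url": "https://example.com"},
}

def fire(gesture_name):
    """Run whatever action is bound to the recognised gesture, if any."""
    action = bindings.get(gesture_name)
    if action is None:
        return
    if action["type"] == "keybind":
        pyautogui.hotkey(*action["keys"])  # press the whole combo at once
    elif action["type"] == "url":
        webbrowser.open(action["url"])
```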

How we built it

We made the application entirely in Python. The user interface was built with the PyQt6 library, with one main page for the whole application displaying the different widgets, such as the user’s video feed and their current keybinds. The keybinds, their names, and the related hand gestures are stored in a JSON file, which is loaded again the next time the application runs. For the AI vision, we used OpenCV to read the webcam feed, which we fed into MediaPipe to generate the joint landmarks for each hand it detected. From these joints we stored the relative angles and positions of each landmark, so the same hand gesture can be detected reliably at different distances from the camera (see the sketch below).
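
A minimal sketch of that pipeline, assuming the `mediapipe.solutions` hands API; helper names like `angle_at` and the exact feature set are our illustration, not the project’s actual code:

```python
import math

import cv2
import mediapipe as mp

def angle_at(a, b, c):
    """Angle (in degrees) at joint b, formed by landmarks a-b-c.
    Joint angles stay the same as the hand moves closer to or further
    from the camera, which makes them a scale-invariant gesture feature."""
    v1 = (a.x - b.x, a.y - b.y)
    v2 = (c.x - b.x, c.y - b.y)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (norm or 1e-9)))))

def hand_features(landmarks):
    """Angles at the middle joints of each finger (a simplified feature set)."""
    # MediaPipe numbers the 21 hand landmarks 0-20; each finger is a
    # chain of 4 points, e.g. the index finger is 5, 6, 7, 8.
    finger_chains = [(1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12),
                     (13, 14, 15, 16), (17, 18, 19, 20)]
    feats = []
    for chain in finger_chains:
        for a, b, c in zip(chain, chain[1:], chain[2:]):
            feats.append(angle_at(landmarks[a], landmarks[b], landmarks[c]))
    return feats

hands = mp.solutions.hands.Hands(max_num_hands=2)
cap = cv2.VideoCapture(0)

ok, frame = cap.read()
if ok:
    # MediaPipe expects RGB input; OpenCV delivers BGR frames.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            print(hand_features(hand.landmark))

cap.release()
```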

Challenges we ran into

One of the biggest issues we ran into was the number of merge conflicts and dependency problems caused by writing the code on different operating systems: lines of code would seemingly vanish after a merge, and new bugs appeared because the same systems are implemented differently across platforms. We solved this through better communication about who was assigned to which task, as well as by merging into the main branch more carefully.

Accomplishments we’re proud of

We’re happy with how accurately the AI recognises the same hand gestures, given the short amount of time we had to work with. We are also pleased with the range of functionality we managed to build in: the program can track two hands and trigger more than one keybind at a time. Overall, we are proud of the project.

What we learnt

We learnt the basics of AI vision, and in particular how to recognise gestures from unnormalised landmark values. Additionally, we improved our teamwork, as we communicated to resolve every conflict and dependency issue across our branches.

What’s next for Short Signs

Next, we would like to implement face tracking in order to enable more complex commands. We might also improve the UI to give a more user-friendly experience.

Built With

python, pyqt6, opencv, mediapipe