Many people in the world today are not fortunate enough to enjoy the same things that we do. Entertainment should be accessible rather than exclusive. We thought it would be a fun and exciting challenge to design a game that can be played by people of all levels of physical ability; hence, BabblArt.

BabblArt is loosely modelled on Pictionary: one player draws a picture of a given word while the rest of the players guess what is being drawn. The main difference is that BabblArt uses motion-capture technology to guide the pen based on the position of the artist's head. In addition, players can adjust the thickness of the pen by changing their speaking volume, and there are controls for changing the pen color and clearing the board. The other players compete to see who can guess the word represented by the picture in the least amount of time. There can only be one winner.
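The volume-to-thickness idea can be sketched roughly as follows. This is a minimal illustration, not BabblArt's actual code: the thickness range, the 16-bit sample format, and the function names are all our assumptions.

```python
import math

# Hypothetical pen-width bounds in pixels (assumed, not BabblArt's values).
MIN_THICKNESS = 2   # width at a whisper
MAX_THICKNESS = 20  # width at a shout

def rms(samples):
    """Root-mean-square amplitude of one chunk of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def pen_thickness(samples, max_amplitude=32768.0):
    """Map a chunk of 16-bit audio samples to a pen width in pixels."""
    loudness = min(rms(samples) / max_amplitude, 1.0)  # normalize to [0, 1]
    return round(MIN_THICKNESS + loudness * (MAX_THICKNESS - MIN_THICKNESS))
```

In practice the sample chunks would come from a microphone stream (e.g. via an audio capture library), with one thickness update per chunk.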

In building BabblArt, we primarily relied on several open-source Python libraries for tasks such as collecting face-tracking and audio data, and used threading to run concurrent processes. We also made heavy use of socket programming for communication between devices (host and clients).
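One way head-guided drawing can work is to map the center of a detected face bounding box to a position on the canvas. The sketch below assumes a detector (e.g. an OpenCV Haar cascade) that yields `(x, y, w, h)` boxes in camera-frame pixels; the resolutions and the mirroring choice are illustrative assumptions, not BabblArt's documented behavior.

```python
# Assumed camera and canvas sizes (illustrative only).
CAM_W, CAM_H = 640, 480        # camera frame resolution
CANVAS_W, CANVAS_H = 800, 600  # drawing-board size

def pen_position(face_box):
    """Map the center of a face bounding box to canvas coordinates.

    The x axis is mirrored so the pen moves the same direction the
    artist's head appears to move, like a mirror.
    """
    x, y, w, h = face_box
    cx = x + w / 2
    cy = y + h / 2
    px = (1 - cx / CAM_W) * CANVAS_W  # mirror horizontally
    py = (cy / CAM_H) * CANVAS_H
    return round(px), round(py)
```

Each camera frame would then produce one pen position, and consecutive positions are joined into strokes on the board.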

The process was full of challenges. One was incorporating multi-threading to avoid latency between players and the drawing feed. Another was normalizing the input data so that the audio and face-tracking signals were continuous, with few spikes and little noise. This was extremely important because smoother data led to better images being drawn and improved gameplay.
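One common way to damp spikes in noisy tracking or volume readings is an exponential moving average; we sketch that here as an illustration (the exact filter BabblArt uses, and the `alpha` value, are assumptions):

```python
def ema(values, alpha=0.3):
    """Exponentially smooth a stream of noisy readings.

    Lower alpha = smoother output, but more lag behind the raw signal.
    """
    smoothed = []
    current = None
    for v in values:
        # Blend each new reading with the running average.
        current = v if current is None else alpha * v + (1 - alpha) * current
        smoothed.append(current)
    return smoothed
```

For example, a one-frame spike of 100 in an otherwise-zero signal is reduced to 20 with `alpha=0.2`, so the pen no longer jumps across the board on a single bad reading.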

Some of our most notable achievements during the creation of BabblArt were unpacking and utilizing the functions of open-source computer vision libraries. We were also proud to have achieved communication between multiple computers and to have reduced unwanted variation in the audio and computer vision data.
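Host-to-client communication of this kind is often done by sending small JSON messages (pen position, color, and so on) over TCP. The sketch below shows one such scheme; the length-prefix framing and the function names are our assumptions, not BabblArt's actual protocol.

```python
import json
import socket
import struct

def send_msg(sock, payload):
    """Serialize a dict as JSON and send it with a 4-byte length prefix."""
    data = json.dumps(payload).encode("utf-8")
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_msg(sock):
    """Read one length-prefixed JSON message from the socket."""
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length).decode("utf-8"))

def _recv_exact(sock, n):
    """Keep calling recv() until exactly n bytes have arrived."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf
```

The length prefix matters because TCP is a byte stream: without framing, two quick pen updates can arrive fused into one `recv()` call.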

Throughout the process, we gained a better understanding of audio processing, video processing, computer-to-computer communication, and making art in Python.

In the future, our team can definitely see ourselves expanding the concept by building a React app and using speech recognition to make guesses instead of typing. We also aim to implement a scoring system across multiple rounds of the game. Hopefully, somewhere down the line, we can pave the way for other accessible online games.
