Retro-Eye was born from our combined love of gaming and a desire to share that passion with as many people as possible. Tetris is a game almost everyone has played before - but nobody has played it with only their eyes (we think).

What it does

Retro-Eye is an accessible version of Tetris that does not require a controller or mouse/keyboard to play. It uses Google's Cloud Vision API to track the user's eye movements and play the game accordingly. Our hope is that Retro-Eye's control scheme is accessible to a wider audience than traditional video game control schemes. People who have previously missed out on the joy of gaming and playing with their friends will now be able to do so any time they want!

How we built it

Retro-Eye is built from 14 cans of Monster, countless Rice Krispie Treats, and a love of hacking. It started out with Pygame as the engine for Tetris and OpenCV for processing images from a user's webcam. Some of the eye-tracking methods we implemented were cascade classifiers, facial landmark detection, and optical flow techniques. Once we were more comfortable with the game mechanics and computer vision, we moved to an implementation of Tetris written in JavaScript, with Google's API handling the image processing. This version ran much more smoothly in a web browser and is what we settled on for our final design.
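To give a flavor of how eye position becomes a game input, here is a minimal sketch of the last step of such a pipeline. It assumes an upstream detector (e.g. an OpenCV cascade classifier) has already produced an eye bounding box and a pupil center; the function names and thresholds are illustrative, not the values Retro-Eye actually uses.

```python
# Illustrative sketch: map a detected pupil's horizontal position within
# the eye bounding box to a Tetris command. The 0.35/0.65 thresholds are
# hypothetical; a real pipeline would calibrate them per user.

def gaze_command(pupil_x: float, eye_left: float, eye_width: float) -> str:
    """Classify horizontal gaze within the eye box as a game command."""
    # Normalize the pupil position to [0, 1] across the eye bounding box.
    t = (pupil_x - eye_left) / eye_width
    if t < 0.35:
        return "move_left"
    if t > 0.65:
        return "move_right"
    return "none"

# Example: pupil near the left edge of a 40px-wide eye box.
print(gaze_command(pupil_x=108, eye_left=100, eye_width=40))  # move_left
```

Keeping this mapping separate from the detection code means the same classifier can drive any game that accepts left/right commands.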

Challenges we ran into

The most complicated part of this project was combining the game and computer vision into one final product. The two halves of our project ran great separately. For testing Tetris, we implemented a separate keyboard-based controller, which also served as a baseline for constructing the controller driven by eye movements.
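The controller-swapping idea above can be sketched as a small interface that the game loop polls, so a keyboard controller and an eye-tracking controller are interchangeable. The class and method names here are hypothetical, not Retro-Eye's actual code.

```python
# Sketch of a swappable controller interface: the game loop only calls
# next_action(), so the input source (keyboard, eye tracking) can change
# without touching game logic.

from abc import ABC, abstractmethod
from typing import List, Optional

class Controller(ABC):
    @abstractmethod
    def next_action(self) -> Optional[str]:
        """Return 'left', 'right', 'rotate', 'drop', or None."""

class KeyboardController(Controller):
    """Test controller fed by a scripted queue of key presses."""
    def __init__(self, presses: List[str]):
        self.presses = list(presses)

    def next_action(self) -> Optional[str]:
        return self.presses.pop(0) if self.presses else None

kb = KeyboardController(["left", "rotate"])
print(kb.next_action())  # left
print(kb.next_action())  # rotate
```

An eye-tracking controller would implement the same `next_action` method, returning commands derived from webcam frames instead of key presses.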

Another challenge we faced was scaling our goals to fit the time constraints of a hackathon. Creating a polished product in 36 hours requires careful planning and goal setting. Originally we hoped to get a minimum of two games live on our web app, but after 6 hours or so it became clear that that approach would result in two poorly coded games. Numerous times throughout the day we had to cut back features that weren't working or were looking less feasible, and focus on what was working.

Accomplishments that we're proud of

We are most proud of the fact that we made it work. Towards the end of Saturday and in the early hours of Sunday, we were not sure if the entire project would come together as smoothly as we hoped. Tetris is a relatively simple game to implement, but creating multiple types of controllers and integrating the game into a web browser was exciting to finish. The best part of the hackathon by far was the first time we watched a tetromino move without touching the keyboard - that thrill is what will inspire us to keep hacking every day.

What we learned

We learned a significant amount about new technologies and programming languages, but equally important was the experience we got building something as a team. Coding (and hacking) is more fun and more productive when working with others - spending your weekend with 600+ like-minded people gives you a sense of belonging that extends beyond the technical skills gained.

What's next for Retro-Eye

We are hoping to expand Retro-Eye to include even more games (Cubefield, Frogger, etc.) so that more people can experience the joy of gaming. Our focus will be on retro games, since they tend to have fewer controls than modern games. We also hope to bring our implementation of Tetris fully up to modern standards - there are a few newer mechanics (such as wall kicks) that we did not include in our initial design.
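For readers unfamiliar with the mechanic, a "wall kick" lets a rotation that would collide with a wall or the stack succeed by nudging the piece to a nearby offset first. The sketch below uses a simplified, illustrative offset list, not the full Super Rotation System kick tables used in modern Tetris.

```python
# Simplified wall-kick sketch: try the rotated piece at a few offsets
# and accept the first position that doesn't collide. Offsets here are
# illustrative; real Tetris uses per-piece SRS kick tables.

def try_rotate(rotated, collides):
    """Return the first non-colliding kicked position, or None."""
    kicks = [(0, 0), (-1, 0), (1, 0), (0, -1)]  # hypothetical offset order
    for dx, dy in kicks:
        candidate = [(x + dx, y + dy) for (x, y) in rotated]
        if not collides(candidate):
            return candidate
    return None  # all kicks failed; rotation is rejected

# Example: an I-piece rotated flush against the left wall of a 10-wide board.
collides = lambda cells: any(x < 0 or x >= 10 for (x, _) in cells)
rotated = [(-1, 0), (0, 0), (1, 0), (2, 0)]  # sticks out past the wall
print(try_rotate(rotated, collides))  # kicked right: [(0, 0), (1, 0), (2, 0), (3, 0)]
```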

In the future, we hope to move Retro-Eye to be hosted on Google Cloud. Currently, it runs locally and uses the Google Cloud API to process images from the user's webcam. Moving the entire program to Google Cloud will allow for Retro-Eye to reach a significantly greater audience.

We also hope to improve the quality of Retro-Eye's eye tracking. The next step would be to include some of the more efficient eye-tracking techniques that we had worked on, such as optical flow. This method tracks eye motion between consecutive frames, rather than uploading images at set intervals and comparing differences between them.
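As a toy illustration of the optical-flow idea (and nothing more - a real tracker would use a library routine such as OpenCV's Lucas-Kanade implementation), the displacement between two frames can be estimated from spatial and temporal intensity differences under the brightness-constancy assumption:

```python
# Toy 1D optical flow: brightness constancy says I_t + u * I_x ≈ 0,
# so the shift u can be estimated by least squares over all pixels as
# u ≈ sum(-I_t * I_x) / sum(I_x^2). Real trackers (e.g. Lucas-Kanade)
# solve this jointly over 2D windows around tracked features.

def estimate_shift(frame_a, frame_b):
    """Least-squares shift estimate from two 1D intensity rows."""
    num = den = 0.0
    for i in range(1, len(frame_a) - 1):
        ix = (frame_a[i + 1] - frame_a[i - 1]) / 2.0  # spatial gradient
        it = frame_b[i] - frame_a[i]                  # temporal difference
        num += -it * ix
        den += ix * ix
    return num / den if den else 0.0

# A smooth intensity ramp shifted right by one pixel between frames.
a = [0, 1, 2, 3, 4, 5, 6, 7]
b = [0, 0, 1, 2, 3, 4, 5, 6]
print(estimate_shift(a, b))  # 1.0
```

Because each estimate only needs the previous frame, this kind of tracking avoids the round trip of uploading full images for comparison.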
