The initial inspiration for this project was extending the idea of universal design to a digital environment. People with limited mobility and/or dexterity would benefit from programs and games that can be operated in alternative ways. This project explores using facial recognition as an alternative to traditional game controls.

Whether facial recognition serves as the primary control or supplements existing control features, implementing controls beyond the traditional mouse, keyboard, or joystick adds accessibility and function and reaches a wider audience.

Some game developers already build some level of motion detection into their systems; however, the goal of this project was to prove that subtle movements can be clearly identified as purposeful controls without advanced sensors. With that in mind, we set out to create an affordable solution using a common laptop webcam.

What it does

The code runs real-time facial recognition on live video to identify and map when a person raises their eyebrows. In response to eyebrow movement, a visual effect (an animated gator with a changing background) is displayed as a proof of concept that this movement can act as a boolean to activate or deactivate a game event, much like a spacebar would in a traditional control setup.
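As a rough illustration of that boolean control, a per-frame stream of "brows raised" flags can be reduced to discrete press events the way a game treats a spacebar keydown. The helper below is hypothetical, not taken from the project code:

```python
def to_press_events(raised_frames):
    """Collapse per-frame booleans into discrete control events.

    A rising edge (False -> True) counts as one "press", mirroring how
    a game fires an action once per spacebar keydown rather than on
    every frame the key is held.
    """
    events = []
    prev = False
    for raised in raised_frames:
        if raised and not prev:
            events.append("activate")
        prev = raised
    return events
```

For example, `to_press_events([False, True, True, False, True])` yields two activations: holding the brows up does not retrigger the event.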

How We built it

We built this application using Python 3, the face_recognition library by ageitgey on GitHub, OpenCV for Python, and Pygame.

The program continuously captures video from the webcam in the background. Two countdown timers (inspired by photo booths) help users pose while the video is being taken. The two required poses were a neutral expression and a raised-eyebrows expression. Both were captured as images from frames of the video, and facial recognition was run on these two initial images so numerical comparisons could be made to determine an individual’s brow range of motion.
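The two-pose calibration can be sketched as a pure function over a frame stream. In the real program the frames come from the webcam via OpenCV; the function name and back-to-back countdown timing here are illustrative assumptions:

```python
def run_capture_session(frames, timestamps, countdown=3.0):
    """Pick the neutral and raised calibration frames out of a stream.

    `frames` and `timestamps` are parallel sequences (in practice they
    would come from the webcam loop). Two countdowns of `countdown`
    seconds each, run back to back, time the two captures: first the
    neutral expression, then the raised-eyebrows expression.
    """
    neutral = raised = None
    t0 = timestamps[0]
    for frame, t in zip(frames, timestamps):
        if neutral is None and t - t0 >= countdown:
            neutral = frame      # first pose: neutral expression
            t0 = t               # restart the countdown for pose two
        elif neutral is not None and raised is None and t - t0 >= countdown:
            raised = frame       # second pose: brows raised
    return neutral, raised
```

Feeding in eight frames stamped at one-second intervals with a 3-second countdown selects the fourth and seventh frames as the two calibration images.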

The most important data points from the facial recognition related to the positions of the bridge of the nose and the left and right eyebrows. The bridge of the nose was chosen as a data point because it remains in a relatively constant position when the brows are raised. A weighted average was taken across the several points mapping out each of these features, and the left and right eyebrows were likewise averaged together so their motion could be treated as a single upward movement. The initial captures were used to establish a margin of error for the comparisons.
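The landmark math can be sketched as follows, assuming the dict-of-point-lists shape that `face_recognition.face_landmarks()` returns (keys such as `"nose_bridge"` and `"left_eyebrow"`, each a list of (x, y) points). A plain mean stands in for the project's weighted average, so no camera or library is needed to run it:

```python
def brow_height(landmarks):
    """Vertical gap between the brows and the bridge of the nose.

    `landmarks` follows the face_recognition.face_landmarks() shape:
    a dict mapping feature names to lists of (x, y) points. Only the
    nose bridge and the two eyebrows are used.
    """
    def mean_y(points):
        return sum(y for _, y in points) / len(points)

    bridge_y = mean_y(landmarks["nose_bridge"])
    brow_y = mean_y(landmarks["left_eyebrow"] + landmarks["right_eyebrow"])
    # Image y grows downward, so raised brows enlarge bridge_y - brow_y.
    return bridge_y - brow_y
```

Because the nose bridge stays roughly fixed during a brow raise, this single number grows when the brows go up and shrinks back at rest.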

After the initial images were captured and points were calculated from them, every other frame of the live video was captured and analyzed against the default states to determine whether the brows were raised at that moment. This comparison produces the boolean that is essentially the control function of the concept.
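A minimal sketch of that per-frame comparison, where `margin` is a hypothetical fraction of the calibrated range of motion standing in for the project's margin of error:

```python
def brows_raised(current, neutral, raised, margin=0.25):
    """Return True when the current brow height clears the threshold.

    `neutral` and `raised` are the brow heights measured from the two
    calibration images; `margin` places the decision threshold partway
    into the user's range of motion, so small jitter around the neutral
    pose does not register as a raise.
    """
    threshold = neutral + margin * (raised - neutral)
    return current > threshold
```

With a neutral height of 7 and a raised height of 11, the threshold sits at 8, so a frame measuring 10 activates the control and one measuring 7.5 does not.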

Pygame was used to provide a simple visual demonstration of how the boolean nature of lifting the brow can be implemented in a game-like environment.

Challenges We ran into

  1. Accounting for a person moving on the webcam from the initial capture position
  2. Mapping movements for different faces. Solution: used many facial features and complete facial mapping to identify each unique individual. Every individual has their own unique eyebrow mapping; for example, some people have less range of motion to raise their eyebrows than others.
  3. Creating a hands-free way to capture default-state images. Solution: created a photo-booth-inspired countdown to capture the facial images.
  4. Troubleshooting Pygame not working on macOS Mojave. Solution: we first attempted an Anaconda distribution of Pygame, which did not work; a fork of Pygame (Cog Sci) was used instead because macOS Mojave conflicts with the standard Pygame library.
  5. Addressing video capture lag and reaction times
  6. Dealing with poorly documented face mappings
  7. Changing the facial mapping from absolute coordinates to relative coordinates based on identified features of the nose and eyebrows.
  8. Completing a functional game using facial recognition within the time constraints of the event.
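Challenges 1 and 7 share one fix: anchoring the brow points to the nose bridge so that whole-head translation on camera cancels out. A sketch, again assuming `face_recognition`-style landmark dicts (the function name is ours):

```python
def relative_brow_points(landmarks):
    """Re-express the brow points relative to the nose-bridge centroid.

    Subtracting the anchor means that if the whole head shifts in the
    frame after the initial capture, every point shifts by the same
    amount and the relative coordinates are unchanged.
    """
    bridge = landmarks["nose_bridge"]
    ax = sum(x for x, _ in bridge) / len(bridge)
    ay = sum(y for _, y in bridge) / len(bridge)
    return [(x - ax, y - ay)
            for x, y in landmarks["left_eyebrow"] + landmarks["right_eyebrow"]]
```

Comparisons made on these relative points only move when the brows move relative to the face, not when the user leans or slides in front of the webcam.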

Accomplishments that We're proud of

We’re proud of completing the facial recognition function of the project. Without any specialized sensors, we were able to get the code to do its job and confirm it with the various displays of the animated gator. It was very rewarding to build on a very complete facial recognition library and specialize its use around brow movement. Laptop webcams are by no means high quality compared to other modern cameras such as phone cameras, so it was very cool to get a lower-quality webcam to achieve relatively accurate facial motion detection.

What We learned

We learned that facial recognition is both a computing and a human challenge. We discovered the technological and timing challenges surrounding frame-rate limits while trying to optimize the code. By reading through the documentation of the facial recognition library our code is built on, we came to understand more about how facial recognition is actually achieved and which facial data points matter in making distinctions. Beyond the computing aspects of this project, the human/user-interaction part taught us how important it is to consider different users. For example, a user who tested the program was concerned that less distinctive eyebrows would not be recognized by a laptop webcam. This helped inspire taking more initial data points to create a more unique profile for the current user of the program. It was one example among many others, such as considering different ranges of motion, initial eyebrow locations, and uneven movement between eyebrows. The design of the program needed to account for what was universal or attainable by both the computer and the person.

What's next for Game Face

The next steps in the development of Game Face would be creating simple games that interface with the facial recognition code, such as games similar to Google Chrome’s offline dinosaur run-and-jump game or Flappy Bird. These games could, for the most part, be controlled with just the eyebrow recognition controls already implemented. Further development would see more facial movements being mapped to give games more advanced controls.

A domain was also registered during this hackathon to represent future work in facial recognition and gaming. It was chosen because of the importance of video frame capture and analysis in facial recognition.

Built With

Python 3, face_recognition, OpenCV, Pygame