Inspiration

A member of our team plays the game 'Overwatch,' and one of his favorite streamers has unilateral deafness. Seeing the bold plays and gameplay strategies the streamer pulls off despite this challenge is truly inspiring. It also highlighted the significant barriers faced by deaf players, particularly in fast-paced, competitive games where sound cues are crucial. This realization motivated us to develop EchoVisual, which aims to empower gamers with unilateral deafness by giving them the visual tools they need to fully experience and excel in the gaming world.

What it does

In the realm of competitive gaming, every player deserves an equal footing, yet players with unilateral deafness face unique challenges that hinder their full participation and enjoyment. "EchoVisual" aims to bridge this gap in FPS games by converting critical sound-based information into intuitive visual cues, thus enhancing the gaming experience and ensuring inclusivity. This project is not just a tool; it's a step towards redefining accessibility in gaming, making it a truly universal pastime.

According to the American Academy of Audiology, unilateral hearing loss (UHL) significantly impacts many individuals, affecting their daily lives, social interactions, and even work productivity. An estimated 60,000 people per year in the U.S. acquire UHL, and given its effects on sound localization and speech perception, this condition can significantly reduce quality of life, similar to or even exceeding the impact of binaural hearing loss.

UHL presents significant challenges for gamers, particularly affecting their performance in fast-paced environments like FPS (First-Person Shooter) games. The inability to localize sounds accurately is a critical disadvantage, as UHL prevents players from detecting where sounds such as footsteps, gunfire, or verbal cues from teammates are coming from. This difficulty not only slows reaction times but also impairs strategic planning and spatial awareness—key elements in competitive gaming. Consequently, players with UHL often experience frustration and a diminished gaming experience, which can lead to decreased participation and enjoyment.

EchoVisual aims to enhance gaming accessibility for individuals with unilateral deafness. The introduction of visual cues for sound localization could not only improve the gaming experience for these players but also potentially reduce the cognitive strain associated with compensating for their hearing impairment.

Prototype Description: EchoVisual

EchoVisual is an innovative prototype developed to enhance gaming accessibility for players with unilateral deafness. Here’s how our prototype functions:

  1. Our prototype extracts and analyzes audio from gaming clips. It is capable of detecting and interpreting spatial sound data, which is crucial for understanding the environment and events within the game.
  2. Based on the analysis, EchoVisual translates important audio cues, such as the direction and proximity of footsteps or gunfire, into visual signals. These visual cues are displayed on the screen, providing players with unilateral deafness the necessary spatial information to react accordingly.
  3. The prototype operates in real-time, ensuring that there is minimal lag between the sound occurrence in the game and the visual cue representation. This synchronization is vital for maintaining the competitive balance in fast-paced gaming scenarios.
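As an illustration of step 2, here is a minimal sketch (not the actual EchoVisual code) of how the left/right direction of a sound could be estimated from a stereo clip's interaural level difference, using numpy; the function name, chunk length, and panning values are illustrative assumptions:

```python
import numpy as np

def estimate_direction(stereo_chunk, eps=1e-10):
    """Estimate left/right direction from a stereo chunk via the
    interaural level difference (ILD): compare the RMS energy of the
    left and right channels. Returns a value in [-1, 1]; negative
    means the sound is louder on the left, positive on the right."""
    left, right = stereo_chunk[:, 0], stereo_chunk[:, 1]
    rms_l = np.sqrt(np.mean(left ** 2)) + eps
    rms_r = np.sqrt(np.mean(right ** 2)) + eps
    return (rms_r - rms_l) / (rms_r + rms_l)

# Synthetic example: a 440 Hz tone panned toward the left channel
t = np.linspace(0, 0.05, 2205)                # 50 ms at 44.1 kHz
tone = np.sin(2 * np.pi * 440 * t)
chunk = np.stack([tone, 0.2 * tone], axis=1)  # loud left, quiet right
print(estimate_direction(chunk))              # about -0.67 => sound on the left
```

A fuller pipeline would also use inter-channel time differences and per-frequency-band analysis, but the level difference alone is enough to drive a simple left/right on-screen indicator.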

We advocate for gaming companies to consider integrating similar features/extensions directly within their games. Implementing such accessibility options can significantly broaden their user base, catering not only to players with unilateral deafness but also enhancing the gaming experience for all players by offering more ways to interact with the gaming environment.

How we built it

We built EchoVisual with Python and Streamlit, leveraging audio-processing libraries to analyze game audio and render visual cues. The core functionality hinges on extracting spatial information, such as the direction of sounds, from a game's audio and translating it into on-screen visual indicators. We developed the prototype on a local server and then deployed it for anyone to try, refining the processing algorithms to minimize latency so that visual cues stay synchronized with game events.
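The low-latency loop can be sketched roughly as follows; this is a simplified illustration of chunked processing, not our production code, and the function name, chunk size, and energy threshold are assumptions chosen for the example:

```python
import numpy as np

def chunked_cues(stereo, sample_rate=44100, chunk_ms=50, threshold=0.1):
    """Scan a stereo signal in short chunks, emitting a (time, pan) cue
    whenever a chunk's energy exceeds the threshold. Pan is in [-1, 1]
    (negative = left, positive = right). Short chunks keep latency low
    at the cost of noisier direction estimates."""
    chunk_len = int(sample_rate * chunk_ms / 1000)
    cues = []
    for start in range(0, len(stereo) - chunk_len + 1, chunk_len):
        chunk = stereo[start:start + chunk_len]
        if np.sqrt(np.mean(chunk ** 2)) > threshold:
            l = np.sqrt(np.mean(chunk[:, 0] ** 2))
            r = np.sqrt(np.mean(chunk[:, 1] ** 2))
            pan = (r - l) / (r + l + 1e-10)
            cues.append((start / sample_rate, pan))
    return cues

# Demo: 0.2 s of stereo silence with a 50 ms burst in the right channel
sr = 44100
sig = np.zeros((8820, 2))
sig[4410:6615, 1] = 0.5        # burst at t = 0.10 s, right channel only
cues = chunked_cues(sig, sr)
print(cues)                    # roughly [(0.1, 1.0)]: one cue, panned hard right
```

In the deployed prototype, cues like these would be drawn over the video frame as they are produced rather than collected into a list.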

Challenges we ran into

One of the main challenges was the real-time processing of audio signals with high accuracy and low latency. Ensuring that our visual cues accurately represented the audio in a fast-paced game environment required extensive testing and optimization. Additionally, integrating our solution into diverse gaming setups posed technical hurdles, as we had to account for various hardware and software configurations.

Accomplishments that we're proud of

We are particularly proud of developing a functional prototype that successfully translates audio cues into visual signals. This prototype not only demonstrates the viability of our concept but also represents a significant step towards inclusivity in gaming. We managed to achieve low-latency processing, which is critical for the fast-paced nature of FPS games.

What we learned

Throughout this project, we gained insights into the complexities of audio processing, the importance of user interface design in accessibility tools, and the challenges of creating adaptive systems for diverse platforms. We also learned more about the specific needs and challenges faced by gamers with unilateral deafness.

What's next for EchoVisual

Looking forward, we plan to refine our prototype based on user feedback and explore partnerships with gaming companies to integrate our technology directly into games. Expanding our solution to more games and platforms will help us reach a broader audience. We also aim to build features that support players with other disabilities, furthering our mission of making gaming accessible to everyone.

Built With
python, streamlit
