Inspiration

We looked online and found that two of the most popular chess websites, Lichess and Chess.com, have no voice options for the visually impaired. We wanted to create something that helps visually impaired players have a complete experience without the hassle of relying on screen readers for every single move. And thus originated the idea of voice-powered chess: ACE. The community of visually impaired chess players is a large one, and this project is our attempt at a contribution to their happiness :)

What it does

ACE (Accessible Chess Experience) is a real-time, voice-powered chess application that lets you play online with your friends. The game supports moving pieces, finding pieces on the board, getting a board summary, repeating the opponent's move, and much more, all through voice commands. Visually impaired players can play using only their voice and the spacebar! The standard method of dragging and dropping pieces is also implemented.
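To illustrate how spoken phrases could map to game actions, here is a minimal sketch of a command parser. The phrases, intent names, and regular expressions are illustrative assumptions, not ACE's actual command grammar:

```typescript
// Hypothetical mapping of recognized speech to game intents.
// These intents and phrasings are assumptions for illustration only.
type Intent =
  | { kind: "move"; from: string; to: string }
  | { kind: "find"; piece: string }
  | { kind: "summary" }
  | { kind: "repeat" };

function parseCommand(transcript: string): Intent | null {
  const text = transcript.toLowerCase().trim();
  // e.g. "move e2 to e4"
  const move = text.match(/^move ([a-h][1-8]) to ([a-h][1-8])$/);
  if (move) return { kind: "move", from: move[1], to: move[2] };
  // e.g. "find knight"
  const find = text.match(/^find (pawn|knight|bishop|rook|queen|king)$/);
  if (find) return { kind: "find", piece: find[1] };
  if (text === "board summary") return { kind: "summary" };
  if (text === "repeat move") return { kind: "repeat" };
  return null; // unrecognized -> the caller can speak an error prompt
}
```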

How we built it

We used React to create the front end of the game and Node.js for its backend. Real-time communication between players is facilitated by WebSockets, and we used GitHub for code collaboration. For integrating voice commands, we used the Azure Speech API for speech-to-text conversion and the Web Speech API for text-to-speech conversion.
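To make the speech pipeline concrete, here is a minimal browser-side sketch using the Azure Speech SDK (the microsoft-cognitiveservices-speech-sdk package) for speech-to-text and the Web Speech API for text-to-speech. The key and region are placeholders, and this is a sketch of the approach rather than ACE's exact code:

```typescript
import * as sdk from "microsoft-cognitiveservices-speech-sdk";

// Placeholder credentials -- substitute your own Azure Speech resource.
const speechConfig = sdk.SpeechConfig.fromSubscription("YOUR_KEY", "YOUR_REGION");
const audioConfig = sdk.AudioConfig.fromDefaultMicrophoneInput();

// Speech-to-text: capture one spoken command from the microphone.
function listenOnce(): Promise<string> {
  const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);
  return new Promise((resolve, reject) => {
    recognizer.recognizeOnceAsync(
      (result) => {
        recognizer.close();
        resolve(result.text);
      },
      (err) => {
        recognizer.close();
        reject(err);
      }
    );
  });
}

// Text-to-speech: announce a move back to the player via the Web Speech API.
function announce(text: string): void {
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}
```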

Challenges we ran into

We ran into plenty of challenges along the way. We looked for and tested out pretty much every speech API under the sun, and even tried to tailor them to our task. We eventually settled on the Azure Speech API due to its superior accuracy and its option for custom speech training on provided voice samples. The asynchronous execution of the speech functions and their interaction with the chess logic was another difficult aspect of the project that took some patience :)
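One way to tame that asynchrony is to promisify recognition and run each command to completion before accepting the next, so speech callbacks never race against the game state. A rough sketch, reusing the hypothetical listenOnce, parseCommand, and announce helpers from the earlier snippets and assuming the chess.js library for move validation (our choice here for illustration; it may not be what ACE uses):

```typescript
import { Chess, Move } from "chess.js";

const game = new Chess();

// Push-to-talk loop: each command is fully recognized, validated, and
// announced before the next one is accepted, so asynchronous speech
// callbacks never race against the chess logic.
async function handleSpacebarPress(): Promise<void> {
  const transcript = await listenOnce();   // Azure STT helper sketched above
  const intent = parseCommand(transcript); // hypothetical parser sketched above
  if (intent?.kind !== "move") {
    announce("Sorry, I did not catch that. Please try again.");
    return;
  }
  let move: Move | null = null;
  try {
    move = game.move({ from: intent.from, to: intent.to });
  } catch {
    // chess.js v1 throws on illegal moves; older versions return null
  }
  if (move === null) {
    announce(`${intent.from} to ${intent.to} is not a legal move.`);
    return;
  }
  announce(`You played ${move.san}.`);
  // socket.emit("move", move.san); // relay the move over the WebSocket
}
```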

Accomplishments that we're proud of

  • The fact that we stuck to our original vision of a purely voice operated game from start to finish.
  • The collaboration between teammates despite several blockers along the way.
  • Managing to complete the project in a comprehensive manner in three weeks.

What we learned

  • Learnt to build a full-stack web application
  • Learnt about real-time communication using sockets
  • Learnt to use the Azure Speech SDK and custom speech training
  • Explored various speech APIs

What's next for ACE

Currently, we are working on training the Azure Speech API on custom audio commands. In the future, we plan to integrate more features and build our own speech recognition model to make the whole experience even smoother. We also plan to make our error fallbacks more robust, to handle edge cases that might otherwise creep in.
