Our goal was to tackle a specific challenge: providing better options for game players with disabilities. We wanted to take something simple and easily recognized and add functionality to it. We also wanted to explore the use of AI in chess by building a chess-playing agent.
What it does
Voice control is a common solution for players who have physical disabilities and cannot operate a computer or a conventional chess board. We wanted to take our own shot at this solution: the user communicates with the game through a verbal interface alone, and plays against an AI agent that we created.
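Once speech has been transcribed to text, the game still has to turn a sentence into a move. As a minimal sketch, assuming a hypothetical command grammar like "move e2 to e4" (the project's actual phrasing may differ), the parsing step could look like:

```cpp
#include <optional>
#include <regex>
#include <string>

// A chess move as a pair of board squares, e.g. {"e2", "e4"}.
struct Move { std::string from, to; };

// Hypothetical parser for a transcribed voice command of the form
// "move <square> to <square>" (case-insensitive). Returns the move on
// success, or std::nullopt for anything it does not recognize.
std::optional<Move> parseCommand(const std::string& text) {
    static const std::regex pattern(
        R"(\s*move\s+([a-h][1-8])\s+to\s+([a-h][1-8])\s*)",
        std::regex::icase);
    std::smatch m;
    if (std::regex_match(text, m, pattern))
        return Move{m[1].str(), m[2].str()};
    return std::nullopt;  // unrecognized command
}
```

Returning `std::nullopt` instead of throwing lets the verbal interface respond with a spoken "I didn't catch that" prompt rather than crashing on a misheard phrase.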
How we built it
To make it easier to understand, we'll list it out.

- Controller and GUI: Built with Qt, a C++ GUI framework and IDE.
- Voice recognition: Google's Cloud Speech-to-Text API handles recognizing the voice and powering the verbal interface with the user.
- Chess AI agent: A minimax agent that finds the move maximizing the worst-case position reached, based on an evaluation function. We then improved the evaluation function with reinforcement learning: our agent played against an already well-established chess AI (Stockfish), and the evaluation function was tuned using augmented random search.
- GitHub: We used GitHub to manage our project, as our team is spread across the world (from as close to Temple as Swarthmore to as far as Singapore and Pune, India)!
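The minimax idea above — pick the move whose worst-case outcome is best — can be sketched in a few lines. This is a self-contained toy version: the `Position` type and its stored score stand in for real board state and a real evaluation function, which are assumptions for illustration only.

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Hypothetical position type: a node in a game tree. In the real engine
// this would be a chess board; here leaves just carry a static score.
struct Position {
    int score = 0;                   // evaluation from the maximizing player's view
    std::vector<Position> children;  // positions reachable by one legal move
};

// Stand-in evaluation function; a real one would weigh material, mobility, etc.
int evaluate(const Position& p) { return p.score; }

// Plain minimax: return the value of the best move, assuming the opponent
// always replies with the move that is worst for us.
int minimax(const Position& p, int depth, bool maximizing) {
    if (depth == 0 || p.children.empty()) return evaluate(p);
    if (maximizing) {
        int best = std::numeric_limits<int>::min();
        for (const auto& child : p.children)
            best = std::max(best, minimax(child, depth - 1, false));
        return best;
    } else {
        int best = std::numeric_limits<int>::max();
        for (const auto& child : p.children)
            best = std::min(best, minimax(child, depth - 1, true));
        return best;
    }
}
```

A practical engine would add alpha-beta pruning on top of this to avoid searching branches that cannot affect the result.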
Challenges we ran into
This was the first time any of us had used Qt. It's a powerful tool, but it has a steep learning curve. Many standard C++ types are wrapped in Qt's own versions (for example, even strings have a Qt-specific class, QString). While this was helpful in implementing the GUI, it also meant constantly referring back to the documentation while grabbing coffee for our sleep-deprived brains!
Accomplishments that we're proud of
- Implementing a GUI
- Coding up minimax for the AI behind the computer opponent
- Improving the evaluation function of the minimax algorithm using augmented random search, a reinforcement learning strategy
- Cute chess pieces on our GUI!!! (Yes we know it's a small accomplishment but it motivated us to keep working!)
- We used Google Cloud Speech API for our speech recognition
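The augmented random search update mentioned above can be sketched as follows. This is a minimal illustration, not the project's actual training loop: the real reward would come from game results against Stockfish, so the toy `reward` function (closeness to a hidden "ideal" weight vector) and all parameter names here are assumptions made to keep the sketch self-contained.

```cpp
#include <random>
#include <vector>

// Hypothetical "ideal" evaluation weights the toy reward rewards closeness to.
const std::vector<double> kTarget = {1.0, -2.0, 0.5};

// Stand-in reward: negative squared distance to the target weights.
// In the real project this would be replaced by match results vs. Stockfish.
double reward(const std::vector<double>& w) {
    double d = 0.0;
    for (std::size_t i = 0; i < w.size(); ++i)
        d += (w[i] - kTarget[i]) * (w[i] - kTarget[i]);
    return -d;  // higher is better
}

// One basic ARS step: sample random directions, evaluate the weights nudged
// both ways along each direction, and move the weights toward whichever
// side scored better.
void arsStep(std::vector<double>& w, int numDirections, double noise,
             double stepSize, std::mt19937& rng) {
    std::normal_distribution<double> gauss(0.0, 1.0);
    std::vector<double> update(w.size(), 0.0);
    for (int k = 0; k < numDirections; ++k) {
        // Sample a random perturbation direction delta.
        std::vector<double> delta(w.size());
        for (auto& d : delta) d = gauss(rng);
        // Evaluate the weights nudged in both directions along delta.
        std::vector<double> wPlus = w, wMinus = w;
        for (std::size_t i = 0; i < w.size(); ++i) {
            wPlus[i]  += noise * delta[i];
            wMinus[i] -= noise * delta[i];
        }
        const double diff = reward(wPlus) - reward(wMinus);
        for (std::size_t i = 0; i < w.size(); ++i)
            update[i] += diff * delta[i];
    }
    // Average the weighted directions into one update.
    for (std::size_t i = 0; i < w.size(); ++i)
        w[i] += stepSize / numDirections * update[i];
}
```

The appeal of ARS for this setting is that it never needs a gradient: only the reward signal (who won, or how good the resulting positions were) drives the update.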
What we learned
- We learned to use the Qt environment!
- We all learned a little bit about how the minimax algorithm works
- Speech recognition (a bigger annoyance than we expected at the start of the hackathon)!
- Implementing augmented random search to tune the minimax evaluation function!
What's next for VoiceControlledChess
- Online multiplayer: use sockets to connect two clients so that players can compete online
- The blindfold mode we didn't get to: the screen is blacked out so that the user relies entirely on voice feedback and voice input