Inspiration
We were inspired to build our own reinforcement learning algorithm by the incredible results that AlphaGo Zero has recently achieved.
What it does
PokerOmega is a new kind of learning algorithm that combines aspects of Q-learning, evolutionary networks, and adversarial networks to learn Texas Hold 'Em poker entirely on its own, by playing against itself.
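To make the self-play idea concrete, here is a minimal sketch of tabular Q-learning with self-play on a one-card, one-street betting game (a Kuhn-poker-style toy). This is an illustration of the general technique, not PokerOmega's actual code; every name and parameter here is our own.

```python
import random
from collections import defaultdict

random.seed(0)  # for reproducibility of this sketch
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 50_000
Q = defaultdict(float)  # Q[(role, card, action)] -> estimated payoff

def choose(role, card, actions):
    """Epsilon-greedy action selection from the shared Q-table."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(role, card, a)])

def play_hand():
    """Both seats draw from the same Q-table: pure self-play."""
    a_card, b_card = random.sample(range(3), 2)  # 3-card deck, higher wins
    a_act = choose("first", a_card, ["bet", "check"])
    b_act = None
    if a_act == "check":
        reward = 1 if a_card > b_card else -1  # showdown for the antes
    else:
        b_act = choose("caller", b_card, ["call", "fold"])
        if b_act == "fold":
            reward = 1
        else:
            reward = 2 if a_card > b_card else -2
    # The game is one terminal step, so the TD target is the realized payoff.
    Q[("first", a_card, a_act)] += ALPHA * (reward - Q[("first", a_card, a_act)])
    if b_act is not None:
        Q[("caller", b_card, b_act)] += ALPHA * (-reward - Q[("caller", b_card, b_act)])

for _ in range(EPISODES):
    play_hand()
```

After training, the table reflects sensible poker instincts: betting the best card has positive value, and calling with the best card beats folding it. PokerOmega replaces the lookup table with neural networks (Keras/TensorFlow) over real poker states, but the self-play update loop is the same shape.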
How we built it
Building PokerOmega was an arduous process, which we document in detail at the site linked above. In short, we mapped out each phase of the design in theory, implemented it once we had a solid plan in place, and with constant, tireless effort constructed the necessary pieces one by one.
Challenges we ran into
A big challenge was the Python poker library we chose as the base for our simulation. The documentation was difficult to decipher and the code was hard to read. The library handled poker games in an oddly specific, counterintuitive way, so we had to rewrite large parts of it so that it no longer got in the way of our algorithm's learning.
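One way to contain a hard-to-read third-party engine is to hide it behind a small, learner-friendly interface. The sketch below shows that pattern with a stub standing in for the real library; all names here are hypothetical, not the library's or PokerOmega's.

```python
class StubEngine:
    """Stand-in for the third-party library's game object (illustrative only)."""
    def __init__(self):
        self.pot = 0
        self.done = False

    def apply(self, action):
        self.pot += {"fold": 0, "call": 1, "raise": 2}[action]
        if action == "fold":
            self.done = True

class PokerEnv:
    """Gym-style wrapper: reset() -> state, step(action) -> (state, reward, done)."""
    ACTIONS = ("fold", "call", "raise")

    def __init__(self, engine_factory=StubEngine):
        self._factory = engine_factory
        self._engine = None

    def reset(self):
        self._engine = self._factory()
        return self._observe()

    def step(self, action):
        assert action in self.ACTIONS
        self._engine.apply(action)
        reward = -1 if action == "fold" else 0  # toy reward for illustration
        return self._observe(), reward, self._engine.done

    def _observe(self):
        return {"pot": self._engine.pot}

env = PokerEnv()
state = env.reset()
state, reward, done = env.step("call")
```

Wrapping the library this way means the learning code only ever sees `reset`/`step`, so the engine's quirks can be patched in one place without touching the algorithm.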
Accomplishments that we're proud of
We are extremely proud to have built a working AI that achieved a 94% win rate against other instances of itself, and that beat us several times when we played against it.
What we learned
Creating PokerOmega was an extremely rewarding experience for all of us. Of course, we taught each other things about code and machine learning, but outside of that we each learned how to be part of a team, and that you can accomplish amazing things when you collaborate.
What's next for PokerOmega
Built With
- keras
- numpy
- python
- scikit-learn
- tensorflow