While I don't play World of Warcraft much anymore, its developer is one of the few game companies that offer a rich API you can integrate with and do fun things with the data. Mythic Keystone dungeons are a fun system that offers an increased difficulty level for the game's dungeons, and there is a scoring system built around it that awards points to groups that complete a dungeon within its par time, scaled by the keystone level they completed.
What it does
This tool keeps track of all completed Mythic Keystone dungeons and provides statistical analysis of the system from various angles.
How we built it
I built it in two halves. The main "brains" is a Ruby on Rails project running on an ECS cluster of t4g Graviton2 instances; it handles the background workers and acts as a JSON API for the front end. Everything public is heavily cached for performance, as most of the data changes only every few hours. The main database is Postgres running on Graviton2 RDS instances. The other half is a front end built with Eleventy as a static site generator and Chart.js for all the chart handling. The entire front end is served from S3 + CloudFront, with no server-side rendering.
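The "cache heavily because the data only changes every few hours" idea can be sketched in plain Ruby with a small time-to-live cache (Rails' own `Rails.cache.fetch` behaves similarly); the class and key names here are illustrative, not from the real codebase.

```ruby
# Minimal TTL cache sketch: recompute a value only when the cached
# entry is missing or older than ttl seconds.
class TtlCache
  def initialize
    @store = {}
  end

  def fetch(key, ttl:)
    entry = @store[key]
    if entry && (Time.now - entry[:at]) < ttl
      entry[:value]
    else
      value = yield
      @store[key] = { value: value, at: Time.now }
      value
    end
  end
end

cache = TtlCache.new
calls = 0
2.times do
  # The expensive aggregation runs once; the second read is served
  # from the cache because it is well within the 2-hour TTL.
  cache.fetch("dungeon_stats", ttl: 7200) { calls += 1; { "runs" => 1234 } }
end
calls # => 1
```

With data that refreshes only a few times a day, even a short TTL keeps the database out of the request path for almost all public traffic.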
Challenges we ran into
Handling the sheer volume of some of the statistics, and analyzing large amounts of data performantly. Some of the joins originally took 30 seconds to return from the database because of their complexity (and the difficulty of indexing some of the queries), but that was shaved down significantly.
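The payoff from indexing can be illustrated in plain Ruby: a nested-loop "join" compares every pair of rows, while building a hash "index" first makes each lookup constant-time. The data shapes below are invented for the example and don't reflect the real schema.

```ruby
# Toy rows standing in for database tables.
runs     = (1..10_000).map { |i| { id: i, dungeon_id: i % 8 } }
dungeons = (0..7).map { |i| { id: i, name: "Dungeon #{i}" } }

# Without an index: every run scans the whole dungeons list (O(n * m)).
slow = runs.map { |r| dungeons.find { |d| d[:id] == r[:dungeon_id] } }

# With an index: one pass to build the hash, O(1) per lookup after (O(n + m)).
by_id = dungeons.each_with_object({}) { |d, h| h[d[:id]] = d }
fast  = runs.map { |r| by_id[r[:dungeon_id]] }

slow == fast # => true
```

A database index does the same job server-side: the planner can probe the index instead of scanning and comparing whole tables, which is where multi-second joins usually go.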
Accomplishments that we're proud of
I am always thrilled to see the data presented in a gratifying way after collecting it for a long while. There's a pleasing feeling in watching all of your work come together.
What we learned
I learned a lot about building for multiple architectures while deploying containers to ECS. All the images I build for the backend are dual-architecture (amd64 and arm64), but I only deploy on ARM, as it offers by far the best price-performance ratio available.
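A dual-architecture image like this is typically produced with Docker's `buildx` builder; this is a sketch with a hypothetical image name, not the project's actual build command.

```shell
# Build amd64 and arm64 variants in one go and push a multi-arch
# manifest; the registry/tag below is a placeholder.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag registry.example.com/keystone-backend:latest \
  --push \
  .
```

The pushed manifest lists both variants, so the same tag runs on x86 CI machines and the ARM-based Graviton2 instances in ECS.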
What's next for Keystone.rip - Mythic Plus Statistics
Increasing the number of statistics available to explore is #1, and beyond that, optimizing many of the existing ones to take less compute to produce.