Inspiration
A poor friend of ours had a sickly start to the day. To make matters worse, he lost multiple rounds of his favourite game. We wanted to brighten up his day.
What it does
A holistic, statistical approach to actually helping our friend analyse his Clash Royale deck, with a little bit of fun statistics learning along the way.
How we built it
Clash Markets is a full-stack quantitative analytics platform: a FastAPI backend serving a React + Tailwind SPA with a Bloomberg Terminal aesthetic. All 120 Clash Royale cards are treated as financial assets. Their win rates are normalised per market bracket by subtracting the market mean and recentring at 0.50 to correct for top-player bias, then six invented statistics are computed at startup from seeded CSVs:

- **MPS** (mispricing score): the z-scored residual of a per-market OLS regression $\hat{r}_i = \beta_0 + \beta_1 \cdot \text{usage}_i$.
- **ESR** (elixir Sharpe): $\frac{\text{WR}_i - 0.50}{\sigma_{\text{WR}_i}} \cdot \frac{1}{\text{elixir}_i}$.
- **Meta Momentum**: $\frac{\text{usage}_{\text{GC}} - \text{usage}_{\text{ladder}}}{\text{usage}_{\text{ladder}}}$.
- **Deck Beta**: rarity-mapped meta sensitivity.
- **Clash Alpha**: $\overline{\text{MPS}_z} \cdot 0.125 - \bar{\beta} \cdot \sigma_{\text{patch}}$.
- **DAR** (Deck Alpha Rating), the headline stat: a sigmoidal composite $\text{DAR} = \tanh(0.35\,\overline{\text{ESR}} + 0.35\,\overline{\text{MPS}_z} + 0.20\,\overline{\text{UCB}_{\text{norm}}} - 0.10\,\bar{\beta})$, where the UCB term $\alpha \cdot \text{WR}_{\text{personal}} + (1-\alpha)\cdot\text{WR}_{\text{global}} + 1.4\sqrt{\ln(T+1)/(n_i+1)}$ blends a player's personal win rate with meta data, weighted by experience via $\alpha = \min(1,\, n_i/30)$.

The deck optimiser runs a 2,000-sample Monte Carlo over the card pool and selects the deck maximising DAR; the efficient frontier plots win rate against average elixir cost across all sampled decks; and a Kaplan-Meier survival curve fitted to 363 patch events measures how long post-buff alpha edges persist before the meta adapts.
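The experience-weighted UCB blend and the DAR composite can be sketched roughly as follows. This is an illustrative reconstruction from the formulas above, not our actual backend code: the function and parameter names (`ucb_winrate`, `deck_alpha_rating`, `total_games`) are invented for this sketch, while the weights (0.35/0.35/0.20/0.10), the 1.4 exploration constant, the $n_i/30$ experience cap, and the tanh squash come straight from the definitions.

```python
import math
from statistics import mean

def ucb_winrate(personal_wr, global_wr, n_games, total_games):
    """Blend a player's personal win rate with the global (meta) win rate,
    weighted by experience, plus a UCB-style exploration bonus."""
    alpha = min(1.0, n_games / 30)                     # experience weight
    blend = alpha * personal_wr + (1 - alpha) * global_wr
    bonus = 1.4 * math.sqrt(math.log(total_games + 1) / (n_games + 1))
    return blend + bonus

def deck_alpha_rating(esr, mps_z, ucb_norm, beta):
    """DAR: tanh of a weighted sum of the per-card stats,
    each averaged over the eight cards in the deck."""
    score = (0.35 * mean(esr) + 0.35 * mean(mps_z)
             + 0.20 * mean(ucb_norm) - 0.10 * mean(beta))
    return math.tanh(score)                            # squash into (-1, 1)
```

The tanh keeps the headline number bounded, so decks with extreme raw scores still land on a comparable scale.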
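The Monte Carlo deck optimiser amounts to sampling random eight-card decks and keeping the best scorer. A minimal sketch, assuming a generic scoring callback in place of our real DAR pipeline (the name `optimise_deck` and the seeded RNG are illustrative; the 2,000-sample default matches the write-up):

```python
import random

def optimise_deck(cards, dar_fn, n_samples=2000, deck_size=8, seed=0):
    """Monte Carlo search: sample random 8-card decks from the card pool
    and return the deck that maximises the supplied DAR score."""
    rng = random.Random(seed)                 # seeded for reproducible runs
    best_deck, best_dar = None, float("-inf")
    for _ in range(n_samples):
        deck = rng.sample(cards, deck_size)   # 8 distinct cards, no repeats
        dar = dar_fn(deck)
        if dar > best_dar:
            best_deck, best_dar = deck, dar
    return best_deck, best_dar
```

Every sampled (win rate, average elixir) pair is also a point on the efficient frontier plot, so the same loop feeds both features.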
Challenges we ran into
Primarily, coming up with a meaningful application that fitted the theme well and produced a quantitative impact. We initially had other project ideas but had to pivot quite late after realising they weren't the best. For this project specifically, there was a lot of data to digest, and obtaining meaningful data was a challenge of its own. Once we had the data, presenting it in an interpretable format was hard: it involved creating different statistical signals inspired by our quant interests, and eventually arriving at a single holistic figure that could contain all the information cleanly, which became our capstone.
Accomplishments that we're proud of
Devising different statistical signals and applying our maths knowledge to a new domain. This meant using statistics we hadn't yet been introduced to at university, doing the research and learning to build approximators and signals to the best of our abilities.
What we learned
No models are perfect. We did the best we could with the data available.
What's next for KL Divergence
Working on our limitations