Inspiration
Coaches review thousands of post-match events, but only a small number of deaths actually decide games. Identifying those moments is manual, subjective, and time-consuming.
What it does
Mistake Gravity Index analyzes official GRID match data to:
- Filter out traded deaths
- Detect deaths under objective pressure
- Rank mistakes by strategic impact using an explainable scoring model

The result is an automated game-review agenda, not raw statistics.
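The pipeline steps above can be sketched roughly as follows. This is an illustrative outline only: the `Death` fields, the 5-second trade window, and the factor weights are assumptions for the sketch, not the project's actual schema or values.

```python
from dataclasses import dataclass

# Hypothetical: a death counts as "traded" if the killer dies back within this window.
TRADE_WINDOW_S = 5.0

@dataclass
class Death:
    victim: str
    killer: str
    timestamp: float       # seconds into the game
    near_objective: bool   # was an objective contested when this death happened?

def filter_traded(deaths: list[Death]) -> list[Death]:
    """Drop deaths whose killer was killed back within TRADE_WINDOW_S."""
    traded = set()
    for i, d in enumerate(deaths):
        for other in deaths:
            if (other.victim == d.killer
                    and 0 <= other.timestamp - d.timestamp <= TRADE_WINDOW_S):
                traded.add(i)
                break
    return [d for i, d in enumerate(deaths) if i not in traded]

def gravity_score(death: Death) -> float:
    """Deterministic, explainable score: each factor is a named, inspectable weight."""
    score = 1.0
    if death.near_objective:
        score += 2.0  # deaths under objective pressure weigh more
    return score
```

With deterministic weights like these, every ranked mistake can be explained by listing which factors fired, which is what makes the output coach-readable rather than a black-box prediction.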
How I built it
- Python CLI built in JetBrains PyCharm
- Official GRID APIs (Central Data, Series State, Events)
- Deterministic scoring logic (no black-box predictions)
- Snapshot tests to guarantee analytical stability
- Rich-based CLI output for judge-readable demos

Junie assisted with refactoring, test design, and compliance checks.
Challenges I ran into
- Making sense of dense event streams
- Avoiding misleading “near objective” noise
- Designing a scoring model coaches can trust and explain
Accomplishments I'm proud of
- Fully explainable “assistant coach” logic
- Automated macro-review generation
- Production-grade CLI with real match data
- Snapshot testing for analytical correctness
What I learned
Raw data isn’t insight. Coaches need prioritization, context, and reasoning—not dashboards.
What’s next
- Player-specific trend reports
- Multi-match aggregation
- Optional “what-if” modeling layered on top of stable insights
Built With
- grid-esports-apis (central-data, series-state, events)
- jetbrains-junie
- jetbrains-pycharm
- pytest
- python
- rich
