Inspiration
I started playing League some 12 years ago. Since then, Faker has gone on to win six world championships. I, on the other hand, went on to graduate high school and college before becoming a professional software developer. When I saw this opportunity pop up, in a lot of ways it felt like my life coming full circle. The part of my life where I spent many late nights emulating the guy from the music video for "Warriors" by Imagine Dragons and the very next part, where I spent many late nights debugging a missing semicolon, seemed to be converging and meeting at a point. I could not let this opportunity go, and so, keyboard and mouse in hand, I decided to give it my best shot. What resulted from that is right here - riftMetrics.
What it does
riftMetrics is an AI-powered analytics tool built specifically for League of Legends. Data is a very powerful thing, but only when it is interpreted properly. This platform not only presents data in a meaningful way, but takes it one step further with the help of AI, providing feedback almost as if the user were sitting with an actual analyst or coach and listening to them comment on the user's play style - at least, to a certain degree.
How we built it
riftMetrics is built on a multi-layered architecture that brings together data collection, processing, and AI-powered analysis:
Core Technology Stack
- Frontend: Streamlit for rapid UI development with responsive components
- Data Source: Riot Games API for match history and player statistics
- AI Engine: Claude Sonnet 4.5 via AWS Bedrock
- Agent Frameworks: Strands SDK for tool-based AI interactions
Data Collection Architecture
The foundation of riftMetrics is an efficient async data-fetching system that respects Riot's API rate limits (20 requests/second, 100 requests/2 minutes):
async def fetch_match_details_async(region, match_id, client, semaphore):
    async with semaphore:  # Burst control
        await asyncio.sleep(DELAY_PER_REQUEST)  # 1.5s non-blocking delay
        result = await fetch_url_quick(url, client)  # url built from region + match_id
        if 'retry_after' in result:
            wait_time = result['retry_after']
            if wait_time > 5:
                return {'long_wait_signal': wait_time}  # hand long waits back to the caller
            await asyncio.sleep(wait_time + 0.5)  # absorb short waits in place
        return result
This asynchronous approach with semaphore-based burst limiting reduced data loading time by ~40% compared to the synchronous implementation, while gracefully handling rate-limit errors by extracting wait times from response headers.
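The fetcher above is driven concurrently with a shared semaphore and asyncio.gather. Here is a simplified, self-contained sketch of that pattern; the fetch itself is simulated, and the constants are illustrative (the real app uses a 1.5s delay and calls the Riot match endpoint):

```python
import asyncio

MAX_CONCURRENT = 5        # burst size allowed through the semaphore
DELAY_PER_REQUEST = 0.01  # per-request pacing (1.5s in the real app)

async def fetch_one(match_id, semaphore):
    """Simulated fetch: the real version hits the Riot match-v5 endpoint."""
    async with semaphore:                       # at most MAX_CONCURRENT in flight
        await asyncio.sleep(DELAY_PER_REQUEST)  # non-blocking pacing delay
        return {"match_id": match_id, "ok": True}

async def fetch_all(match_ids):
    semaphore = asyncio.Semaphore(MAX_CONCURRENT)
    tasks = [fetch_one(m, semaphore) for m in match_ids]
    return await asyncio.gather(*tasks)         # run concurrently, preserve order

results = asyncio.run(fetch_all([f"NA1_{i}" for i in range(20)]))
print(len(results))  # 20
```

Because gather preserves input order, match details line up with the original match-ID list even though requests complete out of order.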
Data Processing Pipeline
Raw match data undergoes extensive contextualization before reaching the AI:
- Match History Fetching: Retrieves the 100 most recent ranked games (Solo/Duo and Flex)
- Statistical Aggregation: Calculates over 50 different metrics per player
- Role-Based Context Building: Separates analysis for laners, junglers, and supports
- Rich Context Generation: Transforms raw numbers into behavioral patterns
For example, instead of just storing "kills: 5, deaths: 3, assists: 8", the system calculates:
- Kill participation percentage (KP%)
- Combat efficiency (damage dealt and tanked, relative to gold earned)
- Performance volatility across wins vs losses
- Role-specific metrics (CS@10/CSD@10 for laners, objective control for junglers, vision dominance for supports)
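A few of these derived metrics are simple enough to sketch directly. The formulas below are illustrative, not the exact ones riftMetrics uses:

```python
from statistics import pstdev

def kill_participation(kills, assists, team_kills):
    """KP% = share of the team's kills the player took part in."""
    return 100.0 * (kills + assists) / team_kills if team_kills else 0.0

def combat_efficiency(damage_dealt, damage_taken, gold):
    """Damage dealt + tanked per 1000 gold earned (illustrative weighting)."""
    return (damage_dealt + damage_taken) / (gold / 1000.0) if gold else 0.0

def volatility(win_kdas, loss_kdas):
    """Spread of performance across wins and losses combined."""
    return pstdev(win_kdas + loss_kdas)

print(kill_participation(5, 8, 20))  # 65.0
```

So the "kills: 5, deaths: 3, assists: 8" example above, in a game where the team took 20 kills, becomes a 65% kill participation rather than three raw counters.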
AI Agent Architecture
riftMetrics employs three specialized AI agents, each optimized for specific tasks:
- Playstyle Analysis Agent (Temperature: 0.7)
- Generates personalized 4-5 sentence behavioral narratives
- Uses tools:
- Uses tools: get_playstyle_fingerprint(), get_behavioral_patterns(), get_role_playstyle()
- Creates dynamic style labels like "🔥 Aggressive Carry" or "🎯 Objective Focused"
- Interactive Coach Agent (Temperature: 0.8)
- Conversational coaching with access to 15+ specialized tools
- Responds to natural language questions: "Why do I die more in losses?"
- Provides role-aware advice
- Page Summary Agent (Temperature: 0.3)
- Generates data-driven summaries for each analytics section
- Lower temperature ensures factual, deterministic analysis
- References specific numbers from displayed metrics
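The three agents differ mainly in temperature and tool access, which suggests a single config-driven factory. This is a hypothetical sketch of that idea, not the actual Strands/Bedrock wiring (names and structure are illustrative):

```python
# Hypothetical per-agent configuration; the real app wires these into
# Strands agents backed by Claude Sonnet on Bedrock.
AGENT_CONFIGS = {
    "playstyle": {"temperature": 0.7, "tools": ["get_playstyle_fingerprint",
                                                "get_behavioral_patterns",
                                                "get_role_playstyle"]},
    "coach":     {"temperature": 0.8, "tools": []},  # 15+ tools in the app
    "summary":   {"temperature": 0.3, "tools": []},  # factual, deterministic
}

def build_agent(name):
    # In the real app this would instantiate an agent; here we just
    # return the resolved configuration for the named agent.
    return AGENT_CONFIGS[name]

print(build_agent("summary")["temperature"])  # 0.3
```

Keeping the temperatures in one table makes the design decision visible: creative narrative at 0.7, conversational coaching at 0.8, factual summaries at 0.3.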
State Management & Caching
To minimize wait times, I implemented a FIFO cache strategy that stores up to 5 users' data:
if len(st.session_state.user_cache) >= MAX_CACHED_USERS:
oldest_key = next(iter(st.session_state.user_cache))
del st.session_state.user_cache[oldest_key]
# Cache includes staleness detection
if cached_latest_match_id == new_latest_match_id:
load_from_cache()
else:
fetch_fresh_data()
This approach provides near-instantaneous loading for cached users while intelligently detecting when cache data becomes stale.
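Put together, the FIFO eviction and staleness check amount to something like this self-contained sketch, using a plain dict in place of st.session_state (Python dicts preserve insertion order, which is what makes the FIFO trick work):

```python
MAX_CACHED_USERS = 5

def cache_user(cache: dict, puuid: str, data: dict) -> None:
    """Insert with FIFO eviction: drop the first-inserted entry when full."""
    if puuid not in cache and len(cache) >= MAX_CACHED_USERS:
        oldest_key = next(iter(cache))  # first-inserted key
        del cache[oldest_key]
    cache[puuid] = data

def is_fresh(cache: dict, puuid: str, latest_match_id: str) -> bool:
    """An entry is fresh only if the player has played no new match since caching."""
    entry = cache.get(puuid)
    return entry is not None and entry["latest_match_id"] == latest_match_id

cache = {}
for i in range(6):  # one more than the cap: the oldest entry gets evicted
    cache_user(cache, f"player{i}", {"latest_match_id": f"NA1_{i}"})
print(list(cache))  # player0 is gone; player1..player5 remain
```

Comparing only the latest match ID keeps the staleness check to a single cheap API call before deciding whether to reload everything.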
Analytics Sections
The app provides eight distinct analysis views:
- Overview: Player tags, playstyle analysis, season performance
- Match History: Detailed game-by-game breakdown with expandable stats
- Champion Insights: A-D tier ranking system with performance scoring
- AI Coaching Session: Interactive Q&A with contextual tool access
- Advanced Stats: 12+ computed metrics (volatility, aggression, persistence scores)
- Early vs Late Game: Role-specific early game analysis (CS@10 for laners, jungle pressure for junglers, vision@10 for supports)
- Matchup Analysis: Head-to-head champion performance with role-aware stats
- Performance Trends: KDA trends, win/loss patterns, top performers
Each section features an AI Summary button that provides instant interpretation of the displayed metrics.
Challenges we ran into
- Asynchronous Rate Limit Handling
The biggest technical hurdle was transforming a synchronous, blocking data fetch into an efficient async system. Initially, the function used time.sleep() within loops and rendered Streamlit elements during execution, which are both major anti-patterns. Learning asynchronous Python on the fly while implementing semaphore-based concurrency control was intense, but the 40% performance improvement made it worthwhile.
- State Management Nightmares
Streamlit's session state proved deceptively complex. Edge cases kept emerging:
- Duo Problem: Two players who queued together for 100 games showed identical metrics because the cache was mistakenly identifying them as the same player
- The Refresh Problem: Discovering that session state clearing on refresh was expected behavior, not a bug, led to a last-minute pivot away from persistent JSON-based caching due to time constraints
- Queue Filter Context: Ensuring filtered data (Solo/Duo vs Flex) properly propagated to AI agents without context pollution required careful state management
Even beyond that, a lot of time was simply spent scratching my head as user caches kept getting switched around, one player's playstyle description showed up in another player's profile, and so on. Eventually, simplifying the caching logic became the answer. The build philosophy for this project has always been to get it right and make it work first. The "making it better" could wait.
- Dude, what's your lane?
Midway through development, I realized I suffered from a severe case of League of Legends ethnocentrism, and so most, if not all, of my metrics were laner-biased. I was judging junglers on CS@10 and supports on damage output. As a flag bearer for the idea of NOT "analyzing a fish's ability to climb a tree," I needed to change things up, and fast. This meant:
- Refactoring the entire metrics calculation system
- Creating role-specific tools and context builders
- Rewriting AI prompts to be role-aware
- Adding jungle-specific metrics (objective control, counter-jungle score, pressure rating)
- Adding support-specific metrics (vision dominance, utility output, frontline presence)
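The role-specific refactor boils down to dispatching on role when deciding which metrics to compute. A minimal sketch, using the metric names from the lists above (the grouping and fallback behavior are illustrative):

```python
# Role-aware metric selection: each role is judged on what it actually controls.
ROLE_METRICS = {
    "LANER":   ["cs_at_10", "csd_at_10", "solo_kills"],
    "JUNGLE":  ["objective_control", "counter_jungle_score", "pressure_rating"],
    "SUPPORT": ["vision_dominance", "utility_output", "frontline_presence"],
}

def metrics_for_role(role: str) -> list[str]:
    # Top/mid/bot all share the laner metric set in this sketch.
    return ROLE_METRICS.get(role, ROLE_METRICS["LANER"])

print(metrics_for_role("JUNGLE")[0])  # objective_control
```

Centralizing the mapping means the metric calculators, context builders, and AI prompts can all branch on the same source of truth instead of hard-coding laner assumptions.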
It was a long and stressful process, but the rewards speak for themselves. The app went from being extremely laner-biased to being a lot more role-aware than the average player in my elo.
- Data without meaningful interpretation is just numerical noise
With access to 100+ data points per player for 10 players in just one match, determining which metrics mattered was challenging and required a lot of sifting through dummy data. I relied on:
- Personal experience from thousands of games played and watched
- Analytics segments discussed by pros/analysts that I listened to over the years
- Conversations with Claude about League fundamentals
- Research on existing analytics sites (op.gg, mobalytics.gg)
- Iterative testing with different player profiles
An example of this is the Persistence Score metric on the Advanced Stats tab. This score is a weighted sum of a player's average kill participation, objective score (the objectives they helped secure for their team), and combat score (damage dealt plus damage tanked), computed over lost games specifically and normalized to a 0-10 range.
$$ \text{Persistence} = \frac{(W_{KP} \times \text{LossKP}) + (W_{OBJ} \times \text{ObjectiveScore}) + (W_{CS} \times \text{CombatShare})}{18.5} \times 10 $$
Here, \(W_{KP} = 3.5\), \(W_{OBJ} = 3.5\), and \(W_{CS} = 3.5\) were the weights assigned because in losses, or in games where a player is behind, the ability to get kills and deal or absorb damage carries even more value as a signal that the player has not given up, which is exactly what this stat aims to measure. Someone with a low persistence score is someone who tilts easily. This is just one example of how data was interpreted throughout the development of this application.
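As a sketch, the Persistence formula translates directly into code. The inputs here are assumed to be normalized to the 0-1 range; the real normalization in riftMetrics may differ:

```python
W_KP, W_OBJ, W_CS = 3.5, 3.5, 3.5  # weights from the write-up
SCALE = 18.5                       # normalization constant from the formula

def persistence_score(loss_kp: float, objective_score: float, combat_share: float) -> float:
    """Weighted sum of loss-game contribution metrics, scaled toward a 0-10 range."""
    weighted = W_KP * loss_kp + W_OBJ * objective_score + W_CS * combat_share
    return round(weighted / SCALE * 10, 1)

# e.g. 60% kill participation, 0.5 objective score, 0.7 combat share in losses
print(persistence_score(0.6, 0.5, 0.7))  # 3.4
```

Note that with equal weights of 3.5 and 0-1 inputs, the maximum attainable score is about 5.7, so in practice high scores cluster well below the nominal ceiling of 10.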
Accomplishments that we're proud of
Technical Achievements
- Efficient async architecture that handles Riot's strict rate limits while minimizing user wait time to ~2.5 minutes for 100 games
- Role-aware AI analysis that understands the data in context: supports, for example, are not judged on CS, while junglers are expected to excel at objective control
- Dynamic caching system with staleness detection that provides instant re-loads for recent searches
- Three specialized AI agents working in harmony, each optimized for its specific purpose
User Experience
- Streamlit ships with its own UI elements and styling, but building on top of that to make the app look and feel the way it does now has been both challenging and extremely rewarding
- Playstyle tags (inspired by Mobalytics' tagging system) are one of the fastest ways for a user to get insight into their playstyle. I really like them as implemented in Mobalytics and am pleased to be able to incorporate them in riftMetrics as well
- Queue filtering that allows players to analyze Solo/Duo and Flex performance separately
- AI summaries on every page that translate raw numbers into actionable insights
- Interactive AI-powered coach that answers even vague questions like "why am I losing games even though I am getting fed" with actionable insights and the data to back them up
What we learned
Technical Lessons
- Asynchronous Python is powerful but complex: Semaphores, rate limiting, and concurrent tasks require careful orchestration
- State management is harder than it looks: Streamlit's simplicity hides deep complexity when building stateful applications
- AI temperature matters: 0.3 for factual summaries, 0.8 for conversational personality, 0.7 for creative analysis
Domain Knowledge Insights
- Context is everything: The same metric (e.g., 150 DPM) means different things for a tank vs. an ADC
- Role diversity matters immensely: A one-size-fits-all metrics approach fails 40% of players (junglers and supports)
- Patterns emerge from aggregation: Individual game stats are noisy, but 50-100 games reveal true tendencies
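The aggregation point can be illustrated with a quick simulation: any single game's stat swings widely, but the mean over many games converges on the underlying tendency (numbers here are synthetic, for illustration only):

```python
import random
from statistics import mean

random.seed(42)  # fixed seed so the example is deterministic
# Simulate per-game KDA for a player whose underlying average is 3.0
games = [random.gauss(3.0, 1.5) for _ in range(100)]

single = games[0]            # any one game can land far from 3.0
ten_game = mean(games[:10])  # a 10-game sample is closer on average
full = mean(games)           # the 100-game mean hugs the true tendency

print(f"one game: {single:.2f}, 10 games: {ten_game:.2f}, 100 games: {full:.2f}")
```

This is why riftMetrics aggregates over 100 ranked games before labeling a playstyle rather than reacting to the last match.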
Development Philosophy
- Make it work, then make it better: With time constraints, functional beats perfect
- Test with edge cases: Duo partners, one-tricks, multi-role players all broke assumptions
- Let the data guide you: My laner bias was only visible after testing with jungle/support accounts
The most rewarding realization though, was watching raw numbers tell rich stories when properly contextualized. For example, a player with high kill participation in wins but significantly higher deaths in losses reveals someone who forces plays without adjusting for game state. They're unaware they're behind or feel pressured to carry. This kind of behavioral pattern emerges naturally from aggregated statistics when you know what to look for.
What's next for riftMetrics
Short-term Improvements
- Persistent caching with Redis: Replace session state with proper database storage
- Timeline endpoint integration: Access minute-by-minute game data via the Riot API's match timeline endpoint for deeper analysis (gold leads over time, objective timing, lane-phase breakpoints)
- Item build analysis: Provide recommendations based on matchup and game state
- Rank-aware benchmarking: Compare performance against rank-specific averages (Diamond CS@10 vs Gold CS@10)
Medium-term Features
- RAG-powered insights: Vector database of pro games and high-elo matches for contextual recommendations
- A2A (Agent-to-Agent) architecture: Multiple specialized agents collaborating (one for laning, one for macro, one for mentality)
- Duo synergy analysis: Identify which friends you perform best with
Long-term Vision
- Mobile app: Companion app for post-game quick reviews
- Draft coach: Real time drafting coach that gives personalized results of what to pick during draft based on user context, meta read, patch, etc.
Built With
- amazon-web-services
- api
- bedrock
- claude
- docker
- ec2
- ecs
- python
- riot-games
- strands-sdk
- streamlit
