Inspiration
There is a player who has been grinding solo queue in the lowest ranks for over ten years. That player is me.
I keep playing because I love this game. I keep playing because I want to get better. I watch the pros and my heart pounds. I watch streamers review replays with their friends, laughing, learning, and I think — I want that too.
But I have never had a friend to do that with.
What I really want is someone to watch my plays. To give me feedback. To share the feeling of chasing a goal together. Not a stats dashboard. Not a tier list. Just someone to look at my replay with me and help me see what I missed.
Maybe the real reason I have been stuck in low ranks is not a lack of skill. It is the absence of someone to reflect with.
Victory is not something you achieve alone. So I built RePlection.
What it does
RePlection is a voice-powered AI companion that watches your League of Legends replays with you in real time.
Two-phase approach:
- Phase 1 (Pre-Analysis): Upload your match replay video. RePlection analyzes the full recording using the Gemini API, identifying death scenes, good plays, patterns, and areas for improvement. It also pulls match data from the Riot Games API (kills, deaths, objectives, timeline events) for factual grounding.
- Phase 2 (Live Session): Start a real-time voice conversation via the Gemini Live API. Instead of teaching from above, the AI sits next to you, watches the same screen, and reflects on the match together. It controls the replay client directly — seeking to key moments, pausing, adjusting camera angles, and slowing down playback — all through voice-triggered Function Calling.
Feedback Philosophy: First, affirm that continuing to play is something to be proud of. Sticking with a game for ten years in the lowest ranks takes real love. Harsh criticism does not push players forward — warmth does. But improvement is still necessary. So RePlection delivers both the good and the bad as objective facts, without sugarcoating, without blame. Then it gives exactly one thing to work on. Like honest feedback from a friend you are completely comfortable with — encouragement that builds confidence, facts delivered straight, and one clear next step.
How we built it
Phase 1 — Pre-Analysis Pipeline (Gemini API + Riot Games API):
The match video (MP4) is uploaded to the Gemini API via the Google GenAI SDK. A multi-stage pipeline processes it:
- Stage A (Riot Grounding): Riot Games Match-V5 and Timeline APIs provide deterministic match data — team compositions, kill events with coordinates, objective timers, ward events, building destructions. Scene candidates are generated algorithmically using importance scoring (bounty value, objective proximity, consecutive deaths, post-death losses).
- Stage 1 (Observer): Gemini analyzes the video in three passes. Pass 1 identifies 5-10 candidate scenes. Pass 2 deep-dives each clip for validation. Pass 3 produces structured observations (death scenes with a root-cause taxonomy, good plays with reusable rules).
- Stage 2 (Analyst): Patterns are extracted across all scenes — the player's core strength, primary weakness, recurring causes.
- Stage 3 (Coach): A tailored system instruction is generated for the live session, embedding all observations, match context, and personality guidelines.
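Stage A's importance scoring can be sketched roughly as follows. This is an illustrative reconstruction, not RePlection's actual code: the field names, weights, and `KillEvent` type are invented to show the idea of scoring Riot timeline events into scene candidates.

```python
# Illustrative sketch of Stage A scene-candidate scoring. Weights, field
# names, and the KillEvent type are hypothetical; the real heuristics differ.
from dataclasses import dataclass

@dataclass
class KillEvent:
    timestamp_ms: int       # from the Riot Timeline API
    victim_is_player: bool
    bounty_gold: int        # bounty/shutdown value on the kill
    near_objective: bool    # fight happened near an active objective

def importance_score(event: KillEvent, recent_player_deaths: int) -> float:
    """Score a kill event as a replay-scene candidate."""
    score = event.bounty_gold / 100.0           # bigger bounties matter more
    if event.near_objective:
        score += 3.0                            # fights around objectives
    if event.victim_is_player and recent_player_deaths >= 2:
        score += 2.0                            # consecutive-death spiral
    return score

def top_candidates(events, deaths_window, k=10):
    """Rank events by score; deaths_window holds recent-death counts per event."""
    scored = sorted(zip(events, deaths_window),
                    key=lambda t: importance_score(*t), reverse=True)
    return [e for e, _ in scored[:k]]
```

A pipeline like this stays fully deterministic, so the same match always yields the same candidate scenes for the Gemini passes to validate.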
The entire pipeline output is saved as a JSON artifact (CoachingContext) that feeds Phase 2.
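The shape of that artifact might look like the sketch below. The field names here are assumptions made for illustration; only the overall idea (observations plus a Stage 3 system instruction, serialized to JSON for Phase 2) comes from the pipeline description above.

```python
# Hypothetical shape of the CoachingContext artifact; actual field names
# in RePlection's pipeline may differ.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class SceneObservation:
    timestamp_s: float
    kind: str              # "death" | "good_play"
    root_cause: str        # e.g. "missed minimap info"
    note: str

@dataclass
class CoachingContext:
    match_id: str
    strengths: list
    primary_weakness: str
    scenes: list = field(default_factory=list)
    system_instruction: str = ""   # Stage 3 output, embedded for Phase 2

ctx = CoachingContext(
    match_id="KR_1234567890",
    strengths=["trades aggressively when the wave state favors it"],
    primary_weakness="positioning after losing vision of the enemy jungler",
    scenes=[SceneObservation(941.0, "death", "missed minimap info",
                             "Malphite ult from fog at 15:41")],
)
artifact = json.dumps(asdict(ctx), indent=2)  # saved to disk, loaded in Phase 2
```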
Phase 2 — Live Session (Gemini Live API):
The live session uses gemini-2.5-flash-native-audio-preview with bidirectional audio streaming. Key components:
- RealtimeEngine: Manages the WebSocket connection, audio capture (16kHz input), audio playback (24kHz output), and barge-in handling. Push-to-talk with manual VAD (automatic activity detection disabled; explicit `ActivityStart`/`ActivityEnd` signals).
- Function Calling: Four replay control tools — `seek_replay`, `pause_replay`, `resume_replay`, `slow_motion` — declared as `NON_BLOCKING` function declarations. When the AI decides to show a moment, it triggers a function call that hits the LoL Replay API (localhost:2999) via HTTP.
- ReplayStateController: A desired-state reconciliation controller. Each tool call updates the desired state, issues the HTTP command, then reconciles against the actual replay state. Barge-in triggers a re-sync from the actual state.
- Session management: `sessionResumption` handles the 10-minute WebSocket limit. Context window compression (`sliding_window`) keeps long conversations within token limits.
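The tool side of this can be sketched as one declaration plus a dispatcher. This is a simplified illustration: the declaration follows the Live API's JSON-schema style, but the parameter names and the exact Replay API payload fields are assumptions, not a verbatim copy of RePlection's implementation (the other three tools would be declared analogously).

```python
# Sketch of a NON_BLOCKING replay tool and a dispatcher that maps tool calls
# to LoL Replay API playback requests. Parameter and payload field names are
# illustrative assumptions.
SEEK_REPLAY = {
    "name": "seek_replay",
    "description": "Jump the replay to a timestamp (seconds).",
    "parameters": {
        "type": "object",
        "properties": {"time_s": {"type": "number"}},
        "required": ["time_s"],
    },
    "behavior": "NON_BLOCKING",  # the AI keeps talking while the seek executes
}

def to_replay_request(tool_name: str, args: dict) -> dict:
    """Translate a tool call into a POST body for localhost:2999's playback endpoint."""
    if tool_name == "seek_replay":
        return {"time": args["time_s"], "paused": False}
    if tool_name == "pause_replay":
        return {"paused": True}
    if tool_name == "resume_replay":
        return {"paused": False, "speed": 1.0}
    if tool_name == "slow_motion":
        return {"paused": False, "speed": args.get("speed", 0.25)}
    raise ValueError(f"unknown tool: {tool_name}")
```

Keeping the translation layer pure (tool call in, HTTP body out) is what lets the ReplayStateController record a desired state before the request is ever sent.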
Frontend — Tauri v2 Companion Window:
A frameless, always-on-top overlay built with Tauri v2 (Rust + TypeScript). Features a particle orb visualization (300 particles with sphere-surface animations) that reflects conversation state — idle, listening, speaking, processing, interrupted. Push-to-talk via spacebar or pointer hold. Polls the Python sidecar (FastAPI) at 300ms intervals for state synchronization.
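The orb's states form a small state machine. The sketch below is illustrative only: the state names mirror the ones listed above, but the event names and transition rules are invented to show the shape of what the overlay polls from the sidecar.

```python
# Minimal sketch of the conversation-state machine the orb visualizes.
# Event names and transitions are illustrative assumptions.
VALID = {
    ("idle", "ptt_down"): "listening",
    ("listening", "ptt_up"): "processing",
    ("processing", "audio_start"): "speaking",
    ("speaking", "audio_end"): "idle",
    ("speaking", "ptt_down"): "interrupted",   # barge-in mid-sentence
    ("interrupted", "ptt_up"): "processing",
}

def next_state(state: str, event: str) -> str:
    """Apply an event; events that don't apply in the current state are ignored."""
    return VALID.get((state, event), state)
```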
Backend — Python FastAPI Sidecar:
Runs locally alongside the game client. Exposes REST endpoints for analyze, start/stop session, push-to-talk, state polling, and subtitle streaming. Designed for Google Cloud Run deployment.
Challenges we ran into
The 1 FPS vision bottleneck. The Gemini Live API accepts vision input at only 1 frame per second. For a game like League of Legends where fights last 2-3 seconds and positioning shifts frame-by-frame, that is not enough to analyze what happened. This constraint forced the two-phase architecture: pre-analyze the full video with the standard Gemini API (which processes video at full fidelity), then use the Live API purely for voice conversation with pre-computed knowledge.
Barge-in synchronization. When a user interrupts the AI mid-sentence, several things need to happen simultaneously: flush the playback queue, abort the audio stream, sync the replay controller's desired state with actual state, and transition the dialogue state machine. The AudioPlayer uses a generation-based locking pattern — each barge-in bumps a generation counter, and any in-flight write() calls from the previous generation silently bail out. Getting this right on Windows WASAPI without audio glitches took multiple iterations.
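The generation-counter pattern described above can be sketched in a few lines. This is a simplified stand-in (a list instead of a real WASAPI playback queue), but the core idea is exactly as stated: each barge-in bumps the counter, and a write that captured an older generation drops its chunk instead of playing stale audio.

```python
# Sketch of generation-based locking for barge-in. The real player targets
# Windows WASAPI; here a list stands in for the playback queue.
import threading

class AudioPlayer:
    def __init__(self):
        self._lock = threading.Lock()
        self._generation = 0
        self.written = []          # stand-in for the device playback queue

    def barge_in(self):
        with self._lock:
            self._generation += 1  # invalidate all in-flight writes
            self.written.clear()   # flush queued audio immediately

    def write(self, chunk: bytes) -> bool:
        with self._lock:
            gen = self._generation
        # ...decode/resample would happen here, outside the lock...
        with self._lock:
            if gen != self._generation:
                return False       # stale generation: silently bail out
            self.written.append(chunk)
            return True
```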
Session lifetime. The Gemini Live API has a ~10-minute WebSocket session limit. For a replay review that might run 20-30 minutes, we implemented sessionResumption — storing the resumption handle on every session_resumption_update event and passing it to the next connection. The outer loop in RealtimeEngine.start() reconnects automatically on go_away or connection errors.
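The outer loop can be sketched generically. This is a simulation, not the real `RealtimeEngine.start()`: the session is modeled as a plain iterator of message dicts, and the message keys are placeholders for the Live API's resumption-update and go-away signals.

```python
# Illustrative outer reconnect loop with a resumption handle. A session is
# modeled as an iterator of dicts; message keys are stand-ins for the real
# Live API events.
def run_with_resumption(connect, max_reconnects=5):
    handle = None
    for _ in range(max_reconnects + 1):
        session = connect(handle)                  # reuse the last handle
        for msg in session:
            if "resumption_handle" in msg:
                handle = msg["resumption_handle"]  # store on every update
            elif msg.get("go_away"):
                break                              # server closing: reconnect
            else:
                yield msg
        else:
            return                                 # session ended normally
```

The key detail is storing the handle on every update, not just at disconnect, so an abrupt connection error can still resume from the latest point.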
AI personality calibration. The hardest design challenge was not technical. It was making the AI feel like a friend, not a coach. Early versions were too analytical — rattling off statistics and prescriptive advice. Players tuned it out the same way they tune out loading screen tips. The breakthrough was establishing the Feedback Philosophy: first affirm the player, then deliver facts as they are, then give exactly one thing to improve. The system instruction generation (Stage 3) bakes this personality into every session.
Accomplishments that we're proud of
AI-controlled replay playback. The AI does not just talk about moments — it shows them. Mid-conversation, it seeks the replay to the exact timestamp, pauses, adjusts camera, slows down, and walks you through what happened. This is not a scripted sequence; the AI decides when and how to use these tools based on the conversation flow.
Natural voice conversation with barge-in. You can interrupt the AI at any time. It stops talking, listens to you, and responds. The push-to-talk system with manual VAD makes this reliable even in noisy gaming environments.
The emotional experience. This one is hard to quantify but easy to feel. When the AI pauses the replay at your death, looks at the moment together with you, and gently points out the minimap signal you missed — that feels fundamentally different from reading a post-game stats page. It feels like watching replays with a friend who happens to have perfect memory.
What we learned
API constraints drive creative architecture. The 1 FPS limit initially felt like a dealbreaker. Instead, it forced a two-phase design that turned out to be strictly better than a single-phase approach. Pre-analysis with the full API produces richer, more accurate observations than real-time vision ever could. The constraint was a gift.
The most impactful AI feature is not intelligence. It is watching your play eye to eye. RePlection does not use the most powerful model. It does not have the most sophisticated analysis pipeline. But instead of teaching from above, it sits next to you, watches the same screen, and tells you plainly what happened. More than technical sophistication, what moves a player forward is the feeling that someone is right there beside you, seeing what you see.
Voice changes everything. The same advice delivered as text in a chatbot feels forgettable. Delivered as voice while you are watching the actual replay, it becomes visceral. You hear the AI say "watch the minimap right... here" as it pauses the replay, and you physically see what you missed. Voice is not just a convenience feature. It is a fundamentally different interaction modality that changes how deeply information lands.
Deterministic grounding prevents hallucination. By pulling factual match data from the Riot Games API (who killed whom, at what time, with what bounty), the AI cannot fabricate events. The analysis pipeline anchors every observation to verifiable facts. This is critical for trust — if the AI says "Malphite ulted you at 15:41," you can scrub to 15:41 and confirm it.
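The anchoring step can be sketched as a simple check against timeline data. The field names below are illustrative, but the principle is the one stated above: a claimed event must match a real Riot timeline event before it is surfaced.

```python
# Sketch of fact-anchoring: a claim is only surfaced if a matching event
# exists in the Riot timeline data. Field names are illustrative.
def verify_claim(claim: dict, timeline_events: list, tolerance_ms: int = 2000) -> bool:
    """True iff a timeline event of the claimed type exists near the claimed time."""
    return any(
        e["type"] == claim["type"]
        and abs(e["timestamp_ms"] - claim["timestamp_ms"]) <= tolerance_ms
        for e in timeline_events
    )

events = [{"type": "CHAMPION_KILL", "timestamp_ms": 941_000}]  # 15:41
assert verify_claim({"type": "CHAMPION_KILL", "timestamp_ms": 941_500}, events)
assert not verify_claim({"type": "CHAMPION_KILL", "timestamp_ms": 600_000}, events)
```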
What's next for RePlection
Long-term progress tracking. Currently, each session is independent. The next step is tracking patterns across multiple sessions — "You have died to flanks in 4 of your last 6 games. Let's focus on map awareness today." This turns RePlection from a one-off review tool into an ongoing improvement companion.
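One plausible shape for that aggregation, sketched under invented assumptions (the cause labels and the surfacing threshold are not from the project):

```python
# Sketch of cross-session pattern aggregation for planned progress tracking.
# Cause labels and the min_games threshold are illustrative.
from collections import Counter

def recurring_patterns(session_causes, min_games=3):
    """session_causes: one list of death root causes per game. Return causes
    recurring in at least min_games distinct games, most frequent first."""
    games_with = Counter()
    for causes in session_causes:
        for cause in set(causes):       # count each game at most once per cause
            games_with[cause] += 1
    return [(c, n) for c, n in games_with.most_common() if n >= min_games]

history = [["flanked"], ["flanked", "overextended"], ["flanked"], ["overextended"]]
patterns = recurring_patterns(history)  # "flanked" recurs in 3 of 4 games
```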
Discord integration. Replay review is better with friends. A Discord bot that lets a group watch a replay together while the AI facilitates discussion would combine the social element of team review with the analytical depth of AI.
Proactive suggestions. Instead of waiting for the player to upload a video, detect patterns from match history and suggest which games are worth reviewing. "Your last game had 3 deaths in the first 10 minutes. Want to look at those?"
Built with: Gemini API, Gemini Live API, Google GenAI SDK, Google Cloud Run, Python, FastAPI, Tauri, TypeScript, Rust, Riot Games API
