Table Number 70C
🧠 Inspiration
There are tons of League of Legends overlay tools out there — but almost all of them focus on pre-game builds or post-game analytics. None of them actually help you while you’re playing, when the decisions matter most.
League has an incredibly high skill floor — both strategically and mechanically — and we’ve seen so many players quit before they even start enjoying the game because learning it feels impossible alone. So we thought: what if there was an AI coach that could guide you live — giving real-time feedback, like a pro player whispering advice in your ear?
That idea became Souma, an AI overlay coach for League of Legends.
⚙️ What It Does
Souma reads live in-game information like your health, mana, gold, items, matchups, and minimap awareness, then generates real-time, context-specific commands — things like:
“Freeze the wave near tower.”
“Recall now for item spike.”
“Watch for jungler — enemy mid missing.”
It helps players improve their game sense, decision-making, and situational awareness while they play — turning frustration into learning and wins.
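As a rough illustration of the loop described above, here is a minimal sketch of how a deterministic rule pass could map a snapshot of on-screen stats to callouts. All field names, thresholds, and the `GameState` type are our assumptions for this sketch, not Souma's actual internals.

```python
from dataclasses import dataclass

@dataclass
class GameState:
    """Snapshot of stats read from the screen (hypothetical fields)."""
    hp_pct: float            # current HP as a fraction of max
    mana_pct: float          # current mana as a fraction of max
    gold: int                # unspent gold
    enemy_mid_visible: bool  # is the enemy mid laner on screen/minimap?

def coach_callouts(state: GameState) -> list[str]:
    """Evaluate simple deterministic rules against the latest snapshot."""
    calls = []
    if state.hp_pct < 0.25:
        calls.append("Back off — low HP.")
    if not state.enemy_mid_visible:
        calls.append("Watch for jungler — enemy mid missing.")
    if state.gold >= 1300:  # hypothetical threshold for a component buy
        calls.append("Recall now for item spike.")
    return calls
```

In practice these rules would run every frame, with the LLM layer handling the fuzzier strategic calls.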
🏗️ How We Built It
Backend
- FastAPI — High-performance async API for handling real-time data streams.
- OpenCV — Screen capture, ROI extraction, and live frame analysis.
- Tesseract / EasyOCR — Extract in-game stats (gold, HP, mana, etc.) directly from the screen.
- Riot API Client — Used carefully within rate limits to verify player state and match data.
AI Engines
- Rule Engine — Deterministic modules for safety warnings (e.g. low HP, tower dives) and recall timing.
- LLM Engine — Contextual modules for wave management, objective control, and strategic decisions.
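The OpenCV/OCR path above can be sketched roughly like this: crop a fixed region of interest out of each captured frame, binarize it, and hand the result to an OCR engine. The ROI coordinates here are hypothetical placeholders; a real overlay needs per-resolution calibration.

```python
import numpy as np

# Hypothetical 1920x1080 HUD coordinates (y1, y2, x1, x2) -- illustrative only.
ROIS = {
    "gold": (1045, 1065, 870, 940),
    "hp":   (990, 1010, 760, 860),
}

def crop_roi(frame: np.ndarray, name: str) -> np.ndarray:
    """Cut one stat region out of a full captured frame (H x W x 3)."""
    y1, y2, x1, x2 = ROIS[name]
    return frame[y1:y2, x1:x2]

def preprocess_for_ocr(patch: np.ndarray) -> np.ndarray:
    """Grayscale + hard binarize; OCR engines read high-contrast text best."""
    gray = patch.mean(axis=2)
    return np.where(gray > 128, 255, 0).astype(np.uint8)

# The binary patch would then go to an OCR engine, e.g. with pytesseract:
#   pytesseract.image_to_string(
#       binary, config="--psm 7 -c tessedit_char_whitelist=0123456789")
```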
Frontend
- Electron + React + TypeScript — Cross-platform overlay with real-time updates.
- Zustand — Lightweight state management for UI.
- TailwindCSS — Clean, reactive styling.
- WebSockets — Low-latency connection between the FastAPI backend and the overlay interface.
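The WebSocket link carries small JSON events from the backend to the overlay. Below is a minimal sketch of what such a payload could look like; the field names and priority levels are our assumptions, not Souma's actual schema.

```python
import json
import time

def coaching_event(text: str, priority: str = "info") -> str:
    """Serialize one coaching callout for the overlay's WebSocket feed.

    priority: "info" | "warning" | "danger" -- lets the overlay color-code
    safety warnings differently from strategic suggestions.
    """
    return json.dumps({
        "type": "callout",
        "priority": priority,
        "text": text,
        "ts": time.time(),  # lets the overlay expire stale callouts
    })

# On the FastAPI side, a WebSocket handler would push events with something like:
#   await websocket.send_text(coaching_event("Recall now for item spike."))
```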
🧩 Challenges We Ran Into
Image processing in a fast-paced game like League is extremely hard — color variations, animations, and camera movements all add noise.
Audio crossmatching for detecting ability cues and teamfight moments pushed the limits of real-time performance.
Integrating voice input into an LLM for hands-free communication was both incredibly fun and frustrating — but so worth it once it worked.
🏆 Accomplishments We're Proud Of
Building a working computer vision pipeline that can detect champion data and in-game states live.
Designing specific, situational feedback tied to ROIs (regions of interest) like gold, mana, and minimap data.
Creating an AI system that doesn’t just react — it coaches with actual game sense.
Watching the first full demo where Souma called out a gank before it happened — that was surreal.
💡 What We Learned
Never give up. Debugging real-time systems is brutal, but persistence pays off.
AI + image processing still has a long way to go for gaming, but the potential is enormous.
Audio-visual fusion (crossmatching sound and screen events) is a powerful way to detect gameplay states.
Voice-to-LLM input makes interaction feel genuinely natural — almost like talking to a real coach.
Building Souma taught us that blending AI reasoning, human coaching, and live game data can create something truly new — a way to make learning complex games actually fun again.
Built With
- electron
- fastapi
- openaiapi
- opencv
- python
- react
- riotapi
- typescript
