Inspiration
Most AI agents fail in live environments for two reasons.
- Amnesia. They lose state when a session ends or an instance restarts.
- Latency. They are reactive. They wait for a user prompt instead of tracking an event stream.
Live sports is the opposite of a chat app. It is a high velocity stream of discrete events: touchdowns, interceptions, penalties, and momentum shifts. If AI is going to feel real, it has to watch the stream, remember what happened earlier, and respond instantly in the right voice.
I built Neuron Mission Control to prove a simple idea.
Real time data unlocks real world AI experiences.
What it does
Neuron Mission Control is an event driven AI analyst and broadcaster for live sports.
It watches
It ingests game events through Confluent Kafka. Touchdowns, turnovers, penalties, and more become durable events in a stream.
It reasons
Each event triggers a multi agent workflow on Google Cloud Run. Gemini generates the commentary output for each agent voice.
It keeps identity
The same event produces distinct voices: a fan style reaction and an analyst style response, without collapsing into generic assistant language.
It remembers
It persists context through Firestore so the system can recover state across restarts and scaling events.
It replays
Kafka is a replay log. We can re-run from an offset to reproduce a moment deterministically for debugging and evaluation.
It broadcasts
It streams live updates to a React dashboard using Server Sent Events, and can announce events audibly with Text to Speech.
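The events flowing through that pipeline can be pictured as a small envelope. This is an illustrative sketch, not the project's actual schema; the field names are assumptions:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class GameEvent:
    """Illustrative envelope for one play in the stream (field names assumed)."""
    game_id: str      # keying by game keeps per-game ordering in the topic
    event_type: str   # "touchdown", "interception", "penalty", ...
    clock: str        # game clock at the moment of the event
    payload: dict     # play-specific details

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @staticmethod
    def from_json(raw: str) -> "GameEvent":
        return GameEvent(**json.loads(raw))

event = GameEvent("NE-KC-2024", "touchdown", "Q4 02:13", {"team": "KC", "yards": 45})
restored = GameEvent.from_json(event.to_json())
```

Because the envelope round-trips through JSON losslessly, the same bytes in the topic always reconstruct the same event, which is what the replay and memory features rely on.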
The result is creator ready commentary in seconds, built on a real event backbone.
How we built it
We engineered a decoupled architecture that separates Data, Intelligence, Memory, and Presentation.
Data layer. Confluent Kafka
Kafka is the event stream and replay log. It replaces fragile request response workflows with durable topics that can scale with game volatility.
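The replay-log property can be sketched without a broker. With `confluent_kafka` a consumer would seek to a stored offset; the deterministic part is simply re-feeding the same ordered slice of events into the pipeline. The list below stands in for a topic partition, and the names are illustrative:

```python
def replay_from(log: list, offset: int) -> list:
    """Re-run the pipeline input from a given offset.

    `log` stands in for a Kafka topic partition: an append-only,
    offset-indexed sequence. Re-reading the same slice yields the
    same inputs, which is what makes a replay deterministic.
    """
    return log[offset:]

topic = [
    {"offset": 0, "event_type": "kickoff"},
    {"offset": 1, "event_type": "interception"},
    {"offset": 2, "event_type": "touchdown"},
]

# Reproduce the moment starting at the interception.
moment = replay_from(topic, 1)
```

Running the same replay twice yields identical input, so any difference in output points at the pipeline, not the data.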
Intelligence layer. Cloud Run plus Gemini
Cloud Run runs the multi agent pipeline serverlessly and auto scales as game pace changes. Gemini generates the text outputs that drive fan and analyst commentary.
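Fanning one event out to multiple voices is mostly persona-conditioned prompting. A minimal sketch, assuming these persona strings and the function boundary (the real text generation happens through Gemini on Cloud Run):

```python
# Persona instructions are assumptions for illustration; the actual
# prompts and the Gemini call are part of the deployed pipeline.
PERSONAS = {
    "fan": "You are a diehard fan. React with raw emotion in one or two sentences.",
    "analyst": "You are a broadcast analyst. Explain the tactical impact concisely.",
}

def build_prompts(event_text: str) -> dict:
    """Produce one persona-conditioned prompt per agent for the same event."""
    return {voice: f"{persona}\n\nEvent: {event_text}"
            for voice, persona in PERSONAS.items()}

prompts = build_prompts("45 yard touchdown, KC, Q4 02:13")
```

Keeping the persona out of the event text means every agent sees identical facts, so any divergence in output is identity, not information.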
Memory layer. Firestore
To solve statelessness, we persist the minimum required context so a new instance can recover what it needs to stay coherent across the game.
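"Minimum required context" can be made concrete with a bounded structure. The field choice here (running score plus a short tail of recent events) is an assumption; in the real system this snapshot is the document written to and read back from Firestore:

```python
from collections import deque

class GameMemory:
    """Minimum context persisted per game so a fresh instance can recover."""

    def __init__(self, max_events: int = 10):
        self.score = {"home": 0, "away": 0}
        self.recent = deque(maxlen=max_events)  # bounded, so writes stay small

    def record(self, event: dict) -> None:
        self.recent.append(event)  # deque silently drops the oldest entry
        if event.get("event_type") == "touchdown":
            self.score[event["side"]] += 7

    def snapshot(self) -> dict:
        """The document a new instance would read back on cold start."""
        return {"score": dict(self.score), "recent": list(self.recent)}

mem = GameMemory(max_events=3)
for i in range(5):
    mem.record({"event_type": "play", "n": i})
mem.record({"event_type": "touchdown", "side": "home"})
snap = mem.snapshot()
```

The bound matters: persisting a fixed-size snapshot keeps the write path fast enough to sit inside the real time loop.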
Safety and observability
We implemented circuit breakers that validate outputs before they are streamed. We log incidents and metrics for post game analysis, and we use tracing to see latency and failure modes across the pipeline.
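The circuit breaker pattern here is: validate each output cheaply, and after repeated failures stop streaming entirely rather than risk the feed. The validation rule and threshold below are illustrative stand-ins for the pipeline's actual checks:

```python
class OutputBreaker:
    """Validate commentary before streaming; trip open after repeated failures."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    def allow(self, text: str) -> bool:
        if self.failures >= self.threshold:
            return False                            # circuit open: protect the feed
        if not text.strip() or len(text) > 500:     # fast, cheap validation (assumed rule)
            self.failures += 1
            return False
        self.failures = 0                           # healthy output closes the circuit
        return True

breaker = OutputBreaker(threshold=2)
ok = breaker.allow("Touchdown! 45 yards, what a throw.")
bad1 = breaker.allow("")
bad2 = breaker.allow("")
tripped = breaker.allow("Perfectly fine text")  # rejected: circuit is now open
```

The check is pure string work, so it adds effectively no latency to the stream; only the trip state carries between events.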
Interface. React dashboard
The dashboard shows live feed, excitement level, sub second response status, event counters, an operator deck for triggering events, and capability demos such as replay, memory, and circuit breakers. Text to Speech can announce events audibly for a creator workflow.
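Server Sent Events is plain text over one HTTP response, which is why it suits a FastAPI backend pushing to a React dashboard. A minimal sketch of the framing, with an assumed event name and payload shape (the `event:` / `data:` lines and blank-line terminator are the SSE wire format itself):

```python
import json

def sse_frame(event_name: str, data: dict) -> str:
    """Serialize one Server-Sent Events frame as the dashboard would receive it."""
    return f"event: {event_name}\ndata: {json.dumps(data)}\n\n"

frame = sse_frame("commentary", {"voice": "fan", "text": "TOUCHDOWN!"})
```

On the client, a standard `EventSource` listener for `"commentary"` receives the `data:` payload and updates the live feed.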
Challenges we ran into
Stateful behavior on ephemeral compute
Cloud Run instances come and go. We had to persist just enough context to keep the system coherent without slowing the real time loop.
Deterministic replay for evaluation
Replay is only valuable if the pipeline behaves predictably. We built replay controls so we can reproduce moments and verify improvements.
Real time safety
Streaming AI output is risky. Circuit breakers needed to validate content fast enough to protect the feed without adding visible latency.
Shipping a working end to end system
A live link has no excuses. The demo had to be reliable and fast.
Accomplishments we are proud of
Sub second loop
Kafka ingestion to Cloud Run to Gemini to dashboard updates in a live demo run.
Kafka replay as a first class feature
Deterministic re-run from an offset for debugging and evaluation.
Identity cognition in production form
A fan voice and an analyst voice reacting to the same event, reliably and visibly.
Memory that survives restarts
Context persists through Firestore so the system recovers state instead of resetting mid game.
Live audio mode
Text to Speech can announce events audibly as the stream updates.
What we learned
Decoupling is survival
Kafka keeps the stream stable even if compute scales or recovers.
Identity needs an eval loop
Distinct voices are not just a prompt writing problem. They require measurement, replay, and verification.
Observability is non negotiable
Without tracing and logs, distributed agent systems are black boxes.
Live demos force real engineering
The fastest way to discover truth is to ship a clickable system.
What is next
Voice quality and localization
Upgrade from basic announcements to locale aware, broadcast style delivery with team specific tone and pacing.
Audio plus analytics
Generate a synced audio stream plus a live excitement and sentiment timeline so creators can clip moments instantly.
Swarm expansion
Add specialist agents like stat keeper, betting analyst, and referee analyst, all debating in the same Kafka stream.
Built With
- bigquery
- confluent
- docker
- fastapi
- firestore
- gemini
- google-cloud
- kafka
- opentelemetry
- python
- react
- vertex-ai