Inspiration

Most AI agents fail in live environments for two reasons.

  1. Amnesia. They lose state when a session ends or an instance restarts.
  2. Latency. They are reactive. They wait for a user prompt instead of tracking an event stream.

Live sports is the opposite of a chat app. It is a high-velocity stream of discrete events: touchdowns, interceptions, penalties, and momentum shifts. If AI is going to feel real, it has to watch the stream, remember what happened earlier, and respond instantly in the right voice.

I built Neuron Mission Control to prove a simple idea.

Real-time data unlocks real-world AI experiences.

What it does

Neuron Mission Control is an event-driven AI analyst and broadcaster for live sports.

  1. It watches
    It ingests game events through Confluent Kafka. Touchdowns, turnovers, penalties, and more become durable events in a stream.

  2. It reasons
    Each event triggers a multi-agent workflow on Google Cloud Run. Gemini generates the commentary for each agent voice.

  3. It keeps identity
    The same event produces distinct voices: a fan-style reaction and an analyst-style response, without collapsing into generic assistant language.

  4. It remembers
    It persists context through Firestore so the system can recover state across restarts and scaling events.

  5. It replays
    Kafka doubles as a replay log. We can re-run from an offset to reproduce a moment deterministically for debugging and evaluation.

  6. It broadcasts
    It streams live updates to a React dashboard using Server-Sent Events, and can announce events audibly with Text-to-Speech.

The result is creator-ready commentary in seconds, built on a real event backbone.
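The replay behavior above can be sketched with a plain list standing in for a Kafka topic and a list index standing in for a consumer offset; `commentary_for` is a hypothetical deterministic stand-in for the agent pipeline, not the project's actual code:

```python
# Deterministic replay sketch: a list stands in for a Kafka topic,
# and an index into it stands in for a consumer offset.

def commentary_for(event):
    # Hypothetical stand-in for the agent pipeline; deterministic given the event.
    return f"{event['team']} {event['type']} in Q{event['quarter']}"

def run_from(topic, offset):
    """Re-process every event from the given offset onward."""
    return [commentary_for(e) for e in topic[offset:]]

topic = [
    {"team": "SEA", "type": "touchdown", "quarter": 2},
    {"team": "DEN", "type": "interception", "quarter": 3},
    {"team": "SEA", "type": "field goal", "quarter": 4},
]

live = run_from(topic, 0)      # the original live run
replayed = run_from(topic, 1)  # re-run from offset 1 to reproduce a moment
assert replayed == live[1:]    # same inputs, same outputs
```

Because the log is durable and the pipeline is deterministic, any moment can be reproduced exactly for debugging or evaluation.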

How we built it

We engineered a decoupled architecture that separates Data, Intelligence, Memory, and Presentation.

Data layer. Confluent Kafka

Kafka is the event stream and replay log. It replaces fragile request-response workflows with durable topics that can scale with game volatility.
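As a sketch of what lands on a topic, each game event can be serialized as a keyed JSON record; the field names and topic name here are illustrative assumptions, not the project's actual schema:

```python
import json
from dataclasses import dataclass, asdict

# Illustrative event record; field names are assumptions, not the real schema.
@dataclass
class GameEvent:
    game_id: str
    seq: int    # monotonically increasing per game, useful for replay
    type: str   # "touchdown", "interception", "penalty", ...
    team: str
    clock: str  # game clock at the moment of the event

def to_record(event: GameEvent):
    """Key by game_id so one game's events stay ordered in one partition."""
    key = event.game_id.encode("utf-8")
    value = json.dumps(asdict(event)).encode("utf-8")
    return key, value

# With the confluent-kafka client this pair would be handed to
# Producer.produce("game-events", key=key, value=value); that call is
# omitted here so the sketch runs without a broker.
key, value = to_record(GameEvent("nfl-w1-SEA-DEN", 17, "touchdown", "SEA", "04:12"))
```

Keying by game ID keeps each game's events in one partition, which preserves ordering for both the live run and replays.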

Intelligence layer. Cloud Run plus Gemini

Cloud Run runs the multi-agent pipeline serverlessly and auto-scales as game pace changes. Gemini generates the text outputs that drive fan and analyst commentary.
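One way to keep the voices distinct is to fan each event out to per-persona prompts before any model call; the persona wording below is illustrative, and the Gemini call itself is left as a comment:

```python
# Per-persona prompt fan-out; persona text is illustrative, not the real prompts.
PERSONAS = {
    "fan": "You are a diehard fan. React in one excited sentence.",
    "analyst": "You are a calm analyst. Explain the impact in one sentence.",
}

def build_prompts(event_text: str) -> dict:
    """Same event, one prompt per agent voice."""
    return {
        voice: f"{persona}\n\nEvent: {event_text}"
        for voice, persona in PERSONAS.items()
    }

prompts = build_prompts("SEA touchdown, 4:12 left in Q2")
# Each prompt would then go to Gemini as a separate generation request,
# one per voice, so the outputs never collapse into a single generic tone.
```

Separating persona from event text also makes the voices easy to evaluate in isolation during replay runs.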

Memory layer. Firestore

To solve statelessness, we persist the minimum required context so a new instance can recover what it needs to stay coherent across the game.
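The shape of that minimum context might look like the sketch below; a dict stands in for a Firestore document keyed by game ID, and the field names are assumptions:

```python
# In-memory stand-in for a Firestore document keyed by game_id.
# Field names are illustrative, not the project's actual schema.
_store = {}

def save_context(game_id: str, ctx: dict) -> None:
    # With a Firestore client this would be a document write/merge.
    _store[game_id] = dict(ctx)

def load_context(game_id: str) -> dict:
    # A fresh Cloud Run instance calls this on its first event,
    # falling back to an empty game state if nothing was persisted yet.
    return dict(_store.get(game_id, {"last_seq": -1, "score": {}, "momentum": None}))

save_context("g1", {"last_seq": 17, "score": {"SEA": 14, "DEN": 7}, "momentum": "SEA"})
recovered = load_context("g1")  # what a restarted instance would see
```

Persisting only the last processed sequence number, the score, and a momentum flag keeps writes cheap enough for the real-time loop.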

Safety and observability

We implemented circuit breakers that validate outputs before they are streamed. We log incidents and metrics for post-game analysis, and we use tracing to see latency and failure modes across the pipeline.
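A minimal version of that breaker is sketched below; the specific checks, banned phrases, and failure threshold are illustrative assumptions:

```python
# Circuit breaker sketch: validate each output, trip after repeated failures.
# The checks and the threshold are illustrative.
BANNED = {"guarantee", "bet now"}
MAX_FAILURES = 3

class Breaker:
    def __init__(self):
        self.failures = 0
        self.open = False

    def allow(self, text: str) -> bool:
        """Return True if the text may be streamed to viewers."""
        if self.open:
            return False
        ok = 0 < len(text) <= 500 and not any(b in text.lower() for b in BANNED)
        self.failures = 0 if ok else self.failures + 1
        if self.failures >= MAX_FAILURES:
            self.open = True  # stop streaming until an operator resets
        return ok

breaker = Breaker()
```

The checks are plain string operations, so validation adds no visible latency to the stream.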

Interface. React dashboard

The dashboard shows a live feed, an excitement level, sub-second response status, event counters, an operator deck for triggering events, and capability demos such as replay, memory, and circuit breakers. Text-to-Speech can announce events audibly for a creator workflow.
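On the wire, each dashboard update is one Server-Sent Events frame; the event name and payload fields below are illustrative:

```python
import json

def sse_frame(event_name: str, payload: dict) -> str:
    """Format one Server-Sent Events frame as the browser's EventSource
    expects: an `event:` line, a `data:` line, and a blank-line terminator."""
    return f"event: {event_name}\ndata: {json.dumps(payload)}\n\n"

frame = sse_frame("commentary", {"voice": "fan", "text": "TOUCHDOWN SEA!"})
# A streaming HTTP response writes frames like this with
# Content-Type: text/event-stream; the React dashboard subscribes
# with an EventSource and updates on each frame.
```

Because SSE is plain one-way HTTP, it needs no WebSocket infrastructure and reconnects automatically in the browser.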

Challenges we ran into

  1. Stateful behavior on ephemeral compute
    Cloud Run instances come and go. We had to persist just enough context to keep the system coherent without slowing the real-time loop.

  2. Deterministic replay for evaluation
    Replay is only valuable if the pipeline behaves predictably. We built replay controls so we can reproduce moments and verify improvements.

  3. Real time safety
    Streaming AI output is risky. Circuit breakers needed to validate content fast enough to protect the feed without adding visible latency.

  4. Shipping a working end-to-end system
    A live link has no excuses. The demo had to be reliable and fast.

Accomplishments we are proud of

  1. Sub-second loop
    Kafka ingestion to Cloud Run to Gemini to dashboard updates in a live demo run.

  2. Kafka replay as a first class feature
    Deterministic re-run from an offset for debugging and evaluation.

  3. Identity cognition in production form
    A fan voice and an analyst voice reacting to the same event, reliably and visibly.

  4. Memory that survives restarts
    Context persists through Firestore so the system recovers state instead of resetting mid game.

  5. Live audio mode
    Text-to-Speech can announce events audibly as the stream updates.

What we learned

  1. Decoupling is survival
    Kafka keeps the stream stable even as compute scales or restarts.

  2. Identity needs an eval loop
    Distinct voices are not just a prompt-writing problem. They require measurement, replay, and verification.

  3. Observability is non-negotiable
    Without tracing and logs, distributed agent systems are black boxes.

  4. Live demos force real engineering
    The fastest way to discover truth is to ship a clickable system.

What is next

  1. Voice quality and localization
    Upgrade from basic announcements to locale-aware, broadcast-style delivery with team-specific tone and pacing.

  2. Audio plus analytics
    Generate a synced audio stream plus a live excitement and sentiment timeline so creators can clip moments instantly.

  3. Swarm expansion
    Add specialist agents like stat keeper, betting analyst, and referee analyst, all debating in the same Kafka stream.
