Blasto - Master AI Engineering Through a Game

Inspiration

The AI Engineering course market is booming, and for good reason. The demand is massive, yet we're all still figuring out the fundamentals: building reliable agentic systems, orchestrating multi-step pipelines, evaluating AI outputs. These are skills the industry needs, and even experienced engineers are learning them in real time.

At the same time, Generative AI for image and video has made extraordinary progress in the past year. Tools like Veo, Nano Banana, and others can now produce stunning visuals that were unthinkable just months ago.

I wanted to connect these two worlds: use the power of generative media to create an immersive learning experience for AI Engineering. A game where you play a character and learn by doing, with an AI mentor that adapts to you in real time.

That's Blasto.

What it does

1. Choose your biome

Each biome represents a different professional context for AI work:

| Biome | Setting | Learning Focus |
|---|---|---|
| Miragora | Startup | Rapid prototyping, fighting for every token |
| Strandis | Corporation | Designing systems ready for massive scale |
| Orbium Halls | University | Implementing methodologies from research papers |
| Emberfield | NGO | Working with local models and optimization |

2. Choose your character

Your character defines the difficulty and style of challenges you'll face.

3. Run missions

Inside the mission cockpit you work with:

  • A built-in code editor (Monaco) to write and test AI solutions
  • A course panel with animated explanations (powered by Manim)
  • A mentor chatbot - an AI agent that sees your code, your progress, and your conversation history

The Feynman Side Quest

After submitting code, the agent asks you to explain the concept in simple words. This tests real understanding, not just copy-paste skills. The Feynman Agent scores your explanation ($score \in [0, 100]$, pass threshold $\geq 70$) and rewards you with in-game resources if you demonstrate genuine comprehension.
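
A minimal sketch of the pass/reward gate described above. The names (`FeynmanResult`, `passes_feynman`) are hypothetical, since the real evaluator's interface isn't shown here; only the threshold logic is from the text:

```python
from dataclasses import dataclass

PASS_THRESHOLD = 70  # pass when score >= 70, with score in [0, 100]

@dataclass
class FeynmanResult:
    score: int      # evaluator's score for the student's explanation
    feedback: str   # short critique returned to the student

def passes_feynman(result: FeynmanResult, threshold: int = PASS_THRESHOLD) -> bool:
    """An explanation earns in-game resources only at or above the threshold."""
    return result.score >= threshold
```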

Tech Stack

  • Backend — Python 3.12 + FastAPI, deployed via Docker on Hetzner
  • Frontend — React 19 + TypeScript + Vite + Tailwind CSS + Zustand for state management
  • AI — Gemini 3 Flash API via PydanticAI agents
  • Database — Supabase (Postgres)
  • Code Sandbox — Gemini 3 Flash API on isolated containers on a dedicated Hetzner server with a proxy layer
  • Landing Page — Next.js on Netlify
  • Assets & Demo — Generated with Nano Banana Pro + Veo 3.1
  • Course Animations — Manim

The Agentic System

The core of Blasto is a multi-agent teaching system:

Student writes/runs code
        |
        v
+-----------------------------------------------+
|         4-Agent Teaching Pipeline              |
|                                                |
|  1. Code Analyst                               |
|     Analyzes code against mission requirements |
|     Outputs: TrajectoryTrace                   |
|         |                                      |
|  2. Reflector                                  |
|     Identifies learning patterns and gaps      |
|     Outputs: ReflectionInsights                |
|         |                                      |
|  3. Curator                                    |
|     Merges updates into student playbook       |
|     Outputs: CuratedPlaybook                   |
|         |                                      |
|  4. Teaching Strategist                        |
|     Plans the optimal intervention             |
|     Outputs: TeachingPlan                      |
+-----------------------------------------------+
        |
        v
   Mentor Agent
   Streams response with
   the teaching plan as context
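
The hand-off between stages can be sketched as a typed chain. The models and stub logic below are illustrative stand-ins: plain dataclasses instead of the real Pydantic models, and canned returns instead of LLM calls:

```python
from dataclasses import dataclass, field

@dataclass
class TrajectoryTrace:            # 1. Code Analyst output
    met: list[str] = field(default_factory=list)
    missed: list[str] = field(default_factory=list)

@dataclass
class ReflectionInsights:         # 2. Reflector output
    gaps: list[str] = field(default_factory=list)

@dataclass
class CuratedPlaybook:            # 3. Curator output
    entries: list[str] = field(default_factory=list)

@dataclass
class TeachingPlan:               # 4. Teaching Strategist output
    intervention: str = ""

# Stubs standing in for the LLM-backed agents; each consumes the
# previous stage's structured output.
def analyze_code(code: str) -> TrajectoryTrace:
    return TrajectoryTrace(met=["code runs"], missed=["no retry logic"])

def reflect(trace: TrajectoryTrace) -> ReflectionInsights:
    return ReflectionInsights(gaps=trace.missed)

def curate(insights: ReflectionInsights) -> CuratedPlaybook:
    return CuratedPlaybook(entries=[f"practice: {g}" for g in insights.gaps])

def plan_teaching(playbook: CuratedPlaybook) -> TeachingPlan:
    return TeachingPlan(intervention=f"Nudge the student toward: {playbook.entries[0]}")

def run_pipeline(code: str) -> TeachingPlan:
    return plan_teaching(curate(reflect(analyze_code(code))))
```

The resulting `TeachingPlan` is what the Mentor Agent receives as context for its streamed response.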

The Mentor Agent has access to tools:

  • evaluate_explanation — triggers the Feynman evaluator
  • award_energy — grants in-game resources upon successful explanation
  • check_student_progress — reads mission state from the database
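
A rough shape of that tool layer, with placeholder bodies; in the real system these would call the Feynman evaluator and query Supabase, and everything beyond the three tool names above is an assumption:

```python
def evaluate_explanation(explanation: str) -> int:
    """Placeholder Feynman evaluator: returns a score in [0, 100]."""
    # The real tool sends the explanation to an LLM judge.
    return 85 if len(explanation.split()) >= 10 else 40

def award_energy(student_id: str, amount: int) -> dict:
    """Grant in-game resources; the real tool persists this to the database."""
    return {"student": student_id, "energy_awarded": amount}

def check_student_progress(student_id: str) -> dict:
    """Read mission state; stubbed here instead of querying Supabase."""
    return {"student": student_id, "missions_completed": 2}

# The mentor model picks a tool by name; a simple dispatch table suffices here.
MENTOR_TOOLS = {
    "evaluate_explanation": evaluate_explanation,
    "award_energy": award_energy,
    "check_student_progress": check_student_progress,
}
```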

Each agent uses PydanticAI with dynamic per-run instructions injected via @agent.instructions, so the system prompt adapts to the biome, mission, character, and conversation history.
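
A sketch of how such per-run instructions might be assembled; `MissionContext` and `build_instructions` are hypothetical names standing in for the context that `@agent.instructions` injects on each run:

```python
from dataclasses import dataclass

@dataclass
class MissionContext:
    biome: str
    mission: str
    character: str
    history_summary: str

def build_instructions(ctx: MissionContext) -> str:
    """Compose the dynamic system prompt from the current run's context."""
    return (
        f"You are the mentor for a student playing {ctx.character} "
        f"in {ctx.biome}, on mission '{ctx.mission}'. "
        f"Recent conversation: {ctx.history_summary}"
    )
```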

Challenges I faced

Dashboard UX - The mission dashboard had to feel deeply connected to the biome's atmosphere while remaining functional. I spent a few days balancing immersion with usability before finding the right middle ground.

Code Execution Latency - The sandbox runs on a separate Hetzner server with containerized execution. It works, but the round-trip latency from code submission to results is noticeable. This is an area for future optimization.

Hackathon Time Pressure - With only three weeks, I had to make ruthless prioritization decisions. I planned diverse mission types (not just coding), but only coding missions made it into the demo. The agent system, mission logic, and resource management all had to come together in the final days.

What I learned

  • AI-generated assets are production-ready in 2026 - using Veo 3.1 and Nano Banana Pro, I built a visually rich application almost entirely with AI-generated imagery and video. The quality gap between "AI placeholder" and "polished product" has effectively closed.
  • Building a code sandbox from scratch - setting up isolated container execution with a proxy layer on a dedicated server was completely new to me. It works, latency aside.
  • Manim is a gem for educational content - first time using 3Blue1Brown's animation library, and it's remarkably expressive for explaining technical concepts visually.
  • Video and asset production with Veo and Gemini Image - this hackathon pushed me to level up my generative media workflow: creating demo videos, environment visuals, and character assets almost entirely through AI tooling.

What's next

  • Finalize the remaining missions, biomes, and characters
  • Deeper storylines
  • More mission types beyond coding
  • Voice-based mentor interactions
  • Multiplayer/co-op missions
  • Performance optimization for the code sandbox
