RaceForge Sim: Simulate. Compete. Dominate.
Inspiration
The thrill of Formula E races, the precision of drone swarms, and the chaos of optimized supply chains inspired us to create RaceForge Sim. We envisioned a lightweight simulator that captures the intensity of competitive mobility—where agents dynamically adapt to overtakes, crashes, or traffic jams, all while streaming live leaderboards.
Drawing from recent advances in Multi-Agent Reinforcement Learning (MARL) and high-speed vectorized physics (like VMAS), we aimed to build a scalable, hackathon-ready platform that brings these scenarios to life on a laptop. We're democratizing mobility simulation for applications like MotoGP, drone races, or logistics showdowns.
What it does
RaceForge Sim is a real-time, multi-agent mobility simulator that models hundreds of moving agents—cars, drones, or delivery bots—competing in dynamic environments.
It supports:
- Competitive Scenarios: Simulates Formula E races with overtaking, drone swarms avoiding obstacles, or supply-chain races optimizing delivery times.
- Dynamic Events: Injects disruptions like weather changes, crashes, or traffic signals, forcing agents to adapt via their AI-driven policies.
- Live Leaderboards: Streams real-time rankings (e.g., lap times, delivery efficiency) via a web-based dashboard.
- Lightweight Design: Runs efficiently on consumer hardware, handling 100+ agents with vectorized physics and emergent AI behaviors.
For example, in our Formula E sim, agents use MARL to optimize speed and energy consumption ($E = \frac{1}{2}mv^2$) while navigating collisions, with the results streamed to a live leaderboard.
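As a rough sketch of how that per-agent energy term can be computed for a whole batch at once (function and variable names here are illustrative, not the project's actual code):

```python
import numpy as np

def kinetic_energy(masses, velocities):
    """Per-agent kinetic energy E = 1/2 * m * |v|^2, vectorized over N agents.

    masses: shape (N,); velocities: shape (N, 2) for 2D motion.
    """
    speeds_sq = np.sum(velocities ** 2, axis=1)  # |v|^2 for every agent at once
    return 0.5 * masses * speeds_sq

# Example: two unit-mass agents moving at 3 m/s and 4 m/s
E = kinetic_energy(np.array([1.0, 1.0]), np.array([[3.0, 0.0], [0.0, 4.0]]))
# E -> [4.5, 8.0]
```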
How we built it
We built RaceForge Sim in 48 hours using a modular, open-source stack:
- Core Engine: Leveraged VMAS (Vectorized Multi-Agent Simulator) for GPU-accelerated 2D physics, modeling agent dynamics ($\vec{v}_t = \vec{a}_t \Delta t + \vec{v}_{t-1}$) for up to 200 agents.
- Environment Design: Used MetaUrban to procedurally generate urban tracks with roads, obstacles, and signals, ensuring diverse race scenarios.
- Agent Behaviors: Integrated lightweight MARL policies from RLlib (using PPO) to train agents for adaptive decision-making, such as learning to overtake or avoid collisions.
- Leaderboard & Visuals: Streamed agent states (positions, speeds) via Socket.io to a Plotly Dash dashboard for live rankings and 2D visualizations.
- Deployment: Packaged in Docker for portability, tested on Colab for team collaboration, and optimized for <1GB RAM.
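The batched update at the heart of the engine can be sketched roughly like this (plain NumPy for readability; VMAS does the equivalent with batched PyTorch tensors):

```python
import numpy as np

def step(pos, vel, acc, dt=0.05):
    """One semi-implicit Euler step for all N agents at once.

    pos, vel, acc: arrays of shape (N, 2). No per-agent Python loop:
    the whole fleet advances in two array operations.
    """
    vel = vel + acc * dt   # v_t = a_t * dt + v_{t-1}
    pos = pos + vel * dt   # x_{t+1} = x_t + v_t * dt
    return pos, vel

# 100 agents starting at rest with unit forward acceleration
pos = np.zeros((100, 2))
vel = np.zeros((100, 2))
acc = np.tile([1.0, 0.0], (100, 1))
pos, vel = step(pos, vel, acc)
```

Keeping the entire state in two `(N, 2)` arrays is what lets a single laptop (or a free Colab GPU) advance hundreds of agents per frame.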
Challenges we ran into
- Scalability: Balancing 100+ agents with real-time performance was tough. We optimized by fully vectorizing the physics updates ($\vec{x}_{t+1} = \vec{x}_t + \vec{v}_t \Delta t$) in VMAS.
- AI Stability: Random events (e.g., crashes) initially destabilized the MARL training. We had to carefully tune the reward functions ($R = w_1 \cdot \text{speed} - w_2 \cdot \text{collision}$) to teach agents resilience.
- UI Latency: Initial leaderboard updates via REST APIs lagged with 50+ agents. Switching to Socket.io for real-time streaming cut this latency by 60%.
- Team Sync: Coordinating across time zones on Colab led to merge conflicts; we adopted Git submodules for smoother workflows.
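The reward shaping mentioned above can be sketched as follows (the weights and argument names are illustrative, not the tuned values we shipped):

```python
def reward(speed, collided, w_speed=1.0, w_collision=5.0):
    """Per-agent reward R = w1 * speed - w2 * collision.

    Weighting collisions heavily relative to the speed bonus is the
    kind of tuning that stabilized training once random crash events
    were injected: agents learn to back off near hazards instead of
    chasing raw speed.
    """
    return w_speed * speed - w_collision * float(collided)

# A clean fast segment vs. the same speed with a crash
# reward(10.0, False) -> 10.0; reward(10.0, True) -> 5.0
```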
Accomplishments
- 100-Agent Sim: Achieved real-time simulation of 100+ agents on a single laptop, with far less setup than heavier tools like MATSim or SUMO.
- AI-Driven Agents: Successfully trained MARL agents that adapt to dynamic events like crashes and slowdowns, showcasing emergent competitive behaviors (like drafting and blocking) without being hard-coded.
- Live Interactive Dashboard: Delivered a slick, live, web-based dashboard showing real-time rankings, speeds, and event impacts.
- 48-Hour Prototype: Built a polished, end-to-end prototype—from physics to AI to visuals—all fully open-source and Dockerized for easy sharing.
What we learned
- Vectorized Sims are Key: Frameworks like VMAS are game-changers. Batched, GPU-accelerated physics is the only way to get real-time performance for large-scale multi-agent systems.
- Simple Rewards > Complex Rules: For a hackathon, a simple, well-tuned MARL reward function converges faster and produces more robust behaviors than a complex, hand-coded rule engine.
- Real-Time Streaming is a Must: For demos, a laggy UI is a killer. Tools like Socket.io and Dash are hackathon MVPs for building engaging live visualizations quickly.
- Teamwork Hacks: Docker and Colab are great, but a strict Git workflow (like submodules or clear branch rules) is non-negotiable under time pressure.
What's Next
- Generative AI Strategy: Our next big step is to integrate LLM-driven reasoning (using frameworks like GATSim) for high-level strategy, allowing agents to make complex, human-like decisions (e.g., "This tire is worn, I should pit" or "My teammate is behind me, I should block").
- 3D Visualization: Porting the visualization layer to a 3D engine like Unreal or Unity for cinematic race replays.
- Real-World Data: Training our agents on real telemetry data from Formula E or MotoGP to mimic professional driver styles.