Inspiration

We were inspired by the challenge of making autonomous mobility systems more human-like in their intelligence. Traditional racing simulators focus on speed and mechanics, but not on strategic decision-making. In real motorsport and e-mobility, success depends on adapting to dynamic factors like battery health, tire wear, and opponent behavior. This led us to imagine an AI-driven racing world where each car learns, adapts, and evolves in real time—turning every lap into a new strategic experience.

What it does

NeuroDrive is an adaptive racing simulator where autonomous driver agents compete using real-time intelligence. Each AI agent learns to balance speed, energy efficiency, and risk while responding dynamically to track conditions and rival strategies. The system visualizes race progress through a live dashboard showing energy metrics, leaderboards, and evolving strategies. It transforms racing into a data-driven competition of adaptive intelligence rather than just speed.

How we built it

We built NeuroDrive using Unity ML-Agents for realistic race simulation and vehicle dynamics.
The AI agents were trained using Deep Reinforcement Learning (PPO and DQN) in PyTorch, where each driver optimizes its policy to minimize energy waste and improve lap times.
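The heavy lifting of PPO training is done by ML-Agents and PyTorch, but the core idea of PPO's clipped surrogate objective, which keeps policy updates small and stable, can be sketched in a few lines (function and parameter names here are our own, for illustration only):

```python
# Illustrative sketch of PPO's clipped surrogate objective.
# ratio = pi_new(a|s) / pi_old(a|s); advantage estimates how much better
# the action was than the baseline. eps bounds how far the policy may move.

def clipped_surrogate(ratio, advantage, eps=0.2):
    """Per-sample PPO objective: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps)) * advantage
    return min(ratio * advantage, clipped)

# A large policy shift (ratio 1.5) on a positive advantage is clipped
# to 1.2 * advantage, discouraging destructive updates — the kind of
# instability we fought in multi-agent training.
print(clipped_surrogate(1.5, 1.0))   # -> 1.2
```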
A Flask/FastAPI backend connects the simulation to a ReactJS dashboard via WebSockets, enabling live telemetry, strategy analytics, and leaderboard updates.
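The telemetry the backend pushes over the WebSocket is just small JSON frames. A minimal sketch of one frame's serialization (the field names and schema here are illustrative, not the exact wire format):

```python
import json
import time

def telemetry_frame(car_id, lap, speed_kmh, energy_kwh):
    """Serialize one telemetry sample for the dashboard (illustrative schema)."""
    return json.dumps({
        "car": car_id,
        "lap": lap,
        "speed_kmh": round(speed_kmh, 1),
        "energy_kwh": round(energy_kwh, 3),
        "ts": time.time(),  # timestamp lets the React client order frames
    })
```

On the client side, the React dashboard parses each frame and updates the leaderboard and energy charts in place.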

We designed a reward function to balance velocity (v_t), energy consumption (E_t), and collision penalties (C_t):

\[ \text{maximize } R = \sum_{t=0}^{T} \left( v_t - \lambda_1 E_t - \lambda_2 C_t \right) \]

This approach allowed agents to learn intelligent trade-offs between aggression, safety, and efficiency—mimicking real driver decision-making.
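The reward above is computed per timestep and summed over the episode. A minimal sketch (the λ weights shown are illustrative placeholders, not our tuned values):

```python
# r_t = v_t - λ1·E_t - λ2·C_t, summed over an episode.
# The lambda weights are tunable hyperparameters; values are illustrative.

def step_reward(v_t, e_t, c_t, lam_energy=0.1, lam_collision=5.0):
    """Per-step reward: speed minus weighted energy use and collision penalty."""
    return v_t - lam_energy * e_t - lam_collision * c_t

def episode_return(steps):
    """R = sum of per-step rewards over (v_t, E_t, C_t) tuples."""
    return sum(step_reward(v, e, c) for v, e, c in steps)

# With these weights, a slightly faster segment with a collision scores
# below a slower clean one — the trade-off the agents learn to navigate:
print(step_reward(44.0, 2.0, 1))  # -> 38.8
print(step_reward(40.0, 2.0, 0))  # -> 39.8
```

Tuning the λ weights shifts agent behavior along the aggression–efficiency spectrum: a higher collision penalty produces cautious drivers, a lower energy weight produces aggressive ones.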

Challenges we ran into

  • Maintaining stable reinforcement learning across multiple agents often led to unpredictable behavior.
  • Synchronizing real-time simulation data with AI inference introduced latency issues.
  • Modeling race events such as tire degradation, pit stops, and weather dynamically required complex environment scripting.
  • Ensuring the dashboard remained responsive during heavy data streaming required WebSocket optimization and efficient state management in React.
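One mitigation for the streaming-load problem was to coalesce high-rate simulation ticks into periodic snapshots before they hit the WebSocket. A simplified sketch of that idea (class and method names are illustrative, not our exact implementation):

```python
class TelemetryBatcher:
    """Coalesce high-rate telemetry into periodic snapshots (illustrative).

    Rather than pushing every simulation tick to the dashboard, keep only
    the latest sample per car and flush at a fixed interval, so the React
    client receives at most 1/interval_s updates per second.
    """

    def __init__(self, interval_s=0.1):
        self.interval_s = interval_s
        self._latest = {}        # car_id -> most recent sample
        self._last_flush = 0.0

    def update(self, car_id, sample):
        self._latest[car_id] = sample  # newer samples overwrite older ones

    def flush(self, now):
        """Return a batched snapshot if the interval elapsed, else None."""
        if now - self._last_flush < self.interval_s:
            return None
        self._last_flush = now
        batch, self._latest = self._latest, {}
        return batch
```

Dropping stale intermediate samples this way keeps the dashboard responsive without losing the latest state of any car.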

Accomplishments that we're proud of

  • Built a working multi-agent AI racing simulation within hackathon constraints.
  • Achieved adaptive behavior where AI drivers altered strategy mid-race.
  • Developed a live, visually interactive dashboard that explains AI decision-making intuitively.
  • Created a scalable framework that could be extended to motorsport analytics, fleet management, and autonomous driving research.

What we learned

We learned how multi-agent reinforcement learning can be used to simulate real-world competitive behavior and strategic intelligence.
We also discovered the importance of reward shaping in balancing performance and efficiency.
Beyond the technical aspects, we learned how critical real-time visualization is for communicating complex AI concepts effectively to audiences and judges.

What's next for NeuroDrive

  • Extend NeuroDrive into a full-scale digital twin platform for electric racing teams and fleet testing.
  • Include weather prediction models, AI pit crew decision-making, and data integration from real race telemetry.
  • Use the same framework for autonomous vehicle coordination and energy optimization in smart mobility networks.