Introduction
We’ll build the simulator on a physics-based engine that updates agent positions in fixed time steps. Each agent (car, drone, etc.) will be controlled by a logic unit driven by either AI or manual input. A central event system will handle all events (e.g., collisions, pit stops, penalties, and weather effects). The backend will run the simulation and stream real-time data over WebSockets to a web dashboard, and a control panel attached directly to the hardware will control the agents and the simulation. The dashboard will show live leaderboards, telemetry, and replays. We’ll use deterministic seeding to ensure fair, reproducible simulations, and spatial partitioning so thousands of agents can run smoothly in real time.
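The fixed time-step update and deterministic seeding described above could look roughly like this minimal sketch; the Agent struct, the 60 Hz step size, and the one-axis kinematics are illustrative assumptions, not the project's actual types.

```cpp
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Hypothetical agent state: position and velocity along one axis.
struct Agent {
    double x = 0.0, v = 0.0;
};

constexpr double kDt = 1.0 / 60.0;  // fixed step: 60 updates per second

// Advance every agent by one fixed step (semi-implicit Euler).
void step(std::vector<Agent>& agents, double accel) {
    for (auto& a : agents) {
        a.v += accel * kDt;
        a.x += a.v * kDt;
    }
}

// Deterministic seeding: the same seed always yields the same starting
// positions, so runs are reproducible and therefore fair.
std::vector<Agent> spawn(std::size_t n, std::uint64_t seed) {
    std::mt19937_64 rng(seed);
    std::uniform_real_distribution<double> pos(0.0, 100.0);
    std::vector<Agent> agents(n);
    for (auto& a : agents) a.x = pos(rng);
    return agents;
}
```

Because the step size is fixed rather than tied to wall-clock time, two runs with the same seed and the same inputs produce identical trajectories.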
Technology
The backend will use GPU compute to run the simulations. Since the program has to be lightweight, it will be optimized to run on consumer GPU hardware. When no GPU is found, or the GPU is unsupported, a multithreaded CPU path will run the simulation instead. Because efficiency is key in simulation systems, low-level languages will be used for the core. Agent behavior will be modeled in high-level languages for experimentation and integrated into the simulation once the experiments are done.
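A sketch of what the multithreaded CPU fallback might look like, assuming each agent's update is independent: the agent array is split into contiguous chunks and each chunk is updated on its own thread. The per-agent update shown here is a placeholder for the real physics step.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// CPU fallback: integrate positions in parallel across hardware threads.
void update_agents_cpu(std::vector<double>& positions,
                       const std::vector<double>& velocities,
                       double dt) {
    const std::size_t n = positions.size();
    const unsigned workers =
        std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (n + workers - 1) / workers;
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end = std::min(n, begin + chunk);
        if (begin >= end) break;
        pool.emplace_back([&, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                positions[i] += velocities[i] * dt;  // independent per agent
        });
    }
    for (auto& t : pool) t.join();
}
```

The same data layout (flat arrays of positions and velocities) maps naturally onto a GPU kernel later, which keeps the two code paths structurally similar.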
There will also be a console attached directly to the hardware for real-time agent and system manipulation, and a web interface that displays the essential information.
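The attached console's command handling might be sketched like this; the command names ("pause", "speed") and the Sim interface are hypothetical placeholders for whatever the real control panel exposes.

```cpp
#include <sstream>
#include <string>

// Hypothetical slice of simulation state the console can manipulate.
struct Sim {
    bool paused = false;
    double speed_multiplier = 1.0;
};

// Parse one console line; returns false on an unrecognized or malformed
// command so the console can print a help message.
bool handle_command(Sim& sim, const std::string& line) {
    std::istringstream in(line);
    std::string cmd;
    in >> cmd;
    if (cmd == "pause") {
        sim.paused = !sim.paused;  // toggle the simulation clock
        return true;
    }
    if (cmd == "speed") {
        double m;
        if (in >> m && m > 0.0) {
            sim.speed_multiplier = m;  // scale the fixed-step rate
            return true;
        }
    }
    return false;
}
```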
Tech Stack
The Technology section deliberately uses generic terms such as "low-level languages", since any suitable language could fill those roles; the concrete choices are clarified here. C++ will be used for window and state management. CUDA (or other parallel-programming techniques) will be used to simulate the scenario and the agents, which lets the agents and the world update asynchronously and makes the simulation much more realistic. Another advantage is that kernels scale well: for very large, very realistic simulations, multi-GPU setups can run the simulation more smoothly. Using this framework, worlds can be created and stored easily. Agent modelling will use either rule-based models or machine learning. Pre-trained agents will be available in the simulation software, and new agents can be trained on training worlds as well. Using atomic operations on the GPU, events can be generated dynamically, and using CUDA streams, they can be sent to the CPU for display and analysis. The web frontend will contain a canvas and connect to the backend to render the results.
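The atomic event-generation pattern mentioned above can be illustrated with a CPU analogue: each thread (or, on the GPU, each kernel thread via atomicAdd) claims a slot in a shared event buffer with an atomic counter, then writes its event into that slot. The Event struct and buffer capacity are illustrative assumptions.

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Hypothetical event record produced during a simulation step.
struct Event {
    int agent_id;
    int type;  // e.g. 0 = collision, 1 = pit stop
};

constexpr std::size_t kCapacity = 1024;

struct EventBuffer {
    std::vector<Event> slots = std::vector<Event>(kCapacity);
    std::atomic<std::size_t> count{0};

    // Claim the next slot atomically; returns false if the buffer is full.
    // On the GPU the counter would live in device memory and be bumped
    // with atomicAdd, and the filled buffer would be copied back to the
    // host on a CUDA stream for display and analysis.
    bool push(const Event& e) {
        const std::size_t i = count.fetch_add(1, std::memory_order_relaxed);
        if (i >= kCapacity) return false;  // buffer full; event dropped
        slots[i] = e;
        return true;
    }
};
```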
Experimentation and Research
Experimenting with the system and tuning it will be an important part of the project, since the agents may not behave as intended. Integrating non-ideal scenarios into the simulation and into agent behavior, and tuning them to match real-world conditions, is also important. Data will be collected throughout this process for analysis.
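One way to inject non-ideal behavior while keeping runs reproducible is to perturb an agent's intended control output with seeded Gaussian noise; the NoisyControl class and its noise magnitude are assumed knobs for this sketch, not part of the project's stated design.

```cpp
#include <cstdint>
#include <random>

// Perturbs ideal control values (steering, throttle, ...) with seeded
// Gaussian noise, so "imperfect" runs are still fully reproducible.
class NoisyControl {
public:
    NoisyControl(std::uint64_t seed, double stddev)
        : rng_(seed), noise_(0.0, stddev) {}

    // Return the ideal control value plus a random perturbation.
    double apply(double ideal) { return ideal + noise_(rng_); }

private:
    std::mt19937_64 rng_;
    std::normal_distribution<double> noise_;
};
```

Because the noise source is seeded, a tuning run that exposed bad agent behavior can be replayed exactly, and the perturbed and unperturbed trajectories can be logged side by side for analysis.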
