Inspiration

Sometimes you need to train reinforcement learning algorithms on non-standard environments that you have to develop yourself. Using a game engine for this purpose obviously makes development faster. However, the engines that support RL training, such as Unity and Unreal, are heavy and designed primarily for Windows. Godot is the only feature-complete open-source engine that is developed and tested on Linux. Moreover, it produces lightweight binaries and takes around 5-10 minutes to compile from source (Unreal takes several hours).

What it does

I developed code that converts your Godot project into an OpenAI Gym-like environment. The result runs from your system Python distribution, which adds to the convenience.
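To make the interface concrete, here is a minimal sketch of what driving such an environment from Python could look like. The module path, class name, and constructor arguments are assumptions for illustration, not the project's actual API; see the repository for the real bindings.

```python
# Hypothetical usage sketch: module path, class name, and arguments are assumptions.
import numpy as np
from godot_gym import GodotEnv  # assumed import; the real one lives in the repository

env = GodotEnv(exec_path="./my_project.x86_64")  # path to the exported Godot binary (assumed flag)
obs = env.reset()
done = False
while not done:
    action = np.array([0.0, 1.0], dtype=np.float32)   # replace with your policy's output
    obs, reward, done, info = env.step(action)        # Gym-style transition tuple
env.close()
```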

How I built it

I used boost::interprocess to communicate between the compiled engine and the Python code. Observation and action tensors are passed into Godot Arrays and back through shared memory, with semaphores used for synchronization.
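The sketch below illustrates the agent-side half of such a handshake in Python, using the standard-library shared_memory module and the third-party posix_ipc package as stand-ins for the boost::interprocess primitives on the C++ side. All segment names, semaphore names, sizes, and offsets are made up for illustration and do not reflect the project's actual layout.

```python
# Conceptual sketch of the shared-memory + semaphore handshake (agent side).
# posix_ipc and multiprocessing.shared_memory stand in for boost::interprocess;
# every name, size, and offset below is an illustrative assumption.
import numpy as np
import posix_ipc
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(name="godot_obs")        # attach to a segment the engine created
sem_obs_ready = posix_ipc.Semaphore("/godot_obs_ready")   # engine posts after writing an observation
sem_act_ready = posix_ipc.Semaphore("/godot_act_ready")   # agent posts after writing an action

sem_obs_ready.acquire()                                    # block until Godot has written the observation
obs = np.ndarray((8,), dtype=np.float32, buffer=shm.buf).copy()

action = np.zeros(2, dtype=np.float32)                     # compute an action from obs in practice
np.ndarray((2,), dtype=np.float32, buffer=shm.buf, offset=32)[:] = action
sem_act_ready.release()                                    # tell Godot the action is in place

shm.close()
```

One semaphore per direction keeps the engine and the Python process in lock-step: each side blocks until the other has finished writing, which is also why corrupted semaphore state (see the challenges below) is so painful to debug.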

Challenges I ran into

The optimized (level 3) release build had some weird problems where semaphores ended up in impossible states. I ended up using the "release-debug" build, which retains debug symbols.

What's next for GodotAIGym

See the to-do list in the repository.

Built With

Godot, C++ (boost::interprocess), Python