Huskarl

Huskarl is a framework for deep reinforcement learning focused on research and fast prototyping. It's built on TensorFlow 2.0 and uses the tf.keras API when possible for conciseness and readability.
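
For instance, the networks handed to an agent are plain tf.keras models. The snippet below is a minimal sketch of such a model; the observation size of 4 (CartPole-like) is an illustrative assumption, and the agent constructor that would consume it is not shown here.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A plain tf.keras model for a small observation space (4 inputs assumed here).
# In a Huskarl-style workflow this model would be passed to an agent;
# that call is omitted and treated as assumed usage.
model = Sequential([
    Dense(16, activation='relu', input_shape=(4,)),
    Dense(16, activation='relu'),
    Dense(16, activation='relu'),
])
model.summary()
```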

Huskarl makes it easy to parallelize computation of environment dynamics across multiple CPUs. This is useful for speeding up on-policy learning algorithms that benefit from multiple concurrent sources of experience, such as A2C or PPO. It is especially useful for computationally intensive environments such as physics-based ones.
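
To illustrate the idea (this is a concept sketch, not Huskarl's internal implementation), the code below runs several Gym environment instances concurrently in worker processes with Python's multiprocessing; the function names are illustrative, and the classic pre-0.26 Gym API is assumed.

```python
import multiprocessing as mp
import gym

def rollout(seed):
    """Run one random-policy episode in a worker process and return the episode return."""
    env = gym.make('CartPole-v0')
    env.seed(seed)  # classic Gym API assumed
    obs, done, total_reward = env.reset(), False, 0.0
    while not done:
        obs, reward, done, _ = env.step(env.action_space.sample())
        total_reward += reward
    env.close()
    return total_reward

if __name__ == '__main__':
    # Collect experience from several environment instances in parallel,
    # one per CPU worker -- the same idea used to feed on-policy agents.
    with mp.Pool(processes=4) as pool:
        returns = pool.map(rollout, range(4))
    print(returns)
```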

Huskarl works seamlessly with OpenAI Gym environments.
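
In practice, an environment is usually supplied as a zero-argument factory so that each parallel instance can construct its own copy. The sketch below shows that pattern with a standard Gym environment; the simulation and agent calls that would consume the factory are omitted, since their exact names are assumptions.

```python
import gym

# A zero-argument factory lets each parallel worker build its own environment copy.
create_env = lambda: gym.make('CartPole-v0').unwrapped

# Quick sanity check that the factory produces a working Gym environment
# (classic pre-0.26 Gym API assumed).
env = create_env()
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```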

There are plans to support multi-agent environments and Unity3D environments.
