Inspiration

No existing reinforcement learning package (in any machine learning library) enables fast prototyping of Hierarchical RL models.

I hope to build the first, and to enable more students in other countries (where starting from the ground up might be challenging) to perform research in Hierarchical RL using a centralized, well-abstracted API. With more people doing research under a centralized evaluation protocol, innovation is bound to follow.

What it does

This framework is a scaffold. With a single line of code, a student can use state-of-the-art reinforcement learning algorithms to train hierarchical models on any environment they create. In the attached demo, we show (in time-lapse) how our framework was used to train a hierarchical model to control a robot gripper.
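To make the idea of a hierarchical model concrete, here is a minimal, self-contained sketch of the two-level structure such a framework trains: a high-level "manager" picks a subgoal every k steps, and a low-level "worker" acts to reach it. All names here (`Manager`, `Worker`, `rollout`) are illustrative assumptions for this sketch, not torchforce's actual API, and the toy policies stand in for learned ones.

```python
class Manager:
    """High-level policy: chooses subgoals at a coarse timescale."""
    def pick_subgoal(self, state):
        # toy policy: aim a fixed offset ahead of the current state
        return state + 5

class Worker:
    """Low-level policy: takes primitive actions toward the current subgoal."""
    def act(self, state, subgoal):
        # toy policy: step toward the subgoal one unit at a time
        return 1 if subgoal > state else -1

def rollout(steps=20, k=5):
    manager, worker = Manager(), Worker()
    state, trajectory = 0, []
    subgoal = manager.pick_subgoal(state)
    for t in range(steps):
        if t % k == 0:
            # the manager re-plans every k steps; the worker fills in between
            subgoal = manager.pick_subgoal(state)
        state += worker.act(state, subgoal)
        trajectory.append(state)
    return trajectory

print(rollout()[-1])  # the worker has chained four subgoals to reach state 20
```

In a real framework, both levels would be trained (e.g. the worker on an intrinsic goal-reaching reward and the manager on the environment reward), but the control flow is the same.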

Our framework is designed to scale well. In our demo, we make use of massive parallelization to train a hierarchical model on the equivalent of a few hours of real-world data. For domains where data collection is expensive, this is essential.
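The parallelization above amounts to stepping many environment copies at once so that each wall-clock second yields a batch of transitions. The following is a toy sketch of that pattern; `ToyEnv` and `collect` are hypothetical names for illustration, not classes from the framework, and a real implementation would vectorize the inner loop on GPU or across worker processes.

```python
class ToyEnv:
    """A trivial 1-D environment: reward for staying near zero."""
    def __init__(self, seed):
        self.state = seed

    def step(self, action):
        self.state += action
        reward = -abs(self.state)
        return self.state, reward

def collect(num_envs=8, steps=10):
    # run num_envs independent environment copies in lockstep
    envs = [ToyEnv(seed=i) for i in range(num_envs)]
    transitions = []
    for _ in range(steps):
        # one "parallel" step across all environments
        for env in envs:
            action = -1 if env.state > 0 else 1
            transitions.append(env.step(action))
    return transitions

print(len(collect()))  # 8 envs x 10 steps = 80 transitions
```

With N environments, an agent that needs T hours of experience only occupies T/N hours of real time, which is the "equivalent of a few hours of real-world data" trade-off the demo exploits.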

How I built it

I started with a rough sketch of the necessary abstraction barriers, did some high-level planning to make certain RL algorithms compatible that otherwise would not be, and then got to work coding.

Challenges I ran into

The main challenge was debugging. Many state-of-the-art RL algorithms are highly sensitive to hyperparameters and susceptible to small bugs. Using a standard testing package helped me resolve the latter, and thorough evaluation on standard benchmarks helped me tackle the former.

Accomplishments that I'm proud of

This is the first package (in any machine learning framework) that supports training Hierarchical RL models with any state-of-the-art RL algorithm.

What's next for torchforce

In the next few months, I will add model-based RL algorithms, such as SOLAR, MBPO, and LQR. I also hope to develop unsupervised algorithms.
