Important Notice

The team that has been maintaining Gym since 2021 has moved all future development to Gymnasium, a drop-in replacement for Gym (import gymnasium as gym), and Gym will not be receiving any future updates. Please switch over to Gymnasium as soon as you're able to do so. If you'd like to read more about the story behind this switch, please check out this blog post.

Gym

Gym is an open source Python library for developing and comparing reinforcement learning algorithms by providing a standard API to communicate between learning algorithms and environments, as well as a standard set of environments compliant with that API. Since its release, Gym's API has become the field standard for doing this.

The Gym documentation website is at https://www.gymlibrary.dev/, and you can propose fixes and changes to it here.

Gym also has a Discord server for development purposes that you can join here: https://discord.gg/nHg2JRN489

Installation

To install the base Gym library, use pip install gym.

This does not include dependencies for all families of environments (there's a massive number, and some can be problematic to install on certain systems). You can install the dependencies for one family with pip install gym[atari], or use pip install gym[all] to install all dependencies.

We support Python 3.7, 3.8, 3.9 and 3.10 on Linux and macOS. We will accept PRs related to Windows, but do not officially support it.

API

The Gym API models environments as simple Python env classes. Creating environment instances and interacting with them is very simple; here's an example using the "CartPole-v1" environment:

import gym

# Create the environment and get the first observation (seeded for reproducibility).
env = gym.make("CartPole-v1")
observation, info = env.reset(seed=42)

for _ in range(1000):
    # Sample a random action and apply it to the environment.
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)

    # Start a new episode when the current one ends or is cut off.
    if terminated or truncated:
        observation, info = env.reset()
env.close()
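
As a small follow-up sketch (not part of the original example): recent Gym versions select the render mode when the environment is created, so the same loop can be visualized by passing render_mode to gym.make (this requires a display):

import gym

# Same loop as above, but with on-screen rendering requested at creation time.
env = gym.make("CartPole-v1", render_mode="human")
observation, info = env.reset(seed=42)

for _ in range(200):
    observation, reward, terminated, truncated, info = env.step(env.action_space.sample())
    if terminated or truncated:
        observation, info = env.reset()
env.close()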

Notable Related Libraries

Please note that this is an incomplete list; it includes only the libraries that the maintainers most commonly point newcomers to when asked for recommendations.

  • CleanRL is a learning library based on the Gym API. It is designed to cater to newer people in the field and provides very good reference implementations.
  • Tianshou is a learning library that's geared towards very experienced users and is designed to allow for ease in complex algorithm modifications.
  • RLlib is a learning library that allows for distributed training and inference and supports an extraordinarily large number of features throughout the reinforcement learning space.
  • PettingZoo is like Gym, but for environments with multiple agents.

Environment Versioning

Gym keeps strict versioning for reproducibility reasons. All environment IDs end in a version suffix like "-v0" (for example, "CartPole-v1"). When changes that might impact learning results are made to an environment, the version number is increased by one to prevent potential confusion.
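
As a minimal sketch of what this means in practice (both environment IDs below ship with Gym), the version suffix pins a specific, frozen revision of an environment:

import gym

# Each version suffix refers to a frozen revision of the environment,
# so results stay comparable across library updates.
env_old = gym.make("CartPole-v0")  # earlier revision
env_new = gym.make("CartPole-v1")  # current revision
env_old.close()
env_new.close()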

MuJoCo Environments

The latest "-v4" and future versions of the MuJoCo environments no longer depend on mujoco-py. Instead, mujoco is the required dependency for future Gym MuJoCo environment versions. Old Gym MuJoCo environment versions that depend on mujoco-py will be kept but left unmaintained. To install the dependencies for the latest Gym MuJoCo environments, use pip install gym[mujoco]. Dependencies for old MuJoCo environments can still be installed with pip install gym[mujoco_py].
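
As an illustrative sketch (assuming the gym[mujoco] extras above are installed), the "-v4" MuJoCo environments are created through the same gym.make interface as any other environment:

import gym

# "-v4" MuJoCo environments use the mujoco bindings rather than mujoco-py.
env = gym.make("HalfCheetah-v4")
observation, info = env.reset(seed=42)
observation, reward, terminated, truncated, info = env.step(env.action_space.sample())
env.close()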

Citation

A whitepaper from when Gym was first released is available at https://arxiv.org/pdf/1606.01540, and it can be cited with the following BibTeX entry:

@misc{1606.01540,
  Author = {Greg Brockman and Vicki Cheung and Ludwig Pettersson and Jonas Schneider and John Schulman and Jie Tang and Wojciech Zaremba},
  Title = {OpenAI Gym},
  Year = {2016},
  Eprint = {arXiv:1606.01540},
}

Release Notes

Release notes used to be published here for every new Gym version. New release notes are now posted on the GitHub releases page, as most other libraries do. Old notes can be viewed here.
