Code to reproduce experiments from "User-Interactive Offline Reinforcement Learning" (ICLR 2023)
Implementation of CORL for Fetch and Unitree A1 tasks
Code for "Efficient Offline Policy Optimization with a Learned Model", ICLR 2023
Code for Undergrad Final Year Project “Offline Risk-Averse Actor-Critic with Curriculum Learning”
Offline to Online RL: AWAC & IQL PyTorch Implementation
Official code for paper: Conservative objective models are a special kind of contrastive divergence-based energy model
Need 4 Speed, FYP 2023-24 @ Monash.
Code accompanying the paper "On the Role of Discount Factor in Offline Reinforcement Learning" (ICML 2022)
Framework for offline reinforcement learning and implementations of SCQL and SCQL+D
Package for recording Transitions in OpenAI Gym Environments.
🧠 Learning World Value Functions without Exploration
Summarising research on offline RL in the federated setting.
Author's repository for GSM8K-AI-SubQ reasoning dataset
Clean single-file implementation of offline RL algorithms in JAX
Direct port of TD3_BC to JAX using Haiku and optax.
Code for NeurIPS 2023 paper Accountability in Offline Reinforcement Learning: Explaining Decisions with a Corpus of Examples
PyTorch Implementation of Offline Reinforcement Learning algorithms
D2C (Data-driven Control Library) is a library for data-driven control based on reinforcement learning.
Code for Continuous Doubly Constrained Batch Reinforcement Learning, NeurIPS 2021.
The Medkit-Learn(ing) Environment: Medical Decision Modelling through Simulation (NeurIPS 2021) by Alex J. Chan, Ioana Bica, Alihan Huyuk, Daniel Jarrett, and Mihaela van der Schaar.