Implementation of stable-baselines3 in Rust with Burn (updated Jun 28, 2024; Rust)
Numerical Evidence for Sample Efficiency of Model-Based over Model-Free Reinforcement Learning Control of Partial Differential Equations [ECC'24]
Nokia's classic 'snake' game, written in NumPy and converted into a Gymnasium Environment() for use with gradient-based reinforcement learning algorithms
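A NumPy-based snake environment like the one above typically keeps the board as an integer grid and the snake as an ordered list of cells. A minimal sketch of that core step logic (class and method names here are illustrative assumptions, not the repository's actual API):

```python
import numpy as np

class SnakeGrid:
    """Minimal snake on an H x W grid; 0 = empty, 1 = body, 2 = food."""

    # Action index -> (row delta, col delta): up, down, left, right.
    ACTIONS = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}

    def __init__(self, height=8, width=8, seed=0):
        self.h, self.w = height, width
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.body = [(self.h // 2, self.w // 2)]  # head is element 0
        self._place_food()
        return self._obs()

    def _place_food(self):
        # Sample empty cells until one is free of the snake's body.
        while True:
            cell = (int(self.rng.integers(self.h)), int(self.rng.integers(self.w)))
            if cell not in self.body:
                self.food = cell
                return

    def _obs(self):
        grid = np.zeros((self.h, self.w), dtype=np.int8)
        for r, c in self.body:
            grid[r, c] = 1
        grid[self.food] = 2
        return grid

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        head = (self.body[0][0] + dr, self.body[0][1] + dc)
        # Hitting a wall or the snake's own body ends the episode.
        if not (0 <= head[0] < self.h and 0 <= head[1] < self.w) or head in self.body:
            return self._obs(), -1.0, True
        self.body.insert(0, head)
        if head == self.food:
            self._place_food()   # grow: keep the tail
            return self._obs(), 1.0, False
        self.body.pop()          # plain move: drop the tail
        return self._obs(), 0.0, False
```

Wrapping such a class as a Gymnasium `Env` is then mostly a matter of declaring `observation_space`/`action_space` and delegating `reset`/`step` to this logic.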
Using Imitation Learning for a Wordle agent
A reinforcement learning A3C implementation trained to play Super Mario Bros 3
Dockerized Container Architecture for Parallel Training of CARLA Gym Environments
Train quadruped locomotion using reinforcement learning in Mujoco
Predator-Prey-Grass gridworld environment using PettingZoo, with dynamic deletion and spawning of partially observant agents.
A unified framework for machine-learning-based training, simulation, and deployment of legged robots, compatible with various robot models and reinforcement learning algorithms, with PyBullet simulation and ROS integration.
Worst-case MSE Minimization for RIS-assisted mmWave MU-MISO Systems with Hardware Impairments and CSI Imperfection
MLPro: Integration of StableBaselines3
Superconducting radio-frequency (RF) cavity frequency control by reinforcement learning
Explore the capabilities of RealROS and MultiROS in training robots for real-world tasks. This repository showcases real-world training and Gazebo simulation-based training for a reach task based on the ReactorX 200 robot manipulator.
This package provides ROS support for Stable Baselines3. It allows you to train robotics RL agents in the real world and simulations using ROS and SB3.
A grid-based environment (multi-agent system) in which one or more intelligent agents carry orbs to pits within a limited number of moves.
PyBullet Gymnasium environments for single and multi-agent reinforcement learning of quadcopter control
Curriculum-Based Reinforcement Learning for Pedestrian Simulation
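A common curriculum scheme like the one above advances task difficulty once the agent's recent success rate clears a threshold. A hypothetical sketch (the levels, threshold, and window size are illustrative assumptions, not taken from the repository):

```python
from collections import deque

class Curriculum:
    """Advance through difficulty levels based on a rolling success rate."""

    def __init__(self, levels, threshold=0.8, window=20):
        self.levels = levels          # e.g. increasing pedestrian densities
        self.threshold = threshold
        self.window = deque(maxlen=window)
        self.stage = 0

    @property
    def level(self):
        return self.levels[self.stage]

    def report(self, success):
        """Record one episode outcome; advance when the full window is good."""
        self.window.append(bool(success))
        full = len(self.window) == self.window.maxlen
        rate = sum(self.window) / len(self.window)
        if full and rate >= self.threshold and self.stage < len(self.levels) - 1:
            self.stage += 1
            self.window.clear()       # re-measure at the new difficulty
```

The training loop would read `curriculum.level` when building each episode's environment and call `curriculum.report(success)` afterward.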
Deep reinforcement learning for intelligent power control in IoT
Trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library.
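PPO, the algorithm behind agents like the LunarLander one above, maximizes a clipped surrogate objective over probability ratios between the new and old policies. A standalone NumPy sketch of that per-batch objective (function name and arguments are illustrative, not stable-baselines3's internals):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, clip_eps=0.2):
    """PPO clipped surrogate: mean of min(r*A, clip(r, 1-eps, 1+eps)*A).

    ratio     -- pi_new(a|s) / pi_old(a|s) for each sample
    advantage -- advantage estimates (e.g. from GAE)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    # Taking the elementwise minimum keeps the update pessimistic,
    # discouraging policy steps that move the ratio far from 1.
    return np.minimum(unclipped, clipped).mean()
```

For example, with a ratio of 1.5 and a positive advantage of 1.0, the clip caps the sample's contribution at 1.2; with a ratio of 0.5 and advantage -1.0, the minimum keeps the more pessimistic -0.8.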
PettingZoo ConnectFour and TicTacToe examples, configured with Rye as dependency manager