Xenoverse: Toward Training General-Purpose Learning Agents (GLA) with Randomized Worlds

xenoverse instead of a single universe

Recent research indicates that the generalization ability of learning agents depends primarily on the diversity of their training environments. However, the real world itself places a hard limit on that diversity: physical laws are fixed, e.g., the gravitational constant is effectively the same everywhere. We believe this limitation is a serious bottleneck on the path to artificial general intelligence (AGI).

Xenoverse is a collection of extremely diverse worlds, procedurally generated from completely random parameters. We propose that AGI should be trained and adapted not in a single universe, but in the xenoverse.

Collection of Xenoverse Environments

  • AnyMDP: An unlimited supply of procedurally generated, general-purpose Markov Decision Processes (MDPs) over discrete state and action spaces.

  • AnyMDPv2: An unlimited supply of procedurally generated, general-purpose MDPs over continuous state and action spaces.

  • MetaLanguage: Pseudo-languages generated by randomized neural networks, for benchmarking in-context language learning (ICLL).

  • MazeWorld: Procedurally generated, immersive 3D mazes with diverse structures and layouts.

Installation

pip install xenoverse
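
As a quick sanity check after installing, the sketch below samples a random AnyMDP task and rolls out one episode with a random policy. This is a minimal sketch assuming a Gym-style API: the environment id "anymdp-v0", the AnyMDPTaskSampler entry point, and the set_task call are illustrative assumptions, not confirmed names; consult the per-environment documentation for the exact interface.

```python
# Hypothetical quick-start (env id, sampler, and set_task are assumptions).
import gym
import xenoverse.anymdp                         # assumed: registers AnyMDP envs with gym
from xenoverse.anymdp import AnyMDPTaskSampler  # assumed task-sampler entry point

env = gym.make("anymdp-v0")                     # assumed environment id
task = AnyMDPTaskSampler(state_space=16, action_space=4)  # sample a random discrete MDP
env.set_task(task)                              # assumed: bind the sampled world to the env

state = env.reset()
done = False
while not done:
    action = env.action_space.sample()          # random policy, just to exercise the loop
    state, reward, done, info = env.step(action)
env.close()
```

In the spirit of the project, a meta-training loop would resample a fresh, independently randomized task for each episode rather than reusing a single world.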

References

Related work

@article{wang2024benchmarking,
  title={Benchmarking General Purpose In-Context Learning},
  author={Wang, Fan and Lin, Chuan and Cao, Yang and Kang, Yu},
  journal={arXiv preprint arXiv:2405.17234},
  year={2024}
}

@article{wang2025omnirl,
  title={OmniRL: In-Context Reinforcement Learning by Large-Scale Meta-Training in Randomized Worlds},
  author={Wang, Fan and Shao, Pengtao and Zhang, Yiming and Yu, Bo and Liu, Shaoshan and Ding, Ning and Cao, Yang and Kang, Yu and Wang, Haifeng},
  journal={arXiv preprint arXiv:2502.02869},
  year={2025}
}