EMMA

This repository is the official implementation of the following paper.

Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld
Yijun Yang, Tianyi Zhou, Kanxue Li, Dapeng Tao, Lusong Li, Li Shen, Xiaodong He, Jing Jiang, Yuhui Shi


Abstract

While large language models (LLMs) excel in a simulated world of texts, they struggle to interact with the more realistic world without perceptions of other modalities such as visual or audio signals. Although vision-language models (VLMs) integrate LLM modules (1) aligned with static image features, and (2) may possess prior knowledge of world dynamics (as demonstrated in the text world), they have not been trained in an embodied visual world and thus cannot align with its dynamics. On the other hand, training an embodied agent in a noisy visual world without expert guidance is often challenging and inefficient. In this paper, we train a VLM agent living in a visual world using an LLM agent excelling in a parallel text world. Specifically, we distill LLM's reflection outcomes (improved actions by analyzing mistakes) in a text world's tasks to finetune the VLM on the same tasks of the visual world, resulting in an Embodied Multi-Modal Agent (EMMA) quickly adapting to the visual world dynamics. Such cross-modality imitation learning between the two parallel worlds is achieved by a novel DAgger-DPO algorithm, enabling EMMA to generalize to a broad scope of new tasks without any further guidance from the LLM expert. Extensive evaluations on the ALFWorld benchmark's diverse tasks highlight EMMA's superior performance to SOTA VLM-based agents, e.g., 20%-70% improvement in the success rate.
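
The cross-modality imitation described above pairs DAgger-style data collection with a DPO-style preference objective; a natural instantiation treats the LLM expert's improved (retrospective) action as the preferred response and the VLM agent's original action as the rejected one. For reference only, here is a minimal PyTorch sketch of a standard DPO loss of that form; the function and tensor names are illustrative assumptions, not this repository's API.

```python
# Illustrative sketch only: a standard DPO preference loss, shown to clarify
# the kind of objective used for cross-modality imitation (LLM expert's
# improved action = "chosen", VLM agent's original action = "rejected").
# Names and shapes are assumptions, not this repository's API.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss on summed per-sequence log-probabilities."""
    # Log-ratio of the trainable policy vs. a frozen reference model.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # -log sigmoid(beta * margin), averaged over the batch.
    logits = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(logits).mean()

if __name__ == "__main__":
    # Toy example with random per-sequence log-probabilities (batch of 4).
    torch.manual_seed(0)
    pol_c, pol_r = torch.randn(4), torch.randn(4)
    ref_c, ref_r = torch.randn(4), torch.randn(4)
    print(dpo_loss(pol_c, pol_r, ref_c, ref_r).item())
```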

TODO

  • Release the SFT dataset for ALFWorld
  • Release a 13B InstructBLIP model finetuned on the SFT dataset
  • Release the imitation learning code (for reference only; refactoring pending)
  • (Planned) Release a new EMMA trained via DAgger with DPO using the replacement LLM (gpt-3.5-turbo-instruct). Note that it may be impossible to precisely reproduce the results in the paper because OpenAI has deprecated the LLM used in our experiments (text-davinci-003).
  • (Planned) Support training EMMA with open-source LLMs

How to finetune InstructBLIP on the ALFWorld SFT dataset

  1. Download our SFT dataset from Hugging Face
  2. Install LAVIS via "pip install -e ." (run from the LAVIS directory)
  3. Download the pretrained vicuna-7b/13b-v1.1 model from here
  4. Update the configuration file (./LAVIS/lavis/projects/instructblip/finetuning/alfworld_ft.yaml) so that it points to the SFT dataset and the pretrained model
  5. Run "bash LAVIS/run_scripts/instructblip/finetuning/ft_caption_alfworld.sh" (a minimal inference sketch for the finetuned model follows below)

Citing EMMA

If you use the code in this repository, please cite our paper with the following BibTeX entry.

@inproceedings{yang2024embodied,
  title={Embodied multi-modal agent trained by an llm from a parallel textworld},
  author={Yang, Yijun and Zhou, Tianyi and Li, Kanxue and Tao, Dapeng and Li, Lusong and Shen, Li and He, Xiaodong and Jiang, Jing and Shi, Yuhui},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={26275--26285},
  year={2024}
}
