Code repo for "Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding"

eric-ai-lab/Screen-Point-and-Read


Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding

Paper | Project Webpage | Hugging Face Data Page

Abstract

Graphical User Interfaces (GUIs) are central to our interaction with digital devices. Recently, growing efforts have been made to build models for various GUI understanding tasks. However, these efforts largely overlook an important GUI-referring task: screen reading based on user-indicated points, which we name the Screen Point-and-Read (ScreenPR) task. This task is predominantly handled by rigid accessible screen reading tools, in great need of new models driven by advancements in Multimodal Large Language Models (MLLMs). In this paper, we propose a Tree-of-Lens (ToL) agent, utilizing a novel ToL grounding mechanism, to address the ScreenPR task. Based on the input point coordinate and the corresponding GUI screenshot, our ToL agent constructs a Hierarchical Layout Tree. Based on the tree, our ToL agent not only comprehends the content of the indicated area but also articulates the layout and spatial relationships between elements. Such layout information is crucial for accurately interpreting information on the screen, distinguishing our ToL agent from other screen reading tools. We also thoroughly evaluate the ToL agent against other baselines on a newly proposed ScreenPR benchmark, which includes GUIs from mobile, web, and operating systems. Last but not least, we test the ToL agent on mobile GUI navigation tasks, demonstrating its utility in identifying incorrect actions along the path of agent execution trajectories.

Todos

  • Add script input details
  • Add detailed setup instructions
  • Add evaluation scripts

ToL agent

  • Train the GUI region detection model

    • We train a GUI region detection model to detect the local and global regions in each GUI screenshot. The model is fine-tuned from the DINO detection model with MMDetection, which is included as a git submodule of our main project. Initialize the submodule with the following commands:

      git submodule init
      git submodule sync
      git submodule update --remote
    • To set up the required datasets for training and inference, please follow the guide in data/README.md to download the Android Screen Hierarchical Layout (ASHL), Screen Point-and-Read Benchmark (ScreenPR), and Mobile Trajectory Verification datasets.

    • For details on training and inference, please check the step-by-step guide in src/models/tol_gui_region_detection/README.md.

  • GUI Hierarchical Layout Tree construction, Target Path Selection & Prompting with Multi-lens

    • Based on the output (a JSON file) from the previous GUI region detection step, the pipeline of our ToL agent:

      1. constructs the Hierarchical Layout Trees from the detected regions.

      2. selects the target path in the tree based on the input point coordinate.

      3. generates lenses as prompts.

    • We use the following script for the listed process:

      python src/ToL_after_region_detection.py
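The three pipeline steps can be sketched roughly as follows. This is a minimal illustration, not the repo's implementation: the Region record, the containment-based tree building, and the point-to-path walk are assumptions about how such a pipeline could work; the actual JSON schema and lens rendering live in src/ToL_after_region_detection.py.

```python
from dataclasses import dataclass, field

# Hypothetical region record; the real detection output schema is
# defined by the repo's region detection model, not reproduced here.
@dataclass
class Region:
    box: tuple                       # (x1, y1, x2, y2) in pixels
    children: list = field(default_factory=list)

def contains(outer, inner):
    """True if box `outer` fully contains box `inner`."""
    ox1, oy1, ox2, oy2 = outer
    ix1, iy1, ix2, iy2 = inner
    return ox1 <= ix1 and oy1 <= iy1 and ox2 >= ix2 and oy2 >= iy2

def build_tree(boxes):
    """Step 1 (sketch): nest boxes by containment, so each box becomes a
    child of the smallest box containing it (global regions sit near the root)."""
    nodes = sorted((Region(b) for b in boxes),
                   key=lambda r: (r.box[2] - r.box[0]) * (r.box[3] - r.box[1]),
                   reverse=True)                 # largest area first
    roots = []
    for i, node in enumerate(nodes):
        parent = None
        for cand in nodes[:i]:                   # all candidates are larger
            if contains(cand.box, node.box):
                parent = cand                    # last match = smallest container
        (parent.children if parent else roots).append(node)
    return roots

def target_path(roots, point):
    """Step 2 (sketch): walk from the root to the deepest region
    containing the user-indicated point, collecting the path."""
    px, py = point
    path, level = [], roots
    while True:
        hit = next((r for r in level
                    if r.box[0] <= px <= r.box[2] and r.box[1] <= py <= r.box[3]),
                   None)
        if hit is None:
            return path
        path.append(hit)
        level = hit.children
```

Step 3 would then crop the screenshot to each region along the returned path and present those crops as lenses in the MLLM prompt.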
  • Cycle consistency evaluation

  • Verification of Mobile Navigation Agent Actions

    • We also apply the ToL agent to verify agent trajectories generated by MagicWonder on the MagicWand platform, with source code under src/mobile_agent. The verification process includes:

      1. initialize storage of target trajectories in data/failed_agent_trajectory
      2. do inference for each screen in target trajectories
      3. analyze agent trajectory and generate verification result

    See the README of Verification of Mobile Navigation Agent Actions for more details.
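The three verification steps above could be organized along these lines. This is a hedged sketch only: the per-screen JSON schema, the intended_action field, and the describe_action callback are all assumptions for illustration; the real entry points live under src/mobile_agent.

```python
import json
import pathlib

def load_trajectories(root="data/failed_agent_trajectory"):
    """Step 1 (sketch): assume each trajectory is a directory of
    per-screen JSON records (the schema here is hypothetical)."""
    for traj in sorted(pathlib.Path(root).glob("*")):
        screens = sorted(traj.glob("*.json"))
        yield traj.name, [json.loads(s.read_text()) for s in screens]

def verify(trajectories, describe_action):
    """Steps 2-3 (sketch): run inference on each screen via the supplied
    describe_action callback, then flag every step whose described action
    disagrees with the recorded intended action."""
    report = {}
    for name, screens in trajectories:
        bad = [i for i, s in enumerate(screens)
               if describe_action(s) != s.get("intended_action")]
        report[name] = {"failed_steps": bad, "ok": not bad}
    return report
```

In this sketch, describe_action would wrap a ToL agent call on the screen record; the report then localizes the first incorrect action along each trajectory.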

Citation

If you find our work useful for your research and applications, please cite it using this BibTeX:

@misc{fan2024readpointedlayoutawaregui,
      title={Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding}, 
      author={Yue Fan and Lei Ding and Ching-Chen Kuo and Shan Jiang and Yang Zhao and Xinze Guan and Jie Yang and Yi Zhang and Xin Eric Wang},
      year={2024},
      eprint={2406.19263},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.19263}, 
}

Author

By Yue Fan and Lei Ding (Orlando), June 2024.
