PairEval

Official Code Repository for the paper "PAIREVAL: Open-domain Dialogue Evaluation with Pairwise Comparison" (COLM 2024).

Abstract

Overall illustration of PairEval (figure)

Building a reliable and automated evaluation metric is a necessary but challenging problem for open-domain dialogue systems. Recent studies proposed evaluation metrics that assess generated responses by considering their relevance to previous dialogue histories. Although effective, these metrics evaluate individual responses directly rather than considering their relative quality compared to other responses. To handle this, we propose PAIREVAL, a novel dialogue evaluation metric for assessing responses by comparing their quality against responses in different conversations. PAIREVAL is built on top of open-sourced and moderate-size language models, and we make them specialized in pairwise comparison between dialogue responses. Extensive experiments on multiple benchmarks demonstrate that our metric exhibits a higher correlation with human judgments than baseline metrics. We also find that the proposed comparative metric is more robust in detecting common failures from open-domain dialogue systems, including repetition and speaker insensitivity.


QuickStart

  1. Install the following packages (see the example commands after this list).
torch
transformers
accelerate
bitsandbytes
scipy
tqdm
  2. Download our LoRA checkpoints and datasets from here and place them in the main directory.

  3. Obtain access to meta-llama/Llama-2-7b-chat-hf.

  4. Execute the following command to evaluate PairEval on the preprocessed turn-level FED meta-evaluation dataset released by this paper.

python inference.py
  5. Check the evaluation results in the output/ directory.
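
As a quick reference, the commands below condense the steps above into a single shell session. The pip line installs exactly the packages listed in step 1 (leaving versions unpinned is an assumption; pin them if you need reproducibility), and the Hugging Face login hint is only relevant if the Llama-2 checkpoint is gated for your account.

```bash
# Step 1: install the dependencies listed above (versions are not pinned here;
# adjust to your environment, e.g., a CUDA-specific torch build).
pip install torch transformers accelerate bitsandbytes scipy tqdm

# Step 3: meta-llama/Llama-2-7b-chat-hf is gated, so authenticate with your
# Hugging Face account first (e.g., via `huggingface-cli login`).

# Steps 4-5: run the evaluation on the preprocessed turn-level FED dataset;
# results are written under output/.
python inference.py
```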

Evaluation on a Custom Dataset

  1. Reformat your dataset to match the format of data/evaluation/fed_turn.jsonl (a hypothetical conversion sketch follows this list).
  2. Change the --eval_data_name argument in args.py.
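
Below is a minimal, hypothetical sketch of writing a custom dataset in JSONL form. The field names (dialogue_context, response), the example content, and the output path are illustrative assumptions only; mirror the actual keys and location used by data/evaluation/fed_turn.jsonl before running the evaluation.

```python
# Hypothetical conversion script: field names and the output path are assumptions;
# inspect data/evaluation/fed_turn.jsonl and mirror its actual schema.
import json

examples = [
    {
        "dialogue_context": ["Hi, how are you?", "I'm good, thanks! Any plans today?"],
        "response": "I'm going hiking with a friend this afternoon.",
    },
]

# Write one JSON object per line (JSONL).
with open("data/evaluation/my_dataset.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```

Once the file is in place, point --eval_data_name in args.py at the new dataset and rerun python inference.py.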

FAQ

Please open an issue on this repository or directly contact [email protected].
