Speaker Sensitive Response Evaluation Model

JinYeong Bak, Alice Oh


Dialogue and Interactive Systems Long Paper

Session 11B: Jul 8 (06:00-07:00 GMT)
Session 13B: Jul 8 (13:00-14:00 GMT)
Abstract: Automatic evaluation of open-domain dialogue response generation is very challenging because there are many appropriate responses for a given context. Existing evaluation models merely compare the generated response with the ground truth response and rate many of the appropriate responses as inappropriate if they deviate from the ground truth. One approach to resolve this problem is to consider the similarity of the generated response with the conversational context. In this paper, we propose an automatic evaluation model based on that idea and learn the model parameters from an unlabeled conversation corpus. Our approach considers the speakers in defining the different levels of similar context. We use a Twitter conversation corpus that contains many speakers and conversations to test our evaluation model. Experiments show that our model outperforms the other existing evaluation metrics in terms of high correlation with human annotation scores. We also show that our model trained on Twitter can be applied to movie dialogues without any additional training. We provide our code and the learned parameters so that they can be used for automatic evaluation of dialogue response generation models.
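Since the abstract only describes the approach at a high level, the following is a minimal, illustrative Python sketch of the general idea: scoring a candidate response by its similarity to the conversational context, and drawing training negatives from speaker-sensitive pools of utterances. The toy encoder, function names, and sample corpus are hypothetical stand-ins and do not reproduce the authors' released model or code.

```python
import random
import numpy as np

def encode(text):
    """Toy stand-in for a trained utterance encoder (e.g., an RNN or
    Transformer) that maps text to a fixed-size vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(128)

def context_response_score(context_turns, response):
    """Score a candidate response by its similarity to the conversation
    context rather than to a single ground-truth response."""
    c = np.mean([encode(t) for t in context_turns], axis=0)
    r = encode(response)
    return float(np.dot(c, r) / (np.linalg.norm(c) * np.linalg.norm(r) + 1e-8))

def sample_negatives(corpus, conversation_id, speaker_id):
    """Draw negative responses from pools of increasing similarity to the
    true context (random speaker < same speaker < same conversation), so a
    scorer trained on an unlabeled corpus sees graded levels of context."""
    pools = {
        "random": [u for u in corpus
                   if u["speaker"] != speaker_id and u["conversation"] != conversation_id],
        "same_speaker": [u for u in corpus
                         if u["speaker"] == speaker_id and u["conversation"] != conversation_id],
        "same_conversation": [u for u in corpus
                              if u["conversation"] == conversation_id],
    }
    return {name: random.choice(pool)["text"] for name, pool in pools.items() if pool}

# Example usage with a tiny hypothetical corpus of Twitter-style utterances.
corpus = [
    {"speaker": "a", "conversation": 1, "text": "did you watch the game last night?"},
    {"speaker": "b", "conversation": 1, "text": "yes, what a finish!"},
    {"speaker": "a", "conversation": 2, "text": "any lunch plans today?"},
    {"speaker": "c", "conversation": 3, "text": "my flight got delayed again."},
]
print(context_response_score(["did you watch the game last night?"], "yes, what a finish!"))
print(sample_negatives(corpus, conversation_id=1, speaker_id="a"))
```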

Similar Papers

Evaluating Dialogue Generation Systems via Response Selection
Shiki Sato, Reina Akama, Hiroki Ouchi, Jun Suzuki, Kentaro Inui (paper main.55)
Multi-Domain Dialogue Acts and Response Co-Generation
Kai Wang, Junfeng Tian, Rui Wang, Xiaojun Quan, Jianxing Yu (paper main.638)