Towards Faithfully Interpretable NLP Systems: How Should We Define and Evaluate Faithfulness?
Alon Jacovi, Yoav Goldberg
Interpretability and Analysis of Models for NLP (Short Paper)
Session 7B: Jul 7 (09:00-10:00 GMT)
Session 8B: Jul 7 (13:00-14:00 GMT)
Abstract:
With the growing popularity of deep-learning-based NLP models comes a need for interpretable systems. But what is interpretability, and what constitutes a high-quality interpretation? In this opinion piece we reflect on the current state of interpretability evaluation research. We call for more clearly differentiating between the different desired criteria an interpretation should satisfy, and focus on the faithfulness criterion. We survey the literature with respect to faithfulness evaluation, and arrange the current approaches around three assumptions, giving explicit form to how faithfulness is "defined" by the community. We provide concrete guidelines on how evaluation of interpretation methods should and should not be conducted. Finally, we claim that the current binary definition of faithfulness sets a potentially unrealistic bar for being considered faithful. We call for discarding the binary notion of faithfulness in favor of a more graded one, which we believe will be of greater practical utility.
Similar Papers
FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization
Esin Durmus, He He, Mona Diab

NILE: Natural Language Inference with Faithful Natural Language Explanations
Sawan Kumar, Partha Talukdar

Towards Faithful Neural Table-to-Text Generation with Content-Matching Constraints
Zhenyi Wang, Xiaoyang Wang, Bang An, Dong Yu, Changyou Chen

Learning to Faithfully Rationalize by Construction
Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, Byron C. Wallace
