Publication
ICML 2021
Workshop paper
Towards Automated Evaluation of Explanations in Graph Neural Networks
Abstract
Explaining Graph Neural Network (GNN) predictions to end users of AI applications in easy-to-understand formats remains an unsolved problem. Based on recent application trends and our own experience explaining GNN model predictions for real-world problems, we propose three broad approaches and discuss their pros and cons. In particular, we address the problem of automatically evaluating explanations in a way that is closer to how users consume them.