Who needs to know what, when?: Broadening the Explainable AI (XAI) Design Space by Looking at Explanations across the AI Lifecycle
Abstract
The interpretability or explainability of AI systems (XAI) has gained renewed attention in recent years across the AI and HCI communities. Recent work has drawn attention to the emergent explainability requirements of in situ, applied projects, yet further exploratory work is needed to understand this space more fully. This paper investigates applied AI projects, reporting on a qualitative interview study of individuals working on such projects at a large technology and consulting company. Presenting an empirical understanding of the range of stakeholders in industrial AI projects, the paper also draws out the emergent explainability practices that arise as these projects unfold, highlighting the range of explanation audiences (who) and how their explainability needs evolve across the AI project lifecycle (when). We discuss the importance of adopting a sociotechnical lens in designing AI systems, noting how the "AI lifecycle" can serve as a design metaphor to advance the XAI design field.