Researchers: Kevin Bauer, Moritz von Zahn
Category: Financial Markets, Experiment Center
Topic and Objectives
Contemporary AI systems’ high predictive performance frequently comes at the expense of users’ understanding of why the systems produce a certain output. For AI systems that provide predictions to augment highly consequential processes such as hiring, investment decisions, or medical diagnosis, this “black box” nature can create considerable downsides, including impaired user trust, reduced error safeguarding, restricted contestability, and limited accountability. Having recognized these problems, organizations developing AI and governments increasingly adopt principles and regulations stipulating that AI systems must provide meaningful explanations of why they make certain predictions. Considering these developments, the implementation and use of explainable AI (XAI) methods are becoming more widespread and increasingly mandated by law.
The purpose of XAI methods is to make AI systems’ hidden logic intelligible to humans by answering the question: why does an AI system make the predictions it does? XAI methods thereby aim to achieve high predictive performance and interpretability at the same time. Many state-of-the-art XAI techniques convey insights into an AI system’s logic post-training and explain its behavior by depicting the contribution of individual input features to the output prediction. While there is reason to believe that XAI can mitigate black-box problems, the pivotal question is how users respond to modern explanations, given that the human factor frequently creates unanticipated, unintended consequences even in well-designed information systems.
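To make the idea of feature-based explanations concrete, here is a minimal sketch (not taken from the study) of how per-feature contributions can be computed for a linear model: each feature's contribution to a single prediction is its weight times the feature's deviation from a baseline (e.g., the population average). For linear models with independent features this coincides with the well-known SHAP attribution. All names and numbers below are hypothetical.

```python
# Sketch of feature-based explanation for a linear model.
# Contribution of feature i = coefficient_i * (x_i - baseline_i),
# so contributions sum to prediction(x) - prediction(baseline).

def explain_prediction(coefs, baseline, x):
    """Return per-feature contributions to the prediction for instance x."""
    return {name: coefs[name] * (x[name] - baseline[name]) for name in coefs}

# Hypothetical credit-scoring model: weights, average applicant, one applicant.
coefs = {"income": 0.4, "debt": -0.6, "age": 0.1}
baseline = {"income": 50.0, "debt": 20.0, "age": 40.0}
applicant = {"income": 60.0, "debt": 30.0, "age": 40.0}

contributions = explain_prediction(coefs, baseline, applicant)
# income: 0.4 * (60 - 50) =  4.0  (pushes the score up)
# debt:  -0.6 * (30 - 20) = -6.0  (pushes the score down)
# age:    0.1 * (40 - 40) =  0.0  (no effect for this applicant)
```

An explanation of this form is exactly what lets users notice which feature–label relationships the system has learned, which is the mechanism the research examines.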
Nascent research on human-XAI interaction examines how explainability affects humans’ perceptions, attitudes, and use of the system, e.g., trust, detection of malfunctioning, (over)reliance, and task performance. Prior research, however, does not consider the potential consequences of providing explanations for users’ situational information processing (the use of currently available information in the given situation) and mental models (cognitive representations that encode beliefs, facts, and knowledge). By depicting the contribution of individual features to specific predictions, feature-based XAI enables users to recognize previously unknown relationships between features and ground truth labels that the AI system autonomously learned from complex data structures. In that sense, XAI may constitute the channel through which AI systems impact humans’ conceptualization and understanding of their environment. This effect could reinforce the already considerable influence contemporary AI systems have on human societies by, for better or worse, allowing human users to adopt systems’ inner logic and problem-solving strategies. Despite the increasing (legally required) implementation of XAI methods, a systematic study of these effects is still missing. Our research fills this gap; the key findings are the following:
- AI systems that provide explanations influence the way people make sense of and leverage information, both situationally and more permanently.
- The enduring effect is asymmetric: it can foster preconceptions and spill over to other decisions, thereby promoting certain (possibly biased) behaviors.
- The asymmetric effect appears to reflect a confirmation bias, indicating that employing explainable AI methods may open the door to human biases in AI-supported decision-making processes.
- Our results indicate that broad, indiscriminate implementation of XAI methods may create unintended downstream ramifications.
- Users’ confirmatory adjustments of mental models and their inclination to carry over learned patterns to other domains may, in an extreme case, foster discrimination and social divisions.
Kevin Bauer, Oliver Hinz, Moritz von Zahn: “Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users' Information Processing”, forthcoming in Information Systems Research, 2023.
Category: Financial Markets, Experiment Center
Keywords: XAI, explainable machine learning, information processing, belief updating, algorithmic transparency