Expl(AI)ned: The impact of explainable artificial intelligence on cognitive processes
Date
2021-06-25
Author
Bauer, Kevin
Zahn, Moritz von
Hinz, Oliver
SAFE No.
315
Abstract
This paper explores the interplay of feature-based explainable AI (XAI) techniques, information processing, and human beliefs. Using a novel experimental protocol, we study how providing users with explanations of how an AI system weighs input information to produce individual predictions (LIME) affects users' own weighting of information and their beliefs about the task-relevance of that information. On the one hand, we find that feature-based explanations cause users to alter their mental weighting of available information in line with the observed explanations. On the other hand, explanations lead to asymmetric belief adjustments that we interpret as a manifestation of confirmation bias. Trust in the system's prediction accuracy plays an important moderating role in XAI-enabled belief adjustments. Our results show that feature-based XAI does not merely influence decisions superficially but genuinely changes internal cognitive processes, carrying the potential to manipulate human beliefs and reinforce stereotypes. Hence, current regulatory efforts aimed at enhancing algorithmic transparency may benefit from going hand in hand with measures that ensure the exclusion of sensitive personal information from XAI systems. Overall, our findings put into perspective assertions that XAI is the silver bullet solving all of AI systems' (black box) problems.
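For readers unfamiliar with the kind of feature-based explanations the abstract refers to, the following is a minimal illustrative sketch (not taken from the paper) of how LIME produces per-feature weights for a single prediction. The data, feature names, and classifier are hypothetical placeholders; it assumes the `lime` and `scikit-learn` Python packages.

```python
# Illustrative sketch: LIME feature weights for one prediction.
# All data and feature names below are hypothetical, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical tabular data: 500 cases, 4 features
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
feature_names = ["income", "age", "debt_ratio", "tenure"]  # assumed labels

# A black-box model whose individual predictions we want explained
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["reject", "accept"],
    mode="classification",
)

# LIME returns per-feature weights indicating how the model weighs the
# input information for this individual case.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

In the experimental setting described above, weights of this kind are what users observe and, according to the paper's findings, partly adopt into their own mental weighting of the available information.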
Research Area
Financial Markets
Experiment Center
Keywords
xai, explainable machine learning, information processing, belief updating, algorithmic transparency
Research Data
Topic
Investor Behaviour
Saving and Borrowing
Corporate Governance
Publication Type
Working Paper
Collections
- LIF-SAFE Working Papers [334]