Oriented to the Belt and Road Initiative, serving New Engineering education

Aviation UAV Virtual/Augmented Reality Flight Illusion Experimental Teaching Platform

Laboratory of Aviation Human Factors Engineering and Artificial Intelligence Psychology, Shaanxi Normal University

Study Materials
[Recommended Article] Are Explanations Helpful? A Comparative Study of the Effects of Explanations in AI-Assisted Decision-Making
Author:  Source: Reprinted  Date: 2023/2/10 19:54:05


Are Explanations Helpful? A Comparative Study of the Effects of Explanations in AI-Assisted Decision-Making

Xinru Wang; Ming Yin

Abstract: This paper contributes to the growing literature on empirical evaluation of explainable AI (XAI) methods by presenting a comparison of the effects of a set of established XAI methods in AI-assisted decision making. Specifically, based on our review of previous literature, we highlight three desirable properties that ideal AI explanations should satisfy: improve people’s understanding of the AI model, help people recognize the model uncertainty, and support people’s calibrated trust in the model. Through randomized controlled experiments, we evaluate whether four types of common model-agnostic explainable AI methods satisfy these properties on two types of decision making tasks in which people perceive themselves as having different levels of domain expertise (i.e., recidivism prediction and forest cover prediction). Our results show that the effects of AI explanations differ largely across decision making tasks where people have varying levels of domain expertise, and many AI explanations do not satisfy any of the desirable properties for tasks in which people have little domain expertise. Further, for decision making tasks about which people are more knowledgeable, the feature contribution explanation is shown to satisfy more desiderata of AI explanations, while the explanation considered to resemble how humans explain decisions (i.e., the counterfactual explanation) does not seem to improve calibrated trust. We conclude by discussing the implications of our study for improving the design of XAI methods to better support human decision making.

Keywords: interpretable machine learning, explainable AI, trust, trust calibration, human-subject experiments
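
For readers unfamiliar with what a model-agnostic "feature contribution" explanation looks like in practice, the sketch below estimates per-feature contributions for one prediction by occluding each feature with background values and measuring the change in the predicted probability. It is a minimal illustration only: the dataset, the occlusion-style estimator, and the feature_contributions helper are assumptions made for this example, not the explanation methods actually evaluated in the paper.

```python
# Illustrative sketch only: a toy, model-agnostic "feature contribution"
# explanation for a single prediction (not the paper's XAI methods).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def feature_contributions(model, X_background, x, n_samples=200, seed=0):
    """Estimate each feature's contribution to P(class = 1) for instance x by
    replacing that feature with values drawn from background data and
    measuring the average drop in the predicted probability."""
    rng = np.random.default_rng(seed)
    base = model.predict_proba(x[None, :])[0, 1]
    contribs = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        perturbed = np.tile(x, (n_samples, 1))
        perturbed[:, j] = rng.choice(X_background[:, j], size=n_samples)
        contribs[j] = base - model.predict_proba(perturbed)[:, 1].mean()
    return base, contribs

base, contribs = feature_contributions(model, X, X[0])
top = np.argsort(np.abs(contribs))[::-1][:5]
print(f"Predicted P(class = 1) for instance 0: {base:.3f}")
for j in top:
    print(f"feature {j}: contribution {contribs[j]:+.3f}")
```

In the paper's terms, such an explanation supports calibrated trust only to the extent that it helps a decision maker rely on the model when the highlighted contributions align with sound evidence and withhold reliance when they do not.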





Source: Xinru Wang and Ming Yin. 2021. Are Explanations Helpful? A Comparative Study of the Effects of Explanations in AI-Assisted Decision-Making. In 26th International Conference on Intelligent User Interfaces (IUI ’21), April 14–17, 2021, College Station, TX, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3397481.3450650