Oriented toward the Belt and Road Initiative, serving New Engineering education

Aviation UAV Virtual/Augmented Reality Flight Illusion Experimental Teaching Platform

Laboratory of Aviation Human Factors Engineering and Artificial Intelligence Psychology, Shaanxi Normal University

Learning Materials
[Recommended Article] Effects of Explainable Artificial Intelligence on trust and human behavior in a high-risk decision task
Author:   Source: Reposted   Date: 2023/1/30 11:40:40


Effects of Explainable Artificial Intelligence on trust and human behavior in a high-risk decision task

Benedikt Leichtmann*, Christina Humer, Andreas Hinterreiter, Marc Streit, Martina Mara

Abstract: Understanding the recommendations of an artificial intelligence (AI) based assistant for decision-making is especially important in high-risk tasks, such as deciding whether a mushroom is edible or poisonous. To foster user understanding and appropriate trust in such systems, we assessed the effects of explainable artificial intelligence (XAI) methods and an educational intervention on AI-assisted decision-making behavior in a 2 × 2 between-subjects online experiment with N = 410 participants. We developed a novel use case in which users go on a virtual mushroom hunt and are tasked with picking edible and leaving poisonous mushrooms. Users were provided with an AI-based app that showed classification results of mushroom images. To manipulate explainability, one subgroup additionally received attribution-based and example-based explanations of the AI's predictions; for the educational intervention, one subgroup received additional information on how the AI worked. We found that the group that received explanations outperformed the group that did not and showed better-calibrated trust levels. Contrary to our expectations, we found that the educational intervention, domain-specific (i.e., mushroom) knowledge, and AI knowledge had no effect on performance. We discuss practical implications and introduce the mushroom-picking task as a promising use case for XAI research.

Keywords: XAI, AI literacy, Domain-specific knowledge, Mushroom identification, Trust calibration, Visual explanation
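
The abstract refers to attribution-based explanations of an image classifier's predictions. As a purely illustrative sketch, and not the authors' implementation, the Python snippet below computes a simple gradient saliency map for a generic pretrained classifier; the model choice, preprocessing, and file name are assumptions, and the study's actual mushroom classifier and XAI pipeline are not reproduced here.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Any image classifier will do for this sketch; a pretrained ResNet-18
# stands in for the study's (non-public) mushroom model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def gradient_saliency(image_path):
    """Per-pixel saliency map for the model's top predicted class."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)
    logits = model(x)
    top_class = logits.argmax(dim=1).item()
    # Backpropagate the top-class score to the input pixels.
    logits[0, top_class].backward()
    # Aggregate absolute gradients over the color channels -> 224 x 224 map.
    return x.grad.detach().abs().max(dim=1).values.squeeze(0)

# Example (hypothetical file name):
# saliency = gradient_saliency("mushroom.jpg")
# The map can be overlaid on the photo to highlight the pixels that most
# influenced the classifier's prediction.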






Source: Computers in Human Behavior, 2022