How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies
OLEKSANDRA VERESCHAK, GILLES BAILLY, BAPTISTE CARAMIAUX
Abstract: The spread of AI-embedded systems involved in human decision making makes studying human trust in these systems critical. However, empirically investigating trust is challenging. One reason is the lack of standard protocols for designing trust experiments. In this paper, we present a survey of existing methods for empirically investigating trust in AI-assisted decision making and analyse the corpus along the constitutive elements of an experimental protocol. We find that the definition of trust is not commonly integrated into experimental protocols, which can lead to findings that are overclaimed or hard to interpret and compare across studies. Drawing from empirical practices in social and cognitive studies on human-human trust, we provide practical guidelines to improve the methodology of studying Human-AI trust in decision-making contexts. In addition, we bring forward research opportunities of two types: one focusing on further investigation of trust methodologies, and the other on factors that impact Human-AI trust.
Keywords: trust, artificial intelligence, decision making, methodology
Source: Oleksandra Vereschak, Gilles Bailly, and Baptiste Caramiaux. 2021. How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 327 (October 2021), 39 pages. https://doi.org/10.1145/3476068