Human Trust in Artificial Intelligence: Review of Empirical Research
ELLA GLIKSON, ANITA WILLIAMS WOOLLEY
Abstract: Artificial intelligence (AI) characterizes a new generation of technologies capable of interacting with the environment and aiming to simulate human intelligence. The success of integrating AI into organizations critically depends on workers’ trust in AI technology. This review explains how AI differs from other technologies and presents the existing empirical research on the determinants of human “trust” in AI, conducted in multiple disciplines over the last 20 years. Based on the reviewed literature, we identify the form of AI representation (robot, virtual, and embedded) and its level of machine intelligence (i.e., its capabilities) as important antecedents to the development of trust and propose a framework that addresses the elements that shape users’ cognitive and emotional trust. Our review reveals the important role of AI’s tangibility, transparency, reliability, and immediacy behaviors in developing cognitive trust, and the role of AI’s anthropomorphism specifically for emotional trust. We also note several limitations in the current evidence base, such as the diversity of trust measures and overreliance on short-term, small sample, and experimental studies, where the development of trust is likely to be different than in longer-term, higher stakes field environments. Based on our review, we suggest the most promising paths for future research.
Source: Academy of Management Annals 2020, Vol. 14, No. 2, 627–660. https://doi.org/10.5465/annals.2018.0057