Oriented toward the "Belt and Road" Initiative, Serving the Development of New Engineering Education

Aviation UAV Virtual/Augmented Reality Flight Illusion Experimental Teaching Platform

Laboratory of Aviation Human Factors Engineering and Artificial Intelligence Psychology, Shaanxi Normal University

[Article Recommendation] Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making
Author: | Source: Reposted | Date: 2023/3/30 12:39:57


Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making

Shuai Ma, Ying Lei, Xinru Wang, Chengbo Zheng, Chuhan Shi, Ming Yin, Xiaojuan Ma

Abstract: In AI-assisted decision-making, it is critical for human decision-makers to know when to trust AI and when to trust themselves. However, prior studies calibrated human trust only based on AI confidence indicating AI’s correctness likelihood (CL) but ignored humans’ CL, hindering optimal team decision-making. To mitigate this gap, we proposed to promote humans’ appropriate trust based on the CL of both sides at a task-instance level. We first modeled humans’ CL by approximating their decision-making models and computing their potential performance in similar instances. We demonstrated the feasibility and effectiveness of our model via two preliminary studies. Then, we proposed three CL exploitation strategies to calibrate users’ trust explicitly/implicitly in the AI-assisted decision-making process. Results from a between-subjects experiment (N=293) showed that our CL exploitation strategies promoted more appropriate human trust in AI, compared with only using AI confidence. We further provided practical implications for more human-compatible AI-assisted decision-making.

Keywords: AI-Assisted Decision-making, Human-AI Collaboration, Trust in AI, Trust Calibration
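To make the abstract's core idea concrete, below is a minimal, hypothetical Python sketch of instance-level trust calibration: it estimates the human's correctness likelihood (CL) from their performance on similar past instances and compares it with the AI's CL to suggest whom to trust on the current instance. The nearest-neighbour proxy, the decision margin, and all function and variable names here are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch of instance-level trust calibration: compare an
# estimate of the human's correctness likelihood (CL) with the AI's CL
# and suggest whom to trust. Names and methods are illustrative only.
import numpy as np

def estimate_human_cl(instance, past_instances, past_human_correct, k=5):
    """Approximate the human's CL on `instance` from their accuracy on the
    k most similar past instances (a simple nearest-neighbour proxy for
    'potential performance in similar instances')."""
    dists = np.linalg.norm(past_instances - instance, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(np.mean(past_human_correct[nearest]))

def trust_suggestion(human_cl, ai_cl, margin=0.05):
    """One explicit CL-exploitation strategy: recommend the side whose
    estimated CL is higher, abstaining when the gap is within `margin`."""
    if ai_cl - human_cl > margin:
        return "lean on the AI's recommendation"
    if human_cl - ai_cl > margin:
        return "lean on your own judgment"
    return "no clear advantage; decide with extra care"

# Toy usage: 2-D feature vectors for past instances, with a record of
# whether the human answered each one correctly.
rng = np.random.default_rng(0)
past_instances = rng.normal(size=(100, 2))
past_human_correct = (past_instances[:, 0] > 0).astype(float)

instance = np.array([0.8, -0.3])
human_cl = estimate_human_cl(instance, past_instances, past_human_correct)
ai_cl = 0.9  # e.g., the AI model's calibrated confidence on this instance
print(trust_suggestion(human_cl, ai_cl))
```

Note that this sketch corresponds only to an explicit suggestion; the paper's three CL exploitation strategies also include implicit ways of shaping trust during the decision-making process.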





Source: Shuai Ma, Ying Lei, Xinru Wang, Chengbo Zheng, Chuhan Shi, Ming Yin, and Xiaojuan Ma. 2023. Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making. 1, 1 (January 2023), 27 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn