Oriented toward the Belt and Road Initiative, Serving the Development of New Engineering Education

Aviation UAV Virtual/Augmented Reality Flight Illusion Experimental Teaching Platform

Laboratory of Aviation Human Factors Engineering and Artificial Intelligence Psychology, Shaanxi Normal University

Learning Materials
[Recommended Article] Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making
Author:   Source: Reposted   Date: 2023/4/19 12:56:16

Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making

Yunfeng Zhang, Q. Vera Liao, Rachel K. E. Bellamy

Abstract: Today, AI is being increasingly used to help human experts make decisions in high-stakes scenarios. In these scenarios, full automation is often undesirable, not only due to the significance of the outcome, but also because human experts can draw on their domain knowledge complementary to the model's to ensure task success. We refer to these scenarios as AI-assisted decision making, where the individual strengths of the human and the AI come together to optimize the joint decision outcome. A key to their success is to appropriately calibrate human trust in the AI on a case-by-case basis; knowing when to trust or distrust the AI allows the human expert to appropriately apply their knowledge, improving decision outcomes in cases where the model is likely to perform poorly. This research conducts a case study of AI-assisted decision making in which humans and AI have comparable performance alone, and explores whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI. Specifically, we study the effect of showing confidence score and local explanation for a particular prediction. Through two human experiments, we show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making, which may also depend on whether the human can bring in enough unique knowledge to complement the AI's errors. We also highlight the problems in using local explanation for AI-assisted decision making scenarios and invite the research community to explore new approaches to explainability for calibrating human trust in AI.

Keywords: decision support, trust, confidence, explainable AI
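The two interface features examined in the paper, a per-case confidence score and a local explanation of an individual prediction, can be made concrete with a small sketch. The Python snippet below is only an illustration, not the authors' implementation: it assumes a generic binary classifier trained on synthetic data, and the feature names, dataset, and 0.7 confidence threshold are hypothetical. It shows how the predicted class probability (confidence score) and a simple local explanation (per-feature contributions of a linear model) could be surfaced to the human decision maker.

    # Illustrative sketch only: generic classifier on synthetic data,
    # hypothetical feature names and threshold (not from the paper).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    feature_names = ["age", "education_years", "hours_per_week"]  # hypothetical
    X = rng.normal(size=(500, 3))
    y = (X @ np.array([0.8, 1.2, 0.5]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    def assist(case):
        """Return the AI's recommendation, its confidence score, and a local explanation."""
        proba = model.predict_proba(case.reshape(1, -1))[0]
        label = int(proba.argmax())
        confidence = proba[label]
        # Local explanation for a linear model: each feature's contribution to the logit.
        contributions = model.coef_[0] * case
        explanation = sorted(zip(feature_names, contributions),
                             key=lambda kv: abs(kv[1]), reverse=True)
        return label, confidence, explanation

    label, confidence, explanation = assist(X[0])
    print(f"AI recommends class {label} with confidence {confidence:.2f}")
    for name, contrib in explanation:
        print(f"  {name}: {contrib:+.2f}")

    # A low confidence score is the cue that lets the human decide when to
    # override the AI with their own domain knowledge.
    if confidence < 0.7:  # illustrative threshold
        print("Low confidence: human review recommended.")

In the paper's terms, the confidence score is the feature found to help calibrate trust on a case-by-case basis, while the local explanation is the feature whose usefulness for this purpose is questioned.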



Source: Yunfeng Zhang, Q. Vera Liao, and Rachel K. E. Bellamy. 2020. Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making. In Conference on Fairness, Accountability, and Transparency (FAT* '20), January 27–30, 2020, Barcelona, Spain. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3351095.3372852