Oriented toward the Belt and Road Initiative, serving the development of New Engineering disciplines

Aviation UAV Virtual/Augmented Reality Flight Illusion Experimental Teaching Platform

Aviation Human Factors Engineering and AI Psychology Laboratory, Shaanxi Normal University

Learning Materials
[Recommended Article] Trust Engineering for Human-AI Teams
Author:  Source: Reposted  Date: 2023/6/3 13:14:52

Trust Engineering for Human-AI Teams

Neta Ezer, Sylvain Bruni, Yang Cai, Sam J. Hepenstal, Christopher A. Miller, Dylan D. Schmorrow

Abstract: Human-AI teaming refers to systems in which humans and artificial intelligence (AI) agents collaborate to provide significant mission performance improvements over that which humans or AI can achieve alone. The goal is faster and more accurate decision-making by integrating the rapid data ingest, learning, and analysis capabilities of AI with the creative problem-solving and abstraction capabilities of humans. The purpose of this panel is to discuss research directions in Trust Engineering for building appropriate bidirectional trust between humans and AI. Discussions focus on the challenges in systems that are increasingly complex and work within imperfect information environments. Panelists provide their perspectives on addressing these challenges through concepts such as dynamic relationship management, adaptive systems, co-discovery learning, and algorithmic transparency. Mission scenarios in command and control (C2), piloting, cybersecurity, and criminal intelligence analysis demonstrate the importance of bidirectional trust in human-AI teams.


Source: Proceedings of the Human Factors and Ergonomics Society 2019 Annual Meeting