Human Decision Making with Machine Advice: An Experiment on Bailing and Jailing
NINA GRGIĆ-HLAČA, CHRISTOPH ENGEL, KRISHNA P. GUMMADI
Abstract: Much of political debate focuses on the concern that machines might take over. Yet in many domains it is far more plausible that the ultimate choice and responsibility remain with a human decision-maker who is provided with machine advice. A quintessential illustration is a judge's decision to bail or jail a defendant. In multiple jurisdictions in the US, judges have access to a machine prediction of a defendant's recidivism risk. In our study, we explore how receiving machine advice influences people's bail decisions. We run a vignette experiment with laypersons, whom we test on a subsample of cases from the database of this prediction tool. In Study 1, we ask them to predict whether defendants will recidivate before trial, and manipulate whether they have access to machine advice. We find that receiving machine advice has a small effect, which is biased toward predicting no recidivism. In the field, human decision-makers sometimes have a chance to learn, after the fact, whether the machine has given good advice. In Study 2, we inform participants of the ground truth after each trial. This does not make it more likely that they follow the advice, even though the machine is, on average, slightly more accurate than real judges. This also holds if the advice is initially mostly correct, or if it initially predicts mostly recidivism or mostly no recidivism. Real judges know that their decisions affect defendants' lives. They may also be concerned about reelection or promotion. Hence, a lot is at stake. In Study 3, we emulate high stakes by giving participants a financial incentive. An incentive to find the ground truth, or to avoid false positives or false negatives, does not make participants more sensitive to machine advice. But an incentive to follow the advice is effective.
Keywords: Machine-Assisted Decision Making; Human-Centered Machine Learning; Algorithmic Decision Making; Algorithmic Fairness, Accountability, and Transparency
Source: Nina Grgić-Hlača, Christoph Engel, and Krishna P. Gummadi. 2019. Human Decision Making with Machine Advice: An Experiment on Bailing and Jailing. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 178 (November 2019), 25 pages. https://doi.org/10.1145/3359280