Q1: How does feedback affect users' trust in AI?
The Interpretability of Artificial Intelligence and the Impact of Outcome Feedback on Trust: A Comparative Study | Xue Zhirong's knowledge base

The researchers conducted two experiments ("Predict the speed-dating outcomes and get up to $6 (takes less than 20 min)" and a similar experiment on Prolific) in which participants worked with an AI system on a speed-dating outcome prediction task, to examine how model explainability and outcome feedback affect users' trust in the AI and their prediction accuracy. The results show that although explainability (e.g., global and local explanations) does not significantly improve trust, feedback most consistently and significantly improves behavioral trust. However, this increased trust does not translate into an equivalent performance gain, a "trust-performance paradox". Exploratory analysis reveals the mechanisms behind this phenomenon.
The results show that feedback improves users' trust in AI far more than explainability does, but this enhanced trust does not lead to a corresponding improvement in performance. Further exploration suggests that feedback induces users either to over-trust the AI (accepting its suggestions when it is wrong) or to under-trust it (ignoring its suggestions when it is correct), which can negate the benefits of increased trust and produce the "trust-performance paradox". The researchers call for future research on designing explanation strategies that foster appropriate trust, so as to improve the efficiency of human-AI collaboration.

Q2: Does explainability necessarily enhance users' trust in AI?
To assess trust more accurately, the researchers used a behavioral measure of trust, the weight of advice (WoA), which captures how far a user's final prediction moves toward the AI's recommendation and is independent of the model's accuracy. By comparing WoA across conditions, the researchers can analyze the relationship between trust and performance.

A1: According to the study, feedback (e.g., seeing the actual outcomes of one's predictions) is the key factor influencing user trust; it is the most significant and reliable way to increase users' behavioral trust in AI.
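As a concrete illustration, here is a minimal sketch of the weight-of-advice measure and the over-/under-trust labels discussed above. The WoA formula follows the standard judge-advisor-system definition; the reliance labels are illustrative assumptions, not the paper's exact operationalization:

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """WoA: the fraction of the gap between the user's initial prediction
    and the AI's advice that the user's final prediction closes.
    0 = advice ignored, 1 = advice fully adopted."""
    if advice == initial:
        raise ValueError("WoA is undefined when advice equals the initial prediction")
    return (final - initial) / (advice - initial)

def classify_reliance(ai_correct: bool, followed_ai: bool) -> str:
    """Label a trial by whether reliance on the AI was appropriate:
    following wrong advice = over-trust; ignoring correct advice = under-trust."""
    if followed_ai:
        return "appropriate trust" if ai_correct else "over-trust"
    return "under-trust" if ai_correct else "appropriate distrust"

# A user starts at 30% confidence, the AI advises 80%, the user settles on 55%:
print(weight_of_advice(30, 80, 55))  # 0.5 (half the gap closed)
print(classify_reliance(ai_correct=False, followed_ai=True))  # over-trust
```

Because WoA depends only on how far the user moves toward the advice, not on whether the advice was right, it separates trust from accuracy, which is exactly what makes the trust-performance paradox observable.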