The researchers conducted two sets of experiments (one recruited with the prompt "Predict the speed-dating outcomes and get up to $6 (takes less than 20 min)" and a similar experiment run on Prolific) in which participants worked with an AI system to predict the outcomes of speed dates, in order to explore how model explainability and outcome feedback affect user trust in AI and prediction accuracy. The results show that although explainability (e.g., global and local explanations) does not significantly improve trust, feedback most consistently and significantly improves behavioral trust. However, increased trust does not lead to performance gains of the same magnitude; that is, there is a "trust-performance paradox". Exploratory analysis reveals the mechanisms behind this phenomenon.
Q3: How do result feedback and model interpretability affect user task performance?
The Interpretability of Artificial Intelligence and the Impact of Outcome Feedback on Trust: A Comparative Study
The results show that feedback improves users' trust in AI more significantly than explainability does, but this enhanced trust does not lead to a corresponding improvement in performance. Further exploration suggests that feedback can induce over-trust (accepting the AI's suggestions when it is wrong) or under-trust (ignoring the AI's suggestions when it is correct), which may negate the benefits of increased trust and produce the "trust-performance paradox". The researchers call for future work on designing explanation strategies that foster appropriate trust, in order to improve the efficiency of human-AI collaboration.
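The over-trust and under-trust patterns described above can be quantified from trial logs. The following is a minimal sketch, not the study's actual analysis code; the field names (`ai_correct`, `followed_ai`) are illustrative assumptions about how each trial might be recorded.

```python
# Hypothetical sketch: quantifying over-trust and under-trust from trial logs.
# Field names are assumptions, not the study's actual data schema.

def reliance_rates(trials):
    """Each trial is a dict with 'ai_correct' and 'followed_ai' booleans.

    Over-trust  rate = fraction of AI-wrong trials where the user followed the AI.
    Under-trust rate = fraction of AI-right trials where the user ignored the AI.
    """
    ai_wrong = [t for t in trials if not t["ai_correct"]]
    ai_right = [t for t in trials if t["ai_correct"]]
    over = sum(t["followed_ai"] for t in ai_wrong) / len(ai_wrong) if ai_wrong else 0.0
    under = sum(not t["followed_ai"] for t in ai_right) / len(ai_right) if ai_right else 0.0
    return over, under

trials = [
    {"ai_correct": True,  "followed_ai": True},   # appropriate trust
    {"ai_correct": True,  "followed_ai": False},  # under-trust
    {"ai_correct": False, "followed_ai": True},   # over-trust
    {"ai_correct": False, "followed_ai": False},  # appropriate skepticism
]
over, under = reliance_rates(trials)
print(over, under)  # 0.5 0.5
```

Under this framing, higher overall agreement with the AI (behavioral trust) can coexist with a high over-trust rate, which is one way increased trust fails to translate into better task performance.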
The researchers found that although model interpretability is generally believed to help improve users' trust in an AI system, in the actual experiments neither global nor local explanations led to a stable, significant increase in trust. Conversely, feedback (i.e., showing the outcome of each prediction) had a more pronounced effect on increasing user trust in the AI. However, this increased trust did not translate directly into an equivalent improvement in performance.