Dissertation Summary
MIT Licensed | Copyright © 2024-present Zhirong Xue's knowledge base

The Interpretability of Artificial Intelligence and the Impact of Outcome Feedback on Trust: A Comparative Study

Blog summary

Problem finding

The researchers conducted two sets of experiments ("Predict the speed-dating outcomes and get up to $6 (takes less than 20 min)" and a similar Prolific experiment) in which participants worked with an AI system on the task of predicting speed-dating outcomes, in order to explore how model explainability and outcome feedback affect users' trust in AI and their prediction accuracy. The results show that although explainability (e.g., global and local explanations) does not significantly improve trust, feedback is the most consistent and significant way to improve behavioral trust. However, increased trust does not necessarily bring performance gains of the same magnitude; that is, there is a "trust-performance paradox". Exploratory analysis reveals the mechanisms behind this phenomenon.

Q1: How does feedback affect users' trust in AI?

A1: According to the research, feedback (e.g., result output) is a key factor influencing user trust: it is the most significant and reliable way to increase user trust in AI behavior.

Q2: Does explainability necessarily enhance users' trust in AI?

A2: Although it is generally believed that model explanations help improve user trust, the experimental results show that this effect is not significant and is weaker than that of feedback. In specific cases, such as domains where users have little expertise, some forms of explanation may yield only a modest increase in appropriate trust.

Q3: How do result feedback and model interpretability affect user task performance?

A3: The study found that outcome feedback improves the accuracy of users' predictions (reducing absolute error), thereby improving the performance of human-AI collaboration. However, interpretability does not affect user task performance as much as it affects trust. This may mean that we should pay more attention to how to use feedback mechanisms effectively to improve the usefulness and effectiveness of AI-assisted decision-making.

Original address:
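The "absolute error" metric mentioned in A3 can be sketched in a few lines. This is a minimal illustration, not code from the paper; the function names and the 0/1 encoding of speed-dating outcomes are assumptions made here for clarity.

```python
def absolute_error(predicted: float, actual: float) -> float:
    """Absolute difference between a participant's predicted outcome
    probability and the realized outcome (assumed encoded as 0 or 1)."""
    return abs(predicted - actual)

def mean_absolute_error(predictions, outcomes):
    """Average absolute error over a block of trials; lower is better."""
    pairs = list(zip(predictions, outcomes))
    return sum(absolute_error(p, a) for p, a in pairs) / len(pairs)

# Hypothetical example: predicted match probabilities for three dates
# versus the realized outcomes.
preds = [0.8, 0.3, 0.6]
actuals = [1, 0, 0]
print(mean_absolute_error(preds, actuals))  # lower is better
```

Under this reading, "feedback improves accuracy" means the mean absolute error over a participant's predictions decreases once they see the realized outcomes of earlier trials.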