Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine
Authors: Ahmad Chaddad, Qizong Lu, Jiali Li, Yousef Katib, Reem Kateb, Camel Tanougast, Ahmed Bouridane, Ahmed Abdulkadir
Affiliations: 1. School of Artificial Intelligence, Guilin University of Electronic Technology; 3. College of Medicine, Taibah University; 4. College of Computer Science and Engineering, Taibah University; 5. Laboratory of Design, Optimization and Modeling Systems, University of Lorraine; 6. IEEE; 7. Center for Data Analytics and Cybersecurity (CDAC), University of Sharjah; 8. Laboratoire de recherche en neuroimagerie, Centre Hospitalier Universitaire Vaudois
Funding: Supported in part by the National Natural Science Foundation of China (82260360).
Abstract: Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and the requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of these challenges in AI-driven medical decision making. 1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence when the results appear plausible and match the clinicians' expectations; however, the absence of a plausible explanation does not imply an inaccurate model. Especially in highly non-linear, complex models tuned to maximize accuracy, such interpretable representations reflect only a small portion of the justification. 2) Domain adaptation and transfer learning enable AI models to be trained on and applied across multiple domains, for example, a classification task based on images acquired with different acquisition hardware. 3) Federated learning enables large-scale models to be learned without exposing sensitive personal health information. Unlike centralized AI learning, where the learning machine has access to the entire training data set, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant cornerstone and state-of-the-art research in the field, and discusses perspectives.
Keywords: Domain adaptation; explainable artificial intelligence; federated learning
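
The federated learning process summarized in the abstract, in which participating sites exchange only parameter updates rather than patient records, can be illustrated with a minimal sketch. The code below is not taken from the reviewed work; it assumes a simple logistic-regression model and a FedAvg-style weighted average of locally trained weights, and uses only NumPy. The synthetic site data, learning rate, and number of rounds are illustrative placeholders.

import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    # One round of local training: plain logistic regression fitted by gradient descent.
    w = global_weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        grad = X.T @ (p - y) / len(y)      # gradient of the average log-loss
        w -= lr * grad
    return w

def federated_round(global_weights, site_datasets):
    # FedAvg-style aggregation: each site trains locally and returns only its
    # updated weights; the server averages them, weighted by local sample count.
    updates, sizes = [], []
    for X, y in site_datasets:             # raw data never leaves the site
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=sizes)

# Synthetic example with three "hospitals" holding private tabular data.
rng = np.random.default_rng(0)
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    sites.append((X, y))

weights = np.zeros(5)
for _ in range(20):                        # iterative rounds of parameter exchange
    weights = federated_round(weights, sites)
print("Aggregated model weights:", np.round(weights, 3))

Only the weight vectors cross site boundaries in this sketch; in practice, the same pattern is applied to the parameters of much larger models and is typically combined with secure aggregation or differential privacy.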