Survey on Interpretability of Deep Models for Image Classification

Citation: YANG Peng-Bo, SANG Ji-Tao, ZHANG Biao, FENG Yao-Gong, YU Jian. Survey on interpretability of deep models for image classification[J]. Journal of Software, 2023, 34(1): 230-254.
Authors: YANG Peng-Bo  SANG Ji-Tao  ZHANG Biao  FENG Yao-Gong  YU Jian
Affiliation: School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China; Institute of Artificial Intelligence, Beijing Jiaotong University, Beijing 100044, China
Funding: National Key Research and Development Program of China (2017YFC1703506); National Natural Science Foundation of China (61632004); Fundamental Research Funds for the Central Universities (2020YJS027)
Abstract: Deep learning has made great progress in fields such as computer vision, natural language processing, and speech recognition, and deep models achieve higher accuracy than traditional machine learning algorithms on many tasks. However, as end-to-end, highly nonlinear, and complex models, deep models are less interpretable than traditional machine learning algorithms, which hinders the deployment of deep learning in real-world applications. Research on the interpretability of deep models is therefore both significant and necessary, and in recent years many researchers have proposed algorithms addressing this problem. For image classification tasks, this survey divides interpretability algorithms into global and local ones. By the granularity of the explanation, global interpretability algorithms are further divided into model-level and neuron-level algorithms, and local interpretability algorithms into pixel-level, concept-level, and image-level feature algorithms. Based on this framework, the survey summarizes common interpretability algorithms for deep models and the related evaluation metrics, and discusses the current challenges and future research directions of interpretability research. It argues that research on the interpretability and theoretical foundations of deep models is a necessary route to opening the black box of deep models, and that interpretability algorithms have great potential to help solve other problems of deep models, such as fairness and generalization.
Keywords: deep learning; interpretability; image classification; feature
Received: 2020-12-14
Revised: 2021-03-21
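The survey itself catalogs these algorithm families; as a minimal illustration of what a pixel-level local explanation looks like in practice, the sketch below computes a vanilla gradient saliency map, one of the simplest pixel-level attribution methods. The choice of model (torchvision's resnet18), the input size, and the dummy input are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a pixel-level local explanation: a vanilla gradient
# saliency map. Assumes PyTorch and torchvision are installed; the model
# and input here are placeholders, not specifics from the survey.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def gradient_saliency(model, image):
    """Return an (H, W) saliency map for the model's top-1 class.

    image: float tensor of shape (3, H, W), already normalized.
    """
    x = image.unsqueeze(0).requires_grad_(True)  # add batch dim, track grads
    logits = model(x)
    top_class = logits.argmax(dim=1)
    # Gradient of the top-class score with respect to the input pixels
    logits[0, top_class].backward()
    # Aggregate over color channels: max absolute gradient per pixel
    return x.grad.detach().abs().max(dim=1).values[0]

# Usage with a random input (replace with a real, normalized image):
dummy = torch.randn(3, 224, 224)
smap = gradient_saliency(model, dummy)
print(smap.shape)  # torch.Size([224, 224])
```

Large gradient magnitudes mark pixels whose perturbation most changes the predicted class score, which is exactly the "pixel-level feature" notion of local interpretability in the taxonomy above; concept-level and image-level methods attribute the prediction to higher-level units instead.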
