Interpretation and Understanding in Machine Learning
Cite this article: Chen Kerui, Meng Xiaofeng. Interpretation and Understanding in Machine Learning[J]. Journal of Computer Research and Development, 2020, 57(9): 1971-1986. DOI: 10.7544/issn1000-1239.2020.20190456
Authors: Chen Kerui, Meng Xiaofeng
Affiliations: 1. School of Computer & Information Engineering, Henan University of Economics and Law, Zhengzhou 450002; 2. School of Information, Renmin University of China, Beijing 100872 (chenke0616@163.com)
Funds: National Natural Science Foundation of China; Academic Innovation Backbone Support Program of Henan University of Economics and Law
Abstract: In recent years machine learning has developed rapidly, and deep learning in particular has achieved remarkable results in image, speech, and natural language processing. The expressive power of machine learning algorithms has increased greatly, but as model complexity grows their interpretability deteriorates, and interpretability remains an open problem to this day. Models produced by these algorithms are treated as black boxes, which severely limits the use of machine learning in certain domains such as medicine and finance. Because very few survey works currently address the interpretability of machine learning, this paper categorizes, describes, and compares the existing interpretability methods: on the one hand it expounds the definition and measurement of interpretability; on the other hand, according to the object being explained, it summarizes and analyzes machine learning interpretability techniques from three aspects, namely interpretation of the model, interpretation of prediction results, and interpretation of mimic models, and it discusses the challenges and opportunities facing interpretable machine learning as well as possible future research directions.

Keywords: machine learning; interpretability; neural network; black box; mimic model

Interpretation and Understanding in Machine Learning
Chen Kerui, Meng Xiaofeng. Interpretation and Understanding in Machine Learning[J]. Journal of Computer Research and Development, 2020, 57(9): 1971-1986. DOI: 10.7544/issn1000-1239.2020.20190456
Authors: Chen Kerui, Meng Xiaofeng
Affiliation: 1. School of Computer & Information Engineering, Henan University of Economics and Law, Zhengzhou 450002; 2. School of Information, Renmin University of China, Beijing 100872
Abstract: In recent years, machine learning has developed rapidly, especially deep learning, which has achieved remarkable results in image, voice, natural language processing and other fields. The expressive ability of machine learning algorithms has been greatly improved; however, with the increase of model complexity, the interpretability of machine learning algorithms has deteriorated. So far, the interpretability of machine learning remains a challenge. The models trained by these algorithms are regarded as black boxes, which seriously hampers the use of machine learning in certain fields, such as medicine and finance. Presently, only a few works focus on the interpretability of machine learning. Therefore, this paper aims to classify, analyze and compare the existing interpretable methods: on the one hand, it expounds the definition and measurement of interpretability; on the other hand, for different interpretable objects, it summarizes and analyses various interpretability techniques of machine learning from three aspects: model understanding, prediction result interpretation and mimic model understanding. Moreover, the paper discusses the challenges and opportunities faced by interpretable machine learning methods and possible directions for future development. The proposed interpretation methods should also be useful for putting many open research questions in perspective.
Keywords: machine learning; interpretation; neural network; black box; mimic model
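
To make the mimic-model idea mentioned in the abstract concrete, the sketch below (not taken from the paper) distills a black-box classifier into a small decision tree whose rules can be read directly; the scikit-learn models and the synthetic dataset are assumptions chosen purely for illustration.

# A minimal, illustrative mimic-model (surrogate) sketch -- assumptions only,
# not the surveyed paper's method: a shallow decision tree is trained to
# imitate a black-box classifier, and its rules serve as a global explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for a real task (e.g., medical or financial records).
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box": an ensemble whose internal logic is hard to inspect directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Mimic model: fit an interpretable tree to the black box's predictions,
# not to the original labels.
mimic = DecisionTreeClassifier(max_depth=3, random_state=0)
mimic.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the mimic agrees with the black box on unseen data.
fidelity = accuracy_score(black_box.predict(X_test), mimic.predict(X_test))
print(f"fidelity to the black box: {fidelity:.2f}")
print(export_text(mimic, feature_names=[f"x{i}" for i in range(8)]))

The key design choice is that the surrogate tree is fit to the black box's predictions rather than to the true labels, so the agreement score measures fidelity to the black box, i.e., how faithfully the simple model explains the complex one.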