Fund project: Science and Technology Foundation of Guizhou Province (黔科合基础[2018]1114)
Received: 2021-01-22

Deep Transfer Active Learning Method Combining Source Domain Difference and Target Domain Uncertainty
LIU Dapeng, CAO Yongfeng, SU Caixia, ZHANG Lun. Deep Transfer Active Learning Method Combining Source Domain Difference and Target Domain Uncertainty[J]. Pattern Recognition and Artificial Intelligence, 2021, 34(10): 898-908. DOI: 10.16451/j.cnki.issn1003-6059.202110003
Authors: LIU Dapeng, CAO Yongfeng, SU Caixia, ZHANG Lun
Affiliation: School of Big Data and Computer Science, Guizhou Normal University, Guiyang 550025, China
Abstract: Training deep neural network models incurs a heavy labeling cost. To reduce this cost, a deep transfer active learning method combining source domain difference and target domain uncertainty is proposed. Starting from an initial model transferred from the source task, the target domain samples contributing most to model performance improvement are selected for labeling in each active learning iteration, using a dynamically weighted combination of source domain difference and target domain uncertainty, with the weights adjusted according to the learning stage. The concept of information extraction ratio (IER) is defined, and an IER-based batch training strategy and a T&N training strategy are proposed for the model training process. The proposed method is tested in two cross-dataset transfer learning experiments. The results show that it achieves good performance while effectively reducing annotation cost, and that the proposed training strategies optimize the allocation of computing resources during the active learning process: the model learns from samples more times in the early phases and fewer times in the later phases.
Keywords: Deep Active Learning, Deep Transfer Learning, Source Domain Difference, Target Domain Uncertainty, Information Extraction Ratio
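The sample-selection rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formula: the linear weight schedule, the function name `select_samples`, and the per-sample `uncertainty` and `difference` scores are all assumptions introduced here for clarity.

```python
def select_samples(uncertainty, difference, round_idx, total_rounds, batch_size):
    """Pick the unlabeled target-domain samples to annotate next.

    Scores each sample by a weighted sum of its source domain difference
    and its target domain uncertainty. The weight on difference decays
    linearly over the active learning rounds (a hypothetical schedule
    standing in for the paper's stage-dependent weight adjustment).
    """
    w = 1.0 - round_idx / max(total_rounds - 1, 1)  # weight on difference
    scores = [w * d + (1.0 - w) * u for u, d in zip(uncertainty, difference)]
    # Return indices of the top-scoring samples, highest score first.
    ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return ranked[:batch_size]
```

Under this schedule, early rounds favor samples that differ most from the source domain, while later rounds favor the samples the current model is most uncertain about, matching the stage-dependent weighting the abstract describes.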
This article is indexed by Wanfang Data and other databases.