Distilling object detectors via knowledge review and feature decoupling
Cite this article: ZHANG Yao, PAN Zhisong. Distilling object detectors via knowledge review and feature decoupling[J]. Application Research of Computers, 2023, 40(5).
Authors: ZHANG Yao, PAN Zhisong
Affiliation: Army Engineering University of PLA
Funding: Supported by the National Natural Science Foundation of China (62076251)
Abstract: Current knowledge distillation algorithms distill only between corresponding layers of the teacher and student. To address this limitation and improve distillation performance, this paper first analyzed how the low-level features of the teacher model can guide the high-level features of the student model, and on this basis proposed an object detection distillation method based on knowledge review and feature decoupling. The method first aligned and fused the student's high-level features with its low-level features and extracted attention separately along the spatial and channel dimensions, so that the student's high-level features could progressively learn both the teacher's low-level and high-level knowledge. It then decoupled the foreground from the background and distilled them separately. Finally, it used pyramid pooling to compute the similarity to the teacher's features at multiple scales. Experiments on different object detection models show that the proposed method is simple yet effective and can be applied to a variety of detectors. With a ResNet-50 backbone, RetinaNet and FCOS reached 39.8% and 42.8% mAP on the COCO 2017 dataset, respectively, 2.4% and 2.3% higher than the baselines.
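The PyTorch-style sketch below illustrates the two ingredients described above: review-style fusion of a low-level student feature into a high-level one with separate channel and spatial attention, and a foreground/background-decoupled feature loss. It is a minimal illustration under assumed module names, gating functions, and loss weights; it is not the authors' released implementation.

    # Illustrative sketch only -- module names, attention gates, and weights are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ReviewFusion(nn.Module):
        """Align a low-level student feature to a high-level one and fuse them."""
        def __init__(self, low_ch, high_ch):
            super().__init__()
            self.align = nn.Conv2d(low_ch, high_ch, kernel_size=1)  # channel alignment

        def forward(self, low_feat, high_feat):
            low = self.align(low_feat)
            # spatial alignment to the high-level resolution
            low = F.interpolate(low, size=high_feat.shape[-2:], mode="nearest")
            fused = low + high_feat
            # channel attention: global-average-pooled descriptor, sigmoid gate
            ch_att = torch.sigmoid(fused.mean(dim=(2, 3), keepdim=True))
            # spatial attention: per-pixel mean over channels, sigmoid gate
            sp_att = torch.sigmoid(fused.mean(dim=1, keepdim=True))
            return fused * ch_att * sp_att

    def decoupled_feat_loss(student_feat, teacher_feat, fg_mask, alpha=1.0, beta=0.5):
        """MSE distillation computed separately on foreground and background regions.

        fg_mask: (N, 1, H, W) binary mask, 1 inside ground-truth boxes.
        alpha/beta: illustrative weights for the two terms.
        """
        diff = (student_feat - teacher_feat) ** 2
        fg = (diff * fg_mask).sum() / fg_mask.sum().clamp(min=1.0)
        bg = (diff * (1 - fg_mask)).sum() / (1 - fg_mask).sum().clamp(min=1.0)
        return alpha * fg + beta * bg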

Keywords: knowledge distillation    object detection    knowledge review    feature decoupling
Received: 2022-09-08
Revised: 2023-04-10

Distilling object detectors via knowledge review and feature decoupling
ZHANG Yao, PAN Zhisong. Distilling object detectors via knowledge review and feature decoupling[J]. Application Research of Computers, 2023, 40(5).
Authors: ZHANG Yao, PAN Zhisong
Abstract: Current knowledge distillation algorithms only distill between corresponding layers. To solve this problem and improve the performance of knowledge distillation, this paper first analyzed the guidance that the low-level features of the teacher model provide to the high-level features of the student model. On this basis, this paper proposed a knowledge distillation method for object detectors based on knowledge review and feature decoupling. Firstly, the method aligned and fused the high-level feature maps of the student model with its low-level feature maps, then extracted attention maps along the spatial and channel dimensions separately, so that the high-level features of the student could progressively learn the teacher's low-level and high-level knowledge. Afterwards, it decoupled the foreground from the background and distilled them separately. Finally, it used pyramid pooling to calculate the similarity to the teacher model's features at different scales. This paper conducted experiments on different object detectors. The experiments show that the proposed method is simple but effective and can be applied to a variety of object detectors. RetinaNet and FCOS with a ResNet-50 backbone obtained 39.8% and 42.8% mAP on the COCO 2017 dataset, respectively, which are 2.4% and 2.3% higher than the baselines.
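The final step, comparing student and teacher features at several scales via pyramid pooling, could look like the sketch below. The pooling grid sizes and the use of an MSE distance are assumptions for illustration, not necessarily the paper's exact choices.

    # Illustrative sketch of a pyramid-pooling feature comparison; pool sizes and
    # the distance function are assumptions, not the paper's exact configuration.
    import torch.nn.functional as F

    def pyramid_pool_loss(student_feat, teacher_feat, pool_sizes=(1, 2, 4)):
        """Compare student and teacher features after adaptive pooling to several scales."""
        loss = 0.0
        for s in pool_sizes:
            s_pooled = F.adaptive_avg_pool2d(student_feat, s)
            t_pooled = F.adaptive_avg_pool2d(teacher_feat, s)
            loss = loss + F.mse_loss(s_pooled, t_pooled)
        return loss / len(pool_sizes)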
Keywords: knowledge distillation  object detection  knowledge review  feature decoupling