Person Re-Identification Method Based on a Strong Feature Fusion Network
Cite this article: Liu Yujie, Zhou Caiyun, Li Zongmin, Li Hua. Person Re-Identification Method Based on a Strong Feature Fusion Network [J]. Journal of Computer-Aided Design & Computer Graphics, 2021, 33(2): 232-240.
Authors: Liu Yujie, Zhou Caiyun, Li Zongmin, Li Hua
Affiliations: College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580; Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190; University of Chinese Academy of Sciences, Beijing 100049
Funding: Natural Science Foundation of Shandong Province; National Natural Science Foundation of China
Abstract: Person re-identification is hampered by occlusion, background clutter, illumination changes, pose variation, and detection errors, so a robust pedestrian feature representation is increasingly important for retrieving the correct person. To exploit the advantages of feature alignment and metric learning, local spatial semantic features are analyzed further. First, at the feature level: (1) a spatial transformer structure is embedded in the ResNet50 framework to adaptively align the spatial features of local regions, resolving the spatial semantic inconsistency caused by misaligned local regions; (2) a strong feature fusion network is designed on top of the aligned local features, making full use of the correlations among semantic cues to extract detailed image features. Then, at the loss-function level: a ranking-matrix method is proposed to select regional sample pairs, and a local triplet loss is designed and trained jointly with a regularized classification loss, so that the fused strong features are fully exploited and efficient metric learning is achieved. Finally, combined with an existing re-ranking algorithm, the proposed method further improves Rank-1 and mAP retrieval accuracy; experimental results on the person re-identification benchmark Market-1501 demonstrate its effectiveness.
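As a rough illustration of the loss design described above (not the paper's code; the margin value, feature size, and toy batch are assumptions), the sketch below builds a pairwise distance matrix over a batch, uses it to pick the hardest positive and negative for each anchor in the spirit of a ranking-matrix selection, and combines the resulting triplet loss with a classification loss.

```python
# Minimal sketch: distance-matrix-based batch-hard triplet loss + classification loss.
# All names and hyper-parameters are illustrative, not the paper's implementation.
import torch
import torch.nn.functional as F


def pairwise_distance_matrix(feats: torch.Tensor) -> torch.Tensor:
    """Euclidean distance matrix for a batch of feature vectors (B x D)."""
    sq = (feats ** 2).sum(dim=1, keepdim=True)              # B x 1
    dist_sq = sq + sq.t() - 2.0 * feats @ feats.t()         # B x B
    return dist_sq.clamp(min=1e-12).sqrt()


def batch_hard_triplet_loss(feats, labels, margin=0.3):
    """For each anchor, take the farthest same-ID sample and the closest
    different-ID sample from the distance matrix (batch-hard mining)."""
    dist = pairwise_distance_matrix(feats)                   # B x B
    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)     # B x B bool
    d_ap = dist.masked_fill(~same_id, float('-inf')).max(dim=1).values  # hardest positive
    d_an = dist.masked_fill(same_id, float('inf')).min(dim=1).values    # hardest negative
    return F.relu(d_ap - d_an + margin).mean()


if __name__ == "__main__":
    # Toy batch: 8 samples, 256-D features, 4 identities (2 images each).
    feats = F.normalize(torch.randn(8, 256), dim=1)
    labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
    logits = torch.randn(8, 4)                               # hypothetical ID-classifier output
    loss = batch_hard_triplet_loss(feats, labels) + F.cross_entropy(logits, labels)
    print(loss.item())
```

In practice the feature vectors would come from the fused local/global representation rather than random tensors; the selection step simply reads the hardest pairs off the distance (ranking) matrix row by row.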

Keywords: spatial semantic feature; strong feature fusion network; ranking matrix; local triplet loss

Strong Feature Fusion Networks for Person Re-Identification
Liu Yujie, Zhou Caiyun, Li Zongmin, Li Hua. Strong Feature Fusion Networks for Person Re-Identification [J]. Journal of Computer-Aided Design & Computer Graphics, 2021, 33(2): 232-240.
Authors: Liu Yujie, Zhou Caiyun, Li Zongmin, Li Hua
Affiliation: College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580; Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190; University of Chinese Academy of Sciences, Beijing 100049
Abstract: With the development of deep learning, the performance of person re-identification (Re-ID) has improved significantly, yet it remains a challenging task because of large variations across pedestrians caused by occlusion, background clutter, pose, illumination, detection failure, etc. Robust feature representation is essential for retrieving the true pedestrian. Instead of relying on external cues, this paper exploits robust aligned features and metric learning. First, at the feature-extraction level, two contributions are made. (i) A spatial transformer network is embedded in the backbone, referred to as ResNet_STN in this paper; it resolves the inconsistency of local spatial semantic features, accurately expresses the main characteristics of the target, and achieves pedestrian alignment. (ii) A strong feature fusion network, named the Strong Feature Fusion Module (SFFM), is designed on top of the aligned local features; it makes full use of the connections between semantic cues to extract detailed image features. Then, at the metric-loss level, a third contribution is put forward. (iii) A Ranking Matrix (RM) method is proposed to select local triplet samples and compute a local triplet loss, which is combined with a regularized classification loss to train the network and unleash the discriminative ability of the learned strong representations. Finally, combined with an existing re-ranking algorithm, the proposed method further improves Rank-1 and mAP retrieval accuracy. Experimental results on the Market-1501 dataset demonstrate the effectiveness of the proposed method.
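As a rough, hypothetical illustration of the architecture outlined in the abstract (not the authors' ResNet_STN/SFFM implementation), the sketch below inserts an affine spatial transformer in front of a ResNet50 backbone and concatenates the global feature with pooled horizontal-stripe features; the stripe count, embedding size, and identity count are assumptions.

```python
# Minimal sketch: spatial transformer + ResNet50 backbone + stripe-feature fusion.
# Module names, stripe count, and feature sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class AffineSTN(nn.Module):
    """Predicts a 2x3 affine transform and resamples the input (spatial alignment)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.fc = nn.Linear(16 * 4 * 4, 6)
        # initialise to the identity transform so training starts from "no warp"
        nn.init.zeros_(self.fc.weight)
        self.fc.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.fc(self.features(x)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)


class StripedReID(nn.Module):
    """Global feature + P aligned horizontal-stripe features, concatenated and embedded."""
    def __init__(self, num_ids=751, parts=4):
        super().__init__()
        self.stn = AffineSTN()
        backbone = models.resnet50(weights=None)              # torchvision >= 0.13 API
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # B x 2048 x h x w
        self.parts = parts
        self.embed = nn.Linear(2048 * (parts + 1), 512)
        self.classifier = nn.Linear(512, num_ids)

    def forward(self, x):
        fmap = self.backbone(self.stn(x))                               # align, then extract
        global_feat = F.adaptive_avg_pool2d(fmap, 1).flatten(1)         # B x 2048
        stripes = F.adaptive_avg_pool2d(fmap, (self.parts, 1)).flatten(2)  # B x 2048 x P
        fused = torch.cat([global_feat, stripes.flatten(1)], dim=1)
        feat = self.embed(fused)                                        # fused embedding
        return feat, self.classifier(feat)                              # (metric feature, ID logits)


if __name__ == "__main__":
    model = StripedReID()
    feat, logits = model(torch.randn(2, 3, 256, 128))
    print(feat.shape, logits.shape)   # torch.Size([2, 512]) torch.Size([2, 751])
```

In a training loop, feat would feed a metric loss such as the batch-hard triplet sketch above and logits would feed the classification loss, mirroring the joint training described in the abstract.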
Keywords: spatial semantic feature; strong feature fusion networks; ranking matrix; local triplet loss
This article is indexed in CNKI, VIP (Weipu), Wanfang Data, and other databases.