Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
    
Compared with traditional visible–visible person re-identification, the modality discrepancy between visible and infrared images makes person re-identification more challenging. Existing methods rely on learning efficient transformation mechanisms from paired images to reduce the modality gap, which inevitably introduces noise. To overcome these limitations, we propose a Hierarchical Cross-modal shared Feature Network (HCFN) to mine modality-shared and modality-specific information. Since infrared images lack color and related information, we construct an Intra-modal Feature Extraction Module (IFEM) to learn content information and reduce the difference between visible and infrared images. To reduce the heterogeneous divergence, we apply a Cross-modal Graph Interaction Module (CGIM) to align and narrow the set-level distance between inter-modal images. By jointly learning the two modules, our method achieves 66.44% Rank-1 on the SYSU-MM01 dataset and 74.81% Rank-1 on the RegDB dataset, outperforming state-of-the-art methods. In addition, ablation experiments demonstrate that HCFN is at least 4.9% better than the baseline network.

3.
To address person re-identification (person re-ID) challenges in real environments caused by complex backgrounds, occlusion, viewpoint changes, and pedestrian pose variation, a person re-ID model based on an efficient channel attention (ECA) mechanism and poly-scale convolution (PSConv) is designed. A residual network first extracts global features, and a feature-fusion module based on ECA and PSConv is added at the end of the network; its output is fused with the global features. The new global features are then partitioned to obtain local features, and the final representation fuses the new global features with these local features before the loss is computed. The model is validated on the Market1501 and DukeMTMC-reID datasets: Rank-1 and mean average precision reach 94.3% and 85.2% on Market1501, and 86.3% and 75.4% on DukeMTMC-reID. The results show that the model copes with complex real-world conditions, strengthens the discriminability of pedestrian features, and effectively improves re-ID accuracy and precision.
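The ECA mechanism referenced above can be illustrated with a minimal NumPy sketch (a fixed averaging kernel stands in for the learned 1-D convolution; the shapes and kernel size `k` are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def eca_attention(x, k=3):
    """Efficient Channel Attention (ECA) sketch for a (C, H, W) feature map:
    global average pooling, a 1-D convolution across channels, then a
    sigmoid gate that rescales each channel."""
    c = x.shape[0]
    y = x.mean(axis=(1, 2))                  # squeeze: (C,)
    pad = k // 2
    yp = np.pad(y, pad, mode="edge")
    w = np.ones(k) / k                       # stand-in for the learned kernel
    conv = np.array([yp[i:i + k] @ w for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))       # sigmoid, values in (0, 1)
    return x * gate[:, None, None]           # excite: per-channel rescale

feat = np.random.rand(8, 4, 4)
out = eca_attention(feat)
print(out.shape)  # (8, 4, 4)
```

Because the gate lies in (0, 1), each channel is attenuated in proportion to its pooled response and its neighbors', which is the cross-channel interaction ECA is designed to capture without dimensionality reduction.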

4.
In vehicle re-identification (Re-ID), jointly extracting global and local information has become the mainstream approach, but many Re-ID models attend only to the richness of local information while ignoring its completeness. To address this, an algorithm based on relation fusion and feature decomposition is proposed. Starting from the spatial and channel dimensions, the features extracted by the backbone are split along three dimensions: vertical, horizontal, and channel. First, a mixed attention module (MAM) is proposed to better highlight the vehicle's foreground region. Then, to mine rich spatial feature information while directing the network to more complete regions of interest, graph-based relation fusion is applied to the vertically and horizontally split features. To give the network the ability to capture more discriminative information, feature decomposition is applied to the channel-wise local features. Finally, vehicle re-identification is performed using the global-branch features together with the robust features extracted by the local branches. Experimental results show that the algorithm achieves superior performance on two mainstream vehicle Re-ID datasets.

5.
To address the low recognition accuracy caused by inaccurate clustering in unsupervised domain-adaptive person re-identification, a method based on generative adversarial networks is proposed. First, the CNN model is optimized by adding a batch-normalization layer after the pooling layer, removing one fully connected layer, and using the Adam optimizer. Then, based on minimum-error-rate Bayesian decision theory, the clustering error rate is analyzed and the key clustering parameters are selected. Finally, a generative adversarial network is used to adjust the clustering, effectively improving recognition accuracy. In experiments with Market-1501 as the source domain and DukeMTMC-reID as the target domain, mAP and Rank-1 reach 53.7% and 71.6%, respectively.

6.
Visual attention mechanisms have attracted wide interest in academia and industry, but existing work mostly detects attention from the scene observer's viewpoint. Emerging intelligent applications instead require estimating the visual attention of an observed subject: detecting the attention of a surveillance target helps predict its subsequent behavior, and an intelligent robot must understand the intent of its interaction partner to interact effectively. Drawing on the cognitive mechanisms of subject visual attention, this paper proposes an estimation method based on progressive learning and multi-scale enhancement. The method treats the subject's field of view as a combination of geometric structure and geometric detail, and builds a hierarchical self-attention module (HSAM) to capture long-range dependencies among deep features and accommodate diverse geometric characteristics. A direction vector and a field-of-view generator yield the probability distribution of gaze points, and a feature-fusion module shares, fuses, and enhances multi-resolution features to better capture spatial context. Finally, a composite loss function estimates the correlation among gaze direction, field of view, and focus prediction. Experimental results show that the method outperforms mainstream approaches on multiple accuracy metrics for subject visual attention estimation, on both public and self-built datasets.

7.
    
Visual attention prediction for the diagnosis of Autism Spectrum Disorder (ASD), a mental disorder, has attracted growing research interest. Although multiple visual attention prediction models have been proposed, the problem remains open. In this paper, considering the shift of visual attention, we propose that an image can be viewed as a pseudo sequence, and we present a novel visual attention prediction method for ASD with hierarchical semantic fusion (ASD-HSF). Specifically, the proposed model contains a Spatial Feature Module (SFM) and a Pseudo Sequential Feature Module (PSFM). The SFM extracts spatial semantic features with a fully convolutional network, while the PSFM, implemented with two Convolutional Long Short-Term Memory networks (ConvLSTMs), learns pseudo sequential features. The outputs of the two modules are fused into the final saliency map, which carries both spatial semantic and pseudo sequential information. Experimental results show that the proposed model not only outperforms ten state-of-the-art general saliency prediction counterparts, but also ranks first under four ASD saliency prediction metrics and second under the remaining ones.

8.
To address shortcomings of the prevailing detection-based multi-object tracking paradigm, this paper builds on the DeepSort algorithm to reduce the frequent target ID switches caused by occlusion. First, the appearance model is improved: the original wide residual network is replaced with a ResNeXt network, a convolutional attention mechanism is introduced into the backbone, and a new person re-identification network is constructed, so that the model focuses on key target information and extracts more effective features. YOLOv5 is then adopted as the detector, with an additional detection layer so the model adapts to targets of different sizes, and coordinate attention in the backbone to further improve detection accuracy. In multi-object tracking experiments on the MOT16 dataset, multi-object tracking accuracy reaches 66.2% and multi-object tracking precision reaches 80.8%, while meeting real-time tracking requirements.
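The appearance-matching step in DeepSort-style trackers can be sketched as a gated cosine-distance cost matrix (a generic illustration of the technique, not the paper's exact model; the threshold value is an assumption):

```python
import numpy as np

def cosine_gate(track_feats, det_feats, max_dist=0.2):
    """Cosine-distance cost between track and detection embeddings;
    pairs above the gating threshold are made prohibitively expensive
    so the assignment step never matches them."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    cost = 1.0 - t @ d.T                     # (num_tracks, num_dets)
    cost[cost > max_dist] = 1e5              # gate implausible assignments
    return cost

tracks = np.array([[1.0, 0.0], [0.0, 1.0]])  # stored track embeddings
dets = np.array([[0.9, 0.1]])                # new detection embedding
cost = cosine_gate(tracks, dets)
print(cost)  # track 0 stays matchable; track 1 is gated out
```

A stronger re-ID embedding (such as the ResNeXt-based one described above) lowers the cosine distance for true matches under occlusion, which is what reduces ID switches.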

9.
    
Attention is a simple and effective mechanism for enhancing the discriminative performance of person re-identification (Re-ID). Most previous attention-based works have difficulty eliminating the negative effects of meaningless information. In this paper, a universal module named Cross-level Reinforced Attention (CLRA) is proposed to alleviate this issue. Firstly, we fuse features of different semantic levels using adaptive weights. The fused features, containing richer spatial and semantic information, better guide the subsequent attention module. Then, we combine hard and soft attention to improve the extraction of important information in the spatial and channel domains. Through the CLRA, the network can aggregate and propagate more discriminative semantic information. Finally, we integrate the CLRA with the Harmonious Attention CNN (HA-CNN) to form a novel Cross-level Reinforced Attention CNN (CLRA-CNN) for person Re-ID. Experimental results on several public benchmarks show that the proposed method achieves state-of-the-art performance.

10.
To address incomplete rain removal across different environments and severe loss of image detail in existing deraining networks, this paper proposes a multi-branch feature-cascade image deraining network based on attention mechanisms. The model combines several attention mechanisms to form multi-branch subnetworks of different types; spatial detail and contextual feature information are propagated bottom-up through the network and fused in cascade. A stage-wise attention fusion mechanism built between the branches reduces the loss of image information during feature extraction and retains feature information to a greater extent, making the deraining task more efficient. Experimental results show that the algorithm outperforms the comparison methods on objective evaluation metrics and delivers clearly improved subjective visual quality: it removes rain streaks of different densities more accurately while better preserving detail in the image background.

11.
    
Most person re-identification methods are studied under various assumptions, yet viewpoint variations and occlusions are often encountered in practical scenarios and are prone to causing intra-class variance. In this paper, we propose a multiscale global-aware channel attention (MGCA) model to solve this problem. It imitates human visual perception, which tends to observe things from coarse to fine. The core of our approach is a multiscale structure containing two key elements: the global-aware channel attention (GCA) module for capturing global structural information, and the adaptive selection feature fusion (ASFF) module for highlighting discriminative features. Moreover, we introduce a bidirectional guided pairwise metric triplet (BPM) loss to reduce the effect of outliers. Extensive experiments on Market-1501, DukeMTMC-reID, and MSMT17 achieve state-of-the-art mAP results. In particular, our approach exceeds the previous best method by 2.0% on the most challenging MSMT17 dataset.
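The BPM loss above builds on the standard triplet margin loss; the plain (non-bidirectional) form can be sketched as follows, with an illustrative margin value:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard triplet margin loss: pull same-identity features together
    and push different-identity features at least `margin` further away."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

a = np.array([1.0, 0.0])   # anchor
p = np.array([1.1, 0.0])   # same identity, nearby
n = np.array([0.0, 1.0])   # different identity, far away
print(triplet_loss(a, p, n))  # 0.0 -- margin constraint already satisfied
```

An outlier positive that sits far from its anchor inflates `d_ap` and dominates the loss, which is the failure mode a guided pairwise-metric variant such as BPM is designed to soften.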

12.
    
Due to factors such as camera angle and pose changes, salient local features are often suppressed in person re-identification tasks. Moreover, many existing person re-identification methods do not consider the relations between features. To address these issues, this paper proposes two novel approaches. (1) To address confusion and misidentification when local features of different individuals have similar attributes, we design a contextual relation network that establishes relationships between local features and contextual features, so that all local features of the same person contain contextual information. (2) To fully and correctly express key local features, we propose an uncertainty-guided joint attention module, which focuses on the joint representation of individual pixels and local spatial features to enhance the credibility of local features. Finally, our method achieves competitive performance against state-of-the-art methods on four widely recognized datasets.

13.
A Vehicle Re-identification Method Based on Feature Fusion and the L-M Algorithm    Cited: 1 (self-citations: 0, other citations: 1)
Vehicle re-identification is the task of matching the same vehicle across images captured under different external conditions in video surveillance systems. Because images of the same vehicle from different cameras differ greatly, a single feature can hardly describe the images stably; multiple features are therefore fused to extract vehicle characteristics. The method fuses the HSV features and LBP features of the vehicle image, applies singular value decomposition to the fused feature matrix, and extracts the singular values. To address the slow convergence and limited accuracy of the traditional BP algorithm when training the re-identification model, the Levenberg-Marquardt adaptive adjustment algorithm is used to optimize the BP neural network. Experimental results show that the method reaches a same-identity recognition rate of 97.5% and is robust to both illumination and viewpoint changes.
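The fusion-then-SVD step described above can be sketched in NumPy; since the exact layout of the paper's fused feature matrix is not specified, stacking the two histograms as matrix columns is an assumption:

```python
import numpy as np

def fused_descriptor(hsv_hist, lbp_hist):
    """Stack two feature histograms as columns of a fusion matrix and
    keep its singular values as a compact, normalized descriptor."""
    m = np.stack([hsv_hist, lbp_hist], axis=1)   # (bins, 2) fusion matrix
    s = np.linalg.svd(m, compute_uv=False)       # singular values, descending
    return s / s.sum()

hsv = np.random.rand(16)   # stand-in HSV color histogram
lbp = np.random.rand(16)   # stand-in LBP texture histogram
desc = fused_descriptor(hsv, lbp)
print(desc.shape)  # (2,)
```

The singular values summarize how the two feature types jointly vary, giving a low-dimensional input for the downstream BP-network classifier.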

14.
Most recent occluded person re-identification (re-ID) methods either learn global features directly from pedestrian images, or use additional pose estimation and semantic parsing models to learn local features, while ignoring the relationship between global and local features; as a result, different pedestrians with similar attributes are incorrectly retrieved as the same pedestrian. Moreover, learning local features with auxiliary models incurs additional computational cost. In this work, we propose a Transformer-based dual-branch feature learning model for occluded person re-ID. Firstly, we propose a global–local feature interaction module to learn the relationship between global and local features, enriching the information in pedestrian features. Secondly, we randomly erase local areas in the input image to simulate real occlusion, improving the model's adaptability to occluded scenes. Finally, a split group module is introduced to explore the locally distinguishing features of pedestrians. Numerous experiments validate the effectiveness of the proposed method.

15.
    
Many previous occluded person re-identification (re-ID) methods use additional clues (pose estimation or semantic parsing models) to focus on non-occluded regions. However, these methods rely heavily on the performance of the additional clues and often capture pedestrian features through complex, purpose-built modules. In this work, we propose a simple Fine-Grained Multi-Feature Fusion Network (FGMFN) to extract discriminative features, a dual-branch structure consisting of a global feature branch and a partial feature branch. Firstly, we utilize a chunking strategy to extract multi-granularity features, making the pedestrian information they contain more comprehensive. Secondly, a spatial transformer network is introduced to localize the pedestrian's upper body, followed by a relation-aware attention module to explore fine-grained information. Finally, we fuse the features from the two branches to obtain a more robust pedestrian representation. Extensive experiments verify the effectiveness of our method under occlusion.
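The chunking strategy for multi-granularity features can be illustrated with a simple stripe-pooling sketch (stripe counts and tensor shapes are illustrative assumptions; the paper's exact partitioning is not specified):

```python
import numpy as np

def stripe_features(fmap, parts):
    """Split a (C, H, W) feature map into `parts` horizontal stripes and
    average-pool each stripe into one of (parts, C) local descriptors."""
    c, h, w = fmap.shape
    bounds = np.linspace(0, h, parts + 1).astype(int)
    return np.stack([fmap[:, bounds[i]:bounds[i + 1], :].mean(axis=(1, 2))
                     for i in range(parts)])

fmap = np.random.rand(32, 12, 4)
g = stripe_features(fmap, 1)    # coarsest granularity: one global stripe
p3 = stripe_features(fmap, 3)   # finer granularity: three body stripes
print(g.shape, p3.shape)  # (1, 32) (3, 32)
```

Descriptors pooled at several granularities can then be concatenated, so occluding one stripe leaves the others, and the global view, intact.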

16.
Person re-identification retrieves a specified target from multiple data sources. The large gap between infrared (IR) and visible (VIS) images makes visible–infrared cross-modal retrieval one of the main challenges. To retain the same retrieval capability in low light or at night, a cross-modal model that incorporates infrared images is required. This paper proposes a new method that guides attention with human-body keypoints: global features are split into local features under keypoint guidance, and the generated local masks are used to retrain the original model, strengthening its attention to different local information. With this method, the model better understands and exploits the key body parts in an image, improving the accuracy of person re-identification.

17.
18.
Traditional person re-identification methods usually describe pedestrian images with hand-crafted visual features. However, a single type of visual feature can hardly characterize the image information fully, leading to unsatisfactory recognition performance. This paper proposes a person re-identification method that fuses different types of features via canonical correlation analysis. The method first applies canonical correlation analysis to two different types of pedestrian visual features to obtain the maximally correlated subspace between the two feature sets. The transformed features are then fused with two strategies, concatenation and addition, and the fused features are used for re-identification. Experimental results show that, compared with single pedestrian image features and simple multi-feature concatenation, the proposed fusion achieves better recognition rates while keeping the feature dimension low.
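The CCA projection and the two fusion strategies can be sketched in NumPy as follows (a textbook whitening-plus-SVD formulation with a small regularizer; the dimensions and data are synthetic stand-ins):

```python
import numpy as np

def cca_project(X, Y, d=1, reg=1e-6):
    """Find d-dimensional projections of feature sets X (n, p) and
    Y (n, q) that are maximally correlated (classical CCA)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Sxx = Xc.T @ Xc / n + reg * np.eye(Xc.shape[1])
    Syy = Yc.T @ Yc / n + reg * np.eye(Yc.shape[1])
    Sxy = Xc.T @ Yc / n
    Lx, Ly = np.linalg.cholesky(Sxx), np.linalg.cholesky(Syy)
    # Whitened cross-covariance; its top singular pairs give the
    # canonical directions for each feature set.
    M = np.linalg.inv(Lx) @ Sxy @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    A = np.linalg.inv(Lx).T @ U[:, :d]
    B = np.linalg.inv(Ly).T @ Vt[:d].T
    return Xc @ A, Yc @ B

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = X @ rng.normal(size=(5, 4))                  # Y is a linear map of X
Zx, Zy = cca_project(X, Y, d=1)
fused_concat = np.concatenate([Zx, Zy], axis=1)  # concatenation fusion
fused_add = Zx + Zy                              # addition fusion
```

Concatenation preserves both projected views at twice the dimension, while addition keeps the dimension low, matching the trade-off the abstract evaluates.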

19.
A cross-modal person re-identification network based on multi-granularity fusion and cross-scale perception is proposed, which effectively extracts pedestrian image features and reduces the modality discrepancy between images. First, a multi-scale feature-fusion attention mechanism is proposed and a multi-granularity non-local fusion framework is designed to effectively fuse image features across modalities and scales. Second, a cross-scale feature-perception strategy is proposed to reduce the impact on pedestrian discrimination of irrelevant noise caused by viewpoint and background changes. Finally, to compensate for insufficient pedestrian feature information, a parallel dilated-convolution residual module is designed to capture richer pedestrian features. The method is compared with state-of-the-art cross-modal person re-identification methods on two standard public datasets. Experimental results show that it reaches 75.9% Rank-1 and 73.3% mean average precision (mAP) in all-search mode on the SYSU-MM01 dataset, and 93.7% Rank-1 and 89.3% mAP in visible-to-infrared (VIS to IR) mode on the RegDB dataset, outperforming the compared methods and fully confirming its effectiveness.

20.
Since pedestrians in real scenes are easily affected by background, occlusion, and pose, a person re-identification method based on attention mechanisms and locally correlated features is proposed to obtain more discriminative features from pedestrian images. First, an attention module is embedded in the network framework to focus on the most expressive features in the image. Then, locally correlated features are derived from the relations between adjacent image regions and combined with global features. On the Market1501 and DukeMTMC-ReID datasets, the method reaches Rank-1 of 95.3% and 90.1%, respectively. The results demonstrate that the method fully captures discriminative feature information and gives the model strong recognition ability.
