Similar Documents
20 similar documents retrieved.
1.
陈伊涵  郑茜颖 《计算机应用研究》2022,39(5):1569-1572+1585
To address the lack of generality in existing fusion methods, a feature fusion network combining spatial attention and channel attention is proposed. An end-to-end fusion framework is designed and trained with a two-stage strategy: in the first stage, an auto-encoder is trained to extract image features; in the second stage, the fusion network is trained with the proposed fusion loss function. Experimental results show that the algorithm preserves the salient target features of the infrared image while retaining the details of the visible image. Both subjective and objective analyses verify its effectiveness.
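The abstract does not spell out the attention modules, but the idea of weighting encoder features with channel and spatial attention before fusing them can be sketched as follows (a minimal PyTorch sketch; the module layout, reduction ratio, and names such as `AttentionFusion` are illustrative assumptions, not the authors' code):

```python
# Minimal sketch (not the authors' code): fusing two feature maps with
# channel attention followed by spatial attention, as commonly done in
# attention-based fusion networks. Module/variable names are illustrative.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.mlp(x.mean(dim=(2, 3)))       # global average pooling -> (B, C)
        return x * w[:, :, None, None]         # re-weight each channel

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)      # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                           # re-weight each spatial location

class AttentionFusion(nn.Module):
    """Fuse infrared and visible encoder features with attention weighting."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, feat_ir, feat_vis):
        fused = self.ca(feat_ir) + self.ca(feat_vis)
        return self.sa(fused)

if __name__ == "__main__":
    f_ir, f_vis = torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128)
    print(AttentionFusion(64)(f_ir, f_vis).shape)   # torch.Size([1, 64, 128, 128])
```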

2.

Fusion of infrared and visible images combines information from two different sensors viewing the same scene. It provides highly effective information complementation and is widely used in monitoring systems and military applications. Because of the limited depth of field of an imaging device, visible images may fail to reveal targets that are obscured by poor lighting or whose background color is similar to the target. To deal with this problem, a simple and efficient fusion approach for infrared and visible images is proposed that extracts target details from the infrared image and enhances visibility in order to improve the performance of monitoring systems. The method relies on maximum and minimum operations on neutrosophic fuzzy sets. First, the image is transformed from the spatial domain to the neutrosophic domain, where it is described by three membership sets: truth, indeterminacy, and falsity. The indeterminacy in the input data is handled to produce a comprehensive fusion result. Finally, a de-neutrosophication step maps the membership values back to the normal image space. Experimental results evaluate the performance of this approach and compare it with recent image fusion methods using several objective evaluation criteria. These experiments demonstrate that the proposed method achieves outstanding visual performance and excellent objective scores.
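The neutrosophic transform and the max/min fusion rule can be illustrated with a small sketch. The exact membership definitions are not given in the abstract; the sketch below uses a common choice (local mean for truth, normalized local deviation for indeterminacy, falsity = 1 − truth) and a simple de-neutrosophication step, so it is an assumption-laden stand-in rather than the paper's method:

```python
# Illustrative sketch (assumptions, not the paper's exact formulas): map each
# image into the neutrosophic domain (truth T, indeterminacy I, falsity F),
# fuse with max/min operations, then "de-neutrosophicate" back to an image.
import numpy as np
from scipy.ndimage import uniform_filter

def to_neutrosophic(img, win=5):
    g = img.astype(np.float64)
    g_bar = uniform_filter(g, size=win)                  # local mean
    T = (g_bar - g_bar.min()) / (g_bar.max() - g_bar.min() + 1e-12)
    delta = np.abs(g - g_bar)                            # local deviation
    I = (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)
    F = 1.0 - T
    return T, I, F

def fuse_neutrosophic(ir, vis):
    T1, I1, _ = to_neutrosophic(ir)
    T2, I2, _ = to_neutrosophic(vis)
    # Max on truth keeps the stronger response; min on indeterminacy keeps the
    # more certain measurement (one simple choice of max/min rule).
    T = np.maximum(T1, T2)
    I = np.minimum(I1, I2)
    fused = (1.0 - I) * T + I * 0.5 * (T1 + T2)          # de-neutrosophication
    return (255.0 * fused).clip(0, 255).astype(np.uint8)
```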


3.
In the image fusion field, the design of deep learning-based fusion methods is far from routine: it is invariably task specific and requires careful consideration. The most difficult part of the design is to choose an appropriate strategy to generate the fused image for the specific task at hand. Devising a learnable fusion strategy is therefore a very challenging problem in the image fusion community. To address this problem, a novel end-to-end fusion network architecture (RFN-Nest) is developed for infrared and visible image fusion. We propose a residual fusion network (RFN), based on a residual architecture, to replace the traditional hand-crafted fusion strategy. A novel detail-preserving loss function and a feature-enhancing loss function are proposed to train the RFN. The fusion model is learned with a novel two-stage training strategy: in the first stage, an auto-encoder is trained based on an innovative nest connection (Nest) concept; next, the RFN is trained using the proposed loss functions. Experimental results on public-domain datasets show that our end-to-end fusion network outperforms the state-of-the-art methods in both subjective and objective evaluation. The code of our fusion method is available at https://github.com/hli1221/imagefusion-rfn-nest.
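The two-stage strategy (train an auto-encoder for reconstruction, then freeze it and train only the fusion block with the proposed losses) can be summarized schematically. In the sketch below, `encoder`, `decoder`, `rfn`, `detail_loss`, `feature_loss`, and the data loaders are placeholders assumed to be defined elsewhere; this is a schematic of the training flow, not the released RFN-Nest code:

```python
# Schematic of the two-stage training strategy: stage 1 trains the
# auto-encoder for reconstruction; stage 2 freezes it and trains only the
# fusion block with detail/feature losses.
import torch
import torch.nn.functional as F

def train_stage1(encoder, decoder, loader, epochs=4, lr=1e-4):
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    for _ in range(epochs):
        for img in loader:                       # single-modality images
            recon = decoder(encoder(img))
            loss = F.mse_loss(recon, img)        # reconstruction objective
            opt.zero_grad(); loss.backward(); opt.step()

def train_stage2(encoder, decoder, rfn, pair_loader, detail_loss, feature_loss,
                 epochs=4, lr=1e-4, alpha=1.0):
    for p in list(encoder.parameters()) + list(decoder.parameters()):
        p.requires_grad_(False)                  # keep the auto-encoder fixed
    opt = torch.optim.Adam(rfn.parameters(), lr=lr)
    for _ in range(epochs):
        for ir, vis in pair_loader:
            f_ir, f_vis = encoder(ir), encoder(vis)
            fused_feat = rfn(f_ir, f_vis)        # learnable fusion strategy
            fused_img = decoder(fused_feat)
            loss = detail_loss(fused_img, ir, vis) + alpha * feature_loss(fused_feat, f_ir, f_vis)
            opt.zero_grad(); loss.backward(); opt.step()
```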

4.
For the fusion of infrared and color visible images, a new fusion method based on bidimensional empirical mode decomposition (BEMD) is proposed. The source images are decomposed by BEMD into a set of intrinsic mode functions (IMFs) and a residue; the IMFs and residue of the infrared image are gray-range matched to the corresponding IMFs and residue of the visible image and then fused by weighted averaging; finally, the fused image is reconstructed by BEMD. The resulting fused image enhances the details of the infrared image while retaining natural colors similar to those of the visible image. In experiments the method is compared with traditional wavelet-transform and principal component analysis approaches, as well as with empirical mode decomposition (EMD) and complex empirical mode decomposition (CEMD) methods, and the results show that it delivers the best fusion performance.
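The fusion stage described above (gray-range matching of corresponding IMFs followed by weighted averaging and summation) can be sketched as follows, assuming the BEMD decomposition itself is provided by an external routine that returns a list of IMFs and a residue per image; the weight `w_ir` is an illustrative parameter:

```python
# Sketch of the fusion stage only (assumes a BEMD routine is available
# elsewhere). The gray range of each infrared component is matched to its
# visible counterpart before a weighted average; the reconstruction is the
# sum of the fused components.
import numpy as np

def match_gray_range(src, ref):
    """Linearly rescale `src` so its min/max match those of `ref`."""
    s_min, s_max = src.min(), src.max()
    r_min, r_max = ref.min(), ref.max()
    if s_max - s_min < 1e-12:
        return np.full_like(src, ref.mean())
    return (src - s_min) / (s_max - s_min) * (r_max - r_min) + r_min

def fuse_bemd_components(ir_imfs, ir_residue, vis_imfs, vis_residue, w_ir=0.5):
    fused = []
    for imf_ir, imf_vis in zip(ir_imfs, vis_imfs):
        imf_ir = match_gray_range(imf_ir, imf_vis)
        fused.append(w_ir * imf_ir + (1.0 - w_ir) * imf_vis)
    residue = w_ir * match_gray_range(ir_residue, vis_residue) + (1.0 - w_ir) * vis_residue
    return np.sum(fused, axis=0) + residue      # inverse BEMD = sum of components
```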

5.
To address the insufficient information content of fused infrared and visible images, target extraction is combined with the NSCT method, and a fusion rule based on local information entropy is proposed for the high-frequency target regions. The method is compared with wavelet-transform, Laplacian-pyramid, NSCT, and lifting directionlet-transform approaches, and the fused images are quantitatively analyzed with metrics such as entropy, standard deviation, and correlation coefficient. Experimental results show that the method not only increases the information content of the fused image but also extracts features from the source images more effectively and accurately, achieving good fusion performance in both subjective visual quality and objective metrics.
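A local-information-entropy rule for the high-frequency subbands can be sketched as below; the window size and histogram binning are illustrative choices, and the rule simply keeps, at each position, the coefficient whose neighborhood has the higher entropy:

```python
# Sketch of a local-information-entropy fusion rule for high-frequency
# subbands (window size and histogram binning are illustrative choices).
import numpy as np

def local_entropy(band, win=9, bins=32):
    """Shannon entropy of the gray-level distribution in a sliding window."""
    h, w = band.shape
    pad = win // 2
    q = np.digitize(band, np.linspace(band.min(), band.max() + 1e-12, bins))
    padded = np.pad(q, pad, mode='reflect')
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            p = np.bincount(patch.ravel(), minlength=bins + 2) / patch.size
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log2(p))
    return out

def fuse_highfreq_by_entropy(band_ir, band_vis):
    """Keep, at each pixel, the coefficient whose neighborhood is more informative."""
    e_ir, e_vis = local_entropy(band_ir), local_entropy(band_vis)
    return np.where(e_ir >= e_vis, band_ir, band_vis)
```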

6.
Most existing research on gender recognition focuses on visible facial images, which are sensitive to illumination changes. In this paper, we propose hybrid methods for gender recognition that fuse visible and thermal infrared images. First, the active appearance model is used to extract features from visible images, while local binary pattern features and several statistical temperature features are extracted from thermal infrared images. Second, feature selection is performed using the F-test statistic. Third, we propose using Bayesian Networks to perform explicit and implicit fusion of visible and thermal infrared image features. For explicit fusion, we propose two Bayesian Networks that perform decision-level and feature-level fusion. For implicit fusion, we propose using features from one modality as privileged information to improve gender recognition by the other modality. Finally, we evaluate the proposed methods on the Natural Visible and Infrared facial Expression spontaneous database and the Equinox face database. Experimental results show that both feature-level and decision-level fusion improve gender recognition performance compared with a single modality, and that the proposed implicit fusion methods successfully exploit the privileged information of one modality to enhance gender recognition from the other.

7.
Object matching using deformable templates (Cited: 20; self-citations: 0; by others: 20)
We propose a general object localization and retrieval scheme based on object shape using deformable templates. Prior knowledge of an object's shape is described by a prototype template, which consists of representative contours/edges, and a set of probabilistic deformation transformations on the template. A Bayesian scheme based on this prior knowledge and the edge information in the input image is employed to find a match between the deformed template and objects in the image. Computational efficiency is achieved via a coarse-to-fine implementation of the matching algorithm. Our method has been applied to retrieve objects with a variety of shapes from images with complex backgrounds. The proposed scheme is invariant to location, rotation, and moderate scale changes of the template.

8.
石国强  赵霞 《计算机应用》2020,40(10):2822-2830
To address the mismatch between the maximum classification score and the best bounding box produced by the region proposal network (RPN) in object tracking, a strongly coupled Siamese region-proposal tracking algorithm based on joint optimization (SCSiamRPN) is proposed. First, the Bounded IoU method is adopted to optimize the computation of the Intersection over Union (IoU) of positive samples, simplifying the calculation by decomposing the formula, fixing variables, substituting differences, and applying constrained approximation. Then, the loss functions are restructured: a coupling factor built on the IoU value is added to the classification loss to link the classification and bounding-box regression tasks and raise the loss of high-IoU samples, and an IoU-dependent weighting coefficient is added to the regression loss to increase the contribution of samples near the target center and improve localization accuracy. Simulation results show that SCSiamRPN achieves a precision of 0.86 and a success rate of 0.64 on the OTB100 dataset, both about 3% higher than the high-performance single-object tracker based on a Siamese region proposal network (SiamRPN). The experiments demonstrate that SCSiamRPN resolves the score/box mismatch, strengthens the coupling between the classification and regression tasks, and substantially improves tracking accuracy without sacrificing tracking speed.

9.
石国强  赵霞 《计算机应用》2020,40(10):2822-2830
To address the mismatch between the maximum classification score and the best bounding box produced by the region proposal network (RPN) in object tracking, a strongly coupled Siamese region-proposal tracking algorithm based on joint optimization (SCSiamRPN) is proposed. First, the Bounded IoU method is adopted to optimize the computation of the Intersection over Union (IoU) of positive samples, simplifying the calculation by decomposing the formula, fixing variables, substituting differences, and applying constrained approximation. Then, the loss functions are restructured: a coupling factor built on the IoU value is added to the classification loss to link the classification and bounding-box regression tasks and raise the loss of high-IoU samples, and an IoU-dependent weighting coefficient is added to the regression loss to increase the contribution of samples near the target center and improve localization accuracy. Simulation results show that SCSiamRPN achieves a precision of 0.86 and a success rate of 0.64 on the OTB100 dataset, both about 3% higher than the high-performance single-object tracker based on a Siamese region proposal network (SiamRPN). The experiments demonstrate that SCSiamRPN resolves the score/box mismatch, strengthens the coupling between the classification and regression tasks, and substantially improves tracking accuracy without sacrificing tracking speed.
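The abstract states that a coupling factor tied to IoU is added to the classification loss and an IoU-based weight to the regression loss, but not their exact forms. The sketch below shows one plausible instantiation of that idea and should be read as an assumption, not as the SCSiamRPN loss:

```python
# Sketch of IoU-coupled losses in the spirit described above (the exact
# coupling factor and weighting used by SCSiamRPN are not given in the
# abstract; the forms below are illustrative assumptions).
import torch
import torch.nn.functional as F

def coupled_cls_loss(cls_logits, labels, ious, gamma=2.0):
    """Cross-entropy whose positive samples are up-weighted by their IoU."""
    ce = F.cross_entropy(cls_logits, labels, reduction='none')
    coupling = torch.where(labels > 0, (1.0 + ious) ** gamma, torch.ones_like(ious))
    return (coupling * ce).mean()

def weighted_reg_loss(pred_boxes, gt_boxes, ious):
    """Smooth-L1 regression loss weighted by IoU, emphasizing near-center samples."""
    l1 = F.smooth_l1_loss(pred_boxes, gt_boxes, reduction='none').sum(dim=-1)
    return (ious * l1).sum() / (ious.sum() + 1e-6)
```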

10.
《微型机与应用》2018,(2):122-126
To overcome the low matching accuracy of visible and infrared images against complex backgrounds, a new edge-feature-based matching algorithm is proposed. The images are first denoised with a wavelet transform, and their edges and edge corners are then extracted. For the matching measure, a new CSS-EDGE-WHD distance is proposed: building on the weighted Hausdorff distance, it incorporates the orientation differences of edge corners and derives corresponding weights, so that both the positional information of edge points and the orientation of the curvature maxima (corners) along the edges are exploited. Different edge points receive different Hausdorff weights, and the two types of edge-point Hausdorff distances are combined into the final matching measure. Simulation experiments verify that the algorithm maintains high accuracy when matching infrared and visible images with complex backgrounds.
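The exact CSS-EDGE-WHD weights are not reproduced in the abstract; the sketch below only illustrates the general shape of an orientation-aware, weighted Hausdorff-style distance between edge point sets, with the position/orientation trade-off `lam` and the per-point weights treated as assumptions:

```python
# Sketch of an orientation-weighted Hausdorff-style distance between two edge
# point sets. Each point is (x, y, theta); the penalty combines positional
# distance with the wrapped orientation difference.
import numpy as np
from scipy.spatial.distance import cdist

def weighted_hausdorff(points_a, points_b, weights_a=None, lam=0.5):
    """Directed, weighted Hausdorff distance with an orientation term.

    points_a, points_b : (N, 3) / (M, 3) arrays of (x, y, theta in radians)
    weights_a          : per-point weights for set A (uniform if None)
    lam                : trade-off between position and orientation difference
    """
    pos_d = cdist(points_a[:, :2], points_b[:, :2])             # (N, M) positions
    dtheta = np.abs(points_a[:, 2:3] - points_b[None, :, 2])    # (N, M) angles
    dtheta = np.minimum(dtheta, 2 * np.pi - dtheta)             # wrap to [0, pi]
    d = pos_d + lam * dtheta
    nearest = d.min(axis=1)                                     # A -> B distances
    if weights_a is None:
        weights_a = np.ones(len(points_a))
    return np.sum(weights_a * nearest) / np.sum(weights_a)

def symmetric_distance(edges_ir, edges_vis, w_ir=None, w_vis=None):
    return max(weighted_hausdorff(edges_ir, edges_vis, w_ir),
               weighted_hausdorff(edges_vis, edges_ir, w_vis))
```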

11.
Objective: To overcome the limitations of hand-crafted feature extraction and fusion in traditional infrared and visible image fusion methods, as well as the inability of convolutional neural network (CNN) based methods to capture global context and their insufficient fusion of features, this paper proposes an end-to-end unsupervised image fusion network based on a vision Transformer and a grouped progressive fusion strategy. Method: First, a multi-head transposed attention module, which computes self-attention along the channel dimension, is combined with a channel attention module to form the vision Transformer; the transposed attention module avoids the quadratic growth of self-attention cost with the number of pixels, while channel attention emphasizes salient features. Second, the CNN and the designed vision Transformer are placed in parallel to form a local-global feature extraction module that captures both local detail and global context from the source images, so that the extracted features are both generic and global. In addition, to avoid information loss during fusion, the features are fused by grouping them and constructing a progressive residual structure. Finally, the fused features are decoded to obtain the fused image. Results: Experiments compare the method with six others on the TNO and RoadScene datasets. Subjectively, the method effectively fuses the complementary information of the infrared and visible images and produces high-quality fused images. In the objective quantitative analysis...
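The multi-head transposed attention described above (self-attention computed along the channel dimension, so the attention map is C×C and the cost grows linearly with the number of pixels) can be sketched as follows; layer sizes, normalization, and the learnable temperature are illustrative choices rather than the paper's exact design:

```python
# Sketch of multi-head transposed attention: attention is computed across
# channels (a C x C map) rather than across pixels, so the cost grows
# linearly with image size.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransposedAttention(nn.Module):
    def __init__(self, channels, heads=4):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.temperature = nn.Parameter(torch.ones(heads, 1, 1))

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)
        shape = (b, self.heads, c // self.heads, h * w)
        q, k, v = (t.reshape(*shape) for t in (q, k, v))
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature   # (B, heads, C/h, C/h)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).reshape(b, c, h, w)
        return self.proj(out) + x                  # residual connection

if __name__ == "__main__":
    print(TransposedAttention(64)(torch.randn(1, 64, 120, 160)).shape)
```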

12.
To address the reduced contrast and sharpness and the noise interference that occur in infrared and visible image fusion, a fusion algorithm in the DTCWT domain is proposed. The source images are first pre-enhanced; the forward DTCWT then yields low-frequency and high-frequency subband images; the low-frequency subbands are fused with a rule based on intuitionistic fuzzy sets, while the high-frequency subbands are fused with a rule based on information contrast; finally, the inverse DTCWT is applied to the fused low- and high-frequency subbands to obtain the fused image. Experimental results show that the algorithm effectively improves contrast and sharpness, suppresses noise, outperforms existing algorithms on most objective metrics, and also runs more efficiently.
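The DTCWT decomposition/reconstruction step can be sketched with the open-source `dtcwt` Python package (assumed available). For brevity the subband rules below are simple stand-ins (averaging for the low-pass band, absolute maximum for the complex high-pass coefficients); the paper's intuitionistic-fuzzy-set and information-contrast rules are not reproduced:

```python
# Sketch of DTCWT-domain fusion using the `dtcwt` package (assumed available).
# The per-subband rules here are simplified stand-ins for the paper's rules.
import numpy as np
import dtcwt

def fuse_dtcwt(ir, vis, nlevels=4):
    t = dtcwt.Transform2d()
    p_ir = t.forward(ir.astype(float), nlevels=nlevels)
    p_vis = t.forward(vis.astype(float), nlevels=nlevels)

    low = 0.5 * (p_ir.lowpass + p_vis.lowpass)               # low-frequency rule
    highs = []
    for h_ir, h_vis in zip(p_ir.highpasses, p_vis.highpasses):
        highs.append(np.where(np.abs(h_ir) >= np.abs(h_vis), h_ir, h_vis))

    fused = t.inverse(dtcwt.Pyramid(low, tuple(highs)))
    return np.clip(fused, 0, 255).astype(np.uint8)
```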

13.
Research on an effective fusion method for infrared and visible images (Cited: 1; self-citations: 1; by others: 0)
To address the weak visibility and low contrast of infrared images, a fusion algorithm for infrared and visible images based on the contourlet transform and regional energy is proposed. Multi-scale wavelet decomposition is performed first, followed by multi-directional filtering; cycle spinning (circular shifting) is introduced to suppress pseudo-Gibbs artifacts; a region-based energy fusion rule is applied, and the transform coefficients are reconstructed to obtain the final fused result; finally, metrics such as information entropy and signal-to-noise ratio are used to evaluate fusion performance. Experiments show that the method outperforms other fusion methods on both objective and subjective criteria, improves the visual quality of the fused image, and yields a sharper result.
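Cycle spinning to suppress pseudo-Gibbs artifacts can be sketched independently of the particular transform: the fusion is repeated on circularly shifted copies of the inputs and the un-shifted results are averaged. In the sketch below, `fuse_fn` stands for any shift-variant transform-domain fusion routine defined elsewhere, and the shift set is illustrative:

```python
# Sketch of cycle spinning (circular shifting): fuse shifted copies of the
# inputs, undo each shift, and average to attenuate pseudo-Gibbs artifacts.
import numpy as np

def cycle_spin_fuse(ir, vis, fuse_fn, shifts=(0, 2, 4, 6)):
    acc = np.zeros_like(ir, dtype=np.float64)
    n = 0
    for dy in shifts:
        for dx in shifts:
            ir_s = np.roll(ir, (dy, dx), axis=(0, 1))
            vis_s = np.roll(vis, (dy, dx), axis=(0, 1))
            fused_s = fuse_fn(ir_s, vis_s)
            acc += np.roll(fused_s, (-dy, -dx), axis=(0, 1))   # undo the shift
            n += 1
    return acc / n
```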

14.
Zhang  Xufan  Wang  Yong  Yan  Jun  Chen  Zhenxing  Wang  Dianhong 《Multimedia Tools and Applications》2020,79(25-26):17331-17348
Conventional saliency detection algorithms usually achieve good detection performance at the cost of high computational complexity, and most of them focus on...

15.
A simple method for the correction of the relative shift between the visible and thermal infrared GOES sensor images is introduced. It makes use of the variance operator and the cross-correlation between two patterns. Results indicate that the proposed method is very promising.
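A minimal illustration of shift estimation by cross-correlation on variance-normalized patches is given below; it is a generic stand-in for the correction procedure, not the GOES-specific implementation:

```python
# Sketch: estimating the relative (row, column) shift between two co-located
# image patches by locating the peak of their cross-correlation, computed with
# FFTs on zero-mean, variance-normalized patches.
import numpy as np

def estimate_shift(vis_patch, ir_patch):
    a = (vis_patch - vis_patch.mean()) / (vis_patch.std() + 1e-12)
    b = (ir_patch - ir_patch.mean()) / (ir_patch.std() + 1e-12)
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks larger than half the patch size to negative shifts
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)   # displacement that best aligns ir_patch with vis_patch
```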

16.
A fusion method for infrared and visible images based on local Laplacian filtering (LLF) and RBD saliency detection is proposed. Local Laplacian filtering is used to smooth the infrared image and enhance the visible image, so as to make full use of the target information in the infrared image and the detail information in the visible image. On this basis, the RBD saliency detection algorithm with enhanced background detection is applied to the infrared image to detect the target reliably. In addition, to strengthen the target information and suppress background interference, an S-curve transform is applied to the RBD result. The infrared and visible images are then decomposed by NSST into high-frequency and low-frequency components. Finally, the saliency map obtained from the S-curve transform is used to weight the fusion of the low-frequency components, while the high-frequency components are fused with an absolute-maximum rule. Experimental results show that the method produces fused images with prominent infrared targets and enhanced detail.
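The weighting step can be sketched as follows, assuming the NSST low- and high-frequency bands and the RBD saliency map are computed elsewhere; the slope and midpoint of the S-curve are illustrative parameters:

```python
# Sketch of the weighting step only: an S-curve (logistic) transform of the
# saliency map drives a weighted fusion of the low-frequency NSST bands, and
# the high-frequency bands are fused by absolute maximum.
import numpy as np

def s_curve(saliency, k=10.0, mid=0.5):
    return 1.0 / (1.0 + np.exp(-k * (saliency - mid)))      # stretch toward 0/1

def fuse_low(low_ir, low_vis, saliency):
    w = s_curve(saliency)                                    # weight from IR saliency
    return w * low_ir + (1.0 - w) * low_vis

def fuse_high(high_ir, high_vis):
    return np.where(np.abs(high_ir) >= np.abs(high_vis), high_ir, high_vis)
```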

17.
Objective: The goal of infrared and visible image fusion is to obtain a high-quality fused image that fully represents the scene. Because deep features generalize well, are robust, and have great potential, many deep-learning-based fusion methods have been proposed that perform fusion in the deep feature space and achieve good results. In addition, inspired by traditional multi-scale decomposition fusion methods, features at different scales help preserve more information from the source images. Motivated by this, a novel progressive infrared and visible image fusion framework (ProFuse) is proposed. Method: The framework extracts multi-scale features with a U-Net backbone and then fuses them progressively: it fuses high-level features containing global information with low-level features containing more detail, and it also fuses the original-resolution features (which preserve more detail) with the smaller-scale features (which preserve semantic information), finally reconstructing the fused image layer by layer. Results: Experiments compare the method with six others on the TNO (Toegepast Natuurwetenschappelijk Onderzoek) and INO (Institut National D'optique) datasets. On the six selected objective metrics, the method improves mutual information (MI) over FusionGAN (generative adversarial ...

18.
NSCT-based fusion of infrared and visible images (Cited: 3; self-citations: 0; by others: 3)
Considering the different characteristics of infrared and visible images, a fusion algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed. The source images are decomposed by NSCT into multiple scales and directions; fusion rules based on local energy and on regional features are used to obtain the low-frequency subband coefficients and the bandpass directional subband coefficients of the fused image, respectively; finally, the fused image is obtained by the inverse NSCT. Experimental results show that the algorithm yields satisfactory fused images and outperforms contourlet-transform-based image fusion.
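The low-frequency rule based on local energy can be sketched as below (window size is an illustrative choice; the regional-feature rule for the bandpass subbands is not shown):

```python
# Sketch of a local-energy rule for the low-frequency NSCT subband: the
# coefficient whose neighborhood carries more energy is kept.
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(band, win=3):
    return uniform_filter(band.astype(np.float64) ** 2, size=win)

def fuse_lowfreq_by_energy(low_ir, low_vis, win=3):
    e_ir, e_vis = local_energy(low_ir, win), local_energy(low_vis, win)
    return np.where(e_ir >= e_vis, low_ir, low_vis)
```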

19.
NSCT-based fusion of infrared and visible images (Cited: 2; self-citations: 0; by others: 2)
A fusion method for infrared and visible images based on the nonsubsampled contourlet transform is proposed. For the high-frequency coefficients obtained by decomposing the source images, an activity measure is computed that takes the imaging mechanisms of the different sensors into account, and weighted fusion is performed using the correlation between multi-resolution coefficients; the low-frequency coefficients are measured with a local-gradient activity measure and fused with a rule that combines weighting and selection. Finally, the fused image is reconstructed with the inverse nonsubsampled contourlet transform. Experimental results demonstrate the effectiveness and feasibility of the method.

20.
Jung  Kyunghwa  Ha  Ho-Gun  Jeon  In-Ho  Hong  Jaesung 《Multimedia Tools and Applications》2022,81(27):39059-39075
Conventional panorama techniques create a wide-angle image by stitching images taken from the same viewpoint. In contrast, the method proposed in this work...
