Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
This study proposes a novel unsupervised network for the IR/VIS fusion task, termed RXDNFuse, which is based on an aggregated residual dense network. In contrast to conventional fusion networks, RXDNFuse is designed as an end-to-end model that combines the structural advantages of ResNeXt and DenseNet. Hence, it overcomes the limitations of manually designing complicated activity-level measurements and fusion rules. Our method formulates image fusion as the problem of maintaining the structural and intensity proportions of the IR/VIS images. Using comprehensive feature extraction and combination, RXDNFuse automatically estimates the information preservation degrees of the corresponding source images and extracts hierarchical features to achieve effective fusion. Moreover, we design two loss function strategies to optimize the similarity constraint and the network parameter training, further improving the quality of detailed information. We also generalize RXDNFuse to fuse images with different resolutions and RGB-scale images. Extensive qualitative and quantitative evaluations reveal that our results effectively preserve abundant textural details and highlighted thermal radiation information. In particular, our results form a comprehensive representation of scene information that is more in line with the human visual perception system.
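As a rough illustration of the "aggregated residual dense" idea, dense connectivity over grouped (ResNeXt-style) convolutions with a residual shortcut, a minimal PyTorch block might look like the following; the channel widths, cardinality, and depth are illustrative assumptions, not the RXDNFuse configuration.

```python
# Illustrative sketch of an aggregated residual dense block: DenseNet-style
# concatenation over grouped (ResNeXt-style) convolutions, plus a residual shortcut.
# Channel counts, cardinality and depth are assumptions, not the paper's settings.
import torch
import torch.nn as nn

class AggResidualDenseBlock(nn.Module):
    def __init__(self, channels=64, growth=32, cardinality=8, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels                              # channels must stay divisible by cardinality
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1, groups=cardinality),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth                           # dense connectivity grows the input width
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)  # 1x1 conv restores the width

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))   # reuse all previous features
        return x + self.fuse(torch.cat(feats, dim=1))      # residual shortcut

# Example: one 64-channel feature map passes through the block unchanged in shape.
block = AggResidualDenseBlock()
out = block(torch.randn(1, 64, 128, 128))   # -> torch.Size([1, 64, 128, 128])
```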

2.
Benefitting from the strong feature extraction capability of deep learning, infrared and visible image fusion has made great progress. Since infrared and visible images are obtained by different sensors with different imaging mechanisms, there exists a domain discrepancy, which becomes a stumbling block for effective fusion. In this paper, we propose a novel self-supervised feature adaption framework for infrared and visible image fusion. We implement a self-supervised strategy that facilitates the backbone network to extract adapted features while retaining vital information by reconstructing the source images. Specifically, we first adopt an encoder network to extract features with adaption. Then, two decoders with attention mechanism blocks are utilized to reconstruct the source images in a self-supervised way, forcing the adapted features to contain the vital information of the source images. Further, considering the case where source images contain low-quality information, we design a novel infrared and visible image fusion and enhancement model, improving the fusion method's robustness. Experiments are conducted to evaluate the proposed method qualitatively and quantitatively, showing that it achieves state-of-the-art performance compared with existing infrared and visible image fusion methods. Results are available at https://github.com/zhoafan/SFA-Fuse.
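A minimal sketch of the self-supervised reconstruction idea, one shared encoder and two decoders that reconstruct the source images, might look like this in PyTorch; the layer sizes and the simple channel-attention block are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch of self-supervised feature adaption by reconstruction: one shared
# encoder, two decoders (one per modality) with a simple channel-attention block.
# All sizes and the attention design are illustrative assumptions.
import torch
import torch.nn as nn

def conv(in_c, out_c):
    return nn.Sequential(nn.Conv2d(in_c, out_c, 3, padding=1), nn.ReLU(inplace=True))

class ChannelAttention(nn.Module):
    def __init__(self, c, r=4):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(inplace=True),
                                 nn.Linear(c // r, c), nn.Sigmoid())
    def forward(self, x):
        w = self.mlp(x.mean(dim=(2, 3)))       # global average pooling -> channel weights
        return x * w[:, :, None, None]

class Decoder(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        self.att = ChannelAttention(c)
        self.body = nn.Sequential(conv(c, c), nn.Conv2d(c, 1, 3, padding=1))
    def forward(self, f):
        return self.body(self.att(f))

encoder = nn.Sequential(conv(1, 32), conv(32, 32))
dec_ir, dec_vis = Decoder(), Decoder()

ir, vis = torch.rand(2, 1, 1, 64, 64)          # dummy single-image sources
# Reconstruction losses force the adapted features to keep the vital source information.
loss = nn.functional.l1_loss(dec_ir(encoder(ir)), ir) + \
       nn.functional.l1_loss(dec_vis(encoder(vis)), vis)
loss.backward()
```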

3.
Objective: In deep-learning-based infrared and visible image fusion, multi-scale decomposition is an important way to extract features at different scales. To address the coarse scale settings of traditional multi-scale decomposition methods, an improved image fusion algorithm based on octave convolution is proposed. Method: The fusion method consists of four parts: an encoder, feature enhancement, a fusion strategy, and a decoder. First, the improved encoder extracts low-frequency, sub-low-frequency, and high-frequency features of the source images at multiple scales; these features are strengthened from the top level down to the bottom level. The features are then fused according to the corresponding fusion strategies. Finally, the fused deep features are reconstructed into an information-rich fused image by the decoder designed in this paper. Result: Experiments compare the algorithm with nine image fusion algorithms on the TNO and RoadScene datasets. In subjective evaluation, the proposed algorithm fully preserves the effective information of the source images, and the fusion results accord with human visual perception. In objective evaluation, on the TNO dataset the algorithm performs best on five metrics: entropy, standard deviation, visual information fidelity, mutual information, and feature mutual information based on wavelet-extracted local features, improving on the best values of the nine compared methods by 0.54%, 4.14%, 5.01%, 0.55%, and 0.68%, respectively. On the RoadScene dataset the algorithm achieves the best values on four metrics: entropy, standard deviation, visual information fidelity, and mutual information, improving on the best values of the nine compared methods by ...
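The octave-convolution building block at the heart of such an encoder can be sketched as follows; this simplified layer uses a 0.5 high/low split and assumed channel sizes, and does not reproduce the paper's improved three-band (low, sub-low, high) design.

```python
# Simplified octave convolution: features are split into a full-resolution
# high-frequency branch and a half-resolution low-frequency branch, with
# cross-branch information exchange. Channel sizes and the 0.5 ratio are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctaveConv(nn.Module):
    def __init__(self, in_c, out_c, alpha=0.5):
        super().__init__()
        lo_in, lo_out = int(in_c * alpha), int(out_c * alpha)
        hi_in, hi_out = in_c - lo_in, out_c - lo_out
        self.hh = nn.Conv2d(hi_in, hi_out, 3, padding=1)   # high -> high
        self.hl = nn.Conv2d(hi_in, lo_out, 3, padding=1)   # high -> low (after pooling)
        self.ll = nn.Conv2d(lo_in, lo_out, 3, padding=1)   # low  -> low
        self.lh = nn.Conv2d(lo_in, hi_out, 3, padding=1)   # low  -> high (then upsampled)

    def forward(self, x_h, x_l):
        y_h = self.hh(x_h) + F.interpolate(self.lh(x_l), scale_factor=2, mode="nearest")
        y_l = self.ll(x_l) + self.hl(F.avg_pool2d(x_h, 2))
        return y_h, y_l

# Example: 32 high-frequency channels at full resolution, 32 low-frequency at half.
oct_conv = OctaveConv(in_c=64, out_c=64)
y_h, y_l = oct_conv(torch.randn(1, 32, 128, 128), torch.randn(1, 32, 64, 64))
```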

4.
A variation-based image fusion method is first applied to fuse the nighttime low-light cloud images and the infrared cloud images from the DMSP meteorological satellite; a fast C-V model image segmentation method is then applied to the fused image to segment the clouds. The results show that, compared with the source images, the fused image contains more information about the clouds, with clearer cloud bodies and finer texture. On this basis, the cloud segmentation is more complete and outperforms segmentation performed directly on the individual channel images.
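A compact sketch of the fuse-then-segment pipeline is shown below, with a plain weighted average standing in for the variational fusion model and scikit-image's Chan-Vese implementation standing in for the fast C-V model; the file names and the 0.5/0.5 weights are assumptions.

```python
# Sketch of fuse-then-segment: a weighted average stands in for the variational
# fusion step, and scikit-image's Chan-Vese stands in for the fast C-V model.
# File names and weights are hypothetical.
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import chan_vese

vis_cloud = img_as_float(io.imread("dmsp_visible_night.png", as_gray=True))  # hypothetical file
ir_cloud = img_as_float(io.imread("dmsp_infrared.png", as_gray=True))        # hypothetical file

fused = 0.5 * vis_cloud + 0.5 * ir_cloud     # stand-in for the variational fusion model

# Chan-Vese active contour returns a boolean mask separating cloud from background.
cloud_mask = chan_vese(fused, mu=0.1, lambda1=1.0, lambda2=1.0)

io.imsave("fused_cloud.png", (fused * 255).astype(np.uint8))
io.imsave("cloud_mask.png", (cloud_mask * 255).astype(np.uint8))
```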

5.
Image fusion merges the complementary information from multi-sensor data into a new image for purposes such as target surveillance and recognition. This paper presents a wavelet-transform-based image fusion method and analyzes the fusion results of visible and infrared microscopic images. The fusion performs well: the resulting image is clear, and relatively complete information can be obtained from a single image. The method therefore has broad application prospects.

6.
Objective: To address the weakened target information, unclear background details, blurred edges, and low fusion efficiency in image fusion, and to make full use of the useful features of the source images, a two-scale image fusion method based on saliency analysis and spatial consistency is proposed, combining two-scale decomposition with visual-saliency-based fusion weights. Method: The source images are decomposed at two scales with a mean filter, yielding base-layer and detail-layer information. The base layers are fused with a weighted-average rule; for the detail layers, an initial weight map is first obtained through saliency analysis and then refined with guided filtering to produce the final weight map that guides the weighted fusion. The fused image is obtained by two-scale reconstruction. Result: Considering the different characteristics of traditional and deep-learning methods, the proposed method is evaluated both subjectively and objectively on public datasets such as TNO. Subjectively, the method effectively extracts and fuses the important information of the source images, producing fused images of high quality with a natural and clear visual appearance. Objectively, the experiments verify its effectiveness in improving fusion quality. In quantitative comparison with various fusion results, it achieves the best average accuracy in average gradient, edge intensity, spatial frequency, feature mutual information, and cross entropy; compared with deep-learning methods, the mean values of entropy, average gradient, edge intensity, spatial frequency, feature mutual information, and cross entropy are improved by 6.87%, 91.28%, 91.45%, 85.10%, 0.18%, and 45.45%, respectively. Conclusion: The experimental results show that the proposed method not only significantly enhances targets, background details, and edges, but also exploits the useful features of the source images quickly and effectively.
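A minimal NumPy/OpenCV sketch of this two-scale pipeline follows, assuming a mean filter for decomposition, an absolute-Laplacian saliency measure, and a plain box-filter guided filter; all parameter values are illustrative, not the paper's settings.

```python
# Two-scale fusion with saliency-derived, guided-filter-refined detail weights.
# Filter sizes and parameters are illustrative assumptions.
import cv2
import numpy as np

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Classic guided filter implemented with box (mean) filters."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean_g, mean_s = cv2.blur(guide, ksize), cv2.blur(src, ksize)
    corr_gs, corr_gg = cv2.blur(guide * src, ksize), cv2.blur(guide * guide, ksize)
    a = (corr_gs - mean_g * mean_s) / (corr_gg - mean_g * mean_g + eps)
    b = mean_s - a * mean_g
    return cv2.blur(a, ksize) * guide + cv2.blur(b, ksize)

def fuse_two_scale(ir, vis):
    ir, vis = ir.astype(np.float32) / 255, vis.astype(np.float32) / 255
    # 1. Two-scale decomposition with a mean filter: base + detail layers.
    base_ir, base_vis = cv2.blur(ir, (31, 31)), cv2.blur(vis, (31, 31))
    det_ir, det_vis = ir - base_ir, vis - base_vis
    # 2. Saliency (absolute Laplacian response) -> initial binary weight map.
    sal_ir = np.abs(cv2.Laplacian(ir, cv2.CV_32F))
    sal_vis = np.abs(cv2.Laplacian(vis, cv2.CV_32F))
    w_init = (sal_ir >= sal_vis).astype(np.float32)
    # 3. Guided filtering of the weight map enforces spatial consistency.
    w = np.clip(guided_filter(ir, w_init), 0, 1)
    # 4. Base layers: weighted average; detail layers: refined weighted sum.
    fused = 0.5 * (base_ir + base_vis) + w * det_ir + (1 - w) * det_vis
    return np.clip(fused * 255, 0, 255).astype(np.uint8)
```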

7.
In the image fusion field, the design of deep-learning-based fusion methods is far from routine: it is invariably fusion-task specific and requires careful consideration. The most difficult part of the design is choosing an appropriate strategy to generate the fused image for the specific task at hand. Thus, devising a learnable fusion strategy is a very challenging problem in the image fusion community. To address this problem, a novel end-to-end fusion network architecture (RFN-Nest) is developed for infrared and visible image fusion. We propose a residual fusion network (RFN), based on a residual architecture, to replace the traditional fusion approach. A novel detail-preserving loss function and a feature-enhancing loss function are proposed to train the RFN. The fusion model is learned with a novel two-stage training strategy. In the first stage, we train an auto-encoder based on an innovative nest connection (Nest) concept. Next, the RFN is trained using the proposed loss functions. Experimental results on public-domain datasets show that our end-to-end fusion network outperforms the state-of-the-art methods in both subjective and objective evaluation. The code of our fusion method is available at https://github.com/hli1221/imagefusion-rfn-nest.

8.
Objective: The goal of infrared and visible image fusion is to obtain a high-quality fused image with a complete representation of the scene. Because deep features offer good generalization, robustness, and development potential, many deep-learning-based fusion methods have been proposed that fuse images in the deep feature space and achieve good results. In addition, inspired by traditional multi-scale-decomposition-based fusion methods, features at different scales help preserve more information from the source images. On this basis, a novel progressive infrared and visible image fusion framework (ProFuse) is proposed. Method: The framework uses U-Net as the backbone to extract multi-scale features and then fuses them progressively: it fuses both high-level features containing global information and low-level features containing more details, and performs fusion both at the original-size features (preserving more details) and at the smaller-size features (preserving semantic information), finally reconstructing the fused image layer by layer. Result: Experiments compare the method with six other methods on the TNO (Toegepast Natuurwetenschappelijk Onderzoek) and INO (Institut National D'optique) datasets. On the six selected objective metrics, in terms of mutual information (MI) the proposed method, compared with FusionGAN (generative adversarial ...

9.
Infrared and visible image fusion aims to synthesize a single fused image containing salient targets and abundant texture details even under extreme illumination conditions. However, existing image fusion algorithms fail to take the illumination factor into account in the modeling process. In this paper, we propose a progressive, illumination-aware image fusion network, termed PIAFusion, which adaptively maintains the intensity distribution of salient targets and preserves texture information in the background. Specifically, we design an illumination-aware sub-network to estimate the illumination distribution and calculate the illumination probability. Moreover, we utilize the illumination probability to construct an illumination-aware loss that guides the training of the fusion network. The cross-modality differential aware fusion module and the halfway fusion strategy completely integrate common and complementary information under the constraint of the illumination-aware loss. In addition, a new benchmark dataset for infrared and visible image fusion, the Multi-Spectral Road Scenarios dataset (available at https://github.com/Linfeng-Tang/MSRS), is released to support network training and comprehensive evaluation. Extensive experiments demonstrate the superiority of our method over state-of-the-art alternatives in terms of target maintenance and texture preservation. In particular, our progressive fusion framework can integrate meaningful information from the source images round the clock according to the illumination conditions. Furthermore, the application to semantic segmentation demonstrates the potential of PIAFusion for high-level vision tasks. Our code will be available at https://github.com/Linfeng-Tang/PIAFusion.
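The illumination-aware loss can be illustrated with a small PyTorch sketch in which a tiny classifier predicts day/night probabilities that weight intensity losses toward the visible or infrared source; the classifier and the loss form below are assumptions for illustration, not PIAFusion's exact formulation.

```python
# Sketch of an illumination-aware loss: a small classifier predicts (p_day, p_night)
# from the visible image, and these probabilities weight per-sample intensity losses.
# The classifier design and the loss form are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IlluminationNet(nn.Module):
    """Predicts (p_day, p_night) from the visible image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)

    def forward(self, vis):
        return torch.softmax(self.head(self.features(vis).flatten(1)), dim=1)

def illumination_aware_loss(fused, ir, vis, probs):
    p_day, p_night = probs[:, 0], probs[:, 1]
    loss_vis = F.l1_loss(fused, vis, reduction="none").mean(dim=(1, 2, 3))
    loss_ir = F.l1_loss(fused, ir, reduction="none").mean(dim=(1, 2, 3))
    # Daytime scenes lean on visible intensity, nighttime scenes on infrared intensity.
    return (p_day * loss_vis + p_night * loss_ir).mean()

ir, vis, fused = torch.rand(3, 2, 1, 64, 64)    # dummy batch of two image pairs
loss = illumination_aware_loss(fused, ir, vis, IlluminationNet()(vis))
```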

10.
Infrared and visible image fusion is an important area of machine vision with wide applications in daily life. In recent years, although many fusion algorithms have been developed in this field, it still lacks an algorithm framework and a fusion benchmark capable of evaluating the performance of multiple fusion algorithms. After briefly reviewing the latest progress in infrared and visible image fusion, this paper proposes an infrared and visible image fusion benchmark that extends VIFB, consisting of 56 image pairs, 32 fusion algorithms, and 16 evaluation metrics. Extensive experiments are conducted on this benchmark to evaluate the performance of the selected fusion algorithms. Through qualitative and quantitative analysis of the results, the fusion algorithms with good performance are identified, and the future prospects of infrared and visible image fusion are discussed.
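Three of the kinds of metrics such a benchmark typically includes (entropy, standard deviation, mutual information) can be computed with NumPy as below; these follow the common definitions and are not necessarily the benchmark's reference implementations.

```python
# Common NumPy implementations of three fusion-evaluation metrics:
# entropy (EN), standard deviation (SD) and mutual information (MI).
import numpy as np

def entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def standard_deviation(img):
    return float(np.std(img.astype(np.float64)))

def mutual_information(a, b):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=256,
                                 range=[[0, 255], [0, 255]])
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# Fusion MI is usually reported as MI(fused, ir) + MI(fused, vis).
ir = np.random.randint(0, 256, (256, 256))
vis = np.random.randint(0, 256, (256, 256))
fused = ((ir.astype(np.float64) + vis) / 2).astype(np.uint8)
print(entropy(fused), standard_deviation(fused),
      mutual_information(fused, ir) + mutual_information(fused, vis))
```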

11.
To address the insufficient background information and low contrast of infrared and visible image fusion results, a fusion method combining guided filtering and a fast co-occurrence filter is proposed. First, the source images are decomposed into detail and base layers with Gaussian filtering. The co-occurrence filter is then simplified into a fast co-occurrence filter by removing the range filter and using global co-occurrence statistics, and this filter is used to fuse the detail layers. In addition, a window factor is introduced: the guided-filter window size is determined by the ratio of the image size to the window factor, and the guided filter is then used to fuse the base layers. Experimental results show that the algorithm increases background detail in the image and improves the contrast between people and background. Subjective and objective experimental analysis verifies the effectiveness of the algorithm.

12.
An infrared and visible image fusion method based on the nonsubsampled contourlet transform with a region-saliency fusion rule is proposed. First, the nonsubsampled contourlet transform is applied to the I components of the infrared and visible images of the same scene. The coefficients are then fused according to region-matching and saliency measures, and the fused image is obtained through the inverse nonsubsampled contourlet transform. Finally, extensive experiments are conducted on this method; the fusion results are compared with those based on the wavelet transform and the Laplacian pyramid transform, and their performance under different noise conditions is analyzed.

13.
A fractal-dimension fusion algorithm based on the image pyramid (total citations: 3; self-citations: 0; citations by others: 3)
Sun Yuqiu, Tian Jinwen, Liu Jian. 《计算机应用》 (Journal of Computer Applications), 2005, 25(5): 1064-1065, 1075
Data fusion is one of the key techniques for target detection and recognition, but information loss is inevitable during fusion. Retaining as much useful information as possible depends on the choice of fusion algorithm. Because images at different levels of an image pyramid are self-similar, and self-similarity is the basis of fractal dimension, a new image fusion method, a fractal-dimension fusion algorithm based on the image pyramid, is proposed according to the characteristics of the image pyramid: each source image is decomposed into a pyramid sequence of images at different scales, and fusion is performed at the corresponding scales using the fractal dimension as the weighting coefficient. Fusion experiments using mid-wave and long-wave infrared images as source images show that the method is feasible.
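The idea of weighting corresponding pyramid levels by an estimated fractal dimension can be sketched as follows; a Laplacian pyramid, a crude box-counting estimate on each level's active pixels, and a single global weight per level are simplifications assumed for illustration, not the paper's exact scheme.

```python
# Sketch of pyramid fusion with fractal-dimension weights: build Laplacian pyramids,
# estimate a box-counting dimension per level, and use it as the fusion weight.
import cv2
import numpy as np

def box_counting_dimension(level):
    """Crude box-counting estimate over the set of 'active' pixels of a level."""
    active = np.abs(level) > np.abs(level).mean()
    sizes, counts = [], []
    s = min(active.shape) // 2
    while s >= 2:
        h, w = active.shape[0] // s, active.shape[1] // s
        blocks = active[: h * s, : w * s].reshape(h, s, w, s)
        counts.append(max(int(blocks.any(axis=(1, 3)).sum()), 1))
        sizes.append(s)
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return max(slope, 1e-3)                      # keep the weight strictly positive

def build_laplacian_pyramid(img, levels):
    gauss = [img.astype(np.float32)]
    for _ in range(levels - 1):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
           for i in range(levels - 1)]
    lap.append(gauss[-1])                        # coarsest level keeps the Gaussian residue
    return lap

def collapse(lap):
    img = lap[-1]
    for level in reversed(lap[:-1]):
        img = cv2.pyrUp(img, dstsize=(level.shape[1], level.shape[0])) + level
    return img

def fractal_pyramid_fuse(img_a, img_b, levels=4):
    lap_a = build_laplacian_pyramid(img_a, levels)
    lap_b = build_laplacian_pyramid(img_b, levels)
    fused = []
    for la, lb in zip(lap_a, lap_b):
        da, db = box_counting_dimension(la), box_counting_dimension(lb)
        fused.append((da * la + db * lb) / (da + db))   # fractal dimension as the weight
    return np.clip(collapse(fused), 0, 255).astype(np.uint8)
```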

14.
For dynamic infrared and visible image fusion, a new statistical-model-based fusion method is proposed, in which the wavelet decomposition coefficients of the images are modeled with a generalized Gaussian distribution. First, the source images are decomposed with the dual-tree complex wavelet transform. The wavelet coefficients are then fused with a weighted-average rule, where the weights are computed from the estimated generalized Gaussian distribution parameters. Finally, the fused coefficients are reconstructed into a single image. The fused images are evaluated with entropy, mutual information, and the edge-preservation metric QAB/F; experimental results show that the method outperforms two other dynamic image fusion methods.

15.
A new infrared and visible image fusion algorithm is proposed. First, the nonsubsampled contourlet transform (NSCT) is applied to decompose the images into multiple scales and directions. The low-frequency subbands are fused with an improved energy-weighting rule, and the bandpass subbands are fused by combining the maximum-coefficient rule with region-variance weighting. The fused image is then obtained by applying the inverse NSCT to the two fused subband sets. Fusion results of different algorithms are compared; subjective and objective evaluation shows that the proposed algorithm achieves better fusion results.

16.
The goal of infrared (IR) and visible image fusion is for the fused image to contain IR object features from the IR image and retain the visual details provided by the visible image. The disadvantage of traditional fusion methods based on independent component analysis (ICA) is that both the primary feature information that describes the IR objects and the secondary feature information in the IR image are fused into the fused image. Secondary feature information can depress the visual effect of the fused image. A novel ICA-based IR and visible image fusion scheme is proposed in this paper. ICA is employed to extract features from the infrared image, and then the primary and secondary features are distinguished by the kurtosis information of the ICA base coefficients. The secondary features of the IR image are discarded during fusion. The fused image is obtained by fusing primary features into the visible image. Experimental results show that the proposed method can provide a better perception effect.
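A hedged sketch of the ICA-plus-kurtosis idea using scikit-learn's FastICA on image patches follows; the patch size, component count, and kurtosis threshold are illustrative assumptions, not the paper's settings.

```python
# Sketch of ICA + kurtosis feature selection: learn ICA components from IR patches,
# keep only high-kurtosis ("primary") components, reconstruct the primary IR features,
# and add them onto the visible image. Parameters are illustrative assumptions.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def ica_kurtosis_fuse(ir, vis, patch=8, n_components=32, keep_ratio=0.5):
    ir_f, vis_f = ir.astype(np.float64) / 255, vis.astype(np.float64) / 255
    patches = extract_patches_2d(ir_f, (patch, patch)).reshape(-1, patch * patch)
    patches = patches - patches.mean(axis=1, keepdims=True)   # per-patch zero mean
    ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
    coeffs = ica.fit_transform(patches)                        # ICA base coefficients
    # Kurtosis of each component's coefficients separates primary from secondary features.
    k = kurtosis(coeffs, axis=0, fisher=True)
    secondary = k < np.quantile(k, 1 - keep_ratio)
    coeffs[:, secondary] = 0                                    # discard secondary features
    primary = ica.inverse_transform(coeffs).reshape(-1, patch, patch)
    ir_primary = reconstruct_from_patches_2d(primary, ir_f.shape)
    # The reconstructed patches carry zero-mean primary feature content,
    # which is fused (added) into the visible image.
    return np.clip(vis_f + ir_primary, 0, 1)
```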

17.
The goal of infrared (IR) and visible image fusion is for the fused image to contain IR object features from the IR image and retain the visual details provided by the visible image. The disadvantage of traditional fusion method based on independent component analysis (ICA) is that the primary feature information that describes the IR objects and the secondary feature information in the IR image are fused into the fused image. Secondary feature information can depress the visual effect of the fused image. A novel ICA-based IR and visible image fusion scheme is proposed in this paper. ICA is employed to extract features from the infrared image, and then the primary and secondary features are distinguished by the kurtosis information of the ICA base coefficients. The secondary features of the IR image are discarded during fusion. The fused image is obtained by fusing primary features into the visible image. Experimental results show that the proposed method can provide better perception effect.

18.
A multi-scale image fusion method based on the wavelet transform (total citations: 4)
In recent years, image data fusion has received wide attention and application in the image processing field. How to effectively fuse multi-source remote-sensing images of the same target, make the fullest use of the useful information in multi-source remote-sensing data, and improve the system's ability to recognize, judge, and decide correctly is one of the important topics in remote-sensing data fusion research. Based on the wavelet-transform pyramid structure, a multi-scale image fusion method based on the wavelet transform is proposed and applied to the fusion of thermal infrared and visible images. Experimental results show that the fusion method is effective: the fused images better match human visual characteristics and are more suitable for machine vision.
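The multi-scale wavelet fusion described here can be sketched with PyWavelets; the wavelet, decomposition depth, and the average/max-absolute fusion rules are common choices assumed for illustration, not necessarily the paper's.

```python
# Sketch of multi-scale wavelet fusion with PyWavelets: average the approximation
# coefficients and keep the larger-magnitude detail coefficients.
import numpy as np
import pywt

def wavelet_fuse(ir, vis, wavelet="db2", levels=3):
    ir_c = pywt.wavedec2(ir.astype(np.float32), wavelet, level=levels)
    vis_c = pywt.wavedec2(vis.astype(np.float32), wavelet, level=levels)
    fused = [(ir_c[0] + vis_c[0]) / 2]                           # approximation: average
    for (ih, iv, id_), (vh, vv, vd) in zip(ir_c[1:], vis_c[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)  # details: max-abs rule
                           for a, b in ((ih, vh), (iv, vv), (id_, vd))))
    return pywt.waverec2(fused, wavelet)

# Example usage with dummy inputs of the same size.
ir = np.random.rand(256, 256).astype(np.float32)
vis = np.random.rand(256, 256).astype(np.float32)
fused = wavelet_fuse(ir, vis)
```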

19.
Infrared and visible image fusion based on NSCT (total citations: 2; self-citations: 0; citations by others: 2)
An infrared and visible image fusion method based on the nonsubsampled contourlet transform (NSCT) is proposed. For the high-frequency coefficients obtained after NSCT decomposition of the source images, activity is measured according to the imaging mechanisms of the different sensors, and weighted fusion is performed in combination with the correlation between multi-resolution coefficients; the low-frequency coefficients are fused by measuring activity with a local gradient and applying a rule that combines weighting and selection. Finally, the fused image is reconstructed by the inverse NSCT. Experimental results demonstrate the effectiveness and feasibility of the method.

20.
Objective: The goal of infrared and visible image fusion is to fuse the complementary information of the infrared and visible images and enhance the detailed scene information of the source images. However, existing deep-learning methods usually define manually which features of the source images should be preserved, which reduces the saliency of thermal targets in the fused image. In addition, the diversity and limited interpretability of the features restrict the development of fusion rules, and existing fusion rules have difficulty adequately preserving the features of the source images. To address these two problems, this paper proposes an infrared and visible image fusion algorithm based on unique-information separation and quality-guided fusion. Method: A neural-network-based unique-information separation is designed to objectively decompose the source images into common and unique information, and specific fusion strategies are applied to the two decomposed parts; a weight encoder is designed to learn a quality-guided fusion strategy, in which metrics measuring the quality of the fused image are used to improve the performance of the fusion strategy, and the weight encoder generates the corresponding weights from the extracted unique information. Result: Experiments compare the algorithm with six leading infrared and visible image fusion algorithms on the public RoadScene dataset; the quality-guided fusion strategy is also compared with four common fusion strategies. Qualitative results show that the algorithm produces fused images with more salient thermal targets, richer scene information, and a larger amount of information. In terms of entropy, standard deviation, the sum of the correlations of differences, mutual information, and correlation coefficient, compared with the contrast algorithms ...
