Similar Literature
20 similar documents found (search time: 31 ms)
1.
In this paper, we present a gradient-domain image fusion framework based on a Markov Random Field (MRF) fusion model. In this framework, the salient structures of the input images are fused in the gradient domain, and the final fused image is reconstructed by solving a Poisson equation that forces the gradients of the fused image to be close to the fused gradients. To fuse structures in the gradient domain, an effective MRF-based fusion model is designed from both a per-pixel fusion rule defined by local saliency and smoothness constraints over the fusion weights, and is optimized by a graph cut algorithm. This MRF-based model enables accurate estimation of region-based fusion weights for salient objects and structures. We apply the method to multi-sensor image fusion applications, including infrared and visible image fusion, multi-focus image fusion and medical image fusion. Extensive experiments and comparisons show that the proposed model fuses multi-sensor images better and produces higher-quality results than other state-of-the-art methods.
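The gradient-domain pipeline above, fuse gradients first and then reconstruct by a Poisson solve, can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's method: the per-pixel max-magnitude rule stands in for the graph-cut-optimized MRF weights, and plain Jacobi iteration stands in for a fast Poisson solver.

```python
import numpy as np

def fuse_gradients(images):
    # Per-pixel "keep the stronger gradient" rule -- a simplified stand-in
    # for the paper's MRF/graph-cut weight estimation.
    gx = [np.gradient(im, axis=1) for im in images]
    gy = [np.gradient(im, axis=0) for im in images]
    mag = [np.hypot(x, y) for x, y in zip(gx, gy)]
    idx = np.argmax(np.stack(mag), axis=0)
    return np.choose(idx, gx), np.choose(idx, gy)

def poisson_reconstruct(fx, fy, iters=200):
    # Jacobi iterations for lap(u) = div(g) with zero boundary -- a slow but
    # transparent substitute for a fast Poisson solver.
    div = np.gradient(fx, axis=1) + np.gradient(fy, axis=0)
    u = np.zeros_like(div)
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                - div[1:-1, 1:-1])
    return u
```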

2.
For the problem of surface denoising, a surface denoising model based on a total-curvature variational level set method is proposed. The model is a natural extension, to geometry processing, of the ROF model used in image denoising. Built on the implicit level set variational framework, it handles topological changes of the surface automatically. The gradient steepest-descent equation of the energy model is derived with the variational level set method, and evolving this equation yields the optimal solution of the model. To make the computation more accurate, a half-point difference scheme is used for discretization. Experimental results show that the model has good denoising performance while effectively preserving the feature information of the surface.
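The abstract's starting point, the ROF model, is easiest to see in its original image-denoising form. Below is a minimal explicit gradient-descent sketch of ROF/total-variation denoising in NumPy (periodic boundaries via np.roll; the step size tau and weight lam are illustrative, and the surface/level-set extension described above is not attempted here).

```python
import numpy as np

def tv_denoise(img, lam=0.1, tau=0.1, iters=100, eps=1e-6):
    # Explicit gradient descent on the ROF energy
    # E(u) = |grad u| + (lam/2) * (u - f)^2, with the curvature term
    # approximated by divided differences (eps regularizes |grad u|).
    u = img.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + tau * (div - lam * (u - img))
    return u
```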

3.
于明  邢章浩  刘依 《控制与决策》2023,38(9):2487-2495
Most existing RGB-D salient object detection methods use a symmetric structure when fusing RGB and depth features, applying identical operations to both; this ignores the differences between RGB and depth images and easily leads to erroneous detections. To address this problem, a cross-modal fusion RGB-D salient object detection method based on an asymmetric structure is proposed. A global perception module extracts global features from the RGB image, and a depth denoising module is designed to filter out the heavy noise in low-quality depth images. A proposed asymmetric fusion module then fully exploits the differences between the two kinds of features: depth features localize the salient object and guide the fusion of RGB features, which in turn supply the detail information of the salient object, so that the respective strengths of the two modalities complement each other. Extensive experiments on four public RGB-D salient object detection datasets verify that the proposed method outperforms current mainstream methods.

4.
A new method for remote-sensing image fusion based on variational methods and an objective image evaluation model is proposed. Unlike previous work, the proposed method does not substantially modify the variational model itself but focuses on making the computation of an existing method more accurate. In the solution process of some variational models, the information in the input images alone is not sufficient to guarantee accurate results. To solve this problem, a new model based on the average gradient objective evaluation index is proposed; its measured value is fed back into the iterations of the fusion algorithm to adaptively adjust the algorithm and improve the quality of the fused results. Experiments show that the proposed adaptive method significantly improves spatial information while preserving spectral information well under both subjective and objective evaluations.
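The average-gradient index used as feedback above is a standard sharpness measure: the mean magnitude of the local gray-level gradient. A minimal NumPy version of one common formulation (definitions vary slightly across papers):

```python
import numpy as np

def average_gradient(img):
    # Mean of sqrt((dx^2 + dy^2) / 2) over the interior pixels --
    # higher values indicate more spatial detail in the fused result.
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```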

5.
Multifocus image fusion is the process of obtaining a single image from multiple partially focused images such that the new image contains the well-focused information extracted from each source image. This paper proposes using the saliency of the source images based on Mutual Spectral Residual. The spectral residual brings out the unique, salient features of an image in the frequency domain; the idea of the proposed mutual spectral residual is to emphasize the features of a source image that are unique relative to the other source images. These relative unique features form a saliency map for each source image, which clearly indicates its focused and defocused regions. Based on the saliency maps, fusion is then performed in the spatial domain. Visual inspection and quantitative evaluation of the fused images under several evaluation metrics demonstrate the method's effectiveness over several existing image fusion methods.
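The spectral residual itself (Hou and Zhang's construction, on which the abstract builds its mutual variant) is compact: subtract a smoothed log-amplitude spectrum from the original and invert with the original phase. A single-image NumPy sketch; the cross-image "mutual" comparison from the abstract is not reproduced here.

```python
import numpy as np

def spectral_residual_saliency(img, k=3):
    # Spectral residual: log-amplitude minus its local k x k mean,
    # recombined with the original phase, then squared and normalized.
    F = np.fft.fft2(img)
    log_amp = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    pad = k // 2
    padded = np.pad(log_amp, pad, mode='edge')
    smooth = np.zeros_like(log_amp)
    for i in range(k):
        for j in range(k):
            smooth += padded[i:i + log_amp.shape[0], j:j + log_amp.shape[1]]
    smooth /= k * k
    sal = np.abs(np.fft.ifft2(np.exp((log_amp - smooth) + 1j * phase))) ** 2
    return sal / (sal.max() + 1e-8)
```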

6.
To address the severe loss of detail in infrared and visible image fusion, a fusion method based on latent low-rank representation and guided filtering is proposed. First, latent low-rank representation decomposes the source images into a low-rank layer and a saliency layer; to extract more detail from the low-rank layer, guided filtering further decomposes it into a base layer and a detail layer. According to the characteristics of the base, detail and saliency layers, visual-saliency weighting, gradient-saliency weighting and absolute-maximum selection are adopted as their respective fusion rules. In particular, since the initial weights are noisy and not aligned with object boundaries, guided filtering is used to refine them. Finally, the fused base, detail and saliency layers are summed to obtain the fused image. Subjective and objective comparisons over multiple groups of fused images show that the method effectively mines the detail information of the source images and outperforms other fusion methods in visual quality and objective evaluation.
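The two-scale split-and-fuse structure above (base layer by averaging, detail layer by absolute-maximum selection) can be sketched directly. Note that this toy version uses a plain box filter in place of the guided filter, and omits the latent low-rank decomposition and the saliency-layer rule.

```python
import numpy as np

def box_filter(img, r):
    # Simple (2r+1) x (2r+1) mean filter standing in for the guided filter.
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img)
    for di in range(2 * r + 1):
        for dj in range(2 * r + 1):
            out += pad[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (2 * r + 1) ** 2

def two_scale_fuse(a, b, r=2):
    # Split each input into base + detail, average the bases, and keep the
    # per-pixel detail with the larger absolute value.
    base_a, base_b = box_filter(a, r), box_filter(b, r)
    det_a, det_b = a - base_a, b - base_b
    base = 0.5 * (base_a + base_b)
    det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return base + det
```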

7.
Remote sensing image fusion is considered a cost-effective way to handle the trade-off between the spatial, temporal and spectral resolutions of current satellite systems. However, most current fusion methods concentrate on fusing images across only two of the spatial, temporal and spectral domains, and few efforts have been made to comprehensively explore the relationships of spatio-temporal-spectral features. In this study, we propose a novel integrated spatio-temporal-spectral fusion framework based on semicoupled sparse tensor factorization that generates synthesized frequent high-spectral, high-spatial resolution images by blending multisource observations. Specifically, the proposed method regards the desired high spatio-temporal-spectral resolution images as a four-dimensional tensor and formulates the integrated fusion problem as the estimation of the core tensor and the dictionary along each mode. The high spectral correlation across the spectral domain and the high self-similarity (redundancy) in the spatial and temporal domains are jointly exploited through low-dimensional, sparse core tensors. In addition, assuming that the sparse coefficients in the core tensors of the observed and desired image spaces are not strictly identical, we formulate the estimation of the core tensor and the dictionaries as a semicoupled sparse tensor factorization of the available heterogeneous spatial, spectral and temporal remote sensing observations. Finally, the proposed method can exploit the complementary spatial, temporal and spectral information of any combination of remote sensing data within this single unified model. Experiments on multiple data types, including spatio-spectral, spatio-temporal and spatio-temporal-spectral data fusion, demonstrate the effectiveness and efficiency of the proposed method.

8.
Li  Pengzhi  Li  Jianqiang  Chen  Yueda  Pei  Yan  Fu  Guanghui  Xie  Haihua 《The Journal of supercomputing》2021,77(3):2645-2666

In this paper, we propose a diagnosis and classification method for hydrocephalus computed tomography (CT) images using deep learning and image reconstruction. The proposed method constructs pathological features that differ from those of healthy tissue, aiming to improve the accuracy of pathological image identification and diagnosis. Identifying pathological features in CT images is essential for the diagnosis and treatment of disease, but it is difficult to distinguish them accurately owing to the variability of appearance, fuzzy boundaries, and the heterogeneous densities, shapes and sizes of lesions. Several studies have reported that the ResNet network classifies and diagnoses better than other methods and has broad application prospects for CT image identification, so we use an improved ResNet network as the classification model together with our proposed image reconstruction and information fusion methods. First, we run a classification experiment on hydrocephalus CT image datasets. The comparative experiments show that gradient features play an important role in the classification of hydrocephalus CT images, that CT images with small information entropy classify particularly well, and that a reconstructed image containing two channels of gradient features and one channel of LBP features is very effective for classification. Second, we apply the proposed method to classification experiments on CT images of colonography polyps; the results are consistent with the hydrocephalus evaluation, showing that the method is general and suitable for CT image classification in both diagnostic applications. The original features of CT images are not ideal for classification, and the reconstructed-image and information fusion methods greatly improve CT image classification for pathological diagnosis.
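The reconstructed three-channel input described above (two gradient channels plus one LBP channel) can be sketched as follows. The 8-neighbour LBP encoding here is the textbook variant, which may differ in detail from the paper's.

```python
import numpy as np

def lbp(img):
    # 8-neighbour local binary pattern over the interior pixels: each
    # neighbour >= centre contributes one bit of the 8-bit code.
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def reconstruct_channels(img):
    # Stack x-gradient, y-gradient and LBP into an (H, W, 3) image.
    gy, gx = np.gradient(img)
    l = np.pad(lbp(img).astype(float), 1, mode='edge')
    return np.stack([gx, gy, l], axis=-1)
```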

9.
In recent times, convolutional neural networks (CNNs) have been used to generate desired images by exploiting their layered features. However, few studies have focused on integrating features from multiple sources to obtain a high-quality image. In this paper, we propose a generative fusion approach using a supervised CNN framework with analysis and synthesis modules: the salient feature maps obtained from the analysis module are integrated to yield the output by iteratively back-propagating gradients. Furthermore, a differential fusion strategy based on weighted gradient flow is embedded into the end-to-end fusion procedure. To transfer previous network configurations to the current fusion tasks, the proposed network is fine-tuned from pretrained networks such as VGG16, VGG19 and ResNet50. Experimental results indicate superior evaluations of the proposed approach compared with other state-of-the-art schemes in various fusion scenes, and verify that the CNN features are adaptable and expressive enough to be aligned for generating fused images.

10.
Noise is easily mistaken for useful features of the input images, significantly reducing image fusion quality. In this paper, we propose a novel method based on a gradient entropy metric and a p-Laplace diffusion constraint. Specifically, the method fuses gradient information through the structure tensor matrix. To minimize the negative effect of noise on the selection of image features, the gradient entropy metric is proposed to construct a weight for each gradient of the input images. In particular, a locally adaptive p-Laplace diffusion constraint is constructed to further suppress noise when rebuilding the fused image from the fused gradient field. Experimental results show that the proposed method effectively preserves the edge details of multispectral images while suppressing noise, achieving better visual quality and more favorable quantitative assessments than existing methods.

11.
Multisource spatial-spectral remote sensing image fusion, viewed as an inverse problem of computationally reconstructing two incompletely observed multichannel data streams, is challenged by the ill-posedness caused by insufficient complementary information, blurring and noise; existing methods still leave much room for improvement in resolution enhancement with complementary-feature preservation. To advance remote sensing image fusion, this paper systematically surveys representative fusion modeling approaches, including component substitution, multiresolution analysis, variable regression, Bayesian, variational, and hybrid model- and data-driven frameworks, together with their open problems. From the perspective of Bayesian fusion modeling, the key roles and modeling mechanisms of complementary-feature fidelity and image priors in optimization-based fusion are analyzed, and several new trends in image prior modeling are reviewed, including fractional-order regularization, nonlocal regularization, structured sparse representation, matrix low-rank through tensor low-rank representation, and composites of analytic and deep priors. The main challenges and possible research directions in each area are summarized and discussed. It is pointed out that hybrid analytic-model and data-driven approaches will be an important direction for image fusion; they must be combined with imaging degradation mechanisms, compact data representation and efficient computation to break through the technical bottlenecks of existing model-based optimization fusion and to develop fusion methods with better spectral fidelity and lower algorithmic complexity. Meanwhile, to handle big data, high-performance computing on platforms such as Hadoop and SPARK is necessary to accelerate multisource data fusion algorithms.

12.
Deep-learning-based infrared and visible image fusion algorithms usually cannot perceive the salient regions of the source images, so the fusion results fail to highlight the typical characteristics of the infrared and visible inputs and do not achieve the desired fusion effect. To address this problem, an improved residual dense generative adversarial network architecture is designed for infrared and visible image fusion. First, improved residual dense blocks are used as the backbone of both the generator and the discriminator, and an attention-based squeeze-and-excitation network is introduced to capture salient features along the channel dimension, fully preserving the thermal radiation information of the infrared image and the texture details of the visible image. Second, a relativistic average discriminator measures the relative differences between the fused image and the infrared and visible images, and guides the generator to retain the missing source information according to these differences. Finally, experiments on TNO and other image fusion datasets show that the proposed method generates fused images with clear targets and rich details; compared with a residual-network-based fusion method, edge intensity and average gradient improve by 64.56% and 64.94%, respectively.
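The squeeze-and-excitation gating mentioned above is simple to state: global-average-pool each channel, pass the result through a two-layer bottleneck, and rescale the channels by the resulting sigmoid gates. A framework-free NumPy sketch with illustrative weight shapes (the weights would be learned in the actual network):

```python
import numpy as np

def squeeze_excitation(feat, w1, w2):
    # feat: (C, H, W); w1: (C//r, C); w2: (C, C//r) for reduction ratio r.
    z = feat.mean(axis=(1, 2))                # squeeze: per-channel statistic
    h = np.maximum(w1 @ z, 0.0)               # excitation: FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))       # FC + sigmoid gates in (0, 1)
    return feat * s[:, None, None]            # rescale each channel
```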

13.
To address the low efficiency of lace retrieval methods based on image texture features, a lace retrieval method based on multi-feature fusion under hierarchical matching is proposed. Images are characterized by their texture features; the Canny operator is applied to the texture image to obtain a color Canny image and its gray-gradient co-occurrence matrix (GGCM), and second-order statistical parameters such as energy, gradient mean, gray mean and correlation are used to describe the texture. The extracted texture features are then matched layer by layer together with shape features and SURF features, realizing multi-feature fusion under hierarchical matching and compensating for the weaknesses of any single matching method; the accuracy of the retrieval method is verified on a lace database. Experimental results show that, compared with other matching methods, the extracted texture features have stronger discriminative power, achieve good lace retrieval, and effectively improve both the speed and the accuracy of retrieval.

14.
In this paper, a novel fusion framework is proposed for night-vision applications such as pedestrian recognition, vehicle navigation and surveillance. The underlying concept is to combine low-light visible and infrared imagery into a single output to enhance visual perception. The proposed framework is computationally simple since it operates entirely in the spatial domain. The core idea is to obtain an initial fused image by averaging all the source images; the initial fused image is then enhanced by selecting the most salient features, guided by the root mean square error (RMSE) and fractal dimension of the visible and infrared images, to obtain the final fused image. Extensive experiments on different scene imagery demonstrate that the method is consistently superior to conventional image fusion methods in both visual and quantitative evaluations.
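The "average first, then keep the locally more informative source" scheme can be sketched as below. Local RMS contrast stands in for the abstract's RMSE-based saliency guide, and the fractal-dimension term is omitted entirely.

```python
import numpy as np

def local_rms_contrast(img, r=2):
    # Local RMSE of the image about its (2r+1)^2 window mean, i.e. local
    # contrast -- used here as a simple per-pixel saliency measure.
    pad = np.pad(img, r, mode='edge')
    n = (2 * r + 1) ** 2
    s = np.zeros_like(img)
    s2 = np.zeros_like(img)
    for di in range(2 * r + 1):
        for dj in range(2 * r + 1):
            w = pad[di:di + img.shape[0], dj:dj + img.shape[1]]
            s += w
            s2 += w ** 2
    mean = s / n
    return np.sqrt(np.maximum(s2 / n - mean ** 2, 0.0))

def fuse_night_vision(vis, ir, r=2):
    init = 0.5 * (vis + ir)                    # initial fused image: average
    pick_vis = local_rms_contrast(vis, r) >= local_rms_contrast(ir, r)
    enhanced = np.where(pick_vis, vis, ir)     # keep the more salient source
    return 0.5 * (init + enhanced)             # blend average with selection
```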

15.
To extract image salient features more effectively and improve multi-focus image fusion, and in view of the shortcoming that the Gaussian kernel forms an incomplete basis whose filters cannot effectively extract salient image features, a support value transform is constructed that exploits the near-orthogonality of the wavelet kernel and its advantages in local signal analysis. After support value decomposition, the low-frequency information is fused with a PCNN-based rule and the high-frequency information with an absolute-maximum selection rule. Experimental data and theoretical analysis show that the method improves the salient features of the image: compared with the support value transform fusion built on the Gaussian kernel, evaluation indices such as information entropy all improve, and the visual effect is also better.

16.
Image Fusion for Enhanced Visualization: A Variational Approach
We present a variational model to fuse an arbitrary number of images while preserving salient information and enhancing contrast for visualization. We propose using the structure tensor to describe the geometry of all the inputs simultaneously: the basic idea is that the fused image should have a structure tensor that approximates the one obtained from the multiple inputs. At the same time, the fused image should appear 'natural' and 'sharp' to a human interpreter, so we combine the geometric merging of the inputs with perceptual enhancement and intensity correction. This is performed through the minimization of a functional that implicitly takes a set of human vision characteristics into account.
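The joint structure tensor of several inputs is simply the sum of each image's gradient outer products; its eigenvalues summarize the combined local geometry. A minimal NumPy construction (smoothing of the tensor field, and the perceptual and intensity-correction terms of the full model, are omitted):

```python
import numpy as np

def joint_structure_tensor(images):
    # T(x) = sum_k grad(I_k)(x) grad(I_k)(x)^T, a 2x2 symmetric positive
    # semidefinite matrix at each pixel describing the merged local geometry.
    H, W = images[0].shape
    T = np.zeros((H, W, 2, 2))
    for im in images:
        gy, gx = np.gradient(im)
        T[..., 0, 0] += gx * gx
        T[..., 0, 1] += gx * gy
        T[..., 1, 0] += gx * gy
        T[..., 1, 1] += gy * gy
    return T
```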

17.
To overcome the inability of existing image abstraction algorithms to reflect salient edge information, an energy-optimization-based image abstraction algorithm guided by salient edges is proposed. For a given input image, a salient edge map is first extracted using an edge-information propagation strategy, effectively reducing the occurrence of discontinuous edges. Then, to enhance salient edges while suppressing cluttered detail, an expected gradient field is constructed from the salient edge map. Finally, under the constraints of the image color information and the expected gradient field, the abstraction result is obtained by energy optimization. Experimental results show that the algorithm has a clear advantage in preserving the continuity of salient edges, with good scientific value and practical application prospects.

18.
Traditional infrared and visible image fusion methods mostly require manual feature extraction and extract only a single type of feature. Deep learning can select image features automatically and diversify feature extraction, so a fusion method based on convolutional neural networks and visual saliency is proposed. A convolutional neural network produces a binary segmentation of infrared targets versus background; a conditional random field refines this segmentation into a salient target extraction map; and the fused image is obtained with the nonsubsampled contourlet transform combined with the target extraction map. Experimental results show that the method outperforms traditional non-learning methods in both subjective visual quality and objective evaluation, with significant improvements in five objective indices (edge information preservation, structural similarity, mutual information, information entropy and standard deviation).

19.
To generate fused images with both high spectral quality and high spatial quality, a new variational pan-sharpening fusion model is proposed. The per-band weight coefficients of the multispectral (MS) images are obtained by fitting the linear relationship between the degraded panchromatic (Pan) image and the low-resolution MS bands, from which the spatial details extracted from the Pan image are computed. A weighting function defined on the gradient of the Pan image enhances strong gradient edges while suppressing spurious edges introduced by noise, effectively preserving the geometric structure of objects in the Pan image. A low-pass filter defined by the modulation transfer function of the MS sensors adaptively limits the amount of injected spatial detail, significantly reducing spectral distortion in the fused MS images. To handle the ill-posedness of the pan-sharpening model, an L1 regularization energy term is introduced to guarantee the stability of the numerical solution, and the Split Bregman method is used to compute the minimizer of the energy functional efficiently. Experiments on QuickBird, IKONOS and GeoEye-1 datasets show that the overall fusion performance of the model is superior to algorithms such as MTF-CON, AWLP, SparseFI, TVR and MTF-Variational.
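The weight-fitting and detail-injection steps described above have a simple linear-algebra core: regress the degraded Pan image on the MS bands to get band weights, then inject the Pan detail scaled by per-band gains. A toy NumPy sketch; the MTF low-pass filter, the gradient weighting and the L1-regularized variational solve are all omitted, and `pan_low` is taken as a given degraded Pan image.

```python
import numpy as np

def inject_details(ms, pan, pan_low):
    # ms: (B, H, W) upsampled MS bands; pan, pan_low: (H, W).
    # Fit weights w so that sum_b w[b] * ms[b] ~= pan_low (least squares),
    # then inject the Pan detail modulated by each band's relative intensity.
    B = ms.shape[0]
    A = ms.reshape(B, -1).T                           # (H*W, B) design matrix
    w, *_ = np.linalg.lstsq(A, pan_low.ravel(), rcond=None)
    intensity = np.tensordot(w, ms, axes=1)           # synthesized low-res Pan
    gains = ms / (intensity[None] + 1e-8)             # per-band injection gain
    return ms + gains * (pan - pan_low)[None]
```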

20.
Objective: Much previous work on salient object detection focuses on 2D images and does not apply to saliency detection in RGB-D images. This paper extracts both color and depth features and proposes an RGB-D saliency detection method based on feature fusion and S-D probability correction, so that color and depth features complement each other. Method: First, using the four boundaries of the RGB image as background query nodes, manifold ranking with feature fusion outputs the saliency map of the RGB image. Second, the S-D correction probability is computed from the RGB saliency map and the depth features. Third, the saliency map of the depth image is computed and corrected by the S-D probability. Finally, foreground query nodes are extracted from the corrected saliency map, and manifold ranking with feature fusion is applied once more to refine the result into the final saliency map. Results: The method was evaluated on 1,000 images of the RGBD dataset and compared with six other methods; its results are closer to the manual annotations. The precision-recall (PR) curves show higher precision than five of the methods at equal recall, and the processing time of 2.150 s per image is also competitive. Conclusion: The proposed method performs saliency detection on RGB-D images accurately.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号