Similar Documents
20 similar documents retrieved.
1.
Multi-focus image fusion methods can be mainly divided into two categories: transform domain methods and spatial domain methods. Recently emerged deep learning (DL)-based methods fall under this taxonomy as well. In this paper, we propose a novel DL-based multi-focus image fusion method that combines the complementary advantages of transform domain methods and spatial domain methods. Specifically, a residual architecture that includes a multi-scale feature extraction module and a dual-attention module is designed as the basic unit of a deep convolutional network, which is first used to obtain an initial fused image from the source images. Then, the trained network is further employed to extract features from the initial fused image and the source images for a similarity comparison, aiming to detect the focus property of each source pixel. The final fused image is obtained by selecting corresponding pixels from the source images and the initial fused image according to the focus property map. Experimental results show that the proposed method can effectively preserve the original focus information from the source images and prevent visual artifacts around the boundary regions, leading to more competitive qualitative and quantitative performance when compared with the state-of-the-art fusion methods.
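As a rough illustration of the final selection step this abstract describes, the sketch below composes the fused image from a ternary focus map; the label convention and the fallback to the initial fused image near uncertain boundaries are assumptions, not the authors' exact rule.

```python
import numpy as np

def compose_fused(src_a, src_b, initial_fused, focus_map):
    """Select output pixels by focus property.

    src_a, src_b, initial_fused: (H, W, 3) color images.
    focus_map: (H, W) per-pixel labels -- assumed convention:
    1 where src_a is judged focused, 0 where src_b is focused,
    -1 where the similarity comparison is uncertain (e.g. near
    focused/defocused boundaries); uncertain pixels fall back
    to the initial fused image.
    """
    fm = focus_map[..., None]  # broadcast over color channels
    return np.where(fm == 1, src_a,
                    np.where(fm == 0, src_b, initial_fused))
```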

2.
Objective: Deep learning-based multi-focus image fusion methods mainly use convolutional neural networks (CNN) to classify pixels as focused or defocused. Supervised training typically relies on synthetic datasets, and the accuracy of the label data directly affects classification accuracy, which in turn affects the accuracy of the subsequently hand-crafted fusion rules and the quality of the fused all-in-focus image. To let the fusion network adjust its fusion rules adaptively, a multi-focus image fusion algorithm based on self-learned fusion rules is proposed. Method: An auto-encoder architecture is adopted to extract features while jointly learning the fusion and reconstruction rules, yielding an unsupervised end-to-end fusion network. The initial decision map of the multi-focus images is fed in as a prior to learn rich image detail. A local strategy comprising the structural similarity index measure (SSIM) and the mean squared error (MSE) is added to the loss function to ensure more accurate image reconstruction. Results: The model is evaluated subjectively and objectively on public datasets such as Lytro to verify the soundness of the fusion algorithm design. Subjectively, the model not only fuses focused regions well and effectively avoids artifacts in the fused image, but also retains sufficient detail with a natural, clear visual appearance. Objectively, quantitative comparison with other mainstream multi-focus image fusion algorithms shows the best average scores in entropy, Qw, correlation coefficient, and visual information fidelity: 7.4574, 0.9177, 0.9788, and 0.8908, respectively. Conclusion: The proposed fusion algorithm can self-learn and adjust its fusion rules, produces fused images comparable to existing methods, and helps further the understanding of deep-learning-based multi-focus image fusion.
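The local SSIM + MSE loss the abstract mentions could look like the following PyTorch sketch, which uses a uniform-window SSIM and an assumed weighting `alpha`; the paper's exact formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def ssim_mse_loss(fused, ref, window=11, alpha=0.84):
    """Hedged sketch of a local SSIM + MSE loss on (N, C, H, W)
    tensors in [0, 1]. C1, C2 follow the standard SSIM definition;
    the uniform window and alpha weighting are assumptions."""
    C1, C2 = 0.01 ** 2, 0.03 ** 2
    pad = window // 2
    mu_f = F.avg_pool2d(fused, window, 1, pad)
    mu_r = F.avg_pool2d(ref, window, 1, pad)
    var_f = F.avg_pool2d(fused * fused, window, 1, pad) - mu_f ** 2
    var_r = F.avg_pool2d(ref * ref, window, 1, pad) - mu_r ** 2
    cov = F.avg_pool2d(fused * ref, window, 1, pad) - mu_f * mu_r
    ssim = ((2 * mu_f * mu_r + C1) * (2 * cov + C2)) / (
        (mu_f ** 2 + mu_r ** 2 + C1) * (var_f + var_r + C2))
    # Combine a structural term with a pixel-wise fidelity term.
    return alpha * (1 - ssim.mean()) + (1 - alpha) * F.mse_loss(fused, ref)
```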

3.
Multi-focus image fusion is an enhancement method to generate full-clear images, which can address the depth-of-field limitation in imaging with optical lenses. Most existing methods realize multi-focus image fusion by generating a decision map, which usually leads to detail loss due to misclassification, especially near the boundary between the focused and defocused regions. To overcome this challenge, this paper presents a new generative adversarial network with adaptive and gradient joint constraints to fuse multi-focus images. In our model, an adaptive decision block is introduced to determine whether source pixels are focused or not based on the difference of repeated blur. Under its guidance, a specifically designed content loss can dynamically guide the optimization trend, that is, force the generator to produce a fused result of the same distribution as the focused source images. To further enhance the texture details, we establish an adversarial game so that the gradient map of the fused result approximates the joint gradient map constructed from the source images. Our model is unsupervised, requiring no ground-truth fused images for training. In addition, we release a new dataset containing 120 high-quality multi-focus image pairs for benchmark evaluation. Experimental results demonstrate the superiority of our method over the state-of-the-art in terms of both subjective visual effect and quantitative metrics. Moreover, our method is about one order of magnitude faster than the state-of-the-art.
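A plain reading of the "joint gradient map constructed from the source images" is an element-wise maximum over the source gradient magnitudes. The sketch below implements that reading with Sobel operators; it is an assumption, not the paper's published construction.

```python
import cv2
import numpy as np

def joint_gradient_map(sources):
    """Element-wise maximum of Sobel gradient magnitudes over a list
    of BGR source images (an assumed instantiation of the paper's
    joint gradient map)."""
    mags = []
    for img in sources:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        mags.append(np.sqrt(gx * gx + gy * gy))
    # The fused result's gradients would be pushed toward this map.
    return np.maximum.reduce(mags)
```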

4.
Multi-focus image fusion (MFIF) is an image enhancement technique that extracts the focused regions from different source images to form an all-in-focus image. Current MFIF methods suffer from two main problems: traditional spatial-domain methods exhibit a strong defocus spread effect (DSE) and artifacts along the fusion boundary, while deep learning methods lack datasets reproducing light-field camera output and require extensive manual parameter tuning, making training overly time-consuming. An unsupervised multi-focus image fusion method based on prior information of the target image is therefore proposed. First, the internal prior of the source images themselves and the external prior carried by an initial fused image generated by a spatial-domain method are fed into the G-Net and F-Net networks, respectively, both of which are deep image prior (DIP) networks built on UNet. Then, a reference mask generated by a spatial-domain method is introduced to assist G-Net in producing a guidance decision map. Finally, this decision map and the initial fused image jointly optimize F-Net, which produces the final fused image. Validation experiments use the Lytro dataset, which has ground-truth reference images, and the MFFW dataset, whose fusion boundaries exhibit strong DSE, with five widely used objective metrics for performance evaluation. The results show that the method effectively reduces the number of optimization iterations and outperforms eight state-of-the-art MFIF methods in both subjective and objective evaluation, with a particularly clear advantage on datasets with strong DSE at the fusion boundary.

5.
In this paper, we address the problem of fusing multi-focus images in dynamic scenes. The proposed approach consists of three main steps. First, the focus information of each source image, obtained by morphological filtering, is used to produce a rough segmentation result, which serves as one input to image matting. Then, an image matting technique is applied to obtain the accurate focused region of each source image. Finally, the focused regions are combined to construct the fused image. Through image matting, the proposed fusion algorithm combines the focus information with the correlations between nearby pixels, and therefore tends to obtain a more accurate fusion result. Experimental results demonstrate the superiority of the proposed method over traditional multi-focus image fusion methods, especially for images of dynamic scenes.
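The first step — a rough segmentation from morphological focus information, later handed to image matting — might be sketched as below; the morphological-gradient focus measure, the window sizes, and the trimap construction are all assumptions rather than the paper's exact procedure.

```python
import cv2
import numpy as np

def rough_focus_trimap(img_a, img_b, ksize=7, band=15):
    """Rough focused/defocused segmentation of two BGR sources via a
    morphological-gradient focus measure, then eroded into a matting
    trimap (255 = focused in img_a, 0 = focused in img_b, 128 =
    unknown boundary band). All parameters are assumptions."""
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    f_a = cv2.morphologyEx(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY),
                           cv2.MORPH_GRADIENT, k).astype(np.float32)
    f_b = cv2.morphologyEx(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY),
                           cv2.MORPH_GRADIENT, k).astype(np.float32)
    seg = (cv2.blur(f_a, (band, band)) > cv2.blur(f_b, (band, band)))
    seg = seg.astype(np.uint8) * 255
    # Shrink both classes; the strip between them is the unknown zone.
    fg = cv2.erode(seg, k, iterations=3)
    bg = cv2.erode(255 - seg, k, iterations=3)
    trimap = np.full_like(seg, 128)
    trimap[fg == 255] = 255
    trimap[bg == 255] = 0
    return trimap
```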

6.
To address the edge blurring that often arises in traditional spatial-domain fusion of multi-focus images, a multi-focus image fusion method based on guided filtering (GF) and difference images is proposed. First, the source images are guided-filtered at different levels, and the filtered images are differenced to obtain focus feature images; then, the energy of gradient (EOG) information of the focus feature images is used to derive an initial decision map, and spatial consistency checking and morphological operations are applied to the initial decision map to eliminate the EOG-induced …
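The energy of gradient (EOG) comparison used to build the initial decision map can be sketched as follows; the window size is an assumption, and the guided-filtering/difference step that produces the two focus feature images is taken as given.

```python
import cv2
import numpy as np

def eog_decision_map(feat_a, feat_b, win=9):
    """Initial decision map from the energy of gradient (EOG) of two
    focus feature images. EOG sums squared horizontal and vertical
    first differences over a local window (window size assumed)."""
    def eog(f):
        f = f.astype(np.float32)
        gx = np.diff(f, axis=1, prepend=0)
        gy = np.diff(f, axis=0, prepend=0)
        return cv2.boxFilter(gx * gx + gy * gy, -1, (win, win))
    # 1 where source A looks sharper, 0 where source B does.
    return (eog(feat_a) > eog(feat_b)).astype(np.uint8)
```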

7.
8.
Multifocus image fusion using region segmentation and spatial frequency
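Only the title of this entry is available. For reference, the spatial frequency measure it names has a standard definition, sketched below: the root-mean-square of first differences along rows and columns, combined in quadrature.

```python
import numpy as np

def spatial_frequency(block):
    """Standard spatial frequency of a grayscale image block:
    SF = sqrt(RF^2 + CF^2), where RF and CF are the RMS first
    differences along rows and columns respectively."""
    b = block.astype(np.float64)
    rf = np.sqrt(np.mean((b[:, 1:] - b[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((b[1:, :] - b[:-1, :]) ** 2))  # column frequency
    return np.hypot(rf, cf)
```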

9.

In this paper, a novel region-based multi-focus color image fusion method is proposed, which employs the focused edges extracted from the source images to obtain a fused image with better focus. At first, the edges are obtained from the source images using two suitable edge operators (Zero-cross and Canny). Then, a block-wise region comparison is performed to extract the focused edges, which are morphologically dilated, followed by the selection of the largest component to remove isolated points. Any discontinuity in the detected edges is removed by consulting the edge detection output from the Canny operator. The best reconstructed edge image is chosen and later converted into a focused region. Finally, the fused image is constructed by selecting pixels from the source images with the help of a prescribed color decision map. The proposed method has been implemented and tested on a set of real 2-D multi-focus image pairs (both gray-scale and color). The algorithm has a competitive performance with respect to recent fusion methods in terms of subjective and objective evaluation.
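The edge pipeline described here — Canny edges, morphological dilation, then keeping the largest connected component to discard isolated points — might look like this sketch; the thresholds and kernel size are assumptions.

```python
import cv2
import numpy as np

def focused_edge_mask(gray, lo=50, hi=150, dilate_iter=2):
    """Canny edges on a uint8 grayscale image, morphologically
    dilated, then reduced to the largest connected component
    (thresholds and kernel size are assumed values)."""
    edges = cv2.Canny(gray, lo, hi)
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    edges = cv2.dilate(edges, k, iterations=dilate_iter)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges)
    if n <= 1:           # no foreground component found
        return edges
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```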


10.
In this paper, a new multifocus image fusion scheme based on focused-pixel detection is proposed. First, a new improved multiscale Top-Hat (MTH) transform, which is more effective than the traditional Top-Hat transform at extracting focus information, is introduced and used to detect the pixels of the focused regions. Second, the initial decision map of the source images is generated by comparing the improved MTH value of each pixel. Then, an isolated-region removal method is developed and employed to refine the initial decision map. To improve the quality of the fused image and avoid discontinuity in the transition zone, a dual sliding window technique and a fusion strategy based on multiscale transform are developed to fuse the transition zones. Finally, the decision maps of the focused regions and the transition zones are both used to guide the fusion process, forming the final fused image. The experimental results show that the proposed method outperforms conventional multifocus image fusion methods in both subjective and objective quality.
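The classical multiscale Top-Hat focus measure underlying this method can be sketched as below; note that the paper's improved MTH differs from this baseline in details not reproduced here.

```python
import cv2
import numpy as np

def multiscale_tophat(gray, scales=(3, 7, 11, 15)):
    """Baseline multiscale Top-Hat focus measure on a uint8 grayscale
    image: sum of white and black top-hats over several structuring-
    element sizes (scale set is an assumption)."""
    acc = np.zeros(gray.shape, np.float32)
    for s in scales:
        k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (s, s))
        # White top-hat picks up bright detail, black top-hat dark detail.
        acc += cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, k).astype(np.float32)
        acc += cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, k).astype(np.float32)
    return acc
```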

11.
An adaptive multi-focus image fusion method
To fuse different multi-focus images effectively, an adaptive multi-focus image fusion method based on regional features in the wavelet domain is proposed. The method first applies wavelet decomposition to the two images to be fused. For the low-frequency part, on the basis of preserving the features common to the source images, the features unique to each image are added into the fused image; for the high-frequency part, fusion is driven by the regional wavelet energy. Finally, the fused image is reconstructed by the inverse wavelet transform. The method not only fuses multi-focus images fully adaptively and effectively, but also generalizes across different kinds of source images. Experiments show that the algorithm achieves good fusion results and is an effective multi-focus image fusion method.
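A minimal one-level version of the wavelet fusion rule described (low-frequency bands combined, high-frequency bands chosen by regional energy) might look like this sketch; averaging the low band and per-coefficient absolute-maximum selection are simplifying assumptions standing in for the paper's regional-feature rules.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2"):
    """One-level wavelet fusion of two 2-D grayscale arrays: the
    approximation bands are averaged (a stand-in for the regional-
    feature low-frequency rule), and each detail band is chosen by
    larger absolute coefficient (regional energy for a 1x1 region)."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a.astype(np.float32), wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b.astype(np.float32), wavelet)
    cA = (cA_a + cA_b) / 2
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    return pywt.idwt2((cA, (pick(cH_a, cH_b),
                            pick(cV_a, cV_b),
                            pick(cD_a, cD_b))), wavelet)
```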

12.
Objective: Landmark recognition is an applied problem in the image and vision field. To address the weaknesses of single features, such as the sensitivity of global features to viewpoint changes and of local features to lighting changes, a weakly supervised landmark recognition model is proposed that fuses multiple features and is trained with the additive angular margin loss (ArcFace loss). Method: Recognition is performed by image retrieval, taking the Top-1 result. The valid range of the ArcFace loss parameters is first derived and used to guide parameter selection during training; an effective scheme then fuses local and global features into the image representation used for retrieval. Training proceeds in two steps: first, the ImageNet-pretrained model weights are fine-tuned on the Google Landmarks dataset with the ArcFace loss; second, an attention mechanism is added and the attention network is trained. Inference consists of three parts: global feature extraction, local feature extraction, and feature fusion. Concretely, for a query image, the global feature is first extracted from the feature embedding layer of the fine-tuned convolutional neural network; local features are then extracted from intermediate layers via the attention mechanism; finally, the two feature vectors are concatenated, and image retrieval returns the database entries most similar to the query. Results: Experiments on the Paris and Oxford buildings datasets show that feature fusion lets a shallow network match the performance of a deep pretrained network, with the fused features improving mean average precision (mAP) by about 1% over global features alone. The experiments also show that no additional feature whitening is needed on the network embeddings. The model also performs satisfactorily on city-scale street-view images. Conclusion: Training with the ArcFace loss and effectively combining the similarity results of multiple features improves the model's robustness in practical application scenarios.
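The additive angular margin (ArcFace) logits at the heart of the training step have a standard form, sketched below; the scale `s` and margin `m` shown are the commonly used defaults, not necessarily the paper's choices.

```python
import torch
import torch.nn.functional as F

def arcface_logits(embeddings, weight, labels, s=64.0, m=0.5):
    """ArcFace logits: cosine similarity between L2-normalized
    embeddings (N, D) and class weights (num_classes, D); the margin
    m is added to the target-class angle before scaling by s.
    s = 64 and m = 0.5 are common defaults, assumed here."""
    cos = F.linear(F.normalize(embeddings), F.normalize(weight)).clamp(-1, 1)
    theta = torch.acos(cos)
    target = F.one_hot(labels, weight.shape[0]).bool()
    # Penalize the target class by widening its angle: cos(theta + m).
    return s * torch.where(target, torch.cos(theta + m), cos)
```

Cross-entropy over these logits forces intra-class angles to shrink by at least the margin, which is what sharpens the embedding space for retrieval.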

13.
Wang Zhaobin, Wang Shuai, Guo Lijie. Neural Computing & Applications, 2018, 29(11): 1101-1114.

The purpose of multi-focus image fusion is to acquire an image in which all objects are focused by fusing source images that have different focus points. A novel multi-focus image fusion method based on PCNN and random walks is proposed in this paper. PCNN is consistent with human visual perception, and the random walks model has in recent years been shown to have enormous potential for image fusion. The proposed method first employs PCNN to measure the sharpness of the source images. Then, an original fusion map is constructed. Next, random walks are employed to improve the accuracy of fused-region detection. Finally, the fused image is generated according to the probability computed by the random walks. The experiments demonstrate that our method outperforms many existing multi-focus image fusion methods in visual perception and objective criteria. To assess the performance of our method in practical application, some examples are given at the end of the paper.


14.
Objective: Defocus blur detection aims to distinguish sharp from blurred pixels in an image; it is widely used in many fields and is an important research direction in computer vision. For images with complex scenes, existing defocus blur detection methods suffer from insufficient accuracy and incomplete boundaries in the detection results. This paper proposes a coarse-to-fine multi-scale defocus blur detection network that improves detection accuracy by fusing multi-layer convolutional features of the image at different scales. Method: The image is rescaled to several scales; a convolutional neural network extracts multi-layer convolutional features from the image at each scale, and convolutional layers fuse the corresponding-layer features across scales. Convolutional long short-term memory (Conv-LSTM) layers integrate the blur features across scales top-down while generating a blur detection map at each scale, progressively passing deep semantic information down to the shallow layers. In this process, deep and shallow features are combined, and the shallow features refine the blur detection result of the next deeper layer. A convolutional layer fuses the multi-scale detection results into the final result. A multi-layer supervision strategy is used during training to ensure each Conv-LSTM layer reaches its optimum. Results: The model is trained and tested on two public blur detection datasets, DUT (Dalian University of Technology) and CUHK (The Chinese University of Hong Kong), and compared against ten algorithms including the current best blur detection methods BTBCRL (bottom-top-bottom network with cascaded defocus blur detection map residual learning), DeFusionNet (defocus blur detection network via recurrently fusing and refining multi-scale deep features), and DHDE (multi-scale deep and hand-crafted features for defocus estimation). On the DUT dataset, the model reduces MAE (mean absolute error) by 38.8% and improves F0.3 by 5.4% relative to DeFusionNet; on the CUHK dataset, it reduces MAE by 36.7% and improves F0.3 by 9.7% relative to the LBP (local binary pattern) algorithm. These comparisons fully validate the effectiveness of the proposed defocus blur detection model. Conclusion: By fusing features of the image at different scales and using Conv-LSTM layers to integrate deep semantic and shallow detail information top-down, the proposed coarse-to-fine multi-scale method yields more accurate defocus blur detection across diverse image scenes.

15.
刘慧, 李珊珊, 高珊珊, 邓凯, 徐岗, 张彩明. Journal of Software (软件学报), 2023, 34(5): 2134-2151.
As multi-modal medical images become common in clinical diagnosis and treatment, fusion techniques built on spatio-temporal correlation have developed rapidly; a fused medical image not only preserves the distinctive features of each source modality but also strengthens complementary information and eases reading for physicians. Most current methods implement feature extraction and feature fusion with manually defined constraints, which easily loses useful information and blurs details in the fused image. To this end, a dual-adversarial fusion network based on pretrained-model feature extraction is proposed for fusing MR-T1/MR-T2 images. The network consists of a feature extraction module, a feature fusion module, and two discriminator network modules. Because registered multi-modal medical image datasets are small, the feature extraction network cannot be trained sufficiently; given the strong representational power of pretrained models, a pretrained convolutional neural network is embedded in the feature extraction module to generate feature maps. The feature fusion network then fuses the deep features and outputs the fused image. By accurately classifying source versus fused images, the two discriminators each form an adversarial relationship with the feature fusion network, ultimately driving it to learn optimal fusion parameters. Experimental results demonstrate the effectiveness of pretraining in the proposed method; compared with six typical existing fusion methods, the proposed method achieves the best performance in both visual quality and quantitative metrics.
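Embedding a pretrained CNN as a frozen feature extractor, as the abstract describes, might be sketched as follows; the choice of VGG16 and the layer cutoff are assumptions, since the abstract does not name the backbone.

```python
import torch
import torchvision.models as models

# Hedged sketch: a frozen ImageNet-pretrained backbone used only to
# produce feature maps (VGG16 and the cutoff layer are assumptions;
# the weights API requires torchvision >= 0.13).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def extract_features(x, up_to=16):
    """Run the first `up_to` convolutional layers on a normalized
    (N, 3, H, W) tensor to get mid-level feature maps."""
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i == up_to:
            break
    return x
```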

16.
As is well known, activity level measurement and the fusion rule are two crucial factors in image fusion. For most existing fusion methods, whether in the spatial domain or in a transform domain like wavelet, the activity level measurement is essentially implemented by designing local filters to extract high-frequency details, and the calculated clarity information of the different source images is then compared using elaborately designed rules to obtain a clarity/focus map. Consequently, the focus map contains the integrated clarity information, which is of great significance to various image fusion issues, such as multi-focus image fusion, multi-modal image fusion, etc. However, designing these two components well enough to achieve satisfactory fusion performance is usually difficult. In this study, we address this problem with a deep learning approach, aiming to learn a direct mapping between source images and focus map. To this end, a deep convolutional neural network (CNN) trained on high-quality image patches and their blurred versions is adopted to encode the mapping. The main novelty of this idea is that the activity level measurement and fusion rule can be jointly generated through learning a CNN model, which overcomes the difficulty faced by existing fusion methods. Based on this idea, a new multi-focus image fusion method is proposed in this paper. Experimental results demonstrate that the proposed method can obtain state-of-the-art fusion performance in terms of both visual quality and objective assessment. The computational speed of the proposed method using parallel computing is fast enough for practical usage. The potential of the learned CNN model for some other types of image fusion issues is also briefly exhibited in the experiments.
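The training-data construction described — high-quality patches paired with their blurred versions — can be sketched like this; the blur level and the labeling convention are assumptions.

```python
import cv2
import numpy as np

def make_training_pair(patch, sigma=2.0, rng=np.random):
    """Build a (focused, defocused) patch pair for focus
    classification by Gaussian-blurring a sharp patch, per the
    abstract; sigma and the label convention are assumptions."""
    blurred = cv2.GaussianBlur(patch, (0, 0), sigma)
    if rng.rand() < 0.5:            # randomize which input is focused
        return (patch, blurred), 1  # label 1: first input is focused
    return (blurred, patch), 0      # label 0: second input is focused
```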

17.
Gao Wenchang, Yu Lei, Tan Yao, Yang Pengna. Applied Intelligence, 2022, 52(12): 14085-14100.

The aim of image fusion is to obtain a clear image by combining useful information from multiple images. However, the fused image often suffers from artifacts and unclear boundaries. To address these problems, a deep convolutional neural network framework for multi-focus image fusion is proposed in this paper, called the multi-scale inception module convolutional neural network (MSIMCNN). MSIMCNN converts the entire image into a binary mask to estimate the focus characteristics and obtains a clear boundary between focus and defocus. First, a pair of focus images and the corresponding feature images detected by the Laplace operator are fed into the network. The Laplace operator can detect the edges and gradients of focus in the image, which helps accurately reconstruct the focused area in the focus map and distinguish the focus and defocus boundaries. Then, in the feature extraction stage, convolution kernels of different scales are designed to extract rich, complementary features at different scales of the source images. At the same time, the inception module is added to increase the width of the network and reduce the parameters, extracting more of the focus features required for image reconstruction while reducing complexity. Finally, the focus map of the source image pair is obtained in the feature reconstruction stage. In this stage, an efficient method is proposed to generate the focus mask, which is used for computing the loss function and generating the training set. The experimental results on different datasets confirm the superiority and effectiveness of MSIMCNN compared with other methods.
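Feeding each source image together with its Laplacian response, as MSIMCNN's abstract describes, might be prepared as below; the exact stacking layout is an assumption.

```python
import cv2
import numpy as np

def laplacian_channels(img_a, img_b):
    """Stack each BGR source image with the absolute response of the
    Laplace operator on its grayscale version, forming the extra
    input channel the abstract describes (layout assumed)."""
    def lap(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return np.abs(cv2.Laplacian(gray, cv2.CV_32F, ksize=3))
    return (np.dstack([img_a.astype(np.float32), lap(img_a)]),
            np.dstack([img_b.astype(np.float32), lap(img_b)]))
```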


18.
Most current infrared and visible image fusion algorithms decompose the source images during fusion, which tends to blur details and lose salient targets in the fused image. To solve this problem, an infrared and visible image fusion method based on deep convolutional feature extraction is proposed. First, transfer learning theory is used to analyze the feature extraction performance of EfficientNet, and seven feature extraction modules are selected; the source images are then fed directly into these modules to extract salient features. Next, channel normalization and averaging operators are constructed to obtain saliency maps, and a fusion rule combining Softmax and up-sampling yields the fusion weights, which are convolved with the source images to generate seven candidate fused images. Finally, the pixel-wise maximum of the candidate fused images is taken as the final reconstructed fused image. All experiments are conducted on public datasets and compared with classical traditional and deep learning methods; both subjective and objective results show that the proposed method effectively fuses the important information in the infrared and visible images, highlights detail textures in the fused image, and achieves better visual quality with fewer image artifacts and less artificial noise.
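The channel-normalization/averaging saliency step followed by a Softmax fusion rule reads naturally as the following sketch; the up-sampling back to image resolution and the convolution with the source images are omitted, and the feature layout is an assumption.

```python
import numpy as np

def softmax_fusion_weights(feat_ir, feat_vis, eps=1e-8):
    """Per-pixel fusion weights from two deep feature maps of shape
    (C, H, W): saliency as the channel-averaged absolute activation,
    turned into weights with a two-way softmax (a plain reading of
    the abstract, not the paper's exact operators)."""
    s_ir = np.abs(feat_ir).mean(axis=0)
    s_vis = np.abs(feat_vis).mean(axis=0)
    m = np.maximum(s_ir, s_vis)          # stabilize the exponentials
    e_ir, e_vis = np.exp(s_ir - m), np.exp(s_vis - m)
    w_ir = e_ir / (e_ir + e_vis + eps)
    return w_ir, 1.0 - w_ir              # fused = w_ir*IR + w_vis*VIS
```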

19.
When deep learning is applied to multi-focus image fusion, the networks are mostly trained in a supervised manner; but because no labeled dataset dedicated to supervised multi-focus fusion training exists, and building a large-scale labeled training set is prohibitively expensive, existing methods typically synthesize supervision by adding random Gaussian blur to focused images, which makes training difficult and rarely achieves ideal fusion results. To solve these problems, an easy-to-implement multi-focus image fusion method with good fusion quality is proposed. An encoder-decoder network model with an attention mechanism is trained in an unsupervised manner on readily available unlabeled data to obtain deep features of the input source images. Morphological focus detection then measures the activity level of the extracted features to generate an initial decision map, which is refined into the final decision map by consistency verification. Fusion quality is assessed both by subjective visual inspection and by objective metrics; experimental results show that the fused images are sharp, retain rich detail, and exhibit little distortion.
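Consistency verification on a binary decision map is commonly implemented as small-region removal; a sketch under that assumption follows, with the area threshold as a free parameter.

```python
import cv2
import numpy as np

def consistency_verify(decision, min_area_ratio=0.01):
    """Refine a binary {0,1} decision map by flipping connected
    regions smaller than a fraction of the image area to the other
    class (the ratio is an assumption; papers vary on the rule)."""
    out = decision.copy()
    min_area = min_area_ratio * decision.size
    for value in (1, 0):
        mask = (out == value).astype(np.uint8)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        for i in range(1, n):  # component 0 is the background
            if stats[i, cv2.CC_STAT_AREA] < min_area:
                out[labels == i] = 1 - value
    return out
```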

20.
Most current image fusion algorithms treat each pixel independently, severing the relationships between pixels. This paper proposes a multi-focus image fusion method based on morphological operations and a genetic algorithm, which effectively combines pixel-level and feature-level fusion. The basic idea is to first detect the sharply focused regions in the source images, then extract these regions and assemble them into a result image in which every part is in focus. Experimental results show that the method outperforms Haar-wavelet fusion and morphological-wavelet fusion, and is especially effective when the source images are not perfectly registered.
