Similar Documents
 20 similar documents found (search time: 197 ms)
1.
To address the edge blurring that often arises in traditional spatial-domain fusion of multi-focus images, a multi-focus image fusion method based on guided filtering (GF) and difference images is proposed. First, the source images are guided-filtered at different levels, and the filtered images are differenced to obtain focus-feature images. Next, an initial decision map is derived from the energy-of-gradient (EOG) information of the focus-feature images, and a spatial-consistency check and morphological operations are applied to it to remove noise caused by near-equal EOG values. The initial decision map is then guided-filtered to obtain an optimized decision map, avoiding abrupt edge transitions in the fused result. Finally, the source images are fused by weighted averaging according to the optimized decision map. Three classic groups of multi-focus images were used as test images, and the results of the proposed method were compared with those of nine other multi-focus fusion methods. Subjective visual inspection shows that the proposed method better preserves the detail of the multi-focus images, and its fused images are significantly better than those of the comparison methods on all four objective evaluation metrics. The results indicate that the proposed method yields high-quality fused images, preserves the original image information well, and effectively resolves the edge blurring seen in traditional multi-focus fusion.
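The EOG comparison at the heart of the abstract above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: the block size `win` and the tie-breaking rule are assumptions, and the spatial-consistency check and guided filtering of the map are omitted.

```python
import numpy as np

def energy_of_gradient(img):
    """Energy of gradient (EOG): sum of squared forward differences.
    Higher values indicate sharper (in-focus) content."""
    f = img.astype(float)
    gx = np.diff(f, axis=1)
    gy = np.diff(f, axis=0)
    return (gx ** 2).sum() + (gy ** 2).sum()

def initial_decision_map(feat_a, feat_b, win=8):
    """Block-wise initial decision map: 1 where source A's focus-feature
    image has higher EOG than source B's, else 0 (win size is assumed)."""
    h, w = feat_a.shape
    dm = np.zeros((h, w))
    for i in range(0, h, win):
        for j in range(0, w, win):
            a = energy_of_gradient(feat_a[i:i + win, j:j + win])
            b = energy_of_gradient(feat_b[i:i + win, j:j + win])
            dm[i:i + win, j:j + win] = 1.0 if a >= b else 0.0
    return dm
```

In the full method this map would still be cleaned by morphological operations and smoothed by guided filtering before the weighted fusion step.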

2.
As is well known, activity level measurement and the fusion rule are two crucial factors in image fusion. For most existing fusion methods, whether in the spatial domain or in a transform domain such as wavelets, activity level measurement is essentially implemented by designing local filters to extract high-frequency details, and the calculated clarity information of the different source images is then compared using elaborately designed rules to obtain a clarity/focus map. Consequently, the focus map contains the integrated clarity information, which is of great significance to various image fusion problems, such as multi-focus image fusion and multi-modal image fusion. However, it is usually difficult to accomplish these two tasks well enough to achieve satisfactory fusion performance. In this study, we address this problem with a deep learning approach, aiming to learn a direct mapping between source images and the focus map. To this end, a deep convolutional neural network (CNN) trained on high-quality image patches and their blurred versions is adopted to encode the mapping. The main novelty of this idea is that the activity level measurement and fusion rule can be jointly generated by learning a CNN model, which overcomes the difficulty faced by existing fusion methods. Based on this idea, a new multi-focus image fusion method is proposed. Experimental results demonstrate that the proposed method achieves state-of-the-art fusion performance in terms of both visual quality and objective assessment. The computational speed of the proposed method using parallel computing is fast enough for practical use. The potential of the learned CNN model for other types of image fusion problems is also briefly exhibited in the experiments.
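The abstract above trains the CNN on "high-quality image patches and their blurred versions." A minimal sketch of how such a labeled pair could be synthesized is shown below; the blur kernel, `sigma`, and padding mode are assumptions, as the abstract does not specify them.

```python
import numpy as np

def gaussian_blur(patch, sigma=2.0):
    """Separable Gaussian blur (reflect padding), used here to synthesize
    a 'defocused' counterpart of a sharp training patch."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    p = np.pad(patch.astype(float), r, mode="reflect")
    p = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, p)
    p = np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, p)
    return p

def make_training_pair(sharp_patch):
    """(patch, label) pairs: 1 = focused, 0 = blurred (labeling convention assumed)."""
    return [(sharp_patch, 1), (gaussian_blur(sharp_patch), 0)]
```

A classifier trained on many such pairs can then score every patch of the source images to build the focus map described in the abstract.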

3.
To overcome the sensitivity of block-based fusion methods to block size and the artifacts present in fused images, a new multi-focus image fusion method based on quadtree decomposition and an adaptive focus measure is proposed. First, a new adaptive focus measure based on the sum-modified-Laplacian (SML) and guided filtering is designed to obtain the focus maps of the source images. Then, a new quadtree decomposition strategy, combined with the obtained focus maps, further decomposes the source images into tree blocks of optimal size; focused regions are detected from these blocks to form a decision map. Finally, the decision map is refined and consistency-verified, and an all-in-focus image is reconstructed. Experiments on public multi-focus image datasets compare the method with 11 state-of-the-art fusion methods in terms of visual quality and objective metrics, and the results show that the proposed fusion method achieves better performance.
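The SML mentioned above is a standard focus measure; a minimal sketch follows. The step size and the decision to sum over the whole patch (rather than a windowed, thresholded variant) are assumptions, and the guided-filtering refinement of the abstract is not reproduced.

```python
import numpy as np

def sml(img, step=1):
    """Sum-modified-Laplacian over a patch:
    ML(x,y) = |2I(x,y) - I(x-s,y) - I(x+s,y)| + |2I(x,y) - I(x,y-s) - I(x,y+s)|,
    summed over all valid pixels. Larger values = sharper patch."""
    f = img.astype(float)
    s = step
    ml = np.zeros_like(f)
    ml[s:-s, :] += np.abs(2 * f[s:-s, :] - f[:-2 * s, :] - f[2 * s:, :])
    ml[:, s:-s] += np.abs(2 * f[:, s:-s] - f[:, :-2 * s] - f[:, 2 * s:])
    return ml.sum()
```

In a quadtree scheme, this score would be computed per tree block of each source image, and the block with the larger SML would be marked as focused in the decision map.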

4.
When deep learning is applied to multi-focus image fusion, the networks are mostly trained by supervised learning. However, there is no labeled dataset dedicated to supervised training for multi-focus fusion, and building a dedicated large-scale labeled training set is prohibitively expensive, so existing methods typically synthesize supervision by adding random Gaussian blur to focused images; this makes the network hard to train and the ideal fusion effect hard to reach. To solve these problems, an easy-to-implement multi-focus image fusion method with good fusion quality is proposed. An encoder-decoder network with an attention mechanism is trained in an unsupervised manner on readily available unlabeled data to extract deep features of the input source images. Morphological focus detection then measures the activity level of the extracted features to generate an initial decision map, which is refined by consistency verification to obtain the final decision map. The quality of the fused images was evaluated both by subjective visual inspection and by objective metrics; experimental results show that the fused images are sharp, rich in detail, and exhibit little distortion.

5.
Multi-focus image fusion has emerged as a major topic in image processing for generating all-in-focus images with increased depth of field from multi-focus photographs. Different approaches have been used in the spatial and transform domains for this purpose, but most suffer from one or more fusion quality degradations such as blocking artifacts, ringing effects, artificial edges, halo artifacts, contrast decrease, sharpness reduction, and misalignment of the decision map with object boundaries. In this paper we present a novel spatial-domain multi-focus image fusion method that utilizes a dictionary learned from local patches of the source images. Sparse representations of a relative sharpness measure over this trained dictionary are pooled to obtain the corresponding pooled features. Correlating the pooled features with sparse representations of the input images produces a pixel-level score for the fusion decision map. The final regularized decision map is obtained using Markov Random Field (MRF) optimization. We also gathered a new color multi-focus image dataset with more variety than traditional multi-focus image sets. Experimental results demonstrate that our proposed method outperforms existing state-of-the-art methods in both visual and quantitative evaluations.

6.
To address the inaccurate capture of focused/defocused boundary (FDB) information by existing multi-focus fusion methods, a new multi-focus image fusion method based on linear sparse representation and image matting is proposed. First, a focus measure based on linear sparse representation is introduced: it exploits the linear relationship, over local windows, between a dictionary formed from natural images and the input image, representing the image's focus information through the solved linear coefficients. This focus measure then yields a focus map for each source image and a trimap consisting of focused regions, defocused regions, and an unknown region containing the FDB; taking the trimap as input, image matting is applied to the FDB region of the source images to obtain a fairly accurate all-in-focus image. Finally, to further improve the quality of the fused image, the resulting all-in-focus image is used as a new dictionary so that the fusion process can iterate, and the final all-in-focus fused image is obtained after a preset number of updates. Experimental results show that, compared with 11 state-of-the-art multi-focus fusion methods, the method achieves good fusion performance and visual quality with high computational efficiency.

7.
Objective: A key problem in multi-focus image fusion is how to accurately judge the sharpness of the images to be fused. This paper proposes a sharpness criterion based on the normalized number of structural extrema. Methods: Based on the local extremum properties of images, the normalized number of structural extrema is defined as the sharpness criterion, and a fast multi-focus fusion method is given that uses this criterion together with fast estimation of the fusion decision matrix. Results: Experiments show that the proposed criterion and fusion method solve the above problem well. Conclusion: A new image sharpness criterion is proposed; it judges sharpness accurately and is robust to impulse noise. Subjective and objective comparisons with traditional fusion methods on two groups of test images show that the method clearly improves on existing multi-focus fusion methods in both fusion speed and fusion quality.
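A plausible reading of the "normalized number of structural extrema" criterion above is: count pixels that are strict local extrema in their 3x3 neighborhood and normalize by the number of interior pixels, so that sharper regions (with more fine structure) score higher. The paper's exact definition may differ; this sketch is an assumption-labeled approximation.

```python
import numpy as np

def normalized_extrema_count(img):
    """Fraction of interior pixels that are strict local maxima or minima
    in their 3x3 neighborhood (normalization by pixel count is assumed)."""
    f = img.astype(float)
    c = f[1:-1, 1:-1]
    h, w = c.shape
    # The 8 shifted neighbor views around each interior pixel.
    neigh = [f[i:i + h, j:j + w]
             for i in range(3) for j in range(3) if not (i == 1 and j == 1)]
    is_max = np.all([c > n for n in neigh], axis=0)
    is_min = np.all([c < n for n in neigh], axis=0)
    return (is_max | is_min).mean()
```

Blurring suppresses local extrema, so the count drops for defocused regions, which is what makes it usable as a per-block sharpness score in a fusion decision matrix.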

8.
Objective: Deep-learning-based multi-focus fusion methods mainly use convolutional neural networks (CNNs) to classify pixels as focused or defocused. Supervised training typically relies on synthetic datasets, so the accuracy of the label data directly affects classification accuracy and, in turn, the accuracy of the subsequently hand-crafted fusion rules and the quality of the all-in-focus result. To let the fusion network adaptively adjust its fusion rule, a multi-focus image fusion algorithm based on a self-learned fusion rule is proposed. Method: An autoencoder architecture extracts features while jointly learning the fusion and reconstruction rules, yielding an unsupervised end-to-end fusion network; the initial decision map of the multi-focus images is fed in as a prior so that rich image detail can be learned; and a local strategy combining the structural similarity index measure (SSIM) and mean squared error (MSE) is added to the loss function to restore the image more accurately. Results: The model was evaluated subjectively and objectively on public datasets such as Lytro to verify the design of the fusion algorithm. Subjectively, the model not only fuses focused regions well and effectively avoids artifacts in the fused image, but also retains ample detail with a natural, sharp visual appearance. Objectively, quantitative comparison with the fused images of other mainstream multi-focus fusion algorithms shows the best average scores in entropy, Qw, correlation coefficient, and visual information fidelity: 7.4574, 0.9177, 0.9788, and 0.8908, respectively. Conclusion: The proposed fusion algorithm for multi-focus images can self-learn and adjust its fusion rule, and its fused images are comparable to those of existing methods, helping to further the understanding of deep-learning-based multi-focus fusion.

9.
In this paper, we address the problem of fusing multi-focus images in dynamic scenes. The proposed approach consists of three main steps: first, the focus information of each source image, obtained by morphological filtering, is used to get a rough segmentation result, which serves as one of the inputs to image matting. Then, the image matting technique is applied to obtain the accurate focused region of each source image. Finally, the focused regions are combined to construct the fused image. Through image matting, the proposed fusion algorithm combines the focus information with the correlations between nearby pixels, and therefore tends to produce a more accurate fusion result. Experimental results demonstrate the superiority of the proposed method over traditional multi-focus image fusion methods, especially for images of dynamic scenes.

10.
Multi-focus image fusion methods can be divided into two main categories: transform domain methods and spatial domain methods. Recently emerged deep learning (DL)-based methods also fall into this taxonomy. In this paper, we propose a novel DL-based multi-focus image fusion method that combines the complementary advantages of transform domain and spatial domain methods. Specifically, a residual architecture that includes a multi-scale feature extraction module and a dual-attention module is designed as the basic unit of a deep convolutional network, which is first used to obtain an initial fused image from the source images. Then, the trained network is further employed to extract features from the initial fused image and the source images for a similarity comparison, aiming to detect the focus property of each source pixel. The final fused image is obtained by selecting the corresponding pixels from the source images and the initial fused image according to the focus property map. Experimental results show that the proposed method effectively preserves the original focus information of the source images and prevents visual artifacts around boundary regions, leading to more competitive qualitative and quantitative performance compared with state-of-the-art fusion methods.

11.
宋宇, 李庆玲. 《计算机应用》 2011, 31(7): 1815-1817
The main limitation of an optical microscopic vision system is its short depth of field, which makes it difficult to capture complete information about the microscopic scene. To solve this, a wavelet-based depth-of-field extension strategy for optical microscopic vision systems is proposed, consisting of two stages: locally focused image acquisition and multi-focus image fusion. First, based on a defined activity level of the wavelet coefficients, a new sharpness evaluation function and a global-extremum search strategy over the focus curve are constructed, achieving fast autofocus and capturing local scene information. Then, to fuse the multiple locally focused microscopic images, a selection-type fusion rule driven by the wavelet-coefficient activity level is designed. Experiments show that the proposed method effectively extends the depth of field of optical microscopes.

12.
To address the difficulty of reliably detecting focused points in multi-focus image fusion, a fusion method based on robust principal component analysis (RPCA) and region detection is proposed. RPCA theory is applied to multi-focus fusion by decomposing the source images into sparse and low-rank components; region detection on the sparse matrix yields the focus decision maps of the source images; three-direction consistency checking and region growing are applied to obtain the final decision map; and the source images are fused according to that final decision map. Experimental results show clear subjective improvements in contrast, texture sharpness, and brightness, and the effectiveness of the method is demonstrated objectively by four evaluation metrics: standard deviation, average gradient, spatial frequency, and mutual information.

13.
Gao Wenchang, Yu Lei, Tan Yao, Yang Pengna. 《Applied Intelligence》 2022, 52(12): 14085-14100

The aim of image fusion is to obtain a clear image by combining useful information from multiple images. However, the fused image usually suffers from artifacts and unclear boundaries. To address these problems, a deep convolutional neural network framework for multi-focus image fusion is proposed in this paper, called the multi-scale inception module convolutional neural network (MSIMCNN). MSIMCNN converts the entire image into a binary mask to estimate the focus characteristics and obtain a clear boundary between focus and defocus. First, a pair of source images and the corresponding feature images produced by the Laplace operator are input into the network. The Laplace operator detects the edges and gradients of in-focus content, helping to accurately reconstruct the focused area in the focus map and distinguish the boundary between focus and defocus. Then, in the feature extraction stage, convolution kernels of different scales are designed to extract rich, complementary features at different scales of the source images. At the same time, the inception module is added to increase the width of the network while reducing the parameters, which extracts more of the focus features required for image reconstruction and reduces complexity. Finally, the focus map of the source image pair is obtained in the feature reconstruction stage, for which an efficient method of making the focus mask is proposed, used both for computing the loss function and for generating the training set. Experimental results on different datasets confirm the superiority and effectiveness of MSIMCNN compared with other methods.
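The Laplace-operator feature images that MSIMCNN takes as an auxiliary input can be sketched as a plain 3x3 Laplacian response; the padding mode below is an assumption, and the real pipeline of course feeds this into a trained network rather than using it directly.

```python
import numpy as np

def laplacian_feature(img):
    """Response of the 3x3 Laplace operator (edge-replicate padding).
    Large magnitudes mark the focused edges/gradients fed to the
    network alongside the source image."""
    f = np.pad(img.astype(float), 1, mode="edge")
    return (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
            - 4 * f[1:-1, 1:-1])
```

Flat (defocused or textureless) regions give a near-zero response, while focused edges give large-magnitude responses, which is why this feature helps the network localize the focus/defocus boundary.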


14.

In this paper, a novel region-based multi-focus color image fusion method is proposed, which employs the focused edges extracted from the source images to obtain a fused image with better focus. First, edges are obtained from the source images using two suitable edge operators (zero-cross and Canny). Then, a block-wise region comparison is performed to extract the focused edges, which are morphologically dilated, followed by selection of the largest connected component to remove isolated points. Any discontinuity in the detected edges is removed by consulting the edge-detection output of the Canny operator. The best reconstructed edge image is chosen and later converted into a focused region. Finally, the fused image is constructed by selecting pixels from the source images with the help of a prescribed color decision map. The proposed method has been implemented and tested on a set of real 2-D multi-focus image pairs (both gray-scale and color), and it performs competitively with recent fusion methods in terms of subjective and objective evaluation.


15.
Image processing and machine vision have become important research areas owing to numerous applications in almost every field of science, and performance in these fields depends critically on the quality of the input images. Most imaging devices use optical lenses to capture images of a scene, but because of the limited depth of field of optical lenses, objects at different distances from the focal point are captured with different sharpness and detail, so important details of the scene may be lost in some regions. Multi-focus image fusion is an effective technique for coping with this problem, and its main challenge is the selection of an appropriate focus measure. In this paper, we propose a novel focus measure based on the surface area of regions surrounded by the intersection points of the input source images, and we demonstrate its ability to distinguish focused regions from blurred ones. In our fusion algorithm, the intersection points of the input images are calculated and the input images are segmented using these points; the surface area of each segment is then taken as the measure for determining focused regions. Using this measure we obtain an initial fusion selection map, which is then refined by morphological modifications. To demonstrate the performance of the proposed method, we compare its results with several competing methods; the results show the effectiveness of our approach.

16.
Wang Zhaobin, Wang Shuai, Guo Lijie. 《Neural computing & applications》 2018, 29(11): 1101-1114

The purpose of multi-focus image fusion is to acquire an image in which all objects are in focus by fusing source images with different focus points. A novel multi-focus image fusion method based on the pulse-coupled neural network (PCNN) and random walks is proposed in this paper. PCNN is consistent with human visual perception, and the random-walk model has shown great potential for image fusion in recent years. The proposed method first employs PCNN to measure the sharpness of the source images. Then an initial fusion map is constructed, and the random-walk method is employed to improve the accuracy of focused-region detection. Finally, the fused image is generated according to the probabilities computed by the random walks. Experiments demonstrate that our method outperforms many existing multi-focus image fusion methods in both visual perception and objective criteria. To assess the method's performance in practical applications, some examples are given at the end of the paper.


17.
许占伟, 张涛. 《计算机工程》 2011, 37(16): 209-211
To obtain better fusion results, feature-level fusion is combined with pixel-level fusion. The source images are decomposed with the Contourlet transform (CT). For the approximation images, the Canny operator performs edge detection to produce edge-feature images, which are then used as input to an intersecting cortical model, and fusion is carried out according to each neuron's firing count; for the detail images, fusion is driven by the regional-energy coefficient matrix. Experiments on multi-focus clock images and CT, M... [abstract truncated]

18.
This paper presents a new wavelet-based algorithm for the fusion of spatially registered infrared and visible images. Wavelet-based image fusion is the most common fusion approach: it fuses information from the source images in the wavelet transform domain according to some fusion rules. We specifically propose new rules for fusing the low- and high-frequency wavelet coefficients of the source images in the second step of the wavelet-based fusion algorithm. First, the source images are decomposed using the dual-tree discrete wavelet transform (DT-DWT). Then, a fuzzy-based approach is used to fuse the high-frequency wavelet coefficients of the IR and visible images. In particular, fuzzy logic integrates the outputs of three different fusion rules (weighted averaging, selection using a pixel-based decision map (PDM), and selection using a region-based decision map (RDM)), based on a dissimilarity measure of the source images; the objective is to combine the advantages of previous pixel- and region-based methods in a single scheme. The PDM is obtained from local activity measurement in the DT-DWT domain of the source images, and a new segmentation-based algorithm is proposed to generate the RDM from the PDM. In addition, a new population-based optimization approach is proposed for the low-frequency fusion rule instead of simple averaging. After fusing the low- and high-frequency wavelet coefficients of the source images, the final fused image is obtained using the inverse DT-DWT. The new method provides improved subjective and objective results compared with previous image fusion methods.

19.
The depth of field (DOF) of camera equipment is generally limited, so it is very difficult to obtain a fully focused image, with all objects sharp, from a single shot. One way to obtain such an image is multi-focus image fusion, which fuses multiple images with different focal depths into one. However, most existing methods focus too much on the fusion accuracy of individual pixels, ignoring the integrity of the target and the importance of shallow features, resulting in internal errors and boundary artifacts that require lengthy post-processing to repair. To solve these problems, we propose a cascade network based on a Transformer and an attention mechanism that directly obtains the decision map of focused/defocused regions and the fusion result through end-to-end processing of the source images, avoiding complex post-processing. To improve fusion accuracy, this paper introduces a joint loss function that optimizes the network parameters from three aspects. Furthermore, to enrich the network's shallow features, a global attention module with shallow features is designed. Extensive experiments were conducted, including a large number of ablation studies, six objective measures, and a variety of subjective visual comparisons. Compared with nine state-of-the-art methods, the results show that the proposed network structure improves the quality of multi-focus fused images and achieves the best performance.

20.
Objective: In macro photography, the limited depth of field of macro lenses makes it hard to obtain a fully sharp image of the subject from a single photograph. Obtaining an all-in-focus photograph therefore requires capturing multiple macro photos with different focus points and fusing them. Method: Traditional macro-photo fusion methods generally assume that the images to be fused are already registered and do not consider automatic acquisition of the macro images. A multi-focus image acquisition and fusion system for macro photography is therefore proposed, consisting of three parts. The first is a macro image capture rig that can photograph an object at different focal distances with high precision. The second is an invariant-feature-based image registration component that automatically registers and aligns macro images captured at multiple focus points. The third is an image-pyramid-based multi-focus fusion component that fuses the aligned macro photos so that the composite image has a greater depth of field; this component extends pyramid-based fusion with a filtering-based weight-computation strategy, and combining this weighting with the image pyramid yields a multi-resolution multi-focus fusion method. Results: Several sets of experimental data were acquired with the designed rig to verify the correctness of the system's hardware and software design, and the system was evaluated both subjectively and objectively. Subjectively, the composited macro images not only have sufficient depth of field but also clearly render the subject's fine details at high resolution. Objectively, quantitative comparison of the system's composited macro images with those of other methods shows that it is best on all three evaluation criteria: standard deviation, information entropy, and average gradient. Conclusion: The experimental results show that the system is flexible and efficient; it can automatically acquire, register, and fuse multiple macro images with different focus points, and its fusion quality is comparable to that of other methods.
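The abstract above does not specify its filtering-based weight computation, so the following is only one plausible sketch: per-pixel weights from box-filtered local contrast, applied here as a single-level weighted average rather than the full pyramid blend the paper describes. The filter radius and contrast measure are assumptions.

```python
import numpy as np

def box_filter(img, r=2):
    """Mean filter of radius r via an integral image (edge-replicate padding)."""
    n = 2 * r + 1
    p = np.pad(img.astype(float), r, mode="edge")
    c = p.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col so c[i,j] = sum p[:i,:j]
    s = c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]
    return s / n ** 2

def fuse_weighted(a, b):
    """Fuse two aligned exposures: weight each pixel by its smoothed
    local contrast, so the sharper source dominates locally."""
    wa = box_filter(np.abs(a - box_filter(a)))
    wb = box_filter(np.abs(b - box_filter(b)))
    w = wa / np.maximum(wa + wb, 1e-12)
    return w * a + (1 - w) * b
```

In the full system, weights like these would be computed per pyramid level and the fused pyramid collapsed back into the final all-in-focus composite.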

