Similar Documents
Found 20 similar documents (search time: 46 ms)
1.
Multi-focus image fusion aims to extract the focused regions from multiple partially focused images of the same scene and combine them to produce a completely focused image. Detecting the focused regions in the source images is the key step. In this paper, we propose a novel boundary-finding-based multi-focus image fusion algorithm, in which the task of detecting the focused regions is treated as finding the boundaries between the focused and defocused regions of the source images. Along the found boundaries, the source images can be naturally separated into regions with uniform focus conditions, i.e., each region is fully focused or fully defocused. The focused regions are then identified by selecting, from each pair of corresponding regions, the one with the greater focus measure. To improve the precision of boundary detection and focused-region detection, we also present a multi-scale morphological focus measure, the effectiveness of which has been verified by quantitative evaluation. Unlike general multi-focus image fusion algorithms, our algorithm fuses the boundary regions and non-boundary regions of the source images separately, which helps produce a fused image of good visual quality. Experimental results validate that the proposed algorithm outperforms several state-of-the-art image fusion algorithms in both qualitative and quantitative evaluations.
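As a rough illustration of a multi-scale morphological focus measure like the one described above, the sketch below sums the grayscale morphological gradient over square structuring elements of several sizes. The function names and the scale set are assumptions for illustration, not the paper's definition:

```python
import numpy as np

def morph_gradient(img, k):
    """Grayscale morphological gradient (dilation minus erosion) with a
    (2k+1) x (2k+1) square structuring element."""
    h, w = img.shape
    p = np.pad(img, k, mode='edge')
    dil = np.full((h, w), -np.inf)
    ero = np.full((h, w), np.inf)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            win = p[dy:dy + h, dx:dx + w]
            dil = np.maximum(dil, win)
            ero = np.minimum(ero, win)
    return dil - ero

def multiscale_morph_focus(img, scales=(1, 2, 3)):
    """Sum the morphological gradient over several scales; larger values
    indicate sharper (more focused) pixels."""
    img = img.astype(float)
    return sum(morph_gradient(img, k) for k in scales)
```

Blurring lowers the local max-min range, so on a defocused copy of the same content the measure drops, which is what makes it usable for comparing regions across source images.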

2.
Anisotropic blur and mis-registration frequently occur in multi-focus images due to object or camera motion, and they severely degrade fusion quality. In this paper, we present a novel multi-scale weighted gradient-based fusion method to address this problem. The method builds on a multi-scale structure-based focus measure, derived from image structure saliency, that reflects the sharpness of edge and corner structures at multiple scales; this measure determines the gradient weights in the proposed gradient-based fusion. In particular, we adopt a two-scale scheme, a large scale and a small scale, to handle the fusion problems caused by anisotropic blur and mis-registration. The large-scale focus measure is used first to attenuate the impact of anisotropic blur and mis-registration on focused-region detection, and the gradient weights near the boundaries of the focused regions are then carefully determined with the small-scale focus measure. Experimental results clearly demonstrate that the proposed method outperforms conventional fusion methods in the presence of anisotropic blur and mis-registration.

3.
Multi-focus image fusion has emerged as an important research area in information fusion. It increases the depth of field by extracting focused regions from multiple partially focused images and merging them into a composite image in which all objects are in focus. In this paper, a novel multi-focus image fusion algorithm is presented in which the focused regions are detected using a Content Adaptive Blurring (CAB) algorithm. CAB induces non-uniform blur in a multi-focus image depending on its underlying content: it analyzes the local image quality in a neighborhood and decides whether blur can be induced there without losing image quality. As a result, pixels in already-blurred regions receive little or no additional blur, whereas focused regions receive significant blur. The absolute difference between the original image and its CAB-blurred version yields an initial segmentation map, which is further refined with morphological operators and graph-cut techniques to improve segmentation accuracy. Quantitative and qualitative evaluations and comparisons with the current state of the art on two publicly available datasets demonstrate the strength of the proposed algorithm.
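The decisive step of the CAB pipeline, taking the absolute difference between an image and its blurred version, can be sketched as follows. A plain box blur stands in for the content-adaptive blur, which the real algorithm modulates by local image quality; names are illustrative:

```python
import numpy as np

def box_blur(img, k=1):
    """Uniform box blur with a (2k+1) x (2k+1) window; a simple stand-in for
    the content-adaptive blurring (CAB) described in the abstract."""
    h, w = img.shape
    p = np.pad(img.astype(float), k, mode='edge')
    out = np.zeros((h, w))
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * k + 1) ** 2

def initial_focus_map(img, k=1):
    """|original - blurred| is large where the image is sharp (blurring
    changes it a lot) and small where it is already smooth (defocused)."""
    img = img.astype(float)
    return np.abs(img - box_blur(img, k))
```

On a real pipeline this map would then be thresholded and refined (morphology, graph cuts) as the abstract describes.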

4.
To overcome the sensitivity of block-based fusion methods to block size and the artifacts that appear in fused images, a new multi-focus image fusion method based on quadtree decomposition and an adaptive focus measure is proposed. First, a new adaptive focus measure based on the sum-modified Laplacian (SML) and guided filtering is designed to obtain the focus maps of the source images. Then, a new quadtree decomposition strategy, combined with the obtained focus maps, decomposes the source images into tree blocks of optimal size; at the same time, the focused regions are detected from the tree blocks to form a decision map. Finally, the decision map is optimized and consistency-verified, and an all-in-focus image is reconstructed. Experiments on public multi-focus image datasets compare the method with 11 state-of-the-art fusion methods in terms of visual quality and objective metrics; the results show that the proposed fusion method achieves better performance.

5.
Multi-focus image fusion is an enhancement method for generating fully sharp images, addressing the depth-of-field limitation of optical lenses. Most existing methods generate a decision map to realize the fusion, which usually leads to detail loss through misclassification, especially near the boundary between focused and defocused regions. To overcome this, this paper presents a new generative adversarial network with adaptive and gradient joint constraints for fusing multi-focus images. In our model, an adaptive decision block determines whether source pixels are focused based on the difference under repeated blurring. Under its guidance, a specifically designed content loss dynamically steers the optimization, forcing the generator to produce a fused result with the same distribution as the focused source regions. To further enhance texture details, we establish an adversarial game so that the gradient map of the fused result approximates a joint gradient map constructed from the source images. The model is unsupervised and requires no ground-truth fused images for training. In addition, we release a new dataset containing 120 high-quality multi-focus image pairs for benchmark evaluation. Experimental results demonstrate the superiority of our method over the state of the art in both subjective visual effect and quantitative metrics; moreover, it is about an order of magnitude faster.

6.
Most existing image fusion algorithms treat each pixel independently, severing the relationships between pixels. This paper proposes a multi-focus image fusion method based on morphological algorithms and a genetic algorithm, which effectively combines pixel-level and feature-level fusion. The basic idea is to first detect the sharply focused regions in the source images, then extract these regions and assemble them into a result image in which every part is in focus. Experimental results show that this method outperforms Haar wavelet fusion and morphological wavelet fusion, and is particularly effective when the source images are not perfectly registered.

7.
Objective: Deep-learning-based multi-focus image fusion methods mainly use convolutional neural networks (CNNs) to classify pixels as focused or defocused. The supervised learning process usually relies on synthetic datasets, and the accuracy of the label data directly affects the classification accuracy, which in turn affects the accuracy of the subsequent hand-crafted fusion rules and the quality of the fused all-in-focus image. To let the fusion network adjust its fusion rules adaptively, a multi-focus image fusion algorithm based on self-learned fusion rules is proposed. Method: An autoencoder architecture is adopted to extract features while learning the fusion and reconstruction rules, yielding an unsupervised end-to-end fusion network. The initial decision maps of the multi-focus images are fed in as priors so that the network learns rich image detail. A local strategy, comprising the structural similarity index measure (SSIM) and mean squared error (MSE), is added to the loss function to ensure more accurate image reconstruction. Results: The model is evaluated subjectively and objectively on public datasets such as Lytro to verify the soundness of the fusion algorithm design. Subjectively, the model not only fuses the focused regions well and effectively avoids artifacts in the fused image, but also preserves sufficient detail, giving natural and clear visual results. Objectively, quantitative comparison with other mainstream multi-focus image fusion algorithms shows that the model achieves the best average scores in entropy, Qw, correlation coefficient, and visual information fidelity: 7.4574, 0.9177, 0.9788, and 0.8908, respectively. Conclusion: The proposed fusion algorithm for multi-focus images not only self-learns and adjusts its fusion rules but also produces fused images comparable to existing methods, helping to further the understanding of deep-learning-based multi-focus image fusion.

8.
In this paper, we address the problem of fusing multi-focus images of dynamic scenes. The proposed approach consists of three main steps. First, the focus information of each source image, obtained by morphological filtering, is used to produce a rough segmentation, which serves as one input of image matting. Then, image matting is applied to obtain the accurate focused region of each source image. Finally, the focused regions are combined to construct the fused image. Through image matting, the proposed fusion algorithm combines the focus information with the correlations between nearby pixels, and therefore tends to obtain a more accurate fusion result. Experimental results demonstrate the superiority of the proposed method over traditional multi-focus image fusion methods, especially for images of dynamic scenes.

9.
Depth from focus using a pyramid architecture
A method is presented for depth recovery through the analysis of scene sharpness across changing focus position. Modeling a defocused image as the result of applying a low-pass filter to a properly focused image of the same scene, we can compare the high-spatial-frequency content of regions in each image and determine the correct focus position. Recovering depth in this manner is inherently a local operation and can be done efficiently with a pipelined image processor. Laplacian and Gaussian pyramids are used to calculate sharpness maps, which are collected and compared to find the focus position that maximizes high spatial frequencies for each region.
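A single-scale sketch of this depth-from-focus idea: per-pixel Laplacian energy serves as the sharpness map, and the depth index is the frame that maximizes it. The paper uses Laplacian and Gaussian pyramids; the one-level version and all names here are illustrative:

```python
import numpy as np

def laplacian_energy(img):
    """Squared response of the 4-neighbor discrete Laplacian: a per-pixel
    proxy for high-spatial-frequency content (sharpness)."""
    h, w = img.shape
    p = np.pad(img.astype(float), 1, mode='edge')
    lap = (p[0:h, 1:w + 1] + p[2:h + 2, 1:w + 1]
           + p[1:h + 1, 0:w] + p[1:h + 1, 2:w + 2]
           - 4.0 * p[1:h + 1, 1:w + 1])
    return lap * lap

def depth_from_focus(stack):
    """For each pixel, return the index of the frame in the focus stack whose
    sharpness is highest -- a single-scale stand-in for the pyramid scheme."""
    sharp = np.stack([laplacian_energy(f) for f in stack])
    return np.argmax(sharp, axis=0)
```

In a real system the index map would be smoothed regionally, since the measure is noisy at individual pixels.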

10.
A fast, high-precision multi-focus image fusion algorithm based on visual characteristics is proposed. Exploiting the characteristics of multi-focus images, a fast interval-block scanning algorithm based on visual contrast roughly locates the focused and defocused regions over the whole image; on this basis, block-wise visual-contrast comparison precisely locates the focused and defocused regions and partitions the image into three parts: the focused region, the defocused region, and the boundary. The focused regions of the two images are taken directly, and the boundary regions between focused and defocused areas of the two images are fused by weighting to obtain the fused image. Experimental results show that, compared with wavelet-decomposition-based fusion and vision-based block-decomposition fusion, this algorithm is not only faster but also produces better results.
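The block-wise visual-contrast comparison and the three-way focused/defocused/boundary split might be sketched like this. The contrast definition (standard deviation over mean) and the dominance ratio are assumptions for illustration, as the abstract does not define them:

```python
import numpy as np

def block_contrast(block, eps=1e-6):
    """A simple visual-contrast measure: standard deviation relative to mean
    intensity (defocus lowers the std but hardly moves the mean)."""
    b = block.astype(float)
    return float(b.std() / (b.mean() + eps))

def classify_blocks(a, b, size=4, ratio=1.2):
    """Block scan over two registered source images: label each block 'a' or
    'b' when one side's contrast clearly dominates (by `ratio`), else
    'boundary' -- mirroring the focused/defocused/boundary partition."""
    h, w = a.shape
    labels = {}
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            ca = block_contrast(a[y:y + size, x:x + size])
            cb = block_contrast(b[y:y + size, x:x + size])
            if ca > ratio * cb:
                labels[(y, x)] = 'a'
            elif cb > ratio * ca:
                labels[(y, x)] = 'b'
            else:
                labels[(y, x)] = 'boundary'
    return labels
```

'boundary' blocks would then be weighted-fused, as the abstract describes, rather than copied from one source.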

11.
The continuous advancement of imaging sensors necessitates efficient image fusion techniques. Multi-focus image fusion extracts the in-focus information from the source images to construct a single composite image with increased depth of field. Traditionally, the information in multi-focus images is divided into two categories, in-focus and out-of-focus. Instead of using a binary focus map, in this work we calculate the degree of focus of each source image using fuzzy logic, and generate the fused image as a weighted sum of this information. An initial tri-state focus map is built for each input image using spatial frequency and a proposed focus measure named the alternate sum-modified Laplacian. Where these measures disagree about which source image contains the focused pixel, or indicate equal strength, another focus measure based on the sum of gradients is employed to calculate the degree of focus in a fuzzy inference system. Finally, the fused image is computed from weights determined by the degree-of-focus map of each image. The proposed algorithm fuses two source images; multiple inputs can be fused by combining each source image with the fusion output of the previous group. Comparison of the proposed method with several transform- and pixel-domain techniques is conducted in terms of both subjective visual assessment and objective quantitative evaluation. Experimental results demonstrate that our method is competitive with, or even outperforms, the methods in comparison.

12.
Digital images are normally taken by focusing on an object, leaving background regions defocused. A popular approach to producing an all-in-focus image is to capture several input images at varying focus settings and then fuse them using offline image-processing software. This paper describes an all-in-focus imaging method that can run directly on digital cameras. It consists of an automatic focus-bracketing algorithm that determines at which focus settings to capture images, and an image-fusion algorithm that computes a high-quality all-in-focus image. While most previous methods compute a focus measure independently for each input image, the proposed method calculates a relative focus measure between a pair of input images. We note that a well-focused region shows better contrast, sharpness, and detail than the corresponding defocused region in another image. Based on the observation that the average-filtered version of a well-focused region correlates more strongly with the corresponding defocused region in another image than the original well-focused version does, a new focus measure is proposed. Experimental results on various sample image sequences show the superiority of the proposed measure in both objective and subjective evaluation, and the method allows users to capture all-in-focus images directly on their digital cameras without offline image-processing software.
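The observation behind this relative focus measure, that blurring the focused member of a region pair moves it toward the defocused one, can be demonstrated with normalized cross-correlation. Whole-region correlation and a box blur are simplifications of the paper's measure; names are illustrative:

```python
import numpy as np

def box_blur(img, k=1):
    """Uniform (2k+1) x (2k+1) box blur (average filter)."""
    h, w = img.shape
    p = np.pad(img.astype(float), k, mode='edge')
    return sum(p[dy:dy + h, dx:dx + w]
               for dy in range(2 * k + 1)
               for dx in range(2 * k + 1)) / (2 * k + 1) ** 2

def corr(a, b):
    """Zero-mean normalized cross-correlation of two equal-size regions."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() /
                 (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

def first_is_focused(a, b):
    """Relative focus test for a pair of corresponding regions: if `a` is the
    focused version, averaging it should move it toward `b`, so
    corr(blur(a), b) exceeds corr(a, blur(b))."""
    return corr(box_blur(a), b) > corr(a, box_blur(b))
```

Because the measure is relative, it needs no absolute sharpness threshold, which is what makes it robust across exposure settings.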

13.
Multi-focus image fusion based on an auto-focusing algorithm
Based on an analysis of the characteristics of multi-focus images, a lens auto-focusing algorithm is transplanted, with improvements, to the multi-focus image fusion problem. The main idea is to use the high-frequency component as the criterion of image sharpness to locate the focused regions of the multi-focus images and copy them directly into the fused image; for the transition region between one focus and another, a maximum-feature fusion operator is applied. Experiments show that the algorithm yields an optimal approximation of the source image features and outperforms wavelet-decomposition-based fusion.

14.
For multi-focus images, a fusion method based on image blocks is proposed. The source images are divided into sub-blocks of equal size and number, and the energy-of-gradient operator is used as the focus evaluation function. The energy-of-gradient matching degree of each pair of image sub-blocks is computed, and a matching-degree threshold separates out the sharp regions of the source images. Sharp regions of the source images are copied directly into the corresponding regions of the fused image; for the remaining regions, an image sequence related to the energy-of-gradient of the corresponding sub-blocks is constructed, along with a fusion function related to the distance from each pixel to the sub-block centers, and this fusion function is then used to fuse the image sequence. Experimental results demonstrate the effectiveness and soundness of the method.
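The energy-of-gradient focus function is straightforward; the ratio used below as a "matching degree" between corresponding blocks is only a plausible stand-in, since the abstract does not define its exact form:

```python
import numpy as np

def energy_of_gradient(block):
    """Energy-of-gradient focus measure: sum of squared forward differences
    in x and y. Larger values indicate a sharper block."""
    b = block.astype(float)
    gx = b[:, 1:] - b[:, :-1]
    gy = b[1:, :] - b[:-1, :]
    return float((gx * gx).sum() + (gy * gy).sum())

def matching_degree(block_a, block_b, eps=1e-12):
    """Illustrative matching degree: ratio of the smaller to the larger EOG
    of two corresponding blocks. Near 1 when both are equally sharp, small
    when one is clearly sharper -- so thresholding it separates clear-cut
    blocks from ambiguous ones."""
    ea, eb = energy_of_gradient(block_a), energy_of_gradient(block_b)
    return min(ea, eb) / (max(ea, eb) + eps)
```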

15.
Objective: To address the two problems that image adaptation methods based on fisheye transformation struggle with, focus detection and multi-focus conflict, an image adaptation method based on an improved fisheye transformation is proposed. Method: The proposed method computes all optimal high-energy lines of the source image from its energy and assembles them into a high-energy line set, taken as the high-energy part of the source image, i.e., its focus region; the fisheye transformation is then applied per energy line, rather than per image region as in the traditional scheme, to obtain the target image. Results: The transformation mode of the fisheye technique is changed and applied to image adaptation. Experimental results show that the method solves the problems of fisheye-based image adaptation; the target images produced by the algorithm have good visual quality, with user satisfaction close to 4 points. The algorithm is also fast, needing only 6 s to halve the width of a 512×384 source image. Conclusion: The method retains the advantages of fisheye-based image adaptation, highlighting the important parts of an image without neglecting the secondary parts, while solving the focus-detection and multi-focus-conflict problems. The implementation results and subjective user evaluations show it to be an effective and feasible image adaptation method.

16.
To address the inaccurate capture of focused/defocused boundary (FDB) information by existing multi-focus image fusion methods, a new multi-focus image fusion method based on linear sparse representation and image matting is proposed. First, a focus measure based on linear sparse representation is introduced: it exploits the linear relationship, over local windows, between a dictionary formed from natural images and the input image, representing the image's focus information through the solved linear coefficients. Then, the focus measure is used to obtain the focus maps of the source images and a trimap consisting of the focused region, the defocused region, and an unknown region containing the FDB; the trimap is taken as an input to image matting, which processes the FDB region of the source images to obtain a fairly accurate all-in-focus image. Finally, to further improve the quality of the fused image, the obtained all-in-focus image is used as a new dictionary so that the fusion process iterates, and the final all-in-focus fused image is obtained after a preset number of updates. Experimental results show that, compared with 11 state-of-the-art multi-focus image fusion methods, the method achieves better fusion performance and visual quality with high computational efficiency.

17.
Research on auto-focusing algorithms for fully automated microscopes
The choice of the image focus evaluation function is the key problem in a passive auto-focusing system for a fully automated microscope. Several major focus evaluation functions (gray-level variance, gray-level gradient, energy spectrum methods, etc.) are compared and studied, and on this basis a modified Laplacian operator is, for the first time, introduced into auto-focusing as the focus evaluation function; to suppress the influence of noise, two parameters, a step size and a threshold, are introduced. Experimental results show that the modified Laplacian operator is more accurate, stable, and reliable than the other evaluation functions, and the algorithm has been successfully applied in a microscope auto-focusing system.
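A minimal version of the modified-Laplacian focus evaluation function with a noise threshold follows; the step-size search over focus positions is omitted, and the exact threshold handling is an assumption:

```python
import numpy as np

def modified_laplacian(img):
    """Modified Laplacian |2I - I_left - I_right| + |2I - I_up - I_down|:
    unlike the plain Laplacian, opposite-signed second derivatives in x and
    y cannot cancel each other."""
    h, w = img.shape
    p = np.pad(img.astype(float), 1, mode='edge')
    c = p[1:h + 1, 1:w + 1]
    mx = np.abs(2 * c - p[1:h + 1, 0:w] - p[1:h + 1, 2:w + 2])
    my = np.abs(2 * c - p[0:h, 1:w + 1] - p[2:h + 2, 1:w + 1])
    return mx + my

def focus_value(img, threshold=0.0):
    """Focus evaluation value of a frame: sum of modified-Laplacian
    responses above a noise threshold (the threshold plays the
    noise-suppressing role mentioned in the abstract)."""
    ml = modified_laplacian(img)
    return float(ml[ml > threshold].sum())
```

An auto-focus loop would evaluate `focus_value` at successive lens positions and stop at the maximum.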

18.
Multi-focus image fusion based on wavelet transform and local energy
This paper proposes a fusion method for images with different focal points based on regional local energy. Using wavelet decomposition, each image is decomposed into low-frequency and high-frequency parts; a suitable ratio is then chosen to attenuate the low-frequency part, reducing its share of the total image energy and relatively increasing the share of the high-frequency part, before the image is reconstructed. For the reconstructed images, a regional local-energy criterion is applied in the spatial domain to judge the objects in each image and select their sharp parts to generate the fused image. The method is suitable not only for multi-focus image fusion but also for fusing medical images with similar characteristics. Experimental results show that the method can extract the sharp objects from multi-focus images and that the resulting fused images are better than those of the Laplacian pyramid and wavelet transform methods.
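A toy version of the wavelet/local-energy selection rule: a one-level Haar transform supplies the high-frequency bands, and each 2x2 block is taken from whichever source has more high-band energy. The low-frequency attenuation step from the abstract is omitted, and all names are illustrative:

```python
import numpy as np

def haar_level(img):
    """One level of a (/2-scaled) 2-D Haar transform.
    Returns the low-pass band LL and the high-pass bands (LH, HL, HH)."""
    a = img.astype(float)
    a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]  # even dimensions
    lo = (a[0::2, :] + a[1::2, :]) / 2   # vertical average
    hi = (a[0::2, :] - a[1::2, :]) / 2   # vertical difference
    LL = (lo[:, 0::2] + lo[:, 1::2]) / 2
    LH = (lo[:, 0::2] - lo[:, 1::2]) / 2
    HL = (hi[:, 0::2] + hi[:, 1::2]) / 2
    HH = (hi[:, 0::2] - hi[:, 1::2]) / 2
    return LL, (LH, HL, HH)

def highband_energy(img):
    """Per-(2x2)-block energy in the Haar high-frequency bands: the local
    energy criterion used to judge which source is sharper."""
    _, (LH, HL, HH) = haar_level(img)
    return LH ** 2 + HL ** 2 + HH ** 2

def fuse_blocks(a, b):
    """Keep, for each 2x2 block, the source image with the larger high-band
    energy -- a sketch of the local-energy decision rule."""
    mask = highband_energy(a) >= highband_energy(b)          # half resolution
    mask_full = np.kron(mask.astype(np.uint8),
                        np.ones((2, 2), np.uint8)).astype(bool)
    h, w = mask_full.shape
    return np.where(mask_full, a[:h, :w], b[:h, :w])
```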

19.
Multi-focus image fusion has emerged as a major topic in image processing: generating all-in-focus images with increased depth of field from multi-focus photographs. Different spatial-domain and transform-domain approaches have been used for this purpose, but most suffer from one or more quality degradations such as blocking artifacts, ringing effects, artificial edges, halo artifacts, contrast decrease, sharpness reduction, or misalignment of the decision map with object boundaries. In this paper we present a novel spatial-domain multi-focus image fusion method that utilizes a dictionary learned from local patches of the source images. Sparse representations of a relative sharpness measure over this trained dictionary are pooled to obtain pooled features. Correlating the pooled features with the sparse representations of the input images produces a pixel-level score for the fusion decision map, and the final regularized decision map is obtained by Markov Random Field (MRF) optimization. We also collected a new color multi-focus image dataset with more variety than traditional multi-focus image sets. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art methods in both visual and quantitative evaluations.

20.

In this paper, a novel region-based multi-focus color image fusion method is proposed that employs the focused edges extracted from the source images to obtain a fused image with better focus. First, edges are obtained from the source images using two suitable edge operators (zero-cross and Canny). Then, a block-wise region comparison extracts the focused edges, which are morphologically dilated, followed by selection of the largest connected component to remove isolated points. Any discontinuity in the detected edges is repaired by consulting the Canny edge-detection output. The best reconstructed edge image is chosen and later converted into a focused region. Finally, the fused image is constructed by selecting pixels from the source images according to a prescribed color decision map. The proposed method has been implemented and tested on a set of real 2-D multi-focus image pairs (both gray-scale and color), and shows competitive performance with respect to recent fusion methods in terms of subjective and objective evaluation.



Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号