Similar Documents
20 similar documents found.
1.
Multi-focus image fusion aims to extract the focused regions from multiple partially focused images of the same scene and combine them into a single, completely focused image. Detecting the focused regions in the source images is the key step. In this paper, we propose a novel boundary-finding-based multi-focus image fusion algorithm, in which detecting the focused regions is treated as finding the boundaries between the focused and defocused regions of the source images. According to the found boundaries, the source images can be naturally separated into regions with uniform focus conditions, i.e., each region is entirely focused or entirely defocused. The focused regions are then identified by selecting, from each pair of corresponding regions, the one with the greater focus measure. To improve the precision of boundary and focused-region detection, we also present a multi-scale morphological focus measure, whose effectiveness has been verified by quantitative evaluation. Unlike general multi-focus image fusion algorithms, our algorithm fuses the boundary regions and the non-boundary regions of the source images separately, which helps produce a fused image with good visual quality. The experimental results validate that the proposed algorithm outperforms several state-of-the-art image fusion algorithms in both qualitative and quantitative evaluations.
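A multi-scale morphological focus measure of the kind this abstract describes can be sketched as follows; the scale set, the square structuring element, and the plain averaging across scales are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def morph_gradient(img, radius):
    """Morphological gradient (dilation minus erosion) with a square
    (2*radius+1) structuring element, built from shifted copies."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    win = np.stack([pad[dy:dy + h, dx:dx + w]
                    for dy in range(2 * radius + 1)
                    for dx in range(2 * radius + 1)])
    return win.max(axis=0) - win.min(axis=0)

def multiscale_focus(img, radii=(1, 2, 3)):
    """Average the morphological gradient over several scales; larger
    values indicate sharper (better focused) pixels."""
    img = img.astype(np.float64)
    return sum(morph_gradient(img, r) for r in radii) / len(radii)
```

On a sharp edge the measure saturates at every scale, while on a blurred edge the small-scale gradients drop, so the multi-scale average separates focused from defocused pixels.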

2.
Anisotropic blur and mis-registration frequently occur in multi-focus images due to object or camera motion, and they severely degrade fusion quality. In this paper, we present a novel multi-scale weighted gradient-based fusion method to address this problem. The method rests on a multi-scale structure-based focus measure, derived from image structure saliency, that reflects the sharpness of edge and corner structures at multiple scales; it determines the gradient weights in the proposed gradient-based fusion scheme. In particular, we adopt a two-scale scheme, a large scale and a small scale, to handle the fusion problems caused by anisotropic blur and mis-registration: the large-scale focus measure is first used to attenuate the impact of anisotropic blur and mis-registration on focused-region detection, and the gradient weights near the boundaries of the focused regions are then refined with the small-scale focus measure. Experimental results clearly demonstrate that the proposed method outperforms conventional fusion methods in the presence of anisotropic blur and mis-registration.
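The two-scale weighting idea can be sketched roughly as follows; `grad_energy` stands in for the paper's structure-based focus measure, and the window radii, the boundary test, and the hard weight switch are all hypothetical simplifications.

```python
import numpy as np

def local_mean(x, radius):
    """Uniform local average over a (2*radius+1)^2 window."""
    h, w = x.shape
    pad = np.pad(x, radius, mode='edge')
    win = np.stack([pad[dy:dy + h, dx:dx + w]
                    for dy in range(2 * radius + 1)
                    for dx in range(2 * radius + 1)])
    return win.mean(axis=0)

def grad_energy(img, radius):
    """Locally averaged gradient energy: a simple stand-in for a
    structure-based focus measure at one scale."""
    gy, gx = np.gradient(img.astype(np.float64))
    return local_mean(gx ** 2 + gy ** 2, radius)

def two_scale_weights(a, b, large=5, small=1):
    """Coarse focused-region decision at the large scale; near the
    decision boundary, the small-scale comparison refines the weights."""
    coarse = grad_energy(a, large) > grad_energy(b, large)
    fine = grad_energy(a, small) > grad_energy(b, small)
    # pixels whose neighborhood mixes both decisions lie near the boundary
    frac = local_mean(coarse.astype(np.float64), small + 1)
    near_boundary = (frac > 0.0) & (frac < 1.0)
    w_a = np.where(near_boundary, fine, coarse).astype(np.float64)
    return w_a, 1.0 - w_a
```

The returned weight maps would then weight the source gradients (or, in a cruder variant, the source pixels) before reconstruction.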

3.
In this paper, we address the problem of fusing multi-focus images of dynamic scenes. The proposed approach consists of three main steps. First, the focus information of each source image, obtained by morphological filtering, yields a rough segmentation that serves as one input to image matting. Then, image matting is applied to obtain the accurate focused region of each source image. Finally, the focused regions are combined to construct the fused image. Through matting, the proposed algorithm combines the focus information with the correlations between nearby pixels and therefore tends to produce more accurate fusion results. Experimental results demonstrate the superiority of the proposed method over traditional multi-focus image fusion methods, especially for images of dynamic scenes.
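The hand-off from rough segmentation to matting can be illustrated by building a trimap from per-pixel focus measures; the `build_trimap` helper and its band width are hypothetical, and a real pipeline would pass the trimap to a matting algorithm (e.g. closed-form matting) to refine the unknown band.

```python
import numpy as np

def build_trimap(focus_a, focus_b, band=2):
    """Sketch of the trimap step: pixels where one source is clearly
    sharper become definite foreground (255) or background (0); a strip
    of width ~2*band around the decision boundary is marked unknown (128)
    so a matting algorithm can resolve it."""
    decision = focus_a > focus_b              # rough segmentation
    h, w = decision.shape
    pad = np.pad(decision, band, mode='edge')
    win = np.stack([pad[dy:dy + h, dx:dx + w]
                    for dy in range(2 * band + 1)
                    for dx in range(2 * band + 1)])
    # a pixel is near the boundary if its window mixes both decisions
    boundary = win.any(axis=0) & ~win.all(axis=0)
    trimap = np.where(decision, 255, 0).astype(np.uint8)
    trimap[boundary] = 128                    # unknown region for matting
    return trimap
```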

4.
Most current image fusion algorithms treat every pixel independently, severing the relationships between pixels. This paper proposes a multi-focus image fusion method based on morphological algorithms and a genetic algorithm, which effectively combines pixel-level and feature-level fusion. Its basic idea is to first detect the sharply focused regions in the source images and then extract those regions to compose a result image in which every part is in sharp focus. Experimental results show that this method outperforms Haar wavelet fusion and morphological wavelet fusion, and is particularly effective when the source images are not fully registered.

5.
When deep learning is applied to multi-focus image fusion, the networks are mostly trained by supervised learning. However, there is no labeled dataset dedicated to supervised training for multi-focus image fusion, and building a large-scale labeled training set is prohibitively expensive, so existing methods typically synthesize supervision by adding random Gaussian blur to focused images; this makes the networks hard to train and the fusion results far from ideal. To address these problems, an easy-to-implement multi-focus image fusion method with good fusion quality is proposed. An encoder-decoder network augmented with an attention mechanism is trained in an unsupervised manner on an easily obtained unlabeled dataset to extract deep features of the input source images. Morphological focus detection is then applied to the extracted features to measure activity levels and generate an initial decision map, which is refined into the final decision map by a consistency verification step. The fused images are evaluated both by subjective visual inspection and by objective metrics; experimental results show that they are sharp, rich in detail, and low in distortion.

6.
Conventional preprocessing of multi-focus images merges the gray-level overlap regions and thereby loses detail from the original images, so the overlap regions are identified poorly. To address this, an adaptive method for identifying gray-level overlap features in multi-focus images is proposed, based on the Mean-shift algorithm and OTSU threshold segmentation. The multi-focus image is first smoothed with the Mean-shift algorithm and then wavelet-transformed to enhance the gray values in the overlap regions; threshold segmentation classifies the gray-enhanced overlap regions, and the OTSU algorithm identifies the gray-level overlap feature regions. Experimental results show that the proposed method identifies gray-level overlap regions notably well while effectively preserving their detail information.

7.
Multi-focus Image Fusion Based on Sharpness and the Nonsubsampled Contourlet Transform
丁莉 (Ding Li), 韩崇昭 (Han Chongzhao). Computer Engineering (计算机工程), 2010, 36(11): 212-214.
Based on the characteristics of multi-focus images, a sharpness-based NSCT image fusion algorithm is proposed. In clearly focused regions, both the low-frequency and high-frequency coefficients are taken entirely from the in-focus source; in the transition regions from focused to defocused, the low-frequency coefficients are selected by maximum regional variance and the high-frequency sub-band coefficients by maximum regional energy. The algorithm is compared with the gradient pyramid, wavelet, and Contourlet fusion algorithms; experimental results show that the fused image has the smallest mean squared error with respect to the source images.
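The choose-max rules for the transition regions can be sketched independently of the NSCT itself: given a pair of corresponding sub-band coefficient maps, pick per pixel the coefficient whose regional variance (low frequency) or regional energy (high frequency) is larger. Function names and the window size are assumptions.

```python
import numpy as np

def local_stat(coef, size=3, stat="energy"):
    """Regional statistic over a (size x size) window:
    'energy' = sum of squared coefficients, 'variance' = local variance."""
    h, w = coef.shape
    pad = np.pad(coef, size // 2, mode='reflect')
    win = np.stack([pad[dy:dy + h, dx:dx + w]
                    for dy in range(size) for dx in range(size)])
    if stat == "variance":
        return win.var(axis=0)
    return (win ** 2).sum(axis=0)

def select_coeffs(c_a, c_b, stat):
    """Choose-max rule: per pixel, keep the coefficient whose regional
    statistic is larger (variance for low-frequency sub-bands, energy
    for high-frequency sub-bands)."""
    mask = local_stat(c_a, stat=stat) >= local_stat(c_b, stat=stat)
    return np.where(mask, c_a, c_b)
```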

8.
In this study, we present a new deep learning (DL) method for fusing multi-focus images. Current DL-based multi-focus image fusion (MFIF) approaches mainly treat MFIF as a classification task, using a convolutional neural network (CNN) to label pixels as focused or defocused. However, because no labeled data are available to train such networks, existing supervised DL models for MFIF add Gaussian blur to focused images to produce training data, while existing unsupervised DL models are too simple and suited only to fusion tasks other than MFIF. To address these issues, we propose a new MFIF method that learns the feature extraction, fusion, and reconstruction components jointly, producing a complete, unsupervised, end-to-end trainable deep CNN. To enhance feature extraction, we introduce a Siamese multi-scale feature extraction module, applying multi-scale convolutions with skip connections to extract more useful common features from a multi-focus image pair. Instead of a basic loss function, our model uses the structural similarity (SSIM) measure as the training loss, and the fused images are reconstructed in a multi-scale manner to guarantee more accurate restoration. The model can process images of variable size during testing and validation. Experimental results on various test images validate that our method yields fused images superior in quality to those generated by compared state-of-the-art image fusion methods.
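A simplified, non-windowed SSIM, usable as `1 - ssim` for a training loss in the spirit of this abstract; the constants follow the common SSIM defaults for images in [0, 1], and the paper's exact (likely windowed, framework-specific) loss may differ.

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global SSIM between two images with values in [0, 1].
    Training would minimize 1 - ssim(fused, source)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

SSIM is bounded and differentiable, which is why it is a popular reconstruction loss for unsupervised fusion networks: identical images score 1, and structural disagreement pulls the score down.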

9.
Multi-focus image fusion is an enhancement method for generating fully sharp images, addressing the depth-of-field limitation of optical lenses. Most existing methods generate a decision map to realize the fusion, which usually leads to detail loss due to misclassification, especially near the boundary between the focused and defocused regions. To overcome this challenge, this paper presents a new generative adversarial network with adaptive and gradient joint constraints for fusing multi-focus images. In our model, an adaptive decision block determines whether source pixels are focused based on the difference of repeated blur. Under its guidance, a specifically designed content loss dynamically guides the optimization, forcing the generator to produce a fused result with the same distribution as the focused source images. To further enhance texture details, we establish an adversarial game so that the gradient map of the fused result approximates the joint gradient map constructed from the source images. The model is unsupervised and requires no ground-truth fused images for training. In addition, we release a new dataset containing 120 high-quality multi-focus image pairs for benchmark evaluation. Experimental results demonstrate the superiority of our method over the state of the art in both subjective visual quality and quantitative metrics; moreover, our method is about an order of magnitude faster.

10.
To address the information loss and pronounced blocking artifacts that often arise in multi-focus image fusion, a new fusion algorithm based on image matting is proposed. First, focus detection obtains the focus information of each source image, from which a trimap of the fused image (foreground, background, and unknown region) is generated. Then image matting extracts the accurate focused region of each source image according to the trimap. Finally, these focused regions are combined to form the foreground and background of the fused image, and the unknown region is fused optimally according to the definite foreground and background obtained by the matting algorithm, strengthening the coherence between the fused foreground, background, and the neighboring pixels of the unknown region. Experimental results show that, compared with traditional algorithms, the proposed algorithm achieves higher mutual information (MI) and edge-preservation scores in objective evaluation and effectively suppresses blocking artifacts in subjective evaluation, yielding better visual quality. The algorithm can be applied in fields such as object recognition and computer vision to obtain better fusion results.

11.
Objective: Deep-learning-based multi-focus image fusion methods mainly use a convolutional neural network (CNN) to classify pixels as focused or defocused. The supervised training usually relies on synthetic datasets, and the accuracy of the label data directly limits the classification accuracy, which in turn limits the accuracy of the hand-crafted fusion rules and the quality of the all-in-focus result. To let the fusion network adapt its fusion rule automatically, a multi-focus image fusion algorithm with a self-learned fusion rule is proposed. Method: An autoencoder architecture extracts features while learning the fusion and reconstruction rules jointly, yielding an unsupervised end-to-end fusion network. The initial decision map of the multi-focus images is supplied as a prior input so that the network learns rich image detail, and a local strategy combining the structural similarity index measure (SSIM) and the mean squared error (MSE) is added to the loss function to restore images more accurately. Results: The model is evaluated subjectively and objectively on public datasets such as Lytro to verify the soundness of the design. Subjectively, it fuses focused regions well, effectively avoids artifacts in the fused image, preserves sufficient detail, and looks naturally sharp. Objectively, compared with other mainstream multi-focus image fusion algorithms, it achieves the best average scores in entropy, Qw, correlation coefficient, and visual information fidelity: 7.4574, 0.9177, 0.9788, and 0.8908, respectively. Conclusion: The proposed fusion algorithm both self-learns and adjusts its fusion rule and produces fusion results comparable to existing methods, helping to further the understanding of deep-learning-based multi-focus image fusion.

12.
Image processing and machine vision have become important research topics, with applications in almost every field of science, and performance in these fields depends critically on the quality of the input images. Most imaging devices use optical lenses to capture a scene, but because of the limited depth of field of optical lenses, objects at different distances from the focal plane are captured with different sharpness and detail, so important details of the scene may be lost in some regions. Multi-focus image fusion is an effective technique for coping with this problem; its main challenge is the selection of an appropriate focus measure. In this paper, we propose a novel focus measure based on the surface area of the regions surrounded by the intersection points of the input source images, and we show its ability to distinguish focused regions from blurred ones. In our fusion algorithm, the intersection points of the input images are calculated, the input images are segmented at these points, and the surface area of each segment is used to determine the focused regions. The resulting initial selection map is then refined by morphological modifications. Comparisons with several competing methods demonstrate the effectiveness of the proposed method.

13.
Wang Zhaobin, Wang Shuai, Guo Lijie. Neural Computing & Applications, 2018, 29(11): 1101-1114.

The purpose of multi-focus image fusion is to acquire an image in which all objects are in focus by fusing source images with different focus points. A novel multi-focus image fusion method based on PCNN and random walks is proposed in this paper. PCNN is consistent with human visual perception, and the random-walk model has been shown in recent years to have great potential for image fusion. The proposed method first employs PCNN to measure the sharpness of the source images, and an original fusion map is constructed. The random-walk method is then employed to improve the accuracy of focused-region detection, and the fused image is finally generated according to the probabilities computed by the random walks. Experiments demonstrate that our method outperforms many existing multi-focus image fusion methods in both visual perception and objective criteria. To assess its performance in practical applications, some examples are given at the end of the paper.


14.
Retaining the useful information of the original images in the fused image is essential in image fusion. To this end, this paper proposes an algorithm based on the multi-scale top-hat transform and the toggle contrast operator, using extracted image regions and details. The top-hat transform extracts image regions, and operations constructed from the toggle contrast operator extract image details; applied at multiple scales, they extract the effective regions and details of the original images, which are then imported into the final fused image to form an effective fusion result. The proposed algorithm therefore keeps more useful image information. The combination of the top-hat transform and the toggle contrast operator for effective image fusion is the main contribution of this paper, extending previous work that used only the toggle contrast operator for edge-preserving image fusion. Experimental results on multi-modal and multi-focus images show that the proposed algorithm performs very well.
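The region/detail extraction side can be sketched with a white top-hat transform (image minus its opening), here built from plain min/max filters; `fuse_bright_regions` is a hypothetical simplification that injects the strongest multi-scale bright details from either source onto a pixel-wise maximum base, not the paper's full top-hat plus toggle-contrast scheme.

```python
import numpy as np

def _filt(img, radius, op):
    """Min or max filter with a square (2*radius+1) window."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    win = np.stack([pad[dy:dy + h, dx:dx + w]
                    for dy in range(2 * radius + 1)
                    for dx in range(2 * radius + 1)])
    return op(win, axis=0)

def white_top_hat(img, radius):
    """Image minus its opening: isolates bright structures smaller
    than the structuring element."""
    opening = _filt(_filt(img, radius, np.min), radius, np.max)
    return img - opening

def fuse_bright_regions(a, b, radii=(1, 2, 3)):
    """Sketch: average, over scales, the stronger top-hat response of
    the two sources, and add it to a pixel-wise maximum base image."""
    base = np.maximum(a, b)  # simplistic base image (assumption)
    detail = sum(np.maximum(white_top_hat(a, r), white_top_hat(b, r))
                 for r in radii) / len(radii)
    return base + detail
```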

15.

In this paper, a novel region-based multi-focus color image fusion method is proposed, which employs the focused edges extracted from the source images to obtain a fused image with better focus. First, edges are obtained from the source images using two suitable edge operators (zero-cross and Canny). A block-wise region comparison then extracts the focused edges, which are morphologically dilated, and the largest connected component is selected to remove isolated points. Any discontinuity in the detected edges is removed by consulting the output of the Canny edge operator. The best reconstructed edge image is chosen and converted into a focused region, and the fused image is finally constructed by selecting pixels from the source images according to a prescribed color decision map. The proposed method has been implemented and tested on a set of real 2-D multi-focus image pairs (both gray-scale and color), and its performance is competitive with recent fusion methods in terms of subjective and objective evaluation.


16.
Multi-focus Image Fusion Based on Secondary Imaging and Sharpness Differences
This paper proposes a fusion method for images with different focus points based on sharpness differences. A sharpness definition based on the squared magnitude of the gradient vector is first chosen; then, from the imaging model of a geometric-optics system and the effect of the point spread function, a secondary-imaging model that simulates the optical system is derived. Based on the sharpness difference of each image before and after secondary imaging, the objects in each image are classified, and the sharp parts are selected to generate the fused image. Experimental results show that the method extracts the sharp objects in multi-focus images and produces fusion results superior to the Laplacian pyramid and wavelet transform methods.

17.
Finite depth of field poses a problem in light optical imaging systems, since objects outside the depth-of-field range appear blurry in the recorded image. The effective depth of field of a sensor can be enhanced considerably, without compromising image quality, by combining multi-focus images of a scene. This paper presents a block-based algorithm for multi-focus image fusion. In general, finding a suitable block size is a problem for block-based methods: a large block is likely to contain portions of both focused and defocused regions, which may lead to the selection of considerable defocused content, while small blocks vary little in relative contrast, are hard to choose between, and are more affected by mis-registration. In this work, we present a block-based algorithm that does not use a fixed block size but instead uses a quad-tree structure to obtain an optimal subdivision of blocks. Although the algorithm starts with blocks, it ultimately identifies sharply focused regions in the input images. It is simple, computationally efficient, and gives good results. A new focus measure, the energy of morphologic gradients, is introduced and used in the algorithm; it is comparable with other focus measures, viz. energy of gradients, variance, Tenengrad, energy of Laplacian, and sum-modified Laplacian. The algorithm is robust in that it works with any of these focus measures, and it is also robust against pixel mis-registration. Its performance has been evaluated with two different quantitative measures.
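The quad-tree subdivision can be sketched as follows, with plain variance standing in for the paper's energy-of-morphologic-gradients focus measure (the abstract notes the measures are interchangeable); the 10% dominance margin and minimum block size are hypothetical.

```python
import numpy as np

def variance_focus(block):
    """Variance as a stand-in focus measure for a block."""
    return block.var()

def quadtree_fuse(a, b, min_size=4):
    """Recursive sketch: if one source's block is decisively sharper,
    copy it whole; otherwise split into four quadrants and recurse."""
    fa, fb = variance_focus(a), variance_focus(b)
    h, w = a.shape
    if min(h, w) <= min_size or h % 2 or w % 2:
        return a if fa >= fb else b            # leaf: pick the sharper block
    if fa > 1.1 * fb:                          # hypothetical 10% margin
        return a
    if fb > 1.1 * fa:
        return b
    h2, w2 = h // 2, w // 2
    out = np.empty_like(a)
    for sy in (slice(0, h2), slice(h2, h)):
        for sx in (slice(0, w2), slice(w2, w)):
            out[sy, sx] = quadtree_fuse(a[sy, sx], b[sy, sx], min_size)
    return out
```

Because ambiguous blocks keep splitting until the minimum size, the recursion effectively traces out the focused regions even though it starts from coarse blocks.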

18.
To address the limited preservation of detail in multi-focus image fusion algorithms, a fusion algorithm combining an improved sparse representation with a product-of-energy-sums (积化能量和) rule is proposed. First, the source images are decomposed by the nonsubsampled shearlet transform into low-frequency and high-frequency sub-band coefficients. Image patches are then extracted from the low-frequency coefficients with a sliding window to construct a joint local adaptive dictionary; sparse representation coefficients are computed with the orthogonal matching pursuit algorithm, fused with a variance-energy weighting rule, and the fused low-frequency coefficients are recovered with the inverse sliding-window step. For the high-frequency coefficients, the product-of-energy-sums fusion rule is proposed. Finally, the fused image is obtained by the inverse transform. Experimental results show that the algorithm preserves more detailed information and has advantages in both visual quality and objective evaluation.

19.
In this paper, a simple and efficient multi-focus image fusion approach is proposed. Since multi-focus images are obtained from the same scene with different focuses, each image can be segmented into two kinds of regions, in focus and out of focus. This leads directly to region-based fusion: find all in-focus regions in the source images and merge them into a combined image. The question is then how to locate the in-focus regions in the input images. Considering that the details and scales differ between in-focus and out-of-focus regions, a blur measure is used to locate the regions according to their degree of blur. This new fusion method significantly reduces the distortion artifacts and loss of contrast information that are usually observed in images fused by conventional schemes. The fusion performance of the proposed method has been evaluated through informal visual inspection and objective fusion performance measurements, and the results show its advantages over conventional fusion approaches.

20.
Modern developments in imaging technology have enabled easy access to an innovative type of sensor-based network, the camera or visual sensor network (VSN). More sensor data sources, however, bring the problem of information overload, and research has been carried out on techniques that counteract the data overload caused by sensors without losing useful data. The aim of fusion in such applications is to combine images from several sensors, decreasing the amount of input image data while producing an image with more accurate content. This paper proposes a noisy-feature-removal scheme for multi-focus image fusion that combines the decision information of optimized individual features. The scheme is developed in two main steps. In the first step, diverse types of features are extracted from each block of the input blurred images; the useful information of these features indicates which image block is more focused among the corresponding blocks of the source images. Noisy features are then removed using a binary genetic grey wolf optimizer (GGWO) algorithm. In the second step, an ensemble decision based on the individual features is employed to fuse the blurred images. Evaluation on different multi-focus images reveals that the proposed GGWO-based method achieves better visual quality than other methods.
