Similar Documents
1.
Most existing image fusion algorithms treat every pixel independently, severing the relationships between pixels. This paper proposes a multi-focus image fusion method based on morphological operators and a genetic algorithm, which effectively combines pixel-level and feature-level fusion. The basic idea is to first detect the sharply focused regions in the source images, then extract those regions and compose them into a result image in which every part is in focus. Experimental results show that the method outperforms Haar wavelet fusion and morphological wavelet fusion, and that it is particularly effective when the source images are not perfectly registered.
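The detect-extract-compose idea above lends itself to a compact illustration. Below is a minimal sketch, assuming two pre-registered grayscale sources; it substitutes a plain Laplacian-energy focus measure with morphological open/close cleanup for the paper's morphology-plus-genetic-algorithm pipeline, so it is an approximation of the idea rather than the authors' method:

```python
import cv2
import numpy as np

def fuse_by_focus_map(img_a, img_b, win=9):
    """Fuse two registered grayscale multi-focus images via a decision map."""
    def focus(img):
        lap = cv2.Laplacian(img.astype(np.float64), cv2.CV_64F)
        # local energy of the Laplacian as a simple focus measure
        return cv2.boxFilter(lap * lap, -1, (win, win))
    decision = (focus(img_a) > focus(img_b)).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (win, win))
    # morphological open/close removes isolated misclassified pixels
    decision = cv2.morphologyEx(decision, cv2.MORPH_OPEN, kernel)
    decision = cv2.morphologyEx(decision, cv2.MORPH_CLOSE, kernel)
    return np.where(decision.astype(bool), img_a, img_b)
```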

2.
Multi-focus image fusion has emerged as an important research area in information fusion. It aims at increasing the depth-of-field by extracting focused regions from multiple partially focused images, and merging them together to produce a composite image in which all objects are in focus. In this paper, a novel multi-focus image fusion algorithm is presented in which the task of detecting the focused regions is achieved using a Content Adaptive Blurring (CAB) algorithm. The proposed algorithm induces non-uniform blur in a multi-focus image depending on its underlying content. In particular, it analyzes the local image quality in a neighborhood and determines whether blur should be induced without losing image quality. In CAB, pixels belonging to the blurred regions receive little or no blur, whereas the focused regions receive significant blur. The absolute difference of the original image and the CAB-blurred image yields an initial segmentation map, which is further refined using morphological operators and graph-cut techniques to improve the segmentation accuracy. Quantitative and qualitative evaluations and comparisons with the current state-of-the-art on two publicly available datasets demonstrate the strength of the proposed algorithm.
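As a rough illustration of the blur-difference step, the sketch below substitutes a uniform Gaussian blur for the content-adaptive blur of CAB (the graph-cut refinement is omitted), so the map it produces only approximates the paper's initial segmentation:

```python
import cv2
import numpy as np

def initial_focus_map(img, sigma=3.0, ksize=11):
    gray = img.astype(np.float64)
    blurred = cv2.GaussianBlur(gray, (ksize, ksize), sigma)
    # focused pixels change a lot under blur; defocused pixels barely change
    diff = np.abs(gray - blurred)
    seg = (diff > diff.mean()).astype(np.uint8)
    # morphological refinement, in the spirit of the paper
    kernel = np.ones((5, 5), np.uint8)
    seg = cv2.morphologyEx(seg, cv2.MORPH_CLOSE, kernel)
    return cv2.morphologyEx(seg, cv2.MORPH_OPEN, kernel)
```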

3.
Objective: Deep-learning-based multi-focus image fusion methods mainly use a convolutional neural network (CNN) to classify pixels as focused or defocused. The supervised learning process usually relies on synthetic datasets, so the accuracy of the label data directly affects classification accuracy, which in turn affects the accuracy of the subsequently hand-crafted fusion rules and the quality of the all-in-focus fused image. To let the fusion network adjust its fusion rule adaptively, a multi-focus image fusion algorithm based on a self-learned fusion rule is proposed. Method: An auto-encoder architecture is adopted to extract features and simultaneously learn the fusion and reconstruction rules, yielding an unsupervised end-to-end fusion network; the initial decision map of the multi-focus images is fed in as a prior so that the network learns rich image detail; and a local strategy combining the structural similarity index measure (SSIM) and the mean squared error (MSE) is added to the loss function to ensure more accurate image restoration. Result: The model is evaluated subjectively and objectively on public datasets such as Lytro to verify the design of the fusion algorithm. Subjectively, the model fuses the focused regions well, effectively avoids artifacts in the fused image, retains sufficient detail, and produces natural, clear visual results. Objectively, compared quantitatively with other mainstream multi-focus image fusion algorithms, it achieves the best average scores in entropy, Qw, correlation coefficient, and visual information fidelity: 7.4574, 0.9177, 0.9788, and 0.8908, respectively. Conclusion: The proposed fusion algorithm not only learns and adjusts its fusion rule by itself but also produces fusion results comparable to existing methods, which helps in further understanding the mechanism of deep-learning-based multi-focus image fusion.
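The local SSIM-plus-MSE strategy in the loss can be illustrated with a simple NumPy/scikit-image analogue; the paper applies these terms locally inside network training, and the balance weight `alpha` here is a hypothetical parameter of this sketch:

```python
import numpy as np
from skimage.metrics import structural_similarity

def fusion_loss(fused, reference, alpha=0.8):
    """Combined SSIM + MSE loss term (sketch, not the paper's training code)."""
    fused = fused.astype(np.float64)
    reference = reference.astype(np.float64)
    ssim = structural_similarity(
        fused, reference,
        data_range=reference.max() - reference.min())
    mse = np.mean((fused - reference) ** 2)
    # maximize SSIM (so 1 - SSIM acts as a loss) while minimizing MSE
    return alpha * (1.0 - ssim) + (1.0 - alpha) * mse
```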

4.
Multi-focus image fusion is an enhancement method to generate fully clear images, which can address the depth-of-field limitation in imaging with optical lenses. Most existing methods generate a decision map to realize multi-focus image fusion, which usually leads to detail loss due to misclassification, especially near the boundary between the focused and defocused regions. To overcome this challenge, this paper presents a new generative adversarial network with adaptive and gradient joint constraints to fuse multi-focus images. In our model, an adaptive decision block is introduced to determine whether source pixels are focused based on the difference of repeated blur. Under its guidance, a specifically designed content loss dynamically guides the optimization trend, forcing the generator to produce a fused result with the same distribution as the focused source images. To further enhance the texture details, we establish an adversarial game so that the gradient map of the fused result approximates the joint gradient map constructed from the source images. Our model is unsupervised, requiring no ground-truth fused images for training. In addition, we release a new dataset containing 120 high-quality multi-focus image pairs for benchmark evaluation. Experimental results demonstrate the superiority of our method over the state-of-the-art in terms of both subjective visual effect and quantitative metrics. Moreover, our method is about one order of magnitude faster than the state-of-the-art.
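The joint gradient map that the adversarial game targets can be sketched as a per-pixel selection of the stronger source gradient; this is a simplified reading of the construction, and the GAN itself is not shown:

```python
import cv2
import numpy as np

def joint_gradient_map(img_a, img_b):
    """At each pixel, keep the source gradient with the larger magnitude."""
    def grads(img):
        g = img.astype(np.float64)
        gx = cv2.Sobel(g, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(g, cv2.CV_64F, 0, 1, ksize=3)
        return gx, gy, np.hypot(gx, gy)
    ax, ay, amag = grads(img_a)
    bx, by, bmag = grads(img_b)
    pick_a = amag >= bmag
    jx = np.where(pick_a, ax, bx)
    jy = np.where(pick_a, ay, by)
    return jx, jy  # target gradients the fused result should approximate
```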

5.
Finite depth-of-field poses a problem in light optical imaging systems, since objects outside the range of depth-of-field appear blurry in the recorded image. The effective depth-of-field of a sensor can be enhanced considerably without compromising image quality by combining multi-focus images of a scene. This paper presents a block-based algorithm for multi-focus image fusion. In general, finding a suitable block size is a problem in block-based methods. A large block is more likely to contain portions from both focused and defocused regions, which may lead to the selection of a considerable amount of defocused content. On the other hand, small blocks do not vary much in relative contrast and are hence difficult to choose between; they are also more affected by mis-registration. In this work, we present a block-based algorithm that does not use a fixed block size but rather makes use of a quad-tree structure to obtain an optimal subdivision of blocks. Though the algorithm starts with blocks, it ultimately identifies sharply focused regions in the input images. The algorithm is simple, computationally efficient, and gives good results. A new focus measure called the energy of morphologic gradients is introduced and used in the algorithm. It is comparable with other focus measures, viz. energy of gradients, variance, Tenengrad, energy of Laplacian, and sum-modified-Laplacian. The algorithm is robust, since it works with any of the above focus measures, and it is also robust against pixel mis-registration. Performance has been evaluated using two different quantitative measures.
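A plausible reading of the "energy of morphologic gradients" focus measure, the sum of squared morphological-gradient responses over a block, is sketched below; the quad-tree subdivision that chooses the block sizes is not reproduced:

```python
import cv2
import numpy as np

def energy_of_morphologic_gradients(block, ksize=3):
    """Focus score for one block (sketch of the measure named above)."""
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    # morphological gradient = dilation - erosion
    grad = cv2.morphologyEx(block.astype(np.float64), cv2.MORPH_GRADIENT, se)
    return float(np.sum(grad * grad))
```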

6.
In this paper, we address the problem of fusing multi-focus images in dynamic scenes. The proposed approach consists of three main steps. First, the focus information of each source image, obtained by morphological filtering, is used to produce a rough segmentation result, which serves as one input to image matting. Then, an image matting technique is applied to obtain the accurate focused region of each source image. Finally, the focused regions are combined to construct the fused image. Through image matting, the proposed fusion algorithm combines the focus information with the correlations between nearby pixels, and therefore tends to produce a more accurate fusion result. Experimental results demonstrate the superiority of the proposed method over traditional multi-focus image fusion methods, especially for images of dynamic scenes.

7.
To address the difficulty of reliably detecting focused pixels in multi-focus image fusion, a fusion method based on robust principal component analysis (RPCA) and region detection is proposed. RPCA is applied to multi-focus image fusion by decomposing each source image into a sparse image and a low-rank image; a region-detection procedure on the sparse matrix yields a focus decision map for the source images; the decision map is then refined by three-direction consistency checking and region growing to obtain the final decision map; and the source images are fused according to the final decision map. Experimental results show that, in subjective evaluation, the proposed method markedly improves contrast, texture clarity, and brightness; in objective evaluation, its effectiveness is confirmed by four metrics: standard deviation, average gradient, spatial frequency, and mutual information.
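The RPCA decomposition into low-rank and sparse images can be computed with the standard inexact augmented Lagrange multiplier scheme; the following is a generic textbook sketch, not the paper's implementation:

```python
import numpy as np

def rpca_ialm(M, max_iter=200, tol=1e-7):
    """Split matrix M into low-rank L and sparse S with M = L + S."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    norm_M = np.linalg.norm(M, ord="fro")
    Y = M / max(np.linalg.norm(M, 2), np.abs(M).max() / lam)
    mu = 1.25 / np.linalg.norm(M, 2)
    rho = 1.5
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # singular-value thresholding for the low-rank part
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # soft thresholding for the sparse part
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        Z = M - L - S
        Y = Y + mu * Z
        mu = rho * mu
        if np.linalg.norm(Z, ord="fro") / norm_M < tol:
            break
    return L, S
```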

8.

In this paper, a novel region-based multi-focus color image fusion method is proposed, which employs the focused edges extracted from the source images to obtain a fused image with better focus. First, edges are obtained from the source images using two suitable edge operators (zero-cross and Canny). Then, a block-wise region comparison is performed to extract the focused edges, which are then morphologically dilated, followed by selection of the largest connected component to remove isolated points. Any discontinuity in the detected edges is removed by consulting the edge-detection output of the Canny operator. The best reconstructed edge image is chosen and later converted into a focused region. Finally, the fused image is constructed by selecting pixels from the source images with the help of a prescribed color decision map. The proposed method has been implemented and tested on a set of real 2-D multi-focus image pairs (both gray-scale and color). The algorithm has competitive performance with respect to recent fusion methods in terms of subjective and objective evaluation.
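The edge-extraction steps (Canny detection, dilation, largest-component selection) map directly onto OpenCV primitives; the block-wise region comparison and the color decision map are omitted in this sketch, and the thresholds are illustrative:

```python
import cv2
import numpy as np

def focused_edge_mask(gray, lo=50, hi=150):
    """gray: uint8 grayscale image. Returns a cleaned edge mask."""
    edges = cv2.Canny(gray, lo, hi)
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=1)
    # keep the largest connected component to remove isolated points
    count, labels, stats, _ = cv2.connectedComponentsWithStats(edges)
    if count <= 1:
        return edges
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return ((labels == largest) * 255).astype(np.uint8)
```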

9.
Anisotropic blur and mis-registration frequently occur in multi-focus images due to object or camera motion. These factors severely degrade the fusion quality of multi-focus images. In this paper, we present a novel multi-scale weighted gradient-based fusion method to solve this problem. The method is based on a multi-scale structure-based focus measure that reflects the sharpness of edge and corner structures at multiple scales. This focus measure is derived from an image structure saliency and is used to determine the gradient weights in the proposed gradient-based fusion method through a novel multi-scale approach. In particular, we focus on a two-scale scheme, i.e., a large scale and a small scale, to effectively solve the fusion problems raised by anisotropic blur and mis-registration. The large-scale structure-based focus measure is used first to attenuate the impact of anisotropic blur and mis-registration on the focused-region detection, and then the gradient weights near the boundaries of the focused regions are carefully determined by applying the small-scale focus measure. Experimental results clearly demonstrate that the proposed method outperforms conventional fusion methods in the presence of anisotropic blur and mis-registration.
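A much-simplified two-scale sketch of this idea follows: gradient magnitudes smoothed at a large Gaussian scale decide the focused regions, and a small scale refines the decision in a band around region boundaries. The structure-saliency focus measure and weighted-gradient reconstruction of the paper are not reproduced:

```python
import cv2
import numpy as np

def scale_weights(img, sigma_large=9.0, sigma_small=1.5):
    g = img.astype(np.float64)
    gx = cv2.Sobel(g, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(g, cv2.CV_64F, 0, 1)
    mag = np.hypot(gx, gy)
    return (cv2.GaussianBlur(mag, (0, 0), sigma_large),
            cv2.GaussianBlur(mag, (0, 0), sigma_small))

def fuse_two_scale(img_a, img_b):
    la, sa = scale_weights(img_a)
    lb, sb = scale_weights(img_b)
    decision = (la > lb).astype(np.uint8)   # robust region detection
    k = np.ones((9, 9), np.uint8)
    # boundary band around the focused regions, refined at the small scale
    band = cv2.dilate(decision, k) - cv2.erode(decision, k)
    refine = (sa > sb).astype(np.uint8)
    decision = np.where(band == 1, refine, decision)
    return np.where(decision.astype(bool), img_a, img_b)
```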

10.
Multi-focus image fusion has emerged as a major topic in image processing for generating all-in-focus images with increased depth-of-field from multi-focus photographs. Different approaches have been used in the spatial and transform domains for this purpose, but most of them are subject to one or more image fusion quality degradations, such as blocking artifacts, ringing effects, artificial edges, halo artifacts, contrast decrease, sharpness reduction, and misalignment of the decision map with object boundaries. In this paper, we present a novel multi-focus image fusion method in the spatial domain that utilizes a dictionary learned from local patches of the source images. Sparse representations of a relative sharpness measure over this trained dictionary are pooled together to obtain the corresponding pooled features. Correlation of the pooled features with sparse representations of the input images produces a pixel-level score for the fusion decision map. The final regularized decision map is obtained using Markov Random Field (MRF) optimization. We also gathered a new color multi-focus image dataset with more variety than traditional multi-focus image sets. Experimental results demonstrate that our proposed method outperforms existing state-of-the-art methods in terms of visual and quantitative evaluations.

11.
In this paper, a simple and efficient multi-focus image fusion approach is proposed. Multi-focus images are obtained from the same scene with different focus settings, so each image can be segmented into two kinds of regions, in-focus and out-of-focus. This directly leads to a region-based fusion: find all in-focus regions in the source images and merge them into a combined image, which poses the question of how to locate the in-focus regions in the input images. Considering that the level of detail differs between in-focus and out-of-focus regions, a blurring measure is used to locate the focused regions according to the degree of blur. This new fusion method significantly reduces the distortion artifacts and loss of contrast information usually observed in images fused by conventional schemes. The fusion performance of the proposed method has been evaluated through informal visual inspection and objective fusion performance measurements, and the results show the advantages of the approach compared to conventional fusion approaches.
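Block-wise selection of the sharper block can be sketched in a few lines; the clarity measure (Laplacian energy) and block size here are illustrative stand-ins for the paper's blurring measure:

```python
import cv2
import numpy as np

def clarity(block):
    # sum of squared Laplacian responses as a simple sharpness measure
    lap = cv2.Laplacian(block.astype(np.float64), cv2.CV_64F)
    return float(np.sum(lap * lap))

def fuse_blockwise(img_a, img_b, block=32):
    """Copy the sharper of each pair of co-located blocks into the output."""
    out = img_b.copy()
    h, w = img_a.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            pa = img_a[y:y + block, x:x + block]
            pb = img_b[y:y + block, x:x + block]
            if clarity(pa) > clarity(pb):
                out[y:y + block, x:x + block] = pa
    return out
```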

12.
张攀 《微型电脑应用》2012,28(9):59-60,65
Multi-focus image fusion merges two (or more) images of the same scene, each focused on different objects, into a single new image that is sharp everywhere. Typical swarm-intelligence methods, such as genetic algorithms and particle swarm optimization, have achieved good results in multi-focus image fusion. Improving and optimizing these swarm-intelligence algorithms to speed up fusion is currently a major research direction.

13.
In this study, we present a new deep learning (DL) method for fusing multi-focus images. Current multi-focus image fusion (MFIF) approaches based on DL mainly treat MFIF as a classification task, using a convolutional neural network (CNN) as a classifier to identify pixels as focused or defocused. However, due to the unavailability of labeled data for training networks, existing DL-based supervised models for MFIF add Gaussian blur to focused images to produce training data. Existing DL-based unsupervised models are also too simple and are only applicable to fusion tasks other than MFIF. To address these issues, we propose a new MFIF method that learns the feature extraction, fusion, and reconstruction components together to produce a complete unsupervised, end-to-end trainable deep CNN. To enhance the feature extraction capability of the CNN, we introduce a Siamese multi-scale feature extraction module, applying multi-scale convolutions along with skip connections to extract more useful common features from a multi-focus image pair. Instead of using basic loss functions to train the CNN, our model utilizes the structural similarity (SSIM) measure as the training loss. Moreover, the fused images are reconstructed in a multi-scale manner to guarantee more accurate restoration. Our model can process images of variable size during testing and validation. Experimental results on various test images validate that our method yields fused images of better quality than those generated by the compared state-of-the-art image fusion methods.

14.
楼晶晶  王煦  苗启广 《计算机应用》2010,30(12):3229-3232
A new multi-focus image fusion algorithm is proposed that fully exploits the ability of the second-generation Bandelet transform to adaptively capture image edges and performs fusion based on regional characteristics. The source images are first decomposed into the Bandelet domain, and fusion is then carried out according to regional characteristics: the maximum-absolute-value rule is applied to the geometric flow, and a regional-variance rule is applied to the Bandelet coefficient matrices; the fused image is finally obtained by the inverse Bandelet transform. Experimental results show that the new algorithm better extracts the essential features of the source images for fusion, and its results surpass the classical Laplacian pyramid and wavelet transform algorithms in both subjective visual quality and objective metrics, especially for source images with pronounced texture and edge information.

15.
Wang  Zhaobin  Wang  Shuai  Guo  Lijie 《Neural computing & applications》2018,29(11):1101-1114

The purpose of multi-focus image fusion is to acquire an image in which all objects are in focus by fusing source images that have different focus points. A novel multi-focus image fusion method is proposed in this paper, based on PCNN and random walks. PCNN is consistent with human visual perception, and the random-walks model has in recent years been shown to have great potential for image fusion. The proposed method first employs PCNN to measure the sharpness of the source images. Then, an original fusion map is constructed. Next, random walks are employed to improve the accuracy of fused-region detection. Finally, the fused image is generated according to the probability computed by the random walks. Experiments demonstrate that our method outperforms many existing multi-focus image fusion methods in visual perception and objective criteria. To assess its performance in practical applications, some examples are given at the end of the paper.
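A simplified PCNN firing map can serve as the kind of sharpness measure the method builds on; the sketch below uses generic simplified-PCNN update equations with illustrative parameters, not the authors' configuration, and the random-walks stage is omitted:

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(img, iterations=30, beta=0.2,
                    v_theta=20.0, alpha_theta=0.2):
    """Accumulated firing count of a simplified PCNN as a focus measure."""
    S = img.astype(np.float64)
    S = S / (S.max() + 1e-12)          # normalized stimulus
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])    # linking weights to 8 neighbors
    Y = np.zeros_like(S)               # firing output
    theta = np.ones_like(S)            # dynamic threshold
    fire_count = np.zeros_like(S)
    for _ in range(iterations):
        L = convolve(Y, W, mode="constant")   # linking input from neighbors
        U = S * (1.0 + beta * L)              # internal activity
        Y = (U > theta).astype(np.float64)
        # fired neurons raise their threshold; it then decays over time
        theta = np.exp(-alpha_theta) * theta + v_theta * Y
        fire_count += Y
    return fire_count
```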

16.
The depth of field (DOF) of camera equipment is generally limited, so it is very difficult to obtain a fully focused image with all objects clear in a single shot. One way to obtain a fully focused image is multi-focus image fusion, which fuses multiple images with different focus depths into one image. However, most existing methods focus too much on the fusion accuracy of individual pixels, ignoring the integrity of the target and the importance of shallow features, resulting in internal errors and boundary artifacts that require lengthy post-processing to repair. To solve these problems, we propose a cascade network based on a Transformer and an attention mechanism, which directly obtains the decision map and fusion result for the focused/defocused regions through end-to-end processing of the source images, avoiding complex post-processing. To improve fusion accuracy, a joint loss function is introduced that optimizes the network parameters from three aspects. Furthermore, to enrich the network's shallow features, a global attention module incorporating shallow features is designed. Extensive experiments were conducted, including a large number of ablation experiments, six objective measures, and a variety of subjective visual comparisons. Compared with nine state-of-the-art methods, the results show that the proposed network structure improves the quality of multi-focus fused images and achieves the best performance.

17.
To address the inaccurate capture of focused/defocused boundary (FDB) information in existing multi-focus image fusion methods, a new method based on linear sparse representation and image matting is proposed. First, a focus measure based on linear sparse representation is introduced: it exploits the linear relationship, within a local window, between a dictionary formed from natural images and the input image, and represents the focus information by solving for the linear coefficients. Then, the focus measure is used to obtain a focus map of the source images and a trimap composed of focused regions, defocused regions, and an unknown region containing the FDB; with the trimap as input, image matting is applied to the FDB region of the source images to obtain an accurate all-in-focus image. Finally, to further improve fusion quality, the resulting all-in-focus image is used as a new dictionary and the fusion process is iterated, producing the final all-in-focus fused image after a preset number of updates. Experimental results show that, compared with eleven state-of-the-art multi-focus image fusion methods, the proposed method achieves better fusion performance and visual quality with high computational efficiency.

18.
To address the information loss and obvious blocking artifacts that commonly occur in multi-focus image fusion, a new fusion algorithm based on image matting is proposed. First, focus detection is used to obtain the focus information of each source image, and a trimap of the fused image, consisting of foreground, background, and an unknown region, is generated from the focus information of all source images. Then, image matting is applied, guided by the trimap, to obtain the accurate focused region of each source image. Finally, these focused regions are combined to form the foreground and background of the fused image, and the unknown region is fused optimally according to the definite foreground and background obtained by the matting algorithm, strengthening the relationships between neighboring pixels across the foreground, background, and unknown regions. Experimental results show that, compared with traditional algorithms, the proposed algorithm achieves higher mutual information (MI) and edge retention in objective evaluation, and in subjective evaluation effectively suppresses blocking artifacts and yields better visual results. The algorithm can be applied in fields such as object recognition and computer vision to obtain better fusion results.
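Trimap generation from a binary focus map is the step that feeds the matting algorithm; the sketch below derives definite foreground and background by erosion and leaves a band of unknown pixels in between (the matting step itself, e.g. closed-form matting, is not implemented here):

```python
import cv2
import numpy as np

def trimap_from_focus_map(focus_map, band=7):
    """focus_map: binary {0,1} array. Returns a uint8 trimap (0/128/255)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (band, band))
    fg = cv2.erode(focus_map.astype(np.uint8), kernel)        # definite FG
    bg = cv2.erode((1 - focus_map).astype(np.uint8), kernel)  # definite BG
    trimap = np.full(focus_map.shape, 128, np.uint8)          # unknown band
    trimap[fg == 1] = 255
    trimap[bg == 1] = 0
    return trimap
```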

19.
Multi-focus Image Fusion Based on Block-wise PCA Combined with High-pass Filtering
To address the problem that traditional pixel-level fusion methods sever the relationships between pixels, a new region-detection-based image fusion method is proposed. The algorithm uses PCA and high-pass filtering for region detection. First, each image is partitioned into blocks; then, high-pass filtering and principal component analysis are used to classify each block as sharp or blurred; finally, the sharp blocks are selected to reconstruct the image, yielding the final fusion result. Experimental results show that this method outperforms the traditional PCA and wavelet methods.

20.
The key to salient object detection is accurately highlighting the foreground region, and most traditional methods perform poorly on images with complex backgrounds. To address this, a salient object detection method based on foreground enhancement and background suppression is proposed. First, simple linear iterative clustering (SLIC) is used to segment the image into superpixel regions; the salient regions and background seeds are obtained from inter-region contrast and boundary information, and two saliency maps, one based on inter-region contrast and one based on the background, are computed. Then, seam carving and graph-based image segmentation are applied to the two maps to distinguish salient from non-salient regions, producing foreground-enhancement and background-suppression templates. Finally, the two saliency maps and the templates are fused into the final saliency map. Validation on the public MSRA-1000 dataset shows that the proposed algorithm achieves better precision and recall than seven mainstream algorithms.
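The SLIC stage and a toy inter-region contrast score can be sketched with scikit-image; this stands in for the paper's contrast/boundary saliency computation, and the seam-carving, graph-based segmentation, and template-fusion stages are omitted:

```python
import numpy as np
from skimage.segmentation import slic

def contrast_saliency(image_rgb, n_segments=200):
    """Per-pixel saliency from mean-color contrast between SLIC superpixels."""
    labels = slic(image_rgb, n_segments=n_segments, start_label=0)
    ids = np.unique(labels)
    means = np.array([image_rgb[labels == i].mean(axis=0) for i in ids])
    # a region is salient if its mean color differs from all other regions
    sal = np.array([np.linalg.norm(means[i] - means, axis=1).sum()
                    for i in range(len(ids))])
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
    lut = np.zeros(labels.max() + 1)   # map region id -> saliency score
    lut[ids] = sal
    return lut[labels]
```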
