Similar Articles
18 similar articles found (search time: 156 ms)
1.
Objective: A key problem in multi-focus image fusion is how to accurately judge the sharpness of the images to be fused. This paper proposes a sharpness criterion based on the normalized number of structural extrema. Methods: Based on the local-extremum properties of an image, the normalized number of structural extrema is defined and used as a sharpness criterion; a fast multi-focus image fusion method is also given that combines this criterion with a fast estimation technique for the fusion decision matrix. Results: Experiments show that the proposed criterion and fusion method solve the above problem well. Conclusion: This paper presents a new image sharpness criterion with high accuracy and good robustness to impulse noise. Subjective and objective comparisons with traditional fusion methods on two groups of test images show that the proposed method clearly improves on existing multi-focus image fusion methods in both speed and quality.
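The abstract does not give the exact definition of the normalized structural extrema count, so the following is a minimal illustrative sketch: it scores a block by the fraction of interior pixels that are strict local extrema among their four orthogonal neighbours. The function name `extrema_count_sharpness` and the 4-neighbourhood are assumptions, not the paper's definition.

```python
def extrema_count_sharpness(block):
    """Illustrative sharpness score: fraction of interior pixels in
    `block` (a 2-D list of gray values) that are strict local extrema
    with respect to their four orthogonal neighbours. In-focus regions
    tend to contain more local extrema than blurred ones."""
    h, w = len(block), len(block[0])
    count = 0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = block[i][j]
            neigh = [block[i - 1][j], block[i + 1][j],
                     block[i][j - 1], block[i][j + 1]]
            if c > max(neigh) or c < min(neigh):
                count += 1
    return count / (h * w)  # normalise by block area

# A flat (defocused-looking) block scores 0; a textured block scores higher.
flat = [[5] * 5 for _ in range(5)]
checker = [[(i + j) % 2 * 9 for j in range(5)] for i in range(5)]
print(extrema_count_sharpness(flat), extrema_count_sharpness(checker))  # 0.0 0.36
```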

2.
Application of kernel Fisher discriminant analysis to multi-focus image fusion
A multi-focus image fusion method based on kernel Fisher discriminant analysis (KFDA) and block segmentation is proposed. The method first divides the source images into blocks and computes sharpness features that reflect the degree of focus of each block. Parts of the source images are then used as training samples to obtain the trained KFDA parameters, and the trained KFDA is applied to produce a preliminary fused image. Finally, source-image blocks lying on the boundary between clear and blurred regions are processed with the redundant wavelet transform to obtain the final fused image. Experimental results show that the method outperforms commonly used fusion methods and achieves a good trade-off between improving fusion quality and reducing computational cost.

3.
To address the difficulty of reliably detecting focused regions in multi-focus image fusion, a fusion method based on robust principal component analysis (RPCA) and region detection is proposed. RPCA is applied to decompose the source images into sparse and low-rank components; region detection on the sparse matrix yields a focus decision map for the source images; a three-direction consistency check and region growing then produce the final decision map, according to which the source images are fused. Experimental results show that, in subjective evaluation, the method gives clear improvements in contrast, texture sharpness, and brightness; in objective evaluation, its effectiveness is demonstrated by four metrics: standard deviation, average gradient, spatial frequency, and mutual information.

4.
Multi-focus image fusion based on sharpness and the nonsubsampled contourlet transform
丁莉, 韩崇昭. 《计算机工程》 (Computer Engineering), 2010, 36(11): 212-214
According to the characteristics of multi-focus images, a sharpness-based NSCT image fusion algorithm is proposed. In clearly focused regions, both the low- and high-frequency coefficients are taken entirely from the sharp region; in the transition region from sharp to blurred, the low-frequency coefficient with the largest regional variance and the high-frequency subband coefficient with the largest regional energy are selected. Compared with the gradient pyramid, wavelet, and contourlet fusion algorithms, experimental results show that the image fused by this method has the smallest mean-square error with respect to the source images.

5.
To build an effective activity-level measurement model for detecting the focused regions of multi-focus images, and to address the low efficiency of most fusion methods and their poor handling of unregistered source images, a fast and robust multi-focus image fusion method based on multiple visual features and guided filtering is proposed. First, the focused regions of the source images are detected by measuring three complementary visual features (contrast saliency, sharpness, and structure saliency), yielding an initial fusion decision map. To fully exploit spatial consistency and suppress artifacts in the fusion result, morphological filtering and guided filtering are applied to optimize the initial decision map, giving the final fusion weight map. Finally, the source images are fused by weighting them with the optimized map. Experimental results show that the method outperforms several mainstream multi-focus image fusion methods in both subjective visual quality and objective quantitative evaluation.
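The final weighting step described above (fusing the source images with an optimized weight map) can be sketched as follows; `weighted_fusion` is a hypothetical name, and the images are plain 2-D lists for illustration:

```python
def weighted_fusion(img_a, img_b, weights):
    """Pixel-wise weighted fusion: fused = w * A + (1 - w) * B,
    where `weights` is the optimised focus weight map with
    values in [0, 1] (1 means take the pixel fully from A)."""
    return [[w * a + (1 - w) * b
             for a, b, w in zip(row_a, row_b, row_w)]
            for row_a, row_b, row_w in zip(img_a, img_b, weights)]

a = [[10, 10], [10, 10]]
b = [[0, 0], [0, 0]]
w = [[1.0, 0.5], [0.0, 0.25]]
print(weighted_fusion(a, b, w))  # [[10.0, 5.0], [0.0, 2.5]]
```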

6.
To address the difficulty existing spatial-domain fusion algorithms have in reasonably measuring regional sharpness, which causes a "ringing" effect in the fusion result, a multi-focus image fusion algorithm based on relative regional sharpness is proposed. The algorithm defines a relative-sharpness criterion for image regions and, according to the relative sharpness of each pixel's neighborhood across the multi-focus images, selects the in-focus pixels one by one to form the fused image. Experiments show that the algorithm achieves better fusion results than other algorithms; in particular, it preserves the detail of the source images well and thus significantly reduces the "ringing" effect in the fusion result.

7.
Multi-focus image fusion based on region sharpness
张素兰, 王铮. 《计算机工程》 (Computer Engineering), 2009, 35(4): 221-222
To obtain an image in which all objects in a scene are in focus, a new multi-focus image fusion algorithm is proposed. The images to be fused are divided into blocks, and a sharpness evaluation function (region sharpness) is constructed for each block; its value determines which source image a fused block should be taken from. A voting scheme is used to perform a consistency check on the fused image, and blocks that are adjacent but come from different focused images are fused by weighting. Experimental results show that, compared with wavelet-based fusion algorithms, the algorithm is faster and produces better results.
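A minimal sketch of the block-selection idea above, using energy of gradient as an assumed stand-in for the paper's region-sharpness function (the names `region_sharpness` and `fuse_blocks` are illustrative, not the paper's):

```python
def region_sharpness(block):
    """Assumed sharpness proxy: sum of squared horizontal and
    vertical first differences (energy of gradient) over the block."""
    h, w = len(block), len(block[0])
    s = 0
    for i in range(h):
        for j in range(w):
            if j + 1 < w:
                s += (block[i][j + 1] - block[i][j]) ** 2
            if i + 1 < h:
                s += (block[i + 1][j] - block[i][j]) ** 2
    return s

def fuse_blocks(block_a, block_b):
    """Take whichever source block scores sharper."""
    if region_sharpness(block_a) >= region_sharpness(block_b):
        return block_a
    return block_b

sharp = [[0, 9], [9, 0]]
blurred = [[4, 5], [5, 4]]
print(fuse_blocks(sharp, blurred) is sharp)  # True
```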

8.
Image fusion method based on the dual-tree complex wavelet transform
To obtain better fusion results, an image fusion method based on the dual-tree complex wavelet transform (DT-CWT) is proposed. The DT-CWT is approximately shift-invariant and directionally selective, which makes it well suited to image fusion and superior to the traditional discrete wavelet transform. Multi-strategy fusion rules are given: after the wavelet transform of the source images, regional sharpness is used for the low-frequency band and regional standard deviation for the high-frequency bands. Fusion experiments on grayscale and color multi-focus images, together with statistics of the evaluation metrics, demonstrate the advantages of the DT-CWT and the effectiveness of the fusion rules.

9.
A depth estimation method based on confocal images is proposed. A virtual-aperture technique synthesizes a confocal image sequence from the multi-view images captured by a camera array, and this sequence serves as the data source for depth-from-focus analysis to estimate scene depth. Tailored to the characteristics of confocal images, a focus measure is proposed that combines the sharpness evaluation used in traditional depth-from-focus methods with a color-consistency evaluation, making it applicable to focus analysis in both strongly and weakly textured regions of the image. Experimental results show that the method discriminates focus effectively and yields fairly accurate scene depth estimates.

10.
To improve the accuracy of focused-region detection in multi-focus image fusion based on multi-scale transforms, a fusion algorithm based on the nonsubsampled shearlet transform (NSST) and focused-region detection is proposed. First, an initial fused image is obtained with an NSST-based fusion method. Second, the initial fused image is compared with the source multi-focus images to obtain the initial focused regions. The initial focused regions are then refined with morphological opening and closing. Finally, the fused image is obtained on the refined focused regions with an improved pulse-coupled neural network (IPCNN). Compared with classic wavelet- and shearlet-based fusion methods, as well as popular methods based on NSST and the pulse-coupled neural network (PCNN), the proposed algorithm clearly improves the objective metrics of mutual information (MI), spatial frequency, and transferred edge information. Experimental results show that the algorithm identifies the focused regions of the source images more accurately and extracts more sharp information from the source images into the fused image.
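Spatial frequency, an objective metric cited in several of these abstracts, has a standard definition that can be sketched as follows (pure Python, 2-D lists for illustration):

```python
import math

def spatial_frequency(img):
    """Spatial frequency (SF): SF = sqrt(RF^2 + CF^2), where RF and CF
    are the mean squared horizontal and vertical first differences.
    Higher SF indicates richer detail in the (fused) image."""
    h, w = len(img), len(img[0])
    rf = sum((img[i][j] - img[i][j - 1]) ** 2
             for i in range(h) for j in range(1, w)) / (h * w)
    cf = sum((img[i][j] - img[i - 1][j]) ** 2
             for i in range(1, h) for j in range(w)) / (h * w)
    return math.sqrt(rf + cf)

print(round(spatial_frequency([[0, 10], [10, 0]]), 6))  # 10.0
```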

11.
Wang Zhaobin, Wang Shuai, Guo Lijie. Neural Computing & Applications, 2018, 29(11): 1101-1114

The purpose of multi-focus image fusion is to acquire an image in which all objects are in focus by fusing source images with different focus points. A novel multi-focus image fusion method based on PCNN and random walks is proposed in this paper. PCNN is consistent with human visual perception, and the random-walk model has shown great potential for image fusion in recent years. The proposed method first employs PCNN to measure the sharpness of the source images, and an initial fusion map is constructed. Next, random walks are employed to improve the accuracy of focused-region detection. Finally, the fused image is generated according to the probabilities computed by the random walks. Experiments demonstrate that the method outperforms many existing multi-focus image fusion methods in both visual perception and objective criteria. To assess its performance in practical applications, some examples are given at the end of the paper.


12.
Anisotropic blur and mis-registration frequently occur in multi-focus images because of object or camera motion, and these factors severely degrade fusion quality. In this paper, we present a novel multi-scale weighted gradient-based fusion method to solve this problem. The method builds on a multi-scale structure-based focus measure that reflects the sharpness of edge and corner structures at multiple scales. This focus measure is derived from image-structure saliency and is used to determine the gradient weights in the proposed gradient-based fusion scheme. In particular, we adopt a two-scale scheme, a large scale and a small scale, to handle the fusion problems raised by anisotropic blur and mis-registration: the large-scale focus measure is applied first to attenuate their impact on focused-region detection, and the gradient weights near the boundaries of the focused regions are then refined with the small-scale focus measure. Experimental results clearly demonstrate that the proposed method outperforms conventional fusion methods in the presence of anisotropic blur and mis-registration.

13.
Multi-focus image fusion has emerged as a major topic in image processing for generating all-in-focus images with increased depth of field from multi-focus photographs. Different approaches have been used in the spatial or transform domain for this purpose, but most suffer from one or more quality degradations such as blocking artifacts, ringing effects, artificial edges, halo artifacts, contrast decrease, sharpness reduction, or misalignment of the decision map with object boundaries. In this paper we present a novel spatial-domain multi-focus image fusion method that uses a dictionary learned from local patches of the source images. Sparse representations of a relative sharpness measure over this trained dictionary are pooled to obtain pooled features, and the correlation of these pooled features with the sparse representations of the input images produces a pixel-level score for the fusion decision map. The final regularized decision map is obtained by Markov random field (MRF) optimization. We also gathered a new color multi-focus image dataset with more variety than traditional multi-focus image sets. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art methods in both visual and quantitative evaluations.

14.
To address the edge blurring that often occurs in traditional spatial-domain fusion of multi-focus images, a multi-focus image fusion method based on guided filtering (GF) and difference images is proposed. First, the source images are guided-filtered at different levels, and the filtered images are differenced to obtain focus-feature images; the energy of gradient (EOG) of the focus-feature images is then used to obtain an initial decision map, on which a spatial-consistency check and morphological operations are performed to eliminate the errors caused by EOG …
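The initial decision map described above (mark each pixel as coming from whichever source has the stronger focus feature) can be sketched as follows; `initial_decision_map` is an illustrative name, and the inputs are assumed to be precomputed focus-feature (e.g. EOG) maps:

```python
def initial_decision_map(feat_a, feat_b):
    """Illustrative initial decision map: a pixel is marked 1
    (take it from source A) when A's focus feature is at least
    as large as B's, otherwise 0 (take it from source B)."""
    return [[1 if a >= b else 0 for a, b in zip(ra, rb)]
            for ra, rb in zip(feat_a, feat_b)]

eog_a = [[9, 1], [8, 2]]
eog_b = [[3, 7], [2, 6]]
print(initial_decision_map(eog_a, eog_b))  # [[1, 0], [1, 0]]
```

In a full pipeline this binary map would then be cleaned by a consistency check and morphological opening/closing before weighting the sources.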

15.
In this paper, a simple and efficient multi-focus image fusion approach is proposed. Multi-focus images are all captured from the same scene with different focus settings, so each image can be segmented into two kinds of regions, in focus and out of focus. This leads directly to a region-based fusion: find all in-focus regions in the source images and merge them into one combined image. The question is then how to locate the in-focus regions in the input images. Since the details and scales differ between in-focus and out-of-focus regions, a blur measure based on the degree of blocking is used here to locate those regions. The new fusion method significantly reduces the distortion artifacts and loss of contrast information that are usually observed in images fused by conventional schemes. The fusion performance of the proposed method has been evaluated through informal visual inspection and objective fusion performance measurements, and the results show the advantages of the approach over conventional fusion approaches.

16.
Multi-focus image fusion using PCNN
This paper proposes a new method for multi-focus image fusion based on a dual-channel pulse-coupled neural network (dual-channel PCNN). Compared with previous methods, this method does not decompose the input source images and does not need multiple PCNNs or other algorithms such as the DWT. The two source images are input directly and in parallel into the dual-channel PCNN, while a focus measure is computed for them; according to the focus-measure results, the weighting coefficients are adjusted automatically, with the adjustment rule depending on the specific transformation, and the input images are combined inside the dual-channel PCNN. Four groups of experiments are designed to verify the performance of the proposed method against several existing methods. Experimental results show that the presented method outperforms the existing methods in both visual effect and objective evaluation criteria. Finally, some practical applications are presented.

17.
A coordinated empirical wavelet transform is proposed and applied to multi-focus image fusion. The empirical wavelet transform (EWT) is an adaptive signal decomposition method with better properties than empirical mode decomposition and the traditional wavelet transform; its core idea is to decompose a signal adaptively by constructing adaptive filters. However, if two multi-focus images are each decomposed directly by the EWT, the empirical wavelets generated for each image are unrelated, so the corresponding subbands of the two decompositions do not match, which degrades the quality of the fused image. To solve this problem, a coordinated empirical wavelet transform (C-EWT) is proposed, under which the corresponding subbands of the two multi-focus images match exactly. On this basis, a new multi-focus image fusion algorithm using the C-EWT is proposed. Each source image is decomposed by the C-EWT into one low-frequency component and several high-frequency components; the low-frequency components are fused with a threshold-matched selection-and-weighting rule based on an improved sum-modified-Laplacian, while the high-frequency components are fused by taking the larger local Log-Gabor energy; the fused subband components are then reconstructed to obtain the fused image. Simulations show that, compared with six other fusion algorithms, the proposed algorithm is better at fusing the focused regions and preserving edge and detail information; the fused image has better visual quality, and its objective evaluation metrics are closest to those of the reference image.

18.
The joint static and dynamic filter has good edge-smoothing properties and is highly robust to artifacts such as gradient reversal and global intensity shift. To preserve the structural information of the source images, a multi-focus image fusion method based on the joint static and dynamic filter is proposed. First, the filter decomposes the source images into structure and texture components; the structure components are fused by visual-saliency weighting, and the texture components are fused using combined phase-congruency and sharpness information. The two components are superimposed to obtain an initial fused image, and the structural similarity between the source images and the initial fused image is computed as a decision matrix to obtain the final fused image. Subjective and objective evaluations of several groups of fused images show that the method preserves edge information effectively.
