Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
International Journal of Computer Mathematics, 2012, 89(9): 2072-2090
In the multi-focus image fusion problem, the source images are obtained from the same scene and are fused to get an image that contains all well-focussed objects. Previously, individual machine-learning models were proposed for image fusion, but individual models are limited in their ability to fuse the useful information extracted from blurred images. To address this problem, we developed a novel ensemble scheme for multi-focus image fusion using support vector machines (SVMs). In the proposed scheme, SVM models are first constructed with linear, polynomial, radial-basis, and sigmoid kernel functions. The predictions of the individual SVM models are then combined using majority voting. In this way, the combined decision space becomes more informative and discriminant. A comparative analysis of the proposed scheme is carried out against previous techniques; our scheme is found to be more accurate for both synthetically blurred and real defocussed images.
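The combination step described above can be sketched in a few lines. This is a minimal numpy illustration, not the paper's implementation: the function names are hypothetical, and the binary decision maps stand in for the per-pixel predictions of the kernel-specific SVMs (1 = "pixel is sharper in source A", 0 = "sharper in source B").

```python
import numpy as np

def majority_vote(decision_maps):
    """Fuse binary per-pixel decision maps by majority voting."""
    stack = np.stack(decision_maps, axis=0)        # (n_models, H, W)
    votes = stack.sum(axis=0)                      # votes for class 1
    # strict majority of n_models votes selects class 1
    return (votes * 2 > stack.shape[0]).astype(np.uint8)

def fuse(img_a, img_b, decision):
    """Pick each pixel from the source the combined decision selects."""
    return np.where(decision == 1, img_a, img_b)
```

With three 2x2 decision maps `[[1,0],[1,1]]`, `[[1,1],[0,1]]`, and `[[0,0],[1,1]]`, the vote yields `[[1,0],[1,1]]`, and `fuse` then copies each pixel from the winning source.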

2.
To address the edge blurring that traditional spatial-domain fusion of multi-focus images tends to produce, a multi-focus image fusion method based on guided filtering (GF) and difference images is proposed. First, the source images are guided-filtered at different levels, and the filtered images are differenced to obtain focus-feature images. An initial decision map is then derived from the energy-of-gradient (EOG) information of the focus-feature images, and spatial-consistency checking and morphological operations are applied to it to remove noise caused by nearly equal EOG values. The initial decision map is then guided-filtered to obtain an optimized decision map, which avoids abrupt edge transitions in the fused image. Finally, the source images are fused by weighting according to the optimized decision map. Three classic groups of multi-focus images were selected as test images, and the results of the proposed method were compared with those of nine other multi-focus image fusion methods. Subjective visual results show that the proposed method preserves the detail of multi-focus images better; in addition, all four objective evaluation metrics of the fused images are significantly better than those of the comparison methods. The results indicate that the proposed method produces high-quality fused images, preserves the original image information well, and effectively resolves the edge blurring seen in traditional multi-focus image fusion.
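The EOG comparison that produces the initial decision map can be sketched as follows. This is a hedged numpy sketch, not the paper's code: the function names are hypothetical, the inputs stand in for the focus-feature (difference) images, and the guided-filter and morphological refinement stages are omitted.

```python
import numpy as np

def eog(img):
    """Energy of gradient: squared forward differences in x and y."""
    f = img.astype(float)
    gx = np.diff(f, axis=1, append=f[:, -1:])   # horizontal gradient
    gy = np.diff(f, axis=0, append=f[-1:, :])   # vertical gradient
    return gx ** 2 + gy ** 2

def initial_decision(feat_a, feat_b):
    """1 where source A appears better focused, 0 otherwise."""
    return (eog(feat_a) >= eog(feat_b)).astype(np.uint8)
```

Pixels where the two EOG values are nearly equal are exactly the noise points the abstract's consistency check and morphological operations are meant to clean up.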

3.
Multi-focus image fusion has emerged as a major topic in image processing for generating all-in-focus images with increased depth-of-field from multi-focus photographs. Different approaches have been used in the spatial or transform domain for this purpose, but most of them are subject to one or more fusion quality degradations such as blocking artifacts, ringing effects, artificial edges, halo artifacts, contrast decrease, sharpness reduction, and misalignment of the decision map with object boundaries. In this paper, we present a novel multi-focus image fusion method in the spatial domain that utilizes a dictionary learned from local patches of the source images. Sparse representations of a relative sharpness measure over this trained dictionary are pooled together to obtain the corresponding pooled features. Correlating the pooled features with the sparse representations of the input images produces a pixel-level score for the fusion decision map. The final regularized decision map is obtained using Markov Random Field (MRF) optimization. We also gathered a new color multi-focus image dataset with more variety than traditional multi-focus image sets. Experimental results demonstrate that our proposed method outperforms existing state-of-the-art methods in both visual and quantitative evaluations.

4.
When deep learning is applied to multi-focus image fusion, networks are mostly trained by supervised learning. However, there is no labelled dataset dedicated to supervised training for multi-focus image fusion, and building a large-scale labelled training set is prohibitively expensive, so existing methods mostly perform supervised learning by randomly adding Gaussian blur to focused images; this makes the network hard to train and the fusion results far from ideal. To solve these problems, an easy-to-implement multi-focus image fusion method with good fusion quality is proposed. An encoder-decoder network model with an attention mechanism is trained in an unsupervised manner on an easily obtained unlabelled dataset to extract deep features of the input source images. Morphological focus detection then measures the activity level of the extracted features to generate an initial decision map, which is refined by consistency verification to obtain the final decision map. Fusion quality was assessed both by subjective visual inspection and by objective metrics; experimental results show that the fused images are sharp, rich in detail, and low in distortion.

5.
Objective: Deep-learning-based multi-focus image fusion methods mainly use convolutional neural networks (CNNs) to classify pixels as focused or defocused. Supervised training typically relies on synthetic datasets, and the accuracy of the label data directly affects classification accuracy, and hence the accuracy of the subsequently hand-crafted fusion rules and the quality of the all-in-focus result. To let the fusion network adapt its fusion rules automatically, a multi-focus image fusion algorithm based on self-learned fusion rules is proposed. Method: An autoencoder architecture extracts features while simultaneously learning the fusion and reconstruction rules, yielding an unsupervised end-to-end fusion network. The initial decision map of the multi-focus images is fed in as a prior to learn rich image detail, and a local strategy comprising the structural similarity index measure (SSIM) and mean squared error (MSE) is added to the loss function to ensure more accurate image reconstruction. Results: The model was evaluated subjectively and objectively on public datasets such as Lytro to validate the design of the fusion algorithm. Subjectively, the model not only fuses focused regions well and effectively avoids artifacts in the fused image, but also retains sufficient detail for a natural, sharp visual result. Objectively, quantitative comparison with the fused images of other mainstream multi-focus fusion algorithms shows the best average scores in entropy, Qw, correlation coefficient, and visual information fidelity: 7.4574, 0.9177, 0.9788, and 0.8908, respectively. Conclusion: The proposed fusion algorithm can self-learn and adjust its fusion rules, produces fused images comparable to existing methods, and helps further the understanding of deep-learning-based multi-focus image fusion.
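The two loss terms named in the abstract, SSIM and MSE, can be written compactly. This is a minimal sketch with hypothetical function names, using the common SSIM constants for 8-bit images and a single window over the whole image rather than the paper's local (sliding-window) strategy.

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((x.astype(float) - y.astype(float)) ** 2))

def ssim(x, y, data_range=255.0):
    """Single-window SSIM over the whole image (no sliding window)."""
    x = x.astype(float)
    y = y.astype(float)
    c1 = (0.01 * data_range) ** 2            # stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A combined reconstruction loss in the spirit of the abstract would then be something like `mse(x, y) + lam * (1 - ssim(x, y))` with a weighting factor `lam`.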

6.
Often, captured images are not focussed everywhere, yet many applications of pattern recognition and computer vision require all parts of the image to be well-focussed. The all-in-focus image obtained through the improved image fusion scheme is useful for downstream image processing tasks such as image enhancement, image segmentation, and edge detection. Most fusion techniques have used feature-level information extracted from the spatial or transform domain. In contrast, we propose a novel random forest (RF)-based scheme that incorporates both feature-level and decision-level information. In the proposed scheme, useful features are extracted from both the spatial and transform domains and are used to train the randomly generated trees of the RF algorithm. The predictions of the trees are aggregated to construct a more accurate decision map for fusion. Our scheme yields better fused images than previous approaches based on principal component analysis and the wavelet transform, which use simple feature-level information, and also outperforms individual machine-learning approaches based on Support Vector Machines and Probabilistic Neural Networks. The performance of the proposed scheme is evaluated using various qualitative and quantitative measures. It achieves 98.83%, 97.29%, 98.97%, 97.78%, and 98.14% accuracy for the standard images Elaine, Barbara, Boat, Lena, and Cameraman, respectively, and 97.94%, 98.84%, 97.55%, and 98.09% accuracy for the real blurred images Calendar, Leaf, Tree, and Lab, respectively.

7.
The aim of multi-focus image fusion is to fuse images taken from the same scene with different focuses so as to obtain a resultant image with all objects in focus. However, most existing techniques cannot achieve good fusion performance and acceptable complexity simultaneously. To improve fusion efficiency and performance, we propose a lightweight multi-focus image fusion scheme based on the Laplacian pyramid transform (LPT) and adaptive pulse-coupled neural networks with local spatial frequency (PCNN-LSF), which needs to process fewer sub-images than common methods. The proposed scheme employs LPT to decompose a source image into its constituent sub-images. Spatial frequency (SF) is calculated to adjust the linking strength β of the PCNN according to the gradient features of the sub-images. The oscillation frequency graph (OFG) of the sub-images is then generated by the PCNN model, and the local spatial frequency (LSF) of the OFG is calculated as the key step in fusing the sub-images. Incorporating the LSF of the OFG, which represents its regional features, into the fusion scheme effectively describes the detailed information of the sub-images: the LSF enhances the features of the OFG and makes it easy to extract high-quality sub-image coefficients. Experiments indicate that the proposed scheme achieves good fusion results and is more efficient than other commonly used image fusion algorithms.
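The spatial-frequency measure used above to adapt the PCNN linking strength has a standard closed form. The sketch below is a hedged numpy illustration with a hypothetical function name; the pyramid decomposition and PCNN stages themselves are omitted.

```python
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2) from row and column first differences."""
    b = block.astype(float)
    rf2 = np.mean(np.diff(b, axis=1) ** 2)   # row frequency squared
    cf2 = np.mean(np.diff(b, axis=0) ** 2)   # column frequency squared
    return np.sqrt(rf2 + cf2)
```

A flat block gives SF = 0, while blocks with strong gradients give large SF, which is why SF is a reasonable proxy for how strongly a sub-image's pixels should link in the PCNN.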

8.
The method first extracts the luminance component of the source images and decomposes it with the non-subsampled Contourlet transform. The high-frequency coefficients are processed with a "choose the larger composite pixel value" rule to obtain a fusion decision map, which is then consistency-checked; finally, pixels are selected in RGB space according to the verified decision map to produce the fused image. Experimental results show that the method avoids the colour distortion that RGB-space fusion methods tend to cause, and since only the luminance component is fused, computational complexity is reduced. The fused image retains the useful information of the sources while compensating for the weak detail rendering of traditional spatial-domain methods, and better matches human visual characteristics. The method has also been applied to grayscale multi-focus image fusion with good results.

9.
Multi-focus image fusion based on secondary imaging and sharpness difference
This paper proposes a fusion method for images with different focus points based on sharpness differences. A sharpness measure based on the sum of squared gradient magnitudes is first chosen; then, from the imaging model of a geometric optical system and the effect of the point spread function, a secondary imaging model that simulates the optical system is proposed. Based on the sharpness difference of each image before and after secondary imaging, the objects in each image are judged and their sharp parts selected to generate the fused image. Experimental results show that the method can extract the sharp objects in multi-focus images, and the fused images it generates are better than those of the Laplacian pyramid and wavelet transform methods.

10.
邹佳彬, 孙伟. 《计算机应用》 (Journal of Computer Applications), 2018, 38(3): 859-865
To suppress the pseudo-Gibbs phenomena that the traditional wavelet transform produces in multi-focus image fusion, and to overcome the tendency of traditional sparse-representation fusion methods to smooth away texture, edges, and other details, an image fusion algorithm based on the lifting stationary wavelet transform (LSWT) and joint structured group sparse representation is adopted to improve the efficiency and quality of multi-focus image fusion. The test images are first decomposed by LSWT, and different fusion rules are applied according to the different physical characteristics of the resulting low- and high-frequency coefficients: low-frequency coefficients are selected with a joint structured group sparse-representation scheme, and high-frequency coefficients with a scheme combining the directional region sum of Laplacian energy (DRSML) and a matching degree. The final fused image is reconstructed by the inverse transform. Experimental results show that the improved algorithm effectively raises metrics such as mutual information and average gradient, preserves texture, edges, and other details intact, and yields better fused images.

11.
In multi-focus image fusion algorithms, multi-resolution coefficient-fusion methods cannot extract the sharp pixels of the source images, and block-based methods suffer from blocking artifacts. Starting from the characteristics of sharp pixels in multi-focus images and the contrast sensitivity of human vision, and exploiting the non-subsampled, shift-invariant nature of the stationary wavelet transform (SWT), an SWT-based regional contrast of image pixels is defined as the criterion for pixel extraction. After studying how the neighbourhood size of the regional contrast affects pixel extraction, a suitable threshold is set to build an extraction template, and the pixels of the clear regions in the multi-focus source images are extracted; the small number of pixel positions that cannot be extracted are fused with a local-energy strategy. Simulation results show that the new algorithm both effectively extracts the sharp pixels of the source images and reduces blocking artifacts, greatly improving the fusion result.

12.
Sensor fusion combines the output of multiple imaging sensors within a single composite display. Ideally, a fused image will retain important spatial information provided by individual input images, and will convey useful spatial or chromatic emergent information derived from the contrast between input images. The present experiment assessed the potential benefits of sensor fusion as a method of enhancing drivers' night-time detection of road hazards. Observers were asked to detect a pedestrian within thermal and visible images of a night-time scene, and within chromatic and achromatic renderings created by sensor fusion of grayscale thermal and visible images. Results indicated that fusion can both improve spatial image content, and can effectively embellish spatial content with emergent chromatic information. The benefits of both sensor fusion and of color rendering, however, were inconsistent, varying substantially with quality of input images submitted for fusion.

13.
To address the difficulty of effectively detecting focused regions in multi-focus image fusion, a fusion method based on robust principal component analysis (RPCA) and region detection is proposed. RPCA theory is applied to multi-focus image fusion by decomposing the source images into sparse and low-rank components. A region-detection method is applied to the sparse matrix to obtain a focus decision map of the source images; three-direction consistency checking and region growing are then applied to produce the final decision map, according to which the source images are fused. Experimental results show that, in subjective evaluation, the proposed method significantly improves contrast, texture sharpness, and brightness; in objective evaluation, four metrics, standard deviation, average gradient, spatial frequency, and mutual information, demonstrate its effectiveness.

14.
A multi-focus image fusion method based on wavelet directional contrast
The human visual system is highly sensitive to local image contrast, so combining the wavelet transform with directional contrast may yield better fusion. After studying directional contrast, a new multi-focus image fusion method based on wavelet directional contrast is proposed. The two images to be fused are first decomposed by a multi-scale wavelet transform. At each decomposition level of each image, the ratio of the neighbourhood mean of each high-frequency sub-band pixel to the neighbourhood mean of the low-frequency sub-band is computed, where the low-frequency sub-band of that level is obtained by the 2-D inverse discrete wavelet transform of the low- and high-frequency sub-bands of the previous level; the high-frequency coefficient corresponding to the larger ratio is taken as the fused wavelet coefficient. Then, from the highest decomposition level to the lowest, the 2-D inverse discrete wavelet transform is applied successively to the fused high-frequency coefficients and the low-frequency coefficients of each level, finally yielding the fused image. The method accounts for the correlation of neighbouring pixels and reduces wrong pixel selection. Experimental results show that its fusion quality improves on multi-focus fusion methods that compute the wavelet directional contrast per pixel.
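The contrast-driven coefficient selection described above can be sketched with a one-level Haar decomposition. This is a hedged illustration, not the paper's method: the function names are hypothetical, Haar stands in for the unspecified wavelet, and a pointwise |high-band| / low-band ratio replaces the neighbourhood-mean ratio of the abstract.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition (image sides must be even)."""
    f = img.astype(float)
    lo_r = (f[0::2, :] + f[1::2, :]) / 2      # row-pair averages
    hi_r = (f[0::2, :] - f[1::2, :]) / 2      # row-pair differences
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2  # low-low (approximation)
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2  # horizontal detail
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2  # vertical detail
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2  # diagonal detail
    return ll, lh, hl, hh

def fuse_band(hi_a, hi_b, ll_a, ll_b, eps=1e-6):
    """Keep the coefficient whose contrast against the low band is larger."""
    c_a = np.abs(hi_a) / (np.abs(ll_a) + eps)
    c_b = np.abs(hi_b) / (np.abs(ll_b) + eps)
    return np.where(c_a >= c_b, hi_a, hi_b)
```

In the paper, both ratio terms are neighbourhood means rather than single pixels, which is precisely what reduces wrong pixel selection.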

15.
An adaptive multi-focus image fusion method
To fuse different multi-focus images effectively, an adaptive multi-focus image fusion method based on regional features in the wavelet domain is proposed. The two images to be fused are first wavelet-decomposed. For the low-frequency part, on the basis of preserving the features common to the source images, the features specific to each image are added to the fused image; the high-frequency part is fused according to the regional wavelet energy. The fused image is finally reconstructed by the inverse wavelet transform. The method fuses multi-focus images fully adaptively and generalizes to a wide variety of source images. Experiments show that the algorithm achieves good fusion results and is an effective multi-focus image fusion method.

16.
This paper shows practical examples of the application of a new image fusion paradigm for achieving a 2-D all-in-focus image starting from a set of multi-focus images of a 3-D real object. The goal is to provide an enhanced 2-D image showing the object entirely in focus. The fusion procedure shown here is based on a pixel-level focus measure, defined in the space-frequency domain through a 1-D pseudo-Wigner distribution. The method is illustrated with different sets of images. Evaluation measures applied to artificially blurred cut-and-pasted regions show that the present scheme can perform as well as or better than alternative image fusion algorithms.

17.
18.
In this paper, we consider a central estimating officer (CEO) scenario, where sensors observe a noisy version of a binary sequence generated by a single source (the “phenomenon”) and the access point (AP)’s goal is to estimate, by properly fusing the received data, this sequence. Due to this system model, the data sent by the sensors are correlated and, therefore, it is possible to exploit a proper a priori information in the localized fusion operation performed at the AP. In the presence of channel coding at the sensors and block faded communication links, we first derive the optimum maximum a priori probability (MAP) joint decoding and fusion rule, showing its computational unfeasibility. We then derive two suboptimal decoding/fusion strategies. In the first case, the fusion rule exploits the source correlation and receives, at its input, the soft-output values generated by a joint channel decoder (JCD). Two possible iterative JCD algorithms are proposed: one with “circular” iterations between the component decoders (associated with the sources) and one with “parallel” iterations between the component decoders. For each algorithm, two information combining strategies are considered. In the second case, a separate channel decoding (SCD) scheme is considered and the correlation is exploited only during the fusion operation. Our results show that the scheme with SCD followed by fusion basically leads to the same probability of decision error of the scheme with JCD and fusion with, however, a much lower computational complexity, thus making it suitable to resource-constrained scenarios.

19.
This paper studies the principles of the non-subsampled Contourlet transform (NSCT) and its advantages of multi-scale analysis, localization, directionality, and anisotropy, and proposes a new NSCT-based multi-focus image fusion algorithm. The multi-focus images are decomposed by NSCT, and different fusion rules are applied to different sub-bands: the low-frequency sub-band uses a new fusion rule based on a grayscale morphological gradient operator together with consistency checking, and the band-pass sub-bands use a regional-energy-based rule. The fused coefficients are then inverse-NSCT-transformed to obtain the fused image. Experimental results show that, compared with other fusion algorithms, this algorithm preserves source image information and detail features more effectively.

20.
Application of an immune particle swarm optimization algorithm to image fusion
A block-based wavelet multi-focus image fusion method is proposed, in which an immune particle swarm optimization (PSO) search strategy is applied to find the optimal combination of multi-focus image sub-blocks. Image sub-blocks are treated as particles, and the fused image is formed from the optimal combination of blocks. Two evaluation metrics, information entropy and cross entropy, are used to analyse and assess the different fusion methods. Experimental results show that the fusion performance is better than both block-only fusion without wavelet decomposition and wavelet-only fusion without blocking; the method removes block artifacts while saving computation and achieves good fusion results. Compared with the standard PSO, the immune PSO has better convergence and a higher rate of reaching the optimum.
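The two evaluation metrics named above, information entropy and cross entropy, have simple histogram-based forms. The sketch below is a hedged numpy illustration with hypothetical function names; it computes cross entropy as the relative entropy between the gray-level distributions of a source and the fused image, one common convention in the fusion literature, and assumes 8-bit gray levels.

```python
import numpy as np

def gray_hist(img, bins=256):
    """Normalized gray-level histogram of an 8-bit image."""
    h, _ = np.histogram(img, bins=bins, range=(0, bins))
    return h.astype(float) / h.sum()

def entropy(img):
    """Shannon entropy (bits) of the gray-level distribution."""
    p = gray_hist(img)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def cross_entropy(src, fused, eps=1e-12):
    """Relative entropy of the source distribution vs. the fused one."""
    p, q = gray_hist(src), gray_hist(fused)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / (q[mask] + eps))))
```

Higher entropy of the fused image indicates more information retained, while a cross entropy near zero indicates the fused image's gray-level distribution stays close to the source's.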


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号