Similar Documents
19 similar documents found.
1.
A color image fusion algorithm based on PCA and the wavelet transform   (Total citations: 3; self-citations: 1; by others: 2)
陆欢  吴庆宪  姜长生 《计算机仿真》2007,24(9):202-205,296
To address the fusion of multi-focus color images, a multi-focus image fusion algorithm based on PCA and the wavelet transform is proposed. First, principal component analysis and the discrete wavelet frame transform are used to classify image blocks as either sharp or blurred. Sharp blocks are copied directly into the corresponding regions of the fused image. For blurred blocks, a pixel-selection rule based on pointwise feature values is constructed: from the two images to be fused, the pixel with the larger feature value is chosen as the pixel of the fusion result, yielding the final fused image. Experimental results show that the proposed method handles multi-focus color image fusion well and produces satisfactory fusion results.
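The sketch below illustrates only the PCA weighting idea, not the authors' full method (which additionally uses a discrete wavelet frame transform and block-wise sharp/blurred classification): the leading eigenvector of the 2x2 covariance of the two source images gives the fusion weights. The function names and the choice of numpy are illustrative assumptions.

```python
# Minimal sketch of PCA-based weighting for two grayscale source images.
# This shows only the PCA step; the paper additionally uses a discrete
# wavelet frame transform and block-wise sharp/blurred classification.
import numpy as np

def pca_fusion_weights(img_a: np.ndarray, img_b: np.ndarray):
    """Return fusion weights from the leading eigenvector of the 2x2 covariance."""
    data = np.stack([img_a.ravel(), img_b.ravel()]).astype(np.float64)
    cov = np.cov(data)                       # 2x2 covariance of the two images
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    v = np.abs(eigvecs[:, -1])               # leading eigenvector
    w = v / v.sum()                          # normalized weights
    return float(w[0]), float(w[1])

def pca_fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    w_a, w_b = pca_fusion_weights(img_a, img_b)
    return w_a * img_a.astype(np.float64) + w_b * img_b.astype(np.float64)
```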

2.
Multi-focus image fusion combining block-wise PCA with high-pass filtering   (Total citations: 1; self-citations: 0; by others: 1)
To overcome the problem that traditional pixel-level fusion methods ignore the relationships between neighboring pixels, a new region-detection-based image fusion method is proposed. The algorithm detects regions mainly with PCA and high-pass filtering. First, the images are divided into blocks; then each block is classified as sharp or blurred using high-pass filtering and principal component analysis; finally, the sharp blocks are selected to reconstruct the image, giving the final fusion result. Experimental results show that this method outperforms the traditional PCA and wavelet methods.
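A minimal sketch of the block-selection step, assuming the per-block sharpness score is simply the Laplacian (high-pass) energy; the PCA part of the per-block decision is omitted, and the block size and function names are illustrative.

```python
# Sketch of block-wise sharp/blurred selection using high-pass (Laplacian) energy.
# The paper also applies PCA per block; here only the high-pass criterion is shown.
import numpy as np
from scipy.ndimage import laplace

def block_fuse(img_a: np.ndarray, img_b: np.ndarray, block: int = 16) -> np.ndarray:
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    ea, eb = laplace(a) ** 2, laplace(b) ** 2   # high-pass energy maps
    fused = np.empty_like(a)
    h, w = a.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            sl = (slice(i, min(i + block, h)), slice(j, min(j + block, w)))
            # keep the block whose high-pass energy is larger (i.e. the sharper one)
            fused[sl] = a[sl] if ea[sl].sum() >= eb[sl].sum() else b[sl]
    return fused
```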

3.
A color multi-focus image fusion algorithm based on the discrete wavelet frame transform   (Total citations: 4; self-citations: 0; by others: 4)
A color multi-focus image fusion algorithm based on the discrete wavelet frame transform is proposed. The gray-scale component of each color multi-focus image is first extracted and decomposed with the discrete wavelet frame transform; a per-pixel sharpness index is then computed from the transform coefficients, and the pixels belonging to the sharp regions of each image are combined according to this index to produce the fused image. Experimental results show that the proposed algorithm handles color multi-focus image fusion well.

4.
By analyzing several statistical features of the coefficients produced by a multiwavelet decomposition, a region similarity measure for multiwavelet coefficients is constructed. Combining this measure with the imaging mechanism of multi-focus images, a fusion algorithm based on the multiwavelet transform and region similarity is proposed. Experimental results show that the algorithm fuses multi-focus images effectively and yields an image in which all objects in the scene are sharp.

5.
For multi-focus image fusion, starting from the contrast sensitivity of human vision and from the boundary pixels between the sharp and blurred areas of multi-focus images, an image fusion algorithm based on rational cubic splines is proposed. Each multi-focus image is partitioned into three regions: a sharp region, a blurred region, and a boundary region. In the boundary region, a weight function is constructed from a rational cubic spline, and the gray value of each boundary pixel is computed as a weighted combination of pixels from the sharp region and the blurred region centered on that boundary pixel. Experiments confirm that the proposed fusion method is flexible, reliable, and produces good fusion results.
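The sketch below illustrates only the boundary-region blending idea. The paper constructs the weight from a rational cubic spline; here a cubic Hermite smoothstep driven by a signed distance to the boundary is used as a plain stand-in, and the band width, mask input, and function names are assumptions.

```python
# Sketch of blending across the sharp/blurred boundary band. The paper builds the
# weight from a rational cubic spline; here a cubic Hermite smoothstep is used as
# a simple stand-in, driven by a signed distance to the boundary.
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_blend(sharp_a: np.ndarray, img_a: np.ndarray, img_b: np.ndarray,
                   band: float = 8.0) -> np.ndarray:
    """sharp_a: boolean mask where image A is the in-focus source."""
    # signed distance: positive inside A's focused region, negative outside
    d = distance_transform_edt(sharp_a) - distance_transform_edt(~sharp_a)
    t = np.clip((d + band) / (2.0 * band), 0.0, 1.0)
    w = t * t * (3.0 - 2.0 * t)              # smoothstep weight in [0, 1]
    return w * img_a.astype(np.float64) + (1.0 - w) * img_b.astype(np.float64)
```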

6.
梅益君  王元庆 《计算机工程》2007,33(19):193-195
A new multi-focus image fusion algorithm based on the point spread function is proposed. The multi-focus images are repeatedly convolved with a Gaussian kernel, and by comparing the results with the approximation error of the original images, the in-focus and blurred regions are identified; these regions are then segmented with morphological operations and fused according to a maximum-value fusion rule. Experimental results show that the algorithm partitions the image into several focused regions with sharp edges and gives good fusion results.
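A minimal sketch of the re-blurring idea under simplifying assumptions: a single Gaussian convolution per source, the local average of |I − G∗I| as the focus measure, and binary opening/closing as the morphological cleanup. The kernel width, window, and structuring-element sizes are illustrative.

```python
# Sketch of re-blurring-based focus detection: sharp regions change most when
# convolved with a Gaussian, so the locally averaged |I - G*I| residual serves
# as a focus measure. Sigma and structuring-element sizes are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, binary_opening, binary_closing, uniform_filter

def reblur_fuse(img_a: np.ndarray, img_b: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    a, b = img_a.astype(np.float64), img_b.astype(np.float64)
    # focus measures: local average of the re-blurring residual
    fa = uniform_filter(np.abs(a - gaussian_filter(a, sigma)), size=9)
    fb = uniform_filter(np.abs(b - gaussian_filter(b, sigma)), size=9)
    mask = fa >= fb                                    # True where A is sharper
    mask = binary_closing(binary_opening(mask, np.ones((5, 5))), np.ones((5, 5)))
    return np.where(mask, a, b)
```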

7.
Because the focal range of a visible-light imaging system is limited, it is difficult to obtain an image in which all objects in the scene are in focus; multi-focus image fusion solves this problem effectively. After analyzing the concept and principle of the wavelet packet transform, a multi-focus image fusion method based on the wavelet packet transform with a region-statistics fusion rule is proposed: the differently focused images are first decomposed with the wavelet packet transform, the low-frequency coefficients are averaged while the high-frequency coefficients are selected according to region statistics, and the fused result is obtained by the inverse wavelet packet transform. Simulation experiments show that, compared with the wavelet-transform method, the proposed method combines multi-focus images effectively and achieves better fusion results.
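A rough sketch of the fusion rule described above using PyWavelets' 2-D wavelet packets, assuming averaged approximation coefficients and detail coefficients selected by a local (region) energy statistic; the wavelet, level, and window size are illustrative, and the region statistic in the paper may differ.

```python
# Sketch of wavelet-packet fusion: approximation coefficients averaged, detail
# coefficients chosen per pixel by larger local (region) energy.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def wavelet_packet_fuse(img_a, img_b, wavelet="db2", level=2, win=5):
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    wp_a = pywt.WaveletPacket2D(data=a, wavelet=wavelet, mode="symmetric", maxlevel=level)
    wp_b = pywt.WaveletPacket2D(data=b, wavelet=wavelet, mode="symmetric", maxlevel=level)
    wp_f = pywt.WaveletPacket2D(data=None, wavelet=wavelet, mode="symmetric", maxlevel=level)
    for node in wp_a.get_level(level, order="natural"):
        ca = node.data
        cb = wp_b[node.path].data
        if node.path == "a" * level:           # pure approximation band: average
            wp_f[node.path] = 0.5 * (ca + cb)
        else:                                   # detail bands: pick larger region energy
            ea = uniform_filter(ca * ca, size=win)
            eb = uniform_filter(cb * cb, size=win)
            wp_f[node.path] = np.where(ea >= eb, ca, cb)
    fused = wp_f.reconstruct(update=False)
    return fused[:a.shape[0], :a.shape[1]]      # crop any padding from the transform
```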

8.
An image fusion algorithm combining the wavelet transform with sharp-block search   (Total citations: 2; self-citations: 0; by others: 2)
In wavelet-based multi-focus image fusion, the choice of low-frequency coefficients determines the overall appearance of the fused image and also affects its edge details, so the wavelet approach trades local sharpness for average sharpness over the whole image. Spatial-domain block-selection methods, in contrast, preserve the sharp parts of the focused images well but perform poorly in blurred regions. A new fusion algorithm is therefore proposed: the images are first fused by the wavelet transform, and a sharp-block search is then performed on the source images, followed by block-selection-based fusion. Experimental results show that the fusion result of this algorithm is better than that of the wavelet transform alone.

9.
To improve the accuracy of the focused regions in multiscale-transform-based multi-focus image fusion, a fusion algorithm based on the non-subsampled shearlet transform (NSST) and focused-region detection is proposed. First, an initial fused image is obtained with an NSST-based fusion method. Second, the initial fused image is compared with the source multi-focus images to obtain the initial focused regions. Third, the initial focused regions are refined with morphological opening and closing. Finally, the fused image is obtained on the refined focused regions with an improved pulse-coupled neural network (IPCNN). Compared with classical fusion methods based on the wavelet and shearlet transforms, and with popular methods based on NSST and the pulse-coupled neural network (PCNN), the proposed algorithm clearly improves the objective metrics of mutual information (MI), spatial frequency, and the amount of transferred edge information. Experimental results show that the proposed algorithm identifies the focused regions of the source images more accurately and transfers more sharp information from the source images into the fused image.

10.
Multi-focus image fusion based on sharpness and the non-subsampled contourlet transform   (Total citations: 2; self-citations: 1; by others: 1)
丁莉  韩崇昭 《计算机工程》2010,36(11):212-214
Based on the characteristics of multi-focus images, a sharpness-based NSCT fusion algorithm is proposed. In the sharp regions, both the low- and high-frequency coefficients are taken entirely from the sharp region; in the transition regions from sharp to blurred, the low-frequency coefficient is selected as the one with the largest region variance and the high-frequency sub-band coefficient as the one with the largest region energy. Compared with the gradient-pyramid, wavelet, and contourlet fusion algorithms, experimental results show that the image fused by this method has the smallest mean squared error with respect to the source images.
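The sketch below shows only the two transition-region selection rules named above (largest region variance for the low-frequency band, largest region energy for the high-frequency sub-bands); the NSCT decomposition and reconstruction are not shown, since no standard library is assumed for them, and the window size is illustrative.

```python
# Sketch of the transition-region selection rules: low-frequency coefficients
# chosen by larger region variance, high-frequency coefficients by larger
# region energy. The NSCT decomposition itself is not shown.
import numpy as np
from scipy.ndimage import uniform_filter

def region_variance(x: np.ndarray, win: int = 5) -> np.ndarray:
    mean = uniform_filter(x, size=win)
    return uniform_filter(x * x, size=win) - mean * mean

def select_lowpass(la: np.ndarray, lb: np.ndarray, win: int = 5) -> np.ndarray:
    return np.where(region_variance(la, win) >= region_variance(lb, win), la, lb)

def select_highpass(ha: np.ndarray, hb: np.ndarray, win: int = 5) -> np.ndarray:
    ea = uniform_filter(ha * ha, size=win)    # region energy of each sub-band
    eb = uniform_filter(hb * hb, size=win)
    return np.where(ea >= eb, ha, hb)
```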

11.
Multi-focus image fusion aims to extract the focused regions from multiple partially focused images of the same scene and then combine them together to produce a completely focused image. Detecting the focused regions from multiple images is key for multi-focus image fusion. In this paper, we propose a novel boundary finding based multi-focus image fusion algorithm, in which the task of detecting the focused regions is treated as finding the boundaries between the focused and defocused regions from the source images. According to the found boundaries, the source images can be naturally separated into regions with the same focus conditions, i.e., each region is fully focused or defocused. The focused regions can then be identified by selecting the region with the greater focus-measure from each pair of regions. To improve the precision of boundary detection and focused region detection, we also present a multi-scale morphological focus-measure, whose effectiveness has been verified with quantitative evaluations. Different from general multi-focus image fusion algorithms, our algorithm fuses the boundary regions and non-boundary regions of the source images respectively, which helps produce a fused image with good visual quality. Moreover, the experimental results validate that the proposed algorithm outperforms some state-of-the-art image fusion algorithms in both qualitative and quantitative evaluations.
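A hedged sketch of what a multi-scale morphological focus-measure could look like: the morphological gradient accumulated over structuring elements of increasing size. The scales and weighting are assumptions, and the paper's boundary finding and region-wise fusion are not reproduced.

```python
# Sketch of a multi-scale morphological focus measure: the morphological gradient
# (dilation minus erosion) accumulated over structuring elements of increasing
# size, with larger scales weighted down.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def multiscale_morph_focus(img: np.ndarray, scales=(3, 5, 7)) -> np.ndarray:
    x = img.astype(np.float64)
    measure = np.zeros_like(x)
    for s in scales:
        grad = grey_dilation(x, size=(s, s)) - grey_erosion(x, size=(s, s))
        measure += grad / s
    return measure
```

A per-pixel decision map between two sources could then be taken as, e.g., `multiscale_morph_focus(img_a) >= multiscale_morph_focus(img_b)`.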

12.
In this paper, we address the problem of fusing multi-focus images in dynamic scenes. The proposed approach consists of three main steps: first, the focus information of each source image, obtained by morphological filtering, is used to get a rough segmentation result, which is one of the inputs of image matting. Then, an image matting technique is applied to obtain the accurate focused region of each source image. Finally, the focused regions are combined together to construct the fused image. Through image matting, the proposed fusion algorithm combines the focus information and the correlations between nearby pixels, and therefore tends to obtain a more accurate fusion result. Experimental results demonstrate the superiority of the proposed method over traditional multi-focus image fusion methods, especially for images of dynamic scenes.

13.
This paper presents a new wavelet-based algorithm for the fusion of spatially registered infrared and visible images. Wavelet-based image fusion is the most common fusion method, which fuses the information from the source images in the wavelet transform domain according to some fusion rules. We specifically propose new fusion rules for fusion of low and high frequency wavelet coefficients of the source images in the second step of the wavelet-based image fusion algorithm. First, the source images are decomposed using the dual-tree discrete wavelet transform (DT-DWT). Then, a fuzzy-based approach is used to fuse high frequency wavelet coefficients of the IR and visible images. Particularly, fuzzy logic is used to integrate the outputs of three different fusion rules (weighted averaging, selection using a pixel-based decision map (PDM), and selection using a region-based decision map (RDM)), based on a dissimilarity measure of the source images. The objective is to utilize the advantages of previous pixel- and region-based methods in a single scheme. The PDM is obtained based on local activity measurement in the DT-DWT domain of the source images. A new segmentation-based algorithm is also proposed to generate the RDM using the PDM. In addition, a new optimization-based approach using population-based optimization is proposed for the low frequency fusion rule instead of simple averaging. After fusing low and high frequency wavelet coefficients of the source images, the final fused image is obtained using the inverse DT-DWT. This new method provides improved subjective and objective results as compared to previous image fusion methods.

14.
Fusion of infrared and visible image sequences based on region target detection   (Total citations: 1; self-citations: 0; by others: 1)
刘从义  敬忠良  肖刚  杨波 《计算机工程》2007,33(20):204-206
A fusion method for infrared and visible image sequences based on region target detection is proposed. Target detection is used to segment the source image sequences into target and background regions, and different fusion rules are applied to each, while the dual-tree complex wavelet transform provides a shift-invariant multiresolution representation of every source image. Experiments were carried out on real image sequences; the fusion results show that the method is feasible and efficient and performs better than other image fusion methods.

15.
Reduced depth-of-field (DOF) poses a problem in the light optical imaging system, since the objects present outside this zone appear blurry in the recorded image. The effective DOF of the sensor may be enhanced considerably without compromising the quality of the image by fusing images captured with different focused regions. This paper presents an image fusion technique suitable for combining multifocus images of a scene. The method employs morphological filters to select sharply focused regions from various images and then combines them together to reconstruct the image in which all the regions are properly focused. A performance measure based on image gradients is used to compare the results obtained by the proposed method with those obtained by other image fusion techniques.
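As a companion to the gradient-based comparison mentioned above, here is a small sketch of a common gradient-based quality proxy (the average gradient magnitude of the fused image); the exact measure used in the paper may differ.

```python
# Sketch of a simple gradient-based performance measure: the average gradient
# magnitude of an image, often used as a proxy for how sharp a fused result is.
import numpy as np

def average_gradient(img: np.ndarray) -> float:
    x = img.astype(np.float64)
    gy, gx = np.gradient(x)                        # row- and column-wise gradients
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```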

16.
The goal of image fusion is to accurately and comprehensively describe the complementary information of multiple source images in a new scene. Traditional fusion methods tend to produce side-effects such as artifacts and blurred edges. To solve these problems, a novel fusion algorithm based on robust principal component analysis (RPCA) and the guided filter is proposed. The guided filter preserves edges effectively and is often used to enhance images without distorting their details. Since edges and flat areas are treated differently by the guided filter, in this paper the sparse component of the source image is filtered by the guided filter to generate an enhanced image that contains the preserved edges and an enhanced background. The focused regions of the source images are then detected from the spatial frequency map of the difference image between the enhanced image and the corresponding source image. Finally, a morphological algorithm is used to obtain a precise fusion decision map. Experimental results show that the proposed method clearly improves fusion performance and outperforms current fusion methods.

17.
Multi-focus image fusion has emerged as an important research area in information fusion. It aims at increasing the depth-of-field by extracting focused regions from multiple partially focused images and merging them together to produce a composite image in which all objects are in focus. In this paper, a novel multi-focus image fusion algorithm is presented in which the task of detecting the focused regions is achieved using a Content Adaptive Blurring (CAB) algorithm. The proposed algorithm induces non-uniform blur in a multi-focus image depending on its underlying content. In particular, it analyzes the local image quality in a neighborhood and determines whether blur should be induced without losing image quality. In CAB, pixels belonging to the blurred regions receive little or no blur at all, whereas the focused regions receive significant blur. The absolute difference of the original image and the CAB-blurred image yields an initial segmentation map, which is further refined using morphological operators and graph-cut techniques to improve the segmentation accuracy. Quantitative and qualitative evaluations and comparisons with the current state-of-the-art on two publicly available datasets demonstrate the strength of the proposed algorithm.

18.
The wavelet decomposition has become an attractive tool for fusing multisensor images. Usually, the input images are decomposed with an orthogonal wavelet in order to extract features, which are combined through an appropriate fusion rule. The fused image is then reconstructed by applying the inverse wavelet transform. In this paper, we investigate the use of the nonorthogonal (or redundant) wavelet decomposition as an alternative approach for feature extraction. By using test and remote sensing images, various fusion rules are considered and the detailed comparison indicates the superiority of this approach compared to the standard orthogonal wavelet decomposition for image fusion.
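A minimal sketch of redundant-wavelet fusion using the stationary wavelet transform in PyWavelets, with averaged approximation coefficients and maximum-absolute-value selection for the detail coefficients; the wavelet, level, and fusion rule are illustrative choices, and the image sides must be divisible by 2**level for swt2.

```python
# Sketch of fusion with a redundant (undecimated / stationary) wavelet transform:
# approximation coefficients are averaged, detail coefficients are chosen by
# maximum absolute value.
import numpy as np
import pywt

def swt_fuse(img_a: np.ndarray, img_b: np.ndarray, wavelet="db2", level=2) -> np.ndarray:
    ca = pywt.swt2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.swt2(img_b.astype(np.float64), wavelet, level=level)
    fused = []
    for (aA, (aH, aV, aD)), (bA, (bH, bV, bD)) in zip(ca, cb):
        approx = 0.5 * (aA + bA)                              # average low-pass band
        details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                        for x, y in ((aH, bH), (aV, bV), (aD, bD)))
        fused.append((approx, details))
    return pywt.iswt2(fused, wavelet)
```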

19.
The key to image edge detection is to detect as many edges as possible while suppressing noise effectively. To this end, an edge detection method that combines the wavelet transform with a morphological difference algorithm is proposed. The source image is decomposed by the wavelet transform; edges in the high-frequency components are detected with the wavelet modulus-maxima algorithm, which extracts high-frequency edges effectively, while edges in the low-frequency component are detected with the morphological difference algorithm. The two edge maps are then fused with a suitable fusion rule. Experimental results show that the method outperforms either the wavelet modulus-maxima method or mathematical morphology used alone, is robust to noise, and yields continuous, sharp image edges.
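A rough sketch of the combination described above, with simplifications: one undecimated wavelet level (so all maps share the image size), simple thresholding of the detail modulus in place of full modulus-maxima tracking, and a morphological gradient of the low-pass band; the wavelet, threshold factor, and structuring-element size are assumptions.

```python
# Sketch of the fusion idea for edge detection: a wavelet detail modulus map for
# high-frequency edges plus a morphological gradient of the low-pass band for
# low-frequency edges, combined with a logical OR. Image sides must be even for
# a level-1 swt2.
import numpy as np
import pywt
from scipy.ndimage import grey_dilation, grey_erosion

def edge_map(img: np.ndarray, wavelet="db2", k: float = 1.5) -> np.ndarray:
    x = img.astype(np.float64)
    (cA, (cH, cV, cD)), = pywt.swt2(x, wavelet, level=1)
    modulus = np.sqrt(cH ** 2 + cV ** 2)                    # wavelet edge strength
    high = modulus > (modulus.mean() + k * modulus.std())   # high-frequency edges
    morph_grad = grey_dilation(cA, size=(3, 3)) - grey_erosion(cA, size=(3, 3))
    low = morph_grad > (morph_grad.mean() + k * morph_grad.std())
    return (high | low).astype(np.uint8)                    # fused edge map
```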
