Similar Documents
20 similar documents retrieved.
1.
A new image fusion method combining multiresolution analysis with block-based image partitioning is proposed. Besides retaining multiresolution-fused blocks as the final fused blocks at positions that may lie on the boundary between blurred and sharp regions of the source images, the method compares, block by block, the fused image produced by a multiresolution fusion method with the source images, and selects the source-image block most similar to the multiresolution-fused block as the final fused block. Experimental results show that the method effectively improves the fusion performance of commonly used multiresolution image fusion methods.
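A minimal sketch of the block-selection step described above, assuming pre-registered grayscale sources `src_a` and `src_b` and a fused image `mr_fused` already produced by some multiresolution method; the block size and relative tolerance are illustrative choices, not values from the paper.

```python
import numpy as np

def block_select_fusion(src_a, src_b, mr_fused, block=16, rel_tol=0.05):
    """Compare the multiresolution-fused image with each source, block by block.

    If one source block is close enough to the fused block (i.e. it lies fully in a
    sharp region), copy that source block; otherwise keep the fused block, which is
    assumed to sit on a blurred/sharp boundary."""
    out = mr_fused.astype(np.float64).copy()
    h, w = mr_fused.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            f = mr_fused[y:y+block, x:x+block].astype(np.float64)
            a = src_a[y:y+block, x:x+block].astype(np.float64)
            b = src_b[y:y+block, x:x+block].astype(np.float64)
            da = np.mean((f - a) ** 2)          # dissimilarity to source A
            db = np.mean((f - b) ** 2)          # dissimilarity to source B
            scale = np.mean(f ** 2) + 1e-9
            if min(da, db) / scale < rel_tol:   # one source block matches well
                out[y:y+block, x:x+block] = a if da <= db else b
    return out
```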

2.
《Information Fusion》2003,4(4):259-280
This paper presents an overview on image fusion techniques using multiresolution decompositions. The aim is twofold: (i) to reframe the multiresolution-based fusion methodology into a common formalism and, within this framework, (ii) to develop a new region-based approach which combines aspects of both object and pixel-level fusion. To this end, we first present a general framework which encompasses most of the existing multiresolution-based fusion schemes and provides freedom to create new ones. Then, we extend this framework to allow a region-based fusion approach. The basic idea is to make a multiresolution segmentation based on all different input images and to use this segmentation to guide the fusion process. Performance assessment is also addressed and future directions and open problems are discussed as well.
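As a concrete, much-simplified instance of the kind of pixel-level scheme this framework covers (not the authors' region-based method), the sketch below fuses two registered grayscale images with a Laplacian pyramid and a choose-max activity rule; the pyramid depth and the fusion rule are illustrative.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Decompose an image into band-pass detail levels plus a coarse residual."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)      # detail at this level
        cur = down
    pyr.append(cur)               # coarsest approximation
    return pyr

def fuse_pyramids(pa, pb):
    """Choose-max on detail levels, average on the coarsest approximation."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    return fused

def reconstruct(pyr):
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = cv2.pyrUp(cur, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return cur

# usage: fused = reconstruct(fuse_pyramids(laplacian_pyramid(img_a), laplacian_pyramid(img_b)))
```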

3.
陈武  靳海兵  吴政  王祥涛 《计算机仿真》2009,26(10):257-260
Exploiting the characteristics of the lifting wavelet transform and studying image fusion rules, a new fast multiresolution multispectral image fusion algorithm based on directional derivatives is proposed. First, the lifting wavelet transform is applied to obtain a multiresolution analysis of the source images to be fused. Then, with the wavelet-domain directional derivative as the selection criterion, fusion is performed at the corresponding levels of the multiresolution analysis to obtain the multiresolution analysis of the fused image. Finally, the fused image is reconstructed by the inverse lifting wavelet transform. The algorithm is validated on multispectral images and compared with a directional-contrast algorithm; the results show that the fused image preserves the information of the source images well and that the algorithm has a short running time.
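The lifting implementation and the exact directional-derivative criterion of the paper are not reproduced here; the sketch below uses an ordinary single-level `pywt.dwt2` and, per coefficient, keeps the source whose local gradient magnitude (a crude stand-in for the wavelet-domain directional derivative) is larger, with the wavelet name as an illustrative choice.

```python
import numpy as np
import pywt

def _activity(band):
    """Crude directional activity: gradient magnitude of a subband."""
    gy, gx = np.gradient(band.astype(np.float64))
    return np.hypot(gx, gy)

def dwt_fuse(img_a, img_b, wavelet="db2"):
    ca, (ha, va, da) = pywt.dwt2(img_a.astype(np.float64), wavelet)
    cb, (hb, vb, db) = pywt.dwt2(img_b.astype(np.float64), wavelet)
    fused_details = []
    for a, b in ((ha, hb), (va, vb), (da, db)):
        pick_a = _activity(a) >= _activity(b)   # keep the more "directional" coefficient
        fused_details.append(np.where(pick_a, a, b))
    fused_approx = 0.5 * (ca + cb)              # average the approximation coefficients
    return pywt.idwt2((fused_approx, tuple(fused_details)), wavelet)
```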

4.
Medical image fusion has been used to derive useful complementary information from multimodality imaging. The proposed methodology introduces a fusion approach for robust and automatic extraction of information from segmented images of different modalities. This fusion strategy is implemented in the multiresolution domain using a wavelet transform- and genetic algorithm-based search technique to extract maximum complementary information. The analysis of input images at multiple resolutions is able to extract finer details and improves the quality of the composite fused image. The proposed approaches are also independent of any manual marking or knowledge of fiducial points and start the fusion procedure automatically. The performance of the fusion scheme implemented on segmented brain images has been evaluated by computing mutual information as the similarity measure. Prior to the fusion process, images are segmented using different segmentation techniques such as fuzzy C-means and Markov random field models. Experimental results show that the Gibbs- and ICM-based segmentation approaches related to Markov random fields outperform fuzzy C-means and are therefore used prior to the GA-based fusion process for MR T1, MR T2 and MR PD images of a section of the human brain.
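The mutual-information similarity measure used for evaluation can be computed from a joint grey-level histogram; a minimal sketch, assuming two equally sized 8-bit grayscale images (the bin count is an illustrative choice).

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information I(A;B) from the joint grey-level histogram, in bits."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of A
    py = pxy.sum(axis=0, keepdims=True)          # marginal of B
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```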

5.
There is no such thing as ‘the best image fusion method’ in terms of both spectral and spatial fidelity. This fact encourages the researchers to develop more advanced approaches in order to optimally transfer the spatial details without distorting the colour content. Component substitution (CS)-based image fusion methods have been proven to produce sharper images but suffer from colour distortion. The aim of this study was to modify the CS-based Gram-Schmidt (GS) fusion method with the aid of the Genetic Algorithm (GA) to further improve its colour preservation performance. The GA was used to estimate a weight for each multispectral (MS) band. The obtained band weights were used to generate a low-resolution panchromatic (PAN) band, which plays a significant role in the performance of the GS method. The performance of the proposed approach was compared not only against the conventional GS, but also against widely-used CS-based, multiresolution analysis (MRA)-based and colour-based (CB) image fusion methods. The results indicated that the proposed GA-based approach produced spectrally and spatially superior results compared to the other methods used.
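The core of the modification is the weighted synthesis of a low-resolution PAN band from the MS bands; the sketch below estimates the band weights by least squares against a degraded PAN image instead of the paper's Genetic Algorithm, purely to show where the weights enter (function and variable names are assumptions).

```python
import numpy as np

def synthesize_low_res_pan(ms_bands, pan_degraded):
    """Estimate per-band weights w so that sum_k w_k * MS_k approximates the PAN
    image degraded to MS resolution, then build the synthetic low-resolution PAN.

    ms_bands: array of shape (n_bands, H, W); pan_degraded: array of shape (H, W)."""
    n_bands = ms_bands.shape[0]
    A = ms_bands.reshape(n_bands, -1).T              # each column is one MS band
    y = pan_degraded.ravel()
    weights, *_ = np.linalg.lstsq(A, y, rcond=None)  # stand-in for the GA search
    low_res_pan = np.tensordot(weights, ms_bands, axes=1)
    return weights, low_res_pan
```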

6.
Multispectral and multiresolution image fusion is important for many multimedia and remote sensing applications, such as video surveillance, medical imaging, and satellite imaging. For the commercial satellite “IKONOS”, spatial resolutions of high-resolution panchromatic (PAN) and low-resolution multispectral (MS) satellite images are 1 m and 4 m, respectively. To cope with color distortion and blocking artifacts in fused images, in this study, a multispectral and multiresolution image fusion approach using PSO is proposed. The pixels of fused images in the training set are classified into several categories based on the characteristics of low-resolution MS images. Then, the smooth parameters of spatial and spectral responses between the high-resolution PAN and low-resolution MS images are determined by PSO. All the pixels within each category are normalized by its own smooth parameter so that color distortion and blocking artifacts can be greatly reduced. Based on the experimental results obtained in this study, the overall visual quality of the fused images by the proposed approach is better than that by three comparison approaches, whereas the correlation coefficients, ρ_PAN, for the fused images by the proposed approach are greater than that by three comparison approaches.

7.
Images acquired by heterogeneous image sensors may provide complementary information about the scene, for instance, the visual image can provide personal identification information like the facial pattern while the infrared (IR) or millimeter wave image can detect the suspicious regions of concealed weapons. Usually, a technique, namely multiresolution pixel-level image fusion is applied to integrate the information from multi-sensor images. However, when the images are significantly different, the performance of the multiresolution fusion algorithms is not always satisfactory. In this study, a new strategy consisting of two steps is proposed. The first step is to use an unsupervised fuzzy k-means clustering to detect the concealed weapon from the IR image. The detected region is embedded in the visual image in the second step and this process is implemented with a multiresolution mosaic technique. Therefore, the synthesized image retains the quality comparable to the visual image while the region of the concealed weapon is highlighted and enhanced. The experimental results indicate the efficiency of the proposed approach. This material is based on part of the work carried out at the SPCR laboratory of Lehigh University and the work is partially supported by the U. S. Army Research Office under grant number DAAD19-00-1-0431. The content of the information does not necessarily reflect the position or the policy of the federal government, and no official endorsement should be inferred.
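A rough sketch of the two-step idea, assuming a registered IR/visual pair: plain (crisp) k-means from scikit-learn stands in for the paper's fuzzy k-means, and the multiresolution mosaic step is reduced to a simple feathered paste of the detected region; the cluster count and feathering width are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def detect_and_embed(ir, visual, k=3, feather_sigma=3.0):
    """Cluster IR intensities, take the hottest cluster as the suspicious region,
    and blend that region of the IR image into the visual image."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(
        ir.reshape(-1, 1).astype(np.float64)).reshape(ir.shape)
    # cluster with the highest mean IR response = candidate concealed-weapon region
    hottest = max(range(k), key=lambda c: ir[labels == c].mean())
    mask = gaussian_filter((labels == hottest).astype(np.float64), feather_sigma)
    return (1.0 - mask) * visual.astype(np.float64) + mask * ir.astype(np.float64)
```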

8.
Objective: In macro photography, the limited depth of field of macro lenses makes it difficult to capture the whole subject in focus in a single photograph. Obtaining an all-in-focus picture therefore requires taking multiple macro photographs with different focus settings and fusing them. Method: Traditional macro-photo fusion methods generally assume that the images to be fused are already registered, and they do not consider automatic acquisition of the macro images. A multi-focus image acquisition and fusion system for macro photography is therefore proposed, consisting of three parts. The first part is a macro image capture device, hardware that can photograph an object at different focus distances with high precision. The second part is an invariant-feature-based image registration component that automatically registers and aligns the macro images taken at multiple focus settings. The third part is an image-pyramid-based multi-focus fusion component that fuses the aligned macro photographs so that the composite image has a larger depth of field. This component extends pyramid-based fusion with a filtering-based weight computation strategy; combining this weight computation with the image pyramid yields a multiresolution multi-focus fusion method. Results: Several sets of experimental data were collected with the designed capture device to verify the correctness of the hardware and software design, and the proposed system was evaluated both subjectively and objectively. Subjectively, the macro images synthesized by the system not only have sufficient depth of field but also clearly render the fine details of the object at high resolution. Objectively, quantitative comparison with macro images synthesized by other methods shows that the system is best under all three evaluation criteria of standard deviation, information entropy and average gradient. Conclusion: The experimental results show that the system is flexible and efficient; it can automatically acquire, register and fuse multiple macro images with different focus settings, and its fusion quality is comparable to that of other methods.
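A small sketch of the filtering-based weight idea in the fusion component only, without the pyramid and the capture/registration hardware: per-pixel focus is measured by local Laplacian energy, the weights are smoothed by a Gaussian filter, and the aligned exposures are blended accordingly (the smoothing width is an illustrative parameter, not the paper's).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def focus_weight_blend(images, sigma=5.0, eps=1e-9):
    """Blend an aligned grayscale focus stack with filter-smoothed focus weights."""
    stack = np.stack([img.astype(np.float64) for img in images])
    # local high-frequency energy as a per-pixel focus measure, smoothed by a filter
    energy = np.stack([gaussian_filter(laplace(img) ** 2, sigma) for img in stack])
    weights = energy / (energy.sum(axis=0, keepdims=True) + eps)
    return (weights * stack).sum(axis=0)
```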

9.
A fast algorithm for weighted multiresolution image fusion (cited 7 times)
Fast and reliable fusion algorithms are the key to making image fusion technology practical, so research on fast fusion algorithms with good fusion performance is particularly important. Traditional weighted multiresolution image fusion methods produce good fusion results, but their computational complexity is unsatisfactory. To reduce the complexity, a fast weighted multiresolution image fusion algorithm is proposed, based on a defined correlation signal-strength ratio. Compared with the traditional weighted multiresolution fusion method, the algorithm has lower computational complexity. Experimental results show that, like the traditional weighted multiresolution method, it produces good fusion results.
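The paper's fast rule based on the correlation signal-strength ratio is not reproduced here; the sketch below shows the conventional weighted multiresolution combination it is meant to accelerate, in the usual salience/match form (local energies give salience, a local match measure switches between selection and weighted averaging); the window size and match threshold are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def weighted_combine(band_a, band_b, win=5, thresh=0.75):
    """Salience/match weighted combination of two corresponding detail bands."""
    a = band_a.astype(np.float64)
    b = band_b.astype(np.float64)
    ea = uniform_filter(a * a, win)                  # local energy (salience) of A
    eb = uniform_filter(b * b, win)                  # local energy (salience) of B
    match = 2.0 * uniform_filter(a * b, win) / (ea + eb + 1e-9)
    # low match -> pure selection of the stronger band; high match -> weighted average
    w_min = np.where(match < thresh, 0.0,
                     0.5 - 0.5 * (1.0 - match) / (1.0 - thresh))
    w_max = 1.0 - w_min
    wa = np.where(ea >= eb, w_max, w_min)
    return wa * a + (1.0 - wa) * b
```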

10.
Multiresolution color image segmentation (cited 12 times)
Image segmentation is the process by which an original image is partitioned into some homogeneous regions. In this paper, a novel multiresolution color image segmentation (MCIS) algorithm which uses Markov random fields (MRF's) is proposed. The proposed approach is a relaxation process that converges to the MAP (maximum a posteriori) estimate of the segmentation. The quadtree structure is used to implement the multiresolution framework, and the simulated annealing technique is employed to control the splitting and merging of nodes so as to minimize an energy function and therefore, maximize the MAP estimate. The multiresolution scheme enables the use of different dissimilarity measures at different resolution levels. Consequently, the proposed algorithm is noise resistant. Since the global clustering information of the image is required in the proposed approach, the scale space filter (SSF) is employed as the first step. The multiresolution approach is used to refine the segmentation. Experimental results of both the synthesized and real images are very encouraging. In order to evaluate experimental results of both synthesized images and real images quantitatively, a new evaluation criterion is proposed and developed.
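The paper's quadtree multiresolution scheme with simulated annealing is more involved than can be sketched here; the snippet below only illustrates the underlying MAP idea on a single resolution for a grayscale image, using iterated conditional modes (ICM) with a Gaussian data term and a Potts smoothness prior (the class count, β and the iteration count are illustrative assumptions).

```python
import numpy as np

def icm_potts_segmentation(img, n_classes=3, beta=1.5, n_iter=10):
    """Approximate MAP segmentation of a grayscale image by iterated conditional modes."""
    img = img.astype(np.float64)
    # initial labelling by quantile thresholding of the intensities
    edges = np.quantile(img, np.linspace(0.0, 1.0, n_classes + 1))
    labels = np.clip(np.digitize(img, edges[1:-1]), 0, n_classes - 1)
    for _ in range(n_iter):
        # class statistics for the Gaussian data term (guard against empty classes)
        means, stds = [], []
        for c in range(n_classes):
            vals = img[labels == c]
            means.append(vals.mean() if vals.size else img.mean())
            stds.append((vals.std() if vals.size else img.std()) + 1e-6)
        means, stds = np.array(means), np.array(stds)
        data = ((img[None] - means[:, None, None]) ** 2
                / (2.0 * stds[:, None, None] ** 2) + np.log(stds)[:, None, None])
        # Potts smoothness term: disagreeing 4-neighbours for each candidate class
        pad = np.pad(labels, 1, mode="edge")
        neigh = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1], pad[1:-1, :-2], pad[1:-1, 2:]])
        disagree = np.stack([(neigh != c).sum(axis=0) for c in range(n_classes)])
        labels = np.argmin(data + beta * disagree, axis=0)
    return labels
```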

11.
We present in this paper a new image fusion scheme called “Attention Fusion” (ATF). This scheme, developed in a multiresolution space, uses an attention map to define the level of activity of each coefficient and thus to derive the fusion rules. The multiresolution decomposition is done by using the dual-tree complex wavelet transform. The fusion method assumes that the highest attention level among the input images to be fused should be one of the main factors in deciding which information is put into the fused image. The performance of the method is tested by an axiomatic score function. Two sets of experiments have been carried out: (a) fusion on multi-focus images and (b) fusion on multi-band images. In the first experiment, the proposed method has been compared with the PYR, DWT, CWT and SR methods, and on this kind of image the ATF method performs well. Results in the second experiment, with multi-band images, demonstrate that the ATF method has the best performance (across several sets of images) in comparison with the CWT, PYR and DWT methods, all of them also based on a multiresolution decomposition.

12.
A feature-based fusion method for remote-sensing image data (cited 2 times)
Based on wavelet multiresolution analysis theory, a new fusion method for remote-sensing image data is proposed. It fuses the subband data at different scales using maximum-region-variance and consistency criteria, and combines the corresponding baseband data by weighted averaging. Fusion results for black-and-white aerial imagery with TM imagery and with SAR imagery are presented. Comparison with a pixel-averaging fusion method demonstrates that the method has good robustness and adaptability.
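A minimal sketch in the same spirit, assuming two registered grayscale inputs: subbands of a multilevel `pywt` decomposition are selected by the larger local (regional) variance, the selection map is cleaned with a majority-vote consistency filter, and the approximation bands are combined by weighted averaging; the window size, level count and base-band weight are illustrative.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def _local_variance(band, win=5):
    m = uniform_filter(band, win)
    return uniform_filter(band * band, win) - m * m

def region_variance_fuse(img_a, img_b, wavelet="db2", levels=3, w_base=0.5):
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=levels)
    fused = [w_base * ca[0] + (1.0 - w_base) * cb[0]]        # weighted base band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        level = []
        for a, b in ((ha, hb), (va, vb), (da, db)):
            pick_a = _local_variance(a) >= _local_variance(b)
            # consistency check: majority vote in a 3x3 neighbourhood
            pick_a = uniform_filter(pick_a.astype(np.float64), 3) > 0.5
            level.append(np.where(pick_a, a, b))
        fused.append(tuple(level))
    return pywt.waverec2(fused, wavelet)
```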

13.
Currently available image fusion techniques applied to the merging of fine resolution panchromatic and multispectral images are still not able to minimize colour distortion and maximize spatial detail. In this study, a new fusion method, based on Bidimensional Empirical Mode Decomposition (BEMD), is proposed. Unlike other multiresolution analysis tools, such as the discrete wavelet transform (DWT), which normally examines only horizontal, vertical and diagonal orthonormal details at each decomposed scale, the BEMD produces a fully two-dimensional decomposition of the panchromatic and multispectral images, based purely on spatial relationships between the extrema of the image. These are decomposed into a certain level of Intrinsic Mode Functions (IMFs) and residual images with the same number of columns and rows as the original image. In consequence, by injecting all the IMF images from the panchromatic image into the residue of the corresponding multispectral image, the fusion image may be reconstructed. The fusion results are evaluated and compared with other popular methods in terms of both the visual examination and the quantitative assessment of the merged images. Preliminary results show that BEMD is optimal and provides a delicate balance between spectral information preservation and enhancement of spatial detail.

14.
In this paper, a simple and efficient multi-focus image fusion approach is proposed. The multi-focus images are all obtained from the same scene with different focus settings, so the images can be segmented into two kinds of regions, out of focus and in focus. This directly leads to a region-based fusion, i.e., finding all of the in-focus regions in the source images and merging them into a composite image. This poses the question of how to locate the in-focus regions in the input images. Considering that the details and scales differ between in-focus and out-of-focus regions, a blurring measure is used in this paper to locate the regions according to their degree of blur. This new fusion method can significantly reduce the amount of distortion artifacts and the loss of contrast information that are usually observed in images fused by conventional schemes. The fusion performance of the proposed method has been evaluated through informal visual inspection and objective fusion performance measurements, and the results show the advantages of the approach compared to conventional fusion approaches.
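A compact sketch of the block-based idea, assuming two registered multi-focus grayscale images; the variance of the Laplacian serves as the focus/blur measure per block (the paper's own blurring measure is not reproduced), and the sharper block is copied into the composite.

```python
import numpy as np
from scipy.ndimage import laplace

def multifocus_block_fuse(img_a, img_b, block=32):
    """Pick, block by block, the source image that is more in focus."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    la, lb = laplace(a), laplace(b)
    out = np.empty_like(a)
    h, w = a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            sl = (slice(y, min(y + block, h)), slice(x, min(x + block, w)))
            # higher Laplacian variance = more high-frequency detail = in focus
            out[sl] = a[sl] if la[sl].var() >= lb[sl].var() else b[sl]
    return out
```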

15.
Recently, multi-modal biometric fusion techniques have attracted increasing attention, as they can improve the recognition performance in some difficult biometric problems. The small-sample biometric recognition problem is such a research difficulty in real-world applications. So far, most research work on fusion techniques has been done at the highest fusion level, i.e. the decision level. In this paper, we propose a novel fusion approach at the lowest level, i.e. the image pixel level. We first combine two kinds of biometrics: the face feature, which is representative of contactless biometrics, and the palmprint feature, which is a typical contacting biometric. We perform the Gabor transform on face and palmprint images and combine them at the pixel level. The correlation analysis shows that there is very small correlation between their normalized Gabor-transformed images. This paper also presents a novel classifier, KDCV-RBF, to classify the fused biometric images. It extracts the image discriminative features using a kernel discriminative common vectors (KDCV) approach and classifies the features by using a radial basis function (RBF) network. As test data, we take two of the largest public face databases (AR and FERET) and a large palmprint database. The experimental results demonstrate that the proposed biometric fusion recognition approach is a rather effective solution for the small-sample recognition problem.
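A rough sketch of the Gabor-transform-and-combine step only (the KDCV-RBF classifier is not shown), assuming a face image and a palmprint image already cropped to the same size; the filter-bank parameters and the simple normalize-and-average rule are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def gabor_magnitude(img, n_orient=4, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    """Mean magnitude response over a small bank of Gabor orientations."""
    img = img.astype(np.float32)
    acc = np.zeros_like(img)
    for i in range(n_orient):
        theta = i * np.pi / n_orient
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0)
        acc += np.abs(cv2.filter2D(img, cv2.CV_32F, kern))
    return acc / n_orient

def pixel_level_fuse(face, palm):
    """Normalise the two Gabor-transformed images and combine them pixel-wise."""
    gf = gabor_magnitude(face)
    gp = gabor_magnitude(palm)
    gf = (gf - gf.mean()) / (gf.std() + 1e-9)
    gp = (gp - gp.mean()) / (gp.std() + 1e-9)
    return 0.5 * (gf + gp)          # fused image that would be fed to the classifier
```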

16.
Image fusion is an important component of digital image processing and quantitative image analysis. Image fusion is the technique of integrating and merging information from different remote sensors to achieve refined or improved data. A number of fusion algorithms have been developed in the past two decades, and most of these methods are efficient for applications, especially for same-sensor and single-date images. However, colour distortion is a common problem for multi-sensor or multi-date image fusion. In this study, a new image fusion method based on regression kriging is presented. Regression kriging takes into consideration the correlation between the response variable (i.e., the image to be fused) and the predictor variables (i.e., the images with finer spatial resolution), the spatial autocorrelation among pixels in the predictor images, and unbiased estimation with minimized variance. Regression kriging is applied to fuse multi-temporal (e.g., Ikonos, QuickBird, and OrbView-3) images. The significant properties of image fusion using regression kriging are spectral preservation and relatively simple procedures. The qualitative assessments indicate that there is no apparent colour distortion in the fused images, which coincides with the quantitative checks showing that the fused images are highly correlated with the initial data and that the per-pixel differences are too small to be considered significant errors. Besides a basic comparison of image fusion between a wavelet-based approach and regression kriging, general comparisons with other published fusion algorithms indicate that regression kriging is comparable with other sophisticated techniques for multi-sensor and multi-date image fusion.

17.
While many studies in the field of image fusion of remotely sensed data aim towards deriving new algorithms for visual enhancement, there is little research on the influence of image fusion on other applications. One major application in earth science is land cover mapping. The concept of sensors with multiple spatial resolutions provides a potential for image fusion. It minimises errors of geometric alignment and atmospheric or temporal changes.

This study focuses on the influence of image fusion on spectral classification algorithms and their accuracy. A Landsat 7 ETM+ image was used, where six multispectral bands (30 m) were fused with the corresponding 15 m panchromatic channel. The fusion methods comprise rather common techniques like Brovey, hue‐saturation‐value transform, and principal component analysis, and more complex approaches, including adaptive image fusion, multisensor multiresolution image fusion technique, and wavelet transformation. Image classification was performed with supervised methods, e.g. maximum likelihood classifier, object‐based classification, and support vector machines. The classification was assessed with test samples, a clump analysis, and techniques accounting for classification errors along land cover boundaries. It was found that the adaptive image fusion approach shows best results with low noise content. It resulted in a major improvement when compared with the reference, especially along object edges. Acceptable results were achieved by wavelet, multisensor multiresolution image fusion, and principal component analysis. Brovey and hue‐saturation‐value image fusion performed poorly and cannot be recommended for classification of fused imagery.
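For reference, the Brovey transform mentioned above is the simplest of the compared fusion methods; a minimal sketch, assuming the MS bands have already been resampled to the PAN grid (not the study's implementation).

```python
import numpy as np

def brovey_fuse(ms_bands, pan, eps=1e-9):
    """Brovey transform: scale each MS band by PAN / intensity.

    ms_bands: array of shape (n_bands, H, W), resampled to the PAN grid;
    pan: array of shape (H, W)."""
    intensity = ms_bands.sum(axis=0) + eps          # simple intensity = band sum
    return ms_bands * (pan / intensity)[None, :, :]
```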

18.
Fusing medical images of different modalities can provide new diagnostic information for clinical use. Building on the grayscale image fusion method proposed by V. Petrovic, an improved multiscale image fusion algorithm is presented. First, following V. Petrovic's method, the gradient detail images of the source images are fused and decomposed to obtain a multiscale pyramid representation of the source images; the two low-pass components in the pyramid are then combined by weighted averaging, and the inverse transform yields a grayscale fused image. This grayscale fused image is then taken as the common component of the two source images and, using Toet's pseudo-colour fusion method, the final colour fused image is obtained. Because the algorithm combines the advantages of grayscale fusion and colour fusion, displaying grayscale differences in different colours, the fused image is richer in colour and contains more detail. The algorithm is applied to CT and MRI image fusion, and simulation results confirm its effectiveness.

19.
In the evaluation of image fusion methods, spatially degraded multispectral (MS) and panchromatic (PAN) images are frequently employed as test data sets. The degradation is implemented using either averaging or a combination of low-pass filtering and decimation. However, the decimation operation causes the degraded MS and PAN images to be slightly misaligned with each other and with the original MS image, which acts as the reference image in the fusion evaluation. In this study, two image fusion methods based on decimated and undecimated multiresolution analysis techniques were evaluated on three popular types of test data sets consisting of spatially degraded IKONOS MS and PAN images. In the experiment, image misalignments caused by decimation significantly influenced the quality of fused images and resulted in untrustworthy performances of the image fusion methods being evaluated. It was demonstrated that unlike aligned MS and PAN images in actual image fusion, misaligned MS and PAN images in test data sets are inappropriate for image fusion evaluation.
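A small sketch of the two degradation protocols discussed, for a degradation factor of 4 and assuming a grayscale array; the comments spell out why plain filter-and-decimate shifts the sampling grid relative to block averaging (the Gaussian width is an illustrative choice).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_by_averaging(img, factor=4):
    """4x4 block averaging: output pixel (i, j) is centred at (1.5, 1.5) + 4*(i, j)."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def degrade_by_filter_decimate(img, factor=4, sigma=1.6):
    """Low-pass filter then keep every 4th sample: output pixel (i, j) is centred at
    (4*i, 4*j), i.e. offset by 1.5 original pixels from the block-averaged grid.
    This sub-pixel misalignment is what the study warns about."""
    return gaussian_filter(img.astype(np.float64), sigma)[::factor, ::factor]
```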

20.
Multimodal medical image fusion is an important task for the retrieval of complementary information from medical images. Shift sensitivity, lack of phase information and poor directionality of real valued wavelet transforms motivated us to use complex wavelet transform for fusion. We have used Daubechies complex wavelet transform (DCxWT) for image fusion which is approximately shift invariant and provides phase information. In the present work, we have proposed a new multimodal medical image fusion using DCxWT at multiple levels which is based on multiresolution principle. The proposed method fuses the complex wavelet coefficients of source images using maximum selection rule. Experiments have been performed over three different sets of multimodal medical images. The proposed fusion method is visually and quantitatively compared with wavelet domain (Dual tree complex wavelet transform (DTCWT), Lifting wavelet transform (LWT), Multiwavelet transform (MWT), Stationary wavelet transform (SWT)) and spatial domain (Principal component analysis (PCA), linear and sharp) image fusion methods. The proposed method is further compared with Contourlet transform (CT) and Nonsubsampled contourlet transform (NSCT) based image fusion methods. For comparison of the proposed method, we have used five fusion metrics, namely entropy, edge strength, standard deviation, fusion factor and fusion symmetry. Comparison results prove that performance of the proposed fusion method is better than any of the above existing fusion methods. Robustness of the proposed method is tested against Gaussian, salt & pepper and speckle noise and the plots of fusion metrics for different noise cases established the superiority of the proposed fusion method.
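Three of the five evaluation metrics listed (entropy, standard deviation and fusion factor) can be written down compactly; a sketch assuming 8-bit grayscale sources `a`, `b` and fused image `f` (edge strength and fusion symmetry are omitted, and the bin counts are illustrative).

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    p, _ = np.histogram(img.ravel(), bins=bins, range=(0, 255))
    p = p[p > 0] / p.sum()
    return float(-np.sum(p * np.log2(p)))

def standard_deviation(img):
    return float(img.astype(np.float64).std())

def mutual_information(x, y, bins=64):
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_factor(a, b, f):
    """Fusion factor = MI(A;F) + MI(B;F): how much each source contributes to F."""
    return mutual_information(a, f) + mutual_information(b, f)
```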
