Similar Documents
18 similar documents found (search time: 187 ms)
1.
Infrared Technology (《红外技术》), 2016, (10): 832-837
Fusing a hyperspectral image of high spectral resolution with a panchromatic image of high spatial resolution combines the advantages of both image types. An effective approach is to fuse the panchromatic image with selected hyperspectral bands separately and then synthesize a false-color image. A multi-resolution analysis framework is introduced: first, the source images are decomposed by wavelet transform into low- and high-frequency coefficients; then, average-gradient and edge features are extracted as the respective criteria for fusing the two types of coefficients; finally, the fused coefficients are reconstructed into per-band fused images by the inverse wavelet transform, and a false-color composite yields the final color fusion image. Experimental results show that, compared with classical methods such as PCA and IHS, the proposed method effectively preserves spectral information and improves the spatial resolution of the fused image while maintaining fusion efficiency, and also outperforms the other fusion methods in standard deviation, average gradient, information entropy, and other metrics.
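The decompose-select-reconstruct pipeline above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: it uses a one-level Haar transform and a per-coefficient max-absolute-value selection rule as a stand-in for the average-gradient and edge-feature criteria, and it fuses a single band.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns (approximation, (LH, HL, HH))."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return (a + b + c + d) / 4, ((a + b - c - d) / 4,
                                 (a - b + c - d) / 4,
                                 (a - b - c + d) / 4)

def ihaar2d(ll, details):
    """Inverse of haar2d: rebuild the image from approximation and details."""
    lh, hl, hh = details
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

rng = np.random.default_rng(0)
pan = rng.random((8, 8))    # stand-in panchromatic band
band = rng.random((8, 8))   # stand-in hyperspectral band
ll_p, hi_p = haar2d(pan)
ll_b, hi_b = haar2d(band)
# keep the spectral (hyperspectral) approximation; per coefficient, take the
# stronger high-frequency detail of the two sources
fused_hi = tuple(np.where(np.abs(p) >= np.abs(b), p, b)
                 for p, b in zip(hi_p, hi_b))
fused = ihaar2d(ll_b, fused_hi)
```

In a full implementation, this per-band fusion would be repeated for each selected hyperspectral band before the false-color composite is formed.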

2.
A multispectral and panchromatic image fusion algorithm using regional mutual information   Cited: 1 (self: 0, other: 1)
To improve the quality of multispectral and panchromatic image fusion, a fusion algorithm using regional mutual information is proposed. The multispectral image is first transformed into HSV color space, and the V component is segmented using watershed segmentation followed by region merging, with Euclidean spectral distance as the merging criterion, yielding a region-segmentation map. The nonsubsampled contourlet transform (NSCT) is then applied to the V component of the multispectral image and to the panchromatic image for multi-resolution decomposition; the segmentation map is projected onto the panchromatic image, and the decomposition coefficients are fused by computing the mutual information between corresponding regions, giving the coefficients of the fused image. Finally, the fused image is reconstructed by the inverse NSCT. Comparative experiments show that the algorithm fully preserves the spectral information of the multispectral image while injecting as much panchromatic detail as possible, effectively enhancing the edge features of the multispectral image.
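The regional mutual information used as the fusion criterion above can be estimated from a joint histogram of two co-registered regions. A minimal numpy sketch (the bin count and the toy regions are illustrative assumptions, not values from the paper):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Mutual information between two image regions via a joint histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y, shape (1, bins)
    nz = pxy > 0                          # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
region = rng.random((32, 32))
mi_self = mutual_information(region, region)                  # high: identical regions
mi_noise = mutual_information(region, rng.random((32, 32)))   # near zero: independent
```

A region of the panchromatic image that shares high mutual information with the corresponding multispectral region would, under a rule like the paper's, contribute more of its decomposition coefficients to the fused result.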

3.
A selective remote-sensing image fusion algorithm based on local features of Contourlet coefficients   Cited: 2 (self: 0, other: 2)
To significantly improve the spatial resolution of the fused multispectral image while preserving its original spectral characteristics as much as possible, a selective remote-sensing image fusion method based on local features of Contourlet-transform coefficients is proposed. Since the low- and high-frequency parts serve different purposes when fusing multispectral and panchromatic images, different local feature measures of the Contourlet coefficient arrays are computed over a sliding window, for the approximation component and for the detail components at each level and direction. Appropriate criteria are then chosen to fuse the approximation and detail components selectively in the Contourlet coefficient domain with different strategies, and the fused high-resolution multispectral image is obtained through the inverse Contourlet and intensity-hue-saturation (IHS) transforms. Fusion experiments with Landsat TM multispectral and SPOT panchromatic images show that the proposed algorithm significantly improves spatial resolution while preserving the spectral characteristics of the original image well, outperforming traditional fusion methods.

4.
A fusion algorithm for multispectral and panchromatic images   Cited: 4 (self: 1, other: 4)
A new fusion algorithm for multispectral and panchromatic images based on independent component analysis (ICA) is proposed. For the overlapping region of the two images, ICA is used to remove the high-order data redundancy in the remote-sensing images; the separated components are then superimposed to obtain the final fusion result. This raises the information content and signal-to-noise ratio of the fused image, which combines the high resolution of the panchromatic image with the fine description of distinctive target features unique to the multispectral image, so that the fused image contains richer and more useful information.

5.
To address the problem of preserving spatial information in images, a multispectral image fusion algorithm based on independent component analysis is proposed. The three RGB bands of the multispectral image and the near-infrared band, four bands in total, undergo an independent component analysis transform; the components are combined by weighted averaging to obtain a principal image, which is then averaged with the panchromatic image using weights to produce a new image. This image is finally restored to the four bands to obtain the fused result.

6.
To improve the fusion quality of multispectral and panchromatic images, various filters and fusion algorithms were studied, and a region-based fusion method using multi-level guided filtering is proposed. The multispectral image is interpolated; the panchromatic image is partitioned into regions with an improved watershed algorithm, and the partition is mapped onto each multispectral band. The multispectral and panchromatic images are then filtered separately with multi-level guided filters to extract their respective detail information. Finally, according to the magnitudes of the local correlation coefficient and the fourth-order correlation coefficient, which measure the relationship between the panchromatic and multispectral images in each region, the detail information is fused region by region to obtain the fused multispectral image. Results show that the algorithm fully preserves the spectral information of the multispectral image while injecting as much panchromatic detail as possible, successfully improving the fusion result.
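The detail-extraction step above relies on guided filtering. A minimal single-level numpy sketch (the multi-level cascade, watershed regions, and correlation-based region rules of the paper are omitted; window radius and eps are illustrative):

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window with edge replication, via cumsums."""
    p = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))           # zero row/col so c[i,j] = sum p[:i,:j]
    h, w = img.shape
    k = 2 * r + 1
    return (c[k:k+h, k:k+w] - c[0:h, k:k+w]
            - c[k:k+h, 0:w] + c[0:h, 0:w]) / k**2

def guided_filter(I, p, r=2, eps=1e-2):
    """Edge-preserving smoothing of p guided by I (basic guided-filter form)."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov = box_mean(I * p, r) - mI * mp
    var = box_mean(I * I, r) - mI * mI
    a = cov / (var + eps)                     # local linear coefficients
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

rng = np.random.default_rng(0)
pan = rng.random((16, 16))                    # stand-in panchromatic band
# detail layer = original minus its guided-filter smoothing
detail = pan - guided_filter(pan, pan)
```

In the paper's scheme, detail layers like `detail` would be computed at several filter levels for both sources and then fused per region.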

7.
Considering the characteristics of remote-sensing image fusion, a fusion method based on a multi-stream fusion generative adversarial network combined with a radiometric index is proposed. Feature maps are extracted from the initial multispectral and panchromatic images according to specific rules; generator sub-networks refine each input source image, the features are stacked in the feature domain, and the generator's main network performs the fusion. Based on the band characteristics of multispectral and panchromatic images in remote sensing, a modulation transfer function (MTF) is introduced into the discriminator to assess the spectral and spatial-structure information of the fused image. To evaluate the method's effectiveness, a visual analysis is carried out and objective metrics are compared against other algorithms. Experimental results show that the method outperforms the other algorithms in both visual quality and objective evaluation.

8.
Fusion of multispectral and panchromatic images based on NSCT   Cited: 2 (self: 0, other: 2)
翟军涛, 那彦. Laser & Infrared (《激光与红外》), 2008, 38(3): 282-284
A fusion method for multispectral and panchromatic images based on the nonsubsampled Contourlet transform is proposed. After an IHS transform of the multispectral image, the I component and the high-resolution panchromatic image are each decomposed by the nonsubsampled Contourlet transform (NSCT); the approximation component and the detail components at each pyramid level and direction are then fused separately with the fusion rules proposed in the paper, and the inverse NSCT yields a new I component, which is restored to RGB space together with the H and S components, producing the fused high-resolution multispectral color image. Fusion experiments with a set of multispectral and panchromatic images show that both the visual quality and the statistical metrics of the fused image are superior to the traditional IHS fusion method, the wavelet fusion method, and the Contourlet-transform method.
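The IHS backbone of the scheme above (substituting pan-derived detail into the intensity channel) can be sketched without the NSCT step. A minimal numpy illustration using the linear triangle-model intensity I = (R + G + B) / 3 and a simple moment match in place of histogram matching; the NSCT-domain fusion of I is replaced here by direct substitution:

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Fast-IHS-style fusion: inject (matched pan - intensity) into each band.

    ms:  (H, W, 3) multispectral image with values in [0, 1]
    pan: (H, W) co-registered panchromatic band
    """
    intensity = ms.mean(axis=2)
    # moment-match pan to the intensity so mean and std agree
    pan_m = ((pan - pan.mean())
             * (intensity.std() / (pan.std() + 1e-12)) + intensity.mean())
    return np.clip(ms + (pan_m - intensity)[..., None], 0.0, 1.0)

rng = np.random.default_rng(0)
ms = rng.random((8, 8, 3))      # stand-in multispectral image
pan = rng.random((8, 8))        # stand-in panchromatic band
fused = ihs_fuse(ms, pan)
```

The additive form F_k = MS_k + (I' - I) is algebraically equivalent to the classic substitute-and-invert IHS fusion for the linear intensity model, which is why it needs no explicit color-space inversion.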

9.
A new adaptive multi-sensor image fusion algorithm based on the Curvelet transform is proposed. The panchromatic and multispectral images are decomposed by the Curvelet transform, and different fusion rules are chosen according to the characteristics of each frequency band: the low-frequency coefficients use an adaptive rule with weights derived from regional energy, while the high-frequency coefficients use an adaptive rule based on regional features. The fused image is obtained by reconstruction. Comparison with other fusion algorithms shows that the proposed algorithm is an effective and feasible image fusion method.
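The low-frequency rule above (adaptive weights from regional energy) can be sketched independently of the Curvelet transform itself, which has no compact numpy implementation. A minimal sketch of the weighting rule applied to two low-frequency bands; the window size is an illustrative assumption:

```python
import numpy as np

def energy_weighted_fuse(low_a, low_b, win=3):
    """Fuse two low-frequency bands with weights from local region energy."""
    r = win // 2
    pad_a = np.pad(low_a ** 2, r, mode='edge')
    pad_b = np.pad(low_b ** 2, r, mode='edge')
    h, w = low_a.shape
    ea = np.zeros_like(low_a)
    eb = np.zeros_like(low_b)
    for di in range(win):                 # sliding-window energy sums
        for dj in range(win):
            ea += pad_a[di:di + h, dj:dj + w]
            eb += pad_b[di:di + h, dj:dj + w]
    wa = ea / (ea + eb + 1e-12)           # adaptive weight of source a
    return wa * low_a + (1.0 - wa) * low_b

rng = np.random.default_rng(0)
low_pan = rng.random((8, 8))              # stand-in pan low-frequency band
low_ms = rng.random((8, 8))               # stand-in multispectral low-frequency band
fused_low = energy_weighted_fuse(low_pan, low_ms)
```

The source whose neighborhood carries more energy dominates locally, which is the intent of the region-energy weighting described in the abstract.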

10.
Pigment classification and identification is the basis for the conservation and restoration of ancient murals, and multispectral imaging can acquire and analyze spectral image data of mural pigments quickly and non-destructively. In conventional convolutional-neural-network feature extraction, the successive convolution and pooling operations lose part of the feature information of the mural multispectral images, so image details cannot be reconstructed and the boundaries of the classification map are not smooth. To address this, a 3-D dilated-convolution residual neural network with multi-scale feature fusion is proposed for pigment classification of mural multispectral images. First, a dilated structure is introduced into the convolution kernels to enlarge the receptive field and extract information at different scales, avoiding the feature loss caused by pooling. Second, feature maps at different scales are combined by feature fusion, enriching the structural hierarchy of the multi-scale features. Finally, a residual learning module is introduced to avoid the vanishing-gradient problem caused by deepening the network and to reconstruct complete edge information. Experimental results show that on a simulated mural multispectral image dataset the method reaches an overall accuracy of 98.87% and an average accuracy of 96.89%; compared with the control groups, it not only classifies more accurately but also produces classification maps with clearer boundaries.

11.
Interpreting remote sensing images by combining manual visual interpretation and computer automatic classification and recognition is an important application of human–computer interaction (HCI) in the field of remote sensing. Remote sensing images with high spatial resolution and high spectral resolution are an important basis for automatic classification and recognition. However, such images are often difficult to obtain directly. To solve this problem, a novel pan-sharpening method via multi-scale and multiple deep neural networks is presented. First, the non-subsampled contourlet transform (NSCT) is employed to decompose the high resolution (HR)/low resolution (LR) panchromatic (PAN) images into high frequency (HF)/low frequency (LF) images, respectively. For pan-sharpening, the training sets are sampled only from the HF images. Then, the DNN is utilized to learn the features of the HF images in different directions of the HR/LR PAN images, trained on image patch pairs sampled from the HF images of the HR/LR PAN images. Moreover, in the fusion stage, NSCT is also employed to decompose the principal component of the initially amplified LR multispectral (MS) image obtained by the transformation of adaptive PCA (A-PCA). The HF image patches of the LR MS image, as input data of the trained DNN, go through forward propagation to obtain the output HR MS image. Finally, the output HF sub-band images and the original LF sub-band images of the LR MS image are fused into a new sub-band set. The inverse transformations of NSCT and A-PCA, together with residual compensation, are conducted to obtain the pan-sharpened HR MS image. The experimental results show that our method outperforms other well-known pan-sharpening methods.

12.
Image fusion is a technical method to integrate the spatial details of the high-resolution panchromatic (HRP) image and the spectral information of low-resolution multispectral (LRM) images to produce high-resolution multispectral images. The most important point in image fusion is enhancing the spatial details of the HRP image and simultaneously maintaining the spectral information of the LRM images. This implies that the physical characteristics of a satellite sensor should be considered in the fusion process. Also, to fuse massive satellite images, the fusion method should have low computation costs. In this paper, we propose a fast and efficient satellite image fusion method. The proposed method uses the spectral response functions of a satellite sensor; thus, it rationally reflects the physical characteristics of the satellite sensor in the fused image. As a result, the proposed method provides high-quality fused images in terms of spectral and spatial evaluations. The experimental results on IKONOS images indicate that the proposed method outperforms the intensity-hue-saturation and wavelet-based methods.

13.
Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band, trying to preserve spectral signatures and improve spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. But in these methods, features from electromagnetic spectrum regions not covered by the multispectral sensors are injected into them, and the physical spectral responses of the sensors are not considered during this process. This produces some undesirable effects, such as over-injection of spatial detail and slightly modified spectral signatures in some features. The authors present a technique which takes into account the physical electromagnetic spectrum responses of the sensors during the fusion process, producing images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.

14.
Hyperspectral images have a higher spectral resolution (i.e., a larger number of bands covering the electromagnetic spectrum), but a lower spatial resolution with respect to multispectral or panchromatic acquisitions. To increase the capabilities of the data in terms of utilization and interpretation, hyperspectral images having both high spectral and spatial resolution are desired. This can be achieved by combining the hyperspectral image with a high spatial resolution panchromatic image. These techniques are generally known as pansharpening and can be divided into component substitution (CS) and multi-resolution analysis (MRA) based methods. In general, the CS methods result in fused images of high spatial quality that suffer from spectral distortions. On the other hand, images obtained using MRA techniques are not as sharp as those of CS methods, but they are spectrally consistent. Both substitution and filtering approaches are considered adequate when applied to multispectral and PAN images, but they have many drawbacks when the low-resolution image is a hyperspectral image. Thus, one of the main challenges in hyperspectral pansharpening is to improve the spatial resolution while preserving as much of the original spectral information as possible. An effective solution to these problems has been found in hybrid approaches, combining the better spatial information of CS and the more accurate spectral information of MRA techniques. In general, in a hybrid approach a CS technique is used to project the original data into a low dimensionality space. Then, the PAN image is fused with one or more features by means of an MRA approach. Finally, the inverse projection is used to obtain the enhanced image in the original data space. These methods permit an effective enhancement of the spatial resolution of the hyperspectral image without significant spectral distortion and, at the same time, reduce the computational load of the entire process.
In particular, in this paper we focus our attention on the use of Nonlinear Principal Component Analysis (NLPCA) for the projection of the image into a low dimensionality feature space. However, while NLPCA has been shown to better represent the intrinsic information of hyperspectral images in the feature space, an analysis of the impact of different fusion techniques applied to the nonlinear principal components, needed to define the optimal framework for hybrid pansharpening, has not yet been carried out. More specifically, in this paper we analyze the overall impact of several widely used MRA pansharpening algorithms applied in the nonlinear feature space. The results obtained on both synthetic and real data demonstrate that an accurate selection of the pansharpening method can lead to an effective improvement of the enhanced hyperspectral image in terms of spectral quality and spatial consistency, as well as a strong reduction in computational time.
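The project-substitute-invert skeleton of the hybrid scheme above can be sketched with ordinary linear PCA standing in for NLPCA, and with direct substitution standing in for the MRA injection step (both substitutions are simplifications; window of validity is the CS part of the pipeline only):

```python
import numpy as np

def pca_substitute(hs, pan):
    """Component-substitution sketch: swap PC1 of an (H, W, B) cube for pan."""
    h, w, b = hs.shape
    X = hs.reshape(-1, b)
    mu = X.mean(axis=0)
    Xc = X - mu
    # eigenvectors of the band covariance, sorted by decreasing variance
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)
    _, vecs = np.linalg.eigh(cov)         # eigh returns ascending order
    vecs = vecs[:, ::-1]
    pcs = Xc @ vecs                       # project into component space
    p = pan.ravel()
    # moment-match pan to PC1, then substitute it for PC1
    pcs[:, 0] = ((p - p.mean())
                 * (pcs[:, 0].std() / (p.std() + 1e-12)) + pcs[:, 0].mean())
    # inverse projection back to the original band space
    return (pcs @ vecs.T + mu).reshape(h, w, b)

rng = np.random.default_rng(0)
cube = rng.random((10, 10, 4))            # toy hyperspectral cube
pan = rng.random((10, 10))                # toy panchromatic band
sharp = pca_substitute(cube, pan)
```

Because the substituted component is moment-matched, the per-band means of the cube are preserved, which is one reason CS methods keep reasonable radiometry even when spectral distortion appears in finer statistics.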

15.
A statistical-model-based multi-resolution fusion method for remote-sensing images   Cited: 2 (self: 1, other: 1)
Addressing the enhancement of multispectral images, this paper proposes a new multi-resolution fusion method. A correlation constraint is introduced into the statistical fusion model, which fully enhances the correlated spatial information while effectively suppressing spectral distortion. Experimental results on real remote-sensing images demonstrate the effectiveness of the algorithm.

16.
High correlation among the neighboring pixels, both spatially and spectrally, in a multispectral image makes it necessary to use an efficient data transformation approach before performing pan-sharpening. Wavelets and principal component analysis (PCA) have been popular choices for spatial and spectral transformations, respectively. Current PCA-based pan-sharpening methods assume that the first principal component (PC) of high variance is an ideal choice for replacement or for injection of high spatial details from the high-resolution histogram-matched panchromatic (PAN) image. This paper presents a combined adaptive PCA-contourlet approach for pan-sharpening, where the adaptive PCA is used to reduce the spectral distortion, and nonsubsampled contourlets are incorporated as the spatial transformation to overcome the limitation of wavelets in representing directional information efficiently and capturing the intrinsic geometrical structures of objects. The efficiency of the presented method is tested by pan-sharpening high-resolution (IKONOS and QuickBird) and medium-resolution (Landsat-7 Enhanced Thematic Mapper Plus) datasets. The evaluation of the pan-sharpened images using global validation indexes reveals that the adaptive PCA approach helps reduce the spectral distortion, and its merger with contourlets provides better fusion results.

17.
In this study, we propose a new deep learning architecture named Multi-Level Dense Network (MLDNet) for multi-focus image fusion (MFIF). We introduce shallow and dense feature extraction in our feature extraction module to extract image features more robustly. In particular, features are extracted from a mixture of distributions, ranging from simple priors to complex distributions, through densely connected convolutional layers, and the extracted features are then fused to form dense local feature maps. Global feature fusion is added to the proposed architecture in order to merge the dense local feature maps of each source image into a fused image representation for the reconstruction of the final fused image. Our proposed MLDNet learns feature extraction, feature fusion, and reconstruction within the same network to provide an end-to-end solution for MFIF. Experimental results demonstrate that our proposed method achieves superior performance compared with different state-of-the-art MFIF methods.

18.
孟勇, 周则明, 胡宝鹏, 胡彪. Infrared (《红外》), 2015, 36(8): 12-17
To make full use of the complementary information in infrared and low-light-level remote-sensing images and to facilitate visual interpretation, a variational image fusion method is proposed. The variational model defines a detail-injection term and a structure-fidelity term, which improve the spatial detail and structural characteristics of the fused image while preserving the spectral characteristics of the infrared and low-light images; a regularization energy term is introduced to guarantee the smoothness of the optimal solution of the functional. Based on gradient-descent flow, the fused image is obtained by numerical iteration. Experimental results show that the model yields fused images rich in both detail and spectral information, with better fusion performance than the Laplacian pyramid decomposition method and the à trous wavelet method.
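The gradient-descent-flow iteration described above can be sketched for one band. This is a minimal numpy illustration under stated assumptions, not the paper's model: the detail-injection and structure-fidelity terms are modeled as gradient matching to a detail source plus intensity fidelity to a spectral source, the boundaries are periodic, and the weight, step size, and iteration count are illustrative:

```python
import numpy as np

def laplacian(u):
    """Five-point Laplacian with periodic boundaries (kept simple here)."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def variational_fuse(detail_src, spectral_src, lam=1.0, tau=0.15, steps=200):
    """Gradient descent on E(f) = |grad f - grad d|^2 + lam * (f - s)^2."""
    f = spectral_src.copy()
    for _ in range(steps):
        # Euler-Lagrange gradient: -Laplacian(f - d) + lam * (f - s)
        grad = -laplacian(f - detail_src) + lam * (f - spectral_src)
        f -= tau * grad
    return f

rng = np.random.default_rng(0)
detail_band = rng.random((16, 16))    # stand-in for the detail-rich source
spectral_band = rng.random((16, 16))  # stand-in for the spectrum-faithful source
fused = variational_fuse(detail_band, spectral_band)
```

The fixed point of the iteration solves (-Δ + λ)f = -Δd + λs, so the result inherits the gradients of the detail source and the low-frequency content of the spectral source, which is the qualitative behavior the abstract describes.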
