Similar Documents (20 results)
1.
Sentinel-2 satellite sensors acquire optical remote-sensing images at three different spatial resolutions. How to improve the spatial resolution of the lower-resolution bands by fusion is one of the problems facing Sentinel-2 applications. Taking a Sentinel-2B image as the data source, a high-spatial-resolution band was generated or selected from the four 10 m bands by four methods: maximum correlation coefficient, nearest central wavelength, pixel maximum, and principal component analysis. The resulting high-resolution band was then fused with the six 20 m multispectral bands using five fusion methods (PCA, HPF, WT, GS and Pansharp) to produce six multispectral bands at 10 m resolution. The fusion results were evaluated qualitatively, quantitatively (information entropy, average gradient, spectral correlation coefficient, root mean square error and universal image quality index), and by the classification accuracy of the fused images. Results show that Pansharp fusion with the maximum-correlation-coefficient band outperforms the other fusion methods; its classification accuracy is slightly lower than that of GS with the pixel-maximum band (the highest) and far higher than that of the original four 10 m multispectral bands. The experiments also show that different fusion methods have different advantages in extracting different ground objects, so an appropriate scheme should be selected according to actual research needs. This research can serve as a reference for processing and applying Sentinel-2 and similar satellite data.
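The quantitative indices named above (information entropy, average gradient, spectral correlation coefficient, RMSE) are standard and easy to state precisely. A minimal numpy sketch, with function names of my own choosing, not the authors' code:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the image histogram; higher = more information."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Mean local gradient magnitude; higher = more spatial detail."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx**2 + gy**2) / 2.0)))

def correlation_coefficient(a, b):
    """Spectral correlation between a fused band and its reference band."""
    return float(np.corrcoef(a.ravel().astype(float), b.ravel().astype(float))[0, 1])

def rmse(a, b):
    """Root mean square error between fused and reference bands."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

# Sanity check on a toy band compared with itself
band = np.random.default_rng(0).integers(0, 255, (64, 64))
```

A band compared with itself should give correlation 1 and RMSE 0; a fused band is compared against the (degraded) reference instead.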

2.
A host of remote-sensing and mapping applications require both high spatial and high spectral resolution. The availability of imagery at different resolutions from a suite of satellite sensors has necessitated the development of fusion techniques that can effectively combine the information from different sensors and take advantage of their varied capabilities. A common problem in multi-sensor, multi-temporal data fusion is spectral distortion of the fused images, and the performance of a technique also varies with scene characteristics. In this article, two sets of multi-temporal CARTOSAT-1 and Indian Remote Sensing satellite (IRS-P6) Linear Imaging and Self Scanning sensor (LISS-IV) image sub-scenes, with different urban landscape characteristics, are fused to evaluate the performance of five image fusion algorithms: high-pass filtering (HPF), Gram-Schmidt (GS), Ehlers, PANSHARP and colour-normalized Brovey (CN-Brovey). The resultant fused data sets are compared qualitatively and quantitatively with respect to spectral fidelity; spatial enhancement is assessed visually. The variation in performance with scene characteristics is also examined. For both scenes, the GS, HPF and PANSHARP techniques produced comparable results with high spectral quality and spatial enhancement, and their performance varied little across scenes. The Ehlers method produced spatially degraded images with a roughly constant negative offset in data values in all bands of one scene and in the first two bands of the other. The CN-Brovey method produced excellent spatial enhancement but highly distorted radiometry for both sub-scenes.
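Of the five algorithms compared, CN-Brovey is the simplest to state: each band is scaled by the ratio of the pan image to the band sum. A sketch under the assumption that the multispectral bands are already resampled to the pan grid (the sharpening and the radiometric distortion noted above both come from this ratio):

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-12):
    """Colour-normalized (Brovey) fusion.
    ms:  (bands, H, W) multispectral, already resampled to pan's grid
    pan: (H, W) panchromatic
    fused_i = ms_i * pan / sum(ms): spatial detail is injected as a
    brightness ratio, which distorts radiometry when pan and the band
    sum disagree."""
    ms = ms.astype(float)
    total = ms.sum(axis=0) + eps
    return ms * (pan / total)

rng = np.random.default_rng(1)
ms = rng.uniform(1, 100, (3, 32, 32))
pan = ms.sum(axis=0)           # idealized pan = exact band sum
fused = brovey_fuse(ms, pan)
# When pan equals the band sum, Brovey returns the input unchanged.
print(np.allclose(fused, ms))  # True
```

With a real pan band the ratio differs from 1, which is exactly where both the enhancement and the spectral distortion come from.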

3.
The most frequent application of unmanned aerial vehicles (UAVs) is to collect optical colour images of an area of interest, so high-spatial-resolution colour images with a high signal-to-noise ratio (SNR) are of great importance in UAV applications. Currently, most UAVs use single-sensor colour filter array (CFA) cameras for image collection, among which Bayer-pattern sensors are the most common. Because of the limitations of CFAs, the quality of UAV colour images (in terms of spatial resolution, SNR and sharpness) is not optimal. In this article, a sensor fusion solution is proposed to improve the quality of UAV imaging: a high-resolution colour (HRC) Bayer-pattern sensor is replaced by a dual-camera set containing a panchromatic (Pan) sensor with the same pixel size and a Bayer-pattern colour (or four-band multispectral) sensor with a larger pixel size, and the images from the dual-camera set are then fused. The enlarged pixel size of the colour sensor provides a higher SNR at the cost of lower spatial resolution, while the accompanying Pan sensor provides single-band images with both high SNR and high spatial resolution. Fusing the images of the dual-camera set generates colour (or MS) images with high spatial resolution, SNR and sharpness, compensating for the major shortcomings of Bayer-pattern filters.

This replacement solution was initially tested in a laboratory experiment. Quality assessments show that the SNR is increased 2–3 times, sharpness is improved by around 2 times, and the spatial resolution is raised to the level of the Pan images, while the colour errors remain almost as low as in the original colour images. In addition, the classification capability of the images was examined using two methods, Support Vector Machine (SVM) and Maximum Likelihood (ML); the classification results confirmed an increase in accuracy of around 20–40%. The proposed sensor fusion can therefore be a good alternative to conventional UAV colour sensors.


4.
Image fusion is an important component of digital image processing and quantitative image analysis: the technique of integrating and merging information from different remote sensors to achieve refined or improved data. A number of fusion algorithms have been developed over the past two decades, and most are effective for same-sensor, single-date images; colour distortion, however, is a common problem in multi-sensor or multi-date image fusion. In this study, a new image fusion method based on regression kriging is presented. Regression kriging takes into account the correlation between the response variable (the image to be fused) and the predictor variables (the images with finer spatial resolution), the spatial autocorrelation among pixels in the predictor images, and unbiased estimation with minimized variance. Regression kriging is applied to fuse multi-sensor, multi-temporal (e.g., IKONOS, QuickBird and OrbView-3) images. Its significant properties for image fusion are spectral preservation and a relatively simple procedure. Qualitative assessment indicates no apparent colour distortion in the fused images, which coincides with the quantitative checks: the fused images are highly correlated with the initial data, and the per-pixel differences are too small to be considered significant errors. Besides a basic comparison between a wavelet-based approach and regression kriging, general comparisons with other published fusion algorithms indicate that regression kriging is comparable with other sophisticated techniques for multi-sensor and multi-date image fusion.

5.
The Ehlers fusion method, which combines a standard intensity-hue-saturation (IHS) transform with fast Fourier transform filtering, is a fusion algorithm with high spectral-characteristics preservation for multitemporal and multisensor data sets. However, for data sets of more than three bands the process is complicated, because bands must be fused repeatedly, three at a time, until all bands are fused. The hyper-spherical colour sharpening (HCS) fusion method, by contrast, can fuse a data set with an arbitrary number of bands: it uses a transform between an n-dimensional Cartesian space and an n-dimensional hyper-spherical space to obtain a single intensity component and n − 1 angles. Moreover, from a structural point of view, the hyper-spherical colour space is very similar to the IHS colour space. Hence, we propose to combine the Ehlers fusion with an HCS transform to fuse n-band data sets, even hyperspectral images, with high spectral-information preservation. A WorldView-2 data set comprising a panchromatic band and eight multispectral bands is used to demonstrate the effectiveness and quality of the new Ehlers-HCS fusion. The WorldView-2 image covers different landscapes such as agriculture, forest, water and urban areas. The fused images are analysed visually and quantitatively for spectral preservation and spatial improvement, and the pros and cons of the applied fusion methods are related to the different landscapes analysed. Overall, the Ehlers-HCS method proves effective for n-band fusion.
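The Cartesian-to-hyperspherical transform at the heart of HCS (one intensity plus n − 1 angles) can be sketched directly; this is the standard hyperspherical coordinate change for non-negative band vectors, not the authors' implementation. Sharpening then amounts to rescaling the radius with the pan intensity and inverting:

```python
import numpy as np

def hcs_forward(x):
    """Cartesian (n bands per pixel) -> hyperspherical (radius + n-1 angles).
    x: (n, H, W) with non-negative band values."""
    n = x.shape[0]
    r = np.sqrt((x ** 2).sum(axis=0))
    # tail[i] = sqrt(x_i^2 + ... + x_{n-1}^2)
    tail = np.sqrt(np.cumsum((x ** 2)[::-1], axis=0))[::-1]
    angles = [np.arctan2(tail[i + 1], x[i]) for i in range(n - 1)]
    return r, np.stack(angles)

def hcs_inverse(r, angles):
    """Hyperspherical -> Cartesian; exact inverse of hcs_forward."""
    n = angles.shape[0] + 1
    x, sin_prod = [], np.ones_like(r)
    for i in range(n - 1):
        x.append(r * sin_prod * np.cos(angles[i]))
        sin_prod = sin_prod * np.sin(angles[i])
    x.append(r * sin_prod)
    return np.stack(x)

rng = np.random.default_rng(2)
bands = rng.uniform(0.1, 1.0, (8, 16, 16))      # e.g. WorldView-2's 8 MS bands
r, ang = hcs_forward(bands)
print(np.allclose(hcs_inverse(r, ang), bands))  # True: lossless round trip
```

Because the angles carry the "colour" and the radius carries intensity, replacing (or filtering, as in Ehlers-HCS) only the radius leaves the spectral angles untouched, which is why the method preserves spectra for any number of bands.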

6.
Monitoring vegetation dynamics is fundamental for improving Earth system models and for increasing our understanding of the terrestrial carbon cycle and the interactions between the biosphere and climate. Medium-spatial-resolution sensors such as MERIS have significant potential for studying these dynamics over large areas because of their spatial, spectral and temporal resolution. However, the spatial resolution provided by MERIS (300 m in full-resolution mode) is not appropriate for monitoring heterogeneous landscapes, where the typical length scales of these dynamics rarely reach 300 m. We therefore motivate the use of data fusion techniques to downscale medium-spatial-resolution data (MERIS full resolution, FR) to a Landsat-like spatial resolution (25 m). An unmixing-based data fusion approach was applied to a time series of MERIS FR images acquired over The Netherlands. The approach is based on the linear mixing model and uses a high-spatial-resolution land-use database to produce images with the spectral and temporal resolution of MERIS but a Landsat-like spatial resolution. A quantitative assessment of the quality of the fused images was carried out to test the validity of the proposed method and to evaluate the radiometric characteristics of the fused MERIS images. The resulting series of fused images was then used to compute two vegetation indices specifically designed for MERIS: the MERIS terrestrial chlorophyll index (MTCI) and the MERIS global vegetation index (MGVI), which represent continuous fields of canopy chlorophyll (MTCI) and of the fraction of photosynthetically active radiation absorbed by the canopy (MGVI). The results indicate that the selected data fusion approach can successfully downscale MERIS data and thus monitor vegetation dynamics at Landsat-like spatial, and MERIS-like spectral and temporal, resolution.
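The unmixing-based downscaling idea is compact enough to sketch for a single band: each coarse pixel is modelled as a class-fraction-weighted sum of one unknown reflectance per land-use class, the per-class reflectances are solved by least squares, and the solution is painted onto the fine-resolution class map. This is a simplified single-band illustration of the linear mixing model, not the authors' code (the real method solves per band and per acquisition date, typically within local windows):

```python
import numpy as np

def unmix_downscale(coarse, labels_fine, scale):
    """Unmixing-based downscaling under the linear mixing model.
    coarse:      (Hc, Wc) one coarse-resolution band (e.g. a 300 m MERIS FR band)
    labels_fine: (Hc*scale, Wc*scale) land-use class map at fine resolution
    scale:       integer resolution ratio (300 m / 25 m = 12 in the study)"""
    Hc, Wc = coarse.shape
    classes = np.unique(labels_fine)
    # Class fractions inside every coarse pixel (rows of the mixing matrix)
    blocks = (labels_fine.reshape(Hc, scale, Wc, scale)
                         .swapaxes(1, 2).reshape(Hc * Wc, -1))
    A = np.stack([(blocks == c).mean(axis=1) for c in classes], axis=1)
    sig, *_ = np.linalg.lstsq(A, coarse.ravel().astype(float), rcond=None)
    # Paint each fine pixel with its class's solved reflectance
    fine = np.zeros(labels_fine.shape)
    for c, s in zip(classes, sig):
        fine[labels_fine == c] = s
    return fine

# Synthetic check: three classes with known reflectances are recovered exactly.
labels = np.add.outer(np.arange(24), np.arange(24)) % 3
truth = np.array([10.0, 20.0, 30.0])[labels]
coarse = truth.reshape(6, 4, 6, 4).mean(axis=(1, 3))
print(np.allclose(unmix_downscale(coarse, labels, 4), truth))  # True
```

The output has the fine grid of the land-use map but spectra derived entirely from the coarse sensor, which is exactly the trade the abstract describes.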

7.
Infrared images can distinguish targets from their backgrounds based on the radiation difference, which works well in all-weather and all-day/night conditions. By contrast, visible images can provide texture details with high spatial resolution and definition in a manner consistent with the human visual system. Therefore, it is desirable to fuse these two types of images, which can combine the advantages of thermal radiation information in infrared images and detailed texture information in visible images. In this work, we comprehensively survey the existing methods and applications for the fusion of infrared and visible images. First, infrared and visible image fusion methods are reviewed in detail. Meanwhile, image registration, as a prerequisite of image fusion, is briefly introduced. Second, we provide an overview of the main applications of infrared and visible image fusion. Third, the evaluation metrics of fusion performance are discussed and summarized. Fourth, we select eighteen representative methods and nine assessment metrics to conduct qualitative and quantitative experiments, which can provide an objective performance reference for different fusion methods and thus support relative engineering with credible and solid evidence. Finally, we conclude with the current status of infrared and visible image fusion and deliver insightful discussions and prospects for future work. This survey can serve as a reference for researchers in infrared and visible image fusion and related fields.

8.
Fusion of MODIS imagery for land-cover classification
The multiple bands of MODIS imagery, together with the 250 m medium resolution of bands 1 and 2, make medium-resolution land-cover mapping over large areas possible. To use the spatial and spectral information of MODIS imagery effectively, the SFIM, HPF and PCA fusion methods were each used to fuse bands 3-7 (b3-b7) with band 1 (b1) and with band 2 (b2), and the six fusion results were evaluated in terms of spectral fidelity and classification accuracy. The results show that all six fusion results improve classification accuracy to different degrees; for all three methods, fusion with b2 outperforms fusion with b1; and the SFIM transform improves classification accuracy most while introducing little spectral distortion. SFIM fusion with b2 can therefore be used to improve the spatial resolution and accuracy of MODIS land-cover maps.
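SFIM, the best performer above, has a one-line definition: fused = MS × HR / lowpass(HR), with the smoothing window matched to the resolution ratio. A minimal numpy sketch (block-mean low-pass used for simplicity; real pipelines use a smoothing filter of comparable support):

```python
import numpy as np

def block_mean_upsampled(img, r):
    """Low-pass version of img: r x r block means, repeated back to full size."""
    H, W = img.shape
    m = img.reshape(H // r, r, W // r, r).mean(axis=(1, 3))
    return np.repeat(np.repeat(m, r, axis=0), r, axis=1)

def sfim_fuse(ms_up, band_hr, r):
    """Smoothing Filter-based Intensity Modulation (SFIM):
    fused = MS * HR / lowpass(HR). Only detail finer than one MS pixel is
    injected, as a brightness ratio, so spectra are largely preserved.
    ms_up:   MS band already resampled to the high-resolution grid
    band_hr: the high-resolution band (MODIS b1 or b2 in the study)
    r:       resolution ratio (500 m / 250 m = 2 for MODIS b3-b7)"""
    low = block_mean_upsampled(band_hr.astype(float), r)
    return ms_up.astype(float) * band_hr / (low + 1e-12)

rng = np.random.default_rng(3)
ms = rng.uniform(10, 50, (16, 16))
flat = np.full((16, 16), 7.0)                   # structure-free high-res band
print(np.allclose(sfim_fuse(ms, flat, 2), ms))  # True: nothing to inject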

9.
There are many image fusion processes that produce a high-resolution multispectral (MS) image from low-resolution MS and high-resolution panchromatic (PAN) images, but the most significant problems are colour distortion and fusion quality. Previously, we reported a fusion process that produced a 1 m resolution IKONOS fused image with minimal spectral distortion; however, block distortion appeared at the edges of curved sections of the fused image, and was reduced by applying a wavelet transformation as a post-process. Here, we propose an image fusion process using the steepest descent method with bi-linear interpolation, which removes block distortion without wavelet transformation. Bi-linear interpolation provides proper initial values for the fused image, and the steepest descent method then produces the optimal fusion result. The results improve both the spectral and the spatial quality of the 1 m resolution fused image compared with existing methods, and remove block distortion completely.
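The idea of fusion as steepest descent can be sketched with a simple quadratic objective: find a fine image that (a) block-averages back to the MS data and (b) has a band mean matching the pan. This is my illustrative loss, not the paper's exact functional, and block repetition stands in for the paper's bi-linear initial values:

```python
import numpy as np

def block_mean(img, r):
    """Degrade to the coarse grid by r x r block averaging."""
    H, W = img.shape
    return img.reshape(H // r, r, W // r, r).mean(axis=(1, 3))

def upsample(img, r):
    """Nearest-neighbour upsampling (stand-in for bi-linear initialization)."""
    return np.repeat(np.repeat(img, r, axis=0), r, axis=1)

def loss(F, ms, pan, r, lam=1.0):
    """Spectral consistency with MS plus spatial consistency with pan."""
    spec = sum(((block_mean(b, r) - m) ** 2).sum() for b, m in zip(F, ms))
    spat = ((F.mean(axis=0) - pan) ** 2).sum()
    return spec + lam * spat

def fuse_steepest_descent(ms, pan, r, lam=1.0, lr=0.5, steps=200):
    """Start from an upsampled guess and follow the analytic gradient downhill."""
    n = ms.shape[0]
    F = np.stack([upsample(b, r) for b in ms])
    for _ in range(steps):
        spec_res = np.stack([block_mean(b, r) for b in F]) - ms
        spat_res = F.mean(axis=0) - pan
        grad = (np.stack([upsample(b, r) for b in spec_res]) * (2.0 / r**2)
                + (2.0 * lam / n) * spat_res)
        F = F - lr * grad
    return F

rng = np.random.default_rng(5)
ms = rng.uniform(0, 1, (3, 8, 8))
pan = rng.uniform(0, 1, (24, 24))
F0 = np.stack([upsample(b, 3) for b in ms])   # initial guess
F = fuse_steepest_descent(ms, pan, 3)         # descended solution
```

Each step strictly lowers the loss (the step size is below the stability bound for this quadratic), which is the mechanism the abstract relies on to reach an optimum from the interpolated initial values.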

10.
Fusion of remote-sensing images is a research focus in the remote-sensing community. Depending on the data sources, image fusion can be divided into fusion of images from different sensors and fusion of images from the same sensor. Taking TM and SPOT images as an example of multi-sensor fusion, and IKONOS MS and Pan images as an example of single-sensor fusion, five algorithms were tested and compared on the two fusion types. The results show that single-sensor fusion outperforms multi-sensor fusion, and that the algorithms behave differently in the two cases. The SVR transform is applicable to both multi-sensor and single-sensor fusion: it improves spatial resolution, information content and sharpness while preserving the spectral characteristics of the original multispectral image well. SFIM also achieves good fusion results for both data sources, but injects the least high-frequency information. MB injects more high-frequency information into the fused image, but its spectral fidelity, information content and sharpness are unsatisfactory. Ehlers is suited to multi-sensor fusion, whereas WT is suited to single-sensor fusion.

11.
Earth observation satellites provide both multispectral (XS) and panchromatic (pan) images simultaneously, but the XS image has lower spatial resolution than the pan image. Pansharpening is a pixel-level fusion technique that produces a multispectral image with high resolution in both the spatial and the spectral sense. The problem lies in maintaining the spectral characteristics of each XS channel when the pan image is used to estimate the high-spatial-resolution XS image. Many techniques have been proposed to address this; a popular approach is sensor-based, incorporating the correlation among the XS channels and the correlation between the pan and spectral channels. In this paper, we take a holistic approach based on the reflectance data, independent of the sensor physics. A linear regression model is formulated between each XS channel and the panchromatic data, and an optimization problem is posed in terms of a Lagrange multiplier to maximise the spectral consistency of the fused data with respect to the original XS data and to minimise the error in variance between the reference data and the computed data. We validate and compare our method with the IHS and Brovey methods using evaluation metrics such as the Chi-square and R2 tests. The implementation is demonstrated using IKONOS satellite data.
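A regression-based pansharpening step can be sketched simply: fit each XS channel against the degraded pan by least squares, predict with the full-resolution pan, and add back the upsampled regression residual. This is a simplified illustration of the regression idea, not the paper's Lagrange-multiplier formulation, but it shares the spectral-consistency property the paper optimizes for: degrading the fused image returns the original XS exactly.

```python
import numpy as np

def regress_pansharpen(xs, pan, r):
    """Per-channel linear-regression pansharpening (sketch).
    xs:  (bands, Hc, Wc) multispectral; pan: (Hc*r, Wc*r) panchromatic.
    Fit XS_b ~ a*pan_low + c on the coarse grid, predict with the
    full-resolution pan, then add the upsampled residual so that
    block-averaging the result recovers xs exactly."""
    Hc, Wc = xs.shape[1:]
    pan = pan.astype(float)
    pan_low = pan.reshape(Hc, r, Wc, r).mean(axis=(1, 3))
    A = np.stack([pan_low.ravel(), np.ones(pan_low.size)], axis=1)
    fused = []
    for band in xs.astype(float):
        (a, c), *_ = np.linalg.lstsq(A, band.ravel(), rcond=None)
        resid = band - (a * pan_low + c)
        detail = np.repeat(np.repeat(resid, r, axis=0), r, axis=1)
        fused.append(a * pan + c + detail)
    return np.stack(fused)

rng = np.random.default_rng(6)
xs = rng.uniform(0, 1, (4, 8, 8))
pan = rng.uniform(0, 1, (16, 16))
out = regress_pansharpen(xs, pan, 2)
# Spectral consistency: block-averaging the fused image recovers xs.
print(np.allclose(out.reshape(4, 8, 2, 8, 2).mean(axis=(2, 4)), xs))  # True
```

The residual term is what distinguishes this from a pure regression prediction: it guarantees the fused product stays anchored to the observed XS radiometry.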

12.
Currently available image fusion techniques applied to the merging of fine-resolution panchromatic and multispectral images are still unable to minimize colour distortion while maximizing spatial detail. In this study, a new fusion method based on Bidimensional Empirical Mode Decomposition (BEMD) is proposed. Unlike other multiresolution analysis tools such as the discrete wavelet transform (DWT), which normally examines only horizontal, vertical and diagonal orthonormal details at each decomposed scale, BEMD produces a fully two-dimensional decomposition of the panchromatic and multispectral images based purely on the spatial relationships between the extrema of the image. The images are decomposed into a number of Intrinsic Mode Functions (IMFs) and residual images with the same number of rows and columns as the original. The fused image can then be reconstructed by injecting all the IMF images from the panchromatic image into the residue of the corresponding multispectral image. The fusion results are evaluated and compared with popular methods by both visual examination and quantitative assessment of the merged images. Preliminary results show that BEMD performs well, providing a delicate balance between spectral-information preservation and enhancement of spatial detail.

13.
WorldView-3 is one of the most advanced high-resolution optical satellites currently available. For WorldView-3 data, the large difference in spatial resolution and the mismatched spectral ranges between the panchromatic band and the short-wave infrared (SWIR) bands lead to blocking artefacts in fusion results and limit the achievable resolution enhancement. To address this, a two-step image fusion framework is proposed: the spatial resolution of the panchromatic band is first reduced and a preliminary fusion with the SWIR bands is performed; the preliminary result is then fused with the full-resolution panchromatic band in a second step. Six typical pixel-level fusion methods were combined pairwise into 36 fusion schemes to verify the framework's applicability. Experiments on a WorldView-3 data set containing vegetation, buildings, water and other typical land-cover types were evaluated with five quantitative indices. The results show that the two-step framework, by injecting spatial detail gradually, avoids the blocking artefacts of direct fusion and enhances the spatial resolution of the SWIR imagery; using high-pass filter (HPF) fusion in the first step and Gram-Schmidt (GS) fusion in the second step injects the most spatial information and yields the best fusion quality. The proposed framework both avoids blocking artefacts and effectively enhances the spatial resolution of the SWIR bands, and it offers a useful reference for fusing the panchromatic and short-wave infrared bands of other satellites.
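The two-step structure can be sketched with HPF fusion standing in for both steps (the paper's best scheme uses HPF for step 1 and GS for step 2; plain HPF is used twice here only to show the staging). The point is to bridge the resolution gap in two smaller jumps instead of one large one:

```python
import numpy as np

def up(img, r):
    return np.repeat(np.repeat(img, r, axis=0), r, axis=1)

def low(img, r):
    """Degrade by r x r block averaging."""
    H, W = img.shape
    return img.reshape(H // r, r, W // r, r).mean(axis=(1, 3))

def hpf_fuse(ms, pan, r):
    """High-pass-filter fusion: add the pan detail finer than one MS pixel."""
    return up(ms, r) + (pan - up(low(pan, r), r))

def two_step_fuse(swir, pan, r1, r2):
    """Two-step framework sketch: pan is first degraded by r2, so the
    first fusion bridges only a factor r1; the second step injects the
    remaining detail. Gradual injection is what suppresses blockiness."""
    step1 = hpf_fuse(swir, low(pan, r2), r1)   # SWIR grid -> intermediate grid
    return hpf_fuse(step1, pan, r2)            # intermediate -> full pan grid

rng = np.random.default_rng(8)
swir = rng.uniform(0, 1, (4, 4))
flat_pan = np.full((16, 16), 5.0)   # detail-free pan: nothing to inject
fused = two_step_fuse(swir, flat_pan, 2, 2)
```

With a structure-free pan, both steps inject nothing and the output is just the upsampled SWIR band, which makes the staging easy to verify.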

14.
This paper proposes a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning. By combining the spectral information from sensors with low spatial but high spectral resolution (LSHS) and the spatial information from sensors with high spatial but low spectral resolution (HSLS), the method aims to generate fused data with both high spatial and high spectral resolution. Based on sparse non-negative matrix factorization, the method first extracts spectral bases of the LSHS and HSLS images, making full use of the rich spectral information in the LSHS data. The spectral bases of the two data categories form a dictionary pair, corresponding in how they represent each pixel spectrum of the LSHS and HSLS data, respectively. The HSLS image is then spatially unmixed by representing it with respect to the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of the LSHS data with the representation coefficients of the HSLS data finally yields fused data characterized by the spectral resolution of the LSHS data and the spatial resolution of the HSLS data. Experiments compare the proposed method with two representative methods on both simulated data and actual satellite images, including the fusion of Landsat/ETM+ and Aqua/MODIS data and the fusion of EO-1/Hyperion and SPOT5/HRG multispectral images. Visual comparison of the fusion results and quantitative evaluation in terms of several measurement indices show that the proposed method preserves both spectral information and spatial detail and performs better than the comparison approaches.

15.
Objective: Fusing a hyperspectral image with a multispectral image yields a spectral image with both high spatial and high spectral resolution, improving spectral image quality. Existing deep-learning-based fusion methods perform well but lack joint modelling of the long-range spectral and spatial dependencies in the multi-source image features. To exploit spectral correlation and spatial similarity effectively, a Transformer network with joint self-attention is proposed for multispectral and hyperspectral image fusion and super-resolution. Method: A joint self-attention module first extracts spectral-correlation features from the hyperspectral image via a spectral attention mechanism and spatial-similarity features from the multispectral image via a spatial attention mechanism; the resulting joint similarity features guide the fusion of the two images. The fused features are then fed into a residual Transformer network based on sliding windows to model their long-range dependencies and learn deep fusion priors. Finally, a convolutional layer maps the features to a hyperspectral image with high spatial resolution. Results: Experiments at different sampling ratios on the CAVE and Harvard spectral data sets show that the proposed method outperforms the comparison methods both quantitatively and visually: compared with the second-best method, EDBIN (enhanced deep blind iterative network), it improves peak signal-to-noise ratio by 0.5 dB on CAVE and 0.6 dB on Harvard. Conclusion: The proposed method fuses spectral and spatial information more effectively and significantly improves the quality of super-resolved fused hyperspectral images.

16.
Recent advances in spatial and spectral resolution of satellite imagery as well as in processing techniques are opening new possibilities of fine-scale vegetation analysis with interesting applications in natural resource management. Here we present the main results of a study carried out in Sierra Morena, Cordoba (southern Spain), aimed at assessing the potential of remote-sensing techniques to discriminate and map individual wild pear trees (Pyrus bourgaeana) in Mediterranean open woodland dominated by Quercus ilex. We used high spatial resolution (2.4 m multispectral/0.6 m panchromatic) QuickBird satellite imagery obtained during the summer of 2008. Given the size and features of wild pear tree crowns, we applied an atmospheric correction method, Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercube (FLAASH), and six different fusion ‘pan-sharpening’ methods (wavelet ‘à trous’ weighted transform, colour normalized (CN), Gram–Schmidt (GS), hue–saturation–intensity (HSI) colour transformation, multidirection–multiresolution (MDMR), and principal component (PC)), to determine which procedure provides the best results. Finally, we assessed the potential of supervised classification techniques (maximum likelihood) to discriminate and map individual wild pear trees scattered over the Mediterranean open woodland.

17.
Evaluation of four fusion methods for GF-1 satellite imagery
Using China's high-resolution GF-1 satellite as the data source, the 2 m panchromatic and 8 m multispectral data were fused with four methods: Pansharp, HPF, Gram-Schmidt and SFIM. The spatial-information injection and spectral fidelity of the fusion results were then evaluated using four objective indices: standard deviation, entropy and joint entropy, average gradient, and relative deviation. The results show that all four methods markedly improve the spatial resolution of the imagery while preserving its spectral information well, increasing the usability of the data. Pansharp performs best overall, HPF produces the sharpest edges, SFIM has the highest spectral fidelity, and Gram-Schmidt works best in the near-infrared band. Choosing a suitable fusion method and parameters for a given research purpose allows GF-1 imagery to better serve production and scientific work.

18.

Image fusion is an important tool for remote-sensing data elaboration and is used for many purposes, most often to improve spatial resolution. The most common situation involves a pair of images: the first acquired by a multispectral sensor with a pixel size greater than that of the second, given by a panchromatic sensor (PAN). Starting from these images, fusion produces a new multispectral image with a spatial resolution equal, or close, to that of the PAN. Fusion often introduces important distortions in the pixel spectra, which can compromise the extraction of information from the image, especially when using an automatic algorithm based on spectral signatures, as in image classification. In this work we analyse two fusion methods based on multiresolution decomposition obtained using the 'à trous' algorithm, applied to a pair of images acquired by the Thematic Mapper (TM) and Indian Remote Sensing (IRS)-1C PAN sensors. The methods are also compared with two classical fusion methods: intensity-hue-saturation (IHS) and standardized principal components (SPC). The fused results are studied and compared using various tests, including supervised classification, most of them drawn from the literature on assessing the spatial and spectral quality of fused images. The study shows that the methods based on multiresolution decomposition outperform the classical fusion methods considered with respect to spectral-content preservation. Moreover, it shows that some of the quality tests are more significant than others; the discussion of this last aspect furnishes important indications for data-quality assessment methods.
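The 'à trous' (with holes) algorithm behind these methods smooths the image repeatedly with an increasingly dilated B3-spline kernel; the wavelet planes are the differences between successive smoothed versions, all at full image size, so pan detail planes can simply be added to an MS band. A sketch under those standard definitions (function names mine):

```python
import numpy as np

def atrous_planes(img, levels=2):
    """'À trous' decomposition: detail planes + residual, all full-size."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    planes, current = [], img.astype(float)
    for j in range(levels):
        k = np.zeros(4 * 2**j + 1)
        k[:: 2**j] = h                      # dilate kernel: insert the 'holes'
        pad = 2 * 2**j
        smoothed = current.copy()
        for axis in (0, 1):                 # separable row/column smoothing
            smoothed = np.apply_along_axis(
                lambda m: np.convolve(np.pad(m, pad, mode='reflect'),
                                      k, mode='same')[pad:-pad],
                axis, smoothed)
        planes.append(current - smoothed)   # wavelet plane at scale j
        current = smoothed
    return planes, current                  # residual = coarsest approximation

def atrous_fuse(ms_band, pan, levels=2):
    """Additive à trous fusion: inject the pan detail planes into the MS band."""
    pan_planes, _ = atrous_planes(pan, levels)
    return ms_band.astype(float) + sum(pan_planes)

rng = np.random.default_rng(4)
img = rng.uniform(0, 1, (32, 32))
planes, resid = atrous_planes(img, 3)
# Perfect reconstruction: the planes and residual sum back to the image.
print(np.allclose(sum(planes) + resid, img))  # True
```

Because the decomposition is undecimated, no aliasing is introduced at any scale, which is a large part of why these methods preserve spectral content better than IHS or SPC substitution.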

19.
An image fusion algorithm based on the wavelet transform
Fusion algorithms based on multiresolution wavelet transforms have been widely applied in recent years. To make full use of the information in different remote-sensing images, a new wavelet-based restoration-enhancement fusion algorithm using regional energy features is proposed for ZY-1 ("Ziyuan-1", CBERS-1) satellite imagery and SPOT panchromatic imagery. Comparison with the traditional IHS, PCA and HPF fusion methods and evaluation of the results show that the proposed method improves the spatial resolution of the ZY-1 image while preserving the spectral information of the multispectral image well, and that it effectively overcomes the blurriness characteristic of ZY-1 imagery.

20.
This paper focuses on establishing a pixel-level fusion framework for optical and synthetic aperture radar (SAR) images that combines the two types of remotely sensed imagery for feature enhancement. We propose a new fusion technique, block-based synthetic variable ratio (Block-SVR), which fuses optical and SAR imagery using multiple linear regression over block regions. To investigate its effectiveness, fusion results for a higher-resolution airborne SAR image and a lower-resolution multispectral image are presented. The fused images enhance spatial and textural content and features invisible in the multispectral images, while preserving colour characteristics. The spectral, spatial and textural effects of the algorithm were evaluated mainly by visual and quantitative methods and compared with those of intensity-hue-saturation (IHS), principal component analysis (PCA) and wavelet-based methods. The block-regression technique has at least two advantages. First, it drastically decreases the amount of computation, whereas regression over the whole scene is almost impossible. Second, and most importantly, adjusting the regression block size trades off preservation of spectral characteristics against enhancement of spatial and textural content: the larger the regression block, the more spatial and textural detail is enhanced; the smaller the block, the better the spectral features are preserved. The assessments indicate that the block-regression method is more flexible than the others, because it can achieve a satisfactory trade-off between spectral preservation and spatial/textural enhancement through selection of an optimal block size with respect to visual interpretation and mapping. The paper also proposes a scheme for fusing SPOT5 panchromatic and XS images with airborne SAR images using the block-regression technique.
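The Block-SVR idea admits a compact sketch: in each tile, regress the high-resolution image on the MS bands to build a synthetic band, then modulate the MS bands by the ratio HR / synthetic. This is my simplified single-grid illustration (a real pipeline would regress against the HR image degraded to the MS grid), not the paper's implementation:

```python
import numpy as np

def block_svr_fuse(ms, hr, block=16):
    """Block-based synthetic variable ratio (sketch).
    ms: (bands, H, W) multispectral; hr: (H, W) high-res image (pan or SAR),
    assumed co-registered on the same grid. Per tile: least-squares fit
    hr ~ MS bands + const, then fused = MS * hr / synthetic. Larger tiles
    inject more spatial/textural detail; smaller tiles preserve spectra."""
    n, H, W = ms.shape
    ms = ms.astype(float)
    fused = np.zeros_like(ms)
    for i in range(0, H, block):
        for j in range(0, W, block):
            tile = ms[:, i:i+block, j:j+block]
            target = hr[i:i+block, j:j+block].astype(float)
            A = np.column_stack([tile.reshape(n, -1).T, np.ones(target.size)])
            w, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
            synthetic = (A @ w).reshape(target.shape)
            fused[:, i:i+block, j:j+block] = tile * target / (synthetic + 1e-12)
    return fused

rng = np.random.default_rng(7)
ms = rng.uniform(1, 10, (3, 32, 32))
hr = 0.5 * ms[0] + 0.3 * ms[1] + 0.2 * ms[2] + 1.0   # exactly explainable HR
print(np.allclose(block_svr_fuse(ms, hr), ms))  # True: ratio is 1 everywhere
```

When the regression explains the high-resolution image perfectly, the ratio is 1 and the spectra pass through untouched; detail the regression cannot explain is what gets injected, and the block size controls how local (spectral-preserving) or global (detail-enhancing) that regression is, as the abstract notes.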
