Similar documents
 17 similar documents found (search time: 187 ms)
1.
SPIHT Algorithm Combined with Vector Quantization for Multispectral Image Compression   (Cited by 4: 0 self, 4 other)
To address the rich, complex textures and relatively weak local correlation of multiband remote-sensing images, a SPIHT compression algorithm combined with vector quantization is proposed. After the wavelet transform, the coefficients at the same spatial position across the spectral bands are grouped into vectors, which are quantized adaptively according to the local texture strength of the high-frequency subimages. This lets the scalar-based SPIHT algorithm handle vectors conveniently and effectively remove the various correlations in the data. Experiments show that the method compresses multiband remote-sensing images well and runs in near real time; both the compression ratio and the peak signal-to-noise ratio (PSNR) for a single image are better than those of the ordinary two-dimensional SPIHT algorithm.
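As a rough illustration of the vector-quantization step described above, the sketch below trains a small codebook with the Lloyd/LBG iteration on "spectral vectors" (the coefficient at one spatial position gathered across several bands). All function names and the toy data are illustrative, not taken from the paper:

```python
def lbg_codebook(vectors, size, iters=20):
    """Generalized Lloyd / LBG training: repeatedly assign each vector
    to its nearest codeword, then recentre every codeword."""
    codebook = list(vectors[:size])          # deterministic initialisation
    for _ in range(iters):
        clusters = [[] for _ in codebook]
        for v in vectors:
            i = min(range(len(codebook)),
                    key=lambda k: sum((a - b) ** 2
                                      for a, b in zip(v, codebook[k])))
            clusters[i].append(v)
        codebook = [tuple(sum(comp) / len(cl) for comp in zip(*cl))
                    if cl else codebook[i]
                    for i, cl in enumerate(clusters)]
    return codebook

def quantize(v, codebook):
    """Index of the nearest codeword (what would actually be coded)."""
    return min(range(len(codebook)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(v, codebook[k])))

# Toy 'spectral vectors': one wavelet coefficient per band, 4 bands,
# gathered at three spatial positions.
vecs = [(0.0, 0.1, 0.0, 0.1), (5.0, 5.1, 4.9, 5.0), (0.1, 0.0, 0.1, 0.0)]
cb = lbg_codebook(vecs, 2)
```

In a real coder the codebook would be trained per subband and the indices entropy-coded; this sketch only shows the nearest-codeword mechanics.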

2.
Information Analysis and Data Compression of Multispectral Images   (Cited by 1: 1 self, 0 other)
蒋青松, 王建宇. 红外技术 (Infrared Technology), 2004, 26(1): 44-47, 51
Conditional entropy is first used to analyse the information redundancy of imaging-spectrometer multispectral images along the spatial and spectral dimensions. The results show that the images are strongly correlated in the spatial dimension, while in the spectral dimension the image information is non-stationary and the correlation is somewhat weaker. The standard JPEG quantization table is then improved, yielding a modified JPEG compression algorithm, I-JPEG, that preserves image edge information while raising the compression ratio. An I-JPEG/DPCM algorithm is further proposed: building on I-JPEG, it uses lossless DPCM, which has good locality, to remove the correlation in the spectral dimension and further improve compression performance.
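The redundancy argument above can be illustrated with a toy computation: along a strongly correlated direction, lossless DPCM residuals have a much lower zeroth-order entropy than the raw samples. This is only a sketch; the sample row is made up and the paper's analysis uses conditional entropy on real imagery:

```python
from math import log2
from collections import Counter

def entropy(symbols):
    """Zeroth-order entropy in bits per symbol."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in counts.values())

def dpcm_residuals(line):
    """Lossless DPCM: transmit the first sample, then the differences."""
    return [line[0]] + [b - a for a, b in zip(line, line[1:])]

row = [10, 11, 12, 12, 13, 14, 14, 15, 16, 16]   # strongly correlated samples
res = dpcm_residuals(row)
```

Since the differences concentrate on a few small values, they code in fewer bits per symbol; the original row is recoverable exactly by a running sum.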

3.
A Hyperspectral Image Compression Method Based on Inter-Frame Decorrelation   (Cited by 7: 1 self, 6 other)
Considering the characteristics of hyperspectral images and the practical needs of hardware implementation, a wavelet-based hyperspectral image compression algorithm with forward-predictive inter-frame decorrelation is proposed. Image matching and inter-frame decorrelation remove the redundancy between hyperspectral frames; the residual images are compressed with a wavelet-based fast bit-plane algorithm combined with adaptive arithmetic coding, and the output bit stream is controlled by a rate-distortion criterion, achieving high-fidelity compression of hyperspectral images. Experiments verify the effectiveness of the scheme: the fast bit-plane algorithm with adaptive arithmetic coding is faster than SPIHT and is easy to implement in hardware.

4.
For on-board multispectral image compression, a compression algorithm based on subband inter-spectral transforms is proposed. The algorithm first applies a two-dimensional spatial wavelet transform to each band of the multispectral sequence to remove spatial correlation. To remove inter-band correlation, each subband level of the wavelet decomposition is treated as a whole, and a serial pairwise scheme applies a subband inter-spectral KLT to two bands at a time. Finally, embedded block coding with optimized truncation compresses all the resulting principal components jointly under optimal rate-distortion allocation. Experimental results show that the algorithm achieves good compression performance with low coding complexity, making it suitable for on-board multispectral image compression.
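A minimal sketch of the pairwise inter-band KLT idea, assuming only two bands at a time: the closed-form rotation below diagonalises the 2x2 covariance of the co-located coefficient pairs, packing most of the energy into the first principal component. Function names and the toy data are illustrative:

```python
from math import atan2, cos, sin

def pairwise_klt(x, y):
    """2-band KLT: rotate the (x, y) coefficient pairs so the first
    principal component carries as much energy as possible."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cxx = sum((a - mx) ** 2 for a in x) / n
    cyy = sum((b - my) ** 2 for b in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Angle that zeroes the off-diagonal term of the rotated covariance.
    theta = 0.5 * atan2(2 * cxy, cxx - cyy)
    c, s = cos(theta), sin(theta)
    p1 = [ c * (a - mx) + s * (b - my) for a, b in zip(x, y)]
    p2 = [-s * (a - mx) + c * (b - my) for a, b in zip(x, y)]
    return p1, p2

band_a = [1.0, 2.0, 3.0, 4.0]
band_b = [1.1, 2.0, 2.9, 4.2]       # almost a copy of band_a
p1, p2 = pairwise_klt(band_a, band_b)
```

After the rotation, p1 and p2 are (numerically) uncorrelated and nearly all the energy sits in p1, which is what makes the subsequent coding efficient.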

5.
Because wireless bandwidth is limited, and given the distinctive characteristics of infrared image sequences, engineering practice requires lossless compression of such sequences. After comparing the compression performance and hardware complexity of four algorithms (JPEG 2000, SPIHT, JPEG + DPCM and JPEG-LS), JPEG-LS was selected as the core compression algorithm for its best performance and ease of hardware implementation, with a DSP and an FPGA as the core of the hardware platform. To verify the feasibility of the system...

6.
To store MODIS multispectral image data efficiently, this paper proposes a multispectral image compression algorithm based on inter-band prediction and the integer wavelet transform. An optimal inter-band predictor is constructed to remove spectral redundancy; the integer wavelet transform and the SPIHT algorithm then remove spatial redundancy from the prediction-error images, followed by adaptive arithmetic coding. The method supports lossless, near-lossless and lossy compression of MODIS multispectral images and gives satisfactory experimental results; comparisons with 3D-SPIHT under different wavelet bases demonstrate its effectiveness.
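The inter-band predictor idea can be sketched as a least-squares fit of one band from its predecessor; the residuals are what would be handed to the wavelet/SPIHT stage. This is a toy sketch of a generic linear predictor, not the paper's exact construction:

```python
def ls_band_predictor(prev_band, cur_band):
    """Fit cur ≈ a * prev + b by least squares (a simple 'optimal'
    linear inter-band predictor) and return the prediction residuals."""
    n = len(prev_band)
    mp = sum(prev_band) / n
    mc = sum(cur_band) / n
    num = sum((p - mp) * (c - mc) for p, c in zip(prev_band, cur_band))
    den = sum((p - mp) ** 2 for p in prev_band)
    a = num / den
    b = mc - a * mp
    residuals = [c - (a * p + b) for p, c in zip(prev_band, cur_band)]
    return a, b, residuals

prev = [10.0, 20.0, 30.0, 40.0]
cur = [21.0, 41.0, 61.0, 81.0]       # cur = 2*prev + 1 exactly
a, b, res = ls_band_predictor(prev, cur)
```

When the bands are strongly correlated, as adjacent MODIS bands typically are, the residuals are near zero and compress far better than the band itself.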

7.
杨新锋, 胡旭诺, 粘永健. 红外与激光工程 (Infrared and Laser Engineering), 2016, 45(2): 228003-0228003(4)
The huge volume of hyperspectral image data poses a serious challenge for storage and transmission, so an effective compression algorithm is essential. A classification-based lossy compression algorithm for hyperspectral images is proposed. The C-means algorithm first performs unsupervised spectral classification of the hyperspectral image. Guided by the classification map, an adaptive KLT (Karhunen-Loève transform) decorrelates the spectra of each class separately; a two-dimensional wavelet transform is then applied to each principal component. To obtain the best rate-distortion performance, the EBCOT (Embedded Block Coding with Optimized Truncation) algorithm encodes all the principal components jointly. Experimental results show that the lossy compression performance of the proposed algorithm is superior to that of other classical compression algorithms.

8.
Drawing on the widely used embedded zerotree wavelet (EZW) coding and set partitioning in hierarchical trees (SPIHT) algorithms, a lifting algorithm suitable for compressing the high-frequency part of an image wavelet transform is proposed. The notion of an interval of variation is introduced into the transformed coefficient matrix, and a suitable value is chosen to replace the values within each interval before coding and transmission. The low-frequency part of the transformed image is coded with DPCM, while the high-frequency part uses a fast compression algorithm improved from EZW and SPIHT, so that most of the energy of the original image can be retained in a short coding time, which benefits the compression and transmission of large images. Although the new algorithm loses a little fidelity compared with the two original algorithms, it greatly reduces computational complexity and shortens coding time. Experimental results show that the algorithm performs well.

9.
A Fast Hyperspectral Image Compression Algorithm Combining Spatial-Spectral Clustering   (Cited by 1: 0 self, 1 other)
Fast compression of hyperspectral images has become a research focus in hyperspectral remote sensing. To cope with the large data volume of hyperspectral images and the heavy computation required to compress them, a fast compression algorithm combining band clustering with principal component analysis (PCA) and spatial classification is proposed. The maximum-correlation band clustering (MCBC) algorithm first groups the bands; each group is then compressed with PCA; the compressed image is classified with the clustered signal-subspace projection (CSSP) algorithm; finally, within each class, the LBG (Linde-Buzo-Gray) algorithm performs fast vector-quantization coding of the hyperspectral image. Experiments at different compression ratios show that the proposed algorithm greatly reduces computational complexity while maintaining good reconstruction quality, achieving fast hyperspectral image compression.
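The band-clustering step might look like the following greedy sketch, which groups bands by correlation with a cluster representative. This is an assumption about the general idea only, not the MCBC algorithm itself; the threshold and the toy bands are illustrative:

```python
def correlation(x, y):
    """Pearson correlation coefficient of two equal-length bands."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = sum((a - mx) ** 2 for a in x) ** 0.5
    dy = sum((b - my) ** 2 for b in y) ** 0.5
    return num / (dx * dy)

def cluster_bands(bands, threshold=0.95):
    """Greedy clustering: a band joins the first cluster whose
    representative (its first member) it correlates with above threshold."""
    clusters = []
    for band in bands:
        for cl in clusters:
            if correlation(band, cl[0]) >= threshold:
                cl.append(band)
                break
        else:
            clusters.append([band])
    return clusters

bands = [
    [1.0, 2.0, 3.0, 4.0], [1.1, 2.0, 3.1, 4.0],   # highly correlated pair
    [4.0, 3.0, 2.0, 1.0],                          # anti-correlated outlier
]
groups = cluster_bands(bands)
```

Running PCA within each highly correlated group, rather than over all bands at once, is what cuts the computation while keeping most of the decorrelation benefit.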

10.
This paper proposes a three-dimensional SPIHT algorithm based on classified prediction and reports compression experiments on bands 1-7 of multispectral images. The image data first undergo a three-dimensional transform: a floating-point 9/7 wavelet removes correlation in the spatial domain, and classified prediction removes redundancy in the spectral domain. The residual image obtained from the classified prediction is then coded with three-dimensional SPIHT, while the codebook and index table produced by the classified prediction are losslessly compressed with Huffman coding; these three coded files are sent to the decoder for image reconstruction. Experiments show that the algorithm reconstructs images well.

11.
The authors carry out low bit-rate compression of multispectral images by means of Said and Pearlman's SPIHT algorithm, suitably modified to take into account the interband dependencies. Two techniques are proposed: in the first, a three-dimensional (3D) transform is taken (wavelet in the spatial domain, Karhunen-Loève in the spectral domain) and a simple 3D SPIHT is used; in the second, after taking a spatial wavelet transform, spectral vectors of pixels are vector quantized and a gain-driven SPIHT is used. Numerous experiments on two sample multispectral images show very good performance for both algorithms.

12.
A New Image Compression Coding Algorithm   (Cited by 2: 1 self, 1 other)
An improved image compression coding algorithm is presented. First, the three high-frequency coefficient subbands of the wavelet decomposition are preprocessed: the high-frequency part is transformed into spherical coordinates, reducing the correlation among coefficients within the same scale. On the premises of the wavelet and spherical-coordinate domains, a multiscale modulus-product is defined to control a shrinkage function applied to the high-frequency wavelet coefficients, removing the wavelet coefficients and noise that do not affect visual quality and reaching a high compression ratio. The low-frequency part of the wavelet transform is then coded separately with DPCM, and the high-frequency part in spherical coordinates is coded with an improved set partitioning in hierarchical trees (SPIHT) coder. To address the repeated scanning in SPIHT coding, a matrix of maximum pixels (MMP) is introduced, which effectively reduces the number of comparisons. Simulations show that the proposed algorithm achieves good coding efficiency.
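The spherical-coordinate preprocessing of the three high-frequency subbands can be sketched as a per-position change of coordinates: the co-located (HL, LH, HH) triple becomes one magnitude plus two angles. This is only a sketch of the coordinate change; the paper's exact convention may differ:

```python
from math import sqrt, atan2, sin, cos

def to_spherical(hl, lh, hh):
    """Map a co-located (HL, LH, HH) coefficient triple to spherical
    coordinates: one magnitude r plus two angles (theta, phi)."""
    r = sqrt(hl * hl + lh * lh + hh * hh)
    theta = atan2(sqrt(hl * hl + lh * lh), hh)
    phi = atan2(lh, hl)
    return r, theta, phi

def from_spherical(r, theta, phi):
    """Exact inverse of to_spherical."""
    return (r * sin(theta) * cos(phi),
            r * sin(theta) * sin(phi),
            r * cos(theta))

triple = (3.0, 4.0, 12.0)
r, t, p = to_spherical(*triple)
```

Concentrating the energy of the triple into the single magnitude r is what weakens the correlation among the three subbands before shrinkage and coding.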

13.
The set partitioning in hierarchical trees (SPIHT) algorithm is an efficient wavelet-based progressive image-compression technique, designed to minimize the mean-squared error (MSE) between the original and decoded imagery. However, the MSE-based distortion measure is not in general well correlated with image-recognition quality, especially at low bit rates. Specifically, low-amplitude wavelet coefficients that may be important for classification are given low priority by conventional SPIHT. In this paper, we use the kernel matching pursuits (KMP) method to autonomously estimate the importance of each wavelet subband for distinguishing between different textures, with textural segmentation first performed via a hidden Markov tree. Based on subband importance determined via KMP, we scale the wavelet coefficients prior to SPIHT coding, with the goal of minimizing a Lagrangian distortion based jointly on the MSE and classification error. For comparison we consider Bayes tree-structured vector quantization (B-TSVQ), also designed to obtain a tradeoff between MSE and classification error. The performances of the original SPIHT, the modified SPIHT, and B-TSVQ are compared.

14.
Underwater image compression has been the key technology for transmitting massive amounts of image data via an underwater acoustic channel with limited bandwidth. According to the characteristics of underwater color images, an efficient underwater image compression method has been developed. The new coding scheme employs a wavelet-based preprocessing method to remove the visual redundancy, and adopts a Wavelet Tree-based Wavelet Difference Reduction (WTWDR) algorithm to remove the spatial redundancy of underwater color images. Instead of scanning the whole transformed image like the WDR method, difference-reduction coding is applied to each significant wavelet tree in the proposed WTWDR algorithm, exploiting the correlation between the higher-level and lower-level subbands of a transformed image. The experimental results show that for underwater color images the proposed method outperforms both WDR and SPIHT at very low bit rates in terms of compression ratio and reconstructed quality, while for natural images it has performance similar to WDR and SPIHT. Hence, the proposed approach is especially suitable for underwater color image compression at very low bit rates.

15.
We present an implementable three-dimensional, terrain-adaptive, transform-based bandwidth compression technique for multispectral imagery. The algorithm exploits the inherent spectral and spatial correlations in the data. The compression technique is based on a Karhunen-Loève transformation for spectral decorrelation followed by the standard JPEG algorithm for coding the resulting spectrally decorrelated eigenimages. The algorithm is conveniently parameterized to accommodate reconstructed image fidelities ranging from near-lossless at about 5:1 CR to visually lossy beginning at about 30:1 CR. The novelty of this technique lies in its unique capability to adaptively vary the characteristics of the spectral correlation transformation as a function of the variation of the local terrain. The spectral and spatial modularity of the algorithm architecture allows the JPEG stage to be replaced by an alternate spatial coding procedure. The significant practical advantage of this approach is that it is based on the standard and highly developed JPEG compression technology.

16.
A new technique, called inter-band compensated prediction, for coding colour and multispectral images is presented. It is suitable for coding any spectral domain and can code colour and multispectral images with any number of bands. This technique is based on the same principles as the highly efficient motion-compensated prediction widely used in video coding. Thus, each band is predicted in the spectral direction by compensating for the differences in the neighbouring bands, and the prediction error is then coded spatially by another method. This is a forward adaptive prediction, and the information used for compensation is coded as side information along with the prediction error. Comparison of the coding results with state-of-the-art coding algorithms based on spectral transformations shows that this technique is very efficient and can even outperform them. In addition, compensation can be combined with any spatial coder, allowing lossless, lossy and scalable coding of any spectral content of the image. It also has the advantages of being simple to implement and to use with parallel architectures.
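The inter-band compensated prediction described above parallels motion search. A one-dimensional toy version is sketched below: slide a block from the current band over the reference band and keep the offset with minimum sum of absolute differences (SAD). Names and data are illustrative, not from the paper:

```python
def best_offset(ref_row, cur_block, search=2):
    """1-D 'band compensation' toy: try every offset in +/- search
    positions and return the one with minimum SAD against ref_row."""
    n = len(cur_block)
    best, best_sad = 0, float("inf")
    for off in range(-search, search + 1):
        start = search + off          # ref_row carries `search` margin cells
        sad = sum(abs(c - r)
                  for c, r in zip(cur_block, ref_row[start:start + n]))
        if sad < best_sad:
            best, best_sad = off, sad
    return best, best_sad

ref = [0, 5, 9, 5, 0, 0, 0]          # reference band (with search margins)
cur = [5, 9, 5]                      # same feature, displaced in this band
off, sad = best_offset(ref, cur)
```

The chosen offset is the side information; only the (here zero) residual after compensation would need spatial coding.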

17.
Hyperspectral imagery: clutter adaptation in anomaly detection   (Cited by 5: 0 self, 5 other)
Hyperspectral sensors are passive sensors that simultaneously record images for hundreds of contiguous and narrowly spaced regions of the electromagnetic spectrum. Each image corresponds to the same ground scene, thus creating a cube of images that contains both spatial and spectral information about the objects and backgrounds in the scene. In this paper, we present an adaptive anomaly detector designed assuming that the background clutter in the hyperspectral imagery is a three-dimensional Gauss-Markov random field. This model leads to an efficient and effective algorithm for discriminating man-made objects (the anomalies) in real hyperspectral imagery. The major focus of the paper is on the adaptive stage of the detector, i.e., the estimation of the Gauss-Markov random field parameters. We develop three methods: maximum likelihood, least squares, and approximate maximum likelihood. We study these approaches along three directions: estimation error performance, computational cost, and detection performance. In terms of estimation error, we derive the Cramér-Rao bounds and carry out Monte Carlo simulation studies that show that the three estimation procedures have similar performance when the fields are highly correlated, as is often the case with real hyperspectral imagery. The approximate maximum-likelihood method has a clear advantage from the computational point of view. Finally, we test the adaptive anomaly detector extensively with real hyperspectral imagery, incorporating either the least squares or the approximate maximum-likelihood estimator. Its performance compares very favorably with that of the RX algorithm.
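The adaptive anomaly-detection idea can be caricatured with a diagonal-covariance RX-style score (squared Mahalanobis distance from the scene mean). This simplification ignores the paper's Gauss-Markov model entirely and is only a sketch with made-up data:

```python
def rx_scores(pixels):
    """RX-style anomaly score with a diagonal background covariance:
    squared Mahalanobis distance of each pixel from the global mean."""
    n = len(pixels)
    dims = len(pixels[0])
    mean = [sum(p[d] for p in pixels) / n for d in range(dims)]
    var = [sum((p[d] - mean[d]) ** 2 for p in pixels) / n for d in range(dims)]
    return [sum((p[d] - mean[d]) ** 2 / var[d] for d in range(dims))
            for p in pixels]

background = [(1.0, 2.0), (1.1, 2.1), (0.9, 1.9), (1.0, 2.1), (1.1, 1.9)]
scene = background + [(5.0, 7.0)]    # one man-made 'anomaly' pixel
scores = rx_scores(scene)
```

A full RX detector would use the inverse of the full background covariance (and the paper estimates Gauss-Markov field parameters instead), but the thresholding of a distance-from-background score is the same.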


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号