A total of 17 similar documents were retrieved; search time: 187 ms.
1.
2.
Information analysis and data compression of multispectral images. Total citations: 1 (self-citations: 1, others: 0)
Conditional entropy is first used to analyze the information redundancy of imaging-spectrometer multispectral images along the spatial and spectral dimensions. The results show that such images are strongly correlated in the spatial dimension, while along the spectral dimension the image statistics are non-stationary and the correlation is somewhat weaker. An improved JPEG algorithm, I-JPEG, is then proposed by modifying the standard JPEG quantization table; it preserves image edge information well while raising the compression ratio. Finally, an I-JPEG/DPCM algorithm is proposed: building on I-JPEG, it applies lossless DPCM, which has good locality, to remove the correlation along the spectral dimension, further improving compression performance.
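The spectral-dimension DPCM step described above can be sketched as simple band-to-band differencing. This is a minimal illustration, not the authors' I-JPEG/DPCM implementation; the function names and the band-first axis layout are assumptions:

```python
import numpy as np

def spectral_dpcm_encode(cube):
    """Lossless DPCM along the spectral axis: keep band 0 as-is,
    then store each band's difference from the previous band.
    cube: (bands, height, width) integer data."""
    cube = cube.astype(np.int32)
    residual = cube.copy()
    residual[1:] = cube[1:] - cube[:-1]
    return residual

def spectral_dpcm_decode(residual):
    """Invert the band-to-band differencing by cumulative summation."""
    return np.cumsum(residual, axis=0, dtype=np.int32)
```

For spectrally correlated cubes the residual bands have a much smaller dynamic range than the originals, which is what the subsequent entropy-coding stage exploits.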
3.
A hyperspectral image compression method based on interband decorrelation. Total citations: 7 (self-citations: 1, others: 6)
Considering the characteristics of hyperspectral images and the practical requirements of hardware implementation, a wavelet-based hyperspectral image compression algorithm using forward-prediction interband decorrelation is proposed. Image matching and interband decorrelation remove the redundancy between hyperspectral bands; the residual images are then compressed with a fast wavelet-based bit-plane coder combined with adaptive arithmetic coding, and the output bitstream is controlled by a rate-distortion criterion, achieving high-fidelity compression of hyperspectral images. Experiments confirm the effectiveness of the scheme: the fast bit-plane coder with adaptive arithmetic coding is faster than SPIHT and is easy to implement in hardware.
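The bit-plane pass at the core of such a coder can be sketched as below. This is a generic bit-plane split of non-negative integer coefficients under illustrative names; the adaptive arithmetic-coding stage and the rate-control loop are omitted:

```python
import numpy as np

def to_bit_planes(coeffs, n_planes):
    """Split non-negative integer coefficients into bit planes,
    most significant plane first (the order a bit-plane coder
    scans them, so truncating the list degrades gracefully)."""
    return [(coeffs >> p) & 1 for p in range(n_planes - 1, -1, -1)]

def from_bit_planes(planes):
    """Reassemble coefficients from MSB-first bit planes."""
    out = np.zeros_like(planes[0])
    for plane in planes:
        out = (out << 1) | plane
    return out
```

Dropping trailing (least significant) planes from the list is exactly the truncation a rate-distortion controller performs on the embedded bitstream.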
4.
5.
6.
7.
The huge data volume of hyperspectral images poses great challenges for storage and transmission, so effective compression algorithms are required. A classification-based lossy compression algorithm for hyperspectral images is proposed. First, the C-means algorithm performs unsupervised spectral classification of the hyperspectral image. Guided by the classification map, an adaptive KLT (Karhunen-Loeve transform) is applied to each class separately for spectral decorrelation, and each principal component then undergoes a two-dimensional wavelet transform. To obtain the best rate-distortion performance, the EBCOT (Embedded Block Coding with Optimized Truncation) algorithm performs joint rate-distortion coding of all principal components. Experimental results show that the proposed algorithm outperforms other classical algorithms in lossy compression.
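The class-wise adaptive KLT can be sketched as a per-class PCA projection. This is a hedged illustration assuming pixel spectra as rows and integer class labels from a prior clustering step; the C-means classification, wavelet transform, and EBCOT stages are omitted, and all names are illustrative:

```python
import numpy as np

def classwise_klt(spectra, labels, n_components):
    """Fit a KLT (PCA) per class and project each pixel's spectrum
    onto that class's top principal directions.
    spectra: (N, B) pixel spectra; labels: (N,) class indices."""
    coded = np.zeros((spectra.shape[0], n_components))
    models = {}
    for c in np.unique(labels):
        idx = labels == c
        x = spectra[idx]
        mean = x.mean(axis=0)
        cov = np.cov(x, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
        basis = eigvecs[:, ::-1][:, :n_components]  # keep top components
        coded[idx] = (x - mean) @ basis
        models[c] = (mean, basis)
    return coded, models

def classwise_klt_inverse(coded, labels, models):
    """Reconstruct spectra from the per-class KLT coefficients."""
    n_bands = next(iter(models.values()))[0].size
    rec = np.zeros((coded.shape[0], n_bands))
    for c, (mean, basis) in models.items():
        idx = labels == c
        rec[idx] = coded[idx] @ basis.T + mean
    return rec
```

With `n_components` equal to the band count the transform is orthonormal and lossless; compression comes from keeping fewer components (and from coding the retained ones).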
8.
Building on the widely used embedded zerotree wavelet (EZW) and set partitioning in hierarchical trees (SPIHT) algorithms, a lifting algorithm suited to compressing the high-frequency part of an image wavelet transform is proposed. The concept of interval variation is introduced into the transformed matrix: a suitable value is chosen to replace the values within each interval before coding and transmission. After the wavelet transform, the low-frequency part is coded with DPCM, while the high-frequency part uses a fast compression algorithm improved from EZW and SPIHT, so that most of the energy of the original image is retained within a short coding time, which is very advantageous for compressing and transmitting large images. Although the new algorithm loses some fidelity compared with the two original algorithms, it greatly reduces computational complexity and shortens coding time. Experimental results show that the algorithm performs well.
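The interval-substitution step, replacing every value in an interval with a single representative, behaves like a uniform quantizer on the high-frequency coefficients. A minimal sketch under that interpretation (the step size and function name are assumptions, not the paper's parameters):

```python
import numpy as np

def interval_substitute(highband, step):
    """Replace every coefficient lying in [k*step, (k+1)*step)
    with that interval's midpoint, so each interval carries a
    single representative value to the coder."""
    k = np.floor(highband / step)
    return (k + 0.5) * step
```

The substitution bounds the per-coefficient error by half the step size, which is what keeps the fidelity loss controlled.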
9.
A fast hyperspectral image compression algorithm combining spatial-spectral clustering. Total citations: 1 (self-citations: 0, others: 1)
Fast compression of hyperspectral images has become a research focus in hyperspectral remote sensing. To address the large data volume of hyperspectral images and the heavy computation required for compression, a fast compression algorithm combining band clustering with principal component analysis (PCA) and spatial classification is proposed. First, the maximum-correlation band clustering (MCBC) algorithm groups the bands into clusters, and each cluster is compressed with PCA. The compressed image is then classified using the clustered signal subspace projection (CSSP) algorithm, and finally the LBG (Linde-Buzo-Gray) algorithm performs fast vector-quantization coding within each class. Experiments at different compression ratios show that the proposed algorithm greatly reduces computational complexity while maintaining good reconstruction quality, achieving fast compression of hyperspectral images.
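The LBG vector-quantization stage can be sketched as a k-means-style alternation between nearest-codeword assignment and centroid update. This is a generic LBG loop under assumed names and a random codebook initialization, not the paper's implementation:

```python
import numpy as np

def lbg_codebook(vectors, n_codes, n_iter=20, seed=0):
    """Train a VQ codebook by the LBG iteration: assign each vector
    to its nearest codeword, then move each codeword to the centroid
    of its assigned vectors. vectors: (N, D) array."""
    rng = np.random.default_rng(seed)
    init = rng.choice(len(vectors), n_codes, replace=False)
    codebook = vectors[init].astype(float)
    assignments = np.zeros(len(vectors), dtype=int)
    for _ in range(n_iter):
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        assignments = dists.argmin(axis=1)
        for k in range(n_codes):
            members = vectors[assignments == k]
            if len(members):  # leave empty cells at their old position
                codebook[k] = members.mean(axis=0)
    return codebook, assignments
```

Each pixel is then coded as the index of its assigned codeword, so the rate is set by the codebook size rather than the spectral dimensionality.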
10.
11.
P. Luigi Dragotti, G. Poggi, A. R. P. Ragozini. IEEE Transactions on Geoscience and Remote Sensing, 2000, 38(1): 416-428
The authors carry out low bit-rate compression of multispectral images by means of Said and Pearlman's SPIHT algorithm, suitably modified to take into account the interband dependencies. Two techniques are proposed: in the first, a three-dimensional (3D) transform is taken (wavelet in the spatial domain, Karhunen-Loeve in the spectral domain) and a simple 3D SPIHT is used; in the second, after taking a spatial wavelet transform, spectral vectors of pixels are vector quantized and a gain-driven SPIHT is used. Numerous experiments on two sample multispectral images show very good performance for both algorithms.
12.
A new image compression coding algorithm. Total citations: 2 (self-citations: 1, others: 1)
An improved image compression coding algorithm is presented. First, the three high-frequency subbands of the wavelet decomposition are preprocessed: the high-frequency part is converted to spherical coordinates, reducing the correlation between coefficients within the same scale. Based on premises in both the wavelet and spherical-coordinate domains, the concept of a multiscale modulus product is defined and used to control a shrinkage function applied to the high-frequency wavelet coefficients. This removes wavelet coefficients that do not affect visual quality, together with noise, yielding a higher compression ratio. The low-frequency part of the wavelet transform is then coded separately with DPCM, while the high-frequency part in spherical coordinates is coded with an improved set partitioning in hierarchical trees (SPIHT) algorithm. To address the repeated scanning in SPIHT coding, a matrix of maximum pixels (MMP) is introduced, which effectively reduces the number of comparisons. Simulations show that the proposed algorithm achieves good coding efficiency.
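The spherical-coordinate preprocessing of the three high-frequency subbands can be sketched as a per-coefficient change of coordinates. The convention below (HH as the polar axis) is an assumption rather than the authors' exact mapping, and the shrinkage and modulus-product steps are omitted:

```python
import numpy as np

def subbands_to_spherical(hl, lh, hh):
    """Map the (HL, LH, HH) coefficients at each position to
    spherical coordinates: radius r, polar angle theta, azimuth phi."""
    r = np.sqrt(hl**2 + lh**2 + hh**2)
    ratio = np.divide(hh, r, out=np.zeros_like(r), where=r > 0)
    theta = np.arccos(np.clip(ratio, -1.0, 1.0))
    phi = np.arctan2(lh, hl)
    return r, theta, phi

def spherical_to_subbands(r, theta, phi):
    """Invert the mapping back to the three subbands."""
    hl = r * np.sin(theta) * np.cos(phi)
    lh = r * np.sin(theta) * np.sin(phi)
    hh = r * np.cos(theta)
    return hl, lh, hh
```

The radius concentrates the joint energy of the three subbands into one channel, which is the property the shrinkage stage then operates on.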
13.
A modified SPIHT algorithm for image coding with a joint MSE and classification distortion measure. Total citations: 1 (self-citations: 0, others: 1)
The set partitioning in hierarchical trees (SPIHT) algorithm is an efficient wavelet-based progressive image-compression technique, designed to minimize the mean-squared error (MSE) between the original and decoded imagery. However, the MSE-based distortion measure is not in general well correlated with image-recognition quality, especially at low bit rates. Specifically, low-amplitude wavelet coefficients that may be important for classification are given low priority by conventional SPIHT. In this paper, we use the kernel matching pursuits (KMP) method to autonomously estimate the importance of each wavelet subband for distinguishing between different textures, with textural segmentation first performed via a hidden Markov tree. Based on subband importance determined via KMP, we scale the wavelet coefficients prior to SPIHT coding, with the goal of minimizing a Lagrangian distortion based jointly on the MSE and classification error. For comparison we consider Bayes tree-structured vector quantization (B-TSVQ), also designed to obtain a tradeoff between MSE and classification error. The performances of the original SPIHT, the modified SPIHT, and B-TSVQ are compared.
14.
Qing-Zhong Li, Wen-Jin Wang. Journal of Visual Communication and Image Representation, 2010, 21(7): 762-769
Underwater image compression is a key technology for transmitting massive amounts of image data over underwater acoustic channels with limited bandwidth. Based on the characteristics of underwater color images, an efficient underwater image compression method has been developed. The new coding scheme employs a wavelet-based preprocessing method to remove visual redundancy and adopts a Wavelet Tree-based Wavelet Difference Reduction (WTWDR) algorithm to remove the spatial redundancy of underwater color images. Instead of scanning the whole transformed image as the WDR method does, the proposed WTWDR algorithm applies difference-reduction coding to each significant wavelet tree, exploiting the correlation between the higher- and lower-level subbands of a transformed image. The experimental results show that for underwater color images the proposed method outperforms both WDR and SPIHT at very low bit rates in terms of compression ratio and reconstructed quality, while for natural images it performs similarly to WDR and SPIHT. Hence, the proposed approach is especially suitable for underwater color image compression at very low bit rates.
15.
We present an implementable three-dimensional, terrain-adaptive, transform-based bandwidth compression technique for multispectral imagery. The algorithm exploits the inherent spectral and spatial correlations in the data. The compression technique is based on a Karhunen-Loeve transformation for spectral decorrelation, followed by the standard JPEG algorithm for coding the resulting spectrally decorrelated eigenimages. The algorithm is conveniently parameterized to accommodate reconstructed image fidelities ranging from near-lossless at about 5:1 CR to visually lossy beginning at about 30:1 CR. The novelty of this technique lies in its unique capability to adaptively vary the characteristics of the spectral correlation transformation as a function of the variation of the local terrain. The spectral and spatial modularity of the algorithm architecture allows the JPEG stage to be replaced by an alternate spatial coding procedure. The significant practical advantage of this proposed approach is that it is based on the standard and highly developed JPEG compression technology.
16.
S. Benierbah, M. Khamadja. IEE Proceedings - Vision, Image and Signal Processing, 2006, 153(2): 237-243
A new technique, called inter-band compensated prediction, for coding colour and multispectral images is presented. It is suitable for coding any spectral domain and can code colour and multispectral images with any number of bands. The technique is based on the same principles as the highly efficient motion-compensated prediction widely used in video coding: each band is predicted in the spectral direction by compensating for differences relative to neighbouring bands, and the prediction error is then coded spatially by another method. This is a forward adaptive prediction, and the information used for compensation is coded as side information together with the prediction error. Comparison of the coding results with state-of-the-art coding algorithms based on spectral transformations shows that this technique is very efficient and can even outperform them. In addition, compensation can be combined with any spatial coder, allowing lossless, lossy and scalable coding of any spectral content of the image. It also has the advantage of being simple to implement and to use with parallel architectures.
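Inter-band compensated prediction can be sketched like block-based motion estimation, with the search running between adjacent spectral bands rather than video frames. The block size, search range, SAD criterion, and all names below are illustrative assumptions, and image dimensions are assumed divisible by the block size:

```python
import numpy as np

def predict_band(ref_band, cur_band, block=8, search=4):
    """Predict cur_band from ref_band: for each block, find the
    best-matching displaced block in the reference band (sum of
    absolute differences) and copy it. The per-block displacement
    is the side information sent to the decoder."""
    h, w = cur_band.shape
    pred = np.zeros_like(cur_band, dtype=float)
    side_info = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            target = cur_band[y:y + block, x:x + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue  # candidate block must stay inside the band
                    sad = np.abs(ref_band[yy:yy + block, xx:xx + block] - target).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            dy, dx = best
            pred[y:y + block, x:x + block] = ref_band[y + dy:y + dy + block,
                                                      x + dx:x + dx + block]
            side_info.append(best)
    return pred, side_info
```

Only the residual `cur_band - pred` and the displacement list need to be coded, which is where the interband redundancy is removed.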
17.
Hyperspectral imagery: clutter adaptation in anomaly detection. Total citations: 5 (self-citations: 0, others: 5)
S. M. Schweizer, J. M. F. Moura. IEEE Transactions on Information Theory, 2000, 46(5): 1855-1871
Hyperspectral sensors are passive sensors that simultaneously record images for hundreds of contiguous and narrowly spaced regions of the electromagnetic spectrum. Each image corresponds to the same ground scene, thus creating a cube of images that contain both spatial and spectral information about the objects and backgrounds in the scene. In this paper, we present an adaptive anomaly detector designed assuming that the background clutter in the hyperspectral imagery is a three-dimensional Gauss-Markov random field. This model leads to an efficient and effective algorithm for discriminating man-made objects (the anomalies) in real hyperspectral imagery. The major focus of the paper is on the adaptive stage of the detector, i.e., the estimation of the Gauss-Markov random field parameters. We develop three methods: maximum likelihood, least squares, and approximate maximum likelihood. We study these approaches along three directions: estimation error performance, computational cost, and detection performance. In terms of estimation error, we derive the Cramer-Rao bounds and carry out Monte Carlo simulation studies that show that the three estimation procedures have similar performance when the fields are highly correlated, as is often the case with real hyperspectral imagery. The approximate maximum-likelihood method has a clear advantage from the computational point of view. Finally, we test extensively with real hyperspectral imagery the adaptive anomaly detector incorporating either the least squares or the approximate maximum-likelihood estimators. Its performance compares very favorably with that of the RX algorithm.
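The RX baseline mentioned at the end can be sketched as a global Mahalanobis-distance score. This is the classic global RX detector, not the authors' GMRF-based method; the band-first cube layout and the function name are assumptions:

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector: score each pixel spectrum by its
    Mahalanobis distance from the scene mean under the sample
    covariance. cube: (bands, height, width)."""
    bands, h, w = cube.shape
    x = cube.reshape(bands, -1).T          # one row per pixel spectrum
    mean = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    cov_inv = np.linalg.inv(cov)
    centered = x - mean
    # quadratic form d_i^T C^{-1} d_i for every pixel i at once
    scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return scores.reshape(h, w)
```

Pixels whose spectra deviate strongly from the background statistics receive high scores, so anomalies stand out as maxima of the score map.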