Similar Articles
20 similar articles found (search time: 0 ms)
1.
Adaptive vector quantization (AVQ) is a recently proposed approach for electrocardiogram (ECG) compression. The adaptability of the approach can be used to control the quality of reconstructed signals. However, like most other ECG compression methods, AVQ deals only with single-channel ECG, and for multichannel (MC) ECG, coding the signals channel by channel is inefficient because the correlation across channels is not exploited. To exploit this correlation, an MC version of AVQ is proposed. In the proposed approach, the AVQ index from each channel is collected to form a new input vector, which is then vector quantized adaptively using one additional codebook called the index codebook. Both the MIT/BIH database and a clinical Holter database are tested. The experimental results show that, for exactly the same quality of reconstructed signals, MC-AVQ achieves a lower bit rate than single-channel AVQ. A theoretical analysis supporting this result is also presented. For the same, relatively good visual quality, the average compressed data rate per channel on the MIT/BIH data is reduced from 293.5 b/s with single-channel AVQ to 238.2 b/s with MC-AVQ.
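A minimal sketch of the cross-channel index-coding idea described in the abstract above, not the authors' algorithm: the per-channel quantizer indices are grouped into one tuple per block position, and an adaptive "index codebook" lets previously seen tuples be sent as a single short index. The replenishment rule, names, and sizes are assumptions.

def encode_index_vectors(channel_indices, max_codebook_size=256):
    """channel_indices: one list of quantizer indices per channel, all the same length."""
    index_codebook = {}                      # tuple of per-channel indices -> short code
    symbols = []
    for block in zip(*channel_indices):      # one index per channel -> N-tuple
        if block in index_codebook:
            symbols.append(("hit", index_codebook[block]))   # send one short index
        else:
            symbols.append(("miss", block))                  # send the raw tuple
            if len(index_codebook) < max_codebook_size:
                index_codebook[block] = len(index_codebook)  # replenish the codebook
    return symbols

# Example: 3-channel ECG, per-channel AVQ indices for five consecutive blocks.
print(encode_index_vectors([[4, 4, 7, 4, 4], [1, 1, 2, 1, 1], [9, 9, 9, 9, 9]]))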

2.
This paper proposes a new vector quantization based (VQ-based) technique for very low bit rate encoding of multispectral images. We rely on the assumption that the shape of a generic spatial block does not change significantly from band to band, as is the case for high spectral-resolution imagery. Under this hypothesis, a three-dimensional (3-D) block, composed of homologous two-dimensional (2-D) blocks drawn from several bands, can be accurately quantized as the Kronecker product of a spatial-shape codevector and a spectral-gain codevector, with significant computation savings with respect to straight VQ. An even higher complexity reduction is obtained by representing each 3-D block in its minimum-square-error Kronecker-product form and by quantizing the component shape and gain vectors. For the block sizes considered, this encoding strategy is over 100 times more computationally efficient than unconstrained VQ, and over ten times more computationally efficient than direct gain-shape VQ. The proposed technique is obviously suboptimal with respect to VQ, but the huge complexity reduction allows one to use much larger blocks than usual and to better exploit both the statistical and psychovisual redundancy of the image. Numerical experiments show fully satisfactory results whenever the shape-invariance hypothesis turns out to be accurate enough, as in the case of hyperspectral images. In particular, for a given level of complexity and image quality, the compression ratio is up to five times larger than that provided by ordinary VQ, and also larger than that provided by other techniques specifically designed for multispectral image coding.
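A hedged sketch of the shape/gain factorization described above: the 3-D block (bands x rows x cols) is reshaped into a matrix, and its minimum-square-error rank-one factor pair, a spectral-gain vector times a spatial-shape block, is taken from the leading SVD component. The quantizers themselves are omitted; all names are illustrative.

import numpy as np

def shape_gain_factor(block_3d):
    """block_3d: array of shape (n_bands, h, w); returns (gain, shape, residual norm)."""
    n_bands, h, w = block_3d.shape
    M = block_3d.reshape(n_bands, h * w)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    gain = U[:, 0] * s[0]                  # spectral-gain vector, one entry per band
    shape = Vt[0].reshape(h, w)            # unit-norm spatial-shape block
    approx = np.outer(gain, Vt[0]).reshape(n_bands, h, w)
    return gain, shape, float(np.linalg.norm(block_3d - approx))

rng = np.random.default_rng(0)
base = rng.standard_normal((8, 8))                            # one spatial shape
block = np.stack([g * base for g in np.linspace(1, 3, 32)])   # 32 bands, same shape
gain, shape, err = shape_gain_factor(block)
print(err)   # ~0 here, because the shape-invariance hypothesis holds exactly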

3.
In this paper, we propose a novel vector quantizer (VQ) in the wavelet domain for the compression of electrocardiogram (ECG) signals. A vector called a tree vector (TV) is first formed in a novel structure, in which the wavelet-transform (WT) coefficients of the vector are arranged in the order of a hierarchical tree. The TVs extracted from the various WT subbands are then collected in one single codebook. This is an advantage over traditional WT-VQ methods, where multiple codebooks are needed and are usually designed separately because the numerical ranges of coefficient values differ considerably across WT subbands. Finally, a distortion-constrained codebook replenishment mechanism is incorporated into the VQ, in which codevectors can be updated dynamically, to guarantee reliable quality of the reconstructed ECG waveforms. With the proposed approach, both the visual quality and the objective quality in terms of the percent root-mean-square difference (PRD) are excellent even at very low bit rates. For all 48 records of Lead II ECG data in the MIT/BIH database, an average PRD of 7.3% at 146 b/s is obtained. For the same test data, the proposed method outperforms many recently published ones, including the best of them, set partitioning in hierarchical trees.
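A minimal sketch of the tree-vector (TV) idea described above, using a plain Haar decomposition so that subband lengths halve exactly; the paper's wavelet filters, subband set, and exact vector layout may differ, so every detail here is an assumption.

import numpy as np

def haar_wavedec(x, levels):
    details, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2))   # detail subband at this level
        a = (even + odd) / np.sqrt(2)               # approximation passed to the next level
    return a, details[::-1]                         # coarse approximation, [D_coarsest ... D_finest]

def tree_vectors(x, levels=3):
    approx, details = haar_wavedec(x, levels)
    tvs = []
    for k in range(len(approx)):                    # one hierarchical tree per coarse coefficient
        tv, idx = [approx[k]], [k]
        for d in details:                           # walk from the coarsest to the finest detail subband
            tv.extend(d[i] for i in idx)            # current generation of descendants
            idx = [2 * i + j for i in idx for j in (0, 1)]   # children at the next finer level
        tvs.append(np.array(tv))
    return tvs

tvs = tree_vectors(np.sin(np.linspace(0, 8 * np.pi, 64)), levels=3)
print(len(tvs), tvs[0].shape)   # 8 tree vectors, each of length 1 + 1 + 2 + 4 = 8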

4.
In a prior work, a wavelet-based vector quantization (VQ) approach was proposed to perform lossy compression of electrocardiogram (ECG) signals. In this paper, we investigate and fix its coding-inefficiency problem in lossless compression and extend it to allow both lossy and lossless compression in a unified coding framework. The well-known 9/7 filters and 5/3 integer filters are used to implement the wavelet transform (WT) for lossy and lossless compression, respectively. The codebook-updating mechanism, originally designed for lossy compression, is modified to support lossless compression as well. In addition, a new and cost-effective coding strategy is proposed to enhance the coding efficiency of set partitioning in hierarchical trees (SPIHT) for the less significant bits of a WT coefficient. ECG records from the MIT/BIH Arrhythmia and European ST-T databases are selected as test data. In terms of coding efficiency for lossless compression, experimental results show that the proposed codec improves on the direct SPIHT approach and on the prior work by about 33% and 26%, respectively.
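Since the abstract relies on the reversible 5/3 integer wavelet for the lossless path, here is a self-contained sketch of one level of the standard 5/3 integer lifting (forward and inverse) on an even-length 1-D signal with mirrored boundaries; it illustrates the reversibility only, not the authors' full codec.

import numpy as np

def lift53_forward(x):
    x = np.asarray(x, dtype=np.int64)
    xe, xo = x[0::2], x[1::2]                     # even / odd samples (even length assumed)
    xe_next = np.append(xe[1:], xe[-1])           # mirror extension on the right
    d = xo - (xe + xe_next) // 2                  # integer high-pass (detail)
    d_prev = np.insert(d[:-1], 0, d[0])           # mirror extension on the left
    s = xe + (d_prev + d + 2) // 4                # integer low-pass (approximation)
    return s, d

def lift53_inverse(s, d):
    d_prev = np.insert(d[:-1], 0, d[0])
    xe = s - (d_prev + d + 2) // 4                # undo the update step
    xe_next = np.append(xe[1:], xe[-1])
    xo = d + (xe + xe_next) // 2                  # undo the prediction step
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = xe, xo
    return x

sig = np.random.default_rng(5).integers(0, 1024, 64)
assert np.array_equal(lift53_inverse(*lift53_forward(sig)), sig)   # perfectly reversible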

5.
In this paper, we present a new image compression scheme that exploits the VQ technique in a hierarchical nonlinear pyramid structure (NPVQ). We use multistage median filters (MMF) to build the image pyramids. Image pyramids generated by MMF preserve details better than those generated by Burt's kernel, and it is shown that MMF effectively decorrelates the difference pyramids, resulting in a smaller first-order entropy. Our simulations on natural images show that NPVQ yields a higher SNR as well as better image quality than the corresponding Laplacian-pyramid scheme (LPVQ). The NPVQ scheme is also well suited to progressive image transmission.
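A hedged sketch of a nonlinear pyramid built with median filtering; the paper uses multistage median filters (MMF), whereas a plain square-window median and nearest-neighbour upsampling are used below purely for illustration. Each difference level would then be vector quantized.

import numpy as np
from scipy.ndimage import median_filter

def median_pyramid(image, levels=3, window=3):
    levels_list, differences = [np.asarray(image, dtype=float)], []
    for _ in range(levels):
        cur = levels_list[-1]
        decimated = median_filter(cur, size=window)[::2, ::2]               # next (coarser) level
        upsampled = np.repeat(np.repeat(decimated, 2, axis=0), 2, axis=1)
        differences.append(cur - upsampled[:cur.shape[0], :cur.shape[1]])   # detail to be VQ-coded
        levels_list.append(decimated)
    return levels_list[-1], differences        # coarsest level plus the difference pyramid

rng = np.random.default_rng(1)
top, diffs = median_pyramid(rng.integers(0, 256, (64, 64)).astype(float))
print(top.shape, [d.shape for d in diffs])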

6.
Hyperspectral data compression using a fast vector quantization algorithm   (cited 4 times: 0 self-citations, 4 by others)
A fast vector quantization algorithm for data compression of hyperspectral imagery is proposed in this paper. It exploits the fact that, in the full search of the generalized Lloyd algorithm (GLA), a training vector does not require a search for the minimum-distance partition if its distance to its current partition has improved in the current iteration compared with the previous iteration. The proposed method has the advantage of being simple, produces large savings in computation time, and yields compression fidelity as good as the GLA. Four hyperspectral data cubes covering a wide variety of scene types were tested. The loss of spectral information due to compression was evaluated using the spectral angle mapper and a remote sensing application.
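A hedged sketch of the search-skipping rule described above: in each iteration a training vector keeps its previous partition, with no full nearest-neighbour search, whenever its distance to that partition's updated centroid has decreased since the last iteration. Codebook initialization and all sizes are assumptions.

import numpy as np

def fast_gla(train, n_codevectors=16, n_iters=10, seed=0):
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), n_codevectors, replace=False)].copy()
    assign = np.zeros(len(train), dtype=int)
    prev_dist = np.zeros(len(train))               # forces a full search in the first iteration
    for _ in range(n_iters):
        full_searches = 0
        for i, x in enumerate(train):
            d_same = float(np.sum((x - codebook[assign[i]]) ** 2))
            if d_same >= prev_dist[i]:             # distance did not improve: search all codevectors
                d_all = np.sum((codebook - x) ** 2, axis=1)
                assign[i], d_same = int(np.argmin(d_all)), float(d_all.min())
                full_searches += 1
            prev_dist[i] = d_same
        for k in range(n_codevectors):             # usual GLA centroid update
            members = train[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
        print("full searches this iteration:", full_searches)
    return codebook

fast_gla(np.random.default_rng(2).standard_normal((500, 8)))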

7.
The gold washing (GW) adaptive vector quantization (AVQ) scheme (GW-AVQ) is a relatively new scheme for data compression. The adaptive nature of the algorithm provides robustness for a wide variety of signals. However, the performance of GW-AVQ depends strongly on a preset parameter called the distortion threshold (dth), which must be determined by experience or by trial and error. We propose an algorithm that allows an initial dth to be assigned arbitrarily and then progresses automatically toward a desired dth according to a specified quality criterion, such as the percent root-mean-square difference (PRD) for electrocardiogram (ECG) signals. A theoretical foundation for the algorithm is also presented. The algorithm is particularly useful when multiple GW-AVQ codebooks, and thus multiple dth's, are required in a subband coding framework. Four sets of ECG data with entirely different characteristics were selected from the MIT/BIH database to verify the proposed algorithm. Both the direct GW-AVQ and a wavelet-based GW-AVQ are tested. The results show that a user-specified PRD can always be reached regardless of the ECG waveform, the initial choice of dth, or whether a wavelet transform is used in conjunction with GW-AVQ. An average PRD of 6% at a compressed data rate of 410 bits/s is obtained with excellent visual quality.
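A minimal illustration of the threshold-adaptation idea described above: the distortion threshold dth is nudged after each coded segment so that the running PRD approaches a user-specified target. The simple proportional update below is an assumption for illustration, not the authors' rule.

def update_dth(dth, measured_prd, target_prd, gain=0.25, dth_min=1e-6):
    """Raise dth when quality is better than required, lower it when the PRD is too high."""
    dth *= 1.0 + gain * (target_prd - measured_prd) / target_prd
    return max(dth, dth_min)

dth = 0.05                                 # arbitrary initial threshold
for prd in [12.0, 9.5, 7.8, 6.4, 6.1]:     # running PRD measured after successive segments
    dth = update_dth(dth, prd, target_prd=6.0)
    print(round(dth, 4))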

8.
Vector quantization for compression of multichannel ECG   (cited 2 times: 0 self-citations, 2 by others)
We propose a scheme based on vector quantization (VQ) for the data compression of multichannel ECG waveforms. The N-channel ECG is first coded using m-AZTEC, a new multichannel extension of the AZTEC algorithm. As in AZTEC, the waveform is approximated using only lines and slopes; in m-AZTEC, however, the N channels are coded simultaneously into a sequence of (N + 1)-dimensional vectors, thus exploiting the correlation that exists across channels in the AZTEC duration parameter. Classified vector quantization (CVQ) of the m-AZTEC output is then performed to exploit the correlation in the other AZTEC parameter, the value parameter. CVQ preserves the waveform morphology by treating the lines and slopes as two perceptually distinct classes. Both m-AZTEC and CVQ provide data compression, and their performance improves as the number of channels increases. Moreover, the final output differs little from the AZTEC output and hence ought to enjoy the same acceptability.

9.
First, a simple and practical rectangular transform is presented, and the rapidly developing vector quantization technique is introduced. We then combine the rectangular transform with vector quantization for image data compression. The combination reduces the dimensionality of the vectors to be coded, so the codebook size can reasonably be reduced. This method lowers the computational complexity and speeds up the vector coding process. Experiments on an image processing system show that the method is very effective for image data compression.

10.
Combining fractal image compression and vector quantization   (cited 7 times: 0 self-citations, 7 by others)
In fractal image compression, the code is an efficient binary representation of a contractive mapping whose unique fixed point approximates the original image. The mapping is typically composed of affine transformations, each approximating a block of the image by another block (called a domain block) selected from the same image. The search for a suitable domain block is time-consuming. Moreover, the rate-distortion performance of most fractal image coders is not satisfactory. We show how a few fixed vectors, designed from a set of training images by a clustering algorithm, accelerate the search for the domain blocks and improve both the rate-distortion performance and the decoding speed of a pure fractal coder when they are used as a supplementary vector quantization codebook. We implemented two quadtree-based schemes: a fast top-down heuristic technique and one optimized with a Lagrange multiplier method. For the 8 bits per pixel (bpp) luminance part of the 512×512 Lena image, our best scheme achieved a peak signal-to-noise ratio of 32.50 dB at 0.25 bpp.

11.
Error-resilient pyramid vector quantization for image compression   (cited 1 time: 0 self-citations, 1 by others)
Pyramid vector quantization (PVQ) uses the lattice points of a pyramidal shape in multidimensional space as the quantizer codebook. It is a fixed-rate quantization technique that can be used for the compression of Laplacian-like sources arising from transform and subband image coding, where its performance approaches the optimal entropy-coded scalar quantizer without the necessity of variable length codes. In this paper, we investigate the use of PVQ for compressed image transmission over noisy channels, where the fixed-rate quantization reduces the susceptibility to bit-error corruption. We propose a new method of deriving the indices of the lattice points of the multidimensional pyramid and describe how these techniques can also improve the channel noise immunity of general symmetric lattice quantizers. Our new indexing scheme improves channel robustness by up to 3 dB over previous indexing methods, and can be performed with similar computational cost. The final fixed-rate coding algorithm surpasses the performance of typical Joint Photographic Experts Group (JPEG) implementations and exhibits much greater error resilience.
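The paper's contribution is a new way of indexing the pyramid lattice points, which is not reproduced here; the sketch below only shows the underlying codebook idea, quantizing a vector onto the pyramid of integer points with a fixed L1 norm K, using a common greedy fix-up after rounding. All details are assumptions.

import numpy as np

def pvq_quantize(x, K):
    """Map x to an integer vector y with sum(|y|) == K (greedy nearest-point heuristic)."""
    x = np.asarray(x, dtype=float)
    xs = x * K / (np.sum(np.abs(x)) + 1e-12)       # scale onto the pyramid surface
    y = np.round(xs).astype(int)
    while np.sum(np.abs(y)) != K:                  # repair the L1 norm one unit at a time
        err = np.abs(xs) - np.abs(y)               # positive where magnitude was rounded down
        if np.sum(np.abs(y)) < K:
            i = int(np.argmax(err))                # add a unit where it helps most
            y[i] += 1 if xs[i] >= 0 else -1
        else:
            i = int(np.argmin(np.where(np.abs(y) > 0, err, np.inf)))
            y[i] -= 1 if y[i] > 0 else -1          # remove a unit where it hurts least
    return y

print(pvq_quantize([0.7, -0.2, 0.1, -0.4], K=5))   # e.g. [ 3 -1  0 -1]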

12.
谢小军, 苏涛. 《信息技术》(Information Technology), 2020, (4): 97-101, 106
To address the large storage cost of convolutional neural networks (CNNs) in image compression, this paper studies vector quantization of CNN parameters as a way to solve the storage problem of CNN models. By compressing the way the densely connected layers are stored, the vector quantization approach gains an advantage over existing matrix factorization methods. Applying k-means (KM) clustering to the weights, together with product quantization, achieves a good trade-off between model size and recognition accuracy. Experimental results show that the structured quantization method clearly outperforms the other methods, and the generalization ability of the compressed model is verified on an image compression and retrieval task.
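A hedged sketch of the weight-compression idea summarized above: the rows of a dense layer's weight matrix are split into sub-vectors, each sub-space is clustered with k-means, and the layer is stored as centroids plus one-byte codes (product quantization). Layer sizes and hyperparameters are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

def product_quantize(W, n_subvectors=4, n_centroids=256, seed=0):
    sub = W.shape[1] // n_subvectors
    codebooks, codes = [], []
    for j in range(n_subvectors):
        block = W[:, j * sub:(j + 1) * sub]
        km = KMeans(n_clusters=n_centroids, n_init=4, random_state=seed).fit(block)
        codebooks.append(km.cluster_centers_)        # n_centroids x sub floats per sub-space
        codes.append(km.labels_.astype(np.uint8))    # one byte per row per sub-space
    return codebooks, codes

def reconstruct(codebooks, codes):
    return np.hstack([cb[c] for cb, c in zip(codebooks, codes)])

W = np.random.default_rng(3).standard_normal((1024, 256)).astype(np.float32)   # toy dense layer
cbs, cds = product_quantize(W)
print(float(np.mean((W - reconstruct(cbs, cds)) ** 2)))                        # quantization MSE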

13.
The authors consider data compression of binary error-diffused images. The original contribution is the use of nonlinear filters to decode error-diffused images so that they can be compressed in the gray-scale domain; this gives better image quality than compressing the binary images directly. The method has low computational complexity and can work with any halftoning algorithm.

14.
Variable rate vector quantization for medical image compression   (cited 1 time: 0 self-citations, 1 by others)
Three techniques for variable-rate vector quantizer design are applied to medical images. The first two are extensions of an algorithm for optimal pruning in tree-structured classification and regression due to Breiman et al. The code design algorithms find subtrees of a given tree-structured vector quantizer (TSVQ), each one optimal in that it has the lowest average distortion of all subtrees of the TSVQ with the same or lesser average rate. Since the resulting subtrees have variable depth, natural variable-rate coders result. The third technique is a joint optimization of a vector quantizer and a noiseless variable-rate code. This technique is relatively complex but it has the potential to yield the highest performance of all three techniques.

15.
A novel vector quantization scheme, called the address-vector quantizer (A-VQ), is proposed. It exploits interblock correlation by encoding a group of blocks together using an address-codebook. The address-codebook consists of a set of address-codevectors, where each codevector represents a combination of addresses (indexes). Each element of such a codevector is the address of an entry in the LBG codebook and represents one vector-quantized block. The address-codebook is divided into two regions: an active (addressable) region and an inactive (non-addressable) region. During encoding, the codevectors in the address-codebook are reordered adaptively in order to bring the most probable address-codevectors into the active region. When encoding an address-codevector, the active region of the address-codebook is checked; if the address combination exists there, its index is transmitted to the receiver, otherwise the address of each block is transmitted individually. The quality (SNR value) of the images encoded by the proposed A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate is reduced by a factor of approximately two compared with a memoryless vector quantizer.
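A hedged sketch of the address-codebook idea described above: the LBG indices of a group of blocks form an address-vector, and a fixed-size active region of recently used address-vectors is maintained; a hit transmits one short index, a miss transmits the group's indices individually. The move-to-front reordering is an illustrative stand-in for the paper's adaptive reordering rule.

def address_vq_encode(block_indices, group_size=4, active_size=64):
    active, out = [], []                                  # active (addressable) region
    for g in range(0, len(block_indices) - group_size + 1, group_size):
        addr_vec = tuple(block_indices[g:g + group_size])
        if addr_vec in active:
            out.append(("hit", active.index(addr_vec)))   # one index suffices
            active.remove(addr_vec)
        else:
            out.append(("miss", addr_vec))                # send each block's index individually
        active.insert(0, addr_vec)                        # adaptively bring it to the front
        del active[active_size:]                          # everything beyond is the inactive region
    return out

# Example: repeated index patterns (e.g. background blocks) quickly become hits.
print(address_vq_encode([3, 3, 7, 7] * 6 + [1, 2, 3, 4], group_size=4))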

16.
Mean-shape vector quantizer for ECG signal compression   (cited 1 time: 0 self-citations, 1 by others)
A direct-waveform mean-shape vector quantization (MSVQ) is proposed here as an alternative for electrocardiographic (ECG) signal compression. In this method, the mean values of short ECG signal segments are quantized as scalars, while their waveshapes, after average-beat subtraction and residual differencing of the single-lead ECG, are coded through a vector quantizer. An entropy encoder is applied to both the mean and the vector codes to further increase compression without degrading the quality of the reconstructed signals. In this paper, the fundamentals of MSVQ are discussed, along with various parameter specifications such as the duration of the signal segments, the wordlength of the mean-value quantization, and the size of the vector codebook. The method is assessed through percent-residual-difference measures on reconstructed signals, and its computational complexity is analyzed with real-time implementation in mind. As a result, MSVQ has been found to be an efficient compression method, leading to high compression ratios (CRs) while maintaining a low level of waveform distortion and, consequently, preserving the main clinically interesting features of the ECG signals. CRs in excess of 39 have been achieved, yielding low data rates of about 140 b/s. This compression factor makes the technique especially attractive for ambulatory monitoring.
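A hedged sketch of the mean/shape split described above: the ECG is cut into short fixed-length segments, each segment's mean is scalar-quantized, and the mean-removed shape is matched against a shape codebook (learned here with k-means purely for illustration). The average-beat subtraction, residual differencing, and entropy coder mentioned in the abstract are omitted.

import numpy as np
from sklearn.cluster import KMeans

def mean_shape_encode(ecg, seg_len=32, n_shapes=64, mean_step=0.01):
    segs = ecg[: len(ecg) // seg_len * seg_len].reshape(-1, seg_len)
    means = segs.mean(axis=1)
    mean_codes = np.round(means / mean_step).astype(int)       # scalar quantizer for the means
    km = KMeans(n_clusters=n_shapes, n_init=4, random_state=0).fit(segs - means[:, None])
    return mean_codes, km.labels_, km.cluster_centers_, mean_step

def mean_shape_decode(mean_codes, shape_codes, shape_codebook, mean_step):
    return (shape_codebook[shape_codes] + mean_codes[:, None] * mean_step).ravel()

t = np.linspace(0, 10, 4000)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 15 * t)   # synthetic test signal
rec = mean_shape_decode(*mean_shape_encode(ecg_like))
prd = 100 * np.linalg.norm(ecg_like - rec) / np.linalg.norm(ecg_like)       # percent residual difference
print(round(float(prd), 2))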

17.
This article discusses bit allocation and adaptive search algorithms for mean-residual vector quantization (MRVQ) and multistage vector quantization (MSVQ). The adaptive search algorithm uses a buffer and a distortion-threshold function to control the number of bits assigned to each input vector; it achieves a constant rate for the entire image but a variable bit rate for each vector in the image. For a given codebook and several bit rates, we compare the performance of the optimal bit allocation and adaptive search algorithms. The results show that the adaptive search algorithm performs only 0.20-0.53 dB worse than the optimal bit allocation algorithm, while its complexity is much lower.

18.
Two-dimensional trellis-coded vector quantization and its application to still-image quantization   (cited 1 time: 0 self-citations, 1 by others)
This paper proposes a new quantization method, two-dimensional trellis-coded vector quantization (2D-TCVQ), which applies the idea of trellis-coded quantization (TCQ) on top of vector quantization (VQ) in a two-dimensional codebook space. The method first expands a small codebook into a large virtual codebook and then, following trellis-coded vector quantization (TCVQ), uses the Viterbi algorithm to find the best quantization path in the expanded two-dimensional codebook space. Enlarging the codebook reduces the minimum distortion of the first subset and thus improves quantization performance. Because 2D-TCVQ uses small codebooks, it can be applied in low-storage, low-power encoding and decoding environments. Simulation results show that, for the same codebook size, 2D-TCVQ outperforms TCVQ by about 0.5 dB. The method also offers moderate computational complexity, simple decoding, and low sensitivity to error propagation.

19.
The lossless compression of AVIRIS images by vector quantization   (cited 15 times: 0 self-citations, 15 by others)
The structure of hyperspectral images reveals spectral responses that would seem ideal candidates for compression by vector quantization. This paper outlines the results of an investigation of lossless vector quantization of 224-band Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images. Various vector formation techniques are identified and suitable quantization parameters are investigated. A new technique, mean-normalized vector quantization (M-NVQ), is proposed, which produces compression performance approaching the theoretical minimum compressed image entropy of 5 bits/pixel. Images are compressed from original image entropies of between 8.28 and 10.89 bits/pixel to between 4.83 and 5.90 bits/pixel.

20.
Multistage vector quantization (MSVQ) achieves very low encoding and storage complexity in comparison with unstructured vector quantization. However, conventional MSVQ is suboptimal with respect to the overall performance measure. This paper proposes a new technique for designing the decoder codebook, which differs from the encoder codebook, in order to optimize the overall performance. The performance improvement is achieved with no effect on encoding complexity, in either storage or time, at the cost of a modest increase in the decoder's storage complexity.
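A hedged sketch of the decoder-side optimization described above for a two-stage MSVQ: the encoder keeps its usual stage codebooks, while the decoder replaces the reconstruction c1[i] + c2[j] by the centroid of all training vectors that map to the index pair (i, j). The toy codebooks and sizes below are illustrative assumptions.

import numpy as np

def nearest(codebook, x):
    return int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))

def optimized_decoder_codebook(train, cb1, cb2):
    sums = {}
    for x in train:
        i = nearest(cb1, x)                       # stage-1 encoding
        j = nearest(cb2, x - cb1[i])              # stage-2 encoding of the residual
        s, n = sums.get((i, j), (np.zeros_like(x), 0))
        sums[(i, j)] = (s + x, n + 1)
    # decoder table: conditional mean of the inputs mapped to each index pair
    return {key: s / n for key, (s, n) in sums.items()}

rng = np.random.default_rng(4)
train = rng.standard_normal((2000, 4))
cb1 = train[rng.choice(2000, 8, replace=False)]   # toy stage-1 codebook
cb2 = 0.3 * rng.standard_normal((8, 4))           # toy stage-2 codebook
decoder_table = optimized_decoder_codebook(train, cb1, cb2)
print(len(decoder_table), "index pairs receive dedicated decoder codevectors")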
