Similar Documents
20 similar documents found (search time: 0 ms)
1.
The authors analyze the coding errors due to quantization by explicitly incorporating a mathematical model of a Lloyd-Max quantizer into a quadrature mirror filter (QMF) splitting and reconstruction scheme (P. H. Westerink et al., 1988). Modeling the quantizer within the QMF system makes it possible to discriminate between different types of coding errors, such as the aliasing error. Other errors that can be distinguished are a QMF design error, a signal error, and a random error that is uncorrelated with the original image. Both a mean-squared error calculation and a subjective judgment of the coding errors show that the aliasing errors can be neglected for filter lengths of 12 taps or more. The signal error determines the sharpness of the reconstructed image, while the random error is most visible in the flat areas.
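The analysis above hinges on a model of the Lloyd-Max quantizer embedded in the QMF system. As background only (this is not the authors' code; the Gaussian training data and parameter choices are assumptions), here is a minimal sketch of designing a Lloyd-Max scalar quantizer by Lloyd's iteration:

```python
import numpy as np

def lloyd_max(samples, levels=8, iters=100, tol=1e-9):
    """Design a Lloyd-Max (MMSE) scalar quantizer from training samples."""
    # Initialize reproduction levels at sample quantiles.
    q = np.quantile(samples, (np.arange(levels) + 0.5) / levels)
    for _ in range(iters):
        edges = (q[:-1] + q[1:]) / 2          # decision boundaries: midpoints
        idx = np.digitize(samples, edges)     # cell index of each sample
        # Each reproduction level moves to the centroid (mean) of its cell.
        new_q = np.array([samples[idx == k].mean() if np.any(idx == k) else q[k]
                          for k in range(levels)])
        if np.max(np.abs(new_q - q)) < tol:
            q = new_q
            break
        q = new_q
    edges = (q[:-1] + q[1:]) / 2
    return q, edges

# Illustrative use on a Gaussian source (assumed data, not from the paper).
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
levels_, edges_ = lloyd_max(x)
x_hat = levels_[np.digitize(x, edges_)]
print("MSE:", np.mean((x - x_hat) ** 2))
```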

2.
3.
Alphabet-constrained rate-distortion theory is extended to the coding of sources with memory. Two different cases are considered: when only the size of the codebook is constrained, and when the codevector values are also held fixed. For both cases, nth-order constrained-alphabet rate-distortion functions are defined and a convergent algorithm for their evaluation is presented. Simulations using AR(1) sources show that performance near the rate-distortion bound is possible using a reproduction alphabet consisting of a small number of codevectors. It is also shown that the additional constraint of holding the codevector values fixed does not degrade coder performance relative to the size-only-constrained case. This observation motivates the development of a fixed-codebook vector quantizer, called the alphabet- and entropy-constrained vector quantizer, whose performance is comparable to that of the entropy-constrained vector quantizer. A number of examples using an AR(1) source and a speech source are presented to corroborate the theory.
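Since the fixed-codebook quantizer above is compared against the entropy-constrained vector quantizer, a brief sketch of the standard entropy-constrained encoding rule may help; the codebook, codeword probabilities, and Lagrange multiplier below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ecvq_encode(x, codebook, lengths, lam):
    """Entropy-constrained VQ encoding rule: pick the codeword minimizing
    squared error plus lambda times its (ideal) code length."""
    d = np.sum((codebook - x) ** 2, axis=1) + lam * lengths
    return int(np.argmin(d))

# Illustrative setup (assumptions).
rng = np.random.default_rng(1)
codebook = rng.normal(size=(16, 4))      # 16 codewords of dimension 4
p = np.full(16, 1 / 16)                  # assumed codeword probabilities
lengths = -np.log2(p)                    # ideal code lengths in bits
x = rng.normal(size=4)
print("chosen codeword index:", ecvq_encode(x, codebook, lengths, lam=0.1))
```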

4.
5.
Trellis-coded vector quantization (total citations: 5; self-citations: 0; cited by others: 5)
Trellis-coded quantization is generalized to allow a vector reproduction alphabet. Three encoding structures are described, several encoder design rules are presented, and two design algorithms are developed. It is shown that for a stationary ergodic vector source, if the optimized trellis-coded vector quantization reproduction process is jointly stationary and ergodic with the source, then the quantization noise is zero-mean with variance equal to the difference between the source variance and the variance of the reproduction sequence. Several examples illustrate the encoder design procedure and performance.
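The underlying mechanism being generalized here is trellis-coded quantization, in which a Viterbi search selects, per sample, a reproduction value from the subset labeling a trellis branch. The sketch below is a scalar illustration under assumed trellis transitions and subset labels; it is not one of the paper's encoder structures or design algorithms (a vector version would replace the scalar subsets with vector reproduction alphabets):

```python
import numpy as np

def tcq_encode(x, subsets, next_state, subset_of_branch):
    """Viterbi search over a small trellis: each branch draws its reproduction
    value from one scalar subset; the minimum total squared-error path wins."""
    n_states, n = next_state.shape[0], len(x)
    cost = np.full(n_states, np.inf); cost[0] = 0.0       # start in state 0
    back = np.zeros((n, n_states, 3))                     # (prev_state, branch, value)
    for t in range(n):
        new_cost = np.full(n_states, np.inf)
        for s in range(n_states):
            if not np.isfinite(cost[s]):
                continue
            for b in (0, 1):
                sub = subsets[subset_of_branch[s, b]]
                j = np.argmin((sub - x[t]) ** 2)          # best value in this subset
                c = cost[s] + (sub[j] - x[t]) ** 2
                ns = next_state[s, b]
                if c < new_cost[ns]:
                    new_cost[ns] = c
                    back[t, ns] = (s, b, sub[j])
        cost = new_cost
    # Trace back the minimum-cost path to recover the reproduction sequence.
    s = int(np.argmin(cost))
    recon = np.zeros(n)
    for t in range(n - 1, -1, -1):
        prev, _, v = back[t, s]
        recon[t] = v
        s = int(prev)
    return recon

# Illustrative 4-state trellis and subset labeling (assumptions, not the paper's design).
next_state = np.array([[0, 1], [2, 3], [0, 1], [2, 3]])
subset_of_branch = np.array([[0, 2], [1, 3], [2, 0], [3, 1]])
subsets = [np.array([-3.5, 0.5]), np.array([-2.5, 1.5]),
           np.array([-1.5, 2.5]), np.array([-0.5, 3.5])]
x = np.random.default_rng(2).normal(size=32)
recon = tcq_encode(x, subsets, next_state, subset_of_branch)
print("per-sample MSE:", np.mean((recon - x) ** 2))
```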

6.
7.
Gersho, A. Annals of Telecommunications, 1986, 41(9-10): 470-480
Annals of Telecommunications - In adaptive quantization, the parameters of a quantizer are updated during real-time operation based on observed information regarding the statistics of the signal...

8.
A fast method for searching an unstructured vector quantization (VQ) codebook is introduced and analyzed. The method, fine-coarse vector quantization (FCVQ), operates in two stages: a "fine" structured VQ followed by a table-lookup "coarse" unstructured VQ. Its rate, distortion, arithmetic complexity, and storage are investigated using analytical and experimental means. Optimality conditions and an optimizing algorithm are presented. The results of experiments with both uniform scalar quantization and tree-structured VQ (TSVQ) as the first stage are reported. Comparisons are made with other fast approaches to vector quantization, especially TSVQ. It is found that when rate, distortion, arithmetic complexity, and storage are all taken into account, FCVQ outperforms TSVQ in a number of cases. In comparison to full-search quantization, FCVQ has much lower arithmetic complexity, at the expense of a slight increase in distortion and a substantial increase in storage. The increase in mean-squared error (over full search) decays as a negative power of the available storage.
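As a rough illustration of the two-stage idea (not the paper's implementation), the sketch below uses per-component uniform scalar quantization as the "fine" structured stage and a precomputed lookup table as the "coarse" stage; the codebook, value ranges, and bin counts are assumptions:

```python
import numpy as np

def build_fcvq_table(coarse_codebook, n_bins, lo, hi):
    """Precompute the table mapping each fine (uniform scalar) cell,
    represented by its center, to the nearest coarse codeword index."""
    dim = coarse_codebook.shape[1]
    centers = lo + (np.arange(n_bins) + 0.5) * (hi - lo) / n_bins
    grids = np.meshgrid(*([centers] * dim), indexing="ij")   # all fine cells
    cell_centers = np.stack([g.ravel() for g in grids], axis=1)
    d = ((cell_centers[:, None, :] - coarse_codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1).reshape((n_bins,) * dim)

def fcvq_encode(x, n_bins, lo, hi, table):
    """Stage 1: uniform scalar quantization of each component.
    Stage 2: table lookup to obtain the coarse codeword index."""
    bins = np.clip(((x - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    return int(table[tuple(bins)])

# Illustrative 2-D example (assumed codebook and ranges).
rng = np.random.default_rng(3)
coarse = rng.normal(size=(8, 2))
table = build_fcvq_table(coarse, n_bins=32, lo=-3.0, hi=3.0)
x = rng.normal(size=2)
print("coarse index:", fcvq_encode(x, 32, -3.0, 3.0, table))
```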

9.
We present design techniques for trellis-coded vector quantizers with symmetric codebooks that facilitate low-complexity quantization as well as partitioning into equiprobable sets for trellis coding. The quantization performance of this coder on the independent and identically distributed (i.i.d.) Laplacian source matches that of trellis-based scalar-vector quantization (TB-SVQ) while requiring less computational complexity.

10.
Tree-structured vector quantizers (TSVQs) and their variants have recently been proposed. The trees used are all of fixed M-ary structure, so the training samples in each node must be artificially divided into a fixed number of clusters. This paper proposes a variable-branch tree-structured vector quantizer (VBTSVQ) based on a genetic algorithm, which searches for the number of child nodes of each splitting node that yields optimal coding. Moreover, one disadvantage of TSVQ is that the codeword found by the tree search often differs from the one a full search would find; that is, the codeword selected by TSVQ is sometimes not the closest codeword to the input vector. This paper proposes a multiclassification encoding method that selects several classified components to represent each cluster, so that the codeword encoded by VBTSVQ is usually the same as that of the full search. VBTSVQ outperforms other TSVQs in the experiments presented here.
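The mismatch between tree search and full search mentioned above is easy to see in a toy example. The sketch below (a plain binary TSVQ with made-up test vectors and leaves, not the VBTSVQ design) shows a case where greedy descent and full search return different codewords:

```python
import numpy as np

def tsvq_search(x, tree):
    """Greedy descent: at each internal node, follow the child whose test
    vector is closer to x; stop at a leaf codeword."""
    node = "root"
    while tree[node]["leaf"] is None:
        tests = tree[node]["tests"]
        kids = tree[node]["children"]
        node = kids[int(((tests - x) ** 2).sum(axis=1).argmin())]
    return tree[node]["leaf"]

# Tiny illustrative two-leaf tree (all values are assumptions).
tree = {
    "root": {"leaf": None, "tests": np.array([[-1.0, 0.0], [1.0, 0.0]]),
             "children": ["L", "R"]},
    "L": {"leaf": np.array([-1.0, 0.5]), "tests": None, "children": None},
    "R": {"leaf": np.array([1.0, -0.5]), "tests": None, "children": None},
}
x = np.array([0.1, 0.6])
greedy = tsvq_search(x, tree)
leaves = np.array([tree["L"]["leaf"], tree["R"]["leaf"]])
full = leaves[((leaves - x) ** 2).sum(axis=1).argmin()]
print("greedy:", greedy, "full search:", full)  # the two can differ
```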

11.
A vector generalization of trellis-coded quantization (TCQ), called trellis-coded vector quantization (TCVQ), is presented together with experimental results showing its performance for an i.i.d. Gaussian source. For a given rate, TCVQ yields lower distortion than TCQ at the cost of increased implementation complexity. In addition, TCVQ allows fractional rates, which TCQ does not.

12.
A vector quantization scheme based on the classified vector quantization (CVQ) concept, called predictive classified vector quantization (PCVQ), is presented. Unlike CVQ, where the classification information has to be transmitted, PCVQ predicts it, thus saving valuable bit rate. Two classifiers, one operating in the Hadamard domain and the other in the spatial domain, were designed and tested. The classification information was predicted in the spatial domain. The PCVQ schemes achieved bit-rate reductions over CVQ ranging from 20% to 32% for two commonly used color test images while maintaining the same acceptable image quality. Bit rates of 0.70-0.93 bits per pixel (bpp) were obtained, depending on the image and the PCVQ scheme used.
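A rough sketch of the general classified-VQ-with-predicted-class idea follows; the variance-threshold classifier, the majority-vote prediction from neighboring blocks, and all numeric values are assumptions for illustration and do not reproduce the paper's Hadamard- or spatial-domain classifiers:

```python
import numpy as np

def classify(block, thresh=100.0):
    """Toy spatial-domain classifier (an assumption, not the paper's):
    high-variance blocks are 'edge', the rest 'flat'."""
    return "edge" if block.var() > thresh else "flat"

def pcvq_encode(block, neighbor_blocks, codebooks):
    """Classified VQ where the class is *predicted* from already-decoded
    neighbors (majority vote here), so it need not be transmitted."""
    votes = [classify(b) for b in neighbor_blocks]
    cls = max(set(votes), key=votes.count) if votes else "flat"
    cb = codebooks[cls]
    idx = int(((cb - block.ravel()) ** 2).sum(axis=1).argmin())
    return cls, idx

# Illustrative 4x4 blocks and tiny per-class codebooks (assumptions).
rng = np.random.default_rng(4)
codebooks = {"flat": rng.normal(128, 2, size=(4, 16)),
             "edge": rng.normal(128, 40, size=(4, 16))}
block = rng.normal(128, 3, size=(4, 4))
neighbors = [rng.normal(128, 3, size=(4, 4)) for _ in range(2)]
print(pcvq_encode(block, neighbors, codebooks))
```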

13.
A novel fuzzy clustering algorithm for the design of channel-optimized source coding systems is presented in this letter. The algorithm, termed the fuzzy channel-optimized vector quantizer (FCOVQ) design algorithm, optimizes the vector quantizer (VQ) design using a fuzzy clustering process in which the index crossover probabilities imposed by a noisy channel are taken into account. The fuzzy clustering process effectively enhances the robustness of VQ performance to channel noise without reducing quantization accuracy. Numerical results demonstrate that the FCOVQ algorithm outperforms existing VQ algorithms under noisy channel conditions for both Gauss-Markov sources and still-image data.
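The quantity such a design accounts for is the expected distortion under index crossover. A minimal sketch of the standard channel-optimized encoding rule is below; the binary-symmetric-channel crossover matrix and codebook are assumptions, and the fuzzy clustering design procedure itself is not shown:

```python
import numpy as np

def covq_encode(x, codebook, channel):
    """Channel-optimized encoding rule: pick the index i minimizing the
    expected distortion  sum_j P(j received | i sent) * ||x - c_j||^2."""
    d = ((codebook - x) ** 2).sum(axis=1)   # distortion to each codeword
    expected = channel @ d                  # channel[i, j] = P(j | i)
    return int(expected.argmin())

def bsc_index_matrix(bits, eps):
    """Index crossover matrix for independent bit errors (an assumption)."""
    n = 1 << bits
    m = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            h = bin(i ^ j).count("1")       # Hamming distance between indices
            m[i, j] = (eps ** h) * ((1 - eps) ** (bits - h))
    return m

rng = np.random.default_rng(5)
codebook = rng.normal(size=(8, 2))
channel = bsc_index_matrix(bits=3, eps=0.05)
x = rng.normal(size=2)
print("channel-optimized index:", covq_encode(x, codebook, channel))
```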

14.
Two-dimensional trellis-coded vector quantization and its application to still-image quantization (total citations: 1; self-citations: 0; cited by others: 1)
This paper proposes a new quantization method, two-dimensional trellis-coded vector quantization (2D-TCVQ), which applies the idea of trellis-coded quantization (TCQ) on top of vector quantization (VQ) in a two-dimensional codebook space. The method first expands a small codebook into a large virtual codebook, and then applies trellis-coded vector quantization (TCVQ) with the Viterbi algorithm to find the optimal quantization path in the enlarged two-dimensional codebook space. Enlarging the codebook reduces the minimum distortion of the first subset and thereby improves quantization performance. Because 2D-TCVQ uses a small codebook, it can be applied in low-storage, low-power encoding and decoding environments. Simulation results show that, for the same codebook size, 2D-TCVQ outperforms TCVQ by about 0.5 dB. The method also offers moderate computational complexity, simple decoding, and insensitivity to error propagation.

15.
We investigate high-rate quantization for various detection and reconstruction loss criteria. A new distortion measure is introduced which accounts for global loss in best attainable binary hypothesis testing performance. The distortion criterion is related to the area under the receiver-operating-characteristic (ROC) curve. Specifically, motivated by Sanov's theorem, we define a performance curve as the trajectory of the pair of optimal asymptotic Type I and Type II error rates of the most powerful Neyman-Pearson test of the hypotheses. The distortion measure is then defined as the difference between the area-under-the-curve (AUC) of the optimal pre-encoded hypothesis test and the AUC of the optimal post-encoded hypothesis test. As compared to many previously introduced distortion measures for decision making, this distortion measure has the advantage of being independent of any detection thresholds or priors on the hypotheses, which are generally difficult to specify in the code design process. A high-resolution Zador-Gersho type of analysis is applied to characterize the point density and the inertial profile associated with the optimal high-rate vector quantizer. The analysis applies to a restricted class of high-rate quantizers that have bounded cells with vanishing volumes. The optimal point density is used to specify a Lloyd-type algorithm which allocates its finest resolution to regions where the gradient of the pre-encoded likelihood ratio has greatest magnitude.
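In symbols, the distortion measure described above can be restated as the loss in area under the optimal ROC curve; the notation below is ours, not the paper's:

```latex
% \alpha, \beta: asymptotic Type I and Type II error rates of the most powerful
% Neyman-Pearson test; X is the source vector and Q(X) its encoded version.
\[
  d_{\mathrm{AUC}}
    = \mathrm{AUC}\bigl(\mathrm{ROC}_{X}\bigr)
    - \mathrm{AUC}\bigl(\mathrm{ROC}_{Q(X)}\bigr),
  \qquad
  \mathrm{AUC}(\mathrm{ROC}) = \int_{0}^{1} \bigl(1 - \beta(\alpha)\bigr)\, d\alpha .
\]
```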

16.
By converting the computation of the full-search vector quantization (FSVQ) algorithm into inner-product operations and exploiting the Baugh-Wooley algorithm, this paper presents a new, efficient VLSI architecture for FSVQ based on two's-complement arithmetic. Owing to its regularity and modularity, the architecture can be applied efficiently in VLSI implementations of speech, image, and video coding.
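The algebraic recast that enables the inner-product formulation is the identity ||x - c||^2 = ||x||^2 - 2<x, c> + ||c||^2, where the ||x||^2 term is common to all codewords. A minimal software sketch of that recast follows (the Baugh-Wooley two's-complement VLSI structure itself is not shown, and the codebook size is an assumption):

```python
import numpy as np

def fsvq_inner_product(x, codebook, half_norms):
    """Full-search VQ via inner products:
    argmin_i ||x - c_i||^2  ==  argmax_i ( <x, c_i> - ||c_i||^2 / 2 )."""
    return int((codebook @ x - half_norms).argmax())

rng = np.random.default_rng(6)
codebook = rng.normal(size=(256, 16))
half_norms = 0.5 * (codebook ** 2).sum(axis=1)   # precomputed once per codebook
x = rng.normal(size=16)
i = fsvq_inner_product(x, codebook, half_norms)
# Sanity check against the direct distance computation.
assert i == int(((codebook - x) ** 2).sum(axis=1).argmin())
print("nearest codeword:", i)
```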

17.
A process by which a reduced-dimensionality feature vector can be extracted from a high-dimensionality signal vector and then vector quantized with lower complexity than direct quantization of the signal vector is discussed. In this procedure, a receiver must estimate, or interpolate, the signal vector from the quantized features. The task of recovering a high-dimensional signal vector from a reduced-dimensionality feature vector can be viewed as a generalized form of interpolation or prediction. A way in which optimal nonlinear interpolation can be achieved with negligible complexity, eliminating the need for ad hoc linear or nonlinear interpolation techniques, is presented. The range of applicability of nonlinear interpolative vector quantization is illustrated with examples in which optimal nonlinear estimation from quantized data is needed for efficient signal compression.
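A minimal sketch of the nonlinear interpolation step follows: the MMSE estimate of the signal vector given the quantized feature index is the per-cell conditional mean, which can be tabulated from training data. The feature definition, dimensions, and codebook are illustrative assumptions, not the paper's:

```python
import numpy as np

def build_interpolative_codebook(signals, features, feature_codebook):
    """Estimate E[signal | quantized feature index] by per-cell averaging
    over a training set; the result is the interpolative codebook."""
    d = ((features[:, None, :] - feature_codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    K = feature_codebook.shape[0]
    return np.array([signals[idx == k].mean(axis=0) if np.any(idx == k)
                     else np.zeros(signals.shape[1]) for k in range(K)])

# Illustrative: 8-dimensional signals, features = first 2 components (assumptions).
rng = np.random.default_rng(7)
signals = rng.normal(size=(5000, 8))
features = signals[:, :2]
feature_codebook = rng.normal(size=(16, 2))
interp_codebook = build_interpolative_codebook(signals, features, feature_codebook)
# Decoding a new vector: quantize its feature, look up the interpolated signal.
x = rng.normal(size=8)
k = ((feature_codebook - x[:2]) ** 2).sum(axis=1).argmin()
x_hat = interp_codebook[k]
print("reconstruction dimension:", x_hat.shape)
```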

18.
The real-time implementation of an efficient signal compression technique, vector quantization (VQ), is of great importance to many digital signal coding applications. In this paper, we describe a new family of bit-level systolic VLSI architectures which offer an attractive solution to this problem. These architectures are based on a bit-serial, word-parallel approach, and high performance and efficiency can be achieved for VQ applications over a wide range of bandwidths. Compared with their bit-parallel counterparts, these bit-serial circuits provide better alternatives for VQ implementations in terms of performance and cost.

19.
A new interband vector quantization of a human vision-based image representation is presented. The feature-specific vector quantizer (FVQ) is suited for data compression beyond second-order decorrelation. The scheme is derived from statistical investigations of natural images and the processing principles of biological vision systems. The initial stage of the coding algorithm is a hierarchical, orientation-selective, analytic bandpass decomposition, realized by even- and odd-symmetric filter pairs that are modeled after the simple cells of the visual cortex. The outputs of each even- and odd-symmetric filter pair are interpreted as real and imaginary parts of an analytic bandpass signal, which is transformed into a local amplitude and a local phase component according to the operation of cortical complex cells. Feature-specific multidimensional vector quantization is realized by combining the amplitude/phase samples of all orientation filters of one resolution layer. The resulting vectors are suited for a classification of the local image features with respect to their intrinsic dimensionality, and enable the exploitation of higher-order statistical dependencies between the subbands. This final step is closely related to the operation of cortical hypercomplex or end-stopped cells. The codebook design is based on statistical as well as psychophysical and neurophysiological considerations, and avoids the common shortcomings of perceptually implausible mathematical error criteria. The resulting perceptual quality of compressed images is superior to that obtained with standard vector quantizers of comparable complexity.
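A small 1-D sketch of the amplitude/phase computation from an even/odd filter pair follows; the Gabor-shaped filters, the 1-D setting, and all parameters are assumptions, whereas the paper's decomposition is 2-D, hierarchical, and orientation-selective:

```python
import numpy as np

def local_amplitude_phase(signal, freq=0.2, sigma=4.0):
    """Even/odd (quadrature) Gabor-like filter pair; the two outputs are
    treated as real and imaginary parts of an analytic bandpass signal."""
    t = np.arange(-16, 17)
    env = np.exp(-t ** 2 / (2 * sigma ** 2))
    even = env * np.cos(2 * np.pi * freq * t)   # even-symmetric (simple-cell-like)
    odd = env * np.sin(2 * np.pi * freq * t)    # odd-symmetric (simple-cell-like)
    re = np.convolve(signal, even, mode="same")
    im = np.convolve(signal, odd, mode="same")
    amplitude = np.hypot(re, im)                # local amplitude (complex-cell-like)
    phase = np.arctan2(im, re)                  # local phase
    return amplitude, phase

# Illustrative noisy sinusoid (assumed data).
x = np.sin(2 * np.pi * 0.2 * np.arange(256)) \
    + 0.1 * np.random.default_rng(8).normal(size=256)
amp, ph = local_amplitude_phase(x)
print(amp.shape, ph.shape)
```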

20.
Advances in residual vector quantization (RVQ) are surveyed. Definitions of joint encoder optimality and joint decoder optimality are discussed. Design techniques for RVQs with large numbers of stages and generally different encoder and decoder codebooks are elaborated and extended. Fixed-rate RVQs, and variable-rate RVQs that employ entropy coding, are examined. Predictive and finite-state RVQs designed and integrated into neural-network-based source coding structures are revisited. Successive-approximation RVQs that achieve embedded and refinable coding are reviewed. A new type of successive-approximation RVQ that varies the instantaneous block rate by using different numbers of stages on different blocks is introduced and applied to image waveforms, and a scalar version of the new residual quantizer is applied to image subbands in an embedded wavelet transform coding system.
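For reference, here is a minimal sketch of the basic multi-stage residual VQ encoder that the surveyed techniques build on; the two stage codebooks are random placeholders, not a trained or jointly optimized design:

```python
import numpy as np

def rvq_encode(x, stage_codebooks):
    """Multi-stage residual VQ: each stage quantizes the residual left by the
    previous stages; the index sequence forms a successive-approximation code."""
    residual = x.copy()
    indices, recon = [], np.zeros_like(x)
    for cb in stage_codebooks:
        i = int(((cb - residual) ** 2).sum(axis=1).argmin())
        indices.append(i)
        recon += cb[i]
        residual = x - recon
    return indices, recon

# Illustrative two-stage codebooks (assumptions).
rng = np.random.default_rng(9)
stages = [rng.normal(0, 1.0, size=(16, 8)), rng.normal(0, 0.3, size=(16, 8))]
x = rng.normal(size=8)
idx, x_hat = rvq_encode(x, stages)
print("indices:", idx, "residual energy:", float(((x - x_hat) ** 2).sum()))
```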
