Similar Documents
20 similar documents were retrieved.
1.
In designing a vector quantizer using a training sequence (TS), the training algorithm tries to find an empirically optimal quantizer that minimizes the selected distortion criterion over the sequence. In order to evaluate the performance of the trained quantizer, we can use the empirically minimized distortion obtained when designing the quantizer. Several upper bounds on the empirically minimized distortion are proposed, together with numerical results. The bounds hold pointwise, i.e., for each distribution with finite second moment in a class. From the pointwise bounds, it is possible to derive the worst-case bound, which is better than the current bounds for practical values of the training ratio β, the ratio of the TS size to the codebook size. It is shown that the empirically minimized distortion underestimates the true minimum distortion by more than a factor of (1 - 1/m), where m is the sequence size. Furthermore, through an asymptotic analysis in the codebook size, a multiplication factor [1 - (1 - e^(-β))/β] ≈ (1 - 1/β) for an asymptotic bound is shown. Several asymptotic bounds in terms of the vector dimension and the type of source are also introduced.
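As a quick numerical illustration of the asymptotic correction quoted above (a sketch in Python/NumPy, not taken from the paper), the multiplication factor [1 - (1 - e^(-β))/β] can be tabulated against its approximation (1 - 1/β) for a few training ratios β:

```python
import numpy as np

# Hedged illustration: evaluate the asymptotic multiplication factor from the
# abstract, 1 - (1 - exp(-beta))/beta, and its approximation 1 - 1/beta,
# for a few training ratios beta (TS size divided by codebook size).
for beta in (2.0, 5.0, 10.0, 50.0):
    exact = 1.0 - (1.0 - np.exp(-beta)) / beta
    approx = 1.0 - 1.0 / beta
    print(f"beta = {beta:5.1f}   factor = {exact:.4f}   approx = {approx:.4f}")
```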

2.
Asymptotically accurate approximations for quantizer performance were developed by Gish and Pierce. Variational techniques were used to obtain asymptotically optimal performance. In this correspondence, optimal performance is demonstrated simply, without variational techniques, using Hölder's and Jensen's inequalities.

3.
An algorithm introduced by L. Breiman et al. (1984) in the context of classification and regression trees is reinterpreted and extended to cover a variety of applications in source coding and modeling in which trees are involved. These include variable-rate and minimum-entropy tree-structured vector quantization, minimum expected cost decision trees, variable-order Markov modeling, optimum bit allocation, and computer graphics and image processing using quadtrees. A concentration on the first of these and a detailed analysis of variable-rate tree-structured vector quantization are provided. It is found that variable-rate tree-structured vector quantization outperforms not only the fixed-rate variety but also full-search vector quantization. The successive approximation character of variable-rate tree-structured vector quantization permits it to degrade gracefully if the rate is reduced at the encoder. This has applications to the problem of buffer overflow.

4.
We develop a methodology for the analysis of signal quantization effects in critically sampled dyadic subband tree structures using a nonlinear gain-plus-additive-noise model for the probability density function (PDF)-optimized quantizer. We constrain the two-band nonquantized and uncompensated structure at each level to be perfect reconstruction (PR). We develop an equivalent uniform filter bank followed by its polyphase structure described by primitive submatrices and compute a rigorously correct mean squared error (MSE) in the frequency domain using cyclostationary concepts in terms of: (1) the allocated quantizer bits; (2) the filter coefficients; (3) an embedded compensation parameter vector. This MSE is then minimized over all three items above. Our optimization method is applied to the specific case of a four-channel dyadic tree with an average bit rate constraint. This tree is represented by an eight-channel polyphase equivalent whose interchannel signals are correlated. We show how to represent rigorously the correlation of random noise between channels due to the embedded quantizers. Designs of paraunitary and biorthogonal structures with identical and nonidentical stages are carried out, compared, and validated by computer simulation under the assumption of uncorrelated cross-band noise. The nonidentical-stage biorthogonal filter bank turned out to have the best performance in the MSE sense, but the most robust structure is the nonidentical-stage paraunitary filter bank.

5.
This paper presents expressions for the optimal step length to use when training a vector quantizer by stochastic approximation. By treating each update as an estimation problem, it provides a unified framework covering both batch and incremental training, which were previously treated separately, and extends existing results to the semibatch case. In addition, the new results presented provide a measurable improvement over results which were previously thought to be optimal.
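A minimal sketch of the incremental stochastic-approximation update this abstract refers to (Python/NumPy; the function name and parameters are illustrative). It uses the simple count-based step length 1/nᵢ, under which each codeword tracks the running mean of the vectors assigned to it; the paper's optimal and semibatch step lengths are not reproduced here.

```python
import numpy as np

def incremental_vq(train, codebook, passes=1, seed=0):
    """Incremental (stochastic-approximation) VQ training sketch.

    Uses the count-based step length 1/n_i, i.e. each codeword tracks the
    running mean of the training vectors assigned to it so far; this only
    illustrates the update form, not the paper's optimal step lengths.
    """
    rng = np.random.default_rng(seed)
    codebook = np.array(codebook, dtype=float)
    counts = np.zeros(len(codebook))
    for _ in range(passes):
        for x in rng.permutation(train):
            i = np.argmin(((codebook - x) ** 2).sum(axis=1))  # nearest codeword
            counts[i] += 1.0
            step = 1.0 / counts[i]                            # step length
            codebook[i] += step * (x - codebook[i])           # move toward x
    return codebook

# toy usage: 2-D Gaussian data, 4 codewords initialized from the data
data = np.random.default_rng(1).normal(size=(1000, 2))
trained = incremental_vq(data, data[:4], passes=2)
```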

6.
On the structure of optimal entropy-constrained scalar quantizers
The nearest neighbor condition implies that when searching for a mean-square optimal fixed-rate quantizer it is enough to consider the class of regular quantizers, i.e., quantizers having convex cells and codepoints which lie inside the associated cells. In contrast, quantizer regularity can preclude optimality in entropy-constrained quantization. This can be seen by exhibiting a simple discrete scalar source for which the mean-square optimal entropy-constrained scalar quantizer (ECSQ) has disconnected (and hence nonconvex) cells at certain rates. In this work, new results concerning the structure and existence of optimal ECSQs are presented. One main result shows that for continuous sources and distortion measures of the form d(x,y)=ρ(|x-y|), where ρ is a nondecreasing convex function, any finite-level ECSQ can be "regularized" so that the resulting regular quantizer has the same entropy and equal or less distortion. Regarding the existence of optimal ECSQs, we prove that under rather general conditions there exists an "almost regular" optimal ECSQ for any entropy constraint. For the squared error distortion measure and sources with piecewise-monotone and continuous densities, the existence of a regular optimal ECSQ is shown.

7.
A recursively pruned radix-(2×2) two-dimensional (2D) fast Fourier transform (FFT) algorithm is proposed to reduce the number of operations involved in the calculation of the 2D discrete Fourier transform (DFT). It can handle input and output data points with multiple, possibly irregularly shaped (nonsquare), regions of support. The computational performance of the recursively pruned radix-(2×2) 2D FFT algorithm is compared with that of pruning algorithms based on the one-dimensional (1D) FFT. The former is shown to yield significant computational savings when employed in the combined 2D DFT/1D linear difference equation filter method to enhance three-dimensional spatially planar image sequences, and when employed in the MixeD moving object detection and trajectory estimation algorithm.
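A hedged illustration of output pruning for the 2-D DFT (Python/NumPy, illustrative names): only a rectangular output region is evaluated via small row/column DFT matrices. This is not the recursive radix-(2×2) algorithm of the paper; it only sketches the kind of saving available when the output region of support is small.

```python
import numpy as np

def pruned_2d_dft(x, out_rows, out_cols):
    """Row-column 2-D DFT evaluated only on a rectangular output region.

    A simple output-pruning sketch: small DFT matrices are applied along the
    rows and then the columns, so work scales with the requested output
    region rather than the full N1 x N2 grid.
    """
    N1, N2 = x.shape
    k1 = np.asarray(out_rows)[:, None]                    # wanted output rows
    k2 = np.asarray(out_cols)[:, None]                    # wanted output columns
    W1 = np.exp(-2j * np.pi * k1 * np.arange(N1) / N1)    # |out_rows| x N1
    W2 = np.exp(-2j * np.pi * k2 * np.arange(N2) / N2)    # |out_cols| x N2
    return W1 @ x @ W2.T                                  # pruned rows, then columns

# check against the full FFT on a small example
x = np.random.default_rng(0).normal(size=(16, 16))
part = pruned_2d_dft(x, range(4), range(4))
assert np.allclose(part, np.fft.fft2(x)[:4, :4])
```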

8.
Optimal hierarchical coding is sought, for progressive or scalable multidimensional signal transmission, by minimizing the variance of the error difference between the original image and its lower resolution renditions. The optimal, according to the above criterion, pyramidal coders are determined for images quantized using optimal vector Lloyd-Max quantizers. A rigorous general statistical model of a vector Lloyd-Max quantizer is used, consisting of a linear time-invariant filter followed by additive noise uncorrelated with the input. Given arbitrary analysis filters, the optimal synthesis filters are found. The optimal analysis filters are subsequently determined, leading to formulas for globally optimal structures for pyramidal multidimensional signal decompositions. These structures produce replicas of the original image which, at lower resolutions, retain as much similarity to the original as possible. This is highly useful for the progressive coding of two- or three-dimensional (2-D or 3-D) images needed in applications such as fast browsing through image databases. Furthermore, the minimization of the variance of the error image leads to minimization of the variance of the quantization noise for this image and, hence, to its optimally efficient compression. Experimental results illustrate the implementation and performance of the optimal pyramids in the coding of still 2-D images.

9.
The asymptotic probability of error for quantization in maximum-likelihood tests is analyzed. The authors assume quantizers with large numbers of levels generated from a companding function. A theorem that relates the companding function to the asymptotic probability of error is proved. The companding function is then optimized.

10.
11.
An entropy-constrained quantizer Q is optimal if it minimizes the expected distortion D(Q) subject to a constraint on the output entropy H(Q). We use the Lagrangian formulation to show the existence and study the structure of optimal entropy-constrained quantizers that achieve a point on the lower convex hull of the operational distortion-rate function D_h(R) = inf_Q {D(Q) : H(Q) ≤ R}. In general, an optimal entropy-constrained quantizer may have a countably infinite number of codewords. Our main results show that if the tail of the source distribution is sufficiently light (resp., heavy) with respect to the distortion measure, the Lagrangian-optimal entropy-constrained quantizer has a finite (resp., infinite) number of codewords. In particular, for the squared error distortion measure, if the tail of the source distribution is lighter than the tail of a Gaussian distribution, then the Lagrangian-optimal quantizer has only a finite number of codewords, while if the tail is heavier than that of the Gaussian, the Lagrangian-optimal quantizer has an infinite number of codewords.
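A minimal sketch of Lagrangian entropy-constrained scalar quantizer design (Python/NumPy, illustrative names and parameters): it alternates a biased nearest-neighbor rule (squared error plus λ times the codeword length -log2 pᵢ) with centroid and probability updates, in the generic Chou-Lookabaugh-Gray style, only to illustrate the Lagrangian D + λH formulation discussed above; it is not taken from the paper.

```python
import numpy as np

def ecsq_design(samples, n_codewords, lam, iters=50):
    """Lagrangian entropy-constrained scalar quantizer design sketch."""
    samples = np.asarray(samples, dtype=float)
    codewords = np.quantile(samples, np.linspace(0.05, 0.95, n_codewords))
    probs = np.full(n_codewords, 1.0 / n_codewords)
    for _ in range(iters):
        lengths = -np.log2(np.maximum(probs, 1e-12))       # codeword lengths -log2 p_i
        cost = (samples[:, None] - codewords[None, :]) ** 2 + lam * lengths
        assign = np.argmin(cost, axis=1)                   # biased nearest neighbor
        for i in range(n_codewords):
            cell = samples[assign == i]
            probs[i] = cell.size / samples.size
            if cell.size:
                codewords[i] = cell.mean()                 # centroid update
    lengths = -np.log2(np.maximum(probs, 1e-12))
    assign = np.argmin((samples[:, None] - codewords[None, :]) ** 2 + lam * lengths, axis=1)
    distortion = np.mean((samples - codewords[assign]) ** 2)
    entropy = -np.sum(probs[probs > 0] * np.log2(probs[probs > 0]))
    return codewords, probs, distortion, entropy

# toy usage: Gaussian source, 8 initial codewords, moderate rate penalty
cb, p, D, H = ecsq_design(np.random.default_rng(0).normal(size=5000), 8, lam=0.05)
print(round(float(D), 4), round(float(H), 3))
```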

12.
We investigate the application of expectation-maximization (EM) algorithms to the classical problem of multiple target tracking (MTT) for a known number of targets. Conventional algorithms for this problem have a computational complexity that depends exponentially on the number of targets, and usually divide the problem into a localization stage and a tracking stage. The new algorithms achieve a linear dependency and integrate these two stages. Three optimization criteria are proposed, using deterministic and stochastic dynamic models for the targets.

13.
The paper describes an approach to generating optimal adaptive fuzzy neural models from I/O data. This approach combines structure and parameter identification of Takagi-Sugeno-Kang (TSK) fuzzy models. We propose to achieve structure determination via a combination of the modified mountain clustering (MMC) algorithm, recursive least squares estimation (RLSE), and the group method of data handling (GMDH). Parameter adjustment is achieved by training the initial TSK model using the algorithm of an adaptive network-based fuzzy inference system (ANFIS), which employs backpropagation (BP) and RLSE. Further, a procedure for generating locally optimal model structures is suggested. The structure optimization procedure is composed of two phases: 1) locally optimal rule premise variable subsets (LOPVS) are identified using MMC, GMDH, and a search tree (ST); and 2) locally optimal numbers of model rules (LONOR) are determined using MMC/RLSE along with the parallel simulation mean square error (PSMSE) as a performance index. The effectiveness of the proposed approach is verified by a variety of simulation examples. The examples include modeling a nonlinear dynamical process from I/O data and modeling nonlinear components of dynamical plants, followed by tracking control based on a model reference adaptive control (MRAC) scheme. Simulation results show that this approach is fast and accurate and leads to several optimal models.

14.
Template matching has many applications in signal processing, image processing, pattern recognition, and video compression. This paper proposes a fast coarse-to-fine template matching algorithm for finding the exact best match, i.e., the match that would be found by a full search. This is obtained by pruning the number of candidates in the full search using the results of a coarse search. Experimental results show that speedups of a couple of orders of magnitude can easily be achieved using this method for typical low-noise cases of two-dimensional (2-D) template matching.
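A simplified exact-match pruning sketch (Python/NumPy, illustrative names), standing in for the coarse-to-fine scheme described above: the coarse stage uses b×b block sums, whose SAD is a provable lower bound on the full SAD by the triangle inequality, so candidates pruned by the bound can never beat the best match. In practice the per-candidate block sums would be precomputed from the image rather than recomputed as here.

```python
import numpy as np

def block_sums(a, b=2):
    """Sum over non-overlapping b x b blocks (trailing rows/cols trimmed)."""
    h, w = (a.shape[0] // b) * b, (a.shape[1] // b) * b
    return a[:h, :w].reshape(h // b, b, w // b, b).sum(axis=(1, 3))

def exact_match(image, template, b=2):
    """Coarse-to-fine SAD matching sketch returning the exact full-search best match."""
    image = np.asarray(image, dtype=float)
    template = np.asarray(template, dtype=float)
    th, tw = template.shape
    coarse_t = block_sums(template, b)
    cands = []
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            # coarse SAD is a lower bound on the full SAD at (i, j)
            bound = np.abs(block_sums(image[i:i+th, j:j+tw], b) - coarse_t).sum()
            cands.append((bound, i, j))
    cands.sort()
    best, best_pos = np.inf, None
    for bound, i, j in cands:
        if bound >= best:
            break                                  # remaining bounds are even larger
        sad = np.abs(image[i:i+th, j:j+tw] - template).sum()
        if sad < best:
            best, best_pos = sad, (i, j)
    return best_pos, best

# toy usage: plant the template in a random image and recover its location
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
tpl = img[20:36, 30:46] + 0.01 * rng.normal(size=(16, 16))
print(exact_match(img, tpl))                       # -> ((20, 30), small SAD)
```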

15.
In this letter, we propose an extension of the probabilistic tree pruning sphere decoding (PTP-SD) algorithm that further reduces the computational complexity with minimal extra cost and negligible performance penalty. In contrast to the PTP-SD, which tightens the necessary conditions of the sphere search using per-layer radius adjustment, the proposed method focuses on the sphere radius control strategy applied when a candidate lattice point is found. For this purpose, a dynamic radius update strategy that depends on the lattice point found and a lattice-independent radius selection scheme are jointly exploited. As a result, while maintaining the effectiveness of the PTP-SD, a further reduction of the computational complexity, particularly in the high-SNR regime, is achieved. Simulations over multiple-input multiple-output (MIMO) channels show that the proposed method provides a considerable improvement in complexity with near-ML performance.
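A minimal depth-first sphere decoder sketch for a real-valued model (Python/NumPy, illustrative names), showing only the baseline radius rule that such schemes build on: whenever a full candidate is reached, the search radius shrinks to that candidate's distance, pruning later branches. The PTP-SD per-layer and probabilistic radius control are not reproduced here.

```python
import numpy as np

def sphere_decode(y, H, symbols):
    """Minimal depth-first sphere decoder sketch for the real model y = H s + n."""
    Q, R = np.linalg.qr(H)                  # ||y - H s|| equals ||Q^T y - R s||
    z = Q.T @ y
    n = H.shape[1]
    best = {"s": None, "d2": np.inf}        # squared search radius, initially open

    def search(level, s, d2):
        if d2 >= best["d2"]:
            return                          # outside the current sphere: prune
        if level < 0:
            best["s"], best["d2"] = s.copy(), d2   # candidate found: shrink radius
            return
        for sym in symbols:
            s[level] = sym
            resid = z[level] - R[level, level:] @ s[level:]
            search(level - 1, s, d2 + resid ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best["s"]

# toy usage: 2x2 real channel, BPSK alphabet
rng = np.random.default_rng(0)
H = rng.normal(size=(2, 2))
y = H @ np.array([1.0, -1.0]) + 0.05 * rng.normal(size=2)
print(sphere_decode(y, H, symbols=(-1.0, 1.0)))    # expected: [ 1. -1.]
```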

16.
This paper analyzes mathematically the effect of quantizer threshold imperfection commonly encountered in the circuit implementation of analog-to-digital (A/D) converters such as pulse code modulation (PCM) and sigma-delta (ΣΔ) modulation. ΣΔ modulation, which is based on coarse quantization of oversampled (redundant) samples of a signal, enjoys a type of self-correction property for quantizer threshold errors (bias) that is not shared by PCM. Although "classical" ΣΔ modulation is inferior to PCM in the rate-distortion sense, this robustness feature is believed to be one of the reasons why ΣΔ modulation is preferred over PCM in A/D converters with imperfect quantizers. Motivated by these facts, other encoders are constructed in this paper that use redundancy to obtain a similar self-correction property, but that achieve higher order accuracy relative to bit rate compared to classical ΣΔ. More precisely, two different types of encoders are introduced that exhibit exponential accuracy in the bit rate (in contrast to the polynomial-type accuracy of classical ΣΔ) while possessing the self-correction property.
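A minimal first-order single-bit ΣΔ modulator sketch (Python/NumPy, illustrative names) with an adjustable comparator threshold bias, illustrating the self-correction property mentioned above: despite the bias, a low-pass (moving-average) reconstruction of the bit stream still tracks the input. The exponential-accuracy encoders of the paper are not shown.

```python
import numpy as np

def sigma_delta_1bit(x, threshold_bias=0.0):
    """First-order single-bit sigma-delta modulator sketch.

    threshold_bias offsets the comparator to mimic quantizer imperfection;
    the integrator feedback keeps the running average of the output bits
    locked to the running average of the input.
    """
    u = 0.0
    bits = np.empty(len(x))
    for n, xn in enumerate(x):
        bits[n] = 1.0 if u >= threshold_bias else -1.0   # biased 1-bit quantizer
        u += xn - bits[n]                                # accumulate the error
    return bits

# oversampled, slowly varying input in (-1, 1), decoded with a moving average
t = np.linspace(0.0, 1.0, 4000)
x = 0.5 * np.sin(2 * np.pi * 3 * t)
rec = np.convolve(sigma_delta_1bit(x, threshold_bias=0.1), np.ones(64) / 64, "same")
print(np.max(np.abs(rec[100:-100] - x[100:-100])))       # stays modest despite the bias
```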

17.
New lattice vector quantizer design procedures for nonuniform sources that yield excellent performance while retaining the structure required for fast quantization are described. Analytical methods for truncating and scaling lattices to be used in vector quantization are given, and an analytical technique for piecewise-linear multidimensional companding is presented. The uniform and piecewise-uniform lattice vector quantizers are then used to quantize the discrete cosine transform coefficients of images, and their objective and subjective performance and complexity are contrasted with other lattice vector quantizers and with LBG training-mode designs.
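A minimal sketch of structured lattice quantization (Python/NumPy, illustrative names), using the classic Conway-Sloane nearest-point rule for the D_n lattice (integer vectors with even coordinate sum) after scaling. The paper's truncation and piecewise-linear companding procedures are not reproduced here.

```python
import numpy as np

def quantize_Dn(x, scale=1.0):
    """Nearest-point quantizer for a scaled D_n lattice.

    Conway-Sloane rule: round each coordinate to the nearest integer; if the
    coordinate sum is odd, re-round the worst-rounded coordinate the other way.
    """
    y = np.asarray(x, dtype=float) / scale
    f = np.round(y)
    if int(f.sum()) % 2 != 0:
        k = np.argmax(np.abs(y - f))                 # worst-rounded coordinate
        f[k] += 1.0 if y[k] >= f[k] else -1.0        # push it to the other neighbor
    return f * scale

# toy usage: quantize a 4-D vector to a scaled D_4 lattice
print(quantize_Dn([0.4, 1.3, -0.7, 2.2], scale=0.5))
```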

18.
A technique for directly designing a variable-rate tree-structured vector quantizer by growing the tree one node at a time, rather than one layer at a time, is presented. The technique is a natural extension of a tree-growing method for decision trees. When the tree is pruned with a generalized algorithm for optimally pruning trees, an improvement in signal-to-noise ratio at high rates is measured over pruning a fixed-rate tree-structured vector quantizer of the same initial rate. The growing algorithm can be interpreted as a constrained inverse of the pruning algorithm.
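A greedy stand-in sketch for node-at-a-time growing (Python/NumPy, illustrative names): at each step, the leaf whose two-codeword split yields the largest distortion decrease is split. Only the splitting order is illustrated; the paper's growing criterion, the stored tree used for encoding, and the pruning step are not reproduced.

```python
import numpy as np

def two_means(data, iters=10):
    """Split one cell into two codewords with a few Lloyd iterations."""
    m = data.mean(axis=0)
    c = np.array([m * 0.999, m * 1.001 + 1e-6])          # tiny perturbation to break symmetry
    for _ in range(iters):
        a = np.argmin(((data[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
        for i in (0, 1):
            if np.any(a == i):
                c[i] = data[a == i].mean(axis=0)
    a = np.argmin(((data[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
    return c, a

def grow_tsvq(data, max_leaves=8):
    """Grow an unbalanced codebook one leaf split at a time (greedy sketch)."""
    leaves = [(data, data.mean(axis=0))]                  # (cell data, codeword) pairs
    while len(leaves) < max_leaves:
        best = None
        for idx, (cell, cw) in enumerate(leaves):
            if len(cell) < 2:
                continue
            d_before = ((cell - cw) ** 2).sum()
            c, a = two_means(cell)
            d_after = sum(((cell[a == i] - c[i]) ** 2).sum() for i in (0, 1))
            gain = d_before - d_after                     # distortion decrease if split
            if best is None or gain > best[0]:
                best = (gain, idx, c, a)
        if best is None:
            break
        _, idx, c, a = best
        cell, _ = leaves.pop(idx)                         # replace the split leaf by its children
        leaves += [(cell[a == 0], c[0]), (cell[a == 1], c[1])]
    return np.array([cw for _, cw in leaves])

# toy usage: grow an 8-leaf codebook for 2-D Gaussian data
codebook = grow_tsvq(np.random.default_rng(0).normal(size=(2000, 2)), max_leaves=8)
print(codebook.shape)                                     # (8, 2)
```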

19.
Upper bounds on the asymptotic performance of block quantizers with difference distortion measures are derived. In many cases, these upper bounds approach known lower bounds as the block length of the quantizer approaches infinity. A condition for the optimal point density function of the output levels is derived and shown to particularize to a known result of Gersho. The behavior of the bounds for large block lengths is investigated.

20.
Neural network pruning techniques can be effective in accelerating neural network models, making it possible to deploy them on edge devices. In this paper, we propose to prune neural networks using data variance. Unlike other existing methods, this approach is relatively robust: its pruning criterion is not invalidated by the number of data batches or the number of training sessions. We also propose a pruning compensation technique, which fuses the information of each pruned convolutional kernel into the closest remaining kernel. This fusion operation helps retain the pruned information. We evaluate the proposed method on a number of standard datasets and compare it with several current state-of-the-art methods, and it consistently achieves better performance. For example, on Tiny ImageNet, our method can prune 54.2% of the FLOPs of ResNet50 while obtaining a 0.22% accuracy improvement.
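A rough NumPy stand-in for the variance-based pruning and compensation idea described above: output channels with the lowest activation variance are removed, and each pruned kernel is added into the most similar retained kernel. The function, the similarity measure, and the keep ratio are illustrative assumptions, not the paper's method.

```python
import numpy as np

def prune_conv_by_variance(weight, activations, keep_ratio=0.5):
    """Variance-guided output-channel pruning with a simple compensation step.

    weight: (out_ch, in_ch, k, k) convolution kernels; activations: a batch of
    this layer's outputs, shape (batch, out_ch, H, W).  Channels whose
    activations vary least over the data are pruned, and each pruned kernel is
    added into the most similar retained kernel (cosine similarity).
    """
    var = activations.var(axis=(0, 2, 3))                    # per-channel data variance
    n_keep = max(1, int(round(keep_ratio * weight.shape[0])))
    keep = np.sort(np.argsort(var)[-n_keep:])                # retain high-variance channels
    drop = np.setdiff1d(np.arange(weight.shape[0]), keep)
    new_w = weight[keep].copy()
    base = weight[keep].reshape(n_keep, -1)
    for d in drop:
        v = weight[d].reshape(-1)
        sims = base @ v / (np.linalg.norm(base, axis=1) * np.linalg.norm(v) + 1e-12)
        new_w[np.argmax(sims)] += weight[d]                  # fuse the pruned kernel in
    return keep, new_w

# toy usage: 16 output channels, keep half of them
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 8, 3, 3))
acts = rng.normal(size=(32, 16, 10, 10)) * rng.uniform(0.1, 2.0, size=(1, 16, 1, 1))
kept, fused = prune_conv_by_variance(w, acts)
print(kept, fused.shape)                                     # 8 channel indices, (8, 8, 3, 3)
```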
