1.
A precoding scheme for noise whitening on intersymbol interference (ISI) channels is presented. This scheme is compatible with trellis-coded modulation and, unlike Tomlinson precoding, allows constellation shaping. It can be used with almost any shaping scheme, including the optimal SVQ shaping, as opposed to trellis precoding, which can be used only with trellis shaping. The implementation complexity of this scheme is minimal: only three times that of the noise prediction filter, so effective noise whitening can be achieved by using a high-order predictor.
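As a concrete illustration of the noise-prediction idea the precoder relies on, the sketch below fits a least-squares FIR predictor to correlated noise and forms the whitened prediction error. This is a generic illustration, not the paper's precoder; the function names and the fourth-order default are arbitrary choices.

```python
import numpy as np

def fit_noise_predictor(noise, order=4):
    """Least-squares FIR noise predictor: estimate n[t] from the
    `order` previous samples (the role of the noise prediction filter)."""
    # Column k holds the lag-(k+1) samples aligned with the targets.
    X = np.column_stack([noise[order - 1 - k:-1 - k] for k in range(order)])
    y = noise[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def whiten(noise, a):
    """Prediction error: correlated noise minus its linear prediction."""
    order = len(a)
    X = np.column_stack([noise[order - 1 - k:-1 - k] for k in range(order)])
    return noise[order:] - X @ a
```

On strongly correlated noise (e.g. an AR(1) process), the prediction error has markedly lower variance than the raw noise, which is the sense in which a high-order predictor whitens effectively.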
2.
For Part I see ibid., vol. 4, no. 4, p. 405 (1995). The use of the image model of Part I is investigated in the context of image compression. The model decomposes the image into a primary component that contains the strong edge information, a smooth component that represents the background slow-intensity variations, and a texture component that contains the textures. The primary component, which is known to be perceptually important, is encoded separately by encoding the intensity and geometric information of the strong edge contours. Two alternatives for coding the smooth and texture components are studied: entropy-coded adaptive DCT and entropy-coded subband coding. It is shown via simulations that the proposed schemes, which can be thought of as a hybrid of waveform coding and feature-based coding techniques, result in both subjective and objective performance improvements over several other image coding schemes and, in particular, over the JPEG continuous-tone image compression standard. These improvements are especially noticeable at low bit rates. Furthermore, it is shown that a perceptual tuning based on the contrast sensitivity of the human visual system can be used in the DCT-based scheme, which, in conjunction with the three-component model, leads to additional subjective performance improvements.
3.
A study of vector quantization for noisy channels
Farvardin N. IEEE Transactions on Information Theory, 1990, 36(4): 799-809
Several issues related to vector quantization for noisy channels are discussed. An algorithm based on simulated annealing is developed for assigning binary codewords to the vector quantizer code-vectors. It is shown that this algorithm can result in dramatic performance improvements as compared to randomly selected codewords. A modification of the simulated annealing algorithm for binary codeword assignment is developed for the case where the bits in the codeword are subject to unequal error probabilities (resulting from unequal levels of error protection). An algorithm for the design of an optimal vector quantizer for a noisy channel is briefly discussed, and its robustness under channel mismatch conditions is studied. Numerical results for a stationary first-order Gauss-Markov source and a binary symmetric channel are provided. It is concluded that the channel-optimized vector quantizer design algorithm, if used carefully, can result in a fairly robust system with no additional delay. The case in which the communication channel is nonstationary (as in mobile radio channels) is studied, and some preliminary ideas for quantizer design are presented.
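The simulated-annealing index assignment can be sketched as follows: starting from an arbitrary mapping of binary codewords to codevectors, random pairwise swaps are accepted if they lower the expected channel distortion over a binary symmetric channel, or with a temperature-dependent probability otherwise. This is a minimal generic sketch, not the paper's algorithm; the cooling schedule, parameter values, and equiprobable-codevector assumption are all illustrative.

```python
import math
import random
import numpy as np

def channel_distortion(codebook, perm, ber):
    """Expected distortion on a BSC: sent index i is received as j with
    probability ber^d (1-ber)^(b-d), d = Hamming distance; codevectors
    are assumed equiprobable."""
    n = len(codebook)
    b = int(math.log2(n))
    total = 0.0
    for i in range(n):
        for j in range(n):
            d = bin(i ^ j).count("1")
            p = (ber ** d) * ((1 - ber) ** (b - d))
            diff = codebook[perm[i]] - codebook[perm[j]]
            total += p * float(np.dot(diff, diff))
    return total / n

def anneal_assignment(codebook, ber=0.01, t0=1.0, alpha=0.97, iters=2000):
    """Simulated annealing over permutations of the index assignment."""
    n = len(codebook)
    perm = list(range(n))               # perm[channel index] -> codevector
    cost = channel_distortion(codebook, perm, ber)
    t = t0
    for _ in range(iters):
        i, j = random.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]        # propose a swap
        new = channel_distortion(codebook, perm, ber)
        if new < cost or random.random() < math.exp((cost - new) / t):
            cost = new                             # accept
        else:
            perm[i], perm[j] = perm[j], perm[i]    # revert
        t *= alpha                                 # cool down
    return perm, cost
```

With zero crossover probability the expected channel distortion of any assignment is zero, which is a useful sanity check on the cost function.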
4.
Goblirsch D.M., Farvardin N. IEEE Transactions on Information Theory, 1992, 38(5): 1455-1473
An algorithm for designing switched scalar quantizers for hidden Markov sources is described. The design problem is cast as a nonlinear optimization problem. The optimization variables are the thresholds and reproduction levels for each quantizer and the parameters defining the next-quantizer map. The cost function is the average distortion incurred by the system in steady state. The next-quantizer map is treated as a stochastic map so that all of the optimization variables are continuous-valued, allowing the use of a gradient-based optimization procedure. This approach solves a major problem in the design of switched scalar quantizing systems, namely, that of determining an optimal next-quantizer decision rule. Details are given for computing the cost function and its gradient for weighted-squared-error distortion. Simulation results comparing the new system to current systems show that the present system performs better. It is observed that the optimal system can in fact have a next-quantizer map with stochastic components.
5.
An embedded source code allows the decoder to reconstruct the source progressively from the prefixes of a single bit stream. It is desirable to design joint source-channel coding schemes which retain the capability of progressive reconstruction in the presence of channel noise or packet loss. Here, we address the problem of joint source-channel coding of images for progressive transmission over memoryless bit error or packet erasure channels. We develop a framework for encoding based on embedded source codes and embedded error-correcting and error-detecting channel codes. For a target transmission rate, we provide solutions and an algorithm for the design of optimal unequal error/erasure protection. Three performance measures are considered: the average distortion, the average peak signal-to-noise ratio, and the average useful source coding rate. Under the assumption of rate compatibility of the underlying channel codes, we provide necessary conditions for progressive transmission of joint source-channel codes. We also show that the unequal error/erasure protection policies that maximize the average useful source coding rate allow progressive transmission with optimal unequal protection at a number of intermediate rates.
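The trade-off behind unequal error/erasure protection can be illustrated with a toy model: an embedded bitstream is split into blocks, each block gets one of a few channel-code rates, and the decoder stops at the first undecodable block. The expected useful source rate then weights each block's payload by the probability that all blocks up to and including it decoded. The code rates and failure probabilities below are made-up numbers, and the exhaustive search stands in for the paper's optimization algorithm.

```python
from itertools import product

# Hypothetical channel-code options: (code rate, residual block failure prob).
CODE_OPTIONS = [(0.5, 0.001), (0.75, 0.01), (0.9, 0.05)]

def expected_useful_rate(rates, block_bits=1000):
    """Expected useful source bits when the decoder stops at the first
    undecodable block (progressive reconstruction from a prefix)."""
    ok = 1.0          # probability that all blocks so far decoded
    useful = 0.0
    for r, p_fail in rates:
        ok *= (1.0 - p_fail)
        useful += ok * r * block_bits   # payload counts only if reachable
    return useful

def best_protection(n_blocks=3):
    """Exhaustive search over per-block protection (small n only)."""
    best, best_rates = -1.0, None
    for rates in product(CODE_OPTIONS, repeat=n_blocks):
        u = expected_useful_rate(rates)
        if u > best:
            best, best_rates = u, rates
    return best_rates, best
```

Because earlier blocks gate the usefulness of everything after them, the optimal policy generally protects the prefix at least as strongly as the tail, which is the intuition behind the progressive-transmission conditions discussed in the abstract.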
6.
Laroia R., Farvardin N. IEEE Transactions on Information Theory, 1994, 40(3): 860-870
The paper describes a structured vector quantization approach for stationary memoryless sources that combines the scalar-vector quantizer (SVQ) ideas (Laroia and Farvardin, 1993) with trellis coded quantization (Marcellin and Fischer, 1990). The resulting quantizer is called the trellis-based scalar-vector quantizer (TB-SVQ). The SVQ structure allows the TB-SVQ to realize a large boundary gain while the underlying trellis code enables it to achieve a significant portion of the total granular gain. For large block-lengths and powerful (possibly complex) trellis codes the TB-SVQ can, in principle, achieve the rate-distortion bound. As indicated by the results obtained, even for reasonable block-lengths and relatively simple trellis codes, the TB-SVQ outperforms all other fixed-rate quantizers at reasonable complexity.
7.
8.
A method is presented for the joint source-channel coding optimization of a scheme based on the two-dimensional block cosine transform when the output of the encoder is to be transmitted via a memoryless binary symmetric channel. The authors' approach involves an iterative algorithm for the design of the quantizers (in the presence of channel errors) used for encoding the transform coefficients. This algorithm produces a set of locally optimum (in the mean-squared error sense) quantizers and the corresponding binary codeword assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, the authors have used an algorithm based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Simulation results for the performance of this locally optimum system over noisy channels have been obtained, and appropriate comparisons with a reference system designed for no channel errors have been made. It is shown that substantial performance improvements can be obtained by using this scheme. Furthermore, theoretically predicted results and rate-distortion-theoretic bounds for an assumed two-dimensional image model are provided.
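A channel-optimized quantizer of the kind described can be sketched with a Lloyd-style iteration in which both the encoder rule and the centroid update account for the channel transition probabilities. The sketch below is for a scalar quantizer over a binary symmetric channel, a simplification of the per-coefficient design in the paper; the quantile initialization and iteration count are arbitrary choices.

```python
import numpy as np

def design_cosq(samples, n_levels=4, ber=0.01, iters=30):
    """Lloyd-style iteration for a channel-optimized scalar quantizer
    over a BSC (simplified sketch of channel-optimized design)."""
    b = int(np.log2(n_levels))
    idx = np.arange(n_levels)
    # BSC transition matrix: P[j, i] = P(receive index j | send index i).
    d = np.array([[bin(i ^ j).count("1") for i in idx] for j in idx])
    P = (ber ** d) * ((1 - ber) ** (b - d))
    c = np.quantile(samples, (idx + 0.5) / n_levels)   # initial codebook
    for _ in range(iters):
        # Encoder: send the index minimizing EXPECTED channel distortion,
        # i.e. sum_j P(j|i) * (x - c_j)^2, not plain nearest-neighbor.
        dist = ((samples[:, None] - c[None, :]) ** 2) @ P  # shape (N, L)
        assign = dist.argmin(axis=1)
        # Decoder: centroid update weighted by transition probabilities.
        for j in idx:
            w = P[j, assign]             # P(receive j | sent assign[n])
            if w.sum() > 0:
                c[j] = (w * samples).sum() / w.sum()
    return c, assign
```

Setting `ber=0` collapses both steps to the ordinary Lloyd algorithm, which is why a system designed for no channel errors appears as the natural reference system in the comparisons above.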
9.
Alajaji F., Phamdo N., Farvardin N., Fuja T.E. IEEE Transactions on Information Theory, 1996, 42(1): 230-239
We consider maximum a posteriori (MAP) detection of a binary asymmetric Markov source transmitted over a binary Markov channel. The MAP detector observes a long (but finite) sequence of channel outputs and determines the most probable source sequence. In some cases, the MAP detector can be implemented by simple rules such as the “believe what you see” rule or the “guess zero (or one) regardless of what you see” rule. We provide necessary and sufficient conditions under which this is true. When these conditions are satisfied, the exact bit error probability of the sequence MAP detector can be determined. We examine in detail two special cases of the above source: (i) binary independent and identically distributed (i.i.d.) source and (ii) binary symmetric Markov source. In case (i), our simulations show that the performance of the MAP detector improves as the channel noise becomes more correlated. Furthermore, a comparison of the proposed system with a (substantially more complex) traditional tandem source-channel coding scheme portrays superior performance for the proposed scheme at relatively high channel bit error rates. In case (ii), analytical as well as simulation results show the existence of a “mismatch” between the source and the channel (the performance degrades as the channel noise becomes more correlated). This mismatch is reduced by the use of a simple rate-one convolutional encoder.
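For the simplest special case (i.i.d. source over a memoryless BSC), sequence MAP detection reduces to a bitwise rule, which makes the “believe what you see” and “guess zero (or one) regardless” regimes easy to see. The sketch below covers this special case only, not the Markov-source/Markov-channel detector analyzed in the paper; the function names are ours.

```python
def map_detect_bit(y, p0, eps):
    """Bitwise MAP detection of an i.i.d. binary source over a BSC.
    p0 = P(source bit = 0), eps = channel crossover probability."""
    # Unnormalized posteriors of each source bit given the received bit y.
    score0 = p0 * (eps if y == 1 else 1 - eps)
    score1 = (1 - p0) * (eps if y == 0 else 1 - eps)
    return 0 if score0 >= score1 else 1

def detector_rule(p0, eps):
    """Which simple rule the MAP detector reduces to for these parameters."""
    outs = [map_detect_bit(y, p0, eps) for y in (0, 1)]
    if outs == [0, 1]:
        return "believe what you see"
    return f"guess {outs[0]} regardless"
```

For a strongly biased source the channel output is ignored once the crossover probability is large enough relative to the source bias, while a mildly noisy channel is still believed; the conditions in the abstract characterize exactly this boundary (in the more general Markov setting).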
10.
We present a new classification scheme, dubbed spectral classification, which uses the spectral characteristics of the image blocks to classify them into one of a finite number of classes. A vector quantizer with an appropriate distortion measure is designed to perform the classification operation. The application of the proposed spectral classification scheme is then demonstrated in the context of adaptive image coding. It is shown that the spectral classifier outperforms gain-based classifiers while requiring a lower computational complexity.
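The spectral-classification idea can be sketched as nearest-centroid classification of gain-normalized DCT magnitude spectra: each block's spectral shape (with the DC term removed) serves as the feature vector, and a small codebook of centroids acts as the classifying vector quantizer. The orthonormal DCT construction, feature normalization, and squared-error distortion below are illustrative assumptions, not the paper's exact distortion measure.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix (rows = frequencies, columns = samples)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def spectral_features(blocks):
    """Gain-normalized DCT magnitude spectra of 8x8 blocks, DC removed,
    so classification depends on spectral shape rather than block gain."""
    D = dct_matrix(8)
    spec = np.abs(np.einsum('ij,njk,lk->nil', D, blocks, D))  # D B D^T
    spec[:, 0, 0] = 0.0                                       # drop DC
    feats = spec.reshape(len(blocks), -1)
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    return feats / np.maximum(norms, 1e-12)

def classify(blocks, centroids):
    """Nearest-centroid classification: a VQ acting as the classifier."""
    f = spectral_features(blocks)
    d = ((f[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```

A smooth ramp block and a checkerboard block land in different classes because their energy concentrates at opposite ends of the spectrum, while the normalization makes the classifier insensitive to block gain, in contrast to gain-based classifiers.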