Similar articles
20 similar articles found.
1.
The behavior of linear-phase wavelet transforms in low bit-rate image coding is investigated. The influence of certain characteristics of these transforms, such as regularity, number of vanishing moments, filter length, coding gain, frequency selectivity, and the shape of the wavelets, on coding performance is analyzed. The performance of the wavelet transforms is assessed based on a first-order Markov source and on image quality, using subjective tests: more than 20 wavelet transforms of a test image were coded with a product-code lattice quantizer, and the image quality was rated by different viewers. The results show that, as long as the wavelet transforms perform reasonably well, features like regularity and number of vanishing moments do not have any important impact on final image quality. The influence of the coding gain by itself is also small. On the other hand, the shape of the synthesis wavelet, which determines the visibility of coding errors in reconstructed images, is very important. Analysis of the data obtained strongly suggests that the design of good wavelet transforms for low bit-rate image coding should take into account chiefly the shape of the synthesis wavelet and, to a lesser extent, the coding gain.

2.
In this work, we present a coding scheme based on a rate-distortion optimum wavelet packets decomposition and on an adaptive coding procedure that exploits spatial non-stationarity within each subband. We show, by means of a generalization of the concept of coding gain to the case of non-stationary signals, that it may be convenient to perform subband decomposition optimization in conjunction with intraband optimal bit allocation. In our implementation, each subband is partitioned into blocks of coefficients that are coded using a geometric vector quantizer with a rate determined on the basis of spatially local statistical characteristics. The proposed scheme appears to be simpler than other wavelet packets-based schemes presented in the literature and achieves good results in terms of both compression and visual quality.
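Setting a block's rate from its local statistics, as this abstract describes, can be illustrated with the classical high-resolution allocation rule; this is a generic sketch (the function name, block size, and guard constant are my own choices), not the paper's actual procedure:

```python
import numpy as np

def blockwise_rates(band, block=8, mean_rate=2.0):
    """Assign a coding rate to each block of subband coefficients from its
    local variance, using the standard high-resolution allocation rule
    R_i = R_mean + 0.5*log2(var_i / geometric-mean variance)."""
    h, w = band.shape
    variances = np.array([[band[i:i + block, j:j + block].var()
                           for j in range(0, w, block)]
                          for i in range(0, h, block)])
    variances = np.maximum(variances, 1e-12)   # guard against log(0)
    gmean = np.exp(np.log(variances).mean())   # geometric mean of block variances
    return mean_rate + 0.5 * np.log2(variances / gmean)
```

Averaged over blocks the returned rates equal `mean_rate`, while high-activity blocks receive proportionally more bits.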

3.
Advances in residual vector quantization (RVQ) are surveyed. Definitions of joint encoder optimality and joint decoder optimality are discussed. Design techniques for RVQs with large numbers of stages and generally different encoder and decoder codebooks are elaborated and extended. Fixed-rate RVQs, and variable-rate RVQs that employ entropy coding are examined. Predictive and finite state RVQs designed and integrated into neural-network based source coding structures are revisited. Successive approximation RVQs that achieve embedded and refinable coding are reviewed. A new type of successive approximation RVQ that varies the instantaneous block rate by using different numbers of stages on different blocks is introduced and applied to image waveforms, and a scalar version of the new residual quantizer is applied to image subbands in an embedded wavelet transform coding system.

4.
Wavelet packet image coding using space-frequency quantization
We extend our previous work on space-frequency quantization (SFQ) for image coding from wavelet transforms to the more general wavelet packet transforms. The resulting wavelet packet coder offers a universal transform coding framework within the constraints of filterbank structures by allowing joint transform and quantizer design without assuming a priori statistics of the input image. In other words, the new coder adaptively chooses the representation to suit the image and the quantization to suit the representation. Experimental results show that, for some image classes, our new coder gives excellent coding performance.

5.
In this paper, a new wavelet transform image coding algorithm is presented. The discrete wavelet transform (DWT) is applied to the original image. The DWT coefficients are first quantized with a uniform scalar dead-zone quantizer. Then the quantized coefficients are decomposed into four symbol streams: a binary significance-map symbol stream, a binary sign stream, a position-of-the-most-significant-bit (PMSB) symbol stream, and a residual bit stream. An adaptive arithmetic coder with different context models is employed for the entropy coding of these symbol streams. Experimental results show that the compression performance of the proposed coding algorithm is competitive with other wavelet-based image coding algorithms reported in the literature.
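The dead-zone quantization and four-stream decomposition described above can be sketched directly; the dead-zone width and the helper names are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def deadzone_quantize(coeffs, step, deadzone=1.0):
    """Uniform scalar quantizer with a dead zone around zero:
    coefficients with |c| < deadzone*step map to index 0."""
    signs = np.sign(coeffs)
    mags = np.abs(coeffs)
    idx = np.where(mags < deadzone * step, 0,
                   np.floor((mags - deadzone * step) / step).astype(int) + 1)
    return signs.astype(int) * idx

def split_symbol_streams(indices):
    """Decompose quantizer indices into the four streams the abstract names:
    significance map, signs, position of the most significant bit, residual bits."""
    significance = (indices != 0).astype(int)           # binary significance map
    nonzero = indices[indices != 0]
    signs = (nonzero > 0).astype(int)                   # sign of significant coeffs
    mags = np.abs(nonzero)
    pmsb = np.floor(np.log2(mags)).astype(int)          # position of the MSB
    residual = [format(m, 'b')[1:] for m in mags]       # bits below the MSB
    return significance, signs, pmsb, residual

c = np.array([0.2, -3.7, 12.5, -0.9, 6.1])
q = deadzone_quantize(c, step=1.0)
sig, sg, pmsb, res = split_symbol_streams(q)
```

Each stream can then be fed to its own context-modeled arithmetic coder, as the abstract describes.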

6.
Entropy coding is a well-known technique for reducing the rate of a quantizer. It plays a particularly important role in universal quantization, where the quantizer codebook is not matched to the source statistics. We investigate the gain due to entropy coding by considering the entropy of the index of the first codeword, in a mismatched random codebook, that D-matches the source word. We show that the index entropy is strictly lower than the "uncoded" rate of the code, provided that the entropy is conditioned on the codebook. The number of bits saved by conditional entropy coding is equal to the divergence between the "favorite type" (the limiting empirical distribution of the first D-matching codeword) and the codebook-generating distribution. Specific examples are provided.
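The opening claim, that entropy coding reduces a quantizer's rate, is easy to check numerically. The toy example below (a 15-level uniform quantizer on a Gaussian source, with parameters chosen for illustration) shows only that first sentence, not the paper's mismatched-codebook analysis:

```python
import numpy as np

# A Gaussian source quantized by a 15-level uniform quantizer (step 0.5,
# indices clipped to [-7, 7]). The index distribution is far from uniform,
# so the empirical index entropy is well below the fixed-rate cost log2(15).
rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
idx = np.clip(np.round(x / 0.5), -7, 7).astype(int)
_, counts = np.unique(idx, return_counts=True)
p = counts / counts.sum()
index_entropy = -(p * np.log2(p)).sum()   # achievable entropy-coded rate
fixed_rate = np.log2(15)                  # "uncoded" rate of the code
```

Here the entropy-coded rate lands near 3.05 bits/sample against a fixed rate of about 3.91.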

7.
Geometric source coding and vector quantization
A geometric formulation is presented for source coding and vector quantizer design. Motivated by the asymptotic equipartition principle, the authors consider two broad classes of source codes and vector quantizers: elliptical codes and quantizers based on the Gaussian density function, and pyramid codes and quantizers based on the Laplacian density function. Elliptical and weighted pyramid vector quantizers are developed by selecting codewords as points in a lattice that lie on (or near) a specified ellipse or pyramid. The combination of geometric structure and lattice basis allows simple encoding and decoding algorithms.
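A minimal sketch of the pyramid codebook and a brute-force encoder, assuming the cubic lattice Z^n; practical pyramid vector quantizer encoders use an efficient scale-and-round projection rather than the exhaustive search shown here:

```python
from itertools import product

def pyramid_codebook(n, K):
    """Points of Z^n on the L1-sphere sum|x_i| == K: the codebook of a
    pyramid vector quantizer, matched to the Laplacian density."""
    return [p for p in product(range(-K, K + 1), repeat=n)
            if sum(map(abs, p)) == K]

def pvq_encode(v, codebook):
    """Brute-force nearest codeword in squared error (fine for small n and K)."""
    return min(codebook, key=lambda c: sum((a - b) ** 2 for a, b in zip(v, c)))

cb = pyramid_codebook(3, 2)            # 18 codewords on the pyramid
best = pvq_encode([1.2, -0.7, 0.1], cb)
```

The lattice structure is what makes the simple encoding and decoding the abstract mentions possible: codewords can be indexed combinatorially instead of stored.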

8.
A robust quantizer is developed for encoding memoryless sources and transmission over the binary symmetric channel (BSC). The system combines channel optimized scalar quantization (COSQ) with all-pass filtering, the latter performed using a binary phase-scrambling/descrambling method. Applied to a broad class of sources, the robust quantizer achieves the same performance as the Gaussian COSQ for the memoryless Gaussian source. This quantizer is used in image coding for transmission over a BSC. The peak signal-to-noise ratio (PSNR) performance degrades gracefully as the channel bit error rate increases.

9.
Adaptive compression methods have been a key component of many proposed subband (or wavelet) image coding techniques. This paper deals with a particular type of adaptive subband image coding where we focus on the image coder's ability to adjust itself "on the fly" to the spatially varying statistical nature of image contents. This backward adaptation is distinguished from the more frequently used forward adaptation in that forward adaptation selects the best operating parameters from a predesigned set and thus uses a considerable amount of side information in order for the encoder and the decoder to operate with the same parameters. Specifically, we present backward adaptive quantization using a new context-based classification technique which classifies each subband coefficient based on the surrounding quantized coefficients. We couple this classification with online parametric adaptation of the quantizer applied to each class. A simple uniform threshold quantizer is employed as the baseline quantizer for which adaptation is achieved. Our subband image coder based on the proposed adaptive classification quantization idea exhibits excellent rate-distortion performance, in particular at very low rates. For popular test images, it is comparable or superior to most state-of-the-art coders in the literature.
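The backward-adaptation idea, classifying each coefficient from already-quantized causal neighbors so the decoder needs no side information, can be sketched as follows; the three activity classes and per-class step sizes are illustrative choices, not the paper's actual design:

```python
import numpy as np

def backward_adaptive_quantize(band, base_step):
    """Backward-adaptive classification sketch: each coefficient's class is
    derived from already-quantized causal neighbors (left and above), so a
    decoder can repeat the classification exactly, with no side information."""
    steps = base_step * np.array([2.0, 1.0, 0.5])  # coarser step for calm areas
    h, w = band.shape
    q = np.zeros_like(band, dtype=float)
    for i in range(h):
        for j in range(w):
            left = abs(q[i, j - 1]) if j > 0 else 0.0
            up = abs(q[i - 1, j]) if i > 0 else 0.0
            activity = left + up                    # context from quantized data only
            cls = 0 if activity < base_step else (1 if activity < 4 * base_step else 2)
            s = steps[cls]
            q[i, j] = np.round(band[i, j] / s) * s  # baseline uniform quantizer
    return q
```

Because the class depends only on `q`, encoder and decoder stay synchronized by construction, which is the point of backward adaptation.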

10.
We study how to recover the data of an unknown source when side information is available at the decoder, i.e., the quantization, coding, and function reconstruction of a noisy source. Wyner-Ziv (WZ) high-rate quantization and transform coding theory is extended to the case where the noise is observed at the encoder, and experiments illustrate the performance characteristics under different constraints. It is verified that approximately the same rate-distortion performance can be maintained while constructing the optimal quantizer.

11.
With a facsimile bi-level quantizer, notches often occur along the white and black borders of a document when the output of a photosensor oscillates around a threshold. The objectives of this work are, first, to present a notchless bi-level quantizer and, second, to improve coding efficiency by using that quantizer. It is pointed out that quantization involves many important aspects, from aesthetic as well as coding-efficiency standpoints, and that significant improvement in coding efficiency is achieved through the use of the notchless bi-level quantizer.
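The abstract does not specify how the notchless quantizer works; one classical remedy for the threshold chatter it describes is a two-threshold (hysteresis) quantizer, sketched here purely to illustrate the problem, not as the paper's method:

```python
def hysteresis_binarize(samples, low, high):
    """Bi-level quantization with two thresholds: the output only flips to
    black when the sensor value rises above `high` and back to white when it
    falls below `low`, so small oscillations around a single level cannot
    produce the notchy 0/1/0/1 chatter of a one-threshold quantizer."""
    out, state = [], 0  # 0 = white, 1 = black
    for s in samples:
        if state == 0 and s >= high:
            state = 1
        elif state == 1 and s <= low:
            state = 0
        out.append(state)
    return out
```

A sensor trace hovering around 0.5 stays white under hysteresis, whereas a single threshold at 0.5 would emit an alternating, notch-producing pattern.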

12.
We design a conceptual transmission scheme that adjusts the rate and power of data codewords sent over a slowly fading channel, when quantized and possibly erroneous channel state information (CSI) is available at the transmitter. The goal is to maximize the data throughput or the expected data rate using a multi-layer superposition coding technique and temporal power control at the transmitter. The main challenge is to design a CSI quantizer structure for a noisy feedback link. This structure resembles conventional joint source and channel coding schemes, however with a newly introduced quasi-Gray bit mapping. Our results show that with proper CSI quantizer design, even erroneous feedback can provide performance gains. Also, with an unreliable feedback link, superposition coding provides significant gains when the feedback channel is poorly conditioned and channel uncertainty at the transmitter is severe, whereas power control is more effective with more reliable feedback.

13.
In a causal source coding system, the reconstruction of the present source sample is restricted to be a function of the present and past source samples, while the code stream itself may be noncausal and have variable rate. Neuhoff and Gilbert showed that for memoryless sources, optimum performance among all causal source codes is achieved by time-sharing at most two memoryless codes (quantizers) followed by entropy coding. In this work, we extend Neuhoff and Gilbert's result in the limit of small distortion (high resolution) to two new settings. First, we show that at high resolution, an optimal causal code for a stationary source with finite differential entropy rate consists of a uniform quantizer followed by a (sequence) entropy coder. This implies that the price of causality at high resolution is approximately 0.254 bit, i.e., the space-filling loss of the uniform quantizer. Then, we consider individual sequences and introduce a deterministic analogue of differential entropy, which we call "Lempel-Ziv differential entropy." We show that for any bounded individual sequence with finite Lempel-Ziv differential entropy, optimum high-resolution performance among all finite-memory variable-rate causal codes is achieved by dithered scalar uniform quantization followed by Lempel-Ziv coding. As a by-product, we also prove an individual-sequence version of the Shannon lower bound.
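The quoted 0.254-bit price of causality is the space-filling loss of the scalar uniform quantizer, which can be computed directly:

```python
import math

# At high resolution, entropy-coded uniform quantization spends
# 0.5 * log2(2*pi*e / 12) bits per sample more than the Shannon lower
# bound at the same mean-squared distortion -- the space-filling loss.
loss_bits = 0.5 * math.log2(2 * math.pi * math.e / 12)
```

Evaluating the expression gives roughly 0.2546 bits per sample, matching the figure cited in the abstract.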

14.
黄博强, 陈建华, 汪源源. 《电子学报》 (Acta Electronica Sinica), 2008, 36(9): 1810-1813
A two-dimensional, context-model-based compression scheme for ECG signals is proposed. R-wave features are identified by modulus-maxima detection and cyclic matching, an ECG image is constructed automatically, and a coding data map is built from the cardiac-cycle information. The ECG image then undergoes a one-dimensional discrete wavelet transform and uniform quantization with a dead zone; the quantized coefficients are decomposed into a significance map, a sign stream, a most-significant-bit position stream, and a residual bit stream, and finally context-based adaptive arithmetic coding is applied in combination with the coding data map. Experiments compress two data sets from the MIT-BIH arrhythmia database. At a compression ratio of 20, the percent root-mean-square difference of the new scheme is 2.93% and 4.31% respectively, lower than the 3.26% and 4.8% of the JPEG2000-based scheme. The results show that the new scheme outperforms other ECG compression algorithms.

15.
Universal trellis coded quantization
A new form of trellis coded quantization based on uniform quantization thresholds and "on-the-fly" quantizer training is presented. The universal trellis coded quantization (UTCQ) technique requires neither stored codebooks nor a computationally intense codebook design algorithm. Its performance is comparable with that of fully optimized entropy-constrained trellis coded quantization (ECTCQ) for most encoding rates. The codebook and trellis geometry of UTCQ are symmetric with respect to the trellis superset. This allows sources with a symmetric probability density to be encoded with a single variable-rate code. Rate allocation and quantizer modeling procedures are given for UTCQ which allow access to continuous quantization rates. An image coding application based on adaptive wavelet coefficient subblock classification, arithmetic coding, and UTCQ is presented. The excellent performance of this coder demonstrates the efficacy of UTCQ. We also present a simple scheme to improve the perceptual performance of UTCQ for certain imagery at low bit rates. This scheme has the added advantage of being applied during image decoding, without the need to reencode the original image.

16.
Source encoding of images for bandwidth compression has become attractive in recent years because of decreasing hardware costs. By combining the source encoding approach with transform coding techniques, it is possible to obtain good image quality at low data rates. The general aspects of such a system are presented. The design of the quantizer for transform coefficients, which is the major source of error associated with the compression process, is considered using a visual fidelity criterion and subject to the constraint that the entropy of the quantizer be a prespecified quantity. A visually weighted suboptimal quantization scheme is developed to take into account the relative importance of different transform coefficients to the human visual system.

17.
Context modeling is widely used in image coding to improve compression performance. However, with no special treatment, the expected compression gain can be cancelled by the model cost introduced by high-order context models. Context quantization is an efficient way to deal with this problem. In this paper, we analyze the general context quantization problem in detail and show that context quantization is similar to a common vector quantization problem. If a suitable distortion measure is defined, the optimal context quantizer can be designed by a Lloyd-style iterative algorithm. This context quantization strategy is applied to an embedded wavelet coding scheme in which the significance-map symbols and sign symbols are coded directly by arithmetic coding with context models designed by the proposed quantization algorithm. Good coding performance is achieved.
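A minimal sketch of such a Lloyd-style context quantizer, assuming the KL divergence as the distortion measure and count-weighted centroids; the initialization and data shapes are illustrative, not the paper's specification:

```python
import numpy as np

def lloyd_context_quantize(cond_probs, counts, m, iters=20):
    """Group raw contexts into m quantized contexts, Lloyd style: each raw
    context is a conditional symbol distribution, distortion is the KL
    divergence to the group centroid, and each centroid is updated to the
    count-weighted mixture of its members."""
    n = len(cond_probs)
    centroids = cond_probs[np.linspace(0, n - 1, m, dtype=int)].copy()
    for _ in range(iters):
        # assignment step: nearest centroid under D(p || c)
        kl = np.array([[np.sum(p * np.log(p / c)) for c in centroids]
                       for p in cond_probs])
        labels = kl.argmin(axis=1)
        # update step: count-weighted centroid of each group
        for k in range(m):
            mask = labels == k
            if mask.any():
                w = counts[mask][:, None]
                centroids[k] = (w * cond_probs[mask]).sum(axis=0) / w.sum()
    return labels, centroids
```

This mirrors ordinary vector quantizer design, which is exactly the analogy the abstract draws: contexts play the role of training vectors and the KL divergence plays the role of the distortion measure.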

18.
This paper presents a video coding algorithm based on the biorthogonal wavelet transform and lattice vector quantization (LVQ). In this scheme, the wavelet transform decomposes each image into multiresolution subband images, multiresolution motion estimation (MRME) performs interframe prediction on the subbands, and lattice vector quantization encodes the prediction-error subband images, yielding a new video coding algorithm with good performance.

19.
Hierarchical partition priority wavelet image compression
Image compression methods for progressive transmission using optimal hierarchical decomposition, partition priority coding (PPC), and multiple distribution entropy coding (MDEC) are presented. In the proposed coder, a hierarchical subband/wavelet decomposition transforms the original image. The analysis filter banks are selected to maximize the reproduction fidelity in each stage of progressive image transmission. An efficient triple-state differential pulse code modulation (DPCM) method is applied to the smoothed subband coefficients, and the corresponding prediction error is Lloyd-Max quantized. Such a quantizer is also designed to fit the characteristics of the detail transform coefficients in each subband, which are then coded using novel hierarchical PPC (HPPC) and predictive HPPC (PHPPC) algorithms. More specifically, given a suitable partitioning of their absolute range, the quantized detail coefficients are ordered based on both their decomposition level and partition and then are coded along with the corresponding address map. Space-filling scanning further reduces the coding cost by providing a highly spatially correlated address map of the coefficients in each PPC partition. Finally, adaptive MDEC is applied to both the DPCM and HPPC/PHPPC outputs by considering a division of the source (quantized coefficients) into multiple subsources and adaptive arithmetic coding based on their corresponding histograms. Experimental results demonstrate the strong performance of the proposed compression methods.

20.
Joint source-channel coding for stationary memoryless and Gauss-Markov sources over binary Markov channels is considered. The channel is an additive-noise channel whose noise process is an Mth-order Markov chain. Two joint source-channel coding schemes are considered. The first is a channel-optimized vector quantizer, optimized for both source and channel. The second scheme consists of a scalar quantizer and a maximum a posteriori detector; here it is assumed that the scalar quantizer output has residual redundancy that the maximum a posteriori detector can exploit to combat the correlated channel noise. These two schemes are then compared against two schemes that use channel interleaving. Numerical results show that the proposed schemes outperform the interleaving schemes. For very noisy channels with high noise correlation, gains of 4-5 dB in signal-to-noise ratio are possible.
