Similar Articles (20 results)
1.
Wavelet-based lossless compression of coronary angiographic images (cited by 6)
The final diagnosis in coronary angiography has to be performed on a large set of original images. Therefore, lossless compression schemes play a key role in medical database management and telediagnosis applications. This paper proposes a wavelet-based compression scheme that is able to operate in the lossless mode. The quantization module implements a new way of coding the wavelet coefficients that is more effective than classical zerotree coding. The experimental results obtained on a set of 20 angiograms show that the algorithm outperforms the embedded zerotree coder combined with the integer wavelet transform by 0.38 bpp, the set partitioning coder by 0.21 bpp, and the lossless JPEG coder by 0.71 bpp. The scheme is a good candidate for radiological applications such as teleradiology and picture archiving and communication systems (PACS).
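The lossless mode of such coders rests on an integer-to-integer wavelet transform. As a minimal sketch, here is the Haar-based S-transform via lifting (illustrative only, not necessarily the transform used in the paper):

```python
import numpy as np

def s_transform_1d(x):
    """One level of the integer S-transform (Haar via lifting).
    Integer-to-integer, so the original samples are recovered
    exactly -- the property lossless wavelet coders rely on."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - even            # detail (high-pass) coefficients
    s = even + (d >> 1)       # approximation (low-pass), floor of the average
    return s, d

def inverse_s_transform_1d(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    even = s - (d >> 1)
    odd = d + even
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

A round trip through both functions returns the input unchanged, which is what makes a wavelet coder built on top of it losslessly invertible.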

2.
This paper proposes a new motion-compensated wavelet transform video coder for very low bit-rate visual telephony. The proposed coder sequentially employs: (1) selective motion estimation in the wavelet transform domain, (2) motion-compensated prediction (MCP) of wavelet coefficients, and (3) selective entropy-constrained vector quantization (ECVQ) of the resultant MCP errors. The selective schemes in motion estimation and in quantization, which efficiently exploit the characteristics of image sequences in visual telephony, considerably reduce the computational burden. The coder also employs a tree-structure encoding to represent efficiently which blocks were encoded. In addition, in order to reduce the number of ECVQ codebooks and the image dependency of their performance, we introduce a preprocessing of signals which normalizes the input vectors of ECVQ. Simulation results show that our video coder provides good PSNR (peak signal-to-noise ratio) performance and efficient rate control.
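Motion-compensated prediction starts from a block-matching search. A brute-force full-search baseline, the generic step that the abstract's selective scheme is designed to prune, can be sketched as follows (function and parameter names are illustrative):

```python
import numpy as np

def block_motion_search(ref, cur, bx, by, B=8, R=4):
    """Full-search motion estimation: find the displacement within
    +/-R pixels minimizing the sum of absolute differences (SAD)
    between the current BxB block and the reference frame."""
    best, mv = float('inf'), (0, 0)
    blk = cur[by:by + B, bx:bx + B].astype(int)
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + B > ref.shape[0] or x + B > ref.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(ref[y:y + B, x:x + B].astype(int) - blk).sum()
            if sad < best:
                best, mv = sad, (dy, dx)
    return mv, best
```

Selective schemes avoid running this search on blocks whose prediction error is already negligible, which is where the computational savings come from.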

3.
Many modern wavelet quantization schemes specify wavelet coefficient step sizes as continuous functions of an input step-size selection criterion; rate control is achieved by selecting an appropriate set of step sizes. In embedded wavelet coders, however, rate control is achieved simply by truncating the coded bit stream at the desired rate. The order in which wavelet data are coded implicitly controls quantization step sizes applied to create the reconstructed image. Since these step sizes are effectively discontinuous, piecewise-constant functions of rate, this paper examines the problem of designing a coding order for such a coder, guided by a quantization scheme where step sizes evolve continuously with rate. In particular, it formulates an optimization problem that minimizes the average relative difference between the piecewise-constant implicit step sizes associated with a layered coding strategy and the smooth step sizes given by a quantization scheme. The solution to this problem implies a coding order. Elegant, near-optimal solutions are presented to optimize step sizes over a variety of regions of rates, either continuous or discrete. This method can be used to create layers of coded data using any scalar quantization scheme combined with any wavelet bit-plane coder. It is illustrated using a variety of state-of-the-art coders and quantization schemes. In addition, the proposed method is verified with objective and subjective testing.
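The optimization criterion can be illustrated with a toy evaluation function: given a smooth step-size schedule and the piecewise-constant implicit step sizes of a set of layers, compute the average relative difference between the two. The names and formulation here are illustrative, not the paper's exact objective:

```python
import numpy as np

def avg_relative_diff(rates, q_smooth, layer_steps, layer_spans):
    """Average relative difference between a smooth step-size
    schedule q_smooth(rate) and the piecewise-constant implicit
    step sizes of a layered coder. layer_spans[i] = (lo, hi) is
    the rate interval covered by layer i."""
    q_layer = np.empty_like(rates)
    for step, (lo, hi) in zip(layer_steps, layer_spans):
        q_layer[(rates >= lo) & (rates < hi)] = step
    qs = q_smooth(rates)
    return float(np.mean(np.abs(q_layer - qs) / qs))
```

Minimizing such a criterion over candidate layer boundaries and step sizes is what yields the coding order described in the abstract.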

4.
This paper presents a wavelet-based image coder that is optimized for transmission over the binary symmetric channel (BSC). The proposed coder uses a robust channel-optimized trellis-coded quantization (COTCQ) stage that is designed to optimize the image coding based on the channel characteristics. A phase scrambling stage is also used to further increase the coding performance and robustness to nonstationary signals and channels. The resilience to channel errors is obtained by optimizing the coder performance only at the level of the source encoder, with no explicit channel coding for error protection. For the considered TCQ trellis structure, a general expression for the transition probability matrix is derived in terms of the TCQ encoding rate and the channel bit error rate, and is used to design the COTCQ stage of the image coder. The robust nature of the coder also increases the security level of the encoded bit stream and provides a much more visually pleasing rendition of the decoded image. Examples are presented to illustrate the performance of the proposed robust image coder.

5.
This paper presents a wavelet-based hyperspectral image coder that is optimized for transmission over the binary symmetric channel (BSC). The proposed coder uses a robust channel-optimized trellis-coded quantization (COTCQ) stage that is designed to optimize the image coding based on the channel characteristics. This optimization is performed only at the level of the source encoder and does not include any channel coding for error protection. The robust nature of the coder increases the security level of the encoded bit stream, and provides a much higher quality decoded image. In the absence of channel noise, the proposed coder is shown to achieve a compression ratio greater than 70:1, with an average peak SNR of the coded hyperspectral sequence exceeding 40 dB. Additionally, the coder is shown to exhibit graceful degradation with increasing channel errors.

6.
7.
A key issue in implementing a standards-based video coding process is macroblock coding-mode selection, and the key to choosing the control parameter for the macroblock coding mode is determining the quantization factor of the DCT coefficients. This paper proposes an optimal quantization-factor selection method based on macroblock classification and focuses on two problems: (1) the R-Q and D-Q models of a macroblock; (2) introducing appropriate prior assumptions and, from the macroblock R-Q and D-Q models, formulating a constrained optimization problem whose solution yields the optimal quantization factor for each macroblock class. Simulation results show that, compared with conventional methods, the proposed classification-based optimal quantization-factor selection reduces the number of coding bits required to some extent.
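The constrained selection described above is, in spirit, a Lagrangian rate-distortion trade-off per macroblock class. A minimal sketch under assumed model forms R(Q) = a/Q and D(Q) = b*Q^2 (these forms and names are illustrative, not the paper's models):

```python
def best_step(a, b, lam, candidates):
    """Pick the quantization step Q minimizing the Lagrangian cost
    D(Q) + lam * R(Q), under toy models R(Q) = a/Q and D(Q) = b*Q**2
    standing in for a macroblock class's R-Q and D-Q models."""
    return min(candidates, key=lambda q: b * q * q + lam * a / q)
```

Setting the derivative of the cost to zero gives Q^3 = lam*a/(2b), so with a = b = 1 and lam = 16 the optimum over integer candidates is Q = 2.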

8.
A Scalable Architecture for MPEG-4 Wavelet Quantization (cited by 3)
Wavelet-based image compression has been adopted in MPEG-4 for visual texture coding. All wavelet quantization schemes in MPEG-4 (Single Quantization (SQ), Multiple Quantization (MQ), and Bi-level Quantization) use Embedded Zero Tree (EZT) coding followed by an adaptive arithmetic coder for the compression and quantization of a wavelet image. This paper presents the OZONE chip, a dedicated hardware coprocessor for EZT and arithmetic coding. Realized in a 0.5 μm CMOS technology and operating at 32 MHz, the EZT coder is capable of processing up to 25.6 Mega pixel-bitplanes per second. This is equivalent to the lossless compression of 31.6 8-bit grayscale CIF images (352 × 288) per second. The adaptive arithmetic coder processes up to 10 Mbit per second. The combined performance of the EZT coder and the arithmetic coder allows the OZONE to perform visually lossless compression of more than 30 CIF images per second. Due to its novel and scalable architecture, parallel operation of multiple OZONEs is supported. The OZONE functionality is demonstrated on a PC-based compression system.

9.
Combining fractal image compression and vector quantization (cited by 7)
In fractal image compression, the code is an efficient binary representation of a contractive mapping whose unique fixed point approximates the original image. The mapping is typically composed of affine transformations, each approximating a block of the image by another block (called the domain block) selected from the same image. The search for a suitable domain block is time-consuming. Moreover, the rate-distortion performance of most fractal image coders is not satisfactory. We show how a few fixed vectors, designed from a set of training images by a clustering algorithm and used as a supplementary vector quantization codebook, accelerate the search for the domain blocks and improve both the rate-distortion performance and the decoding speed of a pure fractal coder. We implemented two quadtree-based schemes: a fast top-down heuristic technique and one optimized with a Lagrange multiplier method. For the 8 bits per pixel (bpp) luminance part of the 512 × 512 Lena image, our best scheme achieved a peak signal-to-noise ratio of 32.50 dB at 0.25 bpp.
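Fractal decoding itself is just fixed-point iteration of the contractive mapping: because the map is contractive, iterating it from any starting image converges to its unique fixed point. A toy 1-D version (block size, parameters, and layout are all illustrative):

```python
import numpy as np

def fractal_decode(transforms, n, iters=20):
    """Decode a toy 1-D 'fractal code': range block i of size B is
    approximated as s * (2:1-shrunk domain block) + o. Contractivity
    (|s| < 1) guarantees convergence from any starting image.
    transforms: list of (domain_start, s, o), one per range block."""
    B = 4  # fixed range-block size for this sketch
    img = np.zeros(n)
    for _ in range(iters):
        new = np.empty(n)
        for i, (d0, s, o) in enumerate(transforms):
            # shrink the 2B-sample domain block to B samples (pairwise mean)
            dom = img[d0:d0 + 2 * B].reshape(B, 2).mean(axis=1)
            new[i * B:(i + 1) * B] = s * dom + o
        img = new
    return img
```

The abstract's contribution sits on the encoder side, replacing part of the slow domain-block search with a small trained codebook; the decoder's fixed-point structure is unchanged.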

10.
An adaptive image-coding algorithm for compression of medical ultrasound (US) images in the wavelet domain is presented. First, it is shown that the histograms of wavelet coefficients of the subbands in US images are heavy-tailed and can be better modelled by using the generalised Student's t-distribution. Then, by exploiting these statistics, an adaptive image coder named JTQVS-WV is designed, which unifies the two approaches to image-adaptive coding, rate-distortion (R-D) optimised quantiser selection and R-D optimal thresholding, and is based on a varying-slope quantisation strategy. The use of a varying-slope quantisation strategy (instead of a fixed R-D slope) allows coding of the wavelet coefficients across scales according to their importance for the quality of the reconstructed image. The experimental results show that the varying-slope quantisation strategy leads to a significant improvement in the compression performance of JTQVS-WV over the state-of-the-art coders SPIHT and JPEG2000, and over the fixed-slope variant of JTQVS-WV, named JTQ-WV. For example, coding US images at 0.5 bpp yields a peak signal-to-noise ratio gain of more than 0.6 dB, 3.86 dB, and 0.3 dB over SPIHT, JPEG2000, and JTQ-WV, respectively.

11.
蒙海光, 李兴华, 荆涛, 李少华. Signal Processing (《信号处理》), 2010, 26(8): 1187-1192
Based on a statistical analysis of video sources, this paper proposes a modified Weibull density distribution and builds a new rate-distortion model upon it. Using the Weibull probability density function to approximate the actual DCT coefficient distribution, and through coding-entropy theory and distortion analysis, the model is expressed as two functions of the video-coding quantization step size, a rate-quantization (R-Q) model and a distortion-quantization (D-Q) model, from which the coding rate and quantization distortion are estimated. Simulation results show that this density distribution fits the actual DCT coefficient statistics of real video sequences more closely than the Cauchy density, and that the model estimates the actual coding rate and distortion more accurately than the Cauchy-based rate-distortion model.
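An R-Q model of this kind can be sketched by computing the entropy of uniformly quantized magnitudes under an assumed Weibull density. This is a generic illustration of the entropy-based rate estimate, not the authors' modified distribution:

```python
import numpy as np

def weibull_rate(k, lam, qstep, nbins=256):
    """Estimated rate (bits/coefficient) of uniformly quantized
    magnitudes under a Weibull(k, lam) density, whose CDF is
    F(x) = 1 - exp(-(x/lam)**k). Bin probabilities come from CDF
    differences; the rate is their Shannon entropy."""
    edges = qstep * np.arange(nbins + 1)
    cdf = 1.0 - np.exp(-(edges / lam) ** k)
    p = np.diff(cdf)
    p = p[p > 0]                       # drop empty bins
    return float(-np.sum(p * np.log2(p)))
```

As expected of any R-Q model, the estimated rate falls monotonically as the quantization step grows.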

12.
Progressive image coding using trellis coded quantization (cited by 3)
In this work, we present coding techniques that enable progressive transmission when trellis coded quantization (TCQ) is applied to wavelet coefficients. A method for approximately inverting TCQ in the absence of least significant bits is developed. Results are presented using different rate allocation strategies and different entropy coders. The proposed wavelet-TCQ coder yields excellent coding efficiency while supporting progressive modes analogous to those available in JPEG.

13.
Wavelet packet image coding using space-frequency quantization (cited by 10)
We extend our previous work on space-frequency quantization (SFQ) for image coding from wavelet transforms to the more general wavelet packet transforms. The resulting wavelet packet coder offers a universal transform coding framework within the constraints of filterbank structures by allowing joint transform and quantizer design without assuming a priori statistics of the input image. In other words, the new coder adaptively chooses the representation to suit the image and the quantization to suit the representation. Experimental results show that, for some image classes, our new coder gives excellent coding performance.

14.
Low resolution region discriminator for wavelet coding (cited by 1)
Syed, Y.F.; Rao, K.R. Electronics Letters, 2001, 37(12): 748-749
A wavelet block chain (WBC) method is used in the initial coding of the low-low subband created by a wavelet transform to separate and label homogeneous regions of the image, requiring no additional overhead in the bitstream. This information is then used to enhance the coding performance of a modified wavelet-based coder. The method uses a two-stage ZTE/SPIHT entropy coder (called a homogeneous connected-region interested ordered transmission coder) to create a bitstream with the properties of progressive transmission, scalability, and perceptual optimisation after a minimum bit rate is reached. Simulation results show good scalable low-bit-rate (0.04-0.4 bpp) compression, comparable to a SPIHT coder but with better perceptual quality, due to the region-based information acquired by the WBC method.

15.
This paper proposes an extremely-low-bit-rate image coding algorithm based on the biorthogonal wavelet transform (BWT) and fuzzy vector quantization (FVQ). By constructing cross-subband vectors that match the characteristics of the image's wavelet transform coefficients, the algorithm fully exploits the correlation among wavelet coefficients in different subbands, effectively improving coding efficiency and reconstruction quality. Following the idea of nonlinear interpolative vector quantization (NLIVQ), it extracts low-dimensional feature vectors from high-dimensional vectors, and it proposes a new fuzzy vector quantization method, the progressively constructed fuzzy clustering (PCFC) algorithm, for quantizing the feature vectors, greatly improving quantization speed and codebook quality. Experimental results show that the algorithm still obtains high-quality reconstructed images, with PSNR > 30 dB, at a bit rate of 0.172 bpp.

16.
Under a rate constraint, wavelet-based image coding involves strategic discarding of information such that the remaining data can be described with a given amount of rate. In a practical coding system, this task requires knowledge of the relationship between quantization step size and compressed rate for each group of wavelet coefficients: the R-Q curve. A common approach to this problem is to fit each subband with a scalar probability distribution and compute entropy estimates based on the model. This approach is not effective at rates below 1.0 bits per pixel because the distributions of quantized data do not reflect the dependencies in coefficient magnitudes. These dependencies can be addressed with doubly stochastic models, which have been previously proposed to characterize more localized behavior, though there are tradeoffs between storage, computation time, and accuracy. Using a doubly stochastic generalized Gaussian model, it is demonstrated that the relationship between step size and rate is accurately described by a low-degree polynomial in the logarithm of the step size. Based on this observation, an entropy estimation scheme is presented which offers an excellent tradeoff between speed and accuracy; after a simple data-gathering step, estimates are computed instantaneously by evaluating a single polynomial for each group of wavelet coefficients quantized with the same step size. These estimates are on average within 3% of a desired target rate for several state-of-the-art coders.
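The estimation step can be sketched directly: gather a few (step size, measured rate) pairs, fit a low-degree polynomial in log(step), and from then on evaluate the polynomial instead of re-running the coder. Function names here are illustrative:

```python
import numpy as np

def fit_rate_model(steps, rates, degree=2):
    """Fit rate as a low-degree polynomial in log(step size), as the
    abstract describes, and return a callable R-Q estimator."""
    coeffs = np.polyfit(np.log(steps), rates, degree)
    return lambda q: float(np.polyval(coeffs, np.log(q)))
```

Once fitted, each rate estimate costs one polynomial evaluation, which is the "computed instantaneously" property the scheme relies on.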

17.
Universal trellis coded quantization (cited by 2)
A new form of trellis coded quantization based on uniform quantization thresholds and "on-the-fly" quantizer training is presented. The universal trellis coded quantization (UTCQ) technique requires neither stored codebooks nor a computationally intense codebook design algorithm. Its performance is comparable with that of fully optimized entropy-constrained trellis coded quantization (ECTCQ) for most encoding rates. The codebook and trellis geometry of UTCQ are symmetric with respect to the trellis superset. This allows sources with a symmetric probability density to be encoded with a single variable-rate code. Rate allocation and quantizer modeling procedures are given for UTCQ which allow access to continuous quantization rates. An image coding application based on adaptive wavelet coefficient subblock classification, arithmetic coding, and UTCQ is presented. The excellent performance of this coder demonstrates the efficacy of UTCQ. We also present a simple scheme to improve the perceptual performance of UTCQ for certain imagery at low bit rates. This scheme has the added advantage of being applied during image decoding, without the need to reencode the original image.
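The trellis machinery behind TCQ can be illustrated with a textbook 4-state trellis whose branches carry the four cosets of a uniform lattice; a Viterbi search picks the path (and per-branch codewords) minimizing total squared error. This is a generic sketch, not the UTCQ codebook or trellis of the paper:

```python
# 4-state trellis: BRANCHES[state] = [(next_state, coset), ...],
# one entry per input bit. Coset D_j contains the points step*(4n + j).
BRANCHES = [[(0, 0), (1, 2)],
            [(2, 1), (3, 3)],
            [(0, 2), (1, 0)],
            [(2, 3), (3, 1)]]

def nearest_in_subset(v, j, step):
    """Closest point to v in coset D_j = {step * (4n + j)}."""
    n = round((v / step - j) / 4.0)
    return step * (4 * n + j)

def tcq_quantize(x, step):
    """Viterbi search over the trellis for the codeword sequence
    minimizing total squared error; returns (codewords, error)."""
    INF = float('inf')
    cost = [0.0, INF, INF, INF]          # start in state 0
    paths = [[], [], [], []]
    for v in x:
        new_cost, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if cost[s] == INF:
                continue                 # state not yet reachable
            for ns, j in BRANCHES[s]:
                c = nearest_in_subset(v, j, step)
                e = cost[s] + (v - c) ** 2
                if e < new_cost[ns]:
                    new_cost[ns], new_paths[ns] = e, paths[s] + [c]
        cost, paths = new_cost, new_paths
    best = min(range(4), key=lambda s: cost[s])
    return paths[best], cost[best]
```

UTCQ keeps this trellis search but replaces designed codebooks with uniform thresholds and on-the-fly training, which is what removes the stored-codebook requirement.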

18.
A medium-band speech coder is proposed that uses a weighted vector quantization scheme in the transformed domain. The linear prediction residue is transformed and vector-quantized. In order to control the quantization errors in the transformed domain, adaptively weighted matching is used instead of conventional adaptive bit allocation. Therefore, the residual signal can be reconstructed by the decoder even if the spectral envelope parameters are destroyed due to transmission errors. This coder is also capable of maintaining higher SNR (signal-to-noise ratio) performance than time-domain vector quantization coders for a wide range of computation complexities and bit rates. Coded speech is natural and unaffected by background noise. The mean opinion score for this coder at 7.2 kb/s is comparable to that of 5.5-bit log PCM coded speech sampled at 6.4 kHz.

19.
This paper describes source encoding of the outputs of a block truncation coder (BTC), namely the overhead statistical information and the truncated block. The statistical overhead and the truncated block exhibit properties that can be effectively exploited by quantizing them as vectors. Vector quantization of these BTC outputs reduces the bit rate of the coder: down to 1.5 bits/pel if vector quantization is applied to one of the outputs (either the overhead information or the truncated block), and down to 1.0 bits/pel if both BTC outputs are vector quantized, without introducing many perceptible errors in the reconstructed output.
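For reference, the baseline BTC encoder whose two outputs the paper vector-quantizes keeps a block mean, a standard deviation (the "overhead statistical information"), and a bit plane (the "truncated block"); the decoder rebuilds a two-level block that preserves the mean and variance. A minimal encode-plus-decode sketch:

```python
import numpy as np

def btc_block(block):
    """Classic Block Truncation Coding of one block: threshold at
    the mean to get a bit plane, then choose two output levels so
    the reconstruction preserves the block mean and variance."""
    m, sd = block.mean(), block.std()
    bits = block >= m                    # the truncated block (bit plane)
    q, n = bits.sum(), block.size
    if q in (0, n):                      # flat block: a single level suffices
        return np.full_like(block, m, dtype=float)
    lo = m - sd * np.sqrt(q / (n - q))
    hi = m + sd * np.sqrt((n - q) / q)
    return np.where(bits, hi, lo)
```

The level formulas follow from solving the mean- and variance-preservation equations for the two output values, so the reconstructed block matches the original's first two moments exactly.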

20.
Adaptive wavelet packet basis selection for zerotree image coding (cited by 5)
Image coding methods based on adaptive wavelet transforms and those employing zerotree quantization have been shown to be successful. We present a general zerotree structure for an arbitrary wavelet packet geometry in an image coding framework. A fast basis selection algorithm is developed; it uses a Markov chain based cost estimate of encoding the image using this structure. As a result, our adaptive wavelet zerotree image coder has a relatively low computational complexity, performs comparably to state-of-the-art image coders, and is capable of progressively encoding images.
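Zerotree quantization, whatever the tree geometry, hinges on one predicate: a coefficient and all of its descendants are insignificant at the current threshold, so the whole subtree can be coded with a single symbol. In sketch form, with an assumed nested-tuple tree representation:

```python
def is_zerotree(tree, T):
    """True if the coefficient at the root and every descendant are
    below threshold T -- the condition a zerotree symbol encodes.
    tree = (value, [child_trees])."""
    v, children = tree
    return abs(v) < T and all(is_zerotree(c, T) for c in children)
```

Generalizing the parent-child relation behind this predicate to arbitrary wavelet packet geometries is exactly the structural contribution the abstract describes.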
