Similar Literature
20 similar documents found.
1.
A novel technique for despeckling medical ultrasound images using lossy compression is presented. The logarithm of the input image is first transformed to the multiscale wavelet domain. It is then shown that the subband coefficients of the log-transformed ultrasound image can be successfully modeled by the generalized Laplacian distribution. Based on this model, a simple adaptation of the zero-zone and reconstruction levels of the uniform threshold quantizer is proposed to achieve simultaneous despeckling and quantization. This adaptation is based on: (1) an estimate of the corrupting speckle noise level in the image; (2) the estimated statistics of the noise-free subband coefficients; and (3) the required compression rate. The Laplacian distribution is treated as a special case of the generalized Laplacian distribution, and its efficacy is demonstrated for the problem under consideration. Context-based classification is also applied to the noisy coefficients to enhance the performance of the subband coder. Simulation results using a contrast-detail phantom image and several real ultrasound images are presented. To validate the proposed scheme, a comparison is made with two two-stage schemes in which the speckled image is first filtered and then compressed with the state-of-the-art JPEG2000 encoder. Experimental results show that the proposed scheme performs better in terms of both signal-to-noise ratio and visual quality.
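The pipeline described above (log transform, wavelet decomposition, noise-adaptive zero-zone, uniform quantization) can be sketched as follows. This is a minimal sketch, assuming a strictly positive single-channel image; the median-based noise estimate, the 3-sigma zero-zone, and the fixed quantization step are illustrative choices, not the paper's generalized-Laplacian-fitted quantizer.

```python
# Minimal despeckle-while-quantizing sketch (assumptions: positive image,
# heuristic noise estimate and zero-zone, not the paper's exact quantizer).
import numpy as np
import pywt

def despeckle_quantize(img, wavelet="db4", levels=3, step=0.25):
    log_img = np.log(img + 1e-6)                  # multiplicative -> additive noise
    coeffs = pywt.wavedec2(log_img, wavelet, level=levels)
    # Robust speckle-level estimate from the finest diagonal subband.
    sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
    out = [coeffs[0]]
    for (cH, cV, cD) in coeffs[1:]:
        bands = []
        for c in (cH, cV, cD):
            dead = 3.0 * sigma                    # zero-zone width ~ noise level
            q = np.where(np.abs(c) < dead, 0.0,   # despeckle: kill the zero-zone
                         np.round(c / step) * step)  # then uniform quantization
            bands.append(q)
        out.append(tuple(bands))
    return np.exp(pywt.waverec2(out, wavelet))    # back to the intensity domain
```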

2.
The Laplacian pyramid (LP) is well suited to lossy image compression; conversely, the reduced-difference pyramid (RDP), having exactly as many data as pixels, performs better under lossless encoding. A reduced LP is designed by discarding the anti-aliasing filter and adopting a half-band interpolator, thus retaining three quarters of the LP coefficients. In lossless coding, the reduced LP outperforms both the LP and the RDP, especially on medical images.
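For reference, a minimal Laplacian-pyramid sketch is given below. The 2x2-mean reduce and zero-order-hold expand are stand-in filters (assumptions, not the paper's); the entry's reduced LP would further drop the anti-aliasing filter and keep only three of every four difference coefficients.

```python
# Minimal Laplacian pyramid, assuming a grayscale float image whose
# dimensions stay even at every level.
import numpy as np

def reduce2(x):
    return 0.25 * (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2])

def expand2(x):
    return np.kron(x, np.ones((2, 2)))            # zero-order-hold interpolator

def laplacian_pyramid(img, levels=3):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = reduce2(cur)
        pyr.append(cur - expand2(low))            # band-pass residual level
        cur = low
    pyr.append(cur)                               # coarsest low-pass image
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = lap + expand2(cur)                  # exact inverse of the analysis
    return cur
```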

3.
Error-resilient pyramid vector quantization for image compression
Pyramid vector quantization (PVQ) uses the lattice points of a pyramidal shape in multidimensional space as the quantizer codebook. It is a fixed-rate quantization technique that can be used for the compression of Laplacian-like sources arising from transform and subband image coding, where its performance approaches the optimal entropy-coded scalar quantizer without the necessity of variable length codes. In this paper, we investigate the use of PVQ for compressed image transmission over noisy channels, where the fixed-rate quantization reduces the susceptibility to bit-error corruption. We propose a new method of deriving the indices of the lattice points of the multidimensional pyramid and describe how these techniques can also improve the channel noise immunity of general symmetric lattice quantizers. Our new indexing scheme improves channel robustness by up to 3 dB over previous indexing methods, and can be performed with similar computational cost. The final fixed-rate coding algorithm surpasses the performance of typical Joint Photographic Experts Group (JPEG) implementations and exhibits much greater error resilience.
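As a worked illustration of how such a pyramid codebook is sized before any index assignment, the recurrence below counts the integer lattice points of dimension L with l1-norm K (the pyramid surface). This is the standard Fischer-style count, shown under the assumption that it matches the codebook the entry enumerates; the paper's contribution is the error-resilient index assignment itself.

```python
# N(L, K) = number of integer vectors x of dimension L with
# |x_1| + ... + |x_L| = K, i.e. the PVQ codebook size.
from functools import lru_cache

@lru_cache(maxsize=None)
def pvq_size(L: int, K: int) -> int:
    if K == 0:
        return 1                 # only the zero vector remains
    if L == 0:
        return 0                 # no coordinates left to spend K on
    # Split on the last coordinate: zero, one unit toward K, or larger.
    return pvq_size(L - 1, K) + pvq_size(L - 1, K - 1) + pvq_size(L, K - 1)

# Example: log2(pvq_size(16, 7)) bits suffice to index every codeword.
```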

4.
The JPEG image compression standard is very sensitive to errors. Even though it contains error-resilience features, it cannot easily cope with errors induced by the computer soft faults prevalent in remote-sensing applications. Hence, new fault-tolerance detection methods are developed to sense soft errors in the major parts of the system while also protecting data across the boundaries where data flow from one subsystem to another. The design goal is to guarantee that no compressed or decompressed data contain undetected computer-induced errors. Detection methods are expressed at the algorithm level so that a wide range of hardware and software implementation techniques can be covered by the fault-tolerance procedures while still maintaining the JPEG output format. The major subsystems addressed are the discrete cosine transform (DCT), quantizer, entropy coding, and packet assembly. Each error-detection method is determined by the data representations within the subsystem or across the boundaries. They vary from real-number parities in the DCT to bit-level residue codes in the quantizer and cyclic redundancy check parities for entropy coding and packet assembly. Simulation results verify the detection performance, even across boundaries, while also examining round-off noise effects in detecting computer-induced errors in the processing steps.
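One simple instance of a real-number parity for the DCT stage is an energy check: an orthonormal DCT preserves the l2 norm (Parseval), so a mismatch between input and output energy flags a soft computational fault. The sketch below is an assumption-laden illustration of that idea, not the paper's exact parity design; the tolerance is a hypothetical parameter.

```python
# Energy-parity check around an orthonormal 2-D DCT (illustrative only).
import numpy as np
from scipy.fft import dctn

def dct_with_parity(block, tol=1e-6):
    coeffs = dctn(block, norm="ortho")           # orthonormal 2-D DCT
    e_in = np.sum(block.astype(float) ** 2)
    e_out = np.sum(coeffs ** 2)
    if abs(e_in - e_out) > tol * max(e_in, 1.0): # parity violated -> soft fault
        raise RuntimeError("DCT fault detected by energy parity")
    return coeffs
```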

5.
Wireless sensor networks use image compression algorithms such as JPEG, JPEG2000, and SPIHT for image transmission with high coding efficiency. Discrete cosine transform (DCT)-based JPEG suffers from blocking artifacts at low bit rates; discrete wavelet transform (DWT)-based JPEG2000 and SPIHT reduce this effect but have high computational complexity. This paper proposes an efficient image coding algorithm that combines a lapped biorthogonal transform (LBT) with the low-complexity zerotree codec (LZC), an entropy coder, to achieve high compression. The LBT-LZC algorithm yields high compression and better visual quality with low computational complexity. The performance of the proposed method is compared with other popular coding schemes based on LBT, DCT, and wavelet transforms. Simulation results show that the proposed algorithm reduces blocking artifacts and achieves high compression. It is also analyzed for noise resilience.

6.
Geometric source coding and vector quantization
A geometric formulation is presented for source coding and vector quantizer design. Motivated by the asymptotic equipartition principle, the authors consider two broad classes of source codes and vector quantizers: elliptical codes and quantizers based on the Gaussian density function, and pyramid codes and quantizers based on the Laplacian density function. Elliptical and weighted pyramid vector quantizers are developed by selecting codewords as points in a lattice that lie on (or near) a specified ellipse or pyramid. The combination of geometric structure and lattice basis allows simple encoding and decoding algorithms.

7.
A real-time, low-power video encoder design for pyramid vector quantization (PVQ) is presented. The quantizer is estimated to dissipate only 2.1 mW for real-time video compression of 256 × 256-pixel images at 30 frames per second in a standard 0.8-micron CMOS technology with a 1.5 V supply. Applied to subband-decomposed images, the quantizer performs better than JPEG on average. This level of power efficiency, with image quality exceeding that of variable-rate codes, is achieved through algorithmic and architectural reformulation. The PVQ encoder is well suited to wireless, portable communication applications.

8.
A pyramid vector quantizer
The geometric properties of a memoryless Laplacian source are presented and used to establish a source coding theorem. Motivated by this geometric structure, a pyramid vector quantizer (PVQ) is developed for arbitrary vector dimension. The PVQ is based on the cubic lattice points that lie on the surface of an L-dimensional pyramid and has simple encoding and decoding algorithms. A product code version of the PVQ is developed and generalized to apply to a variety of sources. Analytical expressions are derived for the PVQ mean square error (mse), and simulation results are presented for PVQ encoding of several memoryless sources. For large rate and dimension, PVQ encoding of memoryless Laplacian, gamma, and Gaussian sources provides mse improvements of 5.64, 8.40, and 2.39 dB, respectively, over the corresponding optimum scalar quantizer. Although suboptimum in a rate-distortion sense, because the PVQ can encode large-dimensional vectors, it offers significant reduction in mse distortion compared with the optimum Lloyd-Max scalar quantizer, and provides an attractive alternative to currently available vector quantizers.
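The encoding step of a PVQ, scaling a vector onto the pyramid surface and rounding to the nearest lattice point, can be sketched as follows. The greedy l1-norm repair loop is a common implementation choice, shown as an assumption rather than the paper's exact procedure.

```python
# Map x to the nearest integer vector y with sum(|y|) == K (PVQ encode step).
import numpy as np

def pvq_quantize(x, K):
    x = np.asarray(x, dtype=float)
    s = np.sum(np.abs(x))
    if s == 0:
        y = np.zeros(len(x), dtype=int)
        y[0] = K
        return y
    y = np.round(x * (K / s)).astype(int)        # scale onto the pyramid, round
    err = K - np.sum(np.abs(y))
    while err != 0:                              # repair the l1 norm one unit at a time
        r = x * (K / s) - y                      # residual left by rounding
        if err > 0:
            i = np.argmax(r * np.sign(x))        # grow the most under-rounded entry
            y[i] += 1 if x[i] >= 0 else -1
            err -= 1
        else:
            i = np.argmin(r * np.sign(x))        # shrink the most over-rounded entry
            y[i] -= 1 if x[i] >= 0 else -1
            err += 1
    return y
```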

9.
A low-entropy pyramidal image data structure suited to lossless coding and progressive transmission is proposed. The new coder, called the progressively predictive pyramid (PPP), is based on the well-known Laplacian pyramid. By introducing inter-resolution predictors into the original Laplacian pyramid, we show that the entropy of the original pyramid can be reduced significantly. To take full advantage of progressive transmission, a scheme is introduced to create the predictor adaptively, eliminating the need to transmit it and reducing the coding overhead. A method for designing the predictor is presented. Numerical results show that PPP is superior to traditional approaches to pyramid generation in that the pyramids it generates always have significantly lower entropy.

10.
Noise degrades the performance of any image compression algorithm. This paper studies the effect of noise on lossy image compression, considering Gaussian, Poisson, and film-grain noise. To reduce the effect of noise on compression, distortion is measured with respect to the original noise-free image rather than the input of the coder. Results from noisy source coding are then used to design the optimal coder. In the minimum-mean-square-error (MMSE) sense, this is equivalent to an MMSE estimator followed by an MMSE coder. The coders for the Poisson-noise and film-grain-noise cases are derived and their performance is studied. The effect of this preprocessing step is also studied with standard coders such as JPEG. As demonstrated, higher quality is achieved at lower bit rates.
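A minimal "estimate then code" sketch of the MMSE decomposition follows, assuming additive Gaussian noise of known variance: a Wiener-style shrinkage toward the mean (the MMSE estimate under a Gaussian prior) followed by a uniform quantizer standing in for the MMSE coder. The crude variance estimate and the fixed step are illustrative assumptions.

```python
# Estimate-then-code sketch for the additive-Gaussian case (illustrative).
import numpy as np

def denoise_then_quantize(noisy, noise_var, step=4.0):
    mu = noisy.mean()
    sig2 = max(noisy.var() - noise_var, 1e-12)   # crude signal-variance estimate
    gain = sig2 / (sig2 + noise_var)             # Wiener/MMSE shrinkage factor
    estimate = mu + gain * (noisy - mu)          # MMSE estimate of the clean image
    return np.round(estimate / step) * step      # then quantize the estimate
```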

11.
In this paper, we propose an entropy-minimization histogram-mergence (EMHM) scheme that can significantly reduce the number of grayscales with nonzero pixel populations (GSNPP) without visible loss of image quality. We prove that the entropy of an image is reduced after histogram mergence and that the reduction is maximized by EMHM. Reducing image entropy benefits entropy encoding, since by Shannon's first theorem the minimum average codeword length per source symbol is the entropy of the source. Extensive experiments show that EMHM can reduce the code length of entropy coders such as Huffman, Shannon, and arithmetic coding by over 20% while preserving subjective and objective image quality very well. Moreover, the performance of classic lossy image compression techniques such as the Joint Photographic Experts Group (JPEG) standard, JPEG2000, and Better Portable Graphics (BPG) can be improved by preprocessing images with EMHM.
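A simplified greedy take on histogram mergence is sketched below, labeled as an assumption: it repeatedly merges the adjacent grayscale pair whose merger yields the largest entropy reduction until a target number of grayscales remains. The paper's EMHM additionally controls the visual distortion of each merge, which this sketch omits.

```python
# Greedy histogram-bin merging to reduce source entropy (illustrative).
import numpy as np

def merge_histogram(hist, target):
    bins = [(g, int(n)) for g, n in enumerate(hist) if n > 0]
    total = sum(n for _, n in bins)

    def gain(na, nb):  # entropy reduction from merging two bins (always >= 0)
        pa, pb = na / total, nb / total
        return (pa + pb) * np.log2(pa + pb) - pa * np.log2(pa) - pb * np.log2(pb)

    while len(bins) > target:
        i = max(range(len(bins) - 1),
                key=lambda k: gain(bins[k][1], bins[k + 1][1]))
        (ga, na), (gb, nb) = bins[i], bins[i + 1]
        g = (ga * na + gb * nb) / (na + nb)      # population-weighted new grayscale
        bins[i:i + 2] = [(g, na + nb)]
    return bins                                  # (grayscale, count) pairs
```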

12.
The enormous data volume of volumetric medical images (VMI) poses transmission and storage problems that can be addressed with compression. For lossy compression of a very long VMI sequence, automatically preserving the diagnostic features in the reconstructed images is essential. The proposed wavelet-based adaptive vector quantizer incorporates a distortion-constrained codebook replenishment (DCCR) mechanism to meet a user-defined quality demand in peak signal-to-noise ratio. Combining a codebook-updating strategy with the well-known set partitioning in hierarchical trees (SPIHT) technique, the DCCR mechanism provides an excellent coding gain. Experimental results show that the proposed approach is superior to both pure SPIHT and JPEG2000 in coding performance. We also propose an iterative fast-search algorithm that finds the desired signal quality along an energy-quality curve instead of a traditional rate-distortion curve; it performs quality control quickly, smoothly, and reliably.

13.
An image compression algorithm based on the ridgelet transform
Many natural images contain prominent straight edges, and these edges carry most of the image information. Exploiting the ridgelet transform's ability to represent straight-line singularities sparsely, a lossy compression algorithm based on the ridgelet transform is designed for images with straight-line features. The image is first ridgelet-transformed; the transform coefficients are then scalar quantized, scanned, run-length coded, and entropy coded. Simulations show that, compared with the wavelet-based JPEG 2000 compression algorithm, the proposed algorithm achieves a higher compression ratio while maintaining a higher signal-to-noise ratio.

14.
Wu Jiaji, Wu Chengke, Wu Zhensen. Acta Electronica Sinica (电子学报), 2006, 34(10): 1828-1832
Region-of-interest (ROI) coding is an important technique introduced in JPEG2000; however, the JPEG2000 algorithm cannot simultaneously support arbitrarily shaped ROIs and arbitrary scaling factors. This paper proposes a 3D volumetric image compression algorithm based on arbitrarily shaped ROIs and 3D lifting-wavelet zero-block coding. The new algorithm supports lossy-to-lossless coding both inside and outside the ROI. A simple method for generating lossless, arbitrarily shaped ROI masks is presented. Considering the characteristics of the 3D subbands, a modified 3D SPECK zero-block algorithm is used to encode the transformed coefficients. Several other algorithms supporting arbitrarily shaped ROI coding are also evaluated; experiments show that the proposed algorithm achieves better coding performance.

15.
We present an implementable, three-dimensional, terrain-adaptive, transform-based bandwidth compression technique for multispectral imagery. The algorithm exploits the inherent spectral and spatial correlations in the data. The technique applies a Karhunen-Loeve transformation for spectral decorrelation, followed by the standard JPEG algorithm to code the resulting spectrally decorrelated eigenimages. The algorithm is conveniently parameterized to accommodate reconstructed image fidelities ranging from near-lossless at about 5:1 compression ratio (CR) to visually lossy beginning at about 30:1 CR. The novelty of this technique lies in its capability to adaptively vary the characteristics of the spectral correlation transformation with the local terrain. The spectral and spatial modularity of the architecture allows JPEG to be replaced by an alternate spatial coding procedure. The significant practical advantage of this approach is that it builds on the standard and highly developed JPEG compression technology.
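The spectral-decorrelation step can be sketched as below, assuming a multispectral cube of shape (bands, H, W): the Karhunen-Loeve transform is the eigenbasis of the inter-band covariance, and each decorrelated eigenimage would then go to a spatial coder such as JPEG. The terrain-adaptive switching of the transform is omitted here.

```python
# KLT across spectral bands of a (bands, H, W) cube (illustrative sketch).
import numpy as np

def spectral_klt(cube):
    bands, h, w = cube.shape
    X = cube.reshape(bands, -1).astype(float)    # one row per spectral band
    mean = X.mean(axis=1, keepdims=True)
    cov = np.cov(X - mean)                       # bands x bands covariance
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    order = np.argsort(vals)[::-1]               # strongest component first
    eigenimages = vecs[:, order].T @ (X - mean)  # project onto the eigenbasis
    return eigenimages.reshape(bands, h, w), vecs[:, order], mean
```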

16.
A new and effective video coding scheme for contribution quality is proposed. The CMTT/2, a joint committee of CCIR and CCITT, has proposed a video coding scheme (already approved at the European level by ETS) operating at 34-45 Mbit/s. This proposal uses a DCT transform to remove spatial correlation and motion compensation to remove temporal correlation; the individual transform coefficients are then scalar quantized with a non-uniform bit assignment. Starting from the CMTT/2 proposal, this study presents a new video coding scheme that replaces the scalar quantizer with a vector quantizer. Specifically, pyramid vector quantization (PVQ) is chosen because it is well matched to the Laplacian distribution of the DCT coefficients. Simulation results show that the proposed scheme delivers at 22 Mbit/s the same contribution quality that the CMTT/2 proposal obtains at 45 Mbit/s.

17.
18.
An image multiresolution representation for lossless and lossy compression
We propose a new image multiresolution transform that is suited for both lossless (reversible) and lossy compression. The new transformation is similar to the subband decomposition, but can be computed with only integer addition and bit-shift operations. During its calculation, the number of bits required to represent the transformed image is kept small through careful scaling and truncations. Numerical results show that the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity. In addition, we propose entropy-coding methods that exploit the multiresolution structure and can efficiently compress the transformed image for progressive transmission (up to exact recovery). The lossless compression ratios are among the best in the literature, while the rate-distortion performance is simultaneously comparable to that of the most efficient lossy compression methods.
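In the same spirit, the classic S-transform computes integer averages and differences using additions and bit shifts only, and is exactly invertible; the sketch below illustrates that idea (as an assumption, not this paper's specific transform), for even-length integer signals.

```python
# Reversible integer average/difference step via bit shifts (illustrative).
import numpy as np

def s_transform_1d(x):
    x = np.asarray(x, dtype=int)                 # assumes even length
    a, b = x[0::2], x[1::2]
    low = (a + b) >> 1                           # integer average (bit shift)
    high = a - b                                 # difference carries the detail
    return low, high

def inverse_s_transform_1d(low, high):
    a = low + ((high + 1) >> 1)                  # undo the truncation exactly
    b = a - high
    out = np.empty(low.size + high.size, dtype=int)
    out[0::2], out[1::2] = a, b
    return out
```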

19.
A correlation exists between the luminance and chrominance samples of a color image, and it is beneficial to exploit this interchannel redundancy for color image compression. We propose an algorithm that predicts the chrominance components Cb and Cr from the luminance component Y. The prediction model is trained by supervised learning with Laplacian-regularized least squares to minimize the total prediction error. Kernel principal component analysis mapping, which reduces computational complexity, is applied to the same point set at both the encoder and the decoder, ensuring identical predictions at both ends without signaling extra location information. In addition, chrominance subsampling and entropy coding of the model parameters are adopted to further reduce the bit rate. Finally, the luminance information and model parameters are stored for image reconstruction. Experimental results show that the proposed algorithm outperforms its predecessor and JPEG, and even JPEG-XR; its compensation version with the chrominance difference performs close to, and in some cases better than, JPEG2000.
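A stripped-down stand-in for the interchannel prediction step is sketched below: a plain least-squares linear model Cb ~ a*Y + b fitted per image. The paper uses Laplacian-regularized least squares with a KPCA mapping; this unregularized linear fit is only an illustration of predicting chroma from luma.

```python
# Per-image linear luma->chroma predictor (illustrative, unregularized).
import numpy as np

def fit_chroma_predictor(Y, C):
    A = np.stack([Y.ravel(), np.ones(Y.size)], axis=1)
    coef, *_ = np.linalg.lstsq(A, C.ravel(), rcond=None)
    return coef                                  # (slope a, intercept b)

def predict_chroma(Y, coef):
    return coef[0] * Y + coef[1]                 # residual C - prediction is coded
```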

20.
The JPEG2000 still-image compression algorithm has many desirable properties, but its core algorithm, EBCOT, is structurally complex and difficult to implement in hardware. SPIHT-based image compression approaches the coding efficiency of EBCOT while being structurally simple and easy to implement in hardware. By fusing SPIHT with JPEG2000, a still-image coding scheme is proposed that achieves a high compression ratio, lossy-to-lossless operation, and progressive bitstream transmission. The fusion of the SPIHT algorithm with the (9,7) lifting wavelet is analyzed; the resulting system performs close to the JPEG2000 algorithm and shows good application prospects.
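For a feel of the lifting wavelets such schemes build on, the sketch below shows one level of JPEG2000's reversible 5/3 lifting step (chosen for brevity instead of the lossy (9,7) the entry mentions), with a simple symmetric boundary extension. This is an assumption-laden illustration, not the entry's exact transform.

```python
# One level of 5/3 lifting (predict + update), even-length integer input.
import numpy as np

def lift_53_forward(x):
    x = np.asarray(x, dtype=int)
    even, odd = x[0::2], x[1::2]
    right = np.append(even[1:], even[-1])        # symmetric extension
    d = odd - ((even + right) >> 1)              # predict: detail coefficients
    left = np.insert(d[:-1], 0, d[0])            # symmetric extension
    s = even + ((left + d + 2) >> 2)             # update: approximation coeffs
    return s, d

def lift_53_inverse(s, d):
    left = np.insert(d[:-1], 0, d[0])
    even = s - ((left + d + 2) >> 2)             # undo update
    right = np.append(even[1:], even[-1])
    odd = d + ((even + right) >> 1)              # undo predict
    x = np.empty(s.size + d.size, dtype=int)
    x[0::2], x[1::2] = even, odd
    return x
```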
