Similar Literature
20 similar documents found.
1.
Image coding using dual-tree discrete wavelet transform (total citations: 2; self-citations: 0; cited by others: 2)
In this paper, we explore the application of the 2-D dual-tree discrete wavelet transform (DDWT), a directional and redundant transform, to image coding. Three methods for sparsifying DDWT coefficients, i.e., matching pursuit, basis pursuit, and noise shaping, are compared. We found that noise shaping achieves the best nonlinear approximation efficiency with the lowest computational complexity. The interscale, intersubband, and intrasubband dependencies among the DDWT coefficients are analyzed. Three subband coding methods, i.e., SPIHT, EBCOT, and TCE, are evaluated for coding DDWT coefficients. Experimental results show that TCE has the best performance. In spite of the redundancy of the transform, our DDWT-TCE scheme outperforms JPEG2000 by up to 0.70 dB at low bit rates and is comparable to JPEG2000 at high bit rates. The DDWT-TCE scheme also outperforms two other image coders that are based on directional filter banks. To further improve coding efficiency, we extend the DDWT to an anisotropic dual-tree discrete wavelet packet (ADDWP) transform, which incorporates adaptive and anisotropic decomposition into the DDWT. The ADDWP subbands are coded with the TCE coder. Experimental results show that ADDWP-TCE provides up to 1.47 dB improvement over the DDWT-TCE scheme, outperforming JPEG2000 by up to 2.00 dB. Reconstructed images from our coding schemes are visually more appealing than those from DWT-based coding schemes thanks to the directionality of the wavelets.
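
The noise-shaping step singled out in this abstract can be illustrated in isolation. Below is a minimal sketch assuming NumPy and PyWavelets; since PyWavelets has no dual-tree DWT, the redundant stationary wavelet transform `pywt.swt2` stands in for the DDWT, and the threshold `T`, feedback gain `k`, and iteration count are illustrative values rather than the paper's settings.

```python
import numpy as np
import pywt

def noise_shaping(img, wavelet="db4", level=2, T=20.0, k=1.8, iters=10):
    """Sparsify the coefficients of a redundant transform by noise shaping:
    threshold, reconstruct, and feed the reconstruction error back so that
    the surviving large coefficients absorb the energy lost to thresholding.
    Image sides must be divisible by 2**level for swt2."""
    x = np.asarray(img, dtype=float)
    work = x.copy()
    for _ in range(iters):
        coeffs = pywt.swt2(work, wavelet, level=level)
        thr = [(cA, tuple(pywt.threshold(d, T, mode="hard") for d in det))
               for cA, det in coeffs]
        rec = pywt.iswt2(thr, wavelet)
        work = work + k * (x - rec)   # one common form of the feedback update
    return thr, rec
```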

2.
The wavelet transform, which provides a multiresolution representation of images, has been widely used in image compression. A new image coding scheme using the wavelet transform and classified vector quantisation is presented. The input image is first decomposed by the discrete wavelet transform into a hierarchy of three layers containing ten subimages. The lowest-resolution low-frequency subimage is scalar quantised with 8 bits/pixel. The high-frequency subimages are compressed by classified vector quantisation to exploit the cross-correlation among different resolutions while reducing edge distortion and computational complexity. Vectors are constructed by combining the corresponding wavelet coefficients of different resolutions in the same orientation and are classified according to the magnitude and position of the wavelet transform coefficients. Simulation results show that the proposed scheme performs better than schemes using current scalar or vector quantisation.
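
The vector construction described here, where coefficients of the same orientation at different resolutions are grouped into one vector, can be sketched as follows. NumPy and PyWavelets are assumed; the three-level `bior4.4` decomposition and the flattening order are illustrative choices, not the paper's exact layout.

```python
import numpy as np
import pywt

def cross_resolution_vectors(img, wavelet="bior4.4", orientation=0):
    """Build VQ training vectors by stacking each coarsest-level detail
    coefficient with its spatially corresponding 2x2 and 4x4 descendants
    in the same orientation (0=horizontal, 1=vertical, 2=diagonal).
    mode='periodization' keeps every level exactly half the previous size,
    so parent/child indices line up (image sides should be powers of two)."""
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet,
                           mode="periodization", level=3)
    c3 = coeffs[1][orientation]        # coarsest detail subband
    c2 = coeffs[2][orientation]        # one level finer (2x the size)
    c1 = coeffs[3][orientation]        # finest level (4x the size)
    vectors = []
    for i in range(c3.shape[0]):
        for j in range(c3.shape[1]):
            vectors.append(np.concatenate((
                [c3[i, j]],
                c2[2*i:2*i+2, 2*j:2*j+2].ravel(),   # 4 children
                c1[4*i:4*i+4, 4*j:4*j+4].ravel(),   # 16 grandchildren
            )))
    return np.array(vectors)           # shape (N, 21)
```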

3.
Yao, S.; Clarke, R.J. Electronics Letters, 1992, 28(17): 1566-1568
A new scheme for image sequence coding using the wavelet transform and adaptive vector quantisation is proposed. The transform is used to decompose an image into multiresolution and multiband sub-images. Adaptive vector quantisation is then applied to achieve image data compression with good quality and a low bit rate. Experimental results are presented.

4.
Wireless sensor networks use image compression algorithms such as JPEG, JPEG2000, and SPIHT for image transmission with high coding efficiency. During compression, discrete cosine transform (DCT)-based JPEG suffers from blocking artifacts at low bit rates. This effect is reduced by the discrete wavelet transform (DWT)-based JPEG2000 and SPIHT algorithms, but they have high computational complexity. This paper proposes an efficient image coding algorithm based on the lapped biorthogonal transform (LBT) and a low-complexity zerotree codec (LZC) entropy coder to achieve high compression. The LBT-LZC algorithm yields high compression and better visual quality with low computational complexity. The performance of the proposed method is compared with other popular coding schemes based on LBT, DCT, and wavelet transforms. The simulation results reveal that the proposed algorithm reduces blocking artifacts and achieves high compression. In addition, its noise resilience is analyzed.

5.
Schemes for compressing black-and-white images based on the wavelet transform are presented. The multiresolution nature of the discrete wavelet transform is shown to be a powerful tool for representing images decomposed along the vertical and horizontal directions using the pyramidal multiresolution scheme. The wavelet transform decomposes the image into a set of subimages, called shapes, with different resolutions corresponding to different frequency bands. Different bit allocations are tested, assuming that details at high resolution and in diagonal directions are less visible to the human eye. The resulting coefficients are vector quantized (VQ) using the LBG algorithm. By using an error correction method that approximates the quantization error of the reconstructed coefficients, we minimize distortion for a given compression rate at low computational cost. Several compression techniques are tested. In the first experiment, several 512x512 images are trained together and common code tables are created. Using these tables, the black-and-white images in the training sequence achieve compression ratios of 60-65 and PSNRs of 30-33 dB. To investigate compression of images outside the training set, many 480x480 images of uncalibrated faces are trained together to yield global code tables. Images of faces outside the training set are compressed and reconstructed using the resulting tables; the compression ratio is 40 and PSNRs are 30-36 dB. Images from the training set have similar compression values and quality. Finally, another compression method based on the end vector bit allocation is examined.
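
The LBG codebook training used above amounts to k-means with codeword splitting. Below is a minimal NumPy-only sketch; the splitting perturbation `eps`, codebook size, and iteration count are illustrative assumptions.

```python
import numpy as np

def lbg_codebook(vectors, size=256, eps=1e-2, iters=20):
    """Train a VQ codebook with the LBG (Linde-Buzo-Gray) algorithm:
    start from the global centroid, repeatedly split each codeword into a
    perturbed pair, and refine with generalized Lloyd (k-means) iterations."""
    vectors = np.asarray(vectors, dtype=float)
    codebook = vectors.mean(axis=0, keepdims=True)
    while codebook.shape[0] < size:
        # Split every codeword into a slightly perturbed pair.
        codebook = np.concatenate([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            # Assign each training vector to its nearest codeword
            # (dense distance matrix; fine for modest training sets).
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            # Move each codeword to the centroid of its (non-empty) cell.
            for k in range(codebook.shape[0]):
                members = vectors[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook
```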

6.
A new vector quantizer codebook design for video compression is described. It uses the notion that symmetries in the data, which are seldom captured exactly in any training dataset, are perceptually important and lead to a more robust codebook. The method is illustrated using three-dimensional (3-D) wavelet-transformed video sequences.
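
One simple way to approximate the idea of building symmetries into the codebook is to augment the training set with symmetric versions of each block before running a standard codebook design. The sketch below assumes NumPy and square 2-D training blocks; using the eight dihedral symmetries is an illustrative choice, not the paper's construction.

```python
import numpy as np

def augment_with_symmetries(blocks):
    """Given square training blocks of shape (N, B, B), add the 8
    dihedral-group symmetries (rotations and reflections) of each block so
    that the trained codebook is approximately invariant to them."""
    out = []
    for b in blocks:
        for k in range(4):          # 0, 90, 180, 270 degree rotations
            r = np.rot90(b, k)
            out.append(r)           # rotation
            out.append(r.T)         # reflection of that rotation
    return np.stack(out)
```

The augmented blocks can then be flattened and fed to any standard codebook design (for example, an LBG routine such as the sketch under item 5).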

7.
The discrete wavelet transform has recently emerged as a powerful technique for decomposing images into various multi-resolution approximations. Multi-resolution decomposition schemes have proven to be very effective for high-quality, low bit-rate image coding. In this work, we investigate the use of entropy-constrained trellis-coded quantization (ECTCQ) for encoding the wavelet coefficients of both monochrome and color images. ECTCQ is known as an effective scheme for quantizing memoryless sources with low to moderate complexity. The ECTCQ approach to data compression has led to some of the most effective source codes found to date for memoryless sources. Performance comparisons are made using the classical quadrature mirror filter bank of Johnston and nine-tap spline filters built from biorthogonal wavelet bases. We conclude that the encoded images obtained from the system employing nine-tap spline filters are marginally superior, although at the expense of additional computational burden. Excellent peak signal-to-noise ratios are obtained for encoding monochrome and color versions of the 512x512 "Lenna" image. Comparisons with other results from the literature reveal that the proposed wavelet coder is quite competitive.

8.
The authors propose a new, simple image coder based on a discrete wavelet transform (DWT). The DWT coefficients are coded in bit-planes using an improved version of the JBIG bi-level image compression method. Experimental results, both distortion measurements and visual comparisons, are shown and are very promising.
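
The bit-plane step described here can be sketched as below, assuming NumPy and integer-quantized coefficients; the sign/magnitude split and the number of planes are standard choices rather than the paper's, and the JBIG stage itself is only indicated because it needs an external bi-level codec.

```python
import numpy as np

def to_bit_planes(q_coeffs, n_planes=8):
    """Split quantized integer DWT coefficients into a sign plane and
    magnitude bit-planes (most significant first).  Each plane is a binary
    image that could be handed to a bi-level coder such as JBIG."""
    q = np.asarray(q_coeffs)
    sign_plane = (q < 0).astype(np.uint8)
    mag = np.abs(q).astype(np.int64)
    planes = [((mag >> p) & 1).astype(np.uint8)
              for p in range(n_planes - 1, -1, -1)]
    return sign_plane, planes
```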

9.
Image compression using the 2-D wavelet transform (total citations: 108; self-citations: 0; cited by others: 108)
The 2-D orthogonal wavelet transform decomposes images into coefficients that are localized in both space and frequency. The transformed coefficients were coded hierarchically and individually quantized according to the locally estimated noise sensitivity of the human visual system (HVS). The algorithm can be mapped easily onto VLSI. For the Miss America and Lena monochrome images, the technique gave high to acceptable quality reconstruction at bit rates of 0.3-0.2 and 0.64-0.43 bits per pixel (bpp), respectively.
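
HVS-driven quantization of this kind can be sketched by giving each subband its own step size. NumPy and PyWavelets are assumed, and the weighting rule below is purely illustrative; the paper's actual HVS model is not reproduced.

```python
import numpy as np
import pywt

def hvs_quantize(img, wavelet="bior4.4", level=3, base_step=8.0):
    """Quantize each subband with its own step: finer (higher-frequency)
    levels and the diagonal orientation get coarser steps, mimicking the
    lower sensitivity of the HVS there.  The weighting is illustrative only."""
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
    quantized = [np.round(coeffs[0] / base_step)]         # LL band, finest step
    for depth, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        step = base_step * 2.0 ** (depth - 1)             # coarser at fine levels
        quantized.append((np.round(cH / step),
                          np.round(cV / step),
                          np.round(cD / (2.0 * step))))   # diagonals least visible
    return quantized
```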

10.
11.
Interframe predictive medical image coding using the wavelet transform (total citations: 1; self-citations: 0; cited by others: 1)
This paper reports a new medical image coding method, called motion-compensated interframe predictive medical image coding based on the wavelet transform. Simulation results show that the new algorithm performs better than existing algorithms of the same kind.
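
The general structure of motion-compensated interframe prediction followed by wavelet coding of the residual can be sketched as below. NumPy and PyWavelets are assumed; the full-search block matching, block size, and search range are generic illustrative choices, not the paper's scheme.

```python
import numpy as np
import pywt

def mc_residual_coeffs(prev, curr, block=8, search=4, wavelet="bior4.4", level=3):
    """Full-search block matching against the previous frame builds a
    motion-compensated prediction; the wavelet transform of the prediction
    residual is what would then be quantized and entropy coded.
    Assumes frame sides are multiples of the block size."""
    curr = np.asarray(curr, dtype=float)
    prev = np.asarray(prev, dtype=float)
    H, W = curr.shape
    pred = np.zeros_like(curr)
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            tgt = curr[y:y+block, x:x+block]
            best, best_err = None, np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - block and 0 <= xx <= W - block:
                        cand = prev[yy:yy+block, xx:xx+block]
                        err = np.abs(tgt - cand).sum()       # SAD matching cost
                        if err < best_err:
                            best, best_err = cand, err
            pred[y:y+block, x:x+block] = best
    residual = curr - pred
    return pywt.wavedec2(residual, wavelet, level=level), pred
```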

12.
In this paper, a new wavelet transform image coding algorithm is presented. The discrete wavelet transform (DWT) is applied to the original image. The DWT coefficients are first quantized with a uniform scalar dead-zone quantizer. The quantized coefficients are then decomposed into four symbol streams: a binary significance-map symbol stream, a binary sign stream, a position-of-the-most-significant-bit (PMSB) symbol stream, and a residual bit stream. An adaptive arithmetic coder with different context models is employed for entropy coding of these symbol streams. Experimental results show that the compression performance of the proposed coding algorithm is competitive with other wavelet-based image coding algorithms reported in the literature.
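
The uniform scalar dead-zone quantizer mentioned above can be written compactly with NumPy; the step size and the mid-bin reconstruction offset are illustrative defaults, not the paper's parameters.

```python
import numpy as np

def deadzone_quantize(coeffs, step=16.0):
    """Uniform scalar quantizer with a dead zone of width 2*step around zero:
    coefficients with |c| < step map to index 0, producing the sparse
    significance map that the entropy coder exploits."""
    c = np.asarray(coeffs, dtype=float)
    return np.sign(c) * np.floor(np.abs(c) / step)

def deadzone_dequantize(indices, step=16.0, offset=0.5):
    """Reconstruct nonzero indices at the centre (offset=0.5) of their bin;
    index 0 reconstructs to 0 because sign(0) == 0."""
    q = np.asarray(indices, dtype=float)
    return np.sign(q) * (np.abs(q) + offset) * step
```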

13.
In this correspondence, we address the problem of translation sensitivity of conventional wavelet transforms for two-dimensional (2-D) signals. We propose wavelet transform algorithms that achieve the following desirable properties simultaneously: (i) translation invariance, (ii) reduced edge effects, and (iii) size-limitedness. We apply this translation-invariant biorthogonal wavelet transform with symmetric extensions to image coding applications with good results.
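
A translation-invariant 2-D wavelet decomposition is readily available in PyWavelets as the stationary (undecimated) wavelet transform; the sketch below illustrates only the translation-invariance property, since PyWavelets' `swt2` uses its own boundary handling rather than the symmetric-extension, size-limited construction of the paper. The wavelet choice and level are assumptions.

```python
import numpy as np
import pywt

def translation_invariant_dwt(img, wavelet="bior4.4", level=2):
    """Undecimated (stationary) 2-D wavelet transform: because no
    subsampling is performed, shifting the input simply shifts the
    coefficients, unlike the decimated wavedec2.
    Image sides must be divisible by 2**level."""
    return pywt.swt2(np.asarray(img, dtype=float), wavelet, level=level)
```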

14.
We conducted positron emission tomography (PET) image reconstruction experiments using the wavelet transform. The wavelet-vaguelette decomposition was used as a framework from which expressions for the necessary wavelet coefficients were derived, and wavelet shrinkage was then applied to these coefficients for the reconstruction (WVS). The performance of WVS was evaluated and compared with that of filtered back-projection (FBP) using software phantoms, physical phantoms, and human PET studies. The results demonstrated that WVS gave stable reconstructions over the range of shrinkage parameters and provided better noise and spatial resolution characteristics than FBP.
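
The shrink-and-reconstruct step at the core of WVS can be sketched on an image as follows, assuming NumPy and PyWavelets; the universal threshold and MAD noise estimate are common defaults, not the paper's shrinkage rule, and the vaguelette weighting of the tomographic inversion is omitted.

```python
import numpy as np
import pywt

def wavelet_shrinkage(img, wavelet="sym8", level=4):
    """Denoise by soft-thresholding the detail coefficients with the
    'universal' threshold sigma*sqrt(2*log N); sigma is estimated from the
    median absolute deviation of the finest diagonal subband."""
    x = np.asarray(img, dtype=float)
    coeffs = pywt.wavedec2(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # MAD noise estimate
    T = sigma * np.sqrt(2.0 * np.log(x.size))            # universal threshold
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(d, T, mode="soft") for d in det)
        for det in coeffs[1:]
    ]
    return pywt.waverec2(shrunk, wavelet)
```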

15.
In this paper, an image watermarking scheme is developed using the tree-based spatial-frequency features of the wavelet transform. With our approach, the watermark sequence is inserted in high-activity texture regions of an image, where the just-noticeable-distortion (JND) tolerance of the human visual system (HVS) is greatest. Simulation results show that the proposed method achieves a good compromise between robustness and transparency.
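
Additive wavelet-domain embedding of this kind can be sketched as below, assuming NumPy and PyWavelets; coefficient magnitude is used as a crude stand-in for the paper's texture-activity/JND model, and the strength `alpha`, subband choice, and decomposition level are illustrative.

```python
import numpy as np
import pywt

def embed_watermark(img, bits, alpha=4.0, wavelet="bior4.4", level=2):
    """Additively embed a +/-1 watermark into the largest-magnitude
    coefficients of one mid-frequency subband (a rough proxy for
    high-activity texture regions with large JND tolerance)."""
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
    cH = coeffs[1][0].copy()                         # coarsest horizontal band
    idx = np.argsort(np.abs(cH).ravel())[::-1][:len(bits)]
    wm = np.where(np.asarray(bits) > 0, 1.0, -1.0)   # map bits to +/-1
    flat = cH.ravel()                                # writable view into cH
    flat[idx] += alpha * wm
    coeffs[1] = (cH, coeffs[1][1], coeffs[1][2])
    return pywt.waverec2(coeffs, wavelet)
```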

16.
We propose a two-dimensional generalization to the M-band case of the dual-tree decomposition structure (initially proposed by Kingsbury and further investigated by Selesnick) based on a Hilbert pair of wavelets. We particularly address: 1) the construction of the dual basis and 2) the resulting directional analysis. We also revisit the necessary pre-processing stage in the M-band case. While several reconstructions are possible because of the redundancy of the representation, we propose a new optimal signal reconstruction technique that minimizes potential estimation errors. The effectiveness of the proposed M-band decomposition is demonstrated via denoising comparisons on several image types (natural, texture, seismic) with various M-band wavelets and thresholding strategies. Significant improvements in terms of both overall noise reduction and direction preservation are observed.

17.
This paper presents new wideband speech coding and integrated speech coding-enhancement systems based on frame-synchronized fast wavelet packet transform algorithms. It also formulates temporal and spectral psychoacoustic masking models adapted to wavelet packet analysis. The proposed FFT-like overlapped block orthogonal wavelet packet transform efficiently approximates the auditory critical-band decomposition in the time and frequency domains. This allows the temporal and spectral masking properties of the human auditory system to be exploited to decrease the average bit rate of the encoder while perceptually hiding the quantization error. The same wavelet packet representation is used to merge speech enhancement and coding in the context of auditory modeling. The advantage of the method presented in this paper over previous approaches is that perceptual enhancement and coding, usually implemented as a cascade of two separate systems, are combined, which reduces the computational load. Experiments show that the proposed wideband coding procedure by itself can achieve transparent coding of speech signals sampled at 16 kHz at an average bit rate of 39.4 kbit/s. The combined speech coding-enhancement procedure yields higher bit rates that depend on the residual noise characteristics at the output of the enhancement process.
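
The idea of approximating the auditory critical-band decomposition with a non-uniform wavelet packet tree can be sketched as below, assuming NumPy and PyWavelets; the particular leaf paths, wavelet, and frame length are illustrative assumptions and do not reproduce the paper's frame-synchronized, overlapped-block design.

```python
import numpy as np
import pywt

def auditory_like_subbands(frame, wavelet="db8"):
    """Non-uniform wavelet packet split of a speech frame (e.g. 512 samples
    at 16 kHz): the lowpass branch ('a') is split more deeply than the
    highpass branch ('d'), roughly mimicking the widening of auditory
    critical bands toward higher frequencies."""
    wp = pywt.WaveletPacket(data=np.asarray(frame, dtype=float),
                            wavelet=wavelet, mode="periodization")
    # Leaf paths of the non-uniform tree: finer splits on the lowpass side.
    leaves = ["aaaa", "aaad", "aada", "aadd",   # deepest splits (low frequencies)
              "ada", "add",                     # intermediate splits
              "da", "dd"]                       # shallow splits (high frequencies)
    return {p: wp[p].data for p in leaves}
```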

18.
The conventional two-dimensional wavelet transform used in existing image coders is usually performed through one-dimensional (1-D) filtering in the vertical and horizontal directions, which cannot efficiently represent edges and lines in images. The curved wavelet transform presented in this paper is carried out by applying 1-D filters along curves, rather than being restricted to vertical and horizontal straight lines. The curves are determined from image content and are usually parallel to edges and lines in the image to be coded. The pixels along these curves can be well represented by a small number of wavelet coefficients. The curved wavelet transform is used to construct a new image coder. The code-stream syntax of the new coder is the same as that of JPEG2000, except that a new marker segment is added to the tile headers. Results of image coding and subjective quality assessment show that the new image coder performs better than, or as well as, JPEG2000. It is particularly efficient for images that contain sharp edges and can provide a PSNR gain of up to 1.67 dB for natural images compared with JPEG2000.

19.
The wavelet transform decomposes images into multiresolution subbands, and correlation exists among these subbands. A novel image coding technique that takes advantage of this correlation is presented. It is based on predictive edge detection: edges detected in the LL band at the lowest resolution level are used to predict the edges in the LH, HL, and HH bands at the higher resolution levels. If a coefficient is predicted to lie on an edge it is preserved; otherwise, it is discarded. In the decoder, the locations of the preserved coefficients can be found in the same way as in the encoder, so no overhead is needed. Instead of the complex vector quantization commonly used in subband image coding for high compression ratios, simple scalar quantization is used to code the remaining coefficients, and it achieves very good results.
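
The selection rule described here, where an edge map computed from the LL band decides which detail coefficients survive, can be sketched as below. NumPy and PyWavelets are assumed; the gradient-magnitude edge detector and threshold are illustrative stand-ins for the paper's predictive edge detection.

```python
import numpy as np
import pywt

def keep_predicted_edges(img, wavelet="bior4.4", edge_thresh=10.0):
    """Two-level decomposition (mode='periodization' keeps sizes dyadic).
    Edges found in the lowest-resolution LL band predict which detail
    coefficients to keep at the same and the next finer level; a decoder can
    repeat the detection on its decoded LL band, so no positions are sent."""
    cA2, d2, d1 = pywt.wavedec2(np.asarray(img, dtype=float), wavelet,
                                mode="periodization", level=2)
    gy, gx = np.gradient(cA2)
    edge2 = np.hypot(gx, gy) > edge_thresh                  # edge map in LL
    edge1 = np.repeat(np.repeat(edge2, 2, axis=0), 2, axis=1)  # to finer level
    kept2 = tuple(np.where(edge2, b, 0.0) for b in d2)
    kept1 = tuple(np.where(edge1, b, 0.0) for b in d1)
    return [cA2, kept2, kept1]
```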

20.
李知菲, 胡平. 《信息技术》, 2006, 30(5): 40-42
In image processing, the design of the compression algorithm is the most important part. This paper discusses in detail the technique of compressing continuous color images using the wavelet transform, analyses the problems encountered when implementing the algorithm in the target environment, and provides solutions.
