Similar Literature (20 documents found)
1.
A dilation-run coding algorithm for wavelet images
A new wavelet coder, the dilation-run algorithm, based on morphological dilation and run-length coding is proposed. Exploiting the intra-band clustering of significant coefficients in the wavelet-transformed image and the inter-band similarity of their distribution, the encoder uses the morphological dilation operation to search for and encode the significant coefficients within each cluster, while an efficient run-length coding technique encodes the positions of the seed coefficients of each cluster, i.e., the starting points of the dilation, thereby avoiding coefficient-by-coefficient coding of the insignificant coefficients. The encoder is simple and operates on bit planes, so its output bitstream is embedded. Experimental results show that the dilation-run algorithm outperforms the zerotree wavelet coder SPIHT and is comparable to the two morphological wavelet coders MRWD and SLCCA; for images with pronounced clustering, it also outperforms these morphological coders.
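As a rough illustration of the clustering step described in this abstract (a minimal sketch, not the authors' implementation; the threshold, array contents, and function names below are invented for the example), a significance map can be grown from a seed coefficient by repeated morphological dilation, and the seed positions can then be run-length coded:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def grow_cluster(signif_map, seed):
    """Grow one cluster of significant coefficients from a seed position by
    repeated morphological dilation restricted to the significance map.
    `signif_map` is boolean (|coefficient| >= threshold); `seed` is a
    (row, col) position assumed to be significant."""
    cluster = np.zeros_like(signif_map, dtype=bool)
    cluster[seed] = True
    while True:
        grown = binary_dilation(cluster) & signif_map  # dilate, keep only significant positions
        if np.array_equal(grown, cluster):             # stop once the cluster no longer grows
            return cluster
        cluster = grown

def run_length_encode(flags):
    """Run-length code a 1-D boolean sequence (e.g. a raster scan of seed
    positions) into (value, run length) pairs."""
    runs, count = [], 1
    for prev, cur in zip(flags, flags[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((bool(prev), count))
            count = 1
    runs.append((bool(flags[-1]), count))
    return runs

# Toy subband and significance map at one bit-plane threshold.
coeffs = np.array([[0, 5, 6, 0],
                   [0, 7, 0, 0],
                   [0, 0, 0, 9],
                   [0, 0, 0, 8]], dtype=float)
signif = np.abs(coeffs) >= 4
cluster = grow_cluster(signif, seed=(0, 1))           # cluster around the seed at (0, 1)
seed_runs = run_length_encode(signif.ravel().tolist())
```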

2.
Theory of optimal orthonormal subband coders
The theory of the orthogonal transform coder and methods for its optimal design have been known for a long time. We derive a set of necessary and sufficient conditions for the coding-gain optimality of an orthonormal subband coder for given input statistics. We also show how these conditions can be satisfied by the construction of a sequence of optimal compaction filters one at a time. Several theoretical properties of optimal compaction filters and optimal subband coders are then derived, especially pertaining to behavior as the number of subbands increases. Significant theoretical differences between optimum subband coders, transform coders, and predictive coders are summarized. Finally, conditions are presented under which optimal orthonormal subband coders yield as much coding gain as biorthogonal ones for a fixed number of subbands.
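For context (a standard result for orthonormal subband coders under optimal high-rate bit allocation, not a result specific to this paper), the coding gain is the ratio of the arithmetic to the geometric mean of the subband variances; a minimal sketch:

```python
import numpy as np

def subband_coding_gain(variances):
    """Coding gain of an orthonormal subband coder under optimal high-rate
    bit allocation: arithmetic mean over geometric mean of the subband
    variances (equals 1, i.e. 0 dB, when all variances are identical)."""
    v = np.asarray(variances, dtype=float)
    return v.mean() / np.exp(np.log(v).mean())

# Example: four subbands with strongly unequal energy compaction.
gain = subband_coding_gain([10.0, 2.0, 0.5, 0.1])
gain_db = 10 * np.log10(gain)
```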

3.
An embedded still image compression algorithm based on the DCT
陈军, 吴成柯. 《电子学报》, 2002, 30(10): 1570-1572
An efficient embedded subband image coding algorithm based on the discrete cosine transform (DCT) is proposed. The EZDCT algorithm of Xiong et al. uses a zerotree structure to implement an embedded DCT coder and outperforms JPEG. This paper points out that the zerotree structure is not particularly effective for embedded DCT coding and proposes a simple, efficient embedded DCT subband coding algorithm that does not rely on zerotrees. Experimental results show that the compression performance (PSNR) of the proposed algorithm is about 0.5-1.5 dB higher than EZDCT, approaches that of the widely used embedded wavelet coder SPIHT, and even exceeds SPIHT on some images.

4.
Image coding using dual-tree discrete wavelet transform
In this paper, we explore the application of the 2-D dual-tree discrete wavelet transform (DDWT), which is a directional and redundant transform, to image coding. Three methods for sparsifying DDWT coefficients, i.e., matching pursuit, basis pursuit, and noise shaping, are compared. We found that noise shaping achieves the best nonlinear approximation efficiency with the lowest computational complexity. The interscale, intersubband, and intrasubband dependencies among the DDWT coefficients are analyzed. Three subband coding methods, i.e., SPIHT, EBCOT, and TCE, are evaluated for coding DDWT coefficients. Experimental results show that TCE has the best performance. In spite of the redundancy of the transform, our DDWT_TCE scheme outperforms JPEG2000 by up to 0.70 dB at low bit rates and is comparable to JPEG2000 at high bit rates. The DDWT_TCE scheme also outperforms two other image coders that are based on directional filter banks. To further improve coding efficiency, we extend the DDWT to anisotropic dual-tree discrete wavelet packets (ADDWP), which incorporate adaptive and anisotropic decomposition into the DDWT. The ADDWP subbands are coded with the TCE coder. Experimental results show that ADDWP_TCE provides up to 1.47 dB improvement over the DDWT_TCE scheme, outperforming JPEG2000 by up to 2.00 dB. Reconstructed images from our coding schemes are visually more appealing compared with DWT-based coding schemes thanks to the directionality of the wavelets.

5.
In this paper, we establish a probabilistic framework for adaptive transform coding that leads to a generalized Lloyd type algorithm for transform coder design. Transform coders are often constructed by concatenating an ad hoc choice of transform with suboptimal bit allocation and quantizer design. Instead, we start from a probabilistic latent variable model in the form of a mixture of constrained Gaussian mixtures. From this model, we derive a transform coder design algorithm which integrates the optimization of all transform coder parameters. An essential part of this algorithm is our introduction of a new transform basis, the coding optimal transform, which, unlike commonly used transforms, minimizes compression distortion. Adaptive transform coders can be effective for compressing databases of related imagery, since the high overhead associated with these coders can be amortized over the entire database. For this work, we performed compression experiments on a database of synthetic aperture radar images. Our results show that adaptive coders improve compressed signal-to-noise ratio (SNR) by approximately 0.5 dB compared with global coders. Coders that incorporated the coding optimal transform had the best SNRs on the images used to develop the coder. However, coders that incorporated the discrete cosine transform generalized better to new images.

6.
Wireless sensor networks use image compression algorithms such as JPEG, JPEG2000, and SPIHT for image transmission with high coding efficiency. During compression, discrete cosine transform (DCT)–based JPEG exhibits blocking artifacts at low bit rates. This effect is reduced by the discrete wavelet transform (DWT)–based JPEG2000 and SPIHT algorithms, but these possess high computational complexity. This paper proposes an efficient lapped biorthogonal transform (LBT)–based low-complexity zerotree codec (LZC), an entropy coder for image coding, to achieve high compression. The LBT-LZC algorithm yields high compression and better visual quality with low computational complexity. The performance of the proposed method is compared with other popular coding schemes based on LBT, DCT and wavelet transforms. The simulation results reveal that the proposed algorithm reduces the blocking artifacts and achieves high compression. Besides, it is analyzed for noise resilience.

7.
In recent literature, there exist many high-performance wavelet coders that use different spatially adaptive coding techniques in order to exploit the spatial energy compaction property of the wavelet transform. Two crucial issues in adaptive methods are the level of flexibility and the coding efficiency achieved while modeling different image regions and allocating bitrate within the wavelet subbands. In this paper, we introduce the "spherical coder," which provides a new adaptive framework for handling these issues in a simple and effective manner. The coder uses local energy as a direct measure to differentiate between parts of the wavelet subband and to decide how to allocate the available bitrate. As local energy becomes available at finer resolutions, i.e., in smaller size windows, the coder automatically updates its decisions about how to spend the bitrate. We use a hierarchical set of variables to specify and code the local energy up to the highest resolution, i.e., the energy of individual wavelet coefficients. The overall scheme is nonredundant, meaning that the subband information is conveyed using this equivalent set of variables without the need for any side parameters. Despite its simplicity, the algorithm produces PSNR results that are competitive with the state-of-the-art coders in the literature.

8.
A waveform interpolation speech coding algorithm based on the discrete cosine transform
刘靖宇, 鲍长春, 李如玮. 《电子学报》, 2009, 37(7): 1599-1605
To address the characteristic waveform decomposition problem in waveform interpolation (WI) speech coding, this paper first proposes a characteristic waveform decomposition method based on the discrete cosine transform (DCT), which avoids the complex characteristic waveform alignment operation. Second, for the phase reconstruction problem in WI, methods for unvoiced/voiced phase decision and voiced phase classification are proposed, improving the quality of the reconstructed speech. Finally, DCT-WI vocoders operating at 2.0 kbps and 1.6 kbps are constructed. Subjective MOS scores show that the 2.0 kbps DCT-WI vocoder outperforms the 2.4 kbps MELP vocoder, and the 1.6 kbps DCT-WI vocoder also achieves good perceptual quality.

9.
This paper presents a listless implementation of the wavelet based block tree coding (WBTC) algorithm with varying root block sizes. The WBTC algorithm improves the image compression performance of set partitioning in hierarchical trees (SPIHT) at lower rates by efficiently encoding both inter- and intra-scale correlation using block trees. Though WBTC lowers the memory requirement compared to SPIHT by using block trees, it makes use of three ordered auxiliary lists. This feature makes WBTC undesirable for hardware implementation, as it needs a lot of memory management when the list nodes grow exponentially on each pass. The proposed listless implementation of the WBTC algorithm uses special markers instead of lists, which reduces the dynamic memory requirement by 88% with respect to WBTC and 89% with respect to SPIHT. The proposed algorithm is combined with the discrete cosine transform (DCT) and discrete wavelet transform (DWT) to show its superiority over DCT and DWT based embedded coders, including JPEG 2000, at lower rates. The compression performance on most of the standard test images is nearly the same as WBTC, and outperforms SPIHT by a wide margin, particularly at lower bit rates.

10.
An image-coding scheme which combines transform coding with a human visual system (HVS) model is described. The system includes an eye tracker to pick up the point of regard of a single viewer. One can then exploit the fact that the acuity of the HVS is lower in peripheral vision than in the central part of the visual field. A model of the decreasing acuity of the HVS, which can be applied to a wide class of transform coders, is described. Such a coding system has large potential for data compression. In this paper, we have incorporated the model into four different transform coders, one from each of the main classes of transform coders. Two of the coders are block-based decomposition schemes: the discrete cosine transform-based JPEG coder and a lapped transform scheme. The other two are subband-based decomposition schemes: a wavelet based and a wavelet packet-based scheme.
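A minimal sketch of the kind of acuity-based weighting such a foveated coder might apply (illustrative only: the linear falloff, the 2.3-degree constant, and all names are assumptions, not the model from the paper):

```python
import numpy as np

def eccentricity_deg(block_center, gaze_point, pixels_per_degree):
    """Angular distance (degrees of visual angle) between a block's centre
    and the viewer's point of regard, both given in pixel coordinates."""
    dist_px = np.hypot(block_center[0] - gaze_point[0],
                       block_center[1] - gaze_point[1])
    return dist_px / pixels_per_degree

def quant_scale(eccentricity, half_resolution_ecc=2.3):
    """Illustrative acuity falloff: the quantization step grows linearly
    with eccentricity, doubling at `half_resolution_ecc` degrees. The
    constant is a placeholder, not a value from the paper."""
    return 1.0 + eccentricity / half_resolution_ecc

# Coarsen quantization for a block far from the tracked point of regard.
ecc = eccentricity_deg(block_center=(400, 300), gaze_point=(100, 100),
                       pixels_per_degree=30.0)
step = 8.0 * quant_scale(ecc)   # base step of 8, scaled by the acuity model
```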

11.
Low resolution region discriminator for wavelet coding
Syed Y.F., Rao K.R. 《Electronics letters》, 2001, 37(12): 748-749
A wavelet block chain (WBC) method is used in the initial coding of the low-low subband created by a wavelet transform to separate and label homogeneous regions of the image, requiring no additional overhead in the bitstream. This information is then used to enhance the coding performance in a modified wavelet based coder. This method uses a two stage ZTE/SPIHT entropy coder (called a homogeneous connected-region interested ordered transmission coder) to create a bitstream with properties of progressive transmission, scalability, and perceptual optimisation after a minimum bit rate is reached. Simulation results show good scalable low bit rate (0.04-0.4 bpp) compression, comparable to a SPIHT coder, but with better perceptual quality due to use of the region based information acquired by the WBC method.

12.
A perceptual audio coder, in which each audio segment is adaptively analyzed using either a sinusoidal or an optimum wavelet basis according to the time-varying characteristics of the audio signals, has been constructed. The basis optimization is achieved by a novel switched filter bank scheme, which switches between a uniform filter bank structure (discrete cosine transform) and a non-uniform filter bank structure (discrete wavelet transform). Pre-echo distortion, a major artifact of the International ISO/Moving Pictures Experts Group (MPEG) audio coding standard (MPEG-1 layers 1 and 2), which uses a uniform filter bank structure for audio signal analysis, is almost eliminated in the proposed coder. A perceptual masking model implemented using a high-resolution wavelet packet filter bank with 27 subbands, closely mimicking the critical bands of the human auditory system, is employed in this audio coder. The resulting scheme is a variable bit-rate audio coder, which provides compression ratios comparable to MPEG-1 layers 1 and 2 with almost transparent quality.

13.
A fast and efficient hybrid fractal-wavelet image coder
Despite the excellent visual quality and compression rate of fractal image coding, its applications have been limited by the exhaustive inherent encoding time. This paper presents a new fast and efficient image coder that combines the speed of the wavelet transform with the image quality of fractal compression. Fast fractal encoding using Fisher's domain classification is applied to the lowpass subband of the wavelet transformed image, and a modified set partitioning in hierarchical trees (SPIHT) coding to the remaining coefficients. Furthermore, image details and wavelet progressive transmission characteristics are maintained, no blocking effects from fractal techniques are introduced, and the encoding fidelity problem common in fractal-wavelet hybrid coders is solved. The proposed scheme achieves an average 94% reduction in encoding-decoding time compared with pure accelerated fractal coding. The simulations also compare the results to SPIHT wavelet coding. In both cases, the new scheme improves the subjective quality of pictures at high, medium, and low bitrates.

14.
It has been well established that state-of-the-art wavelet image coders outperform block transform image coders in the rate-distortion (R-D) sense by a wide margin. Wavelet-based JPEG2000 is emerging as the new high-performance international standard for still image compression. An often asked question is: how much of the coding improvement is due to the transform and how much is due to the encoding strategy? Current block transform coders such as JPEG suffer from poor context modeling and fail to take full advantage of correlation in both the spatial and frequency sense. This paper presents a simple, fast, and efficient adaptive block transform image coding algorithm based on a combination of prefiltering, postfiltering, and high-order space-frequency context modeling of block transform coefficients. Despite the simplicity constraints, coding results show that the proposed coder achieves competitive R-D performance compared to the best wavelet coders in the literature.

15.
The mainstream approach to subband coding has been to partition the input signal into subband signals and to code those signals separately with optimal or near-optimal quantizers and entropy coders. A more effective approach, however, is one where the subband coders are optimized jointly so that the average distortion introduced by the subband quantizers is minimized subject to a constraint on the output rate of the subband encoder. A subband coder with jointly optimized multistage residual quantizers and entropy coders is introduced and applied to image coding. The high performance of the coder is attributed to its ability to exploit statistical dependencies within and across the subbands. The efficiency of the multistage residual quantization structure and the effectiveness of the statistical modeling algorithm result in an attractive balance among the reproduction quality, rate, and complexity.

16.
Coding isotropic images
Rate-distortion functions for 2-dimensional homogeneous isotropic images are compared with the performance of five source encoders designed for such images. Both unweighted and frequency weighted mean-square error distortion measures are considered. The coders considered are a) differential pulse code modulation (DPCM) using six previous samples or picture elements (pels) in the prediction--herein called 6-pel DPCM, b) simple DPCM using single-sample prediction, c) 6-pel DPCM followed by entropy coding, d) 8 × 8 discrete cosine transform coding, and e) 4 × 4 Hadamard transform coding. Other transform coders were studied and found to have about the same performance as the two transform coders above. With the mean-square error distortion measure, 6-pel DPCM with entropy coding performed best. Next best were the 8 × 8 discrete cosine transform coder and the 6-pel DPCM--these two had approximately the same distortion. Next were the 4 × 4 Hadamard and simple DPCM, in that order. The relative performance of the coders changed slightly when the distortion measure was frequency weighted mean-square error. From R = 1 to 3 bits/pel, which was the range studied here, the performances of all the coders were separated by only about 4 dB.
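For reference, the simplest coder in this comparison, single-sample ("previous pel") DPCM, predicts each pel from the reconstruction of the previous one and quantizes only the residual; a minimal sketch with an illustrative uniform quantizer (the step size and sample values are invented for the example):

```python
def dpcm_encode(samples, step=4.0):
    """Single-sample DPCM: predict each pel from the reconstruction of the
    previous pel and uniformly quantize the prediction residual."""
    indices, prev_rec = [], 0.0
    for x in samples:
        q = int(round((x - prev_rec) / step))   # quantizer index sent to the decoder
        indices.append(q)
        prev_rec += q * step                    # track the decoder's reconstruction
    return indices

def dpcm_decode(indices, step=4.0):
    rec, prev_rec = [], 0.0
    for q in indices:
        prev_rec += q * step
        rec.append(prev_rec)
    return rec

row = [100, 102, 105, 110, 120, 118, 117]       # one raster line of pel values
decoded = dpcm_decode(dpcm_encode(row))
```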

17.
An image coding algorithm, Progressive Resolution Coding (PROGRES), for high-speed resolution scalable decoding is proposed. The algorithm is designed based on a prediction of the decaying dynamic ranges of wavelet subbands. Most interestingly, because of the syntactic relationship between the two coders, the proposed method costs an amount of bits very similar to that used by uncoded (i.e., not entropy coded) SPIHT. The algorithm bypasses the bit-plane coding and complicated list processing of SPIHT in order to obtain a considerable speed improvement, giving up quality scalability, but without compromising coding efficiency. Since each tree of coefficients is separately coded, where the root of the tree corresponds to the coefficient in the LL subband, the algorithm is easily extensible to random access decoding. The algorithm is designed and implemented for both 2-D and 3-D wavelet subbands. Experiments show that the decoding speeds of the proposed coding model are four times and nine times faster than uncoded 2-D-SPIHT and 3-D-SPIHT, respectively, with almost the same decoded quality. The higher decoding speed gain on larger image sources validates the suitability of the proposed method for very large scale image encoding and decoding. In the Appendix, we explain the syntactic relationship of the proposed PROGRES method to uncoded SPIHT, and demonstrate that, in the lossless case, the bits sent to the codestream by each algorithm are identical, except that they are sent in a different order.

18.
The demand for high-speed transmission of best quality images over the internet has led to a strong need for better image coding algorithms. Transform coders are highly popular for encoding images. The M-dimensional real transform (MRT) gives an alternate representation of a signal in the frequency domain which involves only real additions. In this paper, a new transform coder based on the 4×4 MRT is proposed and its performance is analyzed for all types of grayscale images.

19.
An embedded image compression algorithm based on adaptive wavelet transform
Targeting images such as remote sensing, fingerprint, and seismic data, which have rich, complex texture and weak local correlation, this paper proposes an efficient embedded image compression algorithm that applies an adaptive wavelet transform, chooses a suitable coefficient scanning order, and quantizes the wavelet coefficients by class. Simulation results show that, at the same compression ratio, the reconstruction quality of the proposed algorithm is clearly better than that of SPIHT, especially for texture images such as the standard image Barbara.

20.
Image coding by block prediction of multiresolution subimages
The redundancy of the multiresolution representation has been clearly demonstrated in the case of fractal images, but it has not been fully recognized and exploited for general images. Fractal block coders have exploited the self-similarity among blocks in images. We devise an image coder in which the causal similarity among blocks of different subbands in a multiresolution decomposition of the image is exploited. In a pyramid subband decomposition, the image is decomposed into a set of subbands that are localized in scale, orientation, and space. The proposed coding scheme consists of predicting blocks in one subimage from blocks in lower resolution subbands with the same orientation. Although our prediction maps are of the same kind as those used in fractal block coders, which are based on an iterative mapping scheme, our coding technique does not impose any contractivity constraint on the block maps. This makes the decoding procedure very simple and allows a direct evaluation of the mean squared error (MSE) between the original and the reconstructed image at coding time. More importantly, we show that the subband pyramid acts as an automatic block classifier, thus making the block search simpler and the block matching more effective. These advantages are confirmed by the experimental results, which show that the performance of our scheme is superior in both visual quality and MSE to that obtainable with standard fractal block coders and also to that of other popular image coders such as JPEG.
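A minimal sketch of the basic prediction step described here, predicting a block of a subband from the co-located region of the same-orientation subband one level coarser (the paper's block maps additionally include fractal-style scaling parameters and a block search, which this illustration omits; all names and sizes are assumptions):

```python
import numpy as np

def predict_block(child_subband, parent_subband, top_left, size=8):
    """Predict a size x size block of a subband from the co-located region of
    the same-orientation subband one resolution level coarser (half the size
    in each dimension), using nearest-neighbour upsampling."""
    r, c = top_left
    parent = parent_subband[r // 2 : r // 2 + size // 2,
                            c // 2 : c // 2 + size // 2]
    prediction = np.kron(parent, np.ones((2, 2)))   # 2x upsampling of the parent block
    residual = child_subband[r : r + size, c : c + size] - prediction
    return prediction, residual

# Toy example: a coarser "parent" subband and a finer "child" subband.
rng = np.random.default_rng(0)
parent = rng.standard_normal((16, 16))
child = np.kron(parent, np.ones((2, 2))) + 0.1 * rng.standard_normal((32, 32))
pred, res = predict_block(child, parent, top_left=(8, 8), size=8)
```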
