Related Articles
 20 related articles found (search time: 31 ms)
1.
An adaptive image-coding algorithm for compression of medical ultrasound (US) images in the wavelet domain is presented. First, it is shown that the histograms of wavelet coefficients of the subbands in US images are heavy-tailed and are better modelled by the generalised Student's t-distribution. Then, by exploiting these statistics, an adaptive image coder named JTQVS-WV is designed, which unifies the two approaches to image-adaptive coding, rate-distortion (R-D) optimised quantiser selection and R-D optimal thresholding, and is based on a varying-slope quantisation strategy. The varying-slope quantisation strategy (instead of a fixed R-D slope) allows the wavelet coefficients at different scales to be coded according to their importance for the quality of the reconstructed image. The experimental results show that the varying-slope quantisation strategy gives JTQVS-WV a significant improvement in compression performance over the best state-of-the-art image coders, SPIHT and JPEG2000, and over the fixed-slope variant of JTQVS-WV, named JTQ-WV. For example, coding US images at 0.5 bpp yields peak signal-to-noise ratio gains of >0.6, 3.86, and 0.3 dB over the benchmarks SPIHT, JPEG2000, and JTQ-WV, respectively.
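The PSNR gains quoted above are computed from the mean squared error between the original and reconstructed images; a minimal sketch of the metric, using a made-up 8-bit toy image rather than the paper's US data:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# toy 8-bit image perturbed by quantization-like noise (illustrative data)
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0.0, 5.0, size=img.shape), 0, 255)
print(round(psnr(img, noisy), 2))
```

A gain of "0.6 dB at 0.5 bpp" in the abstract is a difference between two such values measured at the same bit rate.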

2.
3.
Wavelet transform coefficients are defined by both a magnitude and a sign. While efficient algorithms exist for coding the transform coefficient magnitudes, current wavelet image coding algorithms are not as efficient at coding the sign of the transform coefficients. It is generally assumed that there is no compression gain to be obtained from entropy coding of the sign. Only recently have some authors begun to investigate this component of wavelet image coding. In this paper, sign coding is examined in detail in the context of an embedded wavelet image coder. In addition to using intraband wavelet coefficients in a sign coding context model, a projection technique is described that allows nonintraband wavelet coefficients to be incorporated into the context model. At the decoder, accumulated sign prediction statistics are also used to derive improved reconstruction estimates for zero-quantized coefficients. These techniques are shown to yield PSNR improvements averaging 0.3 dB, and are applicable to any genre of embedded wavelet image codec.
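The intraband part of such a sign-coding context model can be sketched as follows; the mapping of neighbour-sign pairs to nine context indices is an illustrative choice, not the paper's exact model, which also projects in nonintraband coefficients:

```python
import numpy as np

def sign_context(coeffs, i, j):
    """Sign-coding context built from the signs of the left and upper
    neighbours (intraband only; a sketch, not the paper's full model)."""
    left = int(np.sign(coeffs[i, j - 1])) if j > 0 else 0
    up = int(np.sign(coeffs[i - 1, j])) if i > 0 else 0
    # map the (left, up) sign pair in {-1, 0, +1}^2 to one of 9 context indices
    return (left + 1) * 3 + (up + 1)

band = np.array([[1.0, -2.0],
                 [3.0,  0.5]])
print(sign_context(band, 0, 0), sign_context(band, 1, 1))
```

An arithmetic coder would then maintain separate sign probabilities per context index, exploiting the empirical correlation between neighbouring signs.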

4.
The discrete wavelet transform has recently emerged as a powerful technique for decomposing images into various multi-resolution approximations. Multi-resolution decomposition schemes have proven to be very effective for high-quality, low bit-rate image coding. In this work, we investigate the use of entropy-constrained trellis-coded quantization (ECTCQ) for encoding the wavelet coefficients of both monochrome and color images. ECTCQ is known as an effective scheme for quantizing memoryless sources with low to moderate complexity. The ECTCQ approach to data compression has led to some of the most effective source codes found to date for memoryless sources. Performance comparisons are made using the classical quadrature mirror filter bank of Johnston and nine-tap spline filters that were built from biorthogonal wavelet bases. We conclude that the encoded images obtained from the system employing nine-tap spline filters are marginally superior, although at the expense of additional computational burden. Excellent peak signal-to-noise ratios are obtained for encoding monochrome and color versions of the 512x512 "Lenna" image. Comparisons with other results from the literature reveal that the proposed wavelet coder is quite competitive.
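The entropy-constrained principle behind ECTCQ, choosing the codeword that minimizes distortion plus a Lagrangian multiple of its code length, can be sketched without the trellis machinery; the codebook and probabilities below are made-up toy values:

```python
import math

# illustrative toy codebook and codeword probabilities (not from the paper)
codebook = [0.0, 1.0, 2.0]
probs = [0.8, 0.15, 0.05]

def encode_sample(x, codebook, probs, lam):
    """Pick the codeword minimizing squared error + lam * ideal code length.

    This is the Lagrangian cost at the heart of entropy-constrained
    quantization; ECTCQ additionally runs such decisions through a
    trellis, which this sketch omits."""
    return min(range(len(codebook)),
               key=lambda i: (x - codebook[i]) ** 2 - lam * math.log2(probs[i]))

print(encode_sample(0.6, codebook, probs, lam=0.0))  # nearest codeword
print(encode_sample(0.6, codebook, probs, lam=2.0))  # rate term favors the likely codeword
```

Raising `lam` trades distortion for rate: unlikely (expensive) codewords are chosen only when they reduce distortion by more than their extra code length costs.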

5.
An embedded image compression algorithm based on adaptive wavelet transform
For images such as remote-sensing, fingerprint, and seismic images, whose textures are complex and rich and whose local correlation is weak, this paper proposes an efficient image compression coding algorithm that applies an adaptive wavelet transform, chooses a suitable coefficient scanning order, and quantizes the wavelet coefficients by class. Simulation results show that, at the same compression ratio, the reconstruction quality of the proposed algorithm is clearly superior to that of SPIHT, especially for texture images such as the standard image Barbara.

6.
Wavelet image decompositions generate a tree-structured set of coefficients, providing a hierarchical data structure for representing images. A new class of previously proposed image compression algorithms has focused on new ways of exploiting dependencies between this hierarchy of wavelet coefficients using "zero-tree" data structures. This paper presents a new framework for understanding the efficiency of one specific algorithm in this class that we introduced previously and dubbed the space-frequency quantization (SFQ)-based coder. It describes, at a higher level, how the SFQ-based image coder of our earlier work can be construed as a simplified attempt to design a global entropy-constrained vector quantizer (ECVQ) with two noteworthy features: (i) it uses an image-sized codebook dimension (departing from conventional small-dimensional codebooks that are applied to small image blocks); and (ii) it uses an on-line image-adaptive application of constrained ECVQ (which typically uses off-line training data in its codebook design phase). The principal insight offered by the new framework is that improved performance is achieved by more accurately characterizing the joint probabilities of arbitrary sets of wavelet coefficients. We also present an empirical statistical study of the distribution of the wavelet coefficients of high-frequency bands, which are responsible for most of the performance gain of the new class of algorithms. This study verifies that the improved performance achieved by the new class of algorithms like the SFQ-based coder can be attributed to its being designed around one conveniently structured and efficient collection of such sets, namely, the zero-tree data structure. The results of this study further inspire the design of alternative, novel data structures based on nonlinear morphological operators.
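The zero-tree property the framework is built around, a root and all its descendants insignificant at the current threshold, can be sketched with a nested-tuple tree; this is an illustrative data structure, not the SFQ coder itself:

```python
def is_zerotree(tree, threshold):
    """True if the root coefficient and all descendants are insignificant.

    Zerotree coders exploit this to signal an entire set of coefficients
    with one symbol. Trees are (value, [children]) tuples here, an
    illustrative structure rather than an actual subband decomposition."""
    value, children = tree
    if abs(value) >= threshold:
        return False
    return all(is_zerotree(child, threshold) for child in children)

# a tiny coefficient tree: root 3.0 with descendants 1.0, 2.5, and 0.5
t = (3.0, [(1.0, []), (2.5, [(0.5, [])])])
print(is_zerotree(t, 4.0), is_zerotree(t, 2.0))  # True False
```

Coding a whole insignificant tree with a single symbol is exactly the joint-probability characterization the framework above attributes the performance gain to.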

7.
Image compression with adaptive local cosines: a comparative study
The goal of this work is twofold. First, we demonstrate that an advantage can be gained by using local cosine bases over wavelets to encode images that contain periodic textures. We designed a coder that outperforms one of the best wavelet coders on a large number of images. The coder finds the optimal segmentation of the image in terms of local cosine bases. The coefficients are encoded using a scalar quantizer optimized for Laplacian distributions. This new coder constitutes the first concrete contribution of the paper. Second, we used our coder to perform an extensive comparison of several optimized bells in terms of rate-distortion and visual quality for a large collection of images. This study provides, for the first time, a rigorous evaluation of these bells under realistic conditions. Our experiments show that bells designed to reproduce exactly polynomials of degree 1 resulted in the worst performance in terms of the PSNR. However, a visual inspection of the compressed images indicates that these bells often provide reconstructed images with very few visual artifacts, even at low bit rates. The bell with the narrowest Fourier transform gave the best results in terms of the PSNR on most images. This bell tends, however, to create annoying visual artifacts in very smooth regions at low bit rates.

8.
A dilation-run coding algorithm for wavelet images
A new wavelet coder, the dilation-run algorithm, based on the morphological dilation operation and run-length coding is proposed. Exploiting the intraband clustering of significant coefficients after the wavelet transform and the interband similarity of their distribution, the coder uses morphological dilation to search for and encode the significant coefficients in each cluster, and an efficient run-length coding technique to encode the positions of the seed coefficients of each cluster, i.e. the starting points of the dilation, thereby avoiding coefficient-by-coefficient coding of the insignificant coefficients of the wavelet image. The algorithm is simple and bit-plane based, so the output bitstream is progressive. Experimental results show that the dilation-run algorithm outperforms the zerotree wavelet coder SPIHT and is comparable to the two morphological wavelet coders MRWD and SLCCA; for images with pronounced clustering, it outperforms these morphological coders as well.
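The run-length stage described above, which codes the positions of cluster seed coefficients, can be sketched as follows; this is a toy encoding of zero-runs, not the paper's exact codeword format:

```python
def run_length_encode(bits):
    """Encode a significance map as the lengths of zero-runs between seeds.

    Each entry is the number of insignificant positions preceding the next
    seed coefficient; the final entry counts trailing zeros."""
    runs, count = [], 0
    for b in bits:
        if b:
            runs.append(count)  # zeros seen before this significant seed
            count = 0
        else:
            count += 1
    runs.append(count)  # trailing zeros after the last seed
    return runs

print(run_length_encode([0, 0, 1, 0, 1, 1, 0, 0, 0]))  # → [2, 1, 0, 3]
```

Only the seed positions need such coding; the rest of each cluster is reached by dilation, which is what spares the coder from visiting every insignificant coefficient.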

9.
One method of transmitting wavelet based zerotree encoded images over noisy channels is to add channel coding without altering the source coder. A second method is to reorder the embedded zerotree bitstream into packets containing a small set of wavelet coefficient trees. We consider a hybrid mixture of these two approaches and demonstrate situations in which the hybrid image coder can outperform either of the two building block methods, namely on channels that can suffer packet losses as well as statistically varying bit errors.

10.
Homomorphic wavelet-based statistical despeckling of SAR images
In this paper, we introduce the homomorphic Γ-WMAP (wavelet maximum a posteriori) filter, a wavelet-based statistical speckle filter equivalent to the well-known Γ-MAP filter. We perform a logarithmic transformation in order to make the speckle contribution additive and statistically independent of the radar cross section. Further, we propose to use the normal inverse Gaussian (NIG) distribution as a statistical model for the wavelet coefficients of both the reflectance image and the noise image. We show that the NIG distribution is an excellent statistical model for the wavelet coefficients of synthetic aperture radar images, and we present a method for estimating its parameters. We compare the homomorphic Γ-WMAP filter with the Γ-MAP filter and the recently introduced Γ-WMAP filter, which are both based on the same statistical assumptions. The homomorphic Γ-WMAP filter is shown to have better performance with regard to smoothing homogeneous regions. It may in some cases introduce a small bias, but in our studies it is always less than that introduced by the Γ-MAP filter. Further, the speckle removed by the homomorphic Γ-WMAP filter has statistics closer to the theoretical model than the speckle removed by the other filters.
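The homomorphic step rests on the fact that a logarithm turns the multiplicative speckle model into an additive one; a minimal numerical check, using made-up unit-mean gamma speckle rather than real SAR data:

```python
import numpy as np

# multiplicative speckle model: observed = reflectance * speckle
rng = np.random.default_rng(1)
reflectance = rng.uniform(1.0, 10.0, size=1000)
speckle = rng.gamma(shape=4.0, scale=0.25, size=1000)  # unit-mean speckle
observed = reflectance * speckle

# the log transform makes the speckle contribution exactly additive,
# which is the first step of the homomorphic filter
assert np.allclose(np.log(observed), np.log(reflectance) + np.log(speckle))
```

After this transform, additive-noise wavelet estimators (here, the NIG-based MAP filter) apply directly, and exponentiating the result returns to the intensity domain.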

11.
Directional multiscale modeling of images using the contourlet transform
The contourlet transform is a new two-dimensional extension of the wavelet transform using multiscale and directional filter banks. The contourlet expansion is composed of basis images oriented at various directions in multiple scales, with flexible aspect ratios. Given this rich set of basis images, the contourlet transform effectively captures smooth contours that are the dominant feature in natural images. We begin with a detailed study on the statistics of the contourlet coefficients of natural images: using histograms to estimate the marginal and joint distributions and mutual information to measure the dependencies between coefficients. This study reveals the highly non-Gaussian marginal statistics and strong interlocation, interscale, and interdirection dependencies of contourlet coefficients. We also find that conditioned on the magnitudes of their generalized neighborhood coefficients, contourlet coefficients can be approximately modeled as Gaussian random variables. Based on these findings, we model contourlet coefficients using a hidden Markov tree (HMT) model with Gaussian mixtures that can capture all interscale, interdirection, and interlocation dependencies. We present experimental results using this model in image denoising and texture retrieval applications. In denoising, the contourlet HMT outperforms other wavelet methods in terms of visual quality, especially around edges. In texture retrieval, it shows improvements in performance for various oriented textures.

12.
The conventional two-dimensional wavelet transform used in existing image coders is usually performed through one-dimensional (1-D) filtering in the vertical and horizontal directions, which cannot efficiently represent edges and lines in images. The curved wavelet transform presented in this paper is carried out by applying 1-D filters along curves, rather than being restricted to vertical and horizontal straight lines. The curves are determined based on image content and are usually parallel to edges and lines in the image to be coded. The pixels along these curves can be well represented by a small number of wavelet coefficients. The curved wavelet transform is used to construct a new image coder. The code-stream syntax of the new coder is the same as that of JPEG2000, except that a new marker segment is added to the tile headers. Results of image coding and subjective quality assessment show that the new image coder performs better than, or as well as, JPEG2000. It is particularly efficient for images that contain sharp edges and can provide a PSNR gain of up to 1.67 dB for natural images compared with JPEG2000.

13.
It is widely known that the wavelet coefficients of natural scenes possess certain statistical regularities which can be affected by the presence of distortions. The DIIVINE (Distortion Identification-based Image Verity and Integrity Evaluation) algorithm is a successful no-reference image quality assessment (NR IQA) algorithm, which estimates quality based on changes in these regularities. However, DIIVINE operates based on real-valued wavelet coefficients, whereas the visual appearance of an image can be strongly determined by both the magnitude and phase information. In this paper, we present a complex extension of the DIIVINE algorithm (called C-DIIVINE), which blindly assesses image quality based on the complex Gaussian scale mixture model corresponding to the complex version of the steerable pyramid wavelet transform. Specifically, we applied three commonly used distribution models to fit the statistics of the wavelet coefficients: (1) the complex generalized Gaussian distribution is used to model the wavelet coefficient magnitudes, (2) the generalized Gaussian distribution is used to model the coefficients' relative magnitudes, and (3) the wrapped Cauchy distribution is used to model the coefficients' relative phases. All these distributions have characteristic shapes that are consistent across different natural images but change significantly in the presence of distortions. We also employ the complex wavelet structural similarity index to measure degradation of the correlations across image scales, which serves as an important indicator of the subbands' energy distribution and the loss of alignment of local spectral components contributing to image structure. Experimental results show that these complex extensions allow C-DIIVINE to yield a substantial improvement in predictive performance as compared to its predecessor, and highly competitive performance relative to other recent no-reference algorithms.
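Fitting a generalized Gaussian to wavelet-coefficient statistics, as C-DIIVINE does for the relative magnitudes, is commonly done by moment matching on the ratio E|x| / sqrt(E x^2); a stdlib-only sketch using bisection, where the sampling setup is illustrative rather than the paper's procedure:

```python
import math
import random

def ggd_ratio(beta):
    """E|x| / sqrt(E x^2) for a zero-mean generalized Gaussian of shape beta."""
    return math.gamma(2.0 / beta) / math.sqrt(math.gamma(1.0 / beta) * math.gamma(3.0 / beta))

def estimate_ggd_shape(samples, lo=0.1, hi=5.0, iters=60):
    """Moment-matching (ratio method) estimate of the GGD shape parameter,
    solved by bisection; beta = 2 recovers the Gaussian, beta = 1 the Laplacian."""
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    target = m1 / math.sqrt(m2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:  # ratio grows with beta (lighter tails)
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
gauss = [random.gauss(0.0, 1.0) for _ in range(20000)]
print(round(estimate_ggd_shape(gauss), 2))  # close to 2 for Gaussian data
```

Distorted images shift the fitted shape (and scale) away from the values typical of pristine natural images, which is the signal an NR IQA feature extractor picks up.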

14.
Speckle Suppression in SAR Images Using the 2-D GARCH Model
A novel Bayesian speckle suppression method for synthetic aperture radar (SAR) images is presented that preserves the structural features and textural information of the scene. First, the logarithmic transform of the original image is decomposed into the multiscale wavelet domain. We show that the wavelet coefficients of SAR images have significantly non-Gaussian statistics that are best described by the 2-D GARCH model. By applying the 2-D GARCH model to the wavelet coefficients, we can take into account important characteristics of wavelet coefficients, such as their heavy-tailed marginal distribution and the dependencies between coefficients. Furthermore, we use a maximum a posteriori (MAP) estimator to estimate the clean image wavelet coefficients. Finally, we compare our proposed method with various speckle suppression methods on synthetic and actual SAR images and verify the performance improvement obtained with the new strategy.

15.
A novel technique for despeckling medical ultrasound images using lossy compression is presented. The logarithm of the input image is first transformed to the multiscale wavelet domain. It is then shown that the subband coefficients of the log-transformed ultrasound image can be successfully modeled using the generalized Laplacian distribution. Based on this modeling, a simple adaptation of the zero-zone and reconstruction levels of the uniform threshold quantizer is proposed in order to achieve simultaneous despeckling and quantization. This adaptation is based on: (1) an estimate of the corrupting speckle noise level in the image; (2) the estimated statistics of the noise-free subband coefficients; and (3) the required compression rate. The Laplacian distribution is considered as a special case of the generalized Laplacian distribution, and its efficacy is demonstrated for the problem under consideration. Context-based classification is also applied to the noisy coefficients to enhance the performance of the subband coder. Simulation results using a contrast-detail phantom image and several real ultrasound images are presented. To validate the performance of the proposed scheme, comparison with two two-stage schemes, wherein the speckled image is first filtered and then compressed using the state-of-the-art JPEG2000 encoder, is presented. Experimental results show that the proposed scheme works better, both in terms of the signal-to-noise ratio and the visual quality.
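The zero-zone adaptation of the uniform threshold quantizer can be sketched as a dead-zone quantizer whose zero-zone width is set independently of the step size; the parameter values below are illustrative, not the paper's noise-driven choices:

```python
import numpy as np

def deadzone_quantize(x, step, dead_zone):
    """Index 0 inside the enlarged zero-zone, uniform bins of width `step` outside.

    Widening `dead_zone` zeroes more small (noisy) coefficients, which is
    the simultaneous despeckling-and-quantization idea described above."""
    mag = np.abs(x)
    idx = np.where(mag <= dead_zone, 0.0, np.floor((mag - dead_zone) / step) + 1.0)
    return np.sign(x) * idx

def deadzone_reconstruct(idx, step, dead_zone):
    """Reconstruct each nonzero index at the midpoint of its bin."""
    mag = np.where(idx == 0, 0.0, dead_zone + (np.abs(idx) - 0.5) * step)
    return np.sign(idx) * mag

x = np.array([-3.2, -0.4, 0.1, 0.9, 2.6])
q = deadzone_quantize(x, step=1.0, dead_zone=0.5)
print(q, deadzone_reconstruct(q, 1.0, 0.5))
```

In the scheme above the zero-zone width would track the estimated speckle level and the step size the target bit rate; here both are fixed constants for illustration.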

16.
Fast adaptive wavelet packet image compression
Wavelets are ill-suited to represent oscillatory patterns: rapid variations of intensity can only be described by the small scale wavelet coefficients, which are often quantized to zero, even at high bit rates. Our goal is to provide a fast numerical implementation of the best wavelet packet algorithm in order to demonstrate that an advantage can be gained by constructing a basis adapted to a target image. Emphasis is placed on developing algorithms that are computationally efficient. We developed a new fast two-dimensional (2-D) convolution decimation algorithm with factorized nonseparable 2-D filters. The algorithm is four times faster than a standard convolution-decimation. An extensive evaluation of the algorithm was performed on a large class of textured images. Because of its ability to reproduce textures so well, the wavelet packet coder significantly outperforms one of the best wavelet coders on images such as Barbara and fingerprints, both visually and in terms of PSNR.

17.
The success in wavelet image coding is mainly attributed to a recognition of the importance of data organization and representation. There have been several very competitive wavelet coders developed, namely, Shapiro's (1993) embedded zerotree wavelets (EZW), Servetto et al.'s (1995) morphological representation of wavelet data (MRWD), and Said and Pearlman's (see IEEE Trans. Circuits Syst. Video Technol., vol.6, p.245-50, 1996) set partitioning in hierarchical trees (SPIHT). We develop a novel wavelet image coder called significance-linked connected component analysis (SLCCA) of wavelet coefficients that extends MRWD by exploiting both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. Extensive computer experiments on both natural and texture images show convincingly that the proposed SLCCA outperforms EZW, MRWD, and SPIHT. For example, for the Barbara image, at 0.25 b/pixel, SLCCA outperforms EZW, MRWD, and SPIHT by 1.41 dB, 0.32 dB, and 0.60 dB in PSNR, respectively. It is also observed that SLCCA works extremely well for images with a large portion of texture. For eight typical 256x256 grayscale texture images compressed at 0.40 b/pixel, SLCCA outperforms SPIHT by 0.16 to 0.63 dB in PSNR. This performance is achieved without using any optimal bit allocation procedure. Thus both the encoding and decoding procedures are fast.

18.
This paper proposes a new motion-compensated wavelet transform video coder for very low bit-rate visual telephony. The proposed coder sequentially employs: (1) selective motion estimation on the wavelet transform domain, (2) motion-compensated prediction (MCP) of wavelet coefficients, and (3) selective entropy-constrained vector quantization (ECVQ) of the resultant MCP errors. The selective schemes in motion estimation and in quantization, which efficiently exploit the characteristics of image sequences in visual telephony, considerably reduce the computational burden. The coder also employs a tree-structure encoding to efficiently represent which blocks were encoded. In addition, in order to reduce the number of ECVQ codebooks and the image dependency of their performance, we introduce a preprocessing of signals which normalizes the input vectors of ECVQ. Simulation results show that our video coder provides good PSNR (peak signal-to-noise ratio) performance and efficient rate control.

19.
Contourlet image denoising with a non-Gaussian bivariate model
The contourlet transform is a new transform developed after the wavelet transform. Owing to its multiscale and multidirectional properties, the contourlet transform can effectively capture the contours in natural images and represent them sparsely. This paper analyzes the statistical properties of the contourlet coefficients of images in detail and uses a non-Gaussian bivariate distribution to model the interscale dependency of the coefficients. Finally, the model is applied to image denoising, and the method is compared with the contourlet HMT and wavelet thresholding in terms of PSNR, NMSE, and visual quality. Experimental results show that the algorithm achieves good results, especially for images with rich textures.
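A well-known instance of a bivariate interscale shrinkage rule of this kind is the Sendur-Selesnick estimator, which shrinks a coefficient jointly with its parent at the coarser scale; the sketch below shows that rule as an illustration of the approach, not necessarily this paper's exact estimator:

```python
import numpy as np

def bivariate_shrink(w, parent, sigma_n, sigma):
    """Bivariate MAP shrinkage of a coefficient w given its parent at the
    coarser scale (Sendur-Selesnick rule); sigma_n is the noise standard
    deviation and sigma the prior signal standard deviation."""
    r = np.sqrt(w ** 2 + parent ** 2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0) / np.maximum(r, 1e-12)
    return gain * w

# a weak coefficient with a weak parent is killed; a strong one survives
print(bivariate_shrink(0.1, 0.1, 1.0, 1.0), bivariate_shrink(10.0, 0.0, 1.0, 1.0))
```

Because the threshold depends on the joint magnitude of child and parent, a small coefficient is preserved when its parent is large, which is exactly the interscale dependency the bivariate model captures.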

20.
Recently, a variety of efficient image denoising methods using wavelet transforms have been proposed by many researchers. In this paper, we derive the general estimation rule in the wavelet domain for obtaining the denoised coefficients from the noisy image, based on multivariate statistical theory. The multivariate distributions of the original clean image can be estimated empirically from a sample image set. We define a parametric multivariate generalized Gaussian distribution (MGGD) model which closely fits the sample distribution. The multivariate model makes it possible to exploit the dependency between the estimated wavelet coefficients and their neighbours or other coefficients in different subbands. It can also be shown that some of the existing methods based on statistical modeling are subsets of our multivariate approach. Our method achieves high-quality image denoising. Among the existing image denoising methods using the same type of wavelet (Daubechies 8) filter, our results produce the highest peak signal-to-noise ratio (PSNR).


Copyright©北京勤云科技发展有限公司  京ICP备09084417号