Similar Literature
20 similar records found
1.
An algorithm is proposed for improving Servetto et al.'s (see Proceedings of the International Conference on Image Processing, Washington, DC, p.530-3, 1995) method of morphological representation of wavelet data (MRWD), which is among the most efficient wavelet-based image compression algorithms. In MRWD, morphological dilation is used to capture and encode the arbitrarily shaped clusters of significant coefficients within each subband, and high compression is achieved. However, MRWD still has several deficiencies that can be rectified. An efficient image compression algorithm is therefore proposed in which, for each subband, morphological dilation is first used to extract and encode the clustered significant coefficients, and the remaining space is then encoded efficiently: instead of encoding the large number of zeros one by one, only the small number of remaining significant coefficients and their positional information are encoded. Experimental results show that this improvement is very effective, especially for images with large and relatively smooth regions.
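A simplified illustration of the clustering step, assuming SciPy's binary dilation as the morphological operator: cluster regions are grown around strong "seed" coefficients, and the significant coefficients left outside those regions are the "remaining" ones that the improved algorithm codes individually with their positions. This is a sketch, not the authors' exact encoder.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def split_significant(subband, threshold, seed_factor=2.0, grow_iters=2):
    """Grow cluster regions by dilation around strong 'seed' coefficients,
    then separate the significant coefficients that fall outside those
    clusters (to be coded individually with their positions)."""
    significant = np.abs(subband) >= threshold
    seeds = np.abs(subband) >= seed_factor * threshold
    grown = binary_dilation(seeds, structure=np.ones((3, 3), bool),
                            iterations=grow_iters)
    clustered = significant & grown          # captured by the dilation-grown clusters
    remaining = significant & ~grown         # coded one by one with positions
    return clustered, remaining

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    band = rng.normal(scale=5.0, size=(16, 16))
    clustered, remaining = split_significant(band, threshold=8.0)
    print(clustered.sum(), "clustered /", remaining.sum(), "remaining significant coefficients")
```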

2.
A fast and efficient hybrid fractal-wavelet image coder.
Despite its excellent visual quality and compression rate, fractal image coding has seen limited application because of its exhaustive inherent encoding time. This paper presents a new fast and efficient image coder that combines the speed of the wavelet transform with the image quality of fractal compression. Fast fractal encoding using Fisher's domain classification is applied to the lowpass subband of the wavelet-transformed image, and a modified set partitioning in hierarchical trees (SPIHT) coding is applied to the remaining coefficients. Furthermore, image details and the progressive transmission characteristics of wavelets are maintained, no blocking effects from the fractal techniques are introduced, and the encoding fidelity problem common in fractal-wavelet hybrid coders is solved. The proposed scheme achieves an average 94% reduction in encoding-decoding time compared to pure accelerated fractal coding. The simulations also compare the results to SPIHT wavelet coding. In both comparisons, the new scheme improves the subjective quality of the pictures at high, medium, and low bitrates.
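Fisher's classification groups image blocks by the ordering of the brightness and variance of their four quadrants so the fractal search only compares blocks within the same class. A much-reduced sketch of the variance-ordering part only (illustrative, not the coder described above):

```python
import numpy as np

def quadrant_variances(block):
    """Variances of the four quadrants of a square block."""
    h, w = block.shape
    quads = [block[:h//2, :w//2], block[:h//2, w//2:],
             block[h//2:, :w//2], block[h//2:, w//2:]]
    return np.array([q.var() for q in quads])

def classify_block(block):
    """Reduced Fisher-style class label: the permutation that orders the
    quadrant variances.  A range block is only matched against domain
    blocks of the same class, which is what makes the encoding fast."""
    return tuple(np.argsort(quadrant_variances(block))[::-1])
```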

3.
Improved integer wavelet image coding within the zerotree framework
The integer wavelet transform (IWT) has many advantages, but after an image undergoes the IWT its energy compaction is much poorer than with first-generation wavelet transforms, which is unfavorable for embedded zerotree wavelet (EZW) coding. This paper therefore proposes a new algorithm with improvements in two respects. First, a "squared-integer quantization threshold selection" scheme is adopted: because the dynamic range of the subband coefficient magnitudes after the IWT is small and the wavelet-domain energy is lower than for ordinary wavelets, the squares of the positive integers starting from 1 are used as quantization thresholds, and an adjustable threshold system is introduced so that the threshold can be matched to the importance of different regions of the image, thereby increasing the number of zerotrees. Second, a new zerotree coding scheme based on an index table and run-length coding is proposed, which simplifies both encoding and decoding. Experiments show that the algorithm combines the integer wavelet transform and zerotree coding effectively, improving both compression quality and compression efficiency.
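A minimal sketch of the threshold schedule described above, assuming the squares of the positive integers (optionally scaled by an adjustable, region-dependent factor) are used as quantization thresholds; the index-table/run-length coding stage is not shown.

```python
import numpy as np

def squared_integer_thresholds(coeffs, scale=1.0):
    """Return the threshold schedule scale * k^2 (k = 1, 2, 3, ...),
    stopping once it exceeds the largest coefficient magnitude.
    `scale` stands in for the adjustable, region-dependent factor."""
    max_mag = float(np.abs(coeffs).max())
    thresholds = []
    k = 1
    while scale * k * k <= max_mag:
        thresholds.append(scale * k * k)
        k += 1
    return thresholds[::-1]   # coarse-to-fine: largest threshold first
```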

4.
Peak transform for efficient image representation and coding.
In this work, we introduce a nonlinear geometric transform, called the peak transform (PT), for efficient image representation and coding. The proposed PT is able to convert high-frequency signals into low-frequency ones, making them much easier to compress. Coupled with the wavelet transform and subband decomposition, the PT is able to significantly reduce signal energy in high-frequency subbands and achieve a significant transform coding gain. This has important applications in efficient data representation and compression. To maximize the transform coding gain, we develop a dynamic programming solution for optimum PT design. Based on the PT, we design an image encoder, called the PT encoder, for efficient image compression. Our extensive experimental results demonstrate that, in wavelet-based subband decomposition, the signal energy in high-frequency subbands can be reduced by up to 60% when the PT is applied. The PT image encoder outperforms state-of-the-art JPEG2000 and H.264 (INTRA) encoders by up to 2-3 dB in peak signal-to-noise ratio (PSNR), especially for images with a significant amount of high-frequency components. Our experimental results also show that the proposed PT is able to efficiently capture and preserve high-frequency image features (e.g., edges) and yields significantly improved visual quality. We believe that the concept explored in this work, designing a nonlinear transform to convert hard-to-compress signals into easy ones, is very useful, and we hope it will motivate further research in this direction.
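The transform coding gain referred to here is commonly measured as the ratio of the arithmetic to the geometric mean of the subband variances (stated here for equally sized subbands). A small sketch of that measure, which rises when a transform moves energy out of the high-frequency subbands:

```python
import numpy as np

def subband_coding_gain(subbands):
    """Classical subband coding gain for equally sized subbands:
    arithmetic mean of subband variances divided by their geometric mean.
    Reducing high-frequency energy increases this ratio."""
    variances = np.array([np.var(b) for b in subbands], dtype=float)
    arithmetic = variances.mean()
    geometric = np.exp(np.mean(np.log(variances + 1e-12)))  # guard against zeros
    return arithmetic / geometric
```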

5.
Current wavelet-transform-based image fusion enhancement algorithms do not capture enough of the multi-scale detail information of the original image. To address this, an improved enhancement algorithm is proposed that combines a multi-scale wavelet transform with deep-residual-style coefficient selection. The original image is decomposed by the wavelet transform to obtain its multi-level coefficients, which are then reconstructed using different rules at different levels; at the same time, the idea of deep residual learning is introduced to compute residuals of the subband coefficients. For the high-frequency subband coefficients, the residual coefficients and the coefficients obtained by gradient-feature fusion are computed and the larger of the two is selected for fusion enhancement; for the low-frequency subband coefficients, the average of the gradient-feature-enhanced coefficients and the residual coefficients is used. The algorithm was validated by experiments on the MATLAB platform: compared with the reference methods, the peak signal-to-noise ratio increases, the root-mean-square error decreases, and the structural similarity improves. The results show that the algorithm enhances the multi-scale detail of the image, raises its signal-to-noise ratio, and gives a better overall enhancement effect.
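A hedged sketch of the fusion rules described above, using PyWavelets and two input coefficient sets standing in for the gradient-feature and residual branches of the paper (which are not reproduced here): maximum-magnitude selection for the high-frequency subbands, averaging for the low-frequency subband.

```python
import numpy as np
import pywt

def fuse_wavelet_coefficients(img_a, img_b, wavelet="db2", level=2):
    """Element-wise maximum-magnitude fusion for detail subbands and
    averaging for the approximation subband, then inverse transform."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                       # low-pass: average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))  # high-pass: max
    return pywt.waverec2(fused, wavelet)
```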

6.
Image compression using binary space partitioning trees
For low bit-rate compression applications, segmentation-based coding methods generally provide higher compression ratios than traditional (e.g., transform and subband) coding approaches. In this paper, we present a new segmentation-based image coding method that divides the desired image using binary space partitioning (BSP). The BSP approach partitions the desired image recursively by arbitrarily oriented lines in a hierarchical manner. This recursive partitioning generates a binary tree, which is referred to as the BSP-tree representation of the desired image. The most critical aspect of the BSP-tree method is the criterion used to select the partitioning lines of the BSP-tree representation. In previous work, we developed novel methods for selecting the BSP-tree lines and showed that the BSP approach provides efficient segmentation of images. In this paper, we describe a hierarchical approach for coding the partitioning lines of the BSP-tree representation. We also show that the image signal within the different regions (resulting from the recursive partitioning) can be represented using low-order polynomials. Furthermore, we employ an optimum pruning algorithm to minimize the bit rate of the BSP-tree representation (for a given budget constraint) while minimizing distortion. Simulation results and comparisons with other compression methods are also presented.
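A much-simplified sketch of BSP-style partitioning, assuming a small fixed set of candidate line orientations through the region centroid and zero-order (mean) fits per region; the paper searches arbitrarily oriented lines and adds low-order polynomial fits and an optimal pruning stage.

```python
import numpy as np

def region_sse(values):
    """Error when a region is approximated by its mean (zero-order fit)."""
    return float(((values - values.mean()) ** 2).sum()) if values.size else 0.0

def best_split(img, mask, angles=(0, 45, 90, 135)):
    """Try a few line orientations through the region centroid and keep the
    one minimizing the summed error of the two half-regions."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    best = None
    for theta in np.deg2rad(angles):
        normal = np.array([np.sin(theta), -np.cos(theta)])
        side = (ys - cy) * normal[0] + (xs - cx) * normal[1] >= 0
        if side.all() or (~side).all():
            continue                                   # line does not split the region
        cost = region_sse(img[ys[side], xs[side]]) + region_sse(img[ys[~side], xs[~side]])
        if best is None or cost < best[0]:
            best = (cost, theta, (cy, cx))
    return best

def bsp_tree(img, mask, depth, min_gain=1.0):
    """Recursive BSP: stop when depth is exhausted or the best split no
    longer reduces the error by at least `min_gain`."""
    ys, xs = np.nonzero(mask)
    parent_cost = region_sse(img[ys, xs])
    split = best_split(img, mask) if depth > 0 else None
    if split is None or parent_cost - split[0] < min_gain:
        return {"leaf": True, "mean": float(img[ys, xs].mean())}
    _, theta, (cy, cx) = split
    normal = np.array([np.sin(theta), -np.cos(theta)])
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    side = ((yy - cy) * normal[0] + (xx - cx) * normal[1] >= 0) & mask
    return {"leaf": False, "line": (theta, (cy, cx)),
            "front": bsp_tree(img, side, depth - 1, min_gain),
            "back": bsp_tree(img, mask & ~side, depth - 1, min_gain)}

# usage: tree = bsp_tree(image, np.ones(image.shape, bool), depth=4)
```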

7.
Wavelet feature selection for image classification
Energy distribution over wavelet subbands is a widely used feature for wavelet packet based texture classification. Due to the overcomplete nature of the wavelet packet decomposition, feature selection is usually applied to obtain better classification accuracy and a compact feature representation. The majority of wavelet feature selection algorithms evaluate each subband separately, which implicitly assumes that the wavelet features from different subbands are independent. In this paper, the dependence between features from different subbands is investigated theoretically and simulated for a given image model. Based on this analysis and simulation, a wavelet feature selection algorithm based on statistical dependence is proposed. The algorithm is further improved by combining the dependence between wavelet features with the evaluation of the individual feature components. Experimental results show the effectiveness of the proposed algorithms in incorporating dependence into wavelet feature selection.
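The baseline feature the abstract starts from, the energy distribution over wavelet packet subbands, can be computed as follows (a sketch assuming PyWavelets; the dependence-based selection itself is not reproduced):

```python
import numpy as np
import pywt

def wp_energy_features(image, wavelet="db2", level=3):
    """Normalized energy of each wavelet packet subband at the chosen
    decomposition level: the standard texture feature vector."""
    wp = pywt.WaveletPacket2D(data=image, wavelet=wavelet,
                              mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return energies / energies.sum(), [n.path for n in nodes]
```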

8.
This paper describes a novel scalable 3D triangular mesh coding method based on the wavelet transform and successive approximation quantization. The algorithm efficiently exploits the intra-correlations between wavelet coefficients independently in each subband. Non-significant wavelet coefficients are clustered, bit-plane by bit-plane, using an octree-based approach. A hierarchical bitstream is then generated, allowing the 3D mesh to be decoded gradually at the desired quality or resolution. The proposal can be applied to meshes of arbitrary topology by using an irregular wavelet decomposition. Objective and subjective quality evaluation on representative 3D meshes shows that the proposed codec provides competitive compression results compared to the state of the art. Furthermore, it is well suited to applications that require fast interactive handling of highly detailed 3D meshes over networks with limited and/or variable bandwidth.

9.
Hierarchical partition priority wavelet image compression
Image compression methods for progressive transmission using optimal hierarchical decomposition, partition priority coding (PPC), and multiple distribution entropy coding (MDEC) are presented. In the proposed coder, a hierarchical subband/wavelet decomposition transforms the original image. The analysis filter banks are selected to maximize the reproduction fidelity in each stage of progressive image transmission. An efficient triple-state differential pulse code modulation (DPCM) method is applied to the smoothed subband coefficients, and the corresponding prediction error is Lloyd-Max quantized. Such a quantizer is also designed to fit the characteristics of the detail transform coefficients in each subband, which are then coded using novel hierarchical PPC (HPPC) and predictive HPPC (PHPPC) algorithms. More specifically, given a suitable partitioning of their absolute range, the quantized detail coefficients are ordered based on both their decomposition level and partition and then are coded along with the corresponding address map. Space-filling scanning further reduces the coding cost by providing a highly spatially correlated address map of the coefficients in each PPC partition. Finally, adaptive MDEC is applied to both the DPCM and HPPC/PHPPC outputs by dividing the source (quantized coefficients) into multiple subsources and applying adaptive arithmetic coding based on their corresponding histograms. Experimental results demonstrate the strong performance of the proposed compression methods.
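A minimal DPCM sketch for the smoothed (low-pass) subband, assuming a simple left/upper-neighbour predictor rather than the paper's triple-state predictor and Lloyd-Max quantizer:

```python
import numpy as np

def dpcm_encode(lowpass):
    """Predict each coefficient from the average of its reconstructed left
    and upper neighbours and keep the prediction error (lossless here;
    a quantizer would sit inside the loop in a real coder)."""
    pred_err = np.zeros_like(lowpass, dtype=float)
    recon = np.zeros_like(lowpass, dtype=float)
    rows, cols = lowpass.shape
    for i in range(rows):
        for j in range(cols):
            left = recon[i, j - 1] if j > 0 else 0.0
            up = recon[i - 1, j] if i > 0 else 0.0
            prediction = (left + up) / 2.0 if (i > 0 and j > 0) else (left + up)
            pred_err[i, j] = lowpass[i, j] - prediction
            recon[i, j] = prediction + pred_err[i, j]
    return pred_err
```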

10.
An embedded image compression algorithm based on adaptive wavelet transform
Remote sensing, fingerprint, and seismic images have rich, complex textures and relatively weak local correlation. Targeting these characteristics, this paper proposes an efficient image compression and coding algorithm that applies an adaptive wavelet transform, chooses the coefficient scanning order appropriately, and quantizes the wavelet coefficients by class. Simulation results show that, at the same compression ratio, the reconstruction quality of the proposed algorithm is clearly better than that of the SPIHT algorithm, especially for texture-rich images such as the standard test image Barbara.

11.
This paper proposes a statistically optimum adaptive wavelet packet (WP) thresholding function for image denoising based on the generalized Gaussian distribution. It applies a computationally efficient multilevel WP decomposition to noisy images to obtain the best tree, or optimal wavelet basis, using Shannon entropy. It selects an adaptive threshold value that is level- and subband-dependent, based on the statistical parameters of the subband coefficients. In the thresholding function, which is based on a maximum a posteriori estimate, the modified version of the dominant coefficients is obtained by optimal linear interpolation between each coefficient and the mean value of the corresponding subband. Experimental results on several test images under different noise intensities show that the proposed algorithm, called OLI-Shrink, yields a better peak signal-to-noise ratio and superior visual quality, as measured by the universal image quality index, than standard denoising methods, especially in the presence of high noise intensity. It also outperforms some of the best state-of-the-art wavelet-based denoising techniques.
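A hedged sketch of the shrinkage idea described above, not the exact OLI-Shrink estimator: coefficients above a universal-style threshold are pulled toward their subband mean by linear interpolation, the rest are zeroed. The threshold rule and interpolation weight here are illustrative assumptions.

```python
import numpy as np

def shrink_toward_subband_mean(coeffs, weight=0.7):
    """For every detail subband of a 2-D wavelet decomposition, keep a
    linear interpolation between each large coefficient and the subband
    mean, and zero the small coefficients."""
    out = [coeffs[0]]                                    # keep the approximation band
    for detail_level in coeffs[1:]:
        bands = []
        for band in detail_level:
            sigma = np.median(np.abs(band)) / 0.6745     # robust noise-scale estimate
            thr = sigma * np.sqrt(2.0 * np.log(band.size + 1))
            kept = weight * band + (1.0 - weight) * band.mean()
            bands.append(np.where(np.abs(band) >= thr, kept, 0.0))
        out.append(tuple(bands))
    return out

# usage (PyWavelets assumed):
#   import pywt
#   c = pywt.wavedec2(noisy, "db4", level=3)
#   denoised = pywt.waverec2(shrink_toward_subband_mean(c), "db4")
```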

12.
The concept of adapted waveform analysis using a best-basis selection out of a predefined library of wavelet packet (WP) bases allows an efficient image representation for the purpose of compression. Image coding methods based on the best-basis WP representation have shown significant coding gains for some image classes compared with methods using a fixed dyadic structured wavelet basis, at the expense, however, of considerably higher computational complexity. A modification of the best-basis method, the so-called complexity-constrained best-basis algorithm (CCBB), is proposed, which parameterises the complexity gap between the fast (standard) wavelet transform and the best wavelet packet basis of a maximal WP library. This new approach allows a 'suboptimal' best basis to be found with respect to a given budget of computational complexity or, in other words, it offers an instrument to control the trade-off between compression speed and coding efficiency. Experimental results are presented for image coding applications, showing a highly nonlinear relationship between rate-distortion performance and computational complexity, such that a relatively small increase in complexity with respect to the standard wavelet basis yields a relatively large rate-distortion gain.
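A sketch of the underlying best-basis recursion (Coifman-Wickerhauser-style additive entropy cost, orthogonal wavelet assumed), without the complexity constraint that the CCBB algorithm adds; PyWavelets is assumed available.

```python
import numpy as np
import pywt

def shannon_cost(coeff_array):
    """Additive entropy cost -sum x^2 * log(x^2), with 0*log(0) taken as 0."""
    e = np.square(coeff_array).ravel()
    e = e[e > 0]
    return float(-(e * np.log(e)).sum())

def best_basis(band, wavelet="db2", depth=3):
    """Split a band into its four wavelet packet children and keep the split
    only if the children's total cost is lower than the parent's.  Returns
    (cost, tree): the tree is the band itself (leaf) or a dict of children."""
    parent_cost = shannon_cost(band)
    if depth == 0 or min(band.shape) < 2 * pywt.Wavelet(wavelet).dec_len:
        return parent_cost, band
    ca, (ch, cv, cd) = pywt.dwt2(band, wavelet)
    children = {k: best_basis(c, wavelet, depth - 1)
                for k, c in zip("ahvd", (ca, ch, cv, cd))}
    child_cost = sum(cost for cost, _ in children.values())
    if child_cost < parent_cost:
        return child_cost, {k: tree for k, (_, tree) in children.items()}
    return parent_cost, band
```

The CCBB idea would stop this recursion early once a complexity budget is exhausted; that constraint is omitted here.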

13.
Three-dimensional (3-D) subband/wavelet coding with motion compensation has been demonstrated to be an efficient technique for video coding in several recent works. When motion compensation is performed with half-pixel accuracy, images need to be interpolated in both the temporal subband analysis and synthesis stages. The subband filter banks developed in these earlier algorithms were not invertible because of this image interpolation. In this paper, an invertible temporal analysis/synthesis system with half-pixel-accurate motion compensation is presented. We view the temporal decomposition of image sequences as a kind of down-conversion of the sampling lattices. The earlier motion-compensated (MC) interlaced/progressive scan conversion scheme is extended to temporal subband analysis/synthesis. The proposed subband/wavelet filter banks allow perfect reconstruction of the decomposed video signal while retaining the high energy compaction of subband transforms. The invertible filter banks are then utilized in our 3-D subband video coder. This video coding system does not contain the temporal DPCM loop employed in the conventional hybrid coder and the earlier MC 3-D subband coders. The experimental results show a significant PSNR improvement with the proposed method. The generalization of our algorithm to MC temporal filtering at arbitrary subpixel accuracy is also discussed.
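The temporal analysis/synthesis can be illustrated with plain Haar lifting over frame pairs, motion compensation omitted; the point is that lifting steps are invertible by construction, which is the property the paper preserves even with half-pixel MC.

```python
import numpy as np

def temporal_haar_analysis(frames):
    """Temporal Haar lifting over frame pairs.  frames: array (T, H, W), T even."""
    even, odd = frames[0::2].astype(float), frames[1::2].astype(float)
    high = odd - even                # predict step
    low = even + high / 2.0          # update step
    return low, high

def temporal_haar_synthesis(low, high):
    """Exact inverse of the lifting steps above (perfect reconstruction)."""
    even = low - high / 2.0
    odd = high + even
    frames = np.empty((2 * low.shape[0],) + low.shape[1:], dtype=float)
    frames[0::2], frames[1::2] = even, odd
    return frames

# quick invertibility check:
#   f = np.random.rand(8, 16, 16); l, h = temporal_haar_analysis(f)
#   assert np.allclose(temporal_haar_synthesis(l, h), f)
```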

14.
张佳岩  周廷显  于淼 《红外技术》2007,29(5):291-296
To meet the need for fast, secure, and efficient transmission of the infrared images produced by infrared seekers in military detection, the compression of infrared images is studied. A comparison of various algorithms shows that the wavelet-based image compression algorithm recommended by the Consultative Committee for Space Data Systems (CCSDS) is easier to implement in hardware, and that the distortion remains acceptable even at compression ratios above 30:1. Based on the histogram characteristics of infrared images and of their wavelet subband coefficients, an improved compression algorithm is proposed. It builds on the CCSDS-recommended algorithm, reassigning the importance factors of the wavelet subbands and selecting the predictor within RICE so as to better suit infrared image compression. Suggestions and methods are also given for improving the error resilience of the resulting compressed bitstream during wireless transmission. Simulations show that the compression performance of the algorithm is close to that of JPEG2000, and the improved algorithm gains 0.2 dB in PSNR over the original.

15.
An adaptive blind watermarking algorithm for embedding a readable watermark
张冠男  王树勋  温泉 《电子学报》2005,33(2):308-312
This paper proposes a DWT-based adaptive blind watermarking algorithm that embeds a readable watermark. By analyzing the characteristics of the detail subband coefficients after the discrete wavelet transform of an image, the mean and variance of the detail subband coefficients are used as part of the watermark information to adaptively modify the coefficient values of selected detail subbands of the wavelet decomposition. The watermark is embedded adaptively under the constraint of imperceptibility, achieving a compromise between imperceptibility and robustness. At the same time, watermark extraction does not require the original image, so blind detection is realized. The watermark here is a meaningful binary image. Experimental results and attack tests show that the proposed algorithm is quite robust against JPEG/JPEG2000 compression, additive noise, cropping, pixel shifting, and several other attacks, and also has some resistance to image processing operations such as histogram equalization, contrast adjustment, and Gaussian filtering.
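A hedged illustration of blind embedding in the DWT domain, using a quantization (parity) rule on detail coefficients rather than the paper's mean/variance-based rule; the wavelet, level, and step size here are arbitrary choices, and PyWavelets is assumed.

```python
import numpy as np
import pywt

def embed_blind_watermark(image, bits, wavelet="haar", level=2, step=8.0):
    """Embed each bit by quantizing one coarsest horizontal detail
    coefficient to an even or odd multiple of `step`; extraction needs
    only the step size, not the original image (blind detection)."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    ch, cv, cd = [c.copy() for c in coeffs[1]]            # coarsest detail subbands
    flat = ch.ravel()
    for i, bit in enumerate(bits[:flat.size]):
        q = np.round(flat[i] / step)
        if int(q) % 2 != int(bit):                        # force parity to match the bit
            q += 1
        flat[i] = q * step
    coeffs[1] = (flat.reshape(ch.shape), cv, cd)
    return pywt.waverec2(coeffs, wavelet)

def extract_blind_watermark(marked, n_bits, wavelet="haar", level=2, step=8.0):
    coeffs = pywt.wavedec2(marked.astype(float), wavelet, level=level)
    flat = coeffs[1][0].ravel()
    return [int(np.round(flat[i] / step)) % 2 for i in range(n_bits)]
```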

16.
吴家骥  吴成柯  吴振森 《电子学报》2006,34(10):1828-1832
Region-of-interest (ROI) coding is an important technique introduced in JPEG2000, yet the JPEG2000 algorithm cannot simultaneously support arbitrarily shaped ROIs and arbitrary scaling factors. This paper proposes a 3D volumetric image compression algorithm based on arbitrarily shaped ROI coding and 3D lifting-wavelet zero-block coding. The new algorithm supports coding from lossy to lossless both inside and outside the ROI. A simple method for generating a lossless ROI mask of arbitrary shape is presented. Taking the characteristics of the 3D subbands into account, a modified 3D SPECK zero-block algorithm is used to encode the transformed coefficients. Other algorithms that support arbitrarily shaped ROI coding are also evaluated, and experiments show that the proposed algorithm achieves better coding performance.
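ROI emphasis is often realized by scaling (bit-shifting) the wavelet coefficients inside the ROI mask so that a bit-plane coder transmits them first. A small sketch of that general-scaling idea, which is illustrative and not the paper's mask-generation method:

```python
import numpy as np

def scale_roi_coefficients(subband, roi_mask, shift):
    """Shift integer coefficients inside the ROI up by `shift` bits so a
    bit-plane coder emits their bits earlier."""
    scaled = subband.astype(np.int64).copy()
    scaled[roi_mask] = scaled[roi_mask] << shift
    return scaled

def unscale_roi_coefficients(scaled, roi_mask, shift):
    """Decoder side: undo the shift using the same mask."""
    out = scaled.copy()
    out[roi_mask] = out[roi_mask] >> shift
    return out
```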

17.
An image compression method is proposed that combines a biorthogonal wavelet transform constructed with the lifting scheme and SPIHT coding. The lifting scheme, developed after multiresolution analysis, is another very effective way of constructing wavelet filters: under the biorthogonality constraint, biorthogonal wavelet bases can be built freely according to the desired wavelet properties, and the lifting structure also speeds up the wavelet transform. The algorithm for constructing biorthogonal wavelets with the lifting scheme is analyzed, a biorthogonal wavelet with good properties is selected, and it is combined with SPIHT coding for image compression. Experiments show that images compressed with this method are reconstructed with high quality.
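As a concrete example of a biorthogonal wavelet built from lifting steps, the integer 5/3 (LeGall) transform below shows the predict/update structure and its exact invertibility; the paper does not specify this particular wavelet, so treat it only as an illustration.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the integer 5/3 lifting transform (len(x) must be even)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    right = np.roll(even, -1)
    right[-1] = even[-1]                        # replicate boundary sample
    d = odd - ((even + right) >> 1)             # predict step
    left_d = np.roll(d, 1)
    left_d[0] = d[0]
    s = even + ((left_d + d + 2) >> 2)          # update step
    return s, d

def lifting_53_inverse(s, d):
    """Exactly undo the update and predict steps, in reverse order."""
    left_d = np.roll(d, 1)
    left_d[0] = d[0]
    even = s - ((left_d + d + 2) >> 2)
    right = np.roll(even, -1)
    right[-1] = even[-1]
    odd = d + ((even + right) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Because every lifting step is undone by the mirrored step, reconstruction is exact for any input, which is why the lifting structure is both fast and convenient for building biorthogonal wavelets.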

18.
In this paper, an object-oriented digital watermarking technique in the wavelet domain is proposed for still images. Because the human eye perceives different regions of an image with different sensitivity, the image is divided into regions of interest and regions of no interest to human vision. Using the positional relationships among the multiresolution wavelet subbands and their differing visual sensitivity, the watermark is embedded with a layered appending technique. Experimental results show that the proposed technique survives image processing operations, additive noise, and JPEG compression.

19.
雷蕾  岑翼刚  崔丽鸿  赵瑞珍  岑丽辉 《信号处理》2013,29(11):1519-1525
Sparse representation is the premise of compressed sensing: the signal must be sparse itself or admit a sparse representation in some orthogonal basis. For signals that are not sufficiently sparse either in the original domain or after a wavelet transform, this paper proposes a sparse representation algorithm based on modulus maxima. Building on the wavelet transform and exploiting the structure of the wavelet decomposition, the high-frequency coefficients at each level are sparsified by keeping only their modulus maxima; the measurements are then obtained through a measurement matrix and entropy coded to realize compressed data transmission. At the decoder, orthogonal matching pursuit is used to estimate the modulus maxima, and the original signal is finally reconstructed by alternating projection. Simulation results show that, compared with classical compressed sensing algorithms, the quality of the recovered signal is considerably improved; because the sparsity is increased, the signal is also more compressible, and the experiments indicate that the benefit is more pronounced for complex signals.
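A minimal sketch of the sparsification step: keep only the modulus maxima (local maxima of the absolute value) of each detail band of a 1-D wavelet decomposition; the measurement, OMP recovery, and alternating-projection stages are not shown. PyWavelets is assumed.

```python
import numpy as np
import pywt

def modulus_maxima(d):
    """Zero every coefficient that is not a local maximum of |d|."""
    mag = np.abs(d)
    keep = np.zeros_like(d, dtype=bool)
    keep[1:-1] = (mag[1:-1] >= mag[:-2]) & (mag[1:-1] >= mag[2:]) & (mag[1:-1] > 0)
    return np.where(keep, d, 0.0)

def sparsify_signal(x, wavelet="db4", level=4):
    """Wavelet-decompose a 1-D signal and keep only the modulus maxima of
    each detail band; the approximation band is left untouched."""
    coeffs = pywt.wavedec(np.asarray(x, dtype=float), wavelet, level=level)
    return [coeffs[0]] + [modulus_maxima(d) for d in coeffs[1:]]
```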

20.
To use wavelet packets for lossy data compression, the following issues must be addressed: quantization of the wavelet subbands, allocation of bits to each subband, and best-basis selection. We present an algorithm for wavelet packets that systematically identifies all bit allocations/best-basis selections on the lower convex hull of the rate-distortion curve. We demonstrate the algorithm on tree-structured vector quantizers used to code image subbands from the wavelet packet decomposition.
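A sketch of the standard Lagrangian mechanism behind such an algorithm: sweeping the multiplier lambda and picking, per subband, the operating point that minimizes D + lambda*R visits exactly the allocations on the lower convex hull of the rate-distortion curve. The data layout here is an assumption, not the paper's interface.

```python
import numpy as np

def hull_allocations(rd_points, lambdas):
    """rd_points: one list of (rate, distortion) operating points per subband.
    For each lambda, choose the point minimizing D + lambda*R in every
    subband; the resulting allocations lie on the lower convex hull of the
    total rate-distortion curve."""
    solutions = []
    for lam in lambdas:
        choice, total_r, total_d = [], 0.0, 0.0
        for points in rd_points:
            costs = [d + lam * r for r, d in points]
            k = int(np.argmin(costs))
            choice.append(k)
            total_r += points[k][0]
            total_d += points[k][1]
        solutions.append((lam, choice, total_r, total_d))
    return solutions
```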
