1.
2.
The edge blurring that wavelet-based image compression algorithms produce on noisy images and at low bit rates has long resisted a good solution. To address this problem, a zerotree wavelet image compression method with an edge-preserving property is proposed. First, wavelet edge detection is applied to the image to determine which wavelet coefficients correspond to image edges, and those coefficients are protected. Next, the wavelet-domain coefficients are denoised using an improved soft-threshold shrinkage method. Finally, the image is compression-coded with the SPIHT (Set Partitioning in Hierarchical Trees) algorithm. Experimental results show that the method not only achieves a high compression ratio and removes noise well, but also alleviates the edge-blurring problem to a degree and restores image quality well.
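The denoising step described above can be sketched in a few lines. This is a minimal illustration of soft-threshold shrinkage with a protection mask for edge coefficients; the function name, the mask representation, and the threshold value are ours, not the paper's.

```python
import numpy as np

def soft_threshold(coeffs, thresh, edge_mask=None):
    """Soft-threshold wavelet coefficients toward zero; coefficients
    flagged in edge_mask are left untouched, a simple stand-in for the
    edge-protection step described in the abstract."""
    shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - thresh, 0.0)
    if edge_mask is not None:
        shrunk[edge_mask] = coeffs[edge_mask]  # protect edge coefficients
    return shrunk

coeffs = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
mask = np.array([False, False, False, True, False])  # 1.5 is an "edge" coefficient
print(soft_threshold(coeffs, 1.0, mask))  # small coefficients vanish; 1.5 survives intact
```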
3.
4.
An Improved Zerotree Wavelet Image Compression Algorithm    Cited by: 5 (self-citations: 0, other citations: 5)
The Gibbs artifacts that wavelet-based image compression algorithms exhibit at low bit rates have long lacked a good solution. The main reason is that the purely pixel-value-based MSE (mean square error) criterion allocates too few bits to the wavelet coefficients corresponding to image edges. Building on a detailed analysis of the zerotree wavelet compression algorithms of Shapiro and of Said & Pearlman, this paper improves the original algorithms in two respects: suppression of high-frequency noise, and adaptive quantization of the coefficients corresponding to image edges; comparative experimental results are given. The paper's main significance is the idea of combining recognition with compression: owing to the spatial localization property of the wavelet transform, this can be carried out very flexibly …
5.
Because 3D objects occlude themselves, browsing a 3D mesh is a classic random-access problem. If, after the mesh has been compressed, only the data for the region visible from the current viewpoint is transmitted and decoded, network bandwidth and decoding resources can be saved. Existing mesh compression methods largely ignore random access. A wavelet-based progressive geometry compression method supporting random access is proposed. The basic idea is to partition the mesh surface into many patches, encode the detail information of each patch independently, and transmit only the detail of the patches visible from the current viewpoint. The wavelet coefficients representing the detail are organized into zerotrees, and a modified SPIHT algorithm is designed to compress each wavelet zerotree independently. Experimental results show compression efficiency comparable to the PGC method; when only the visible region's detail is transmitted, the method transmits only about 60% of the data PGC requires.
6.
7.
Wavelet-based image compression is a successful and increasingly important compression technique, but the edge blurring that wavelet-based algorithms exhibit at low bit rates remains a recognized open problem. To reduce this blurring at low bit rates to some degree, a new quantization method based on potential-function fuzzy clustering is proposed for the high-frequency subband wavelet coefficients produced by the wavelet decomposition. The quantization takes into account both the distribution of the high-frequency subband coefficients and their importance for preserving edges, texture, and similar information, and it also exploits properties of fuzzy sets. Experiments show that at low bit rates the method preserves image edges and texture well, improving the subjective quality of the reconstructed image to a degree. The method is an initial attempt at fuzzy-clustering quantization for wavelet image compression.
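The quantization idea above maps each high-frequency coefficient to a representative cluster centre. The sketch below is a crude hard-assignment stand-in: in the paper the centres come from potential-function fuzzy clustering and membership is fuzzy, whereas here the centres are fixed inputs and the assignment is nearest-centre; the function name is ours.

```python
import numpy as np

def quantize_to_centroids(coeffs, centroids):
    """Quantize coefficients to the nearest cluster centre and return
    (index array, reconstructed values). Hard assignment only; a
    simplification of the fuzzy-clustering quantizer in the abstract."""
    coeffs = np.asarray(coeffs, dtype=float)
    centroids = np.asarray(centroids, dtype=float)
    # distance from every coefficient to every centre, then argmin per coefficient
    idx = np.argmin(np.abs(coeffs[:, None] - centroids[None, :]), axis=1)
    return idx, centroids[idx]

idx, q = quantize_to_centroids([-2.1, 0.05, 1.9], [-2.0, 0.0, 2.0])
print(idx, q)  # each coefficient snaps to its nearest centre
```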
8.
Treating surface reconstruction as a signal reconstruction process, and targeting large sets of scattered data points, the method combines mature triangulation and mesh-simplification algorithms with the lifting wavelet transform to reconstruct surfaces, quickly building Catmull-Clark surfaces of complex topology. A wavelet-coefficient estimation method and a locally optimal path-search algorithm based on the mesh topology are given. Running examples demonstrate the effectiveness of the algorithm.
9.
To resolve the conflict between massive medical data on one side and limited storage space and transmission bandwidth on the other, a near-lossless medical image compression algorithm suitable for PACS (picture archiving and communication system) is proposed. First, the lesion region and the background region are decomposed with the shearlet transform and the wavelet transform, respectively. Second, a set of significant coefficients that closely approximate the lesion-region image is selected, achieving denoising and initial compression. The selected lesion-region coefficients are then losslessly Huffman-coded, while the background-region wavelet coefficients are quantized and coded with the set partitioning in hierarchical trees (SPIHT) algorithm. Finally, the images obtained by decoding and inverse-transforming each region are merged into the full reconstructed image. Experimental results show that, at the same compression ratio as lossy wavelet compression, the mean structural similarity (MSSIM) between the reconstructed and original lesion regions improves by 6% and the peak signal-to-noise ratio (PSNR) is 2.54 times that of the lossy wavelet method, while the full reconstructed image's MSSIM improves by 2% and its PSNR by 13%.
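The lossless stage applied to the lesion-region coefficients is standard Huffman coding, which can be sketched compactly with a heap. This is a generic textbook construction, not the paper's implementation; the tie-breaking index exists only to keep heap comparisons well defined.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table (symbol -> bit string) from a symbol
    sequence. A minimal sketch of the lossless coding stage."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # heap items: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
print(codes)  # more frequent symbols get shorter codewords
```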
10.
We propose a new algorithm for image compression based on compressive sensing (CS). The algorithm starts with a traditional multilevel 2-D wavelet decomposition, which provides a compact representation of image pixels. We then introduce a new approach for rearranging the wavelet coefficients in a structured manner to formulate sparse vectors. We use a Gaussian random measurement matrix normalized with the weighted average root-mean-squared energies of the different wavelet subbands. Compressed sampling is finally performed using this normalized measurement matrix. At the decoding end, the image is reconstructed using a simple ℓ1-minimization technique. The proposed wavelet-based CS reconstruction, with the normalized measurement matrix, results in a performance increase compared to other conventional CS-based techniques. The proposed approach introduces a completely new framework for using CS in the wavelet domain. The technique was tested on different natural images. We show that the proposed technique outperforms most existing CS-based compression methods.
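The normalization step can be illustrated as follows: each column of a Gaussian measurement matrix is scaled by the RMS energy of the subband its coefficient came from, so high-energy subbands dominate the measurements. This is a sketch of the idea under our own naming and toy data; the paper's exact weighting scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_measurement_matrix(m, subband_coeffs):
    """Return (column-weighted Gaussian sampling operator, concatenated
    signal). Each column is scaled by the RMS energy of the subband its
    coefficient belongs to; a sketch of the normalization idea."""
    x = np.concatenate(subband_coeffs)
    # per-coefficient weight: RMS energy of its subband
    weights = np.concatenate(
        [np.full(c.size, np.sqrt(np.mean(c ** 2))) for c in subband_coeffs])
    phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
    return phi * weights, x

# two toy "subbands": a high-energy coarse band and a low-energy detail band
subbands = [np.array([4.0, -4.0]), np.array([0.5, -0.5, 0.25, -0.25])]
phi, x = normalized_measurement_matrix(3, subbands)
y = phi @ x  # 3 compressed measurements of a 6-sample signal
```

A decoder would then recover a sparse estimate of `x` from `y` by ℓ1 minimization, which needs a convex solver and is omitted here.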
11.
12.
To address the often poor quality of images reconstructed by wavelet-transform compression coding at high compression ratios, a coding method based on the spectral graph wavelet transform is proposed. The image is first converted into a graph, which is decomposed with the spectral graph wavelet transform to obtain spectral graph wavelet coefficients whose energy decays as scale increases. The SPECK algorithm is then improved according to the properties of these coefficients. Finally, the spectral graph wavelet coefficients are quantized and compression-coded with the improved SPECK algorithm, and the original image is recovered from the sparse coefficients while the image data is compressed. Experimental results show that the method compresses natural images efficiently: compared with wavelet-transform compression, the PSNR of the reconstructed image is higher and varies smoothly, and a larger compression ratio is obtained at the same time.
13.
Ikbel Sayahi, Akram Elkefi, Chokri Ben Amar 《Multimedia Tools and Applications》2017,76(15):16439-16462
This work is a connecting link between the fields of digital transmission and 3D (three-dimensional) watermarking. In fact, we propose in this paper a blind and robust watermarking algorithm for 3D multiresolution meshes. Before being watermarked, the mesh is divided into GOTs (Groups Of Triangles) using a spiral scanning method. At every instant, only one GOT is loaded into memory; it undergoes a wavelet transform, and embedding modifies the resulting wavelet-coefficient vector after it has been expressed in a cylindrical coordinate system. After being watermarked, the current GOT is released from memory so the next GOT can be loaded. The information to be inserted is coded with a turbo encoder to generate the codeword. Once the entire mesh has been scanned, the watermarked mesh is reconstructed. During extraction, the same steps are applied to the watermarked mesh alone: our algorithm is therefore blind. Extracted data are decoded with the error-correcting turbo code to correct any errors that occurred. The results show that our algorithm preserves mesh quality even at a very large insertion rate while significantly reducing the memory used. Data extraction succeeds despite the application of various attacks; the algorithm is robust against the most popular attacks, such as similarity transformation, noise addition, smoothing, coordinate quantization, simplification, and compression.
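The change of representation that precedes embedding, from Cartesian wavelet-coefficient vectors to cylindrical coordinates, is a standard conversion and can be sketched directly; the function names are ours.

```python
import math

def to_cylindrical(x, y, z):
    """Convert a 3-D wavelet-coefficient vector to cylindrical
    coordinates (rho, theta, z); the embedding step in the abstract
    modifies the coefficients in this representation."""
    rho = math.hypot(x, y)        # radial distance in the xy-plane
    theta = math.atan2(y, x)      # azimuth angle
    return rho, theta, z

def from_cylindrical(rho, theta, z):
    """Inverse conversion, used when reconstructing the watermarked GOT."""
    return rho * math.cos(theta), rho * math.sin(theta), z

rho, theta, z = to_cylindrical(3.0, 4.0, 1.0)
print(rho)  # 5.0; the round trip recovers the original vector
```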
14.
The raster-data pyramid is a basic organizational structure in spatial information systems. Wavelet-based pyramid construction must handle the boundary problems caused by tiling the data; existing algorithms either ignore the boundary problem or eliminate boundary seams only at the cost of a great deal of extra computation. For the boundary problem of wavelet transforms on tiled data, a wavelet-coefficient stitching algorithm is proposed: the boundary coefficients of adjacent tiles are superimposed, so that the stitched coefficients are equivalent to a wavelet transform applied directly to the untiled data. On this basis, a seamless raster-data wavelet-pyramid construction method is proposed: the large data block is first tiled and given a multi-level wavelet transform, and the stitching algorithm then seamlessly joins the coefficients of the individual tiles. The resulting pyramid structure eliminates boundary coefficients and organizes the subband wavelet coefficients seamlessly. Experimental results show that the stitching algorithm greatly reduces the data volume of the upper pyramid levels and is easy to implement.
15.
This paper describes a new algorithm for electrocardiogram (ECG) compression. The main goal of the algorithm is to reduce the bit rate while keeping the reconstructed signal distortion at a clinically acceptable level. It is based on the compression of the linearly predicted residuals of the wavelet coefficients of the signal. In this algorithm, the input signal is divided into blocks and each block goes through a discrete wavelet transform; then the resulting wavelet coefficients are linearly predicted. In this way, a set of uncorrelated transform domain signals is obtained. These signals are compressed using various coding methods, including modified run-length and Huffman coding techniques. The error corresponding to the difference between the wavelet coefficients and the predicted coefficients is minimized in order to get the best predictor. The method is assessed through the use of percent root-mean square difference (PRD) and visual inspection measures. By this compression method, small PRD and high compression ratio with low implementation complexity are achieved. Finally, we have compared the performance of the ECG compression algorithm on data from the MIT-BIH database.
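Linear prediction leaves residuals that are mostly near zero, which is exactly where run-length coding pays off. The sketch below shows plain run-length coding of a residual sequence; the paper uses a modified variant, and the function names here are ours.

```python
def run_length_encode(seq):
    """Encode a sequence as (symbol, run length) pairs; effective when
    prediction residuals contain long runs of identical (often zero) values."""
    if not seq:
        return []
    runs = []
    current, count = seq[0], 1
    for s in seq[1:]:
        if s == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = s, 1
    runs.append((current, count))
    return runs

def run_length_decode(runs):
    """Expand (symbol, run length) pairs back into the original sequence."""
    return [s for s, n in runs for _ in range(n)]

residuals = [0, 0, 0, 2, 2, 0, 0, 0, 0, 1]
print(run_length_encode(residuals))  # long zero runs collapse to single pairs
```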
16.
17.
Infrared and visible image fusion using fuzzy logic and population-based optimization    Cited by: 1 (self-citations: 0, other citations: 1)
Jamal Saeedi, Karim Faez 《Applied Soft Computing》2012,12(3):1041-1054
This paper presents a new wavelet-based algorithm for the fusion of spatially registered infrared and visible images. Wavelet-based image fusion is the most common fusion method, which fuses the information from the source images in the wavelet transform domain according to some fusion rules. We specifically propose new fusion rules for fusion of low and high frequency wavelet coefficients of the source images in the second step of the wavelet-based image fusion algorithm. First, the source images are decomposed using the dual-tree discrete wavelet transform (DT-DWT). Then, a fuzzy-based approach is used to fuse high frequency wavelet coefficients of the IR and visible images. In particular, fuzzy logic is used to integrate the outputs of three different fusion rules (weighted averaging, selection using a pixel-based decision map (PDM), and selection using a region-based decision map (RDM)), based on a dissimilarity measure of the source images. The objective is to utilize the advantages of previous pixel- and region-based methods in a single scheme. The PDM is obtained based on local activity measurement in the DT-DWT domain of the source images. A new segmentation-based algorithm is also proposed to generate the RDM using the PDM. In addition, a new optimization-based approach using population-based optimization is proposed for the low frequency fusion rule instead of simple averaging. After fusing the low and high frequency wavelet coefficients of the source images, the final fused image is obtained using the inverse DT-DWT. This new method provides improved subjective and objective results as compared to previous image fusion methods.
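A pixel-based decision map of the kind described above can be sketched as a choose-max rule driven by local activity (the sum of absolute coefficient values in a small window). This is a minimal stand-in for one of the three rules the paper combines, not the full fuzzy scheme; the function name and window size are ours.

```python
import numpy as np

def fuse_highfreq(c_ir, c_vis, window=3):
    """Fuse two high-frequency coefficient maps: at each position keep the
    coefficient whose source has higher local activity. A minimal
    pixel-based decision-map sketch."""
    pad = window // 2

    def activity(c):
        # sum of absolute values over a sliding window, edge-padded
        p = np.pad(np.abs(c), pad, mode="edge")
        act = np.zeros_like(c, dtype=float)
        h, w = c.shape
        for i in range(h):
            for j in range(w):
                act[i, j] = p[i:i + window, j:j + window].sum()
        return act

    decision = activity(c_ir) >= activity(c_vis)   # the pixel-based decision map
    return np.where(decision, c_ir, c_vis)

a = np.array([[5.0, 0.0], [0.0, 0.0]])  # toy IR detail band
b = np.array([[0.0, 0.0], [0.0, 3.0]])  # toy visible detail band
print(fuse_highfreq(a, b))  # each corner keeps the more active source
```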
18.
3D mesh data are irregularly sampled datasets, so traditional transform-domain image watermarking algorithms cannot be applied to meshes directly. A robust wavelet-domain mesh watermarking algorithm is proposed. The algorithm applies a wavelet transform to a semi-regular multiresolution mesh, obtaining a base mesh and a series of wavelet coefficients. Based on an experimental analysis of the statistical properties of the wavelet coefficients, the watermark is embedded in the wavelet coefficients of the low-frequency subband, and different embedding strengths are designed for the tangential and normal components of the coefficients to reduce geometric distortion. Experimental results show that the watermark is imperceptible and fairly robust.
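The component-dependent embedding strength can be illustrated with a simple additive scheme: the same watermark bits are added to both components, but with a smaller strength on the normal component, where perturbations are more visible geometrically. The strengths and function name below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def embed_watermark(coeffs_tangential, coeffs_normal, bits,
                    alpha_t=0.01, alpha_n=0.002):
    """Additively embed watermark bits into low-frequency mesh wavelet
    coefficients, using a weaker strength (alpha_n < alpha_t) on the
    normal component to limit visible geometric distortion."""
    signs = 2 * np.asarray(bits, dtype=float) - 1.0   # bits {0,1} -> {-1,+1}
    wt = coeffs_tangential + alpha_t * signs
    wn = coeffs_normal + alpha_n * signs
    return wt, wn

wt, wn = embed_watermark(np.array([1.0, 2.0]), np.array([0.5, 0.5]), [1, 0])
print(wt, wn)  # tangential components shift more than normal ones
```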
19.
20.
The spatial-orientation tree, SOT (Spatial-Orientation Tree), plays an extremely important role in wavelet-based SAR image compression: both the EZW (Embedded Zerotree Wavelet) and SPIHT (Set Partitioning in Hierarchical Trees) coding methods exploit the parent-child relations in the SOT. Speckle noise severely degrades the quality and compressibility of SAR images, yet the SOT, a very effective data structure for studying the spatial correlation of wavelet coefficients across resolutions, has not been well exploited for speckle removal. A new SAR image compression method is proposed that combines SOT-based speckle removal with EZW embedded zerotree coding. Compression experiments on airborne synthetic aperture radar images show that the method outperforms JPEG and the standard EZW algorithm.
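The parent-child relation that EZW and SPIHT exploit is simple to state: a coefficient at position (i, j) has a 2x2 block of children at the next finer scale. The sketch below shows this basic mapping; the special child layout of the coarsest LL band used by full SPIHT implementations is deliberately omitted.

```python
def sot_children(i, j, size):
    """Children of wavelet coefficient (i, j) in the spatial-orientation
    tree: the 2x2 block at the next finer scale. `size` is the side
    length of the full transform; coefficients in the finest subbands
    (index >= size // 2) have no descendants. The coarsest LL band's
    special-case layout is omitted in this sketch."""
    if i >= size // 2 or j >= size // 2:
        return []
    return [(2 * i, 2 * j), (2 * i, 2 * j + 1),
            (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]

print(sot_children(1, 1, 8))  # four children at the next finer scale
print(sot_children(4, 1, 8))  # finest scale: no children
```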