Related Articles
20 related articles found (search time: 15 ms).
1.
In this paper we focus on the second-stage algorithms of the Burrows–Wheeler compression algorithm, which in the original version is the Move To Front transform. We discuss many of the replacements presented so far and compare the compression results obtained with them. We then propose a new algorithm that yields a better compression ratio than the previous algorithms. Copyright © 2001 John Wiley & Sons, Ltd.
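For reference, a minimal sketch of the Move To Front transform that this line of work replaces (plain Python, illustrative only, not any author's implementation):

```python
def mtf_encode(data: bytes) -> list[int]:
    """Move To Front: emit each symbol's position in a recency list,
    then move that symbol to the front of the list."""
    table = list(range(256))
    out = []
    for byte in data:
        rank = table.index(byte)
        out.append(rank)
        table.pop(rank)
        table.insert(0, byte)
    return out

def mtf_decode(ranks: list[int]) -> bytes:
    """Inverse transform: replay the same list updates."""
    table = list(range(256))
    out = bytearray()
    for rank in ranks:
        byte = table.pop(rank)
        out.append(byte)
        table.insert(0, byte)
    return bytes(out)
```

After the BWT, equal symbols cluster, so the MTF output is dominated by small ranks (mostly zeros), which the final entropy coder exploits.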

2.
Jürgen Abel 《Software》2010,40(9):751-777
The lossless Burrows–Wheeler compression algorithm has received considerable attention over recent years for both its simplicity and effectiveness. It is based on a permutation of the input sequence—the Burrows–Wheeler transformation (BWT)—which groups symbols with a similar context close together. In the original version, this permutation was followed by a Move-To-Front transformation and a final entropy coding stage. Later versions used different algorithms placed after the BWT, since these stages have a significant influence on the compression rate. This paper describes different algorithms and improvements for these post-BWT stages, including a new context-based approach. The results for compression rates are presented together with compression and decompression times on the Calgary corpus, the Canterbury corpus, the large Canterbury corpus and the Lukas 2D 16-bit medical image corpus. Copyright © 2010 John Wiley & Sons, Ltd.
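As background for the pipeline described above, a naive sketch of the BWT permutation itself (quadratic rotation sort for clarity; real implementations use suffix arrays, and the sentinel convention is one of several):

```python
def bwt_forward(s: bytes, sentinel: bytes = b"\x00") -> bytes:
    """Naive BWT: append a unique sentinel, sort all cyclic rotations
    lexicographically, and take the last column of the sorted matrix."""
    assert sentinel not in s, "sentinel byte must not occur in the input"
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(rot[-1] for rot in rotations)
```

The last column groups symbols that share a following context, which is what makes the subsequent MTF (or replacement) stage effective.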

3.
Jürgen Abel 《Software》2007,37(3):247-265
The stage after the Burrows–Wheeler transform (BWT) has a key function inside the Burrows–Wheeler compression algorithm as it transforms the BWT output from a local context into a global context. This paper presents the Incremental Frequency Count stage, a post‐BWT stage. The new stage is paired with a run length encoding stage between the BWT and the entropy coding stage of the algorithm. It offers high throughput similar to a Move To Front stage, and at the same time good compression rates like the strong but slow Weighted Frequency Count stage. The properties of the Incremental Frequency Count stage are compared to the Move To Front and Weighted Frequency Count stages by their compression rates and speeds on the Calgary and large Canterbury corpora. Copyright © 2006 John Wiley & Sons, Ltd.
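Abel's paper defines the exact counter update; the abstract does not, so the sketch below is only a guess at the general shape of such a stage: a per-symbol counter incremented on use and periodically rescaled, with symbols ranked by counter value. The parameters are hypothetical.

```python
def frequency_rank_encode(data: bytes, increment: int = 32, limit: int = 4096) -> list[int]:
    """Hypothetical incremental-frequency ranking stage: emit each
    symbol's rank in a list kept sorted by a decaying usage counter."""
    symbols = list(range(256))
    counts = [0] * 256
    out = []
    for byte in data:
        out.append(symbols.index(byte))
        counts[byte] += increment
        if counts[byte] > limit:               # rescale so old statistics decay
            counts = [c // 2 for c in counts]
        symbols.sort(key=lambda sym: -counts[sym])  # stable sort keeps ties in place
    return out
```

Because the update is deterministic, a decoder can mirror the same state, just as with MTF.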

4.
Peter Fenwick 《Software》2002,32(13):1307-1316
The final coder in Burrows–Wheeler compression is usually either an adaptive Huffman coder (for speed) or a complex of arithmetic coders for better compression. This article describes the use of conventional pre‐defined variable length codes or universal codes and shows that they too can give excellent compression. The paper also describes a ‘sticky Move‐to‐Front’ modification which gives a useful improvement in compression for most files. Copyright © 2002 John Wiley & Sons, Ltd.
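To make "pre-defined variable length codes" concrete, here is the Elias gamma code, a standard universal code of the kind evaluated (not necessarily the exact code set used in the paper):

```python
def elias_gamma(n: int) -> str:
    """Elias gamma code for n >= 1: the binary length of n, in unary
    zeros, followed by n itself in binary."""
    assert n >= 1
    binary = bin(n)[2:]                 # e.g. 5 -> '101'
    return "0" * (len(binary) - 1) + binary

# Small values (frequent after Move-to-Front) get the short codes:
# 1 -> '1', 2 -> '010', 3 -> '011', 4 -> '00100', 5 -> '00101', ...
```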

5.
The Burrows–Wheeler Transform (BWT) produces a permutation of a string X, denoted X̂, by sorting the n cyclic rotations of X into full lexicographical order and taking the last column of the resulting n×n matrix to be X̂. The transformation is reversible in linear time. In this paper, we consider an alteration to the process, called k‐BWT, where rotations are only sorted to a depth k. We propose new approaches to the forward and reverse transform, and show that the methods are efficient in practice. More than a decade ago, two algorithms were independently discovered for reversing the k‐BWT; two recent algorithms have since lowered the asymptotic bounds for the reverse transformation. We examine the practical performance of these reversal algorithms. We find that the original approach is most efficient in practice, and we investigate new approaches, aimed at further speeding reversal, which store precomputed context boundaries in the compressed file. By explicitly encoding the context boundaries, we present a reversal technique that is both efficient and effective. Finally, our study elucidates an inherently cache‐friendly – and hitherto unobserved – behavior in the reverse k‐BWT, which could lead to new applications of the k‐BWT transform. In contrast to previous empirical studies, we show that the partial transform can be reversed significantly faster than the full transform, without significantly affecting compression effectiveness. Copyright © 2011 John Wiley & Sons, Ltd.
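A naive sketch of the forward k-BWT described above, with ties among rotations kept in original rotation order (one common convention; the paper's exact tie-breaking may differ):

```python
def kbwt_forward(s: bytes, k: int) -> bytes:
    """k-BWT: stable-sort the cyclic rotations of s by their first k
    symbols only, then take the last column. Quadratic, for illustration;
    a real codec also records the data needed for reversal."""
    rotations = [s[i:] + s[:i] for i in range(len(s))]
    rotations.sort(key=lambda rot: rot[:k])   # stable: ties stay in rotation order
    return bytes(rot[-1] for rot in rotations)
```

Sorting only to depth k keeps most of the context-grouping benefit while making the sort, and especially the reversal, cheaper in practice.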

6.
7.
Peter Fenwick 《Software》1998,28(5):547-559
Methods used by Shannon in 1951 when investigating the information content of English text are shown to lead to a class of ‘symbol ranking’ text compressors. Included in this class are some recent compressors which combine high performance with completely novel techniques. These are described under the aegis of the symbol ranking classification. Some new compressors, deliberately designed as symbol ranking, are then described. One of these has operated at over 1 Mbyte per second in software, and should be suitable for hardware implementation at tens of megabytes per second, or higher. © 1998 John Wiley & Sons, Ltd.

8.
We introduce a combinatorial optimization framework that naturally induces a class of optimal word permutations with respect to a suitably defined cost function taking into account various measures of relatedness between words. The Burrows and Wheeler transform (bwt) (cf. [M. Burrows, D. Wheeler, A block sorting lossless data compression algorithm, Technical Report 124, Digital Equipment Corporation, 1994]), and its analog for labelled trees (cf. [P. Ferragina, F. Luccio, G. Manzini, S. Muthukrishnan, Structuring labeled trees for optimal succinctness, and beyond, in: Proc. of the 45th Annual IEEE Symposium on Foundations of Computer Science, 2005, pp. 198–207]), are special cases in the class. We also show that the class of optimal word permutations defined here is identical to the one identified by Ferragina et al. for compression boosting [P. Ferragina, R. Giancarlo, G. Manzini, M. Sciortino, Boosting textual compression in optimal linear time, Journal of the ACM 52 (2005) 688–713]. Therefore, they are all highly compressible. We also provide, by using techniques from Combinatorics on Words, a fast method to compute bwt without using any end-of-string symbol. We also investigate more general classes of optimal word permutations, where relatedness of symbols may be measured by functions more complex than context length. For this general problem we provide an instance that is MAX-SNP hard, and therefore unlikely to be solved or approximated efficiently. The results presented here indicate that a key feature of the Burrows and Wheeler transform seems to be, besides compressibility, the existence of efficient algorithms for its computation and inversion.

9.
Based on a study of the lossless data compression standard recommended by the CCSDS (Consultative Committee for Space Data Systems) and a survey of compression algorithms in common use, this paper describes a lossless data compression algorithm with low latency, high speed and strong error resilience: the Rice compression algorithm. It achieves compression ratios above 50% and gives satisfactory results on many types of data. The zero-block part of the algorithm is described in some detail, because preprocessed data values are usually small and, for images, contain a large number of zeros; applying zero-block coding when zeros are frequent therefore works very well. Software tests show that the results meet the CCSDS standard.
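A minimal sketch of the Rice code underlying the CCSDS scheme: the quotient in unary, the remainder in k binary bits (the standard itself wraps this in preprocessing and per-block option selection):

```python
def rice_encode(n: int, k: int) -> str:
    """Rice code of a non-negative sample: quotient n >> k in unary,
    remainder in k fixed binary bits."""
    assert n >= 0 and k >= 1
    quotient, remainder = n >> k, n & ((1 << k) - 1)
    return "1" * quotient + "0" + format(remainder, f"0{k}b")

# The zero-block option from the abstract: when preprocessing leaves whole
# blocks of zeros, a single short code for the run of zero blocks replaces
# per-sample Rice codes.
```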

10.
An attractive way to increase text compression is to replace words with references to a text dictionary given in advance. Although a few works exist in this area, they do not fully exploit the compression possibilities or consider alternative preprocessing variants for the various compressors used in the later phase. In this paper, we discuss several aspects of dictionary‐based compression, including compact dictionary representation, and present a PPM/BWCA‐oriented scheme, the word replacing transformation, achieving compression ratios 2–6% higher than the state‐of‐the‐art StarNT (2003) text preprocessor while working at greater speed. We also present an alternative scheme designed for LZ77 compressors, whose advantage over StarNT reaches up to 14% in combination with gzip. Copyright © 2005 John Wiley & Sons, Ltd.
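The core idea, sketched with a hypothetical toy dictionary and token scheme (the paper's codeword assignment and dictionary layout are more elaborate):

```python
# Toy word-replacing transformation: frequent words from a dictionary
# given in advance are replaced by reserved byte tokens, producing a
# shorter and more regular stream for the PPM/BWT or LZ77 stage.
DICTIONARY = ["the", "and", "compression", "dictionary"]      # hypothetical
TOKENS = {w: bytes([0xF0 + i]) for i, w in enumerate(DICTIONARY)}

def word_replace(text: str) -> bytes:
    out = bytearray()
    for word in text.split(" "):
        out += TOKENS.get(word, word.encode("utf-8"))         # token or verbatim
        out += b" "
    return bytes(out[:-1])
```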

11.
In this paper we present a new lossless image compression algorithm. To achieve high compression speed we use linear prediction, a modified Golomb–Rice code family, and a very fast prediction error modeling method. We compare the algorithm experimentally with others on medical and natural continuous-tone grayscale images of depths up to 16 bits. Its results are especially good for large images, for natural images of high bit depths, and for noisy images. The average compression speed on an Intel Xeon 3.06 GHz CPU is 47 MB/s. For large images the speed is over 60 MB/s, i.e. the algorithm needs fewer than 50 CPU cycles per byte of image. Copyright © 2006 John Wiley & Sons, Ltd.
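A toy version of the pipeline the abstract describes: a left-neighbour linear predictor, the usual zigzag mapping of signed errors to non-negative integers, and a Golomb-Rice code (the paper's predictor and error modeling are more refined):

```python
def zigzag(e: int) -> int:
    """Map signed error to non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (e << 1) if e >= 0 else (-e << 1) - 1

def compress_row(pixels: list[int], k: int = 2) -> str:
    """Predict each pixel from its left neighbour and Golomb-Rice-code
    the mapped prediction error. Illustrative 1-D sketch."""
    bits, prev = [], 0
    for p in pixels:
        err = zigzag(p - prev)                    # small when prediction is good
        q, r = err >> k, err & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(r, f"0{k}b"))
        prev = p
    return "".join(bits)
```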

12.
A skip-black-block lossless compression algorithm for dithered halftone images
This paper proposes a skip-black-block lossless compression method for ordered-dither halftone images. Based on an analysis of the characteristics of dithered halftone images, the halftone image is divided into blocks and preprocessed with inter-block XOR, turning it into a binary image of large black regions scattered with isolated white dots, which is then losslessly compressed with skip-black-block coding. In this coding, non-black blocks containing exactly one white pixel are assigned a specific short codeword, while the remaining non-black blocks are coded directly. Experimental results show that the improved skip-black-block coding compresses non-black blocks effectively, and that the new algorithm combines high compression efficiency with low time and space complexity.
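A toy sketch of the block coding after the XOR preprocessing described above; the codeword assignments here are illustrative, not the paper's:

```python
def skip_black_blocks(bits: list[int], block: int = 8) -> str:
    """Toy skip-black-block coder for a binary stream that is mostly
    black (1) after preprocessing. Hypothetical codewords:
      '0'           all-black block (skipped with one bit)
      '10' + pos    block with exactly one white pixel (3-bit position)
      '11' + raw    any other block, sent verbatim
    """
    out = []
    for i in range(0, len(bits), block):
        chunk = bits[i:i + block]
        whites = [j for j, b in enumerate(chunk) if b == 0]
        if not whites:
            out.append("0")
        elif len(whites) == 1 and len(chunk) == block:
            out.append("10" + format(whites[0], "03b"))
        else:
            out.append("11" + "".join(map(str, chunk)))
    return "".join(out)
```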

13.
解成俊  向阳 《计算机应用》2007,27(9):2110-2113
This paper proposes a new piecewise reversible matrix transform to remove inter-spectral redundancy, combined with a CDF(2,2) DWT to remove spatial redundancy; the decorrelation is better than that of the 3D CDF(2,2) DWT, and an improved EBCOT algorithm is used for coding. Experimental results show lossless compression performance far better than JPEG-LS, WinZip, ARJ, DPCM, a Chinese Academy of Sciences group's method, NMST and MST. On the JPL Canal test image, for example, the average lossless compression ratio is higher than those algorithms by about 43%, 38%, 36%, 31%, 17%, 13% and 10%, respectively. The algorithm is fast and suitable for hardware implementation.

14.
Lossless (also called reversible) watermarking is mainly used for integrity authentication of data such as images. One of its characteristics is that, when the watermarked image has not been modified, the embedded watermark can be removed completely from it, so that the original image can be recovered losslessly at the receiver. This paper proposes a new lossless watermarking algorithm based on a reversible integer transform, which embeds the authentication information in the redundancy existing between the means and differences of image pixel pairs after the integer transform. Existing methods in the literature exploit only the difference redundancy, whereas our algorithm further considers the correlation between means and differences. Compressing the correlated bit stream derived from both with arithmetic coding yields more embedding capacity and higher watermarked-image quality.
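The mean/difference pair the abstract builds on is the standard reversible integer transform below; the truncation of the integer division cancels exactly on inversion:

```python
def forward(a: int, b: int) -> tuple[int, int]:
    """Reversible integer transform of a pixel pair: floor mean and difference."""
    return (a + b) // 2, a - b

def inverse(mean: int, diff: int) -> tuple[int, int]:
    a = mean + (diff + 1) // 2        # floor division, also for negative diff
    return a, a - diff

assert inverse(*forward(5, 2)) == (5, 2)
assert inverse(*forward(2, 5)) == (2, 5)
```

Watermark bits are typically hidden in the difference values; this paper's point is that the joint redundancy of means and differences leaves more room than the differences alone.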

15.
Array acoustic (sonic) logging is an important logging method, but the large amount of data it produces, combined with the limited transmission speed of the wireline cable, reduces logging efficiency. To improve efficiency, the raw acoustic logging data must be compressed. Based on an analysis of the characteristics of acoustic signals, a compression method combining DCT, appropriate quantization and arithmetic coding is proposed, and a series of improvements are made according to tests on real waveforms. Tests on 660 groups of real logging data give an average compression ratio of 4.56 for 672-point monopole data and 9.18 for 1200-point dipole data, with a root-mean-square error of about 2%, confirming the effectiveness of the method.
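A toy sketch of the DCT-plus-quantization front end (SciPy's DCT; the arithmetic coding stage and the paper's waveform-specific tuning are omitted):

```python
import numpy as np
from scipy.fftpack import dct, idct

def quantize_trace(samples, step=8.0):
    """DCT a sonic-log trace and uniformly quantize the coefficients;
    the mostly-zero integer output would then go to an arithmetic coder."""
    coeffs = dct(np.asarray(samples, dtype=float), norm="ortho")
    return np.round(coeffs / step).astype(int)

def reconstruct_trace(q, step=8.0):
    """Dequantize and inverse-DCT; error is bounded by the step size."""
    return idct(np.asarray(q, dtype=float) * step, norm="ortho")
```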

16.
A distributed optimal wavelet compression algorithm for sensor networks
This paper studies the wavelet transform in sensor networks and proposes a distributed data compression algorithm based on an optimal wavelet transform. The main contributions are: (1) a distributed wavelet transform based on hybrid decomposition, which uses the computing capability of the nodes to reduce the communication cost of exchanging data between nodes to produce wavelet coefficients; (2) an adaptive wavelet transform, in which the optimal number of transform levels is determined adaptively from the compression gain of the wavelet transform and the network cost it incurs. Simulations show that, compared with existing wavelet compression algorithms and with the non-distributed approach, the proposed algorithm achieves better network performance.
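The building block of such schemes is a lifting-based wavelet step that neighbouring nodes can compute by exchanging single values; below is the integer Haar case for illustration (the wavelets used in practice have longer filters, and the distributed scheduling is the paper's contribution):

```python
def haar_lifting(signal: list[int]) -> tuple[list[int], list[int]]:
    """One level of integer Haar lifting (assumes even length):
    the predict step gives details, the update step the approximation."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]            # predict
    approx = [e + d // 2 for e, d in zip(even, detail)]    # update
    return approx, detail
```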

17.
Existing web publication of spatial vector data based on Scalable Vector Graphics (SVG) often suffers from long network delays and high browser resource consumption. Based on the window-viewport transformation principle, this paper proposes a visually lossless compression algorithm for SVG spatial vector data. The algorithm resets the SVG map extent and transforms the original spatial data according to the new extent, thereby reducing the amount of spatial data. Experimental results show that the algorithm outperforms the commonly used Douglas-Peucker algorithm, improving the compression ratio while also reducing computation time.
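A sketch of the window-viewport reduction in its generic form: map coordinates to the viewport pixel grid and drop consecutive points that collapse onto the same pixel, which is what makes the result visually lossless at that extent:

```python
def reduce_points(points, window, viewport_px):
    """points: (x, y) in map units; window: (xmin, ymin, xmax, ymax)
    map extent; viewport_px: (width, height) of the display in pixels."""
    xmin, ymin, xmax, ymax = window
    width, height = viewport_px
    sx, sy = width / (xmax - xmin), height / (ymax - ymin)
    reduced, last = [], None
    for x, y in points:
        pixel = (round((x - xmin) * sx), round((y - ymin) * sy))
        if pixel != last:                 # visually coincident points are dropped
            reduced.append(pixel)
            last = pixel
    return reduced
```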

18.
The limitations of frequency-domain filtering methods have motivated the development of alternative techniques, in which a filter is applied to a time–frequency distribution instead of the Fourier spectrum. One such distribution is the S-transform, a modified short-time Fourier transform whose window scales with frequency, as in wavelets. Recently it has been shown that the S-transform's local spectra have time-domain equivalents. Since each of these is associated with a particular window position on the time axis, collectively they give a time–time distribution. This distribution, called the TT-transform, exhibits differential concentration of different frequency components, with higher frequencies being more strongly concentrated around the localization position than lower frequencies. This leads to the idea of filtering on the time–time plane, in addition to the time–frequency plane. Examples of time–frequency filtering and time–time filtering are presented.
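A compact sketch of the usual FFT-based discretization of the S-transform (one common formulation; conventions vary across the literature):

```python
import numpy as np

def s_transform(x):
    """Discrete S-transform: each frequency voice is the inverse FFT of
    the shifted spectrum windowed by a Gaussian whose width scales with
    frequency. Rows are frequencies 0..N//2, columns are time."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.fft.fft(x)
    alpha = np.fft.fftfreq(N) * N              # integer frequencies, symmetric
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0] = np.mean(x)                          # zero-frequency voice is the mean
    for f in range(1, N // 2 + 1):
        gauss = np.exp(-2 * np.pi**2 * alpha**2 / f**2)
        S[f] = np.fft.ifft(np.roll(X, -f) * gauss)
    return S
```

The TT-transform described above then collects the time-domain equivalents of these local spectra, giving a time-time rather than time-frequency distribution.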

19.
This paper proposes a coding compression method similar to dictionary indexing. The scheme marks test-data blocks that are compatible or inversely compatible with a reference data block with "0" or "1", respectively, and uses a fixed-length field to record the number of data blocks associated with the reference block. Analysis shows that the decompression circuit is simple in structure and requires very little hardware overhead, and experimental results on the ISCAS'89 benchmark circuits show that the coding method compresses test data effectively.
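A toy sketch of the marking scheme described above; the framing and field widths here are illustrative, not the paper's exact format:

```python
def encode_test_blocks(blocks: list[str], count_bits: int = 4) -> str:
    """Mark each block '0' if equal to the current reference block and
    '1' if equal to its bitwise complement; a fixed-length count says
    how many blocks the reference covers. Other blocks start a new run."""
    def invert(b: str) -> str:
        return "".join("1" if c == "0" else "0" for c in b)

    out, i = [], 0
    while i < len(blocks):
        ref, flags = blocks[i], []
        while i < len(blocks) and len(flags) < (1 << count_bits) - 1:
            if blocks[i] == ref:
                flags.append("0")
            elif blocks[i] == invert(ref):
                flags.append("1")
            else:
                break
            i += 1
        out.append(ref + format(len(flags), f"0{count_bits}b") + "".join(flags))
    return "".join(out)
```

The decompressor only needs a reference register, an inverter and a counter, which matches the abstract's claim of low hardware overhead.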

20.