Similar Documents
A total of 20 similar documents were found.
1.
International Journal of Computer Mathematics, 2012, 89(10): 1213-1222
A recent development in the data compression area is the Burrows–Wheeler compression algorithm (BWCA). Introduced by Burrows and Wheeler, the BWCA achieves compression ratios close to those of the best compression techniques, such as prediction by partial matching (PPM), but with faster execution. In this paper, we analyze the combinatorial properties of the Burrows–Wheeler transformation (BWT), which is a block-sorting transformation and an essential part of the BWCA, introduce a new transformation, and relate the new transformation to the BWT in terms of multiset permutations.
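To make the block-sorting transform concrete, here is a minimal sketch of the forward BWT (a naive rotation sort, not the construction analyzed in the paper), assuming a sentinel character that sorts before every other symbol:

```python
def bwt(text: str, sentinel: str = "$") -> str:
    """Naive Burrows-Wheeler transform: sort all rotations, output the last column."""
    s = text + sentinel                                        # sentinel marks the original end of the text
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))   # block-sorting step
    return "".join(rot[-1] for rot in rotations)

print(bwt("banana"))  # 'annb$aa' -- equal symbols tend to be grouped together
```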

2.
Sebastian Deorowicz, Software, 2000, 30(13): 1465-1483
In 1994 Burrows and Wheeler presented a new algorithm for lossless data compression. The compression ratio that can be achieved with their algorithm is comparable to that of the best known algorithms, while its complexity is relatively low. In this paper we explain the internals of this algorithm and discuss the various modifications of it that have been presented so far. We then propose new improvements to its effectiveness, which allow us to obtain a compression ratio of 2.271 bpc on the Calgary Corpus files, the best result in the class of Burrows–Wheeler transform based algorithms. Copyright © 2000 John Wiley & Sons, Ltd.

3.
Jürgen Abel, Software, 2007, 37(3): 247-265
The stage after the Burrows–Wheeler transform (BWT) has a key function inside the Burrows–Wheeler compression algorithm, as it transforms the BWT output from a local context into a global context. This paper presents the Incremental Frequency Count stage, a post-BWT stage. The new stage is paired with a run-length encoding stage between the BWT and the entropy coding stage of the algorithm. It offers high throughput, similar to a Move-To-Front stage, while at the same time achieving good compression rates like the strong but slow Weighted Frequency Count stage. The properties of the Incremental Frequency Count stage are compared to those of the Move-To-Front and Weighted Frequency Count stages in terms of compression rates and speeds on the Calgary and large Canterbury corpora. Copyright © 2006 John Wiley & Sons, Ltd.

4.
In this paper, we present a new technique for worst-case analysis of compression algorithms that are based on the Burrows–Wheeler transform. We mainly deal with the algorithm proposed by Burrows and Wheeler in their first paper on the subject [M. Burrows, D.J. Wheeler, A block sorting lossless data compression algorithm, Technical Report 124, Digital Equipment Corporation, Palo Alto, California, 1994], called bw0. This algorithm consists of the following three essential steps: (1) obtain the Burrows–Wheeler transform of the text, (2) convert the transform into a sequence of integers using the move-to-front algorithm, and (3) encode the integers using an arithmetic coder or any order-0 encoder (possibly with run-length encoding).
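As an illustration of step (2) above, here is a minimal Move-to-Front sketch (a generic implementation, not the authors' code) that turns the BWT output into small integers suitable for an order-0 coder:

```python
def move_to_front(data: bytes) -> list[int]:
    """Encode each byte as its position in a recency list, then move it to the front."""
    table = list(range(256))          # recency list over the byte alphabet
    out = []
    for byte in data:
        idx = table.index(byte)       # emitted integer = current rank of the symbol
        out.append(idx)
        table.pop(idx)
        table.insert(0, byte)         # most recently seen symbol moves to the front
    return out

# Runs of identical symbols in the BWT output become runs of zeros:
print(move_to_front(b"aaabbb"))  # [97, 0, 0, 98, 0, 0]
```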

5.

In this paper, an approach is presented to produce compressed audio without losing any information. The proposed scheme is built with the help of dynamic cluster quantization followed by the Burrows–Wheeler Transform (BWT) and Huffman coding. The encoding algorithm has been designed in two phases, i.e., dynamic cluster selection (of the sampled audio) followed by dynamic bit selection for determining the quantization level of each cluster. The quantization level of each cluster is selected dynamically based on the mean square quantization error (MSQE). The bit stream is further compressed by applying the Burrows–Wheeler Transform (BWT) and Huffman coding, respectively. Experimental results are evaluated with current state-of-the-art audio quality measures, including statistical parameters (compression ratio, space savings, SNR, PSNR) along with other parameters (encoding time, decoding time, Mean Opinion Score (MOS), and entropy), and compared with other existing techniques.


6.
The key design challenge in a capsule endoscopic system is to reduce area and power consumption while maintaining acceptable video quality. In this paper, a subsample-based image compressor for such endoscopic applications is presented. The algorithm is developed around some special features of endoscopic images. It consists of differential pulse coded modulation (DPCM) followed by Golomb-Rice coding. Based on the nature of endoscopic images, several subsampling schemes are applied to the chrominance components. The scheme is particularly suitable to work with any commercial low-power image sensor that outputs image pixels in a raster-scan fashion, eliminating the need for a memory buffer as well as temporary storage (as needed in transform coding schemes). An image corner clipping algorithm is also presented. The reconstructed images have been verified by medical doctors for acceptability. The proposed algorithm has very low complexity and is suitable for VLSI implementation. Compared to other transform-based algorithms targeted at capsule endoscopy, the proposed raster-scan-based scheme performs very strongly, with a compression ratio of 80% and a very high reconstruction PSNR (over 45 dB).
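For orientation, a hedged sketch of the general kind of pipeline described above, DPCM over a raster-scan line followed by Golomb-Rice coding; this is an illustration only, not the authors' design, and the Rice parameter k here is chosen arbitrarily:

```python
def golomb_rice(value: int, k: int) -> str:
    """Golomb-Rice code: unary quotient followed by a k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def dpcm_rice_encode(pixels: list[int], k: int = 2) -> str:
    """Encode a raster-scan line: predict each pixel from its left neighbour."""
    bits, prev = [], 0
    for p in pixels:
        residual = p - prev                                            # DPCM: difference from the prediction
        prev = p
        mapped = 2 * residual if residual >= 0 else -2 * residual - 1  # map signed residual to non-negative
        bits.append(golomb_rice(mapped, k))
    return "".join(bits)

print(dpcm_rice_encode([10, 12, 12, 9]))
```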

7.
Jürgen Abel, Software, 2010, 40(9): 751-777
The lossless Burrows–Wheeler compression algorithm has received considerable attention over recent years for both its simplicity and its effectiveness. It is based on a permutation of the input sequence, the Burrows–Wheeler transformation (BWT), which groups symbols with a similar context close together. In the original version, this permutation was followed by a Move-To-Front transformation and a final entropy coding stage. Later versions used different algorithms after the BWT, since these subsequent stages have a significant influence on the compression rate. This paper describes different algorithms and improvements for these post-BWT stages, including a new context-based approach. The compression rates are presented together with compression and decompression times on the Calgary corpus, the Canterbury corpus, the large Canterbury corpus, and the Lukas 2D 16-bit medical image corpus. Copyright © 2010 John Wiley & Sons, Ltd.

8.
Peter Fenwick, Software, 2002, 32(13): 1307-1316
The final coder in Burrows–Wheeler compression is usually either an adaptive Huffman coder (for speed) or a complex of arithmetic coders for better compression. This article describes the use of conventional pre-defined variable-length codes, or universal codes, and shows that they too can give excellent compression. The paper also describes a 'sticky Move-to-Front' modification that gives a useful improvement in compression for most files. Copyright © 2002 John Wiley & Sons, Ltd.
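As an example of a pre-defined universal code of the kind referred to above (a generic Elias gamma coder; not necessarily the specific codes evaluated in the paper):

```python
def elias_gamma(n: int) -> str:
    """Elias gamma code for n >= 1: (length - 1) zeros followed by the binary form of n."""
    binary = format(n, "b")
    return "0" * (len(binary) - 1) + binary

# Small values (frequent MTF ranks after the BWT) get short codewords:
for n in range(1, 6):
    print(n, elias_gamma(n))  # 1 -> '1', 2 -> '010', 3 -> '011', 4 -> '00100', 5 -> '00101'
```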

9.
In this paper we focus our attention on the second-step algorithms of the Burrows–Wheeler compression algorithm, where the original version uses the Move-To-Front transform. We discuss many of the replacements for it presented so far and compare the compression results obtained with them. We then propose a new algorithm that yields a better compression ratio than the previous ones. Copyright © 2001 John Wiley & Sons, Ltd.

10.
We introduce a combinatorial optimization framework that naturally induces a class of optimal word permutations with respect to a suitably defined cost function taking into account various measures of relatedness between words. The Burrows and Wheeler transform (bwt) (cf. [M. Burrows, D. Wheeler, A block sorting lossless data compression algorithm, Technical Report 124, Digital Equipment Corporation, 1994]) and its analog for labelled trees (cf. [P. Ferragina, F. Luccio, G. Manzini, S. Muthukrishnan, Structuring labeled trees for optimal succinctness, and beyond, in: Proc. of the 45th Annual IEEE Symposium on Foundations of Computer Science, 2005, pp. 198–207]) are special cases in this class. We also show that the class of optimal word permutations defined here is identical to the one identified by Ferragina et al. for compression boosting [P. Ferragina, R. Giancarlo, G. Manzini, M. Sciortino, Boosting textual compression in optimal linear time, Journal of the ACM 52 (2005) 688–713]; therefore, they are all highly compressible. Using techniques from Combinatorics on Words, we also provide a fast method to compute the bwt without using any end-of-string symbol. We further investigate more general classes of optimal word permutations, where the relatedness of symbols may be measured by functions more complex than context length. For this general problem we provide an instance that is MAX-SNP hard, and therefore unlikely to be solved or approximated efficiently. The results presented here indicate that a key feature of the Burrows and Wheeler transform seems to be, besides compressibility, the existence of efficient algorithms for its computation and inversion.

11.
12.
Secure online communication is a necessity in today's digital world. This paper proposes a novel reversible data hiding technique based on side match vector quantization (SMVQ). The proposed scheme classifies SMVQ indices as Case 1 or Case 2 based on the value of the first state codeword's side match distortion (SMD) and a predefined threshold t. The scheme uses this classification to switch between compression codes designed for Case 1 and Case 2 SMVQ indices. The length of these compression codes is controlled by a second parameter. Thus, with the selection of appropriate values for this parameter and t, the proposed scheme achieves good compression, creating space in which to embed secret information. The embedding algorithm can embed n secret bits into each SMVQ index, where n = 1, 2, 3, or 4. The experimental results show that the proposed scheme obtains embedding rates of 1, 2, 3, or 4 bits per index (bpi) at average bit rates of 0.340, 0.403, 0.465, or 0.528 bits per pixel (bpp) for a codebook of size 256. This improves on the performance of recent VQ- and SMVQ-based data hiding schemes.

13.
Hiding a message in compression codes can reduce transmission costs and simultaneously make the transmission more secure. In this paper, we propose a high-performance, data-hiding Lempel–Ziv–Welch (HPDH-LZW) scheme, which reversibly embeds data in LZW compression codes by modifying the values of the codes: depending on the data to be embedded, an LZW code either remains unchanged or is changed to its original value plus the LZW dictionary size. Compared to other information-hiding schemes based on LZW compression codes, the proposed scheme achieves a better hiding capacity by increasing the number of symbols available to hide secrets, and it also achieves faster hiding and extracting speeds due to lower computation requirements. Our experimental results confirm both the high embedding capacity of the proposed scheme and its high speed when hiding and extracting data.
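A toy sketch of the embedding rule as described in the abstract, i.e. shifting a code by the dictionary size to signal an embedded 1 bit; the fixed dictionary-size constant and the function names are illustrative assumptions, not taken from the paper:

```python
DICT_SIZE = 4096  # assumed (fixed) LZW dictionary size; real LZW dictionaries grow

def embed_bits(lzw_codes: list[int], secret_bits: list[int]) -> list[int]:
    """Leave a code unchanged for a 0 bit; add the dictionary size for a 1 bit."""
    return [code + DICT_SIZE if bit else code
            for code, bit in zip(lzw_codes, secret_bits)]

def extract_bits(stego_codes: list[int]) -> tuple[list[int], list[int]]:
    """Recover the original LZW codes and the hidden bits (the embedding is reversible)."""
    codes = [c - DICT_SIZE if c >= DICT_SIZE else c for c in stego_codes]
    bits = [1 if c >= DICT_SIZE else 0 for c in stego_codes]
    return codes, bits

codes, bits = extract_bits(embed_bits([65, 256, 300], [1, 0, 1]))
print(codes, bits)  # [65, 256, 300] [1, 0, 1]
```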

14.
In the framework of maximum-likelihood detection for image watermarking schemes, the conventional Generalized Gaussian Distribution (GGD), Cauchy, and Student's t distributions often fail to model pulse-like distributions, such as the distribution of Discrete Cosine Transform (DCT) coefficients. Meanwhile, DCT DC coefficients are often neglected in image watermarking schemes. In this paper, an improved full-band image watermarking algorithm that uses the Weibull distribution to model the DCT AC and DC coefficients is proposed. Experiments indicate that, compared with other popular distributions such as the GGD, the Weibull model gives a closer fit to the distribution of AC coefficients in the absolute-value domain, with a smaller Kullback-Leibler (KL) divergence and a lower Mean Square Error (MSE). The watermarking scheme with the Weibull model on the DCT AC coefficients (Weibull-AC) exhibits strong robustness under scaling and median filtering attacks. The watermarking scheme with the Weibull model on the DCT DC coefficients (Weibull-DC) yields better detection accuracy for bright and more detailed images. Combining these two advantages, the proposed Weibull-based full-band watermarking in the DCT domain (Weibull-FB) further improves robustness under JPEG compression attacks and achieves a 10.47% overall increase in detection accuracy compared with the baseline system, while maintaining good invisibility in terms of structural similarity (SSIM).

15.
Effect of Severe Image Compression on Iris Recognition Performance
We investigate three schemes for severe compression of iris images in order to assess what their impact would be on the recognition performance of the algorithms deployed today for identifying people by this biometric feature. Currently, standard iris images are 600 times larger than the IrisCode templates computed from them for database storage and search, but it is administratively desired that iris data should be stored, transmitted, and embedded in media in the form of images rather than as templates computed with proprietary algorithms. To reconcile that goal with its implications for bandwidth and storage, we present schemes that combine region-of-interest isolation with JPEG and JPEG2000 compression at severe levels, and we test them using a publicly available database of iris images. We show that it is possible to compress iris images to as little as 2000 bytes with minimal impact on recognition performance. Only some 2% to 3% of the bits in the IrisCode templates are changed by such severe image compression, and we calculate the entropy per code bit introduced by each compression scheme. Error tradeoff curve metrics document very good recognition performance despite this reduction in data size by a net factor of 150, approaching a convergence of image data size and template size.

16.
In this paper, we deal with those applications of textual image compression where a high compression ratio and maintaining or improving the visual quality and readability of the compressed images are the main concerns. In textual images, most of the information lies in the edge regions; therefore, the compression problem can be studied in the framework of region-of-interest (ROI) coding. In this paper, the Set Partitioning in Hierarchical Trees (SPIHT) coder is used in the framework of ROI coding, along with some image enhancement techniques, in order to remove the leakage effect that occurs in wavelet-based low-bit-rate compression. We evaluated the compression performance of the proposed method with respect to several qualitative and quantitative measures. The qualitative measures include the averaged mean opinion score (MOS) curve along with example outputs under different conditions. The quantitative measures include two proposed modified PSNR measures and the conventional one. Comparing the results of the proposed method with those of three conventional approaches, DjVu, JPEG2000, and SPIHT coding, showed that the proposed compression method considerably outperforms the others, especially in the qualitative evaluation. The proposed method improved the MOS by 20% and 30%, on average, for high- and low-contrast textual images, respectively. In terms of the modified and conventional PSNR measures, the proposed method outperformed DjVu and JPEG2000 by up to 0.4 dB for high-contrast textual images at low bit rates. In addition, compressing the high-contrast images using the proposed ROI technique, compared to not using this technique, improved the average textual PSNR measure by up to 0.5 dB at low bit rates.

17.
We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream also documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. In practice, this means that our coder holds only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes, even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra "blade" mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

18.
Devices that use a single sensor to capture colour images are cheaper because of the high cost of Charge-Coupled Device (CCD) and Complementary Metal-Oxide Semiconductor (CMOS) sensors. Single-sensor devices use a Colour Filter Array (CFA) to sample one colour band at each pixel location. A demosaicking process is applied to interpolate the two missing colours from the surrounding pixels. Typically, compression is performed on the demosaicked images, which may not be efficient because the different colour planes are compressed individually. This work investigates compressing the raw data before demosaicking and performing demosaicking later to reconstruct the R, G, and B bands. A novel Vector Quantization (VQ) technique for encoding the wavelet-decomposed image using a Modified Artificial Bee Colony (ABC) optimization algorithm is proposed. The proposed technique is compared with Genetic Algorithm based VQ, ABC based quantization, and the standard LBG and Lloyd algorithms. Results show a higher Peak Signal-to-Noise Ratio (PSNR), indicating better reconstruction.

19.
Existing mesh compression algorithms need to traverse the mesh model in a particular order before encoding vertex and connectivity information, and they must build temporary data structures that support access to the topology, which makes them ill-suited to large mesh models. This paper fundamentally abandons the traditional organization of mesh data: it presents a new organization in which mesh vertices and connectivity information are interleaved, and proposes a progressive encoding algorithm that processes the mesh data in the order it is read in. Experimental results show that the algorithm outperforms traditional compression methods in both processing speed and memory consumption.

20.
This paper presents a lossless data hiding method for absolute moment block truncation coding (AMBTC) images, which are compressed grayscale images. It is not easy to hide secret data in an AMBTC-compressed image because it is composed of bit planes and is therefore very sensitive to changes in individual pixels. Nevertheless, to improve the hiding capacity, we present an efficient extension of the histogram modification technique that counts the coefficients of the bit planes in each 4 × 4 block. In addition, our proposed scheme has low complexity and achieves a high embedding capacity with good perceptual quality compared to prior art. In our experiments, we verified the proposed data hiding scheme through comparison with previous methods.
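For background, a toy sketch of classic histogram-shifting embedding on a list of integer values; it illustrates the general histogram modification idea only, not the authors' bit-plane counting variant for AMBTC blocks:

```python
from collections import Counter

def histogram_shift_embed(values: list[int], secret_bits: list[int]) -> tuple[list[int], int]:
    """Embed at the histogram peak: a peak value stays put for bit 0 and becomes peak+1 for bit 1.
    Values above the peak are shifted up by one to free the peak+1 bin (overflow is ignored here)."""
    peak = Counter(values).most_common(1)[0][0]   # most frequent value = embedding position
    bits = iter(secret_bits)
    stego = []
    for v in values:
        if v > peak:
            stego.append(v + 1)                   # shift to create an empty bin at peak+1
        elif v == peak:
            stego.append(peak + next(bits, 0))    # 0 -> peak, 1 -> peak+1
        else:
            stego.append(v)
    return stego, peak                            # the peak is needed again for extraction

stego, peak = histogram_shift_embed([3, 5, 5, 7, 5, 2], [1, 0, 1])
print(stego, peak)  # [3, 6, 5, 8, 6, 2] 5
```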
