Similar Literature
 Found 20 similar documents (search time: 405 ms)
1.
Jürgen Abel, Software, 2007, 37(3):247-265
The stage after the Burrows–Wheeler transform (BWT) has a key function inside the Burrows–Wheeler compression algorithm, as it transforms the BWT output from a local context into a global context. This paper presents the Incremental Frequency Count stage, a post-BWT stage. The new stage is paired with a run-length encoding stage between the BWT and the entropy coding stage of the algorithm. It offers high throughput similar to that of a Move-To-Front stage and, at the same time, good compression rates like the strong but slow Weighted Frequency Count stage. The properties of the Incremental Frequency Count stage are compared to those of the Move-To-Front and Weighted Frequency Count stages by their compression rates and speeds on the Calgary and large Canterbury corpora. Copyright © 2006 John Wiley & Sons, Ltd.
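For readers unfamiliar with the Move-To-Front stage that the Incremental Frequency Count stage is benchmarked against, the following is a minimal Python sketch of MTF coding; it is illustrative only and is not Abel's implementation.

```python
# Minimal sketch of a Move-To-Front (MTF) stage, the fast baseline the paper
# compares the Incremental Frequency Count stage against. Not the paper's code.

def mtf_encode(data: bytes) -> list[int]:
    """Replace each symbol by its current index in a self-organizing list."""
    table = list(range(256))          # initial symbol ordering
    out = []
    for b in data:
        idx = table.index(b)          # rank of the symbol in the list
        out.append(idx)
        table.pop(idx)                # move the symbol to the front
        table.insert(0, b)
    return out

def mtf_decode(ranks: list[int]) -> bytes:
    table = list(range(256))
    out = bytearray()
    for idx in ranks:
        b = table.pop(idx)            # index back to symbol
        out.append(b)
        table.insert(0, b)
    return bytes(out)

assert mtf_decode(mtf_encode(b"bananaaa")) == b"bananaaa"
```

After the BWT, runs of identical symbols become runs of zeros under MTF, which is what the subsequent run-length and entropy coding stages exploit.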

2.
International Journal of Computer Mathematics, 2012, 89(10):1213-1222
A recent development in the data compression area is the Burrows–Wheeler compression algorithm (BWCA). Introduced by Burrows and Wheeler, the BWCA achieves compression ratios close to those of the best compression techniques, such as prediction by partial matching (PPM), but with a faster execution speed. In this paper, we analyze the combinatorial properties of the Burrows–Wheeler transformation (BWT), which is a block-sorting transformation and an essential part of the BWCA, introduce a new transformation, and relate the new transformation to the BWT in terms of multiset permutations.

3.
In this paper, we present a new technique for worst-case analysis of compression algorithms which are based on the Burrows–Wheeler Transform. We mainly deal with the algorithm proposed by Burrows and Wheeler in their first paper on the subject [M. Burrows, D.J. Wheeler, A block sorting lossless data compression algorithm, Technical Report 124, Digital Equipment Corporation, Palo Alto, California, 1994], called bw0. This algorithm consists of the following three essential steps: (1) Obtain the Burrows–Wheeler Transform of the text, (2) Convert the transform into a sequence of integers using the move-to-front algorithm, (3) Encode the integers using Arithmetic code or any order-0 encoding (possibly with run-length encoding).  相似文献   
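Step (1) of bw0 can be illustrated with a deliberately naive transform that materializes and sorts all cyclic rotations; production implementations use suffix arrays instead. The '\x00' sentinel is our own addition to make the example unambiguous.

```python
# Step (1) of bw0 as a naive O(n^2 log n) sketch: sort all cyclic rotations
# of the text and keep the last column of the sorted rotation matrix. The
# '\x00' end-of-string sentinel is an assumption, not part of bw0 itself.

def bwt(text: str) -> str:
    s = text + "\x00"                          # unique sentinel, sorts first
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(repr(bwt("banana")))   # 'annb\x00aa': equal-context symbols clustered
```

Steps (2) and (3) then run move-to-front and an order-0 coder over this output, as the abstract describes.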

4.
5.
Jürgen Abel, Software, 2010, 40(9):751-777
The lossless Burrows–Wheeler compression algorithm has received considerable attention over recent years for both its simplicity and effectiveness. It is based on a permutation of the input sequence—the Burrows–Wheeler transformation (BWT)—which groups symbols with a similar context close together. In the original version, this permutation was followed by a Move-To-Front transformation and a final entropy coding stage. Later versions used different algorithms after the BWT, since the stages following it have a significant influence on the compression rate. This paper describes different algorithms and improvements for these post-BWT stages, including a new context-based approach. Compression rates are presented together with compression and decompression times on the Calgary corpus, the Canterbury corpus, the large Canterbury corpus and the Lukas 2D 16-bit medical image corpus. Copyright © 2010 John Wiley & Sons, Ltd.

6.
This paper presents a lossy compression technique for encrypted images using the Discrete Wavelet Transform (DWT), Singular Value Decomposition (SVD) and Huffman coding. The core idea of the proposed technique lies in the selection of significant and less significant coefficients in the wavelet domain. Significant and less significant coefficients are encrypted using a pseudo-random number sequence and coefficient permutation, respectively. Furthermore, the encrypted significant data is compressed by quantization and entropy coding, while the less significant encrypted data is efficiently compressed by discarding irrelevant information using the SVD and Huffman coding techniques. At the receiver side, a reliable decompression and decryption technique reconstructs the original image content from the compressed bit streams and secret keys. The performance of the proposed technique is evaluated using parameters such as Compression Ratio (CR) and Peak Signal-to-Noise Ratio (PSNR). Experimental results demonstrate the effectiveness of the proposed work over prior work on compression of encrypted images, and its compression performance is comparable to state-of-the-art work on compression of unencrypted images, i.e., the JPEG standard.
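As a hedged illustration of the SVD step used on the less significant data, the sketch below keeps only the top-k singular values of a block, discarding the remaining detail before entropy coding; the block size and k are arbitrary choices for the example, not the paper's parameters.

```python
# Rank-k truncation via SVD: keep only the k largest singular values, which
# discards irrelevant detail before entropy coding. k=2 and the 8x8 block
# are illustrative assumptions, not values from the paper.
import numpy as np

def svd_truncate(block: np.ndarray, k: int) -> np.ndarray:
    """Rank-k approximation of a 2-D block."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
approx = svd_truncate(block, k=2)   # only 2*8 + 2*8 + 2 = 34 numbers to code
print(np.abs(block - approx).mean())  # mean absolute reconstruction error
```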

7.
This research paper demonstrates the robustness of the Bi-level Burrows Wheeler Compression Algorithm (BBWCA) in terms of compression efficiency for different types of image data. The scheme was designed to take advantage of the increased inter-pixel redundancies resulting from a two-pass Burrows Wheeler Transformation (BWT) stage and the use of a Reversible Colour Transform (RCT). In this work, BBWCA was evaluated on raster map images, Colour Filter Array (CFA) images and 2-D ElectroEncephaloGraphy (EEG) data, and compared against benchmark schemes. Validation on various examples shows that BBWCA compresses 2-D data effectively and achieves marked improvement over existing methods in terms of compressed size. For CFA images, BBWCA is 18.8% better at compression than the High Efficiency Video Codec (HEVC) and 21.2% more effective than the LZ4X compressor. For the EEG data, BBWCA is 17% better than WINRK and 25.2% more effective than the NANOZIP compressor. However, for raster images, PAQ8 outperforms BBWCA by 11%. Among the schemes compared, the proposed scheme achieves the best overall performance and is well suited to both small and large image data. Parallelization reduces the execution time, particularly for large images: the parallelized BBWCA scheme is on average 31.92% faster than the non-parallelized BBWCA.

8.
The deep connection between the Burrows–Wheeler transform (BWT) and the so-called rank and select data structures for symbol sequences is the basis of most successful approaches to compressed text indexing. The rank of a symbol at a given position equals the number of times the symbol appears in the corresponding prefix of the sequence. Select is the inverse, retrieving the positions of the symbol's occurrences. It has been shown that improvements to rank/select algorithms, in combination with the BWT, translate into improved compressed text indexes.
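The rank and select queries defined above can be stated directly in code; the sketch below uses plain linear scans rather than the compressed data structures the literature is concerned with, so the definitions, not the performance, are the point.

```python
# Direct-from-definition rank/select sketch (no succinct data structures):
# rank(c, i) counts occurrences of c in the prefix of length i, and
# select(c, j) returns the position of the j-th occurrence (1-based j).

def rank(seq: str, c: str, i: int) -> int:
    return seq[:i].count(c)

def select(seq: str, c: str, j: int) -> int:
    count = 0
    for pos, sym in enumerate(seq):
        if sym == c:
            count += 1
            if count == j:
                return pos
    raise ValueError("fewer than j occurrences")

s = "abracadabra"
assert rank(s, "a", 4) == 2      # 'a' appears twice in the prefix "abra"
assert select(s, "a", 2) == 3    # the second 'a' sits at index 3
assert rank(s, "a", select(s, "a", 2) + 1) == 2   # select inverts rank
```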

9.
Peter Fenwick, Software, 2002, 32(13):1307-1316
The final coder in Burrows–Wheeler compression is usually either an adaptive Huffman coder (for speed) or a complex of arithmetic coders for better compression. This article describes the use of conventional pre-defined variable-length codes, or universal codes, and shows that they, too, can give excellent compression. The paper also describes a 'sticky Move-to-Front' modification which gives a useful improvement in compression for most files. Copyright © 2002 John Wiley & Sons, Ltd.
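As one concrete example of the pre-defined universal codes the paper evaluates, the sketch below implements the Elias gamma code, which assigns short codewords to the small integers that dominate post-BWT Move-to-Front output. Encoding rank+1 is our assumption, since gamma cannot represent zero.

```python
# Elias gamma code: floor(log2 n) zero bits followed by n in binary. Small
# integers (the common case after BWT + MTF) get short codewords. The +1
# offset below is an assumption, since gamma cannot encode 0 directly.

def elias_gamma(n: int) -> str:
    assert n >= 1
    binary = bin(n)[2:]                       # e.g. 5 -> '101'
    return "0" * (len(binary) - 1) + binary   # length prefix in unary

for rank_value in [0, 1, 2, 5]:
    print(rank_value, elias_gamma(rank_value + 1))
# 0 -> '1', 1 -> '010', 2 -> '011', 5 -> '00110'
```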

10.
The Burrows–Wheeler Transform (BWT) produces a permutation of a string X by sorting the n cyclic rotations of X into full lexicographical order and taking the last column of the resulting n×n matrix. The transformation is reversible in linear time. In this paper, we consider an alteration to the process, called the k-BWT, where rotations are only sorted to a depth k. We propose new approaches to the forward and reverse transform, and show that the methods are efficient in practice. More than a decade ago, two algorithms were independently discovered for reversing the k-BWT, and two recent algorithms have further lowered the bounds for the reverse transformation. We examine the practical performance of these reversal algorithms. We find that the original approach is the most efficient in practice, and we investigate new approaches, aimed at further speeding up reversal, which store precomputed context boundaries in the compressed file. By explicitly encoding the context boundaries, we present a reversal technique that is both efficient and effective. Finally, our study elucidates an inherently cache-friendly – and hitherto unobserved – behavior in the reverse k-BWT, which could lead to new applications of the k-BWT transform. In contrast to previous empirical studies, we show that the partial transform can be reversed significantly faster than the full transform, without significantly affecting compression effectiveness. Copyright © 2011 John Wiley & Sons, Ltd.
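A depth-limited sort is all that separates the forward k-BWT from the full transform, as the naive sketch below shows; resolving ties in original rotation order (via a stable sort) is the convention we assume here.

```python
# Forward k-BWT sketch: rotations are sorted only on their first k symbols,
# with ties kept in original rotation order (Python's sort is stable). This
# tie-breaking convention is our assumption. k >= len(s) recovers the full BWT.

def k_bwt(text: str, k: int) -> str:
    s = text + "\x00"                              # sentinel, an assumption
    rotations = [s[i:] + s[:i] for i in range(len(s))]
    rotations.sort(key=lambda rot: rot[:k])        # depth-limited comparison
    return "".join(rot[-1] for rot in rotations)

print(repr(k_bwt("banana", 2)))   # partial context grouping
print(repr(k_bwt("banana", 7)))   # equals the full transform here
```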

11.
The vision processor (VP) and vision controller (VC), two integrated products dedicated to video compression, are discussed. The chips implement the P×64, JPEG, and MPEG image compression standards. The VP forms the heart of the image compression system. It performs the discrete cosine transform (DCT), quantization, and motion estimation, as well as the inverse DCT and inverse quantization. The highly parallel, microcode-based processor performs all of the JPEG, MPEG, and P×64 algorithms. The VC smart microcontroller controls the compression process and provides the interface to the host system. It captures pixels from a video source, performs video preprocessing, supervises pixel compression by the VP, performs Huffman encoding, and passes the compressed data to the host over a buffered interface. It also takes compressed data from the host, performs Huffman decoding, supervises decompression via the VP, performs postprocessing, and generates digital pixel output for a video destination such as a monitor.

12.
Human motion capture (MoCap) data can be used for the animation of virtual human-like characters in distributed virtual reality applications and networked games. MoCap data compressed using the standard MPEG-4 encoding pipeline, comprising predictive encoding (and/or DCT decorrelation), quantization, and arithmetic/Huffman encoding, entails significant power consumption for decompression. In this paper, we propose a novel algorithm for the compression of MoCap data based on smart indexing, which exploits structural information derived from the skeletal virtual human model. The indexing algorithm can be fine-controlled using three predefined quality control parameters (QCPs). We demonstrate how an efficient combination of the three QCPs results in a lower network bandwidth requirement and reduced power consumption for data decompression at the client end when compared to standard MPEG-4 compression. Since the proposed algorithm exploits structural information from the skeletal virtual human model, it is observed to yield virtual human animation of visually acceptable quality upon decompression.

13.

Communication is the exchange of information (audio, video, text and images) from one end (the transmitter) to another (the receiver). When video data are compressed before transmission, compression reduces the bandwidth and memory required to transmit the video. Traditional video-transmission techniques suffer from drawbacks such as long compression times and low quality after compression. To overcome these drawbacks, the MPEG7-MBBMC (Modified Block Based Motion Compensated) technique is developed. The input video signals are collected from the dataset and split into three bands. A Discrete Wavelet Transform (DWT) is applied to each band, followed by quantization; the DWT and quantization steps are applied in the MPEG7 compression, which offers high compression factors. Next, an encoder converts the packets into small packets using the modified block based motion compensated (MBBMC) technique. Motion compensation establishes a correspondence between elements of nearby images in the video sequence. Forward Error Correction (FEC) is used to reduce distortion in the encoded video packets. Channel Pattern Integration (CPI) is then applied to find the best available channel, over which the encoded video packets are transmitted. At the receiver side, the error-correction code is applied to decode the video packets, and the decoded packets are reconstructed by decompression. This improves the quality of the video and should support further developments in the field of multimedia.


14.
Hui Zheng, Zhou Quan, Multimedia Tools and Applications, 2020, 79(33-34):24241-24264

In this paper, we propose an efficient steganography method for the compressed codes of absolute moment block truncation coding (AMBTC). Many recent related schemes focus on implementing reversible data hiding in the compressed AMBTC bit stream. However, the reconstructed AMBTC image is already lossy, and strict reversibility severely limits embedding capacity. Owing to the simplicity and regularity of AMBTC codes, an irreversible hiding scheme causes only very slight visual distortion in the reconstructed image in exchange for a significant improvement in embedding capacity. In the proposed scheme, the smoothness of each AMBTC compressed trio is first detected and then indicated by substituting the LSB of the high quantity level with a flag bit. For smooth trios, the difference between the two quantity levels is first encoded by Huffman coding and then concatenated with secret data to generate the modified low quantity level; all bits in the bit planes of smooth trios are substituted with secret data as well. For complex trios, secret bits are only embedded into the quantity levels, similarly to smooth trios except that the differences are encoded by Lloyd-Max quantization. Experimental results indicate that the proposed scheme outperforms prior methods in both imperceptibility and embedding capacity, which confirms the effectiveness and superiority of our work.

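For context on the trios manipulated above, the sketch below derives an AMBTC trio (low quantity level, high quantity level, bitmap) from a pixel block: the bitmap marks pixels at or above the block mean, and the two levels are the means of the two resulting pixel groups. This is generic AMBTC background, not the paper's embedding code.

```python
# Generic AMBTC trio computation (background for the embedding scheme above):
# a block is summarized by a bitmap plus low/high quantity levels, the means
# of the pixels below and at-or-above the block mean respectively.
import numpy as np

def ambtc_trio(block: np.ndarray):
    mean = block.mean()
    bitmap = block >= mean
    high = int(round(block[bitmap].mean()))   # high quantity level
    low = (int(round(block[~bitmap].mean()))  # low quantity level
           if (~bitmap).any() else high)      # guard: uniform block
    return low, high, bitmap

block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 208],
                  [ 9, 14, 199, 202],
                  [12, 10, 206, 204]])
low, high, bitmap = ambtc_trio(block)
print(low, high)   # a smooth trio would have a small high-low difference
```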

15.
Sebastian Deorowicz, Software, 2000, 30(13):1465-1483
In 1994, Burrows and Wheeler presented a new algorithm for lossless data compression. The compression ratio that can be achieved with their algorithm is comparable to that of the best known algorithms, whilst its complexity is relatively small. In this paper we explain the internals of this algorithm and discuss the various modifications that have been presented so far. We then propose new improvements to its effectiveness. These allow us to obtain a compression ratio of 2.271 bpc on the Calgary Corpus files, which is the best result in the class of Burrows–Wheeler transform based algorithms. Copyright © 2000 John Wiley & Sons, Ltd.

16.
Raw audio contains a significant amount of redundancy, which can be used for steganographic purposes, but in practice most audio is stored and transmitted in compressed formats. In this paper, we present an MPEG-1 Layer III (MP3) audio-codec-based steganographic method that embeds the secret message during encoding. The Huffman tables in the MP3 standard are first partitioned into three groups. The secret message is then embedded by a Huffman-table swapping strategy. Instead of fully decoding the stego-audio, the secret message can be extracted just by parsing the side information. The proposed method is designed within the restrictions of the MP3 compression standard, without any modifications or additions to the existing standard. Experimental results show that the proposed method provides much higher capacity than other approaches while satisfying the low-distortion and security requirements for steganography on MP3 audio.
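A toy sketch of the table-swapping idea follows, reduced to two interchangeable table groups signalling one bit per choice (the paper partitions the tables into three groups); the table indices and groups below are invented for illustration and do not correspond to the actual MP3 Huffman tables.

```python
# Toy illustration of Huffman-table swapping (NOT the MP3 bitstream itself):
# picking a table from group 0 or group 1 for a granule signals one secret
# bit. GROUP_0/GROUP_1 and the indices are hypothetical, for illustration.

GROUP_0 = [1, 2, 5, 7]   # hypothetical interchangeable table indices
GROUP_1 = [3, 4, 6, 8]

def embed_bit(preferred_table: int, secret_bit: int) -> int:
    """Swap the encoder's preferred table for one in the matching group."""
    group = GROUP_1 if secret_bit else GROUP_0
    return preferred_table if preferred_table in group else group[0]

def extract_bit(table_in_side_info: int) -> int:
    """Extraction only parses side information: which group was used?"""
    return 1 if table_in_side_info in GROUP_1 else 0

bits = [1, 0, 1]
tables = [embed_bit(5, b) for b in bits]
assert [extract_bit(t) for t in tables] == bits
```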

17.
The binary wavelet transform (BWT) has several distinct advantages over the real wavelet transform (RWT), such as conservation of the alphabet size of the wavelet coefficients, no quantization introduced during the transform, and the simple Boolean operations involved. Thus, fewer coding passes are needed and no sign bits are required in the compression of the transformed coefficients. However, the use of the BWT for embedded grayscale image compression is not well established. This paper proposes a novel context-based binary wavelet transform coding approach (CBWTC) that combines the BWT with a high-order context-based arithmetic coding scheme for embedded compression of grayscale images. In our CBWTC algorithm, the BWT is applied to decorrelate the linear correlations among image coefficients without expanding the alphabet size of symbols. To match the CBWTC algorithm, we employ the gray code representation (GCR) to remove the statistical dependencies among bi-level bitplane images and develop a combined arithmetic coding scheme, in which the three highpass BWT coefficients at the same location are combined to form an octave symbol and then encoded with a ternary arithmetic coder. In this way, the compression performance of our CBWTC algorithm is improved: it not only alleviates the degradation of predictability caused by the BWT, but also eliminates the correlation of BWT coefficients in the same-level subbands. The conditional context of the CBWTC is properly modeled by exploiting the characteristics of the BWT and by taking advantage of non-causal adaptive context modeling. Experimental results show that the average coding performance of the CBWTC is superior to that of state-of-the-art grayscale image coders, and it consistently outperforms the JBIG2 algorithm and other BWT-based binary coding techniques on a set of test images with different characteristics and resolutions.
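The gray code representation (GCR) mentioned above is the standard binary reflected Gray code, in which consecutive integers differ in exactly one bit, weakening the dependence between neighbouring bitplanes; a minimal sketch:

```python
# Binary reflected Gray code: adjacent integers differ in exactly one bit,
# which weakens the statistical dependence between neighbouring bitplanes
# before bitplane coding.

def to_gray(n: int) -> int:
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:            # fold the running XOR back down
        n ^= g
        g >>= 1
    return n

for v in range(8):
    print(v, format(to_gray(v), "03b"))      # 000 001 011 010 110 111 101 100
assert all(from_gray(to_gray(v)) == v for v in range(256))
```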

18.
He Yaling, Microcomputer Information, 2007, 23(36):289-290, 299
Because the intra-frame compression part of the MPEG system exhibits strong correlation, this paper first gives a brief overview of the characteristics and new techniques of the MPEG-2 video compression standard. It then introduces in detail the key algorithms used in MPEG-2 intra-frame coding, the DCT transform and quantization, and improves the most time-consuming of them as identified by analysis. An improved Loeffler algorithm is adopted for the fast one-dimensional DCT, and the MPEG quantization is also optimized by constructing an improved intra-frame quantization matrix that avoids division operations.
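The division-free quantization idea can be sketched with fixed-point arithmetic: precompute an integer reciprocal of each matrix entry once, so per-coefficient quantization becomes a multiply and a shift. The matrix values below are illustrative, not the paper's.

```python
# Division-free quantization via fixed-point reciprocals: precompute
# floor(2^16 / q) per matrix entry, so quantizing a coefficient is a
# multiply and a shift. The 2x2 matrix and coefficients are illustrative.
import numpy as np

Q = np.array([[16, 11], [12, 14]], dtype=np.int64)   # toy quantization matrix
RECIP = (1 << 16) // Q                               # one-time reciprocals

def quantize(coeffs: np.ndarray) -> np.ndarray:
    return (coeffs.astype(np.int64) * RECIP) >> 16   # no division at runtime

coeffs = np.array([[320, 57], [-120, 29]])
print(quantize(coeffs))   # matches coeffs // Q for these values; production
                          # code adds a rounding term to the reciprocal
```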

19.

In this paper, a novel speech encryption algorithm based on a hybrid hyper-chaotic system is presented. Instead of a normal chaotic system, a hybrid hyper-chaotic system is used to improve the security level of speech communication models. A hyper-chaotic system is more complex and dynamic than a normal chaotic system, as it has more than one positive Lyapunov exponent. The hybrid chaotic system is designed by perturbing one discrete system with another. In this algorithm, the input speech signal is compressed by the Discrete Cosine Transform (DCT) to reduce residual intelligibility. The compressed speech signal is permuted by the hybrid chaotic system, which is designed using the Zaslavsky and Zigzag maps. For the substitution process, a reference speech signal is generated by a Hidden Markov Model (HMM) speech synthesizer and permuted using the hyper-chaotic system. Masking of the encrypted signal is done by a masking sequence obtained from the hyper-chaotic system. The proposed work provides higher security for audio and speech signals over an insecure public network than traditional speech encryption algorithms based on normal chaotic systems. The merits of the proposed algorithm are demonstrated, from a cryptographic point of view, using the following metrics: key space analysis, key sensitivity analysis, information entropy, correlation coefficient analysis, Signal-to-Noise Ratio (SNR) analysis, subjective evaluation of speech quality, Perceptual Evaluation of Speech Quality (PESQ) analysis, NSCR (Number of Samples Changing Rate) and UACI (Unified Averaged Changed Intensity) analysis. The results show that the proposed speech encryption algorithm provides an appreciable level of security with robust encryption and decryption quality.

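Since the exact Zaslavsky/Zigzag equations are not given in the abstract, the sketch below substitutes a logistic map to show the general mechanism of chaotic permutation: a key-seeded orbit is argsorted into a keyed, invertible permutation of the samples.

```python
# Chaotic permutation sketch with a logistic map standing in for the paper's
# Zaslavsky/Zigzag hybrid (whose equations are not given in the abstract):
# a key-seeded orbit is argsorted to yield a keyed permutation of samples.
import numpy as np

def chaotic_permutation(n: int, x0: float, r: float = 3.99) -> np.ndarray:
    orbit = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)        # logistic map iteration
        orbit[i] = x
    return np.argsort(orbit)         # orbit order defines the permutation

samples = np.arange(10.0)            # stand-in for DCT-compressed speech
perm = chaotic_permutation(len(samples), x0=0.3141)   # x0 acts as the key
scrambled = samples[perm]
restored = np.empty_like(samples)
restored[perm] = scrambled           # inverse permutation with the same key
assert np.allclose(restored, samples)
```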

20.
An Audio Coding Algorithm Based on Wavelet Packets and a Psychoacoustic Model (cited 6 times: 1 self-citation, 5 by others)
This paper proposes a new audio coding algorithm suitable for real-time multimedia applications. The algorithm first applies a wavelet packet decomposition to the audio signal, then computes the masking thresholds in the wavelet domain, and finally performs dynamic bit allocation, quantization and coding of the wavelet coefficients in each subband according to the signal-to-mask ratios obtained from the psychoacoustic model. Experimental results show that when a CD audio signal is compressed to 64 Kbps, the segmental SNR of the reconstructed signal is 32.32 dB and no distortion is perceived subjectively. The algorithm is computationally simple and achieves real-time audio encoding on a Pentium 133 MHz personal computer without any additional hardware.
