Similar Documents
 20 similar documents retrieved (search time: 46 ms)
1.
Recently, arithmetic coding has attracted the attention of many scholars because of its high compression capability. Accordingly, in this paper, a method that adds secrecy to this well-known source code is proposed. A finite-state arithmetic code is used as the source code, and its finite-state-machine characteristic is exploited to insert random jumps during the source coding process. In addition, a Huffman code is designed for each state to make decoding possible even across jumps. Being prefix-free, the Huffman codes allow an authorized user to track the correct states when decoding with the correct symmetric pseudo-random key. The robustness of the proposed scheme is further reinforced by swapping the outputs of the Huffman codes in each state, adding another layer of uncertainty. Several test images are used to inspect the validity of the proposed Huffman finite-state arithmetic coding (HFSAC). Key-space analyses, statistical analyses, and key and plaintext sensitivity tests show that HFSAC, with little effect on compression efficiency, provides an efficient and secure method for real-time image encryption and transmission.

2.
The compression coding methods now in wide use are all implemented by way of a Huffman tree, so many computational steps revolve around building and traversing that tree. To simplify the encoding process, this paper proposes an optimal variable-length coding method that needs no Huffman tree: through a probability-compensation procedure, the optimal code length of every source symbol is obtained directly. Once the code lengths and probabilities are known, the final codewords can likewise be determined without a Huffman tree, and the result can be proved to satisfy the optimal variable-length coding theorem and the prefix-code property. Tests show that the method produces optimal variable-length codes quickly and effectively while simplifying the computation and storage involved in variable-length encoding.
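For contrast with the tree-free method above, the conventional tree-based computation of optimal code lengths can be sketched as follows. This is a minimal Python illustration of standard Huffman construction, not the paper's probability-compensation procedure; the symbol weights are made up for the example:

```python
import heapq
import itertools

def huffman_code_lengths(freqs):
    """Classic tree-based Huffman: return an optimal code length per symbol.

    freqs maps symbol -> weight.  The two lightest subtrees are merged
    repeatedly; merging deepens every leaf in both subtrees by one level.
    """
    order = itertools.count()  # tie-breaker so heap entries stay comparable
    heap = [(w, next(order), {s: 0}) for s, w in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate one-symbol source still needs 1 bit
        return {s: 1 for s in freqs}
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, next(order), merged))
    return heap[0][2]

# The resulting lengths satisfy the Kraft equality for a complete prefix code.
lengths = huffman_code_lengths({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5})
assert sum(2 ** -n for n in lengths.values()) == 1.0
```

Every merge touches the heap, which is the bookkeeping overhead the tree-free method above sets out to avoid.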

3.
Arithmetic coding is a powerful lossless data compression technique that has attracted much attention. It provides more flexibility and better efficiency than Huffman coding, but the multiplications needed in its encoding and decoding algorithms are very undesirable. Rissanen and Mohiuddin (1989) proposed a simple scheme to avoid the multiplications; the present authors found that its performance may degrade significantly in some cases. They propose a multiplication-free multialphabet arithmetic code that can be shown to suffer only minor performance degradation in all cases. In the proposed scheme, each multiplication is replaced by a single shift-and-add. The authors show, by both theoretical analysis and simulation, that the degradation of the proposed multiplication-free scheme is always several times (2-7 times in the present experiments) smaller than that of the Rissanen-Mohiuddin scheme.

4.
Modified JPEG Huffman coding (total citations: 3; self-citations: 0; citations by others: 3)
It is a well-observed characteristic that when a DCT block is traversed in zigzag order, the AC coefficients generally decrease in size and the run-lengths of zero coefficients increase. This article presents a minor modification to the Huffman coding of the JPEG baseline compression algorithm to exploit this redundancy. For this purpose, DCT blocks are divided into bands so that each band can be coded using a separate code table. Three implementations are presented, all of which move the end-of-block marker forward within the DCT block and use it to indicate the band boundaries. Experimental results compare the reduction in code size obtained by our methods with the JPEG sequential-mode Huffman coding and arithmetic coding methods. The average code reduction relative to the total image code size of one of our methods is 4%. Our methods can also be used for progressive image transmission, and experimental results are therefore also given comparing them with two-, three-, and four-band implementations of the JPEG spectral selection method.

5.
This paper reports the effect of applying delta encoding and Huffman coding together to compress speech signals of American English and Hindi from the International Phonetic Alphabet database. The speech signals are first delta-encoded and then compressed by Huffman coding. It is observed that Huffman coding achieves a higher compression ratio on the delta-encoded speech signals than when it is applied to the input speech signals alone.
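The benefit of delta encoding before Huffman coding comes from the residuals concentrating on a few small values, which lowers the first-order entropy that bounds the Huffman code length. A toy sketch of the idea (the ramp signal here is illustrative, not data from the IPA database):

```python
import math
from collections import Counter

def delta_encode(samples):
    """First-order delta: keep the first sample, then successive differences."""
    return samples[:1] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    """Invert delta_encode by running prefix sums."""
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out

def entropy_bits(values):
    """First-order entropy in bits/symbol, a lower bound for Huffman coding."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A slowly rising ramp: 100 distinct sample values, but only 2 distinct
# deltas, so the delta stream is far more compressible by an entropy coder.
signal = [i // 3 for i in range(300)]
assert delta_decode(delta_encode(signal)) == signal
assert entropy_bits(delta_encode(signal)) < entropy_bits(signal)
```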

6.
In this paper, we present two multistage compression techniques to reduce the test data volume in scan test applications. We propose two encoding schemes, namely alternating frequency-directed equal-run-length (AFDER) coding and run-length-based Huffman coding (RLHC). Together with the nine-coded compression technique, these schemes enhance the test data compression ratio. In the first stage, the pre-generated test cubes with unspecified bits are encoded using the nine-coded compression scheme. The proposed encoding schemes then exploit the properties of the compressed data to enhance the compression further. This multistage compression is especially effective when the percentage of don't-cares in a test set is very high. We also present a simple decoder architecture to recover the original data. Experimental results on the ISCAS'89 benchmark circuits confirm average compression ratios of 74.2% and 77.5% with the proposed 9C-AFDER and 9C-RLHC schemes respectively.

7.
This paper describes the shortcomings of compressing Freeman chain-code data directly with Huffman coding. Using the multi-step state-transition matrix of a Markov chain to rebuild the model of the chain-code sequence, it presents a more reasonable compression method for Freeman chain-code data, which outperforms direct Huffman coding when compressing relatively complex planar line drawings, i.e., longer chain codes.

8.
Semistatic minimum-redundancy prefix (MRP) coding is fast compared with rival coding methods, but requires two passes over the message during encoding. Its adaptive counterpart, dynamic Huffman coding, requires only one pass over the input message for encoding and decoding, and is asymptotically efficient. Dynamic Huffman coding is, however, notoriously slow in practice. By removing the restriction that the code used for each message symbol must have minimum redundancy, and thereby admitting some compression loss, it is possible to improve the speed of adaptive MRP coding. This paper presents a controlled method for trading compression loss for coding speed by approximating symbol frequencies with a geometric distribution. The result is an adaptive MRP coder that is asymptotically efficient and also fast in practice.

9.
Digital video compression: an overview (total citations: 1; self-citations: 0; citations by others: 1)
Some of the more commonly used video compression schemes are reviewed: differential pulse code modulation (DPCM), Huffman coding, transform coding, vector quantization (VQ), and subband coding. The use of motion compensation to improve compression is also discussed.
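Of the schemes listed, DPCM is the simplest to sketch: the encoder quantizes the difference between each sample and a prediction formed from the previously reconstructed sample, and the decoder mirrors the same loop. A minimal illustration with a previous-sample predictor and a uniform quantizer (the step size and samples are arbitrary):

```python
def dpcm_encode(samples, step):
    """DPCM with a previous-sample predictor and a uniform residual quantizer.

    The encoder tracks the decoder's *reconstructed* previous sample, so
    quantization error stays bounded instead of accumulating.
    """
    indices, prev = [], 0
    for x in samples:
        q = round((x - prev) / step)  # quantized prediction residual
        indices.append(q)
        prev += q * step              # what the decoder will reconstruct
    return indices

def dpcm_decode(indices, step):
    """Mirror of the encoder loop: accumulate dequantized residuals."""
    out, prev = [], 0
    for q in indices:
        prev += q * step
        out.append(prev)
    return out

# With step == 1 and integer samples the round trip is exact.
assert dpcm_decode(dpcm_encode([10, 12, 11, 15], 1), 1) == [10, 12, 11, 15]
```

The residual indices are then typically entropy-coded, e.g. with one of the Huffman schemes surveyed above.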

10.
A compression-decompression scheme based on Huffman codes, the Modified Selective Huffman (MS-Huffman) scheme, is proposed in this paper. The scheme optimizes the parameters that influence test cost reduction: the compression ratio, the on-chip decoder area overhead, and the overall test application time. It is proved theoretically that the proposed scheme gives better test data compression than recently proposed encoding schemes for any test set. A large number of experimental results clearly demonstrate that the proposed scheme improves test data compression and reduces overall test application time and on-chip area overhead compared with other Huffman-code-based schemes.

11.
An adaptive pel location coding scheme is proposed for document encoding. Operating in one or two dimensions, it achieves high compression ratios at the expense of slight image degradation. Comparisons with autoadaptive block coding, run-length coding using the B1 code, and modified Huffman coding are presented.

12.
Block arithmetic coding for source compression (total citations: 3; self-citations: 0; citations by others: 3)
We introduce block arithmetic coding (BAC), an entropy coding technique that combines many of the advantages of ordinary stream arithmetic coding with the simplicity of block codes. The code is variable-length-in, fixed-length-out (V-to-F), unlike Huffman coding, which is fixed-in, variable-out (F-to-V). We develop two versions of the coder: 1) an optimal encoder based on dynamic programming arguments, and 2) a suboptimal heuristic based on arithmetic coding. The optimal coder is optimal over all V-to-F complete and proper block codes. We show that the suboptimal coder achieves compression within a constant of a perfect entropy coder for independent and identically distributed inputs. BAC is easily implemented, even with large codebooks, because the coding and decoding algorithms are regular; codebooks with 2^32 entries are feasible. BAC also does not suffer catastrophic failure in the presence of channel errors: decoding errors are confined to the block in question. The encoding is in practice reasonably efficient. With i.i.d. binary inputs with P(1) = 0.95 and 16-bit codes, entropy arguments indicate that at most 55.8 bits can be encoded per codeword; the BAC heuristic achieves 53.0 and the optimal BAC achieves 53.5. Finally, BAC appears to be much faster than ordinary arithmetic coding.
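The entropy bound quoted in the abstract can be reproduced directly: a 16-bit codeword of a perfect V-to-F entropy coder can absorb, on average, at most 16/H(p) symbols of an i.i.d. binary source. A quick check (standard entropy arithmetic, independent of BAC itself):

```python
import math

def binary_entropy(p):
    """Entropy in bits/symbol of an i.i.d. binary source with P(1) = p."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# A perfect variable-to-fixed entropy coder with 16-bit output codewords can
# absorb at most 16 / H(p) source symbols per codeword on average.
limit = 16 / binary_entropy(0.95)
assert abs(limit - 55.87) < 0.01  # matches the 55.8-bit figure in the abstract
```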

13.
Algra, T. Electronics Letters, 1992, 28(15): 1399-1401
Variable-to-fixed-length coding implementations according to the Tunstall algorithm are shown to be significantly less complex than fixed-to-variable-length coding schemes such as the Huffman scheme. A modified version of Tunstall coding is presented, with an improved compression ratio.
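The Tunstall construction referred to above builds a variable-to-fixed parse dictionary by repeatedly splitting the most probable parse word into one child per source symbol, until the dictionary would exceed 2^n entries. A minimal sketch for a memoryless source (the probabilities and code size are illustrative, and the abstract's modified version is not modeled):

```python
import heapq

def tunstall(probs, code_bits):
    """Build a Tunstall variable-to-fixed parse dictionary.

    probs maps symbol -> probability.  Starting from the single-symbol
    words, the most probable parse word is repeatedly split into one child
    per symbol, as long as the result still fits in 2**code_bits words.
    """
    max_words = 2 ** code_bits
    heap = [(-p, word) for word, p in probs.items()]  # max-heap on probability
    heapq.heapify(heap)
    n_words = len(probs)
    while n_words + len(probs) - 1 <= max_words:
        neg_p, word = heapq.heappop(heap)             # most probable word
        for s, p in probs.items():
            heapq.heappush(heap, (neg_p * p, word + s))
        n_words += len(probs) - 1
    return sorted(word for _, word in heap)

# With P(a)=0.7, P(b)=0.3 and 2-bit codes, 'a' and then 'aa' get split.
assert tunstall({'a': 0.7, 'b': 0.3}, 2) == ['aaa', 'aab', 'ab', 'b']
```

Each parse word is then transmitted as a fixed-length index into the dictionary, which is why the decoder needs no bit-by-bit tree walk.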

14.
It is well known that variable-length coding schemes can be employed in entropy encoding of finite-alphabet sources. Transmitting these codes over a synchronous channel, however, requires a buffer, and since in practice this buffer is of finite size, it is subject to both overflow and underflow. The buffer behavior is studied with particular application to Huffman coding of the outputs of an optimum uniform-threshold quantizer driven by a memoryless Gaussian source. Fairly general upper and lower bounds on the average terminal time are developed. Under certain conditions, the tightness of these bounds is verified, and asymptotic formulas are developed. As an example, an encoding scheme employing Huffman codes in conjunction with uniform quantization of memoryless Gaussian sources is considered, and the buffer behavior as a function of the buffer size and output rate is studied.

15.
International digital facsimile coding standards (total citations: 3; self-citations: 0; citations by others: 3)
Recently, Study Group XIV of CCITT drafted a new Recommendation (T.4) with the aim of achieving compatibility between digital facsimile apparatus connected to general switched telephone networks. A one-dimensional coding scheme is used in which run lengths are encoded using a modified Huffman code. This allows typical A4-size documents, in the form of black-and-white images scanned at normal resolution (3.85 lines/mm, 1728 pels/line), to be transmitted in an average time of about a minute at a rate of 4800 bit/s. The Recommendation also includes a two-dimensional code, known as the modified relative element address designate (READ) code, in the form of an optional extension to the one-dimensional code. This extension allows typical documents scanned at high (twice normal) resolution, with every fourth line one-dimensionally coded, to be transmitted in an average time of about 75 s at 4800 bit/s. This paper describes the coding schemes in detail and discusses the factors that led to their choice. In addition, it assesses the performance of the codes, particularly their compression efficiency and vulnerability to transmission errors, making use of the 8 CCITT reference documents.

16.
Huffman coding is a widely used and highly effective data compression technique. To achieve a high compression ratio, this paper discusses the properties of canonical Huffman trees and studies and implements a Huffman algorithm based on a condensed Huffman table. The new condensed Huffman table reduces the overhead of the Huffman code table; compared with the traditional Huffman table and other improved condensed tables, its greatest advantage is a markedly smaller space requirement.

17.
Minimum-redundancy coding (also known as Huffman coding) is one of the enduring techniques of data compression. Many efforts have been made to improve its efficiency, the majority based on improved representations for explicit Huffman trees. In this paper, we examine how minimum-redundancy coding can be implemented efficiently by divorcing coding from a code tree, with emphasis on the situation when the alphabet size n is large, perhaps on the order of 10^6. We review techniques for devising minimum-redundancy codes and consider in detail how encoding and decoding should be accomplished. In particular, we describe a modified decoding method that allows improved decoding speed, requiring just a few machine operations per output symbol (rather than per decoded bit), and uses just a few hundred bytes of memory beyond the space required to store an enumeration of the source alphabet.
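Divorcing coding from an explicit tree typically relies on canonical codes: only the per-symbol code lengths are stored, and codewords are assigned in a fixed numerical order. A minimal sketch of canonical codeword assignment (the length table is illustrative, not taken from the paper):

```python
def canonical_codes(lengths):
    """Assign canonical Huffman codewords given only per-symbol code lengths.

    Symbols are processed in (length, symbol) order; each codeword is the
    previous codeword plus one, left-shifted whenever the length grows.
    Only the lengths, not a tree, need to be stored or transmitted.
    """
    out, code, prev_len = {}, 0, 0
    for sym, n in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= n - prev_len
        out[sym] = format(code, f'0{n}b')
        code += 1
        prev_len = n
    return out

# Lengths satisfying the Kraft equality yield a complete prefix-free code.
assert canonical_codes({'a': 1, 'b': 2, 'c': 3, 'd': 3}) == \
    {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

Because codewords of equal length are consecutive integers, a decoder can locate a symbol with a handful of comparisons and one table lookup per length, which is the kind of per-symbol operation count the abstract describes.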

18.
In this paper, we propose an entropy-minimization histogram mergence (EMHM) scheme that can significantly reduce the number of grayscales with nonzero pixel populations (GSNPP) without visible loss of image quality. We prove in theory that the entropy of an image is reduced after histogram mergence and that the reduction in entropy is maximized by our EMHM. The reduction in image entropy benefits entropy encoding, since by Shannon's first theorem the minimum average codeword length per source symbol is the entropy of the source signal. Extensive experimental results show that our EMHM can reduce the code length of entropy coding, such as Huffman, Shannon, and arithmetic coding, by over 20% while preserving subjective and objective image quality very well. Moreover, the performance of classic lossy image compression techniques, such as JPEG, JPEG2000, and Better Portable Graphics (BPG), can be improved by preprocessing images with our EMHM.
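The entropy-reduction property is easy to verify on a toy histogram: merging two nonzero grayscale bins can only decrease the Shannon entropy of the pixel distribution. A small illustration (the counts are made up, and the authors' merge-selection rule is not modeled):

```python
import math

def entropy(counts):
    """Shannon entropy in bits/pixel of a grayscale histogram."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

hist = [120, 80, 60, 40]      # toy 4-grayscale histogram
merged = [120, 80 + 60, 40]   # merge the middle two grayscales into one bin
assert entropy(merged) < entropy(hist)
```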

19.
An efficient coding system for long source sequences   总被引:2,自引:0,他引:2  
The Elias source coding scheme is modified to permit a source sequence of practically unlimited length to be coded as a single codeword using arithmetic of only limited precision. The result is shown to be a nonblock arithmetic code of the first-in, first-out (FIFO) type: source symbols are decoded in the same order as they were encoded. Codeword lengths near the optimum for the specified statistical properties of the source can be achieved. Explicit encoding and decoding algorithms are provided which effectively implement the coding scheme. Applications to data compression and cryptography are suggested.

20.
The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. The authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive-buffer-instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels, which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to that of the corresponding ideal 2-D ECSBC system.
