Similar Documents
20 similar documents found (search time: 0 ms)
1.
Some properties of Huffman codes are presented. It is shown that, knowing the probability P_1 of the most likely source letter, there exist new lower and upper bounds on the redundancy of the Huffman code which are tighter for P_1 ≥ 0.4 than those given by Shannon's first theorem or by the more recent results of Gallager. It is also shown that the new bounds are the tightest possible for P_1 ≥ 0.4 when P_1 is supposed to be the only known probability.
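The redundancy these abstracts bound (expected Huffman codeword length minus source entropy) is easy to compute directly. A minimal sketch, with an illustrative distribution of my own choosing; the comparison against P_1 + 0.086 (Gallager's upper bound for P_1 < 1/2) is only a sanity check, not the new bounds of the paper:

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Return codeword lengths of a binary Huffman code for `probs`."""
    # heap items: (probability, tie-breaker, list of symbol indices in subtree)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    tag = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # merge the two least likely subtrees;
        p2, _, s2 = heapq.heappop(heap)   # every symbol inside them gets one bit deeper
        for s in s1 + s2:
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, tag, s1 + s2))
        tag += 1
    return lengths

probs = [0.45, 0.25, 0.15, 0.10, 0.05]    # illustrative source, P_1 = 0.45
L = huffman_lengths(probs)
avg = sum(p * l for p, l in zip(probs, L))
H = -sum(p * log2(p) for p in probs)
redundancy = avg - H                      # expected length minus entropy
print(redundancy)
print(probs[0] + 0.086)                   # Gallager's upper bound for P_1 < 1/2
```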

2.
New upper bounds on the redundancy of Huffman codes are provided. A bound is presented that is sharper than the bound of Gallager for 2/9 ≤ P_1 ≤ 0.4, when the probability P_1 of the most likely source letter is the only known probability. The improved bound is the tightest possible for 1/3 ≤ P_1 ≤ 0.4. Upper bounds are also presented on the redundancy of Huffman codes when the extreme probabilities P_1 and P_N are known.

3.
Tight bounds on the redundancy of Huffman codes
A method for deriving optimal upper bounds on the redundancy of binary Huffman codes in terms of the probability p1 of the most likely source letter is presented. This method is used to compute bounds for all p1 ≥ 1/127, which were previously known only for a few special cases. Furthermore, the known optimal lower bound for binary Huffman codes is generalized to arbitrary code alphabets, and some upper bounds for D-ary Huffman codes, 2 ≤ D < ∞, are given, which are the tightest possible for all p1 ≥ 1/2.

4.
Bounds on the redundancy of Huffman codes in terms of the probability p1 of the most likely source letter are provided. In particular, upper bounds are presented that are sharper than the bounds given recently by R.G. Gallager (ibid., vol. IT-24, no. 6, p. 668-674, Nov. 1978) and by R.M. Capocelli et al. (ibid., vol. IT-32, no. 6, p. 854-857, Nov. 1986) for an interval 2/(2^(l+1)+1) < p1 < 1/(2^l − 1), l ≥ 2. It is shown that the new bounds are the tightest possible for these intervals.

5.
A necessary and sufficient condition for the most likely letter of any discrete source to be coded by a single symbol with a D-ary Huffman code, 2 ≤ D < ∞, is derived. As a consequence, a lower bound on the redundancy of a D-ary Huffman code is established.

6.
In this paper the connection between the self-information of a source letter from a finite alphabet and its codeword length in a Huffman code is investigated. Consider the set of all independent finite-alphabet sources which contain a source letter a of probability p. The maximum over this set of the length of a Huffman codeword for a is determined. This maximum remains constant as p varies between the reciprocal values of two consecutive Fibonacci numbers. For small p this maximum is approximately equal to [log_2((1+√5)/2)]^(−1) ≈ 1.44 times the self-information.
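The Fibonacci connection can be seen numerically: sources whose probabilities are proportional to Fibonacci numbers drive a binary Huffman tree into its most skewed shape, and the constant 1.44 is the reciprocal of log_2 of the golden ratio. A small sketch under those assumptions (the weights are illustrative, not taken from the paper):

```python
import heapq
from math import log2, sqrt

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code, via repeated pair-merging."""
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    depth = [0] * len(probs)
    tag = len(probs)
    while len(heap) > 1:
        p, _, a = heapq.heappop(heap)
        q, _, b = heapq.heappop(heap)
        for s in a + b:
            depth[s] += 1
        heapq.heappush(heap, (p + q, tag, a + b))
        tag += 1
    return depth

# Fibonacci-weighted probabilities produce a maximally skewed (caterpillar) tree:
fib = [1, 1, 2, 3, 5, 8, 13, 21]
total = sum(fib)
lengths = huffman_lengths([f / total for f in fib])
print(max(lengths))                    # deepest codeword: n - 1 levels for n symbols
print(1 / log2((1 + sqrt(5)) / 2))     # ~1.44, the constant quoted in the abstract
```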

7.
A problem associated with the use of variable-length source codes is that loss of synchronization may lead to extended errors in the decoded text. In this correspondence it is shown that some binary Huffman codes contain a codeword that resynchronizes the decoder regardless of the synchronization slippage preceding that codeword. Such codes are self-synchronizing in a probabilistic sense, yet require no additional system overhead. Some sufficient conditions are found for the existence or nonexistence of self-synchronizing Huffman codes for many classes of source probabilities. One of our results shows that many common languages can be encoded with self-synchronizing Huffman codes.

8.
Chen  Z. Vucetic  B. Yuan  J. 《Electronics letters》2005,41(9):538-540
The asymptotic bit error performance of the Alamouti scheme with transmit antenna selection is investigated for imperfect selection of the antenna subset. It is shown that the transmit diversity order is equal to the largest ordinal number of the antenna within the selected antenna subset.

9.
Let X be a discrete random variable drawn according to a probability mass function p(x), and suppose p(x) is dyadic, i.e., log(1/p(x)) is an integer for each x. It is shown that the binary code length assignment l(x) = log(1/p(x)) dominates any other uniquely decodable assignment l'(x) in expected length, in the sense that El(X) < El'(X), indicating optimality in long-run performance (which is well known), and competitively dominates l'(x), in the sense that Pr{l(X) < l'(X)} > Pr{l(X) > l'(X)}, which indicates l is also optimal in the short run. In general, if p is not dyadic, then l = ⌈log 1/p⌉ dominates l'+1 in expected length and competitively dominates l'+1, where l' is any other uniquely decodable code.
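The length assignment in question is the Shannon code, l(x) = ⌈log_2(1/p(x))⌉, which always satisfies the Kraft inequality. A minimal sketch on a dyadic source of my own choosing, comparing it against an arbitrary alternative uniquely decodable assignment (a fixed-length code) to illustrate competitive dominance:

```python
from math import ceil, log2

def shannon_lengths(probs):
    """l(x) = ceil(log2(1/p(x))); for dyadic p this equals log2(1/p) exactly."""
    return [ceil(log2(1 / p)) for p in probs]

# Dyadic example: every log2(1/p) is an integer.
p = [1/2, 1/4, 1/8, 1/8]
l = shannon_lengths(p)
print(l)                                   # [1, 2, 3, 3]
assert sum(2.0 ** -li for li in l) <= 1.0  # Kraft inequality: a prefix code exists

# Competitive comparison with another uniquely decodable assignment l'
# (here a fixed-length code of 2 bits per symbol):
lp = [2, 2, 2, 2]
win  = sum(pi for pi, a, b in zip(p, l, lp) if a < b)   # Pr{l(X) < l'(X)}
lose = sum(pi for pi, a, b in zip(p, l, lp) if a > b)   # Pr{l(X) > l'(X)}
print(win, lose)
```

Here the Shannon code wins on the probability-1/2 symbol and loses only on the two rare ones, so Pr{l(X) < l'(X)} exceeds Pr{l(X) > l'(X)}, as the abstract states.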

10.
A classification of all probability distributions over the finite alphabet of an information source is given, where the classes are the sets of distributions sharing the same binary Huffman code. Such a classification can be used in noiseless coding, when the distribution of the finite memoryless source varies in time or becomes gradually known. Instead of applying the Huffman algorithm to each new estimate of the probability distribution, if a simple test based on the above classification is passed, then the Huffman code used previously is optimal also for the new distribution.

11.
A new construction for n-track (d, k) codes with redundancy r, referred to as (d, k; n, r) codes, is presented. This construction applies single-track (d, k+Δk) codes (with certain extra constraints and appropriate amounts of delay) on each of the n tracks. This construction achieves a large part of the capacity increases possible when using (d, k; n, r) codes, has simple encoders and decoders, and exhibits considerable robustness to faulty tracks. It is shown that under this construction, (d, k; n, r) codes can achieve at least (n−r−1)/n × 100% of the gap in capacity between conventional (d, k) and (d, ∞) codes. Several practical examples of (d, k; n, r) codes under this construction are presented.
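A (d, k) constraint simply requires that, between consecutive 1s, every run of 0s be at least d and at most k long (boundary runs are limited only by k). A minimal checker sketch, with illustrative test strings of my own:

```python
def satisfies_dk(bits: str, d: int, k: int) -> bool:
    """Check the (d, k) runlength constraint on a binary string:
    every run of 0s between two 1s has length in [d, k];
    runs at either end of the string are bounded only by k."""
    run = 0
    seen_one = False
    for b in bits:
        if b == "0":
            run += 1
            if run > k:          # a zero-run longer than k is always illegal
                return False
        else:
            if seen_one and run < d:   # interior run shorter than d is illegal
                return False
            run = 0
            seen_one = True
    return True

print(satisfies_dk("1001001", 2, 3))    # runs of 2 between 1s: legal for (2, 3)
print(satisfies_dk("1100", 1, 3))       # adjacent 1s violate d = 1
print(satisfies_dk("10000001", 1, 3))   # a run of six 0s violates k = 3
```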

12.
A sliding block encoding scheme easily derivable from block encoding is presented. It is shown to perform arbitrarily close to the rate distortion function when the source is stationary and memoryless.

13.
Binary cyclic redundancy codes for feedback communication over noisy digital links are considered. The standard 16-bit ADCCP polynomial is designed for digital links that already have a low input bit error probability. For file transfer between personal computers over telephone circuits, the quality of the resulting digital circuit may be much lower. This leads to the consideration of 3-byte (24-bit) and 4-byte (32-bit) polynomials. Generator polynomials of a certain class are found that have minimum weight and yet achieve the bound on minimum distance for arbitrary codes. Particular polynomials for 24-bit and 32-bit redundancies are exhibited, of weight and distance 6 in the 24-bit case and weight 10 and distance 8 in the 32-bit case.
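The mechanics behind any such CRC are polynomial long division over GF(2): the transmitted check bits are the remainder of the message polynomial times x^width divided by the generator. A minimal bitwise sketch; the polynomial used below is the common IEEE CRC-32 generator, chosen purely for illustration, not one of the paper's 24- or 32-bit designs:

```python
def crc_remainder(data: bytes, poly: int, width: int) -> int:
    """Remainder of data(x) * x^width divided by the generator polynomial.
    `poly` holds the generator's low `width` bits (the leading 1 is implicit)."""
    reg = 0
    top = 1 << (width - 1)
    mask = (1 << width) - 1
    for byte in data:
        for i in range(7, -1, -1):           # MSB-first bit order
            bit = (byte >> i) & 1
            if reg & top:
                reg = ((reg << 1) | bit) ^ poly
            else:
                reg = (reg << 1) | bit
            reg &= mask
    for _ in range(width):                   # flush: multiply by x^width
        if reg & top:
            reg = (reg << 1) ^ poly
        else:
            reg <<= 1
        reg &= mask
    return reg

# Illustrative generator: the IEEE CRC-32 polynomial 0x04C11DB7.
msg = b"123456789"
r = crc_remainder(msg, 0x04C11DB7, 32)
# Appending the remainder makes the whole codeword divisible by the generator:
padded = msg + r.to_bytes(4, "big")
print(crc_remainder(padded, 0x04C11DB7, 32))   # remainder of a valid codeword is 0
```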

14.
A new scheme for joint source and channel coding is introduced in which both Huffman coding and convolutional coding are used in concatenated form. This yields a reduction in overall complexity in comparison with a hard-decision decoder followed by a Huffman decoder.

15.
We consider several issues in the analysis and design of turbo-coded systems for (0, k) input-constrained channels. These constraints commonly arise in magnetic recording channels. This system is characterized by a high-rate turbo code driving a high-rate (n−1)/n, small-length (0, k) block code. We discuss the properties of the (0, k) code that affect its performance on both an additive white Gaussian noise (AWGN) and a precoded dicode channel. We address soft-in soft-out (SISO) decoding of linear and nonlinear (0, k) codes and show that good (0, k) codes exist even when d_min = 1. For the (0, k)-constrained AWGN channel, we present several rate-(n−1)/n block codes that optimally trade off bit-error-rate performance with k. For the precoded dicode channel, we show that the systematic (0, n−1) modulation codes are superior to most other rate-(n−1)/n block codes in terms of error-rate performance, and their attractiveness is increased by the fact that they do not contribute any significant complexity to the overall system.

16.
17.
The data structure of Huffman codes and its application to efficient encoding and decoding of Huffman codes are studied in detail. The tree structure is represented by a two-dimensional array which can be applied to the decoding of Huffman codes as the state transition table of a finite-state decoding automaton. Inversion produces a one-dimensional state transition table of a semiautonomous finite-state sequential machine which can be used as a Huffman encoder with a push-down stack. The encoding and decoding procedures are simple and efficient; they are not only implementable in simple hardware but also suitable for software implementation.
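The decoding half of this idea can be sketched directly: the Huffman tree becomes a two-dimensional array indexed by (state, input bit), where an entry either names the next internal state or emits a symbol and resets to the root. The code table below is a hypothetical prefix code of my own, not one from the paper:

```python
# Hypothetical prefix code used for illustration:
code = {"a": "0", "b": "10", "c": "110", "d": "111"}

# Build the state-transition table: one row per internal tree node, one column
# per input bit; an entry is ("node", next_state) or ("leaf", symbol).
table = [[None, None]]                       # row 0 is the root
for sym, word in code.items():
    state = 0
    for i, bit in enumerate(word):
        b = int(bit)
        if i == len(word) - 1:
            table[state][b] = ("leaf", sym)  # last bit: emit the symbol
        else:
            if table[state][b] is None:      # allocate a new internal state
                table.append([None, None])
                table[state][b] = ("node", len(table) - 1)
            state = table[state][b][1]

def decode(bits):
    """Run the finite-state decoding automaton over a bit string."""
    out, state = [], 0
    for bit in bits:
        kind, val = table[state][int(bit)]
        if kind == "leaf":
            out.append(val)
            state = 0                        # back to the root after each symbol
        else:
            state = val
    return "".join(out)

encoded = "".join(code[s] for s in "abcd")   # "010110111"
print(decode(encoded))
```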

18.
Minimum redundancy coding (also known as Huffman coding) is one of the enduring techniques of data compression. Many efforts have been made to improve the efficiency of minimum redundancy coding, the majority based on the use of improved representations for explicit Huffman trees. In this paper, we examine how minimum redundancy coding can be implemented efficiently by divorcing coding from a code tree, with emphasis on the situation when n is large, perhaps on the order of 10^6. We review techniques for devising minimum redundancy codes, and consider in detail how encoding and decoding should be accomplished. In particular, we describe a modified decoding method that allows improved decoding speed, requiring just a few machine operations per output symbol (rather than for each decoded bit), and uses just a few hundred bytes of memory above and beyond the space required to store an enumeration of the source alphabet.
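The usual way to divorce coding from an explicit tree is the canonical Huffman arrangement: only the codeword lengths are stored, codewords of equal length are consecutive integers, and decoding needs just a first-code table per length. A minimal sketch under that scheme (the lengths below are illustrative, chosen to satisfy the Kraft equality; this is not the paper's exact table layout):

```python
from collections import defaultdict

# Illustrative codeword lengths; sum of 2^-l equals 1, so a prefix code exists.
lengths = {"a": 2, "b": 2, "c": 2, "d": 3, "e": 3}

# Canonical assignment: sort by (length, symbol); each code is the previous
# one plus 1, left-shifted whenever the length increases.
codebook, code, prev_len = {}, 0, 0
for sym, ln in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
    code <<= ln - prev_len
    codebook[sym] = format(code, f"0{ln}b")
    code += 1
    prev_len = ln

# Decoding tables: first canonical code of each length, and the symbols of
# that length in canonical order -- no tree anywhere.
by_len = defaultdict(list)
for sym, word in sorted(codebook.items(), key=lambda kv: (len(kv[1]), kv[1])):
    by_len[len(word)].append(sym)
first_code = {ln: int(codebook[syms[0]], 2) for ln, syms in by_len.items()}

def decode(bits):
    out, acc, ln = [], 0, 0
    for bit in bits:
        acc, ln = (acc << 1) | int(bit), ln + 1
        offset = acc - first_code.get(ln, 1 << 30)
        if 0 <= offset < len(by_len.get(ln, ())):
            out.append(by_len[ln][offset])  # table lookup: a few ops per symbol
            acc, ln = 0, 0
    return "".join(out)

encoded = "".join(codebook[s] for s in "abcde")
print(encoded, "->", decode(encoded))
```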

19.
An alphabetical code is a code in which the numerical binary order of the codewords corresponds to the alphabetical order of the encoded symbols. A necessary and sufficient condition for the existence of a binary alphabetical code is presented. The redundancy of the optimum binary alphabetical code is given in comparison with the Huffman code, and an upper bound on it, tighter than bounds previously reported, is presented. The redundancy of the optimal alphabetical code is about 5% relative to Huffman coding, which shows the usefulness of the alphabetical code.

20.
In this paper, we settle a long-standing open problem concerning the average redundancy r_n of the Lempel-Ziv'78 (LZ78) code. We prove that for a memoryless source the average redundancy rate attains asymptotically E[r_n] = (A + δ(n))/log n + O(log log n / log^2 n), where A is an explicitly given constant that depends on source characteristics, and δ(x) is a fluctuating function with a small amplitude. We also derive the leading term for the kth moment of the number of phrases. We conclude by conjecturing a precise formula for the expected redundancy of a Markovian source. The main result of this paper is a consequence of the second-order properties of the Lempel-Ziv algorithm obtained by Jacquet and Szpankowski (1995). These findings have been established by analytical techniques of the precise analysis of algorithms. We give a brief survey of these results, since they are interesting in their own right and shed some light on the probabilistic behavior of pattern-matching-based data compression.
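The quantity driving this analysis is the number of phrases in the greedy LZ78 parse: each phrase is the longest previously seen phrase extended by one new symbol, and is emitted as a (dictionary index, symbol) pair. A minimal parser sketch on an illustrative string:

```python
def lz78_parse(s):
    """Greedy LZ78 parsing of string `s` into (dictionary index, symbol) pairs."""
    dictionary = {"": 0}     # phrase -> index; the empty phrase has index 0
    phrases = []
    w = ""
    for ch in s:
        if w + ch in dictionary:
            w += ch          # extend the current phrase while it is known
        else:
            phrases.append((dictionary[w], ch))       # emit (index of w, new symbol)
            dictionary[w + ch] = len(dictionary)
            w = ""
    if w:
        # final phrase repeats an existing entry: emit its prefix index + last symbol
        phrases.append((dictionary[w[:-1]], w[-1]))
    return phrases

phrases = lz78_parse("abababababa")
print(phrases)
print(len(phrases))          # the phrase count c; code length grows like c * log c
```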
