Similar Documents (20 results)
1.
Let $P(i) = (1-\theta)\theta^i$ be a probability assignment on the set of nonnegative integers, where $\theta$ is an arbitrary real number, $0 < \theta < 1$. We show that an optimal binary source code for this probability assignment is constructed as follows. Let $l$ be the integer satisfying $\theta^l + \theta^{l+1} \leq 1 < \theta^l + \theta^{l-1}$ and represent each nonnegative integer $i$ as $i = lj + r$, where $j = \lfloor i/l \rfloor$ is the integer part of $i/l$ and $r = i \bmod l$. Encode $j$ by a unary code (i.e., $j$ zeros followed by a single one), and encode $r$ by a Huffman code, using codewords of length $\lfloor \log_2 l \rfloor$ for $r < 2^{\lfloor \log_2 l \rfloor + 1} - l$ and of length $\lfloor \log_2 l \rfloor + 1$ otherwise. An optimal code for the nonnegative integers is the concatenation of these two codes.
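As a rough illustration of the construction above (a minimal sketch, not code from the paper), the following Python function encodes a nonnegative integer once the parameter $l$ has been chosen from $\theta$; the remainder is coded with the standard truncated-binary assignment, which produces exactly the two codeword lengths quoted in the abstract.

```python
import math

def golomb_encode(i: int, l: int) -> str:
    """Encode nonnegative integer i: a unary code for the quotient j = i // l,
    then a truncated binary code for the remainder r = i % l, with short
    codewords of floor(log2 l) bits and long codewords of floor(log2 l) + 1 bits."""
    j, r = divmod(i, l)
    unary = "0" * j + "1"
    b = int(math.floor(math.log2(l)))     # short codeword length
    cutoff = 2 ** (b + 1) - l             # number of short codewords
    if r < cutoff:
        binary = format(r, "0{}b".format(b)) if b > 0 else ""
    else:
        binary = format(r + cutoff, "0{}b".format(b + 1))
    return unary + binary
```

For example, with $l = 3$ the remainders 0, 1, 2 receive the codewords 0, 10, 11, so `golomb_encode(7, 3)` returns `"00110"` (two zeros and a one for $j = 2$, then 10 for $r = 1$).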

2.
In this paper the connection between the self-information of a source letter from a finite alphabet and its codeword length in a Huffman code is investigated. Consider the set of all independent finite-alphabet sources which contain a source letter $a$ of probability $p$. The maximum over this set of the length of a Huffman codeword for $a$ is determined. This maximum remains constant as $p$ varies between the reciprocals of two consecutive Fibonacci numbers. For small $p$ this maximum is approximately equal to $\left[\log_2 \frac{1+\sqrt{5}}{2}\right]^{-1} \approx 1.44$ times the self-information.
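A tiny numerical sketch of the constant quoted above (for orientation only; the precise Fibonacci-interval characterization is the content of the paper):

```python
import math

# Reciprocal of log2 of the golden ratio, the factor quoted in the abstract.
factor = 1 / math.log2((1 + math.sqrt(5)) / 2)
print(round(factor, 4))   # 1.4404

def rough_max_length(p: float) -> float:
    """For small p, the longest Huffman codeword a letter of probability p
    can receive is approximately factor * (-log2 p), i.e. factor times the
    self-information."""
    return factor * (-math.log2(p))
```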

3.
Some properties of Huffman codes are presented. It is shown that, knowing the probability $P_1$ of the most likely source letter, there exist new lower and upper bounds on the redundancy of the Huffman code which are tighter for $P_1 \geq 0.4$ than those given by Shannon's first theorem or by the more recent results of Gallager. It is also shown that the new bounds are the tightest possible for $P_1 \geq 0.4$ when it is supposed that $P_1$ is the only known probability.

4.
New upper bounds on the redundancy of Huffman codes are provided. When the probability $P_1$ of the most likely source letter is the only known probability, a bound is presented that is sharper than Gallager's bound for $2/9 \leq P_1 \leq 0.4$. The improved bound is the tightest possible for $1/3 \leq P_1 \leq 0.4$. Upper bounds are also presented on the redundancy of Huffman codes when the extreme probabilities $P_1$ and $P_N$ are known.

5.
It is known that the expected codeword length $L_{UD}$ of the best uniquely decodable (UD) code satisfies $H(X) \leq L_{UD}$. Let $X$ be a random variable which can take on $n$ values. Then it is shown that the average codeword length $L_{1:1}$ for the best one-to-one (not necessarily uniquely decodable) code for $X$ is shorter than the average codeword length $L_{UD}$ for the best uniquely decodable code by no more than $\log_2 \log_2 n + 3$. Let $Y$ be a random variable taking on a finite or countable number of values and having entropy $H$. Then it is proved that $L_{1:1} \geq H - \log_2(H+1) - \log_2\log_2(H+1) - \cdots - 6$. Some relations are established among the Kolmogorov, Chaitin, and extension complexities. Finally, it is shown that, for all computable probability distributions, the universal prefix codes associated with the conditional Chaitin complexity have expected codeword length within a constant of the Shannon entropy.
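For concreteness, the sketch below computes the average length of the usual best one-to-one code construction (symbols sorted by decreasing probability, the $k$-th most likely symbol given the $k$-th shortest binary string, empty string included); this is a standard construction assumed here for illustration, not notation taken from the paper.

```python
import math

def best_one_to_one_length(probs):
    """Average codeword length of the best one-to-one (not necessarily uniquely
    decodable) binary code: the k-th most probable value gets the k-th shortest
    binary string ("", "0", "1", "00", ...), whose length is floor(log2 k)."""
    ordered = sorted(probs, reverse=True)
    return sum(p * math.floor(math.log2(k))
               for k, p in enumerate(ordered, start=1))
```

Comparing this quantity with the average length of a Huffman code on the same distribution exhibits the gap that the abstract bounds by $\log_2\log_2 n + 3$.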

6.
Assuming the conventional division of the semiconductor into depleted and neutral regions, it is shown that for an abrupt p-n junction with nondegenerate carriers a relation exists between the open-circuit photovoltage and the $PN$ product at the junction, $(PN)_0$, which is valid for all signal levels. In the small-signal case this leads to the standard result. At intermediate levels a new relation $V = (kT/q)(1 \pm m)\log_e\left(\sqrt{(PN)_0}/n_i\right)$ holds, with the upper sign for p$^+$-n junctions and the lower for n$^+$-p junctions, where $m = (\mu_e - \mu_h)/(\mu_e + \mu_h)$. At very high levels the photovoltage saturates to $V = (kT/q)\left[\log_e(M_p M_n/n_i^2) + m\log_e(\mu_h M_p/\mu_e M_n)\right]$. Since $M_p$ and $M_n$ are the doping levels in the p and n regions, the first term is the diffusion potential; the second term is positive for p$^+$-n junctions and negative for n$^+$-p junctions. These results compare satisfactorily with the available experimental data.
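A quick numerical exercise of the intermediate-level relation; every constant here (thermal voltage, mobilities, $n_i$, $(PN)_0$) is an assumed silicon-like placeholder, not data from the paper.

```python
import math

kT_q = 0.02585                 # thermal voltage kT/q at ~300 K, in volts (assumed)
n_i = 1.0e10                   # intrinsic carrier concentration, cm^-3 (assumed)
PN0 = 1.0e32                   # assumed PN product at the junction, cm^-6
mu_e, mu_h = 1350.0, 480.0     # assumed electron/hole mobilities, cm^2/(V s)

m = (mu_e - mu_h) / (mu_e + mu_h)
# upper sign (1 + m): p+-n junction
V = kT_q * (1 + m) * math.log(math.sqrt(PN0) / n_i)
print(f"intermediate-level photovoltage ≈ {V:.2f} V")   # roughly 0.5 V with these numbers
```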

7.
Tight bounds on the redundancy of Huffman codes
A method for deriving optimal upper bounds on the redundancy of binary Huffman codes in terms of the probability $p_1$ of the most likely source letter is presented. This method is used to compute bounds for all $p_1 \geq 1/127$, which were previously known only for a few special cases. Furthermore, the known optimal lower bound for binary Huffman codes is generalized to arbitrary code alphabets, and some upper bounds for $D$-ary Huffman codes, $2 \leq D < \infty$, are given, which are the tightest possible for all $p_1 \geq 1/2$.

8.
Using earlier methods, a combinatorial upper bound is derived for $|C| \cdot |D|$, where $(C, D)$ is a $\delta$-decodable code pair for the noisy two-access binary adder channel. Asymptotically, this bound reduces to $R_1 = R_2 \leq \frac{3}{2} + e\log_2 e - \left(\frac{1}{2} + e\right)\log_2(1+2e) = \frac{1}{2} - e + H\left(\frac{1}{2} - e\right) - \frac{1}{2}H(2e)$, where $e = \lfloor(\delta-1)/2\rfloor/n$ as $n \rightarrow \infty$, and $R_1$ (resp. $R_2$) is the rate of the code $C$ (resp. $D$).

9.
The optimum single standard run lengths for a binary first-order Markov source are derived and extended to multilevel first-order Markov sources. Maximization of the compression ratio is used as the criterion of optimality. When the output symbols are block coded, the optimal single standard run length $n_i$ for each symbol is shown to satisfy an implicit equation of the form $(n_i - 1)(-\ln q_{ii}) = 1 - q_{ii}^{n_i}$, where $q_{ii}$ is a transition probability. An expression for the overall compression ratio is derived for the binary case, and a comparison is made with enumerative source encoding. Compression-ratio maxima are found by computer search for the binary independent source when the output symbols are subsequently Huffman coded, and a comparison of this scheme with ordinary run-length and source-extension coding is given.
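For $0 < q_{ii} < 1$ the implicit equation has a single root $n_i > 1$ (the left side is zero at $n_i = 1$ and grows without bound, while the right side stays below 1), so a plain bisection finds it; a minimal numerical sketch, with rounding to an integer run length left to the caller:

```python
import math

def optimal_run_length(q: float) -> float:
    """Solve (n - 1)(-ln q) = 1 - q**n for n > 1 by bisection,
    where 0 < q < 1 is the self-transition probability q_ii."""
    f = lambda n: (n - 1) * (-math.log(q)) - (1 - q ** n)
    lo, hi = 1.0, 2.0
    while f(hi) < 0:            # expand until the root is bracketed
        hi *= 2.0
    for _ in range(100):        # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```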

10.
The distribution of the ratio of minimum distance to code length of a random linear code approaches a step distribution as the code length becomes arbitrarily large at fixed code rate. The location of the step is at the smaller value of $p$ satisfying $1 + p\log_2 p + (1-p)\log_2(1-p) = k/n$.
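Since $1 + p\log_2 p + (1-p)\log_2(1-p) = 1 - H(p)$ with $H$ the binary entropy function, the step sits at the smaller solution of $H(p) = 1 - k/n$; a short bisection sketch (my own, for illustration):

```python
import math

def binary_entropy(p: float) -> float:
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def step_location(rate: float) -> float:
    """Smaller p with H(p) = 1 - rate (rate = k/n), found by bisection
    on [0, 1/2], where H is increasing."""
    target = 1.0 - rate
    lo, hi = 0.0, 0.5
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if binary_entropy(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(step_location(0.5), 4))   # ≈ 0.11 at rate 1/2
```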

11.
An infinite sequence of $k$-dimensional binary linear block codes is constructed with parameters $n = 2^k + 2^{k-2} - 15$, $d = 2^{k-1} + 2^{k-3} - 8$, $k \geq 7$. For $k \geq 8$ these codes are unique, while there are five nonisomorphic codes for $k = 7$. By shortening these codes in an appropriate way, one finds codes meeting the Griesmer bound for $2^{k-1} + 2^{k-3} - 15 \leq d \leq 2^{k-1} + 2^{k-3} - 8$, $k \geq 7$.
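For reference, the Griesmer bound these codes meet is a one-line computation; as a sanity check (my own arithmetic, not a statement from the abstract), at $k = 7$ the stated parameters give $n = 145$ and $d = 72$, and the bound below also evaluates to 145.

```python
import math

def griesmer_bound(k: int, d: int) -> int:
    """Griesmer lower bound on the length n of a binary [n, k, d] linear code:
    n >= sum_{i=0}^{k-1} ceil(d / 2**i)."""
    return sum(math.ceil(d / 2 ** i) for i in range(k))

print(griesmer_bound(7, 72))   # 145 = 2**7 + 2**5 - 15
```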

12.
Asymptotic average redundancy of Huffman (and other) block codes
We study asymptotically the redundancy of Huffman (and other) codes. It has been known since the inception of the Huffman (1952) code that in the worst case its redundancy, defined as the excess of the code length over the optimal (ideal) code length, is not more than one. However, to the best of our knowledge no precise asymptotic results have been reported in the literature thus far. We consider here a memoryless binary source generating a sequence of length $n$ distributed as binomial$(n, p)$, with $p$ the probability of emitting a 0. Based on the results of Stubley (1994), we prove that as $n \to \infty$ the average redundancy $\bar{R}_n^H$ of the Huffman code becomes $\bar{R}_n^H = \frac{3}{2} - \frac{1}{\ln 2} + o(1) = 0.057304\ldots$ when $\alpha$ is irrational, and $\bar{R}_n^H = \frac{3}{2} - \frac{1}{M}\left(\langle \beta M n\rangle - \frac{1}{2}\right) - \frac{1}{M(1 - 2^{-1/M})}\,2^{-\langle n\beta M\rangle/M} + O(\rho^n)$ when $\alpha = N/M$ is rational, where $\alpha = \log_2\frac{1-p}{p}$, $\beta = -\log_2(1-p)$, $\rho < 1$, $M$ and $N$ are integers with $\gcd(N, M) = 1$, and $\langle x\rangle = x - \lfloor x\rfloor$ is the fractional part of $x$. The appearance of the fractal-like function $\langle \beta M n\rangle$ explains the erratic behavior of the Huffman redundancy and its "resistance" to a precise analysis. As a side result, we prove that the average redundancy of the Shannon block code is, as $n \to \infty$, $\bar{R}_n^S = \frac{1}{2} + o(1)$ when $\alpha$ is irrational, and $\bar{R}_n^S = \frac{1}{2} - \frac{1}{M}\left(\langle M n\beta\rangle - \frac{1}{2}\right) + O(\rho^n)$ when $\alpha = N/M$ is rational, where $\rho < 1$. Finally, we derive the redundancy of the Golomb (1966) code (for the geometric distribution), which can be viewed as a special case of the Huffman and Shannon codes; Golomb's code redundancy exhibits only oscillating behavior (i.e., there is no convergent mode). These findings are obtained by analytic methods such as the theory of distribution of sequences modulo 1 and Fourier series.
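The oscillation is easy to reproduce empirically for the Shannon block code, whose per-block redundancy is the average of $\lceil -\log_2 P(x)\rceil + \log_2 P(x)$; the sketch below (an illustration, not the paper's analysis of the Huffman case) computes it exactly by grouping sequences by their number of zeros.

```python
import math

def shannon_block_redundancy(n: int, p: float) -> float:
    """Average redundancy of the Shannon code (codeword length ceil(-log2 P(x)))
    for a block of n i.i.d. bits, each equal to 0 with probability p."""
    total = 0.0
    for k in range(n + 1):                       # k = number of zeros in the block
        prob = p ** k * (1 - p) ** (n - k)       # probability of one such sequence
        if prob == 0.0:
            continue
        excess = math.ceil(-math.log2(prob)) + math.log2(prob)
        total += math.comb(n, k) * prob * excess
    return total

# Plotting this against n at a fixed p with rational alpha shows the oscillating term.
```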

13.
Let $C$ be the cyclic product code of $p$ single-parity-check codes of relatively prime lengths $n_1, n_2, \cdots, n_p$ ($n_1 < n_2 < \cdots < n_p$). It is proven that $C$ can correct $2^{p-2} + 2^{p-3} - 1$ bursts of length $n_1$, and $\lfloor(\max\{p+1, \min\{2^{p-s}+s-1, 2^{p-s}+2^{p-s-1}\}\} - 1)/2\rfloor$ bursts of length $n_1 n_2 \cdots n_s$ ($2 \leq s \leq p-2$). For $p = 3$ this means that $C$ is double-burst-$n_1$-correcting. An efficient decoding algorithm is presented for this code.

14.
The compression coding methods now in wide use are all realized by means of a Huffman tree, so many computations revolve around building and manipulating that tree. To simplify the encoding process, a variable-length optimal coding method that requires no Huffman tree is proposed: through a probability-compensation procedure, the optimal code length of every source symbol is obtained directly. Once the code lengths and probabilities are known, the final codewords can likewise be determined without a Huffman tree, and the result can be shown to satisfy the variable-length optimal coding theorem and to be a prefix code. Tests show that the method obtains the optimal variable-length code quickly and efficiently and simplifies the computation and storage involved in variable-length coding.
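The paper's probability-compensation step is not reproduced here; the sketch below only illustrates the second half of the idea, namely assigning prefix-free codewords directly from a set of code lengths with no tree, via the standard canonical-code construction, assuming the lengths satisfy the Kraft inequality.

```python
def canonical_codewords(lengths):
    """Assign prefix-free binary codewords directly from codeword lengths:
    process symbols in order of nondecreasing length, handing out consecutive
    integer code values and left-shifting whenever the length increases."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    codes = [None] * len(lengths)
    code, prev_len = 0, 0
    for i in order:
        code <<= lengths[i] - prev_len      # pad the running value to the new length
        codes[i] = format(code, "0{}b".format(lengths[i]))
        code += 1
        prev_len = lengths[i]
    return codes

print(canonical_codewords([2, 1, 3, 3]))    # ['10', '0', '110', '111']
```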

15.
An upper bound on the redundancy of $D$-ary Huffman codes in terms of the probability $p_1$ of the most likely source letter is provided. For large values of $p_1$, the bound improves on the one given by R.G. Gallager (1978). Additionally, some results known for the binary case ($D = 2$) are extended to arbitrary $D$-ary Huffman codes. As a consequence, a tight lower bound that corrects a bound recently proposed by J.D. Golic and M.M. Obradovic (1987) is derived.

16.
The execution performance of the Sweeney, Robertson, Tocher (SRT) division algorithm depends on two parameters: the radix $r$ and the redundancy factor $\rho$. In this paper, a study of the effect of these parameters on division performance is presented. At each iteration, the SRT algorithm performs a multiplication by the quotient digit $q_{i+1}$. This multiplication reduces to a simple shift if the digit $q_{i+1}$ is a power of two ($q_{i+1} = 2^k$); otherwise, the SRT iteration needs a multiplier. In this work we propose an approach that circumvents this multiplication by decomposing the quotient digit $q_{i+1}$ into two or three terms that are multiples of two, so that the multiplication is carried out by simple shifts and a carry-save addition. The implementation of this approach on Virtex-II field-programmable gate array (FPGA) circuits gives better performance than the approach using the embedded 18 × 18 bit multipliers. The iteration delays are independent of the operand sizes, and the reduction-tree delays are at most equivalent to the delay of two Virtex-II slices. The approach was tested for radices 4, 8, and 16 in the two cases of minimum and maximum redundancy factor. From this study, we conclude that using radix 8 with the maximum redundancy factor gives the best performance with our approach for double-precision SRT division.
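A toy illustration of the shift-and-add idea (hypothetical code, not the paper's radix-8 decomposition or its carry-save hardware): any quotient digit of magnitude at most 7 splits into at most three power-of-two terms, so multiplying by it needs only shifts and additions.

```python
def shift_add_multiply(x: int, digit: int) -> int:
    """Multiply x by a small quotient digit using only shifts and adds,
    decomposing |digit| into power-of-two terms (e.g. 7 = 4 + 2 + 1)."""
    d, acc, shift = abs(digit), 0, 0
    while d:
        if d & 1:
            acc += x << shift      # add a shifted copy of x
        d >>= 1
        shift += 1
    return -acc if digit < 0 else acc

assert shift_add_multiply(1234, -7) == 1234 * -7
```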

17.
A trellis code is homogeneous if the number of branches emanating from each node (or state) in the trellis diagram is constant; for example, convolutional codes are linear homogeneous trellis codes. A trellis code is nonhomogeneous if the number of branches emanating from each node in the trellis diagram is not the same. The two-user binary adder channel is a multiple-access channel with two binary inputs, $x_1$ and $x_2$, and one ternary output, $y = x_1 + x_2$, where the addition is done in the real number field. The adder channel is synchronous if both encoders and the decoder maintain block (frame) synchronism. It is quasi-synchronous if the encoders do not start their blocks at the same time but the decoder knows the position of each block; the difference between the starting times of the blocks is called the slippage. The channel is asynchronous if no block synchronism exists among the encoders and the decoder. Some uniquely decodable code pairs $(C_1, C_2)$ are presented that can be used to transmit information reliably over the quasi-synchronous binary adder channel with two users. One of the codes is a nonhomogeneous trellis code; the other is a common block code. Our code rates are better than those of the Deaett-Wolf codes and are close to or equal to the asymptotic rates of Kasami et al. A method for calculating the rates of nonhomogeneous trellis codes is described. An algorithm for finding more uniquely decodable code pairs for the quasi-synchronous binary adder channel is formulated.
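For block code pairs, unique decodability over the noiseless adder channel is easy to check by brute force: every pair of codewords must produce a distinct ternary sum. The sketch below is a generic checker; the code pair in the example is a small classical pair chosen here for illustration, not one of the pairs constructed in the paper.

```python
from itertools import product

def uniquely_decodable(C1, C2):
    """Return True if every (c1, c2) pair gives a distinct componentwise
    integer sum, so a decoder can recover both codewords from y = x1 + x2."""
    seen = set()
    for c1, c2 in product(C1, C2):
        s = tuple(a + b for a, b in zip(c1, c2))
        if s in seen:
            return False
        seen.add(s)
    return True

# A small uniquely decodable pair with rate sum 1/2 + (log2 3)/2 ≈ 1.29:
print(uniquely_decodable([(0, 0), (1, 1)], [(0, 0), (0, 1), (1, 0)]))   # True
```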

18.
A linear ensemble of codes is defined as one over which the information $K$-tuple $u$ is encoded as $uG \oplus z$, where $G$ is equally likely to be any matrix in a linear space $\mathcal{B}$ of $K \times N$ binary matrices and $z$ is independent of $G$ and equally likely to be any binary $N$-tuple. A technique for upper-bounding the ensemble average $\overline{P(E)}$ of the probability of error, when the codes of $\mathcal{B}$ are used on the binary symmetric channel with maximum-likelihood decoding, is presented; it reduces to overbounding a deterministic integer-valued function defined on the space of binary $N$-tuples. This technique is applied to the ensemble of $K \times N$ binary matrices whose $i$th row is the $(i-1)$th right cyclic shift of the first row, $i = 1, 2, \ldots, K$, where the first row is equally likely to be any binary $N$-tuple. For this ensemble it is shown that $\overline{P(E)} \leq \mu(N)\exp_2\{-N E_r(K/N)\}$, where $E_r(\cdot)$ is the random coding exponent for the binary symmetric channel and $\mu(N)$ is the number of divisors of $X^N + 1$. If $\mathcal{B}$ is pairwise independent, it is shown that the above technique yields the random coding bound for block codes and that, moreover, there exists at least one code in the ensemble $\mathcal{B}$ whose minimum Hamming distance meets a Gilbert-type lower bound.
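A small sketch of the generator matrices in the cyclic-shift ensemble (illustrative only): each matrix is determined by its first row, and the $i$-th row is the $(i-1)$-th right cyclic shift of that row.

```python
def cyclic_generator_matrix(first_row, K):
    """K x N binary matrix whose i-th row (i = 1..K) is the (i-1)-th right
    cyclic shift of first_row."""
    N = len(first_row)
    return [[first_row[(j - i) % N] for j in range(N)] for i in range(K)]

for row in cyclic_generator_matrix([1, 0, 1, 1, 0], 3):
    print(row)
# [1, 0, 1, 1, 0]
# [0, 1, 0, 1, 1]
# [1, 0, 1, 0, 1]
```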

19.
20.
For a given finite set of messages and their assigned probabilities, Huffman's procedure gives a method of computing a length set (a set of codeword lengths) that is optimal in the sense that the average word length is minimized. Corresponding to a particular length set, however, there may be more than one code. Let $L(n)$ consist of all length sets with largest term $n$, and, for any $\ell \in L(n)$, let $\mathcal{S}(\ell)$ be the smallest number of states in any finite-state acceptor which accepts a set of codewords with length set $\ell$. It is shown that, for all $n > 1$, $n^2/(16\log_2 n) \leq \max_{\ell \in L(n)} \mathcal{S}(\ell) \leq O(n^2)$.
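As context for "Huffman's procedure gives a method of computing a length set", here is a compact sketch of that procedure (binary case), returning only the codeword lengths rather than the codewords.

```python
import heapq
from itertools import count

def huffman_length_set(probs):
    """Codeword lengths produced by binary Huffman coding, sorted ascending."""
    tie = count()                                  # tie-breaker so heapq never compares lists
    heap = [(p, next(tie), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)
        p2, _, b = heapq.heappop(heap)
        for i in a + b:                            # each merge deepens the merged leaves by one
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(tie), a + b))
    return sorted(lengths)

print(huffman_length_set([0.4, 0.3, 0.2, 0.1]))    # [1, 2, 3, 3]
```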
