Similar Documents
20 similar documents found (search time: 31 ms)
1.
Tight bounds on the redundancy of Huffman codes
A method for deriving optimal upper bounds on the redundancy of binary Huffman codes in terms of the probability p1 of the most likely source letter is presented. This method is used to compute bounds for all p1⩾1/127, which were previously known only for a few special cases. Furthermore, the known optimal lower bound for binary Huffman codes is generalized to arbitrary code alphabets, and some upper bounds for D-ary Huffman codes, 2⩽D<∞, are given, which are the tightest possible for all p1⩾1/2.
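The redundancy being bounded is the gap between a Huffman code's expected length and the source entropy. A minimal Python sketch of that quantity (the bounding method itself is the paper's contribution and is not reproduced here):

```python
import heapq
import math

def huffman_lengths(probs):
    """Binary Huffman codeword lengths via repeated merging of the two least likely groups."""
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)  # tie-breaker so tuples never compare the symbol lists
    while len(heap) > 1:
        pa, _, sa = heapq.heappop(heap)
        pb, _, sb = heapq.heappop(heap)
        for sym in sa + sb:
            lengths[sym] += 1  # every symbol in a merged group moves one bit deeper
        heapq.heappush(heap, (pa + pb, counter, sa + sb))
        counter += 1
    return lengths

def redundancy(probs):
    """Expected codeword length minus source entropy, in bits/symbol."""
    lengths = huffman_lengths(probs)
    avg_len = sum(p * l for p, l in zip(probs, lengths))
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return avg_len - entropy
```

For a dyadic source (all probabilities powers of 1/2) the redundancy is exactly zero, which is the floor the lower bounds of the paper are measured against.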

2.
Upper bounds on the entropy of a countable integer-valued random variable are furnished in terms of the expectation of the logarithm function. In particular, an upper bound is derived that is sharper than that of P. Elias (ibid., vol.IT-21, no.2, p.194-203, 1975) for all values of Ep(log). Bounds that improve on the previously known upper bounds only for large values of Ep(log) are also provided.

3.
Expressions are obtained for specifying the optimal error probability (minimum Pe) thresholds λ01 and λ02 for the traditional and modified sign detectors, respectively. These thresholds are shown to depend on the parameters p, P1, and M, where: M is the number of observations zi used in the test statistic; P1=P(H1) is the prior probability for hypothesis H1 that signal s1 is present, and 1-P1=P(H0) corresponds to the hypothesis H0 that signal s0 is present; and p=Pr{zi⩾0|H1} with s0=0 for the traditional sign detector, and p=Pr{zi⩾λ|H1}=Pr{zi<λ|H0} with λ=(s0+s1)/2 for the modified sign detector. The expressions for λ01 and λ02 are given explicitly and shown to be independent of P1 for sufficiently large M. Optimal Pe versus M performance curves, corresponding to both versions of the sign detector, are obtained for a representative range of values of p and P1.

4.
Determination of the merit factor of Legendre sequences
M.J.E. Golay (ibid., vol.IT-23, no.1, p.43-51, 1977) has used the ergodicity postulate to calculate that the merit factor F of a Legendre sequence offset by a fraction f of its length has an asymptotic value given by 1/F = (2/3) - 4|f| + 8f^2, |f|⩽1/2, which gives F=6 for |f|=1/4. Here this is proved without using the ergodicity postulate.
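The merit factor is straightforward to evaluate numerically. A sketch (the prime p = 1019 and the value assigned at index 0 are illustrative conventions, not from the abstract); for a quarter-length offset the measured F lands near the asymptotic value 6:

```python
def legendre_sequence(p):
    """+/-1 sequence of length p: quadratic residues mod p map to +1 (index 0 -> -1 by convention here)."""
    residues = {(x * x) % p for x in range(1, p)}
    return [1 if i in residues else -1 for i in range(p)]

def merit_factor(seq):
    """F = n^2 / (2 * sum of squared aperiodic autocorrelations c_k, k = 1..n-1)."""
    n = len(seq)
    sidelobe_energy = 0
    for k in range(1, n):
        c = sum(seq[i] * seq[i + k] for i in range(n - k))
        sidelobe_energy += c * c
    return n * n / (2.0 * sidelobe_energy)

p = 1019                               # a prime; other moderately large primes behave similarly
seq = legendre_sequence(p)
offset = seq[p // 4:] + seq[:p // 4]   # cyclic shift by a quarter of the length (f = 1/4)
F = merit_factor(offset)
```

The unshifted sequence (f = 0) gives 1/F = 2/3 asymptotically, i.e. F near 1.5, so the quarter shift is what buys the factor-of-four improvement.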

5.
G.D. Chen et al. (ibid., vol.IT-32, p.680-94, 1986) presented two new lower bounds for K(n,R), where K(n,R) denotes the minimum cardinality of a binary code of length n and covering radius R. The author shows that a slight modification gives further improvements, and some examples are given to confirm the argument. Codes that have a certain partitioning property are considered.

6.
The author points out that M.G.M. Hussain (ibid., vol.31, no.3, p.202, May 1989), in his comments on the author's paper (ibid., vol.30, no.4, p.590, Nov. 1988), says that equation (4) is not valid at y=0. He notes that he stated that y>0. Even if the source were a current sheet in the plane y=0, it would still be possible to allow y to approach zero from positive values.

7.
An upper bound on the redundancy of D-ary Huffman codes in terms of the probability p1 of the most likely source letter is provided. For large values of p1, the bound improves the one given by R.G. Gallager (1978). Additionally, some results known for the binary case (D=2) are extended to arbitrary D-ary Huffman codes. As a consequence, a tight lower bound that corrects a bound recently proposed by J.D. Golic and M.M. Obradovic (1987) is derived.

8.
A very simple proof of M.H. Costa's result (see ibid., vol.IT-31, p.751-60, 1985) that the entropy power of Xt = X + N(0, tI) is concave in t is derived as an immediate consequence of an inequality concerning Fisher information. This relationship between Fisher information and entropy is found to be useful for proving the central limit theorem. Thus, one who seeks new entropy inequalities should first try to find new equalities about Fisher information, or at least exploit the existing ones in new ways.

9.
The author evaluates the limiting efficiencies e(-s) of burst-correcting array codes A(n1, n2, -s) for all negative readouts -s as n2 tends to infinity and n1 is properly chosen to maximize the efficiency. Specializing the result to the products of the first i primes, denoted by si (1⩽i<∞), which are optimal choices for readouts, gives the expression e(-si) = (2p_{i+1} - 2)/(2p_{i+1} - 1), where p_{i+1} is the (i+1)-st prime. Previously, it was known only that e(-2)⩾4/5 and e(-1)⩾2/3. This result reveals the existence of burst-correcting array codes with efficiencies arbitrarily close to 1 and with rates also arbitrarily close to 1.
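The closed form is easy to tabulate; with primes[0] = 2 the first two values reproduce the previously known e(-1) ⩾ 2/3 and e(-2) ⩾ 4/5. A sketch (the prime-generation helper is illustrative):

```python
def first_primes(count):
    """The first `count` primes by trial division."""
    primes = []
    candidate = 2
    while len(primes) < count:
        if all(candidate % q for q in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def limiting_efficiency(i, primes):
    """e(-s_i) = (2*p_{i+1} - 2) / (2*p_{i+1} - 1); primes[i] is the (i+1)-st prime."""
    p_next = primes[i]
    return (2 * p_next - 2) / (2 * p_next - 1)

primes = first_primes(8)
effs = [limiting_efficiency(i, primes) for i in range(8)]
```

The values are strictly increasing and approach 1, which is the existence claim of the abstract.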

10.
For Pt. I, see ibid., vol.37, no.5, p.1327-141 (1991). For a linear, time-invariant, discrete-time channel with a given transfer function H(f) and information rate R bits/T, where T is the symbol interval, an optimal signal set of length K is defined to be a set of 2^{RK} inputs of length K that maximizes the minimum l2 distance between pairs of outputs. The author studies the minimum distance between outputs, or equivalently the coding gain, of optimal signal sets as K→∞. He shows how to estimate the coding gain, relative to single-step detection, of an optimal signal set of length K when K is large.

11.
R.C. Bollinger and A.A. Salvia (see ibid., vol. R-34, p. 43-5, Apr. 1985) proposed an approach to study the failure-time distribution of a consecutive-k-out-of-n:F system by studying the number of minimal p-cutsequences. They gave recursive equations to compute this number for independent identically distributed components. It is shown that the recursive equations have an explicit solution.
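For context, the failure structure behind such recursions can be sketched with the standard i.i.d. reliability recursion for a consecutive-k-out-of-n:F system (a related, well-known recursion, not necessarily the Bollinger-Salvia equations or their explicit solution), cross-checked by brute-force enumeration:

```python
from itertools import product

def reliability(n, k, q):
    """P(no run of k consecutive failures) among n i.i.d. components, each failing w.p. q."""
    p = 1.0 - q
    if n < k:
        return 1.0
    R = [1.0] * (n + 1)           # R[j] = reliability with j components; 1.0 for j < k
    R[k] = 1.0 - q ** k
    for j in range(k + 1, n + 1):
        # system of j fails iff system of j-1 fails, or component j-k works
        # and components j-k+1..j all fail with no earlier failing run
        R[j] = R[j - 1] - p * q ** k * R[j - k - 1]
    return R[n]

def reliability_bruteforce(n, k, q):
    """Sum the probability of every component state vector with no k consecutive failures."""
    p = 1.0 - q
    total = 0.0
    for states in product((0, 1), repeat=n):      # 1 = working, 0 = failed
        if any(sum(states[i:i + k]) == 0 for i in range(n - k + 1)):
            continue                               # contains a failing run of length k
        weight = 1.0
        for s in states:
            weight *= p if s else q
        total += weight
    return total
```

The recursion is O(n) per evaluation, whereas the enumeration is O(2^n); agreement on small cases is the sanity check.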

12.
The frequency of an InGaAsP distributed-feedback (DFB) laser was locked to the 2p10-3d5 transition of argon atoms at 1.2960 μm using the optogalvanic signal obtained from a commercial miniature glow lamp. At a discharge current of 500 μA, the signal-to-noise ratio of the optogalvanic signal corresponding to the Ar transition was about 18 dB. The peak-to-peak width of the first derivative signal was 650 MHz. The slope of the signal was 0.32 μV/MHz near the center of the transition. By using the linear portion of the first-derivative signal, the laser frequency was locked to the Ar 2p10-3d5 transition. The peak-to-peak frequency fluctuations in the free-running condition were estimated to be 650 MHz, which is mainly due to laser temperature fluctuations. When the servo-loop was closed, the frequency stability was improved to better than 13 MHz.

13.
An error-correction scheme for an M-ary symmetric channel (MSC) characterized by a large error probability pe is considered. The value of pe can be near, but smaller than, 1-1/M, for which the channel capacity is zero, such as may occur in a jamming environment. The coding scheme consists of an outer convolutional code and an inner repetition code of length m that is used for each convolutional code symbol. At the receiving end, the m inner code symbols are used to form a soft-decision metric, which is passed to a soft-decision decoder for the convolutional code. The effect of finite quantization and methods to generate binary metrics for M>2 are investigated. Monte Carlo simulation results are presented. For the binary symmetric channel (BSC), it is shown that the overall code rate is larger than 0.6R0, where R0 is the cutoff rate of the channel. New union bounds on the bit error probability for systems with a binary convolutional code on 4-ary and 8-ary orthogonal channels are presented. For a BSC and a large m, a method is presented for BER approximation based on the central limit theorem.
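For the binary case, the inner repetition code's combining step can be sketched as follows (the function names and the hard-vote comparison are illustrative; the paper's metric is the soft quantity passed on to the convolutional decoder, not a vote):

```python
import math
from math import comb

def majority_error(pe, m):
    """Exact error probability of a hard majority vote over m independent BSC(pe) copies (m odd)."""
    return sum(comb(m, j) * pe ** j * (1 - pe) ** (m - j)
               for j in range((m + 1) // 2, m + 1))

def soft_metric(copies, pe):
    """Soft-decision statistic: sum of per-copy log-likelihood ratios, copies in {+1, -1}."""
    llr_per_copy = math.log((1 - pe) / pe)  # each received copy contributes +/- this weight
    return llr_per_copy * sum(copies)
```

For pe close to 1/2 (the near-zero-capacity regime of the abstract), each extra repetition helps only slowly, which is why the soft metric and the outer convolutional code carry most of the load.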

14.
A modified sequential procedure for testing binary hypotheses with different means, proposed by C.C. Lee and J.B. Thomas (ibid., vol.IT-30, no.1, p.16-23, Jan. 1984), is generalized for application to the case of multiple hypotheses with different means/variances of the Gaussian distribution. The method constitutes a two-threshold test for fixed-size packages of samples, with a sequential procedure of discarding the package for which no decision is reached and subsequently testing a new package. The objective is to find an optimum package size N0 which leads to the minimum overall average sample number (ASN) for a given overall error probability. An optimization algorithm is developed to extend the application of the Lee-Thomas procedure to the M-ary case. Performance characteristics of the generalized two-threshold (GTT) test procedure are compared with those of conventional sequential as well as fixed-sample-size (FSS) methods. It is shown for the M-ary different means/variances cases that for low error rates the number of samples required by the GTT test is, on the average, approximately half that needed by an FSS test. However, it is somewhat more than the ASN obtained with a conventional sequential test. With decreasing error probabilities, the GTT test performance approaches that of conventional sequential methods.

15.
Optical orthogonal codes - new bounds and an optimal construction
A technique for constructing optimal OOCs (optical orthogonal codes) is presented. It provides the only known family of optimal (with respect to family size) OOCs having λ=2. The parameters (n, ω, λ) are respectively (p^{2m} - 1, p^m + 1, 2), where p is any prime, and the family size is p^m - 2. Three distinct upper bounds on the size of an OOC are presented that, for many values of the parameter set (n, ω, λ), improve upon the tightest previously known bound.

16.
The problem of designing nonuniformly spaced arrays is formulated as one of constrained optimization in which the cost function is chosen to select the array with the minimum number of elements. The response of the array is controlled by a set of inequality point-response constraints. It is shown that a suitable cost function for this problem is the l_p quasi-norm for 0<p<1 and that there exists a number p_1 such that for all 0<p<p_1 the resulting array is maximally sparse. Furthermore, it is shown that a solution to the problem lies at an extreme point of the simplex formed by the point constraints. A simplex search algorithm is described which converges to a local minimum of the cost function on this simplex. The algorithm is applied to the design of sparse linear and planar arrays.

17.
A low-pass and a bandpass additive white Gaussian noise channel with a peak-power constraint imposed on otherwise arbitrary input signals are considered. Upper bounds on the capacity of such channels are derived. They are strictly less than the capacity of the channel when the peak-power constraint is removed and replaced by the average-power constraint, for which Gaussian inputs are optimum. This answers an often-posed question: peak-power limiting in the case of bandlimited channels does reduce capacity, whereas in infinite-bandwidth channels it does not, as is well known. For an ideal low-pass filter of bandwidth B, the upper bound is B log(0.934 P/(N0 B)) for P/(N0 B) ≫ 1, where P is the peak power of the input signal and N0/2 is the double-sided power spectral density of the additive white Gaussian noise.
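Assuming base-2 logarithms (the abstract does not state the base), the bound can be compared with the average-power Shannon capacity at the same SNR; it is strictly smaller because 0.934x < 1 + x for all x > 0:

```python
import math

def peak_power_upper_bound(P, N0, B):
    """Upper bound B*log2(0.934*P/(N0*B)) on peak-constrained capacity, for P/(N0*B) >> 1."""
    return B * math.log2(0.934 * P / (N0 * B))

def average_power_capacity(P, N0, B):
    """Shannon capacity B*log2(1 + P/(N0*B)) when the same P is only an average-power constraint."""
    return B * math.log2(1.0 + P / (N0 * B))

snr = 100.0                      # illustrative high-SNR operating point, P/(N0*B) = 100
bound = peak_power_upper_bound(snr, 1.0, 1.0)
cap = average_power_capacity(snr, 1.0, 1.0)
```

At high SNR the gap settles to roughly log2((1+x)/(0.934x)), about a tenth of a bit per dimension, which is the quantitative price of the peak constraint under this bound.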

18.
Limited search trellis decoding of convolutional codes
The least storage and node computation required by a breadth-first tree or trellis decoder that corrects t errors over the binary symmetric channel are calculated. Breadth-first decoders work with code paths of the same length, without backtracking. The Viterbi algorithm is an exhaustive trellis decoder of this type; other schemes look at a subset of the tree or trellis paths. For random tree codes, theorems about the asymptotic number of paths required and their depth are proved. For concrete convolutional codes, the worst-case storage for t-error sequences is measured. In both cases the optimal decoder storage has the same simple dependence on t. The M algorithm and the algorithms proposed by G.J. Foschini (ibid., vol.IT-23, p.605-9, Sept. 1977) and by S.J. Simmons (Ph.D. diss., Queen's Univ., Kingston, Ont., Canada) are optimal, or nearly so; they are all far more efficient than the Viterbi algorithm.
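The breadth-first idea can be illustrated with a toy M-algorithm decoder for a rate-1/2 code (the (7,5) octal generators, M = 4, and the error-free channel are assumptions for this sketch, not parameters from the abstract): all surviving paths have the same length, only the M best are extended, and there is no backtracking.

```python
def conv_encode(bits, taps=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 feedforward convolutional encoder with generators (7, 5) octal."""
    state = (0, 0)
    out = []
    for b in bits:
        window = (b,) + state
        for g in taps:
            out.append(sum(x * t for x, t in zip(window, g)) % 2)
        state = (b,) + state[:-1]
    return out

def m_algorithm_decode(received, n_bits, M=4, taps=((1, 1, 1), (1, 0, 1))):
    """Breadth-first decoding: extend every kept path one step, keep the M best, never backtrack."""
    paths = [(0, [], (0, 0))]     # (Hamming metric, decoded bits, encoder state)
    for step in range(n_bits):
        r = received[2 * step: 2 * step + 2]
        candidates = []
        for metric, bits, state in paths:
            for b in (0, 1):
                window = (b,) + state
                sym = [sum(x * t for x, t in zip(window, g)) % 2 for g in taps]
                dist = sum(a != c for a, c in zip(sym, r))
                candidates.append((metric + dist, bits + [b], (b,) + state[:-1]))
        candidates.sort(key=lambda c: c[0])
        paths = candidates[:M]    # the limited search: only M survivors per depth
    return paths[0][1]

message = [1, 0, 1, 1, 0, 0, 1, 0]
codeword = conv_encode(message)
decoded = m_algorithm_decode(codeword, len(message))
```

With no channel errors the correct path has metric 0 at every depth and always survives the prune; on a noisy channel it can temporarily fall outside the best M and be lost, which is exactly the storage-versus-performance tradeoff the abstract quantifies in terms of t.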

19.
Consider a channel with inputs and outputs in the field Fq (q>2). The channel is said to be skewed on a set B ⊆ Fq* if the additive noise generated by the channel is likely to lie in B, i.e., B is a set of common errors. The concern is the construction of focused codes that are appropriate for such channels. A code is said to be (t1,t2)-focused on B if it can correct up to t1+t2 errors provided that at most t1 of those errors lie outside of B; the strategy is to offer different levels of protection against common and uncommon errors and so provide novel tradeoffs between performance and rate. Techniques for constructing focused codes and bounds on their rates are described.

20.
The entropy theorem (also known as the Shannon-McMillan-Breiman theorem or the asymptotic equipartition property) asserts that, for a stationary ergodic finite-alphabet process, the sequence -(1/n) log p(x_1^n) converges almost surely to the entropy rate H of the process. The entropy theorem has been used to establish asymptotic bounds on the performance of noiseless codes. Here, the coding theorems are established without using the entropy theorem, and the coding theorems are then used to prove the entropy theorem. The principal feature is the direct use of coding ideas to obtain the entropy theorem.
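The convergence is easy to observe empirically for an i.i.d. binary source, the simplest stationary ergodic process (the Bernoulli parameter, sample size, and seed below are arbitrary choices for the sketch):

```python
import math
import random

def empirical_entropy_rate(n, p, seed=0):
    """-(1/n) * log2 p(x_1^n) for an i.i.d. Bernoulli(p) sample path."""
    rng = random.Random(seed)
    x = [1 if rng.random() < p else 0 for _ in range(n)]
    # log-probability of the whole sequence is the sum of per-symbol log-probabilities
    log_prob = sum(math.log2(p) if b else math.log2(1.0 - p) for b in x)
    return -log_prob / n

p = 0.3
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))   # entropy rate of the source
est = empirical_entropy_rate(200_000, p)
```

For this i.i.d. case the theorem reduces to the strong law of large numbers; the point of the entropy theorem is that the same almost-sure limit holds for general stationary ergodic processes.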


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号