Similar Documents
20 similar documents found (search time: 31 ms)
1.
If p_i (i = 1, ..., N) is the probability of the i-th letter of a memoryless source, the length l_i of the corresponding binary Huffman codeword can be very different from the value −log p_i. For a typical letter, however, l_i ≈ −log p_i. More precisely, P_m^− = Σ_{j : l_j < −log p_j − m} p_j < 2^{−m} and P_m^+ = Σ_{j : l_j > −log p_j + m} p_j < 2^{−c(m−2)+2}, where c ≈ 2.27.
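A minimal sketch (not from the paper) that builds binary Huffman codeword lengths with a standard heap-based merge and compares each l_i with −log₂ p_i for a small hypothetical source:

```python
import heapq
import math

def huffman_lengths(probs):
    """Binary Huffman codeword lengths, built by repeatedly merging the two
    smallest-probability groups; every merge adds one bit to the letters involved."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, ids1 = heapq.heappop(heap)
        p2, ids2 = heapq.heappop(heap)
        for i in ids1 + ids2:
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, ids1 + ids2))
    return lengths

# Compare l_i with -log2(p_i) for a hypothetical source.
probs = [0.4, 0.2, 0.15, 0.1, 0.08, 0.07]
for p, l in zip(probs, huffman_lengths(probs)):
    print(f"p={p:<5} l={l}  -log2(p)={-math.log2(p):.2f}")
```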

2.
Unlike block codes, n-dimensional lattices can have minimal trellis diagrams with an arbitrarily large number of states, branches, and paths. In particular, we show by a counterexample that there is no f(n), a function of n, such that all rational lattices of dimension n have a trellis with fewer than f(n) states. Nevertheless, using a theorem due to Hermite, we prove that every integral lattice Λ of dimension n has a trellis T such that the total number of paths in T is upper-bounded by P(T) ≤ n!·(2/√3)^{n²(n−1)/2}·V(Λ)^{n−1}, where V(Λ) is the volume of Λ. Furthermore, the number of states at time i in T is upper-bounded by |S_i| ≤ (2/√3)^{i²(n−1)}·V(Λ)^{2i²/n}. Although these bounds are seldom tight, they are the first known general upper bounds on the trellis complexity of lattices.

3.
A group code C over a group G is a set of sequences of group elements that itself forms a group under a component-wise group operation. A group code has a well-defined state space Σ_k at each time k. Each code sequence passes through a well-defined state sequence. The set of all state sequences is also a group code, the state code of C. The state code defines an essentially unique minimal realization of C. The trellis diagram of C is defined by the state code of C and by labels associated with each state transition. The set of all label sequences forms a group code, the label code of C, which is isomorphic to the state code of C. If C is complete and strongly controllable, then a minimal encoder in controller canonical (feedback-free) form may be constructed from certain sets of shortest possible code sequences, called granules. The size of the state space Σ_k is equal to the size of the state space of this canonical encoder, which is given by a decomposition of the input groups of C at each time k. If C is time-invariant and ν-controllable, then |Σ_k| = Π_{1≤j≤ν} |F_j/F_{j−1}|^j, where F_0 ⊆ ··· ⊆ F_ν is a normal series, the input chain of C. A group code C has a well-defined trellis section corresponding to any finite interval, regardless of whether it is complete. For a linear time-invariant convolutional code over a field G, these results reduce to known results; however, they depend only on elementary group properties, not on the multiplicative structure of G. Moreover, time-invariance is not required. These results hold for arbitrary groups, and apply to block codes, lattices, time-varying convolutional codes, trellis codes, geometrically uniform codes, and discrete-time linear systems.

4.
5.
On ternary complementary sequences
A pair of real-valued sequences A = (a_1, a_2, ..., a_N) and B = (b_1, b_2, ..., b_N) is called complementary if the sum R(·) of their autocorrelation functions R_A(·) and R_B(·) satisfies R(τ) = R_A(τ) + R_B(τ) = Σ_{i=1}^{N−τ} a_i a_{i+τ} + Σ_{j=1}^{N−τ} b_j b_{j+τ} = 0 for all τ ≠ 0. In this paper we introduce a new family of complementary pairs of sequences over the alphabet α_3 = {+1, −1, 0}. The inclusion of zero in the alphabet, which may correspond to a pause in transmission, leads both to a better understanding of the conventional binary case, where the alphabet is α_2 = {+1, −1}, and to new nontrivial constructions over the ternary alphabet α_3. For every length N, we derive restrictions on the location of the zero elements and on the form of the member sequences of the pair. We also derive a bound on the minimum number of zeros necessary for the existence of a complementary pair of length N over α_3. The bound is tight, as it is met by some of the proposed constructions for infinitely many lengths.
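As a quick illustration (not taken from the paper), the following sketch checks the complementarity condition above for the classical binary Golay pair of length 2 and for a hypothetical ternary pair obtained by inserting zeros:

```python
def aperiodic_autocorrelation(seq, tau):
    """R_seq(tau) = sum_i seq[i] * seq[i + tau] over the overlap of length N - tau."""
    return sum(seq[i] * seq[i + tau] for i in range(len(seq) - tau))

def is_complementary(A, B):
    """True if R_A(tau) + R_B(tau) = 0 for every shift tau != 0."""
    N = len(A)
    return all(aperiodic_autocorrelation(A, t) + aperiodic_autocorrelation(B, t) == 0
               for t in range(1, N))

# A classical binary (Golay) complementary pair of length 2 ...
print(is_complementary([1, 1], [1, -1]))            # True
# ... and a hypothetical ternary pair over {+1, -1, 0} with zeros inserted.
print(is_complementary([1, 0, 1], [1, 0, -1]))      # True
```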

6.
For the discrete memoryless channel (X, Y, W) we give characterizations of the zero-error erasure capacity C_er and the zero-error average-list-size capacity C_al in terms of limits of suitable information (respectively, divergence) quantities (Theorem 1). However, they do not "single-letterize." Next we assume that X ⊂ Y and W(x|x) > 0 for all x ∈ X, and we associate with W the low-noise channel W_ε, where for Y^+(x) = {y : W(y|x) > 0} we set W_ε(y|x) = 1 if y = x and |Y^+(x)| = 1; W_ε(y|x) = 1 − ε if y = x and |Y^+(x)| > 1; and W_ε(y|x) = ε/(|Y^+(x)| − 1) if y ≠ x. Our Theorem 2 says that as ε tends to zero the capacities C_er(W_ε) and C_al(W_ε) relate to the zero-error detection capacity C_de(W). Our third result is a seemingly basic contribution to the theory of identification via channels. We introduce the (second-order) identification capacity C_oid for identification codes with zero misrejection probability and misacceptance probability tending to zero. Our Theorem 3 says that C_oid equals the zero-error erasure capacity for transmission C_er.
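A minimal sketch of the low-noise construction above, assuming W is given as a square row-stochastic matrix over a common input/output index set and that the mass ε is spread only over the outputs in Y^+(x) other than x (so that rows stay stochastic):

```python
import numpy as np

def low_noise_channel(W, eps):
    """Build the low-noise channel W_eps from a square channel matrix W,
    where W[x, y] = W(y|x) and W(x|x) > 0 for every x."""
    W = np.asarray(W, dtype=float)
    We = np.zeros_like(W)
    for x in range(W.shape[0]):
        fanout = np.flatnonzero(W[x] > 0)        # Y^+(x): outputs reachable from x
        if len(fanout) == 1:                     # only y = x is reachable
            We[x, x] = 1.0
        else:
            We[x, x] = 1.0 - eps                 # keep y = x with probability 1 - eps
            others = fanout[fanout != x]
            We[x, others] = eps / (len(fanout) - 1)
    return We

# Example: a 3-input channel with positive diagonal entries.
W = np.array([[0.7, 0.3, 0.0],
              [0.0, 1.0, 0.0],
              [0.2, 0.3, 0.5]])
print(low_noise_channel(W, eps=0.01))
```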

7.
Let (X, Y) be a pair of discrete random variables with X taking one of M possible values. Suppose the value of X is to be determined, given the value of Y, by asking questions of the form "Is X equal to x?" until the answer is "Yes". Let G(x|y) denote the number of guesses in any such guessing scheme when X = x, Y = y. We prove that E[G(X|Y)^ρ] ≥ (1 + ln M)^{−ρ} Σ_y [Σ_x P_{X,Y}(x,y)^{1/(1+ρ)}]^{1+ρ} for any ρ ≥ 0. This provides an operational characterization of Rényi's entropy. Next we apply this inequality to the estimation of the computational complexity of sequential decoding. For this, we regard X as the input and Y as the output of a communication channel. Given Y, the sequential decoding algorithm works essentially by guessing X, one value at a time, until the guess is correct. Thus the computational complexity of sequential decoding, which is a random variable, is given by a guessing function G(X|Y) that is defined by the order in which nodes in the tree code are hypothesized by the decoder. This observation, combined with the above lower bound on moments of G(X|Y), yields lower bounds on moments of computation in sequential decoding. The present approach enables the determination of the (previously known) cutoff rate of sequential decoding in a simple manner; it also yields the (previously unknown) cutoff rate region of sequential decoding for multiaccess channels. These results hold for memoryless channels with finite input alphabets.
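A small numerical sketch (hypothetical joint pmf, not from the paper) comparing the ρ-th guessing moment of the optimal guesser, which tries values of X in decreasing order of P(x, y) for each observed y, against the lower bound quoted above:

```python
import numpy as np

def guessing_moment(PXY, rho):
    """E[G(X|Y)^rho] for the optimal guesser (most likely x first, given y)."""
    total = 0.0
    for y in range(PXY.shape[1]):
        col = np.sort(PXY[:, y])[::-1]
        ranks = np.arange(1, len(col) + 1)
        total += np.sum(ranks**rho * col)
    return total

def guessing_lower_bound(PXY, rho):
    """(1 + ln M)^(-rho) * sum_y ( sum_x P(x,y)^(1/(1+rho)) )^(1+rho)."""
    M = PXY.shape[0]
    inner = np.sum(PXY**(1.0 / (1.0 + rho)), axis=0)**(1.0 + rho)
    return (1.0 + np.log(M))**(-rho) * np.sum(inner)

# Hypothetical joint pmf of (X, Y) with M = 4 values of X and 2 values of Y.
PXY = np.array([[0.30, 0.05],
                [0.20, 0.05],
                [0.10, 0.10],
                [0.05, 0.15]])
for rho in (0.5, 1.0, 2.0):
    print(rho, guessing_moment(PXY, rho), guessing_lower_bound(PXY, rho))
```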

8.
Multicast connections are used in broad-band switching networks as well as in parallel processing. We consider wide-sense and strict-sense nonblocking conditions for multi-log₂ N switching networks with multicast connections. We prove that such networks are wide-sense nonblocking if they are designed by vertically stacking at least t·2^{n−t−1} + 2^{n−2t−1} planes of a log₂ N network, where 1 ≤ t ≤ ⌊n/2⌋ and t defines the size of a blocking window K = 2^t. For t = ⌊n/2⌋ with n even, and for ⌊n/2⌋ ≤ t ≤ n, the number of planes must be at least t·2^{n−t−1} + 1 and 2^t + (n−t−1)·2^{n−t−1} − 2^{2t−n−1} + 1, respectively. In the case of strict-sense nonblocking switching networks, the number of planes is at least N/2. The results obtained in this paper show that in many cases the number of planes in wide-sense nonblocking switching networks is smaller than that for t = ⌊n/2⌋ considered by Tscha and Lee (see ibid., vol. 47, p. 1425-31, Sept. 1999). The number of planes given in the paper is the minimum number needed for wide-sense nonblocking operation provided that Algorithm 1 is used for setting up connections; the minimum number of planes for such operation in general is still an open issue.
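A sketch of the plane counts quoted above, assuming N = 2^n and using the first formula for the boundary case t = ⌊n/2⌋ with n odd (that case is ambiguous in the abstract):

```python
def wsnb_planes(n, t):
    """Number of vertically stacked log2(N) planes (N = 2**n) sufficient for
    wide-sense nonblocking multicast operation, per the formulas quoted above;
    t sets the blocking-window size K = 2**t."""
    if not 1 <= t <= n:
        raise ValueError("t must satisfy 1 <= t <= n")
    if t < n / 2:                              # 1 <= t < n/2
        return t * 2**(n - t - 1) + 2**(n - 2*t - 1)
    if 2 * t == n:                             # t = n/2 with n even
        return t * 2**(n - t - 1) + 1
    return 2**t + (n - t - 1) * 2**(n - t - 1) - 2**(2*t - n - 1) + 1

# Planes required for N = 64 (n = 6) and several window sizes K = 2**t.
for t in range(1, 6):
    print(t, wsnb_planes(6, t))
```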

9.
Informally, an error-correcting code has "nice" list-decodability properties if every Hamming ball of "large" radius has a "small" number of codewords in it. We report linear codes with nontrivial list-decodability: i.e., codes of large rate that are nicely list-decodable, and codes of large distance that are not nicely list-decodable. Specifically, on the positive side, we show that there exist codes of rate R and block length n that have at most c codewords in every Hamming ball of radius H^{−1}(1 − R − 1/c)·n. This answers the main open question from the work of Elias (1957). This result also has consequences for the construction of concatenated codes of good rate that are list decodable from a large fraction of errors, improving previous results of Guruswami and Sudan (see IEEE Trans. Inform. Theory, vol. 45, p. 1757-67, Sept. 1999, and Proc. 32nd ACM Symp. Theory of Computing (STOC), Portland, OR, p. 181-190, May 2000) in this vein. Specifically, for every ε > 0, we present a polynomial-time constructible asymptotically good family of binary codes of rate Ω(ε⁴) that can be list-decoded in polynomial time from up to a fraction (1/2 − ε) of errors, using lists of size O(ε^{−2}). On the negative side, we show that for every δ and c, there exist τ < δ, c₁ > 0, and an infinite family of linear codes {C_i}_i such that if n_i denotes the block length of C_i, then C_i has minimum distance at least δ·n_i and contains more than c₁·n_i^c codewords in some Hamming ball of radius τ·n_i. While this result is still far from known bounds on the list-decodability of linear codes, it is the first to bound the "radius for list-decodability by a polynomial-sized list" away from the minimum distance of the code.
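Here H denotes the binary entropy function; a small sketch that numerically inverts it to evaluate the list-decoding radius fraction H^{−1}(1 − R − 1/c) for illustrative values of R and c:

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def inverse_binary_entropy(h, tol=1e-12):
    """Inverse of H on [0, 1/2], found by bisection."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_entropy(mid) < h:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Fraction of errors coverable with list size c at rate R, per the bound above.
R, c = 0.5, 10
print(inverse_binary_entropy(1 - R - 1/c))
```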

10.
A function h(w) is said to be useful for the coding theorem if the coding theorem remains true when the lengths |w| of codewords w in it are replaced with h(w). For a codeword w = a_0 a_1 ... a_{m−1} of length m and an infinite sequence Q = (q_0, q_1, q_2, ...) of real numbers such that 0 < q_n ≤ 1/2, let |w|_Q denote the value Σ_{n=0}^{m−1} (−log₂ q_n if a_n = 0, else −log₂(1 − q_n)), that is, −log₂ (the probability that flippings of coins generate w), assuming that the (n+1)th coin generates a tail (or 0) with probability q_n. It is known that |w|_Q is useful for the coding theorem if 0 < lim_{n→∞} q_n, and is not useful if lim_{n→∞} q_n/(1/(2n)) = 0. We introduce several variations of the coding theorem and show sufficient conditions for h(w) not to be useful for these variations. The usefulness is also defined for the expressions that define the Kolmogorov complexity and the universal distribution. We consider the relation among the usefulness for the coding theorem, that for the Kolmogorov complexity, and that for the universal distribution.
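A direct transcription of the definition of |w|_Q as a Python sketch; the sequences Q used below are illustrative only, not those studied in the paper:

```python
import math

def codeword_cost(w, q):
    """|w|_Q = sum over positions n of -log2(q_n) if w[n] == '0' else -log2(1 - q_n).

    `w` is a binary string; `q` is a callable giving q_n for position n,
    standing in for the infinite sequence Q."""
    return sum(-math.log2(q(n)) if bit == '0' else -math.log2(1 - q(n))
               for n, bit in enumerate(w))

# With the constant sequence q_n = 1/2, |w|_Q reduces to the ordinary length |w|.
print(codeword_cost("01101", lambda n: 0.5))              # -> 5.0
# With q_n = 1/(2(n+2)), a sequence tending to zero.
print(codeword_cost("01101", lambda n: 1 / (2 * (n + 2))))
```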

11.
Source coding problems are treated for Shannon's (1949) cipher system with correlated source outputs (X, Y). Several cases are considered based on whether both X and Y, only X, or only Y must be transmitted to the receiver, whether both X and Y, only X, or only Y must be kept secret, and whether the security level is measured by (1/K)H(X^K|W) and (1/K)H(Y^K|W), or by (1/K)H(X^K Y^K|W), where W is a cryptogram. The admissible region of cryptogram rate and key rate for a given security level is derived for each case. Furthermore, two new kinds of common information of X and Y, say C₁(X;Y) and C₂(X;Y), are considered. C₁(X;Y) is defined as the rate of the attainable minimum core of (X^K, Y^K) obtained by removing each piece of private information from (X^K, Y^K) as much as possible, while C₂(X;Y) is defined as the rate of the attainable maximum core V_C such that if one loses V_C, then each uncertainty of X^K and Y^K becomes H(V_C). It is proved that C₁(X;Y) = I(X;Y) and C₂(X;Y) = min{H(X), H(Y)}. C₁(X;Y) justifies the author's intuitive feeling that the mutual information represents a common information of X and Y.
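A small sketch computing I(X;Y) and min{H(X), H(Y)} — the values of C₁ and C₂ above — for a hypothetical joint distribution:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector (zero entries ignored)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(PXY):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint pmf given as a matrix."""
    return entropy(PXY.sum(axis=1)) + entropy(PXY.sum(axis=0)) - entropy(PXY.ravel())

# Hypothetical correlated pair (X, Y).
PXY = np.array([[0.4, 0.1],
                [0.1, 0.4]])
C1 = mutual_information(PXY)                                   # = I(X;Y)
C2 = min(entropy(PXY.sum(axis=1)), entropy(PXY.sum(axis=0)))   # = min{H(X), H(Y)}
print(C1, C2)
```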

12.
Let {X_t} be a real-valued time series. The best nonlinear predictor of X_0 given the infinite past X_{−∞}^{−1}, in the least-squares sense, is equal to the conditional mean E{X_0 | X_{−∞}^{−1}}. Previously, it has been shown that certain predictors based on growing segments of past observations converge to the best predictor given the infinite past whenever {X_t} is a stationary process with values in a bounded interval. The present paper deals with universal prediction schemes for stationary processes with finite mean. We also discuss universal schemes for learning the conditional mean E{X_0 | X_{−∞}^{−1}, Y_{−∞}^{0}} from past observations of a stationary pair process {(X_t, Y_t)}, and schemes for learning the regression function m(y) = E{X | Y = y} from independent samples of (X, Y).

13.
A fast evaluation procedure for the integral I_{m,n,p} = (1/2πj) ∮_{|z|=1} H_{m,n}(z) H_{m,n}(z^{−1}) z^{p−1} dz for arbitrary nonnegative integer-valued m, n, and p is presented, where H_{m,n}(z) = Σ_{k=0}^{m} b_{m,k} z^{−k} / Σ_{l=0}^{n} a_{n,l} z^{−l}, with a_{n,0} ≠ 0, is the transfer function of an arbitrary digital filter. Evaluation of this integral frequently appears in control, communication, and digital filtering. A notable result is a one-term recursion on p for arbitrary but fixed nonnegative integers m and n. The computational complexity is analyzed, and two illustrative examples demonstrate some of the advantages of this approach.
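The paper's fast recursion is not reproduced here; the sketch below simply evaluates I_{m,n,p} by brute force, sampling the unit circle, for an arbitrarily chosen small example filter:

```python
import numpy as np

def integral_Imnp(b, a, p, samples=2**14):
    """Brute-force evaluation of I_{m,n,p} = (1/(2*pi*j)) * contour integral of
    H(z) H(1/z) z^(p-1) dz over |z| = 1, i.e. (1/(2*pi)) * int_0^{2*pi}
    |H(e^{jw})|^2 e^{j*p*w} dw for a real-coefficient filter
    H(z) = (b_0 + b_1 z^-1 + ...) / (a_0 + a_1 z^-1 + ...)."""
    w = 2 * np.pi * np.arange(samples) / samples
    z = np.exp(1j * w)
    H = np.polyval(b[::-1], 1 / z) / np.polyval(a[::-1], 1 / z)
    return np.mean(np.abs(H)**2 * z**p).real

# Example: H(z) = (1 + 0.5 z^-1) / (1 - 0.8 z^-1), for p = 0, 1, 2.
b = np.array([1.0, 0.5])
a = np.array([1.0, -0.8])
print([round(integral_Imnp(b, a, p), 6) for p in range(3)])
```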

14.
We report on ohmic contact measurements of Al, Au, and W metallizations to p-type epitaxial Ge_{0.9983}C_{0.0017} grown on a (100) Si substrate by molecular beam epitaxy (MBE). Contacts were annealed at various temperatures, and values of specific contact resistance have been achieved which range from 10^{−5} Ω·cm² down to 5.6×10^{−6} Ω·cm². Theoretical calculations of the contact resistance of metals on Ge_{1−x}C_x with small percentages of carbon, based on the thermionic field emission mechanism of conduction, are in good agreement with the experimental data. We conclude that Al and Au are suitable ohmic contacts to p-Ge_{0.9983}C_{0.0017} alloys.

15.
The conventional problem of searching for the shortest vector z of an N-dimensional lattice L(H) with generating matrix H ∈ R^{N×M} is considered in a more general setting. There are P generating matrices H_i ∈ R^{N×M} (i = 1, 2, ..., P) of the P lattices L(H_i). For a (bounded) integer vector b ∈ Z^M, we obtain P lattice points H_i b. Let d_i be the Euclidean norm of H_i b. The problem of interest is how to search for a vector b so that the maximum of the d_i is minimized. We propose a new sphere decoder, called the combinatorial sphere decoder (CSD), to solve this problem. One of the applications of the new CSD is presented in detail to show its effectiveness.
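The CSD algorithm itself is not reproduced here; the sketch below merely restates the min-max search problem by exhaustive enumeration over a small bounded set of integer vectors b, for two hypothetical 2×2 generating matrices:

```python
import itertools
import numpy as np

def minmax_lattice_point(Hs, bound):
    """Exhaustively search nonzero integer vectors b with entries in
    [-bound, bound] for the b minimizing max_i ||H_i b||.  This is a
    brute-force statement of the problem the CSD solves far more efficiently."""
    M = Hs[0].shape[1]
    best_b, best_val = None, np.inf
    for b in itertools.product(range(-bound, bound + 1), repeat=M):
        if not any(b):
            continue                               # skip the all-zero vector
        b = np.array(b)
        val = max(np.linalg.norm(H @ b) for H in Hs)
        if val < best_val:
            best_b, best_val = b, val
    return best_b, best_val

# Two hypothetical 2x2 generating matrices.
H1 = np.array([[1.0, 0.3], [0.0, 1.1]])
H2 = np.array([[0.9, -0.4], [0.2, 1.0]])
print(minmax_lattice_point([H1, H2], bound=3))
```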

16.
A universal predictor based on pattern matching
We consider a universal predictor based on pattern matching. Given a sequence X_1, ..., X_n drawn from a stationary mixing source, it predicts the next symbol X_{n+1} based on selecting a context of X_{n+1}. The predictor, called the sampled pattern matching (SPM), is a modification of the Ehrenfeucht-Mycielski (1992) pseudorandom generator algorithm. It predicts the value of the most frequent symbol appearing at the so-called sampled positions. These positions follow the occurrences of a fraction of the longest suffix of the original sequence that has another copy inside X_1 X_2 ··· X_n; that is, in SPM, the context selection consists of taking a certain fraction of the longest match. The study of the longest match for lossless data compression was initiated by Wyner and Ziv in their 1989 seminal paper. Here, we estimate the redundancy of the SPM universal predictor; that is, we prove that the probability that the SPM predictor makes worse decisions than the optimal predictor is O(n^{−ν}) for some 0 < ν < 1/2 as n → ∞. As a matter of fact, we show that we can predict K = O(1) symbols with the same probability of error.
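A heavily simplified illustration of this context-selection idea: keep a fraction of the longest suffix that reoccurs in the sequence and predict the majority symbol following earlier occurrences of that context. It is not the exact SPM sampling rule, and the toy sequence is hypothetical:

```python
from collections import Counter

def longest_match_predict(x, frac=0.5):
    """Predict the next symbol of x by pattern matching: find the longest suffix
    of x that also occurs earlier in x, keep a fraction `frac` of it as the
    context, and return the most frequent symbol following earlier occurrences
    of that context.  A simplified illustration, not the exact SPM rule."""
    n = len(x)
    # Largest L such that the suffix x[n-L:] has another (earlier) occurrence.
    L = 0
    for length in range(1, n):
        if x.find(x[n - length:]) < n - length:
            L = length
        else:
            break
    if L == 0:
        return Counter(x).most_common(1)[0][0]       # fall back to the majority symbol
    context = x[n - int(max(1, frac * L)):]           # keep a fraction of the longest match
    followers = [x[i + len(context)]
                 for i in range(n - len(context))
                 if x[i:i + len(context)] == context]
    return Counter(followers).most_common(1)[0][0]

print(longest_match_predict("abracadabracadabr"))    # -> 'a' for this toy sequence
```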

17.
Let {w_ij} be the weights of the connections of a neural network with n nodes, calculated from m data vectors v_1, ..., v_m in {1, −1}^n according to the Hebb rule. The author proves that if m is not too large relative to n and the v_k are random, then the w_ij constitute, with high probability, a perfect representation of the v_k in the sense that the v_k are completely determined by the w_ij up to their sign. The conditions under which this is established turn out to be less restrictive than those under which it has been shown that the v_k can actually be recovered by letting the network evolve until equilibrium is attained. In the specific case where the entries of the v_k are independent and equal to 1 or −1 with probability 1/2, the condition on m is that m should not exceed n/(0.7 log n).
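A short sketch of the Hebb-rule weights w_ij = Σ_k v_k[i] v_k[j], together with the standard fixed-point check for the associated sign dynamics (illustrative parameters; the fixed-point property is the recovery condition mentioned above, not the representation result itself):

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 200, 6                                    # n nodes, m stored patterns
V = rng.choice([-1, 1], size=(m, n))             # random +/-1 data vectors v_1..v_m

# Hebb rule: w_ij = sum_k v_k[i] * v_k[j]  (self-connections zeroed out here).
W = V.T @ V
np.fill_diagonal(W, 0)

# Check that each stored pattern is a fixed point of the sign dynamics,
# i.e. sign(W v_k) == v_k, which holds with high probability for small m.
stable = all(np.array_equal(np.sign(W @ v), v) for v in V)
print("all stored patterns are fixed points:", stable)
```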

18.
It is shown how the Zak transform can be used to find nontrivial examples of functions f, g ∈ L²(R) with f×g ≡ 0 ≡ F×G, where F, G are the Fourier transforms of f, g, respectively. This is then used to exhibit a nontrivial pair of functions h, k ∈ L²(R), h ≠ k, such that |h| = |k| and |H| = |K|. A similar construction is used to find an abundance of nontrivial pairs of functions h, k ∈ L²(R), h ≠ k, with |A_h| = |A_k| or with |W_h| = |W_k|, where A_h, A_k and W_h, W_k are the ambiguity functions and Wigner distributions of h, k, respectively. One of the examples of a pair h, k ∈ L²(R), h ≠ k, with |A_h| = |A_k| is F.A. Grunbaum's (1981) example. In addition, nontrivial examples of functions g and signals f_1 ≠ f_2 such that f_1 and f_2 have the same spectrogram when using g as window have been found.

19.
A conjecture is proven on the number of points on shells for the shifted Z^4 and the shifted Z^8 lattices. Furthermore, the results are extended to any shifted Z^{4n} lattice (n = 0, 1, ...). These results provide an easy way to compute the number of points on shells for the type of lattices used in the design of multidimensional signal sets or in vector coding.
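For concreteness, a brute-force count of shell sizes (points per squared distance) for a shifted integer lattice; the shift by (1/2, ..., 1/2) and the search radius are illustrative choices, not taken from the paper:

```python
from itertools import product
from collections import Counter

def shell_sizes(dim, shift, radius):
    """Count points of the shifted integer lattice Z^dim + shift at each squared
    distance from the origin, by enumerating integer coordinates in [-radius, radius]."""
    counts = Counter()
    for z in product(range(-radius, radius + 1), repeat=dim):
        norm2 = sum((zi + s)**2 for zi, s in zip(z, shift))
        if norm2 <= radius**2:        # keep only shells fully covered by the box
            counts[norm2] += 1
    return dict(sorted(counts.items()))

# Shells of Z^4 shifted by (1/2, 1/2, 1/2, 1/2).
print(shell_sizes(4, (0.5,) * 4, radius=3))
```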

20.
An estimator Ê(d_n, n) of the conditional expectation E[X_{n+1} | X_n, ..., X_{n−d_n+1}] in a centered, stationary, and ergodic Gaussian process {X_i}_i with absolutely summable Wold coefficients is constructed on the basis of having observed X_1, ..., X_n. For a suitable choice of the length d_n → ∞ (n → ∞) of the past covered by the conditional expectation, it is established that |Ê(d_n, n) − E[X_{n+1} | X_n, ..., X_{n−d_n+1}]| → 0 with probability 1. In addition, sufficient conditions are given for |E[X_{n+1} | X_n, X_{n−1}, ...] − E[X_{n+1} | X_n, ..., X_{n−d_n+1}]| → 0 to hold with probability 1, that is, conditions under which Ê(d_n, n) can be used as a strongly consistent forecaster for E[X_{n+1} | X_n, X_{n−1}, ...].
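A sketch in the spirit of the finite-past approach above, assuming an AR(1) Gaussian process and fitting the d_n most recent lags by least squares; this is an illustrative plug-in predictor, not the paper's exact estimator Ê(d_n, n):

```python
import numpy as np

rng = np.random.default_rng(1)

# A centered stationary Gaussian process: here an AR(1), X_t = 0.6 X_{t-1} + Z_t.
n = 5000
Z = rng.standard_normal(n + 1)
X = np.zeros(n + 1)
for t in range(1, n + 1):
    X[t] = 0.6 * X[t - 1] + Z[t]

# Finite-past linear predictor of X_{n+1} from the last d_n values, with the
# coefficients fitted by least squares on the observed sample.
d = int(np.log(n))                                         # d_n growing slowly with n
rows = np.array([X[t - d:t][::-1] for t in range(d, n)])   # past windows, newest lag first
targets = X[d:n]
coeffs, *_ = np.linalg.lstsq(rows, targets, rcond=None)
prediction = coeffs @ X[n:n - d:-1]                        # forecast of X_{n+1}
print(prediction, 0.6 * X[n])                              # compare with the true conditional mean
```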
