20 similar documents found; search time: 31 ms
1.
IEEE Transactions on Information Theory, 1976, 22(3): 372-375
Let $F(x)$ be an absolutely continuous distribution having a density function $f(x)$ with respect to the Lebesgue measure. The Shannon entropy is defined as $H(f) = -\int f(x) \ln f(x)\,dx$. In this correspondence we propose, based on a random sample $X_1, \cdots, X_n$ generated from $F$, a nonparametric estimate of $H(f)$ given by $\hat{H}(f) = -(1/n) \sum_{i=1}^{n} \ln \hat{f}(X_i)$, where $\hat{f}(x)$ is the kernel estimate of $f$ due to Rosenblatt and Parzen. Regularity conditions are obtained under which the first and second mean consistencies of $\hat{H}(f)$ are established. These conditions are mild and easily satisfied. Examples, such as the Gamma, Weibull, and normal distributions, are considered.
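The estimator is simple to sketch numerically. The following is a minimal Python illustration, not the paper's code: the Gaussian kernel and Silverman's bandwidth rule are assumptions made here so the example runs; the paper only requires a kernel and bandwidth satisfying its regularity conditions.

```python
import numpy as np

def kernel_entropy_estimate(sample, bandwidth=None):
    """H_hat(f) = -(1/n) * sum_i ln f_hat(X_i), where f_hat is a
    Rosenblatt-Parzen kernel density estimate (Gaussian kernel here)."""
    x = np.asarray(sample, dtype=float)
    n = len(x)
    if bandwidth is None:
        # Silverman's rule of thumb -- an assumption for this sketch only.
        bandwidth = 1.06 * x.std(ddof=1) * n ** (-0.2)
    diffs = (x[:, None] - x[None, :]) / bandwidth
    kern = np.exp(-0.5 * diffs ** 2) / np.sqrt(2.0 * np.pi)
    f_hat = kern.sum(axis=1) / (n * bandwidth)   # f_hat evaluated at each X_i
    return -float(np.mean(np.log(f_hat)))

rng = np.random.default_rng(0)
h_est = kernel_entropy_estimate(rng.normal(0.0, 1.0, size=2000))
h_true = 0.5 * np.log(2.0 * np.pi * np.e)   # entropy of N(0,1), about 1.419
```

For the normal example the estimate lands close to the closed-form entropy, consistent with the mean-consistency results described in the abstract.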
2.
IEEE Transactions on Information Theory, 1978, 24(4): 422-432
In 1964 the author proposed as an explication of a priori probability the probability measure induced on output strings by a universal Turing machine with unidirectional output tape and a randomly coded unidirectional input tape. Levin has shown that if $\tilde{P}'_M(x)$ is an unnormalized form of this measure, and $P(x)$ is any computable probability measure on strings $x$, then $\tilde{P}'_M(x) \geq C P(x)$, where $C$ is a constant independent of $x$. The corresponding result for the normalized form of this measure, $P'_M$, is directly derivable from Willis' probability measures on nonuniversal machines. If the conditional probabilities of $P'_M$ are used to approximate those of $P$, then the expected value of the total squared error in these conditional probabilities is bounded by $-(1/2)\ln C$. With this error criterion, and when used as the basis of a universal gambling scheme, $P'_M$ is superior to Cover's measure $b^{\ast}$. When $H^{\ast} \equiv -\log_2 P'_M$ is used to define the entropy of a finite sequence, the equation $H^{\ast}(x,y) = H^{\ast}(x) + H^{\ast}_x(y)$ holds exactly, in contrast to Chaitin's entropy definition, which has a nonvanishing error term in this equation.
3.
Achievable rates for multiple descriptions (Total citations: 12; self-citations: 0; citations by others: 0)
IEEE Transactions on Information Theory, 1982, 28(6): 851-857
4.
Capacity theorems for the relay channel (Total citations: 28; self-citations: 0; citations by others: 0)
IEEE Transactions on Information Theory, 1979, 25(5): 572-584
A relay channel consists of an input $x_1$, a relay output $y_1$, a channel output $y$, and a relay sender $x_2$ (whose transmission is allowed to depend on the past symbols $y_1$). The dependence of the received symbols upon the inputs is given by $p(y,y_1|x_1,x_2)$. The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1) If $y$ is a degraded form of $y_1$, then $C = \max_{p(x_1,x_2)} \min\{I(X_1,X_2;Y),\, I(X_1;Y_1|X_2)\}$. 2) If $y_1$ is a degraded form of $y$, then $C = \max_{p(x_1)} \max_{x_2} I(X_1;Y|x_2)$. 3) If $p(y,y_1|x_1,x_2)$ is an arbitrary relay channel with feedback from $(y,y_1)$ to both $x_1$ and $x_2$, then $C = \max_{p(x_1,x_2)} \min\{I(X_1,X_2;Y),\, I(X_1;Y,Y_1|X_2)\}$. 4) For a general relay channel, $C \leq \max_{p(x_1,x_2)} \min\{I(X_1,X_2;Y),\, I(X_1;Y,Y_1|X_2)\}$. Superposition block Markov encoding is used to show achievability of $C$, and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established.
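The general upper bound in 4) can be evaluated numerically for a small channel. The sketch below is a construction of this summary, not from the paper: it assumes a toy relay channel in which $y_1$ is $x_1$ through a BSC(0.1) and $y$ is $x_2$ through an independent BSC(0.1), then grid-searches $\max_{p(x_1,x_2)} \min\{I(X_1,X_2;Y),\, I(X_1;Y,Y_1|X_2)\}$.

```python
import numpy as np
from itertools import product

eps = 0.1                                          # toy BSC crossover (an assumption)
bsc = np.array([[1 - eps, eps], [eps, 1 - eps]])   # bsc[x, y] = p(y | x)

def entropy2(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cutset_bound(px):
    """min{I(X1,X2;Y), I(X1;Y,Y1|X2)} for y1 = BSC(x1), y = BSC(x2)."""
    j = np.zeros((2, 2, 2, 2))                     # p(x1, x2, y1, y)
    for x1, x2, y1, y in product(range(2), repeat=4):
        j[x1, x2, y1, y] = px[x1, x2] * bsc[x1, y1] * bsc[x2, y]
    # I(X1,X2;Y) = H(Y) - H(Y|X1,X2)
    i_mac = entropy2(j.sum(axis=(0, 1, 2))) - sum(
        px[a, b] * entropy2(bsc[b]) for a, b in product(range(2), repeat=2))
    # I(X1; Y,Y1 | X2), averaged over x2
    i_bc = 0.0
    for x2 in range(2):
        px2 = px[:, x2].sum()
        if px2 > 0:
            cond = j[:, x2, :, :].reshape(2, 4) / px2   # p(x1, (y1,y) | x2)
            pa = cond.sum(axis=1, keepdims=True)
            pb = cond.sum(axis=0, keepdims=True)
            m = cond > 0
            i_bc += px2 * float((cond[m] * np.log2(cond[m] / (pa @ pb)[m])).sum())
    return min(i_mac, i_bc)

grid = np.arange(0.0, 1.0 + 1e-9, 0.125)           # coarse grid over the input simplex
best = 0.0
for a in grid:
    for b in grid:
        for c in grid:
            d = 1.0 - a - b - c
            if d >= -1e-9:
                best = max(best, cutset_bound(np.array([[a, b], [c, max(d, 0.0)]])))
```

For this channel $Y$ depends only on $X_2$ and $Y_1$ only on $X_1$, so both mutual informations are capped by $1 - H_b(0.1) \approx 0.531$ bits, and the grid search recovers that value at the uniform independent input.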
5.
IEEE Transactions on Information Theory, 1987, 33(1): 69-76
Consider separate encoding of correlated sources $X^n = (X_1, \cdots, X_n)$, $Y^n = (Y_1, \cdots, Y_n)$ for the decoder to reliably reproduce a function $\{F(X_i, Y_i)\}_{i=1}^{n}$. We establish the necessary and sufficient condition for the set of all achievable rates to coincide with the Slepian-Wolf region whenever the probability density $p(x,y)$ is positive for all $(x,y)$.
6.
IEEE Transactions on Information Theory, 1987, 33(3): 443-448
For a joint distribution $\mathrm{dist}(X,Y)$, the function $T(t) = \min\{H(Y|U) : I(U \wedge Y|X) = 0,\ H(X|U) \geq t\}$ is an important characteristic. It equals the asymptotic minimum of $(1/n)H(Y^n)$ for random pairs of sequences $(X^n, Y^n)$, where
$$\frac{1}{n}\sum_{i=1}^{n} \mathrm{dist}\,X_i \sim \mathrm{dist}\,X, \quad \mathrm{dist}\,Y^n|X^n = (\mathrm{dist}\,Y|X)^n, \quad \frac{1}{n}H(X^n) \geq t.$$
We show that if, for $(X^n, Y^n)$ as given, the rate pair $[(1/n)H(X^n),\,(1/n)H(Y^n)]$ approaches the nonlinear part of the curve $(t, T(t))$, then the sequence $X^n$ is virtually memoryless. Using this, we determine some extremal sections of the rate region of entropy characterization problems and find a nontrivial invariant for weak asymptotic isomorphy of discrete memoryless correlated sources.
7.
IEEE Transactions on Information Theory, 1972, 18(5): 659-661
Recently Mazo and Salz proved that if $\{Y(t), t \in T\}$ is a stationary random process with mean-square derivative $\{\dot{Y}(t), t \in T\}$, then the conditional expectation of $\dot{Y}(t)$ given $Y(t)$ is zero almost everywhere with respect to the distribution of $Y(t)$. We extend this property and obtain a characterization of stationary processes differentiable in mean square.
8.
IEEE Transactions on Information Theory, 1968, 14(3): 514-515
Recursive relations are given for updating the conditional density $p(\theta_k | X_{k-1}, \cdots, X_1)$ (also for $p(\theta_k | X_k, \cdots, X_1)$), where $\theta_k$ is a parameter of the density of $X_k$. The observations $X_1, X_2, \cdots$ are assumed to be conditionally independent (i.e., for known parameters), and the sequence of time-varying parameters $\theta_1, \theta_2, \cdots$ constitutes a Markov-M sequence. The result requires the storage of an intermediate function of $(\theta_{k-1}, \cdots, \theta_{k-M})$.
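For the Markov-1 case the recursion is a predict-then-correct update. The following sketch is a hypothetical two-state example (Gaussian observations, sticky transition matrix, all chosen here, not taken from the paper) showing the update of $p(\theta_k | X_1, \cdots, X_k)$:

```python
import numpy as np

trans = np.array([[0.95, 0.05], [0.05, 0.95]])   # assumed p(theta_k | theta_{k-1})
means = np.array([-1.0, 1.0])                    # X_k | theta_k ~ N(means[theta_k], 1)

def gauss_lik(x, mu, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def filter_posteriors(xs, prior=(0.5, 0.5)):
    """Recursively update p(theta_k | X_1..X_k): predict through the Markov
    transition, then correct with the likelihood of the new observation."""
    post = np.asarray(prior, dtype=float)
    out = []
    for x in xs:
        pred = trans.T @ post                # p(theta_k | X_1..X_{k-1})
        post = pred * gauss_lik(x, means)    # times p(X_k | theta_k)
        post /= post.sum()
        out.append(post.copy())
    return np.array(out)

# the time-varying parameter switches from state 0 to state 1 halfway through
xs = np.concatenate([np.full(25, -1.0), np.full(25, 1.0)])
posts = filter_posteriors(xs)
```

The posterior tracks the switch: its mass moves from state 0 to state 1 within a few observations after the change point.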
9.
10.
IEEE Transactions on Information Theory, 1979, 25(5): 601-604
In the discrimination problem the random variable $\theta$, known to take values in $\{1, \cdots, M\}$, is estimated from the random vector $X$. All that is known about the joint distribution of $(X, \theta)$ is that which can be inferred from a sample $(X_1, \theta_1), \cdots, (X_n, \theta_n)$ of size $n$ drawn from that distribution. A discrimination rule is any procedure which determines a decision $\hat{\theta}$ for $\theta$ from $X$ and $(X_1, \theta_1), \cdots, (X_n, \theta_n)$. For rules which are determined by potential functions it is shown that the mean-square difference between the probability of error for the rule and its deleted estimate is bounded by $A/\sqrt{n}$, where $A$ is an explicitly given constant depending only on $M$ and the potential function. The $O(n^{-1/2})$ behavior is shown to be the best possible for one of the most commonly encountered rules of this type.
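The "deleted" (leave-one-out) estimate is easy to sketch for a potential-function rule. The Gaussian potential, bandwidth, and two-class Gaussian data below are assumptions for illustration, not the paper's explicit constants:

```python
import numpy as np

def potential_rule(x, xs, labels, h=0.5, exclude=None):
    """Classify x by the class with the largest summed potential
    sum over class members of K((x - X_i)/h), Gaussian potential K."""
    w = np.exp(-0.5 * ((x - xs) / h) ** 2)
    if exclude is not None:
        w[exclude] = 0.0
    return int(np.argmax([w[labels == c].sum() for c in (0, 1)]))

def deleted_error_estimate(xs, labels, h=0.5):
    """Leave-one-out ('deleted') estimate of the rule's probability of error."""
    mistakes = [potential_rule(xs[j], xs, labels, h, exclude=j) != labels[j]
                for j in range(len(xs))]
    return float(np.mean(mistakes))

rng = np.random.default_rng(2)
n = 400
labels = rng.integers(0, 2, size=n)
xs = rng.normal(2.0 * labels - 1.0, 1.0)     # class means at -1 and +1
loo_err = deleted_error_estimate(xs, labels)
```

With unit-variance classes two means apart, the Bayes error is about 0.159, and the deleted estimate concentrates near the rule's true error probability as $n$ grows, at the $O(n^{-1/2})$ rate the paper quantifies.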
11.
IEEE Transactions on Information Theory, 1983, 29(3): 390-395
The encoding of a discrete memoryless multiple source $\{(X_i, Y_i)\}_{i=1}^{\infty}$ for reconstruction of a sequence $\{Z_i\}_{i=1}^{\infty}$, with $Z_i = F(X_i, Y_i)$, $i = 1, 2, \cdots$, is considered. We require that the encoding should be such that $\{X_i\}_{i=1}^{\infty}$ is encoded first without any consideration of $\{Y_i\}_{i=1}^{\infty}$, while in a second part of the encoding, this latter sequence is encoded based on knowledge of the outcome of the first encoding. The resulting scheme is called successive encoding. We find general outer and inner bounds for the corresponding set of achievable rates, along with a complete single-letter characterization for the special case $H(X_i|Z_i, Y_i) = 0$. Comparisons with the Slepian-Wolf problem and the Ahlswede-Körner-Wyner side information problem are carried out.
12.
IEEE Transactions on Information Theory, 1983, 29(6): 930-931
Hadamard's inequality follows immediately from inspection of both sides of the entropy inequality $h(X_1, X_2, \cdots, X_n) \leq \sum h(X_i)$ when $(X_1, X_2, \cdots, X_n)$ is multivariate normal.
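Written out for a covariance matrix $K$: $h(X) = \frac{1}{2}\ln((2\pi e)^n \det K)$ and $h(X_i) = \frac{1}{2}\ln(2\pi e K_{ii})$, so the entropy inequality reduces to $\det K \leq \prod_i K_{ii}$. A quick numerical check on a randomly generated positive-definite $K$ (the matrix is chosen here for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 5))
K = A @ A.T + 1e-6 * np.eye(5)      # random positive-definite covariance

# Hadamard's inequality: det K <= product of the diagonal entries
lhs = float(np.linalg.det(K))
rhs = float(np.prod(np.diag(K)))
```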
13.
IEEE Transactions on Information Theory, 1987, 33(1): 47-51
A stochastic bivariate process $\{(T_n, X_n) : n = 0, 1, 2, \cdots\}$ is considered. The $T_n$ are the occurrence times of a random event generated by a Poisson stochastic point process. Each $X_n$ is the amplitude associated with the $n$th event at random time $T_n$ and is constructed from $X_{n-1}$ and the interarrival time $T_n - T_{n-1}$, according to the first-order autoregressive exponential time series model (EAR(1)). Moments and joint distributions for the bivariate process are obtained, as well as the distribution of some extreme values related to the bivariate process.
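The EAR(1) recursion is easy to simulate. The sketch below is a simplification of this summary's own (it ignores the Poisson timing component and simulates only the amplitude sequence) and checks the model's two signature properties: an Exponential stationary marginal and lag-one correlation $\rho$:

```python
import numpy as np

def ear1(n, rho=0.5, lam=1.0, seed=0):
    """Simulate the EAR(1) model: X_n = rho * X_{n-1} + eps_n, where eps_n = 0
    with probability rho and Exp(lam) with probability 1 - rho.
    The stationary marginal distribution is Exponential(lam)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.exponential(1.0 / lam)
    for i in range(1, n):
        innov = 0.0 if rng.random() < rho else rng.exponential(1.0 / lam)
        x[i] = rho * x[i - 1] + innov
    return x

x = ear1(200_000, rho=0.5, lam=1.0)
mean_est = float(x.mean())                            # near 1/lam = 1
lag1_corr = float(np.corrcoef(x[:-1], x[1:])[0, 1])   # near rho = 0.5
```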
14.
An algorithm for maximizing expected log investment return (Total citations: 3; self-citations: 0; citations by others: 0)
IEEE Transactions on Information Theory, 1984, 30(2): 369-373
Let the random (stock market) vector $X \geq 0$ be drawn according to a known distribution function $F(x)$, $x \in R^m$. A log-optimal portfolio $b^{\ast}$ is any portfolio $b$ achieving maximal expected log return $W^{\ast} = \sup_b E \ln b^t X$, where the supremum is over the simplex $b \geq 0$, $\sum_{i=1}^{m} b_i = 1$. An algorithm is presented for finding $b^{\ast}$. The algorithm consists of replacing the portfolio $b$ by the expected portfolio $b'$, $b_i' = E(b_i X_i / b^t X)$, corresponding to the expected proportion of holdings in each stock after one market period. The improvement in $W(b)$ after each iteration is lower-bounded by the Kullback-Leibler information number $D(b' \| b)$ between the current and updated portfolios. Thus the algorithm monotonically improves the return $W$. An upper bound on $W^{\ast}$ is given in terms of the current portfolio and the gradient, and the convergence of the algorithm is established.
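A minimal implementation of the iteration under an empirical distribution (the three-outcome, two-stock market below is a made-up example, not from the paper):

```python
import numpy as np

def log_optimal_portfolio(X, iters=2000):
    """Iterate b_i <- E[b_i X_i / (b . X)] under the empirical distribution
    given by the rows of X; the update monotonically increases E[ln b.X]."""
    b = np.full(X.shape[1], 1.0 / X.shape[1])
    for _ in range(iters):
        b = b * (X / (X @ b)[:, None]).mean(axis=0)
        b /= b.sum()                 # guard against rounding drift
    return b

# hypothetical market: 3 equally likely outcomes, 2 stocks
X = np.array([[2.0, 0.5],
              [0.5, 2.0],
              [1.1, 0.9]])
b_star = log_optimal_portfolio(X)
W_star = float(np.mean(np.log(X @ b_star)))
```

At the optimum the Kuhn-Tucker condition $E[X_i / b^{\ast t} X] \leq 1$ holds, with equality for stocks held, which is a handy convergence check.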
15.
IEEE Transactions on Information Theory, 1978, 24(2): 254-256
An upper bound is derived for the mean-square error involved when a non-band-limited, wide-sense stationary random process $x(t)$ (possessing an integrable power spectral density) is approximated by a cardinal series expansion of the form $\sum_{-\infty}^{\infty} x(n/2W)\,\mathrm{sinc}\,2W(t - n/2W)$, a sampling expansion based on the choice of some nominal bandwidth $W > 0$. It is proved that
$$\lim_{N \rightarrow \infty} E\{|x(t) - x_N(t)|^2\} \leq \frac{2}{\pi} \int_{|\omega| > 2\pi W} S_x(\omega)\, d\omega,$$
where $x_N(t) = \sum_{-N}^{N} x(n/2W)\,\mathrm{sinc}\,2W(t - n/2W)$ and $S_x(\omega)$ is the power spectral density for $x(t)$. Further, the constant $2/\pi$ is shown to be the best possible one if a bound of this type (involving the power contained in the frequency region lying outside the arbitrarily chosen band) is to hold uniformly in $t$. Possible reductions of the multiplicative constant as a function of $t$ are also discussed, and a formula is given for the optimal value of this constant.
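As a sanity check of the expansion (a sketch constructed here, covering only the zero-error case): if the signal is actually bandlimited to $W$, the out-of-band power on the right side of the bound is zero and the truncated cardinal series converges to the signal. The test signal $x(t) = \mathrm{sinc}^2(t)$, bandlimited to 1 Hz, is an assumption of this sketch:

```python
import numpy as np

def cardinal_series(t, W, N):
    """x_N(t) = sum over |n| <= N of x(n/2W) * sinc(2W t - n),
    for the test signal x(t) = sinc(t)^2 (np.sinc is sin(pi u)/(pi u))."""
    n = np.arange(-N, N + 1)
    samples = np.sinc(n / (2.0 * W)) ** 2          # x sampled at n / 2W
    return float(np.sum(samples * np.sinc(2.0 * W * t - n)))

W, N, t = 1.0, 400, 0.3
x_true = float(np.sinc(t) ** 2)
err = abs(cardinal_series(t, W, N) - x_true)
```

For a non-band-limited process the residual mean-square error would instead be bounded by $(2/\pi)\int_{|\omega| > 2\pi W} S_x(\omega)\,d\omega$, as stated in the abstract.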
16.
IEEE Transactions on Information Theory, 1975, 21(4): 453-458
For a nondecreasing distortion characteristic $\phi(\cdot)$ and a given signal $x(\cdot)$, the "cross correlation" function defined by $R_{\phi}(\tau) \triangleq \int_{-\infty}^{\infty} \phi[x(t)]\,x(t - \tau)\, dt$ is shown to satisfy the inequality $R_{\phi}(\tau) \leq R_{\phi}(0)$ for all $\tau$, generalizing an earlier result of Richardson that required $\phi(\cdot)$ to be continuous and strictly increasing. The methods of the paper also show that, under weak conditions,
$$R_{\phi,\psi}(\tau) \triangleq \int_{-\infty}^{\infty} \phi[x(t)]\,\psi[x(t - \tau)]\, dt \leq R_{\phi,\psi}(0)$$
when $\psi$ is strictly increasing and $\phi$ is nondecreasing. In the case of bounded signals (e.g., periodic functions), the appropriate cross correlation function is
$$\mathcal{R}_{\phi,\psi}(\tau) \triangleq \lim_{T \rightarrow \infty} (2T)^{-1} \int_{-T}^{T} \phi[x(t)]\,\psi[x(t - \tau)]\, dt.$$
For this case it is shown that $\mathcal{R}_{\phi,\psi}(\tau) \leq \mathcal{R}_{\phi,\psi}(0)$ for any nondecreasing (or nonincreasing) distortion functions $\phi$ and $\psi$. The result is then applied to generalize an inequality on correlation functions for periodic signals due to Prosser. Noise signals are treated, and inequalities of a similar nature are obtained for ensemble-average cross correlation functions under suitable hypotheses on the statistical properties of the noise. Inequalities of this type are the basis of a well-known method of estimating the unknown time delay of an observed signal. The extension to nondecreasing discontinuous distortion functions allows the use of hard limiting or quantization to facilitate the cross correlation calculation.
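The discontinuous case is the practically interesting one. A numerical sketch (the signal and the limiter are choices made here): hard-limit a periodic signal and check that the time-average cross correlation peaks at zero delay, which is what makes limiter-based delay estimation work:

```python
import numpy as np

T = 10_000
t = np.linspace(0.0, 20.0 * np.pi, T, endpoint=False)
x = np.sin(t) + 0.5 * np.sin(3.0 * t)     # periodic test signal
phi_x = np.sign(x)                        # hard limiter: nondecreasing, discontinuous

# R_phi(tau) over one full array period, via circular shifts of the samples
R = np.array([np.mean(phi_x * np.roll(x, s)) for s in range(T)])
```

Hard limiting reduces the correlation to sign arithmetic while preserving the peak at $\tau = 0$, exactly the simplification the extension to discontinuous $\phi$ justifies.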
17.
IEEE Transactions on Information Theory, 1978, 24(3): 331-338
It is known that the expected codeword length $L_{UD}$ of the best uniquely decodable (UD) code satisfies $H(X) \leq L_{UD}$. Let $X$ be a random variable which can take on $n$ values. Then it is shown that the average codeword length $L_{1:1}$ for the best one-to-one (not necessarily uniquely decodable) code for $X$ is shorter than the average codeword length $L_{UD}$ for the best uniquely decodable code by no more than $(\log_2 \log_2 n) + 3$. Let $Y$ be a random variable taking on a finite or countable number of values and having entropy $H$. Then it is proved that $L_{1:1} \geq H - \log_2(H+1) - \log_2 \log_2 (H+1) - \cdots - 6$. Some relations are established among the Kolmogorov, Chaitin, and extension complexities. Finally it is shown that, for all computable probability distributions, the universal prefix codes associated with the conditional Chaitin complexity have expected codeword length within a constant of the Shannon entropy.
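The best one-to-one code is explicit: sort symbols by decreasing probability and assign the binary strings (empty, 0, 1, 00, 01, ...) in order, so the $i$-th most probable symbol gets length $\lfloor \log_2 i \rfloor$. A small check on a dyadic distribution (the distribution is an example chosen here; for dyadic probabilities $L_{UD} = H$ exactly, so $H - L_{1:1}$ is the gap the theorem bounds):

```python
import numpy as np

def best_one_to_one_length(p):
    """Expected length of the best one-to-one binary code: the i-th most
    probable symbol (i = 1, 2, ...) gets the i-th shortest binary string
    (empty, 0, 1, 00, ...), whose length is floor(log2(i))."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]
    lengths = np.floor(np.log2(np.arange(1, len(p) + 1)))
    return float(np.sum(p * lengths))

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

p = [0.5, 0.25, 0.125, 0.125]        # dyadic: Huffman achieves L_UD = H here
H = entropy_bits(p)                  # 1.75 bits
L11 = best_one_to_one_length(p)      # 0.625 bits: one-to-one beats any UD code
```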
18.
IEEE Transactions on Information Theory, 1980, 26(6): 648-657
Let $\{(U_i, V_i)\}_{i=1}^{n}$ be a source of independent identically distributed (i.i.d.) discrete random variables with joint probability mass function $p(u,v)$ and common part $W = f(U) = g(V)$ in the sense of Witsenhausen, Gács, and Körner. It is shown that such a source can be sent with arbitrarily small probability of error over a multiple access channel (MAC) $\{\mathcal{X}_1 \times \mathcal{X}_2, \mathcal{Y}, p(y|x_1,x_2)\}$, with allowed codes $\{x_1(u), x_2(v)\}$, if there exist probability mass functions $p(s)$, $p(x_1|s,u)$, $p(x_2|s,v)$ such that $H(U|V) < I(X_1;Y|X_2,V,S)$, $H(V|U) < I(X_2;Y|X_1,U,S)$, $H(U,V|W) < I(X_1,X_2;Y|W,S)$, and $H(U,V) < I(X_1,X_2;Y)$, where $p(s,u,v,x_1,x_2,y) = p(s)p(u,v)p(x_1|u,s)p(x_2|v,s)p(y|x_1,x_2)$. This region includes the multiple access channel region and the Slepian-Wolf data compression region as special cases.
19.
IEEE Transactions on Information Theory, 1974, 20(4): 517-524
Let $\xi = \{\xi(t), 0 \leq t \leq T\}$ be a process with covariance function $K(s,t)$ and $E\int_0^T \xi^2(t)\, dt < \infty$. It is proved that for every $\varepsilon > 0$ the $\varepsilon$-entropy $H_{\varepsilon}(\xi)$ satisfies
$$H_{\varepsilon}(\xi_g) - \mathcal{H}_{\xi_g}(\xi) \leq H_{\varepsilon}(\xi) \leq H_{\varepsilon}(\xi_g),$$
where $\xi_g$ is a Gaussian process with the covariance $K(s,t)$ and $\mathcal{H}_{\xi_g}(\xi)$ is the entropy of the measure induced by $\xi$ (in function space) with respect to that induced by $\xi_g$. It is also shown that if $\mathcal{H}_{\xi_g}(\xi) < \infty$ then, as $\varepsilon \rightarrow 0$,
$$H_{\varepsilon}(\xi) = H_{\varepsilon}(\xi_g) - \mathcal{H}_{\xi_g}(\xi) + o(1).$$
Furthermore, if there exists a Gaussian process $g = \{g(t); 0 \leq t \leq T\}$ such that $\mathcal{H}_g(\xi) < \infty$, then the ratio between $H_{\varepsilon}(\xi)$ and $H_{\varepsilon}(g)$ goes to one as $\varepsilon$ goes to zero. Similar results are given for the rate-distortion function, and some particular examples are worked out in detail. Some cases for which $\mathcal{H}_{\xi_g}(\xi) = \infty$ are discussed, and asymptotic bounds on $H_{\varepsilon}(\xi)$, expressed in terms of $H_{\varepsilon}(\xi_g)$, are derived.
20.
IEEE Transactions on Information Theory, 1987, 33(1): 16-20
The harmonic analysis of certain multiplicative processes of the form $g(t)X(t)$ is considered, where $g$ is a deterministic function and the stochastic process $X(t)$ is of the form $X(t) = \sum X_n 1_{[n\alpha, (n+1)\alpha]}(t)$, where $\alpha$ is a positive constant and the $X_n$, $n = 0, \pm 1, \pm 2, \cdots$, are independent and identically distributed random variables with zero means and finite variances. In particular, we show that if $g$ is Riemann integrable and periodic, with period incommensurate with $\alpha$, then $g(t)X(t)$ has an autocovariance in the Wiener sense equal to the product of the Wiener autocovariances of its factors, $C_{gX} = C_g C_X$. Some important cases are examined where the autocovariance of the multiplicative process exists but cannot be obtained multiplicatively.