Similar Literature (20 results)
1.
The Lempel-Ziv data compression algorithm has the property that, for finite-memory sources, the redundancy ρ_n (defined as the difference between the average code rate and the entropy when the memory size is n) is O(log log n / log n). We suggest a new version of the algorithm with redundancy ρ_n = O(1/log n).

2.
Given n discrete random variables Ω = {X_1, ..., X_n}, associated with any subset α of {1, 2, ..., n} there is a joint entropy H(X_α), where X_α = {X_i : i ∈ α}. This can be viewed as a function defined on the power set of {1, 2, ..., n} taking values in [0, +∞). We call this function the entropy function of Ω. The nonnegativity of the joint entropies implies that this function is nonnegative; the nonnegativity of the conditional joint entropies implies that it is nondecreasing; and the nonnegativity of the conditional mutual information implies that it is two-alternative. These properties are the so-called basic information inequalities of Shannon's information measures. An entropy function can be viewed as a (2^n − 1)-dimensional vector whose coordinates are indexed by the nonempty subsets of the ground set {1, 2, ..., n}. As introduced by Yeung (see ibid., vol. 43, no. 6, p. 1923-34, 1997), Γ_n stands for the cone in R^(2^n − 1) consisting of all vectors that have all these properties. Let Γ_n* be the set of all (2^n − 1)-dimensional vectors that correspond to the entropy functions of some sets of n discrete random variables. A fundamental information-theoretic problem is whether or not Γ̄_n* = Γ_n, where Γ̄_n* stands for the closure of the set Γ_n*. We show that Γ̄_n* is a convex cone, Γ_2* = Γ_2, Γ_3* ≠ Γ_3, but Γ̄_3* = Γ_3. For four random variables, we have discovered a conditional inequality that is not implied by the basic information inequalities of the same set of random variables. This lends evidence to the plausible conjecture that Γ̄_n* ≠ Γ_n for n > 3.
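As an illustration outside the abstract, a minimal Python sketch (with an arbitrary, hypothetical joint distribution) that evaluates the entropy function H(X_α) over all subsets α of a small ground set and checks the basic Shannon inequalities (nonnegativity, monotonicity, submodularity):

```python
import itertools
import math

# Joint pmf of three binary random variables X1, X2, X3 (arbitrary example: X3 = X1 XOR X2).
pmf = {
    (0, 0, 0): 0.25, (0, 1, 1): 0.25,
    (1, 0, 1): 0.25, (1, 1, 0): 0.25,
}
n = 3

def joint_entropy(subset):
    """H(X_alpha) in bits, for alpha a tuple of 0-based coordinate indices."""
    marg = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in subset)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

H = {frozenset(a): joint_entropy(tuple(sorted(a)))
     for r in range(n + 1) for a in itertools.combinations(range(n), r)}

# Basic inequalities: monotone (H(A) <= H(B) for A subset of B) and
# submodular (H(A) + H(B) >= H(A union B) + H(A intersect B)).
for A in H:
    for B in H:
        assert H[A | B] + H[A & B] <= H[A] + H[B] + 1e-12   # submodularity
        if A <= B:
            assert H[A] <= H[B] + 1e-12                      # monotonicity

print({tuple(sorted(k)): round(v, 3) for k, v in H.items()})
```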

3.
We consider joint source-channel coding for a memoryless Gaussian source and an additive white Gaussian noise (AWGN) channel. For a given code defined by an encoder-decoder pair (α, β), its dual code is obtained by interchanging the encoder and decoder: (β, α). It is shown that if a code (α, β) is optimal at rate ρ channel uses per source sample and if it satisfies a certain uniform continuity condition, then its dual code (β, α) is optimal at rate 1/ρ channel uses per source sample. Further, it is demonstrated that there is a code which is optimal but whose dual code is not optimal. Finally, using random coding, we show that there is an optimal code which has an optimal dual. The duality concept is also presented for the cases of (i) a binary memoryless equiprobable source and a binary-symmetric channel (BSC), and (ii) a colored Gaussian source and an additive colored Gaussian noise (ACGN) channel.

4.
Asymptotic redundancies for universal quantum coding
Clarke and Barron (1990, 1994, 1995) have shown that the Jeffreys invariant prior of Bayesian theory yields the common asymptotic (minimax and maximin) redundancy of universal data compression in a parametric setting. We seek a possible analog of this result for two-level quantum systems. We restrict our considerations to prior probability distributions belonging to a certain one-parameter family, q_u, -∞ < u < 1, and to the corresponding 2^n × 2^n (Bayesian density) matrices ζ_n(u). These matrices are the weighted averages (with respect to q_u) of all possible tensor products of n identical 2×2 density matrices representing the two-level quantum systems. We propose a form of universal coding for the situation in which the density matrix describing an ensemble of quantum signal states is unknown. A sequence of n signals would be projected onto the dominant eigenspaces of ζ_n(u).

5.
The magnitude of the filters associated with the Daubechies wavelets _nψ is shown to converge monotonically to an ideal highpass filter as n → ∞. The rate of convergence is also given. The magnitude of each filter is shown to be monotonically increasing on [0, π]. The convergence does not exhibit the Gibbs phenomenon.
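As a quick numerical illustration (not part of the paper), the following sketch assumes the PyWavelets package is installed and evaluates the magnitude response of the Daubechies highpass decomposition filters of increasing order on [0, π]:

```python
import numpy as np
import pywt  # PyWavelets; assumed available

omega = np.linspace(0, np.pi, 9)

for order in (2, 4, 10, 20):
    g = np.array(pywt.Wavelet(f"db{order}").dec_hi)   # highpass decomposition filter
    # Magnitude response |G(e^{j*omega})| = |sum_k g[k] e^{-j*omega*k}|
    mag = np.abs(np.exp(-1j * np.outer(omega, np.arange(len(g)))) @ g)
    print(f"db{order}:", np.round(mag, 3))

# With pywt's normalization, the response approaches 0 on [0, pi/2)
# and sqrt(2) on (pi/2, pi] as the order grows.
```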

6.
Values of the electron ionization coefficient α_n in 〈100〉 GaAs, extending the previously available data by two orders of magnitude, down to 1 cm^-1, are presented. The data are directly extracted from the multiplication factor, M − 1, measured in lightly doped collector n-p-n AlGaAs/GaAs heterojunction bipolar transistors (HBTs). It is shown that the sensitivity of the technique is limited by the Early effect, whose influence can be reduced by driving the device at constant emitter-base bias and by using heavily doped base regions. HBTs can provide simultaneously high base doping and current gain, and therefore represent an excellent tool for these measurements.

7.
Given a general source X = {X^n}_{n=1}^∞, source coding is characterized by a pair (φ_n, ψ_n) of encoder φ_n and decoder ψ_n, together with the probability of error ε_n ≡ Pr{ψ_n(φ_n(X^n)) ≠ X^n}. If the length of the encoder output φ_n(X^n) is fixed, the scheme is called fixed-length source coding, while if the length of the encoder output φ_n(X^n) is variable, it is called variable-length source coding. Usually, in the context of fixed-length source coding the probability of error ε_n is required to vanish asymptotically (i.e., lim_{n→∞} ε_n = 0), whereas in the context of variable-length source coding the probability of error ε_n is required to be exactly zero (i.e., ε_n = 0 for all n = 1, 2, ...). In contrast to these, we consider the problem of variable-length source coding with asymptotically vanishing probability of error (i.e., lim_{n→∞} ε_n = 0), and establish several fundamental theorems on this new subject.

8.
For a rational α ∈ (0, 1), let 𝒜_{n×m,α} be the set of binary n×m arrays in which each row has Hamming weight αm and each column has Hamming weight αn, where αm and αn are integers. (The special case of two-dimensional balanced arrays corresponds to α = 1/2 and even values of n and m.) The redundancy of 𝒜_{n×m,α} is defined by ρ_{n×m,α} = nmH(α) − log_2 |𝒜_{n×m,α}|, where H(x) = −x log_2 x − (1−x) log_2(1−x). Bounds on ρ_{n×m,α} are obtained in terms of the redundancies of the sets 𝒜_{ℒ,α} of all binary ℒ-vectors with Hamming weight αℒ, ℒ ∈ {n, m}. Specifically, it is shown that ρ_{n×m,α} ⩽ nρ_{m,α} + mρ_{n,α}, where ρ_{ℒ,α} = ℒH(α) − log_2 |𝒜_{ℒ,α}|, and that this bound is tight up to an additive term O(n + log m). A polynomial-time coding algorithm is presented that maps unconstrained input sequences into 𝒜_{n×m,α} at a rate H(α) − (ρ_{m,α}/m).
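For a feel of the quantities involved, a small Python sketch (not from the paper; parameter values arbitrary) that evaluates the one-dimensional redundancy ρ_{ℒ,α} = ℒH(α) − log_2 C(ℒ, αℒ) and the resulting upper bound nρ_{m,α} + mρ_{n,α} on ρ_{n×m,α}:

```python
import math

def H(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rho_1d(L, alpha):
    """Redundancy of the set of binary L-vectors of Hamming weight alpha*L."""
    w = round(alpha * L)
    return L * H(alpha) - math.log2(math.comb(L, w))

n, m, alpha = 32, 48, 0.25          # arbitrary example with alpha*n and alpha*m integers
bound = n * rho_1d(m, alpha) + m * rho_1d(n, alpha)
print(f"upper bound on rho_{n}x{m}: {bound:.2f} bits "
      f"(out of nm*H(alpha) = {n * m * H(alpha):.1f} bits)")
```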

9.
The redundancy problem of universal lossy source coding at a fixed rate level is considered. Under some condition on the single-letter distortion measure, which implies that the cardinality K of the reproduction alphabet is not greater than the cardinality J of the source alphabet, it is shown that the redundancy of universally coding memoryless sources p by nth-order block codes of rate R behaves like |(∂/∂R) d(p, R)| K ln n / (2n) + o(ln n / n) for all memoryless sources p except a set whose volume goes to 0 as the block length n goes to infinity, where d(p, R) denotes the distortion-rate function of p. Specifically, for any sequence {C_n}_{n=1}^∞ of block codes, where C_n is an nth-order block code at the fixed rate R, and any ϵ > 0, the redundancy D_n(C_n, p) of C_n for p is greater than or equal to |(∂/∂R) d(p, R)| (K − ϵ) ln n / (2n) for all p satisfying some regularity conditions, except a set whose volume goes to 0 as n → ∞. On the other hand, there exists a sequence {C_n}_{n=1}^∞ of block codes at rate R such that for any p satisfying some regularity conditions, the limit superior of D_n(C_n, p)/(ln n / n) is less than or equal to |(∂/∂R) d(p, R)| K/2.

10.
The reduction of the current amplification factor of a wide-base transistor with growing doping concentration in the base region is investigated. A method for the determination of the minority-carrier lifetime τ_n in the base region and the emitter Gummel number G_e is developed. The method is based on transistor structures differing only in the base width. It was found that the lifetime τ_n decreases according to the power law τ_n ∝ N_A^{-0.45}. This result is analyzed for different recombination processes. Good agreement is obtained if shallow impurities acting as recombination centers are assumed. The injection-limited current gain βγ decreases significantly with an increase in the total doping concentration of the base, reaches a broad maximum, and then falls slowly. The maximum value of G_e is found to be 1.1×10^14 cm^-4·s, in good agreement with theoretical results. Finally, the contributions of the injection efficiency γ and the transport factor α_T to the current gain α are determined. It is found that α is limited mainly by the injection efficiency γ.

11.
Multiterminal hypothesis testing is considered, subject to the exponential-type constraint α_n ⩽ exp(−nr) on the error probability of the first kind. The problem is to determine the minimum β_n* of the error probability of the second kind under the given constraint, at limited rates R_1 and R_2 for observing the respective pairs of variables. Good lower bounds on the power exponent for β_n* are presented by invoking basic properties of r-divergent sequences. In particular, the one-bit and the zero-rate compression cases are considered. The power exponent for the former and a lower bound for the latter are established.

12.
A function h(w) is said to be useful for the coding theorem if the coding theorem remains true when the lengths |w| of codewords w in it are replaced with h(w). For a codeword w = a_0 a_1 ... a_{m−1} of length m and an infinite sequence Q = (q_0, q_1, q_2, ...) of real numbers such that 0 < q_n ⩽ 1/2, let |w|_Q denote the value Σ_{n=0}^{m−1} (if a_n = 0 then −log_2 q_n, else −log_2(1 − q_n)), that is, −log_2 (the probability that the coin flips generate w), assuming that the (i+1)th coin generates a tail (or 0) with probability q_i. It is known that if 0 < lim_{n→∞} q_n then |w|_Q is useful for the coding theorem, and that if lim_{n→∞} q_n/(1/(2n)) = 0 then |w|_Q is not useful. We introduce several variations of the coding theorem and show sufficient conditions for h(w) not to be useful for these variations. The usefulness is also defined for the expressions that define the Kolmogorov complexity and the universal distribution. We consider the relation among the usefulness for the coding theorem, that for the Kolmogorov complexity, and that for the universal distribution.
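A direct transcription of the definition of |w|_Q, purely for illustration (the codeword and the sequence Q below are arbitrary examples):

```python
import math

def length_Q(w, q):
    """|w|_Q = -log2 Pr(coin flips generate w), where the (i+1)th coin
    produces 0 with probability q[i] and 1 with probability 1 - q[i]."""
    return sum(-math.log2(q[i]) if bit == 0 else -math.log2(1 - q[i])
               for i, bit in enumerate(w))

w = [0, 1, 1, 0, 1]
# q_n = 1/2 for all n reduces |w|_Q to the ordinary length |w|.
print(length_Q(w, [0.5] * len(w)))      # 5.0
print(length_Q(w, [0.25] * len(w)))     # a biased coin gives a different "length"
```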

13.
2^n modified prime codes are designed for all-optical code-division multiple access (CDMA) networks using very simple encoders and decoders. The proposed code is obtained from an original 2^n prime code of prime number P by padding P − 1 zeros into each `subsequence' of the codewords of the corresponding 2^n prime code. The cross-correlation constraint of the resulting 2^n modified prime code is equal to one, as opposed to two for a 2^n prime code. For a given bit error rate (BER), the proposed code can thus be used to support a larger number of active users in a fibre-optic CDMA network than a 2^n prime code. Moreover, using the former can also reduce code length and weight compared with employing the latter to achieve the same BER.
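The 2^n prime code construction itself is not spelled out in the abstract; the sketch below only illustrates the zero-padding idea on the classical prime-sequence code over GF(P) (an assumption, not the 2^n variant described here) and compares maximum periodic cross-correlations before and after padding:

```python
import numpy as np

P = 5  # prime

def prime_code(P):
    """Classical prime code: codeword i has a 1 at position j*P + (i*j mod P)."""
    codes = np.zeros((P, P * P), dtype=int)
    for i in range(P):
        for j in range(P):
            codes[i, j * P + (i * j) % P] = 1
    return codes

def pad(codes, P, z):
    """Append z zeros after every length-P subsequence of each codeword."""
    n_codes, length = codes.shape
    out = np.zeros((n_codes, (P + z) * (length // P)), dtype=int)
    for j in range(length // P):
        out[:, j * (P + z): j * (P + z) + P] = codes[:, j * P:(j + 1) * P]
    return out

def max_cross_corr(codes):
    """Maximum periodic cross-correlation over distinct codewords and all shifts."""
    best = 0
    for a in range(len(codes)):
        for b in range(len(codes)):
            if a == b:
                continue
            for s in range(codes.shape[1]):
                best = max(best, int(np.dot(codes[a], np.roll(codes[b], s))))
    return best

orig = prime_code(P)
padded = pad(orig, P, P - 1)
print("max cross-correlation, original:", max_cross_corr(orig))
print("max cross-correlation, padded:  ", max_cross_corr(padded))
```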

14.
In the above-named work (see ibid., vol. 11, p. 113-15, March 1990), Hui et al. proposed a method to measure impact ionization current in GaAs MESFETs and evaluated the impact ionization coefficient α_n in GaAs. For electric fields greater than approximately 1.5×10^5 V·cm^-1, α_n can be fitted to the equation α_n = 4.0×10^6 × exp(−2.3×10^6/E). In the present work, the commenters performed careful measurements of the gate current I_g in GaAs MESFET devices similar to those used by Hui et al., and they show that the ionization coefficient still fits the above equation down to α_n = 10^-4 cm^-1. These results extend the previous data by three orders of magnitude. In a reply, the original authors affirm that the commenters have significantly improved the accuracy of the data previously presented.
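As a quick numerical check (illustrative only), evaluating the quoted fit α_n = 4.0×10^6 exp(−2.3×10^6/E) at a few field strengths:

```python
import math

def alpha_n(E):
    """Quoted fit for the electron ionization coefficient in GaAs; E in V/cm."""
    return 4.0e6 * math.exp(-2.3e6 / E)

for E in (1.5e5, 2.0e5, 3.0e5):
    print(f"E = {E:.1e} V/cm -> alpha_n = {alpha_n(E):.3g} 1/cm")
```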

15.
The error locations for an algebraic-geometric code C*(D, mP) are exactly the common zeros (that is, a projective variety V(I)) of a set (ideal) I of error-locator functions. The paper gives a one-dimensional Berlekamp-Massey version of the Feng-Rao (1993) algorithm for decoding algebraic-geometric codes C*(D, mP). This produces a generating set for I (as an ideal) of size at most ρ (the smallest positive pole order at P of any function in L(mP)) relative to any error of weight at most e < δ_m*/2, where δ_m* := m − 2g + 2 is the designed minimum distance of the code. The algorithm requires at most c(ρm^2 + Nρm + ρ^2 m) field multiplications, with c a small constant and N a small constant function of the curve. The error positions are then given exactly as the common zeros of the generator functions of the error-locator ideal I.

16.
Adaptive recovery of a chirped signal using the RLS algorithm
This paper studies the performance of the recursive least squares (RLS) algorithm in the presence of a general chirped signal and additive white noise. The chirped signal, which is a moving average (MA) signal deterministically shifted in frequency at rate ψ, can be used to model a frequency shift in a received signal. General expressions for the optimum Wiener-Hopf coefficients, the one-step recovery and estimation errors, the noise and lag misadjustments, and the optimum adaptation constant (β_opt) are found in terms of the parameters of the stationary MA signal. The output misadjustment is shown to be composed of a noise term (ξ_0 Mβ/2) and a lag term (κψ^2/β^2), and the optimum adaptation constant is proportional to the chirp rate as ψ^{2/3}. The special case of a chirped first-order autoregressive (AR1) process with correlation α is used to illustrate the effect of the bandwidth (1/α) of the chirped signal on the adaptation parameters. It is shown that, unlike for the chirped tone, where β_opt increases with the filter length M, the adaptation constant reaches a maximum for M near the inverse of the signal correlation (1/α). Furthermore, there is an optimum filter length for tracking the chirped signal, and this length is less than 1/α.

17.
Fast conversion between binary and residue numbers
Bi, G.; Jones, E.V. Electronics Letters, 1988, 24(19): 1195-1197
New, efficient hardware implementations are considered which perform code conversions between the three-moduli (2^n − 1, 2^n, 2^n + 1) residue number system and its binary representation. Significant hardware savings together with high-speed throughput are achieved by using only n-bit and (n+1)-bit binary adders.
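The paper is about hardware adder structures; the sketch below only shows the underlying arithmetic of the two conversions in software, assuming n = 4 as an example:

```python
from math import prod

n = 4
moduli = (2**n - 1, 2**n, 2**n + 1)   # pairwise coprime: 15, 16, 17
M = prod(moduli)                       # dynamic range 4080

def to_residue(x):
    """Binary (integer) -> residue representation."""
    return tuple(x % m for m in moduli)

def to_binary(residues):
    """Residue -> integer via the Chinese remainder theorem."""
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m) is the modular inverse
    return x % M

x = 2739
assert to_binary(to_residue(x)) == x
print(x, "->", to_residue(x))
```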

18.
The authors study the ability of the exponentially weighted recursive least squares (RLS) algorithm to track a complex chirped exponential signal buried in additive white Gaussian noise (power P_n). The signal is a sinusoid whose frequency is drifting at a constant rate Ψ. It is recovered using an M-tap adaptive predictor. Five principal aspects of the study are presented: the methodology of the analysis; proof of the quasi-deterministic nature of the data-covariance estimate R(k); a new analysis of RLS for an inverse system modeling problem; a new analysis of RLS for a deterministic time-varying model of the optimum filter; and an evaluation of the residual output mean-square error (MSE) resulting from the nonoptimality of the adaptive predictor (the misadjustment) in terms of the forgetting rate β of the RLS algorithm. It is shown that the misadjustment is dominated by a lag term of order β^{-2} and a noise term of order β. Thus, a value β_opt exists which yields a minimum misadjustment. It is proved that β_opt = {(M+1)ρΨ^2}^{1/3}, and that the minimum misadjustment is equal to (3/4)P_n(M+1)β_opt, where ρ is the input signal-to-noise ratio (SNR).
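Purely as an illustration of the quoted closed forms (all parameter values below are arbitrary):

```python
# Illustrative evaluation of the closed-form expressions quoted above.
M = 16            # number of predictor taps
rho = 100.0       # input SNR (20 dB)
Pn = 0.01         # noise power
Psi = 1e-4        # frequency drift rate

beta_opt = ((M + 1) * rho * Psi**2) ** (1.0 / 3.0)
min_misadjustment = 0.75 * Pn * (M + 1) * beta_opt

print(f"beta_opt = {beta_opt:.4g}")
print(f"minimum misadjustment = {min_misadjustment:.4g}")
```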

19.
The minimum-redundancy prefix code problem is to determine, for a given list W = [ω_1, ..., ω_n] of n positive symbol weights, a list L = [l_1, ..., l_n] of n corresponding integer codeword lengths such that Σ_{i=1}^n 2^{-l_i} ⩽ 1 and Σ_{i=1}^n ω_i l_i is minimized. Let us consider the case where W is already sorted. In this case, the output list L can be represented by a list M = [m_1, ..., m_H], where m_l, for l = 1, ..., H, denotes the multiplicity of the codeword length l in L, and H is the length of the longest codeword. Fortunately, H is proved to be O(min(log(1/p_1), n)), where p_1 is the smallest symbol probability, given by ω_1/Σ_{i=1}^n ω_i. We present the Fast LazyHuff (F-LazyHuff), the Economical LazyHuff (E-LazyHuff), and the Best LazyHuff (B-LazyHuff) algorithms. F-LazyHuff runs in O(n) time but requires O(min(H^2, n)) additional space. On the other hand, E-LazyHuff runs in O(n + n log(n/H)) time, requiring only O(H) additional space. Finally, B-LazyHuff asymptotically improves on the previous algorithms, requiring only O(n) time and O(H) additional space. Moreover, our three algorithms have the advantage of not writing over the input buffer during code calculation, a feature that is very useful in some applications.
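Not the LazyHuff algorithms themselves, just a plain Huffman-based sketch that computes optimal codeword lengths and the multiplicity representation M = [m_1, ..., m_H] for a small sorted weight list:

```python
import heapq
from collections import Counter

def optimal_lengths(weights):
    """Codeword lengths of a minimum-redundancy (Huffman) prefix code."""
    heap = [(w, [i]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    lengths = [0] * len(weights)
    while len(heap) > 1:
        w1, s1 = heapq.heappop(heap)
        w2, s2 = heapq.heappop(heap)
        for i in s1 + s2:              # every merge adds one bit to these symbols
            lengths[i] += 1
        heapq.heappush(heap, (w1 + w2, s1 + s2))
    return lengths

W = [1, 1, 2, 3, 5, 8, 13]             # already sorted, as assumed in the paper
L = optimal_lengths(W)
counts = Counter(L)
H_max = max(L)
M = [counts.get(l, 0) for l in range(1, H_max + 1)]
print("lengths:", L, " M:", M, " Kraft sum:", sum(2**-l for l in L))
```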

20.
A neural network that retrieves stored binary vectors, when probed by possibly corrupted versions of them, is presented. It employs sparse ternary internal coding and autocorrelation (Hebbian) storage. It is symmetrically structured and, consequently, can be folded into a feedback configuration. Bounds on the network parameters are derived from probabilistic considerations. It is shown that when the input dimension is n, the proportional activation radius is ρ, and the network size is 2^{νn} with ν > 1 − h_2(ρ), the equilibrium capacity is at least 2^{αn}/(8nρ(1−ρ)) for any α < 1 − h_2(ρ), where h_2(·) is the binary entropy function. A similar capacity bound is derived for the correction of errors of proportional size ρ or less, when ρ ⩽ 0.3. The performance of a finite-size symmetric network is examined by simulation and found to exceed, at the cost of higher connectivity, that of the Kanerva (1988) model operating as a content-addressable memory.
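To make the capacity bound concrete (arbitrary parameter values, for illustration only):

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, rho = 64, 0.1
alpha = 1 - h2(rho) - 0.05                 # any alpha < 1 - h2(rho)
capacity_bound = 2 ** (alpha * n) / (8 * n * rho * (1 - rho))
print(f"h2(rho) = {h2(rho):.3f}, alpha = {alpha:.3f}, "
      f"capacity lower bound ~ {capacity_bound:.3e}")
```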
