Similar documents
 Found 20 similar documents (search time: 343 ms)
1.
Extended Fault-Tolerant Cycle Embedding in Faulty Hypercubes
We consider fault-tolerant embedding, where an $n$-dimensional faulty hypercube, denoted by $Q_n$, acts as the host graph, and the longest fault-free cycle represents the guest graph. Let $F_v$ be a set of faulty nodes in $Q_n$. Also, let $F_e$ be a set of faulty edges in which at least one end-node of each edge is faulty, and let $\mathcal{F}_e$ be a set of faulty edges in which both end-nodes of each edge are fault-free. An edge in $Q_n$ is said to be critical if it is either fault-free or in $F_e$. In this paper, we prove that there exists a fault-free cycle of length at least $2^n - 2|F_v|$ in $Q_n$ ($n \geq 3$) with $|\mathcal{F}_e| \leq 2n-5$ and $|F_v| + |\mathcal{F}_e| \leq 2n-4$, in which each node is incident to at least two critical edges. Our result improves on the previously best known results reported in the literature, where only faulty nodes or faulty edges are considered.

2.
The minimum-redundancy prefix code problem is to determine, for a given list $W=[w_1,\ldots,w_n]$ of $n$ positive symbol weights, a list $L=[l_1,\ldots,l_n]$ of $n$ corresponding integer codeword lengths such that $\sum_{i=1}^{n} 2^{-l_i} \leq 1$ and $\sum_{i=1}^{n} w_i l_i$ is minimized. Let us consider the case where $W$ is already sorted. In this case, the output list $L$ can be represented by a list $M=[m_1,\ldots,m_H]$, where $m_l$, for $l=1,\ldots,H$, denotes the multiplicity of the codeword length $l$ in $L$, and $H$ is the length of the greatest codeword. Fortunately, $H$ is proved to be $O(\min(\log(1/p_1), n))$, where $p_1$ is the smallest symbol probability, given by $w_1/\sum_{i=1}^{n} w_i$. We present the Fast LazyHuff (F-LazyHuff), the Economical LazyHuff (E-LazyHuff), and the Best LazyHuff (B-LazyHuff) algorithms. F-LazyHuff runs in $O(n)$ time but requires $O(\min(H^2, n))$ additional space. On the other hand, E-LazyHuff runs in $O(n + n\log(n/H))$ time, requiring only $O(H)$ additional space. Finally, B-LazyHuff asymptotically improves on both previous algorithms, requiring only $O(n)$ time and $O(H)$ additional space. Moreover, our three algorithms have the advantage of not writing over the input buffer during code calculation, a feature that is very useful in some applications.
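To make the length-multiplicity representation M = [m_1, ..., m_H] concrete, the following sketch builds codeword lengths with an ordinary heap-based Huffman construction (an illustrative stand-in, not the LazyHuff variants of the abstract; all function names are ours) and compacts the length list L into M:

```python
import heapq

def codeword_lengths(weights):
    """Optimal prefix-code lengths via plain Huffman merging.
    (Illustrative stand-in for the abstract's algorithms.)"""
    n = len(weights)
    if n == 1:
        return [1]
    heap = [(w, i) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    parent = {}          # child index -> parent index
    nxt = n              # fresh indices for internal nodes
    while len(heap) > 1:
        w1, i1 = heapq.heappop(heap)
        w2, i2 = heapq.heappop(heap)
        parent[i1] = parent[i2] = nxt
        heapq.heappush(heap, (w1 + w2, nxt))
        nxt += 1

    def depth(i):        # codeword length = depth of leaf i in the tree
        d = 0
        while i in parent:
            i = parent[i]
            d += 1
        return d

    return [depth(i) for i in range(n)]

def multiplicities(lengths):
    """Compact representation M = [m_1, ..., m_H] of the length list."""
    H = max(lengths)
    return [lengths.count(l) for l in range(1, H + 1)]
```

For W = [1, 1, 2, 3, 5] this yields L = [4, 4, 3, 2, 1], so M = [1, 1, 1, 2] with H = 4, and the Kraft sum Σ 2^(−l_i) equals 1.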

3.
Computation time for various primitive operations, such as broadcasting and global sum, can significantly increase when there are node failures in a hypercube. In this paper we develop nearly optimal algorithms for computing important basic problems on a faulty SIMD hypercube. Our parallel machine model is an n-dimensional SIMD hypercube Q_n with up to n−1 node faults. In an SIMD hypercube, during a communication step, nodes can exchange information with their neighbors only across a specific dimension. We use the concept of a free dimension to develop our algorithms, where a free dimension is defined to be a dimension i such that at least one end node of any i-dimension link is nonfaulty. In an n-cube with f < n faults, it is known that there exist at least n−f+1 free dimensions. Using free dimensions, we show that broadcasting and global sum can be performed in n+5 steps, thereby improving upon the previously known algorithms for these primitives. The broadcasting algorithms work independently of the locations of the source node and the faulty nodes. This revised version was published online in June 2006 with corrections to the cover date.
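The free-dimension concept above is easy to check directly: a dimension i is free precisely when no i-dimension link joins two faulty nodes. A minimal sketch (integer node labels and the function name are our choices, not the paper's):

```python
def free_dimensions(n, faulty):
    """Return the free dimensions of an n-cube Q_n given a set of faulty
    nodes.  Nodes are integers 0 .. 2**n - 1; the i-dimension link joins
    x and x ^ (1 << i).  Dimension i is free iff no such link has two
    faulty end nodes."""
    faulty = set(faulty)
    return [i for i in range(n)
            if all((x ^ (1 << i)) not in faulty for x in faulty)]
```

With f < n faults the abstract's bound guarantees at least n − f + 1 free dimensions; e.g. in Q_3 with faulty nodes {0, 1} (which share the dimension-0 link), dimensions 1 and 2 are free.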

4.
In distributed database systems, commit protocols are used to ensure transaction atomicity. In the presence of failures, nonblocking commit protocols can guarantee transaction atomicity without blocking the transaction execution. A (resilient) decentralized nonblocking commit protocol (RDCP) is proposed for distributed database systems. This protocol is based on the hypercube network topology and is (⌊log₂(N)⌋ − 2)-resilient to node failures (N = number of system nodes). The number of messages sent among the N nodes is O(N·log₂²(N)), which is only a factor of log₂(N) over the message-complexity lower bound O(N·log₂(N)) of decentralized commit protocols. Furthermore, RDCP is an optimistic nonblocking protocol: it aborts the transaction only when some nodes want to abort or some nodes fail before they make local decisions.

5.
This paper suggests a stochastic model for determining a fault margin in a computer system that fails when the total number of hidden faults exceeds a tolerance threshold N: faults occur according to a non-homogeneous Poisson process, and each fault (i) causes system failure with probability p1, (ii) becomes a hidden fault with probability p2 and is accumulated, or (iii) is removed with probability p3. The expected cost rate to system failure is derived, and an optimal number N* that minimizes it is discussed. A numerical example is finally given.
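For intuition, the three-way fault outcome can be simulated. The sketch below is our illustration only: it assumes homogeneous arrivals (the paper uses a non-homogeneous Poisson process) and counts fault events rather than time; p3 = 1 − p1 − p2 is implicit.

```python
import random

def faults_to_failure(p1, p2, N, rng):
    """Count fault events until system failure: a fault causes immediate
    failure w.p. p1, accumulates as a hidden fault w.p. p2 (failure once
    more than N are accumulated), and is removed otherwise (w.p. p3)."""
    hidden = 0
    count = 0
    while True:
        count += 1
        u = rng.random()
        if u < p1:             # (i) immediate system failure
            return count
        if u < p1 + p2:        # (ii) hidden fault accumulates
            hidden += 1
            if hidden > N:     # tolerance threshold exceeded
                return count
        # (iii) otherwise the fault is removed and nothing accumulates
```

Averaging `faults_to_failure` over many runs for several values of N gives a numerical handle on how the threshold trades off against the failure probabilities.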

6.
This letter suggests a modified priority scheduling policy, called the dual queue length threshold (DQLT) method, for the asynchronous transfer mode (ATM) multiplexer. In the DQLT method there are two queues: (1) Q1 for nonreal-time traffic and (2) Q2 for real-time traffic, and each queue has its own threshold to adaptively control buffer congestion. If Q1 is congested beyond its threshold T1, one cell at the head of Q1 moves into Q2 per slot time. It is shown that the DQLT method gives performance intermediate between those of the minimum laxity threshold (MLT) and queue length threshold (QLT) policies, while its control method is much simpler.
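One slot of the DQLT rule can be sketched as follows. Only the threshold transfer rule is taken from the letter; serving Q2 with strict priority and the per-slot arrival handling are our assumptions for illustration:

```python
from collections import deque

def dqlt_slot(q1, q2, t1, arrivals_nrt=0, arrivals_rt=0):
    """One time slot of a DQLT-style multiplexer sketch.  q1 holds
    nonreal-time cells, q2 holds real-time cells; if q1 exceeds its
    threshold t1, its head cell is moved into q2.  Returns the served
    cell (or None if both queues are empty)."""
    q1.extend("nrt" for _ in range(arrivals_nrt))
    q2.extend("rt" for _ in range(arrivals_rt))
    if len(q1) > t1:              # Q1 congested beyond T1
        q2.append(q1.popleft())   # move one head cell into Q2
    if q2:                        # assumed: Q2 served with priority
        return q2.popleft()
    return q1.popleft() if q1 else None
```
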

7.
An estimator Ê(d_n, n) of the conditional expectation E[X_{n+1} | X_n, ..., X_{n−d_n+1}] in a centered, stationary, and ergodic Gaussian process {X_i} with absolutely summable Wold coefficients is constructed on the basis of having observed X_1, ..., X_n. For a suitable choice of the length d_n → ∞ (n → ∞) of the past covered by the conditional expectation, it is established that |Ê(d_n, n) − E[X_{n+1} | X_n, ..., X_{n−d_n+1}]| → 0 with probability 1. In addition, sufficient conditions are given for |E[X_{n+1} | X_n, X_{n−1}, ...] − E[X_{n+1} | X_n, ..., X_{n−d_n+1}]| → 0 to hold with probability 1, that is, conditions under which Ê(d_n, n) can be used as a strongly consistent forecaster for E[X_{n+1} | X_n, X_{n−1}, ...].

8.
Let a q-ary linear (n, k) code C be used over a memoryless channel. We design a decoding algorithm Ψ_N that splits the received block into two halves in n different ways. First, about √N error patterns are found on either half. Then the left- and right-hand lists are sorted and matched to form codewords. Finally, the most probable codeword is chosen among the at most n√N codewords obtained in all n trials. The algorithm can be applied to any linear code C and has complexity of order n³√N. For any N ⩾ q^(n−k), the decoding error probability P_N exceeds the probability P_Ψ(C) of maximum-likelihood decoding by a factor of at most 1 + q^(n−k)/N. For code rates R ⩾ 1/2, the complexity order q^((n−k)/2) grows as the square root of the general trellis complexity q^min{n−k, k}. When used on quantized additive white Gaussian noise (AWGN) channels, the algorithm Ψ_N can provide maximum-likelihood decoding for most binary linear codes even when N has an exponential order of q^(n−k).

9.
This paper develops a Markovian jump model to describe fault occurrences in a three-joint manipulator robot. The model includes the changes of operation points and the probability that a fault occurs in an actuator. After a fault, the robot works as a manipulator with free joints. Based on the developed model, a comparative study among three Markovian controllers, H₂, H∞, and mixed H₂/H∞, is presented, applied to an actual manipulator robot subject to one and two consecutive faults.

10.
The blowing-up lemma says that if the probability, with respect to a product measure, of a set A ⊆ 𝒳ⁿ (𝒳 finite, n large) is not exponentially small, then its l_n-neighborhood has probability almost one for some l_n = o(n). Here an information-theoretic proof of the blowing-up lemma, generalizing it to continuous alphabets, is given.

11.
Replicating sensors is desirable, not only to tolerate sensor failures, but also to increase the average accuracy of the ensemble of replicated sensors beyond that obtainable with a single sensor. Such replication is used in a multi-sensor environment or in a distributed-sensor network. Following Marzullo (1990), the authors model a continuous-valued sensor as an interval of real numbers containing the physical value of interest. Given n sensors of which at most f can suffer arbitrary failures, this paper presents an efficient O(n·log(n)) fault-tolerant algorithm (J/FTA) whose output is reliable (guaranteed to contain the correct value at all times) and is fairly accurate when f < (n+1)/2. The output of J/FTA can be either a single interval or a set of intervals, depending on the nature of the multi-sensor environment. J/FTA can be used not only to detect all possibly-faulty sensors but also to detect all sets (combinations) of possibly-faulty sensors. This paper proves the following results pertaining to the possibly-faulty sensors identified by J/FTA: the number of sets each containing f possibly-faulty sensors is at most (f+1); the number of sets each containing f or fewer faulty sensors is at most (2f+1); and the number of possibly-faulty sensors identified by J/FTA is at most 2f. These results help to: narrow the search to detect faulty sensors; bound the number of intervals needed to construct an accurate and reliable abstract sensor; and identify at least one correct sensor.
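For intuition, the Marzullo-style interval fusion that the paper builds on can be sketched as a sweep over interval endpoints. This is an illustrative reimplementation of the single-interval output, not the paper's J/FTA algorithm:

```python
def fuse_intervals(intervals, f):
    """Smallest interval containing every point that lies in at least
    n - f of the n input intervals (each a (lo, hi) pair).  With at most
    f arbitrarily faulty sensors, this output is guaranteed to contain
    the true physical value."""
    n = len(intervals)
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))
        events.append((hi, -1))
    events.sort(key=lambda e: (e[0], -e[1]))   # open before close at ties
    best_lo = best_hi = None
    count = 0
    for x, step in events:
        count += step
        if step == +1 and count >= n - f and best_lo is None:
            best_lo = x                        # first point reaching n - f
        if step == -1 and count == n - f - 1 and best_lo is not None:
            best_hi = x                        # last point leaving n - f
    return (best_lo, best_hi)
```

For intervals [(1, 5), (2, 7), (6, 9)] and f = 1, the points covered by at least two sensors form [2, 5] ∪ [6, 7], so the fused single-interval output is (2, 7).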

12.
For a rational α ∈ (0,1), let 𝒜_{n×m,α} be the set of binary n×m arrays in which each row has Hamming weight αm and each column has Hamming weight αn, where αm and αn are integers. (The special case of two-dimensional balanced arrays corresponds to α = 1/2 and even values of n and m.) The redundancy of 𝒜_{n×m,α} is defined by ρ_{n×m,α} = nmH(α) − log₂|𝒜_{n×m,α}|, where H(x) = −x log₂x − (1−x) log₂(1−x). Bounds on ρ_{n×m,α} are obtained in terms of the redundancies of the sets 𝒜_{ℒ,α} of all binary ℒ-vectors with Hamming weight αℒ, ℒ ∈ {n, m}. Specifically, it is shown that ρ_{n×m,α} ⩽ nρ_{m,α} + mρ_{n,α}, where ρ_{ℒ,α} = ℒH(α) − log₂|𝒜_{ℒ,α}|, and that this bound is tight up to an additive term O(n + log m). A polynomial-time coding algorithm is presented that maps unconstrained input sequences into 𝒜_{n×m,α} at a rate H(α) − ρ_{m,α}/m.
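For tiny parameters the cardinality |𝒜_{n×m,α}|, and hence the redundancy, can be checked by brute-force enumeration. This is purely illustrative (exponential time), unlike the paper's polynomial-time coder; the function names are ours:

```python
from itertools import combinations, product
from math import log2

def count_balanced(n, m, row_w, col_w):
    """Number of n x m binary arrays whose rows all have Hamming weight
    row_w and whose columns all have weight col_w.  Rows are encoded as
    bitmasks; feasible only for very small n and m."""
    rows = [sum(1 << j for j in c) for c in combinations(range(m), row_w)]
    total = 0
    for arr in product(rows, repeat=n):
        if all(sum((r >> j) & 1 for r in arr) == col_w for j in range(m)):
            total += 1
    return total

def redundancy(n, m, alpha):
    """rho_{n x m, alpha} = n*m*H(alpha) - log2 |A_{n x m, alpha}|."""
    h = -alpha * log2(alpha) - (1 - alpha) * log2(1 - alpha)
    return n * m * h - log2(count_balanced(n, m, int(alpha * m), int(alpha * n)))
```

For n = m = 4 and α = 1/2 there are 90 such arrays, so ρ = 16 − log₂90 ≈ 9.51 bits.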

13.
A practical suboptimal (variable source coding) algorithm for lossy data compression is presented. This scheme is based on approximate string matching, and it naturally extends the lossless Lempel-Ziv (1977) data compression scheme. Among other things, we consider the typical length of an approximately repeated pattern within the first n positions of a stationary mixing sequence when a fraction D of mismatches is allowed. We prove that there exists a constant r₀(D) such that the length of such an approximately repeated pattern converges in probability to (1/r₀(D)) log n, but almost surely it oscillates between (1/r₋∞(D)) log n and (2/r₁(D)) log n, where r₋∞(D) > r₀(D) > r₁(D)/2 are some constants. These constants are natural generalizations of Rényi entropies to the lossy environment. More importantly, we show that the compression ratio of a lossy data compression scheme based on such approximate pattern matching is asymptotically equal to r₀(D). We also establish the asymptotic behavior of the so-called approximate waiting time N_l, which is defined as the time until a pattern of length l repeats approximately for the first time. We prove that (log N_l)/l → r₀(D) in probability as l → ∞. In general, r₀(D) > R(D), where R(D) is the rate-distortion function. Thus, for stationary mixing sequences, we settle in the negative the problem investigated by Steinberg and Gutman by showing that a lossy extension of the Wyner-Ziv (1989) scheme cannot be optimal.
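The central quantity, the longest pattern that approximately repeats with at most a fraction D of mismatches, can be computed by brute force on short strings. An illustrative sketch only (cubic time), not the paper's compression scheme:

```python
def longest_approx_repeat(s, D):
    """Longest l such that some length-l substring of s reappears,
    starting at a different position, with at most D * l mismatches
    (Hamming distance; the two occurrences may overlap)."""
    n = len(s)
    best = 0
    for i in range(n):
        for j in range(i + 1, n):
            mismatches = 0
            for l in range(1, min(n - i, n - j) + 1):
                if s[i + l - 1] != s[j + l - 1]:
                    mismatches += 1
                if mismatches <= D * l:   # within the allowed fraction
                    best = max(best, l)
    return best
```

On "abcabd" the longest exact repeat is "ab" (length 2), while allowing D = 0.34 admits the pair "abc"/"abd" (one mismatch in three symbols), giving length 3.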

14.
Fault tolerance through the incorporation of redundancy and reconfiguration is quite common. The distribution of faults can have a severe impact on the effectiveness of any reconfiguration scheme; in fact, patterns of faults occurring at strategic locations may render an entire system unusable regardless of its component redundancy and its reconfiguration capabilities. Testing for catastrophic fault patterns has previously been studied for reconfigurable arrays with 2-link redundancy; i.e., a bypass link of fixed length is provided to each element of the array in addition to the regular link.

In this paper, we study the more general case of arbitrary (but regular) link redundancy. In particular, we focus on the problem of deciding whether a pattern of k faults is catastrophic for a k-link redundant system; i.e., in addition to the regular link of length 1, each element of the array is provided with k−1 bypass links of length 2, 3, …, k, respectively.

We study this problem and prove some fundamental properties which any catastrophic fault pattern must satisfy. We then show that these properties together constitute a necessary and sufficient condition for a fault pattern to be catastrophic for a k-link redundant system. As a consequence, we derive a provably correct testing algorithm whose worst-case time complexity is O(k^k); this also improves on the previous algorithm for k = 2.


15.
This paper presents a novel application of recurrent neural networks (RNNs) to fault-tolerant control (FTC) of automated sequential manufacturing systems (ASMS) subject to sensor faults. Two RNNs are employed: the first acts as an I/O-relations recognizer and is able to detect faulty sensors, and the second is used as an inverse model of the ASMS to compute the desired control action in a faulty case according to nominal specifications. The learning process of these networks is carried out using training data generated from the healthy manufacturing system controlled by a programmable logic controller (PLC). The proposed fault-tolerant control system (FTCS) scheme is built from the two RNNs, a reconfigurable controller, and a fault-decision subsystem. The design procedure of the proposed FTCS is introduced. The proposed FTCS has been implemented and tested experimentally on a benchmark industrial ASMS subject to single or multiple faulty sensors. Experimental results show the effectiveness of the procedure for a real, simple plant. In addition, the results demonstrate the following features of the proposed FTCS: (a) it effectively improves the behavior of the faulty control system, (b) it functions properly in handling single and multiple sensor faults, (c) it identifies the sensor faults, and (d) it reduces the complexity of hardware redundancy.

16.
A processor is any self-contained computer of at least personal-computer capability. The paper explores how much the processor mean time-to-failure can be improved by replacing it with an N-processor module, where each processor in the module consists of a copy of the original processor augmented with a communication protocol unit. The copy of the original processor is faulty with probability p_c, and the protocol unit is faulty with probability p. The asynchronous N-processor module uses a Byzantine agreement (F-ID-P) algorithm to identify which of its processors disagreed with a module consensus. The identified processors are presumed faulty, and the module replaces them with duplicates from a set of standbys. The F-ID-P algorithm is a modification of Bracha's, which guarantees that in a module of 3t+1 processors, up to t faults can be identified by at least t+1 non-faulty processors. The module fails if faults in more than t of its processors prevent it from: 1) obtaining a correct consensus, or 2) executing the algorithm. The F-ID-P algorithm departs from Bracha's by using a random rather than an adversarial scheduler of message delays. Simulation showed that the F-ID-P algorithm almost always correctly identified all of a module's faulty processors if more than half of its processors were nonfaulty. Thus the F-ID-P algorithm was about 3/2 times as fault tolerant as guaranteed. Also, compared to a single processor's mean number of decisions to failure, the F-ID-P module was 841 times better when N=37, down to 5.1 times better when N=10.

17.
The phase transformation and stability of TiSi₂ on n⁺ diffusions are investigated. Narrower n⁺ diffusions require higher anneal temperatures, or longer anneal times, than wider diffusions for complete transformation from the high-resistivity C49 phase to the low-resistivity C54 phase. A model is presented which explains this in terms of the probability of forming C54 nuclei on narrow diffusions and the influence of diffusion width on C54 grain size. The result is that more C49 and C54 nucleation events are required to completely transform narrow lines. For thin TiSi₂ (40 nm), there is a narrow process window for achieving complete transformation without causing agglomeration of the TiSi₂. The process window decreases with decreasing silicide thickness. A significantly larger process window is achieved with short-time rapid annealing. Similar studies are performed for CoSi₂ on n⁺ and p⁺ diffusions. No linewidth dependence is observed for the transformation from CoSiₓ to CoSi₂. There is a broad process window from 575°C to 850°C using furnace annealing, for which the low-resistivity phase is obtained without causing agglomeration.

18.
Informally, an error-correcting code has "nice" list-decodability properties if every Hamming ball of "large" radius has a "small" number of codewords in it. We report linear codes with nontrivial list-decodability: i.e., codes of large rate that are nicely list-decodable, and codes of large distance that are not nicely list-decodable. Specifically, on the positive side, we show that there exist codes of rate R and block length n that have at most c codewords in every Hamming ball of radius H⁻¹(1−R−1/c)·n. This answers the main open question from the work of Elias (1957). This result also has consequences for the construction of concatenated codes of good rate that are list-decodable from a large fraction of errors, improving previous results of Guruswami and Sudan (see IEEE Trans. Inform. Theory, vol. 45, p. 1757-67, Sept. 1999, and Proc. 32nd ACM Symp. Theory of Computing (STOC), Portland, OR, p. 181-190, May 2000) in this vein. Specifically, for every ε > 0, we present a polynomial-time constructible asymptotically good family of binary codes of rate Ω(ε⁴) that can be list-decoded in polynomial time from up to a fraction (1/2−ε) of errors, using lists of size O(ε⁻²). On the negative side, we show that for every δ and c, there exist τ < δ, c₁ > 0, and an infinite family of linear codes {C_i}_i such that if n_i denotes the block length of C_i, then C_i has minimum distance at least δ·n_i and contains more than c₁·n_i^c codewords in some Hamming ball of radius τ·n_i. While this result is still far from known bounds on the list-decodability of linear codes, it is the first to bound the "radius for list-decodability by a polynomial-sized list" away from the minimum distance of the code.

19.
A universal predictor based on pattern matching
We consider a universal predictor based on pattern matching. Given a sequence X₁, ..., X_n drawn from a stationary mixing source, it predicts the next symbol X_{n+1} based on selecting a context of X_{n+1}. The predictor, called the sampled pattern matching (SPM) predictor, is a modification of the Ehrenfeucht-Mycielski (1992) pseudorandom generator algorithm. It predicts the value of the most frequent symbol appearing at the so-called sampled positions. These positions follow the occurrences of a fraction of the longest suffix of the original sequence that has another copy inside X₁X₂···X_n; that is, in SPM, the context selection consists of taking a certain fraction of the longest match. The study of the longest match for lossless data compression was initiated by Wyner and Ziv in their 1989 seminal paper. Here, we estimate the redundancy of the SPM universal predictor; that is, we prove that the probability that the SPM predictor makes worse decisions than the optimal predictor is O(n^(−ν)) for some 0 < ν < ½ as n → ∞. As a matter of fact, we show that we can predict K = O(1) symbols with the same probability of error.

20.
The problem of fixed-rate block quantization of an unbounded real memoryless source is studied. It is proved that if the source has a finite sixth moment, then there exists a sequence of quantizers Q_n of increasing dimension n and fixed rate R such that the mean squared distortion Δ(Q_n) is bounded as Δ(Q_n) ⩽ D(R) + O(√(log n / n)), where D(R) is the distortion-rate function of the source. Applications of this result include the evaluation of the distortion redundancy of fixed-rate universal quantizers, and the generalization to the non-Gaussian case of a result of Wyner on the transmission of a quantized Gaussian source over a memoryless channel.
