Similar Documents
20 similar documents found.
1.
The bivalency argument is a widely used technique that employs forward induction to show impossibility results and lower bounds related to consensus. However, for a synchronous distributed system of n processes with up to t potential and f actual crash failures, applying the bivalency argument to prove the lower bound for reaching uniform consensus has remained an open problem. In this paper, we address this problem by presenting a bivalency proof that the lower bound for reaching uniform consensus is f+2 rounds, where 0 ≤ f ≤ t−2.
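For context, this f+2 bound is consistent with the general min(t+1, f+2) round lower bound for synchronous consensus quoted in the next abstract; the following one-line check (added here as an observation, not part of the abstract) shows why the restriction f ≤ t−2 makes f+2 the binding term.

```latex
% With 0 <= f <= t-2, the f+2 term is the smaller one:
\[
  f \le t - 2 \;\Longrightarrow\; f + 2 \le t < t + 1
  \;\Longrightarrow\; \min(t+1,\, f+2) = f + 2 .
\]
```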

2.
For a synchronous distributed system of n processes with up to t potential and f actual crash failures, where t < n−1 and f ≤ t, the time lower bound for a protocol to achieve consensus is min(t+1, f+2) rounds. Currently, most research in this field focuses on the time efficiency of consensus protocols. This paper proposes consensus protocols for synchronous distributed systems that achieve both message and time efficiency. Based on an early stopping consensus protocol for synchronous distributed systems with crash failures, we propose a rotating coordinator scheme that significantly reduces message complexity. However, this protocol is not time efficient because it requires min(t+1, f+3) rounds to reach consensus. Thus, to achieve both time and message efficiency, we propose another protocol in which t+1 coordinators are used to send messages in each round. Furthermore, we show that the proposed consensus protocol with crash failures can be revised to be more message-efficient with orderly crash failures. When a process is able to send more than one message to another in a round, we propose an optimal message-efficient early stopping consensus protocol for synchronous distributed systems with orderly crash failures.
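To make the round structure concrete, here is a minimal simulation of the flood-set style baseline that such protocols build on. It is an illustrative sketch only, not the protocol from the paper: the function name is ours, crashes are assumed to be "clean" (a crashing process sends nothing in its crash round), and the decision rule is simply the minimum value seen. Its all-to-all broadcast costs O(n²) messages per round, which is exactly the cost a rotating-coordinator scheme is designed to reduce.

```python
# Illustrative sketch: round-based synchronous consensus with crash failures.
# Not the paper's protocol; crashes are "clean" and decisions take the minimum.

def flood_set_consensus(values, crash_round, t):
    """values[i]: initial value of process i.
    crash_round[i]: round in which process i crashes, or None if correct.
    Runs t+1 rounds of all-to-all flooding; returns decisions and message count."""
    n = len(values)
    known = [{values[i]} for i in range(n)]
    messages = 0
    for r in range(1, t + 2):                      # rounds 1 .. t+1
        alive = [i for i in range(n)
                 if crash_round[i] is None or crash_round[i] > r]
        outgoing = [(i, set(known[i])) for i in alive]
        messages += len(alive) * (n - 1)           # all-to-all: O(n^2) per round
        for i in alive:
            for _, s in outgoing:
                known[i] |= s
    decisions = {i: min(known[i]) for i in range(n) if crash_round[i] is None}
    return decisions, messages

# 5 processes, at most t = 2 crashes; process 0 crashes in round 1.
decisions, msgs = flood_set_consensus([3, 1, 4, 1, 5],
                                      [1, None, None, None, None], t=2)
print(decisions, msgs)   # all correct processes decide the same value
```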

3.
Continuous consensus (CC) is the problem of maintaining up-to-date and identical copies of a “core” of information about the past at all correct processes in the system (Mizrahi and Moses, 2008 [6]). This is a primitive that supports simultaneous coordination among processes and eliminates the need for issuing separate instances of consensus for different tasks. Recent work has presented new simple and efficient optimum protocols for continuous consensus in the crash and (sending) omissions failure models. For every pattern of failures, these protocols maintain at every point in time a core that subsumes the core maintained by any other continuous consensus protocol. This paper considers the continuous consensus problem in the face of harsher failures: general omissions and authenticated Byzantine failures. Computationally efficient optimum protocols for CC do not exist in these models if P ≠ NP. A variety of CC protocols are presented. The first is a simple protocol that enters every interesting event into the core within t+1 rounds (where t is the bound on the number of failures), provided a majority of the processes are correct. The second is a protocol that achieves similar performance so long as n > t (i.e., there is always guaranteed to be at least one correct process). The final protocol makes use of active failure monitoring and failure detection to include events in the core much faster in many runs of interest. Its performance is established based on a nontrivial property of minimal vertex covers in undirected graphs. The results are adapted to the authenticated Byzantine failure model, in which it is assumed that faulty processes are malicious, but correct processes have unforgeable signatures. Finally, the problem of uniform CC is considered. It is shown that a straightforward version of uniform CC is not solvable in the setting under study. A weaker form of uniform CC is defined, and protocols achieving it are presented.

4.
We study f-resilient services, which are guaranteed to operate as long as no more than f of the associated processes fail. We prove three theorems asserting the impossibility of boosting the resilience of such services. Our first theorem allows any connection pattern between processes and services but assumes these services to be atomic (linearizable) objects. This theorem says that no distributed system in which processes coordinate using f-resilient atomic objects and reliable registers can solve the consensus problem in the presence of f+1 undetectable process stopping failures. In contrast, we show that it is possible to boost the resilience of some systems solving problems easier than consensus: for example, the 2-set-consensus problem is solvable for 2n processes and 2n−1 failures (i.e., wait-free) using n-process consensus services resilient to n−1 failures (wait-free). Our proof is short and self-contained. We then introduce the larger class of failure-oblivious services. These are services that cannot use information about failures, although they may behave more flexibly than atomic objects. An example of such a service is totally ordered broadcast. Our second theorem generalizes the first theorem and its proof to failure-oblivious services. Our third theorem allows the system to contain failure-aware services, such as failure detectors, in addition to failure-oblivious services. This theorem requires that each failure-aware service be connected to all processes; thus, f+1 process failures overall can disable all the failure-aware services. In contrast, it is possible to boost the resilience of a system solving consensus using failure-aware services if arbitrary connection patterns between processes and services are allowed: consensus is solvable for any number of failures using only 1-resilient 2-process perfect failure detectors. As far as we know, this is the first time a unified framework has been used to describe both atomic and non-atomic objects, and the first time boosting analysis has been performed for services more general than atomic objects.

5.
The Δ-timed uniform consensus is a stronger variant of the traditional consensus that satisfies the following additional properties: every correct process terminates its execution within a constant time Δ (Δ-timeliness), and no two processes decide differently (uniformity). In this paper, we consider the Δ-timed uniform consensus problem in the presence of f_c crash processes and f_t timing-faulty processes, and propose a Δ-timed uniform consensus algorithm. The proposed algorithm is adaptive in the following sense: it solves the Δ-timed uniform consensus when at least f_t + 1 correct processes exist in the system. If the system has fewer than f_t + 1 correct processes, the algorithm cannot solve the Δ-timed uniform consensus; however, as long as f_t + 1 processes are non-crashed, the algorithm solves (non-timed) uniform consensus. We also investigate the maximum number of faulty processes that can be tolerated. We show that any Δ-timed uniform consensus algorithm tolerating up to f_t timing-faulty processes requires that the system have at least f_t + 1 correct processes. This impossibility result implies that the proposed algorithm attains the maximum resilience with respect to the number of faulty processes. We also show that any Δ-timed uniform consensus algorithm tolerating up to f_t timing-faulty processes cannot solve the (non-timed) uniform consensus when the system has fewer than f_t + 1 non-crashed processes. This impossibility result implies that our algorithm attains the maximum adaptiveness.
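The adaptiveness case analysis can be summarized as a small decision function; this is only a restatement of the guarantees claimed in the abstract, with parameter names that are ours, not the paper's.

```python
# Restatement of the adaptive guarantee described above (names are ours):
# which consensus guarantee holds for a given failure pattern.

def guarantee(num_correct, num_non_crashed, f_t):
    """num_correct: processes that are neither crashed nor timing-faulty.
    num_non_crashed: processes that have not crashed (may be timing-faulty).
    f_t: maximum number of timing-faulty processes tolerated."""
    if num_correct >= f_t + 1:
        return "Delta-timed uniform consensus"
    if num_non_crashed >= f_t + 1:
        return "uniform consensus (no timing guarantee)"
    return "no guarantee"

print(guarantee(num_correct=4, num_non_crashed=6, f_t=3))  # timed consensus
print(guarantee(num_correct=2, num_non_crashed=5, f_t=3))  # non-timed only
```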

6.
This paper introduces the continuous consensus problem, in which a core M[k] of information is continuously maintained at all correct sites of the system. All local copies of the core must be identical at all times k, and every interesting event should eventually enter the core. The continuous consensus problem is studied in synchronous systems with crash and omission failures, assuming an upper bound of t on the number of failures in any given run of the system. A simple protocol for continuous consensus, called ConCon, is presented. This protocol is knowledge-based: The actions processes take depend explicitly on their knowledge, as well as on their knowledge of what other processes know about failures and about events that occurred in the system. A close connection between continuous consensus and knowledge is established by showing that in every continuous consensus protocol, the information in the core at any given time must be common knowledge. Based on the characterization of common knowledge by Moses and Tuttle, it is shown that ConCon is an optimum protocol for continuous consensus, maintaining the most up-to-date core possible at all times: For every pattern of failures and external inputs and each point in time, the core provided by ConCon contains the cores of all correct protocols for continuous consensus. Indeed, the ConCon protocol can be viewed as a simplification of the Moses and Tuttle construction for computing the common knowledge at a given point. Finally, a uniform version of continuous consensus is considered, in which all processes (faulty and nonfaulty) are guaranteed to maintain the same core at any given time. An algorithm for uniform continuous consensus is presented, and is also shown to be an optimum solution. A preliminary version of this paper appeared in the Proceedings of the TARK X conference, Singapore 2005. Work on this paper was performed in part during a sabbatical at the School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW 2052, Australia, where it was partially supported by ARC Discovery Grant RM02036.

7.
Many problems in distributed computing are impossible to solve when no information about process failures is available. It is common to ask what information about failures is necessary and sufficient to circumvent some specific impossibility, e.g., consensus, atomic commit, mutual exclusion, etc. This paper asks what information about failures is necessary to circumvent any impossibility and sufficient to circumvent some impossibility; in other words, what is the minimal yet non-trivial failure information. We present an abstraction, denoted Υ, that provides very little information about failures. In every run of the distributed system, Υ eventually informs the processes that some set of processes in the system cannot be the set of correct processes in that run. Although seemingly weak (it might provide random information for an arbitrarily long period of time, and it eventually excludes only one set of processes, among many, that is not the set of correct processes in the current run), Υ still captures non-trivial failure information. We show that Υ is sufficient to circumvent the fundamental wait-free set-agreement impossibility. While doing so, (a) we disprove previous conjectures about the weakest failure detector to solve set-agreement, and (b) we prove that solving set-agreement with registers is strictly weaker than solving (n+1)-process consensus using n-process consensus. We show that Υ is the weakest stable non-trivial failure detector: any stable failure detector that circumvents some wait-free impossibility provides at least as much information about failures as Υ does. Our results are generalized, from the wait-free to the f-resilient case, through an abstraction Υ^f that we introduce and prove minimal to solve any problem that cannot be solved in an f-resilient manner, yet sufficient to solve f-resilient f-set-agreement.

8.
We study deterministic gossiping in synchronous systems with dynamic crash failures. Each processor is initialized with an input value called a rumor. In the standard gossip problem, the goal of every processor is to learn all the rumors. When processors may crash, this goal needs to be revised, since it is possible, at a point in an execution, that certain rumors are known only to processors that have already crashed. We define gossiping to be completed, for a system with crashes, when every processor knows either the rumor of processor v or that v has already crashed, for any processor v. We design gossiping algorithms that are efficient with respect to both time and communication. Let t < n be the number of failures, where n is the number of processors. If n − t = Ω(n), then one of our algorithms completes gossiping in O(log² t) time and with O(n polylog n) messages. We also develop an algorithm that performs gossiping with O(n^1.77) messages and in O(log² n) time, in any execution in which at least one processor remains non-faulty. We show a trade-off between time and communication in gossiping algorithms: if the number of messages is at most O(n polylog n), then the time cannot be reduced below a corresponding lower bound. By way of application, we show that if n − t = Ω(n), then consensus can be solved in O(t) time and with O(n log² t) messages.

9.
The k-set agreement problem is a generalization of the consensus problem: considering a system made up of n processes where each process proposes a value, each non-faulty process has to decide a value such that any decided value is a proposed value, and no more than k different values are decided. It has recently been shown that, in the crash failure model, min(⌊f/k⌋+2, ⌊t/k⌋+1) is a lower bound on the number of rounds for the non-faulty processes to decide (where t is an upper bound on the number of process crashes, and f, 0 ≤ f ≤ t, the actual number of crashes).
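A small helper makes the bound concrete; the function name and the sample values are ours, chosen only for illustration.

```python
def kset_round_lower_bound(t, f, k):
    """Lower bound min(floor(f/k)+2, floor(t/k)+1) on the number of rounds
    for k-set agreement with at most t crashes, f of which actually occur."""
    return min(f // k + 2, t // k + 1)

# e.g. with t = 6 potential and f = 2 actual crashes:
print(kset_round_lower_bound(6, 2, 1))  # consensus (k = 1): min(4, 7) = 4
print(kset_round_lower_bound(6, 2, 3))  # 3-set agreement:   min(2, 3) = 2
```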

10.
We consider the gossip problem in a synchronous message-passing system. Participating processors are prone to omission failures, that is, a faulty processor may fail to send or receive a message. The gossip problem in the fault-tolerant setting is defined as follows: every correct processor must learn the initial value of every other processor, unless the other one is faulty; in the latter case, either the initial value or the information about the fault must be learned. We develop two efficient algorithms that solve the gossip problem in time O(log n), where n is the number of processors in the system. The first one is an explicit algorithm (i.e., constructed in polynomial time) sending O(n log n + f²) messages, and the second one reduces the message complexity to O(n + f²), where f is the upper bound on the number of faulty processors.

11.
We prove several results relating injective one-way functions, time-bounded conditional Kolmogorov complexity, and time-bounded conditional entropy. First we establish a connection between injective, strong and weak one-way functions and the expected value of the polynomial time-bounded Kolmogorov complexity, denoted here by E(K^t(x|f(x))). These results go in both directions: we give conditions on E(K^t(x|f(x))) that imply that f is a weak one-way function, and properties of E(K^t(x|f(x))) that are implied by the fact that f is a strong one-way function. In particular, we prove a separation result: based on the concept of time-bounded Kolmogorov complexity, we find an interval in which every function f is necessarily a weak but not a strong one-way function. Then we propose an individual approach to injective one-way functions based on Kolmogorov complexity, defining Kolmogorov one-way functions, and prove some relationships between the new proposal and the classical definition of one-way functions, showing that a Kolmogorov one-way function is also a deterministic one-way function. A relationship between Kolmogorov one-way functions and the conjecture of polynomial-time symmetry of information is also proved. Finally, we relate E(K^t(x|f(x))) and two forms of time-bounded entropy: the unpredictable entropy H^unp, in which the "one-wayness" of a function can be easily expressed, and the Yao+ entropy, a measure based on a compression/decompression scheme in which only the decompressor is restricted to be time-bounded.

12.
We consider the problem of how to schedule t similar and independent tasks to be performed in a synchronous distributed system of p stations communicating via multiple-access channels. Stations are prone to crashes whose patterns of occurrence are specified by adversarial models. Work, defined as the number of available processor steps, is the complexity measure. We consider only reliable algorithms that perform all the tasks as long as at least one station remains operational. It is shown that every reliable algorithm has to perform work Ω(t + p√t) even when no failures occur. An optimal deterministic algorithm for the channel with collision detection is developed, which performs work O(t + p√t). Another algorithm, for the channel without collision detection, performs work O(t + p√t + p·min{f, t}), where f < p is the number of failures. This algorithm is proved to be optimal, provided that the adversary is restricted to failing no more than f stations. Finally, we consider the question of whether randomization helps against weaker adversaries for the channel without collision detection. A randomized algorithm is developed which performs the expected minimum amount O(t + p√t) of work, provided that the adversary may fail a constant fraction of the stations and has to select the failure-prone stations prior to the start of an execution of the algorithm. The work of the first author is supported by NSF Grant 0310503. A preliminary version of this paper appeared as “The do-all problem in broadcast networks,” in Proceedings, 20th ACM Symposium on Principles of Distributed Computing, Newport, Rhode Island, 2001, pp. 117–126.

13.
Hardness amplification results show that for every Boolean function f, there exists a Boolean function Amp(f) such that if every size-s circuit computes f correctly on at most a 1−δ fraction of inputs, then every size-s′ circuit computes Amp(f) correctly on at most a 1/2+ε fraction of inputs. All hardness amplification results in the literature suffer from "size loss", meaning that s′ ≤ ε·s. We show that proofs using "non-uniform reductions" must suffer from such size loss. A reduction is an oracle circuit R^(·) which, given oracle access to any function D that computes Amp(f) correctly on a 1/2+ε fraction of inputs, computes f correctly on a 1−δ fraction of inputs. A non-uniform reduction is allowed to also receive a short advice string that may depend on both f and D. The well-known connection between hardness amplification and list-decodable error-correcting codes implies that reductions showing hardness amplification cannot be uniform for ε < 1/4. We show that every non-uniform reduction must make at least Ω(1/ε) queries to its oracle, which implies size loss. Our result is the first lower bound that applies to non-uniform reductions that are adaptive, whereas previous bounds by Shaltiel & Viola (SICOMP 2010) applied only to non-adaptive reductions. We also prove similar bounds for a stronger notion of "function-specific" reductions in which the reduction is only required to work for a specific function f.

14.
First, (α_t, α_f)-equivalence classes based on Vague equivalence relations are proposed, and (α_t, α_f)-rough sets are defined on the basis of these equivalence classes; it is shown that (α_t, α_f)-rough sets are a generalization of λ-rough sets, and the properties of (α_t, α_f)-equivalence classes and (α_t, α_f)-rough sets are studied. Second, the concepts of the decomposition of (α_t, α_f)-equivalence classes, the decomposition of (α_t, α_f)-rough sets, and the boundary of (α_t, α_f)-rough sets are given. Finally, the decomposition structures, based on Vague equivalence relations, of equivalence classes, rough sets, and rough-set boundaries are obtained.
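As a rough illustration of the kind of construction involved, and only under our own simplified reading rather than the paper's definitions: a Vague relation assigns each pair of elements a truth membership t and a false membership f, thresholding with (α_t, α_f) yields a crisp relation, and its classes support rough-set lower and upper approximations and a boundary region.

```python
# Hypothetical sketch (our reading, not the paper's definitions): threshold a
# Vague relation by (alpha_t, alpha_f) to get a crisp relation, then build
# rough-set lower/upper approximations and the boundary from its classes.
# For simplicity we do not verify that the thresholded relation is an
# equivalence relation.

def threshold_relation(universe, vague, alpha_t, alpha_f):
    """vague[(x, y)] = (t, f): truth and false memberships of relatedness.
    Keep the pair when t >= alpha_t and f <= alpha_f."""
    return {(x, y) for x in universe for y in universe
            if vague[(x, y)][0] >= alpha_t and vague[(x, y)][1] <= alpha_f}

def blocks(universe, relation):
    """Class of each element under the thresholded relation."""
    return {x: {y for y in universe if (x, y) in relation} for x in universe}

def approximations(universe, relation, target):
    cls = blocks(universe, relation)
    lower = {x for x in universe if cls[x] <= target}     # block inside target
    upper = {x for x in universe if cls[x] & target}      # block meets target
    return lower, upper, upper - lower                    # boundary region

# Tiny example universe with a reflexive, symmetric Vague relation.
U = {"a", "b", "c"}
V = {("a", "a"): (1.0, 0.0), ("b", "b"): (1.0, 0.0), ("c", "c"): (1.0, 0.0),
     ("a", "b"): (0.8, 0.1), ("b", "a"): (0.8, 0.1),
     ("a", "c"): (0.3, 0.6), ("c", "a"): (0.3, 0.6),
     ("b", "c"): (0.2, 0.7), ("c", "b"): (0.2, 0.7)}
R = threshold_relation(U, V, alpha_t=0.7, alpha_f=0.2)
print(approximations(U, R, target={"a", "c"}))   # lower, upper, boundary
```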

15.
Due to the multiplicity of loci of control, a main issue distributed systems have to cope with is the uncertainty about the system state created by adversaries such as asynchrony, failures, dynamicity, and mobility. Considering message-passing systems, this paper addresses the uncertainty created by the net effect of asynchrony and process crash failures in systems where the processes are anonymous (i.e., processes have no identity and locally execute the same algorithm). Trivially, agreement problems such as consensus, which cannot be solved in non-anonymous asynchronous systems prone to process failures, cannot be solved either if the system is anonymous. The paper investigates failure detectors that allow processes to circumvent this impossibility. It has several contributions. It first presents four failure detectors (denoted AP, \overline{AP}, AΩ, and AΣ) and shows that they are the "identity-free" counterparts of perfect failure detectors, eventual leader failure detectors, and quorum failure detectors, respectively. AΣ is new, and showing that AΣ and Σ have the same computability power in a non-anonymous system is not trivial. The paper also shows that the notion of failure detector reduction is related to the computation model. Then, the paper presents and proves correct a uniform anonymous consensus algorithm based on the failure detector pair (AΩ, AΣ) ("uniform" means here that not only do processes have no identity, but no process is aware of the total number of processes). This new algorithm is not a simple "straightforward extension" of an algorithm designed for non-anonymous systems. To benefit from AΣ, it uses a novel broadcast facility which encapsulates an AΣ-based message exchange pattern that provides the processes with an interesting intersection property on the set of messages they have exchanged. Finally, the paper discusses the notions of failure detector hierarchy, the weakest failure detector for anonymous consensus, and the implementation of identity-free failure detectors in anonymous systems.

16.
Given two linearly independent matrices in so(3), Z_1 and Z_2, every rotation matrix X_f ∈ SO(3) can be written as the product of alternate elements from the one-dimensional subgroups corresponding to Z_1 and Z_2, namely X_f = e^{Z_1 t_1} e^{Z_2 t_2} e^{Z_1 t_3} ⋯ e^{Z_1 t_s}. The parameters t_i, i = 1, …, s, are called Generalized Euler Angles. In this paper, the minimum number of factors required for the factorization of X_f ∈ SO(3), as a function of X_f, is evaluated. An algorithm is given to determine the generalized Euler angles in the optimal factorization. The results can be applied to the bang-bang control, with a minimum number of switches, of some classical and quantum systems.
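A minimal numeric illustration of such a factorization, for the classical special case Z_1 = L_z, Z_2 = L_y and s = 3 (the ordinary ZYZ Euler angles). The paper treats arbitrary generators and the minimal number of factors, which this sketch does not attempt; it assumes numpy and scipy are available.

```python
# Sketch: factor a rotation R as expm(a*Lz) @ expm(b*Ly) @ expm(c*Lz), i.e. the
# classical ZYZ Euler angles. This only illustrates the s = 3 special case with
# Z1 = Lz, Z2 = Ly; it is not the paper's algorithm for general Z1, Z2.
import numpy as np
from scipy.linalg import expm

Lz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])   # generator of Rz
Ly = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])   # generator of Ry

def zyz_angles(R):
    """Return (a, b, c) with R = expm(a*Lz) @ expm(b*Ly) @ expm(c*Lz).
    Assumes the non-degenerate case 0 < b < pi (no gimbal lock)."""
    b = np.arccos(np.clip(R[2, 2], -1.0, 1.0))
    a = np.arctan2(R[1, 2], R[0, 2])
    c = np.arctan2(R[2, 1], -R[2, 0])
    return a, b, c

# A generic rotation built as the exponential of a random skew-symmetric matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))
R = expm(W - W.T)

a, b, c = zyz_angles(R)
R_rebuilt = expm(a * Lz) @ expm(b * Ly) @ expm(c * Lz)
print(np.allclose(R, R_rebuilt))   # True: three alternating factors suffice here
```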

17.
We provide efficient constructions and tight bounds for shared memory systems accessed by n processes, up to t of which may exhibit Byzantine failures, in a model previously explored by Malkhi et al. [21]. We show that sticky bits are universal in the Byzantine failure model for n ≥ 3t + 1, an improvement over the previous result requiring n ≥ (2t + 1)(t + 1). Our result follows from a new strong consensus construction that uses sticky bits and tolerates t Byzantine failures among n processes for any n ≥ 3t + 1, the best possible bound on n for strong consensus. We also present tight bounds on the efficiency of implementations of strong consensus objects from sticky bits and similar primitive objects. Research supported in part by a grant from the Israel Science Foundation, and by the Hermann Minkowski Minerva Center for Geometry at Tel Aviv University. This work was partially completed while at AT&T Labs and while visiting the Institute for Advanced Study, Princeton, NJ. Research supported in part by US-Israel Binational Science Foundation Grant 2002246. This work was partially completed while visiting AT&T Labs. This work was partially completed while at AT&T Labs. Research supported in part by the National Science Foundation under Grant No. CCR-0331584. A preliminary version of the results presented in this paper appeared in [23].

18.
Traditional Byzantine consensus in distributed systems requires n ≥ 3f + 1, where n is the number of nodes. In this paper, we present a scalable and leaderless Byzantine consensus implementation based on gossip, requiring only n ≥ 2f + 1 nodes. Unlike conventional distributed systems, the network topology of cloud computing systems is often not fully connected, but loosely coupled and layered. Hence, we revisit the Byzantine consensus problem in cloud computing environments, in which each node maintains some number of neighbors, called its local view. The message complexity of our Byzantine consensus scheme is O(n), instead of O(n²). Experimental results and a correctness proof show that our Byzantine consensus scheme can solve the Byzantine consensus problem safely and in a scalable way, without a bottleneck or a leader, in cloud computing environments.

19.
H. Fischer. Computing, 1989, 41(3): 261–265
The paper deals with a special problem in Automatic Differentiation. Let f be a rational function of n variables, let #(f) denote the number of operations to evaluate f(x), and let g denote the gradient of f. Many algorithms for minimizing f(x) require the scalar product g(u)^T v. In the standard method for computing g(u)^T v, the amount of work grows with n·#(f). In this note a new method for computing g(u)^T v is presented. The new method is considerably faster: its amount of work only grows with #(f).
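The quantity g(u)^T v is the directional derivative of f at u in the direction v, and forward-mode automatic differentiation with dual numbers is one standard way to compute it with work proportional to #(f). The sketch below illustrates that generic technique; it is not necessarily the specific construction in the paper, and all names in it are ours.

```python
# Generic forward-mode AD with dual numbers: evaluates f(u) together with the
# directional derivative g(u)^T v in one pass, so the work is O(#(f)) rather
# than O(n * #(f)). Illustrative only; not taken from the paper.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__
    def __truediv__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value / other.value,
                    (self.deriv * other.value - self.value * other.deriv)
                    / (other.value ** 2))

def directional_derivative(f, u, v):
    """Return (f(u), g(u)^T v) by seeding each input x_i with direction v_i."""
    duals = [Dual(ui, vi) for ui, vi in zip(u, v)]
    out = f(*duals)
    return out.value, out.deriv

# Example rational function of three variables.
f = lambda x, y, z: (x * y + z) / (x + 2.0)
val, gv = directional_derivative(f, u=[1.0, 2.0, 3.0], v=[1.0, 0.0, -1.0])
print(val, gv)   # f(u) = 5/3, g(u)^T v = -2/9
```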

20.
An indulgent algorithm is a distributed algorithm that tolerates asynchronous periods of the network when process crash detection is unreliable. This paper presents a tight bound on the time complexity of indulgent consensus algorithms. We consider a round-based eventually synchronous model, and we show that any t-resilient consensus algorithm in this model requires at least t+2 rounds for a global decision, even in runs that are synchronous. We contrast our lower bound with the well-known t+1 round tight bound on consensus in the synchronous model. We then prove the bound to be tight by exhibiting a new t-resilient consensus algorithm in the eventually synchronous model that reaches a global decision at round t+2 in every synchronous run. Our new algorithm is in this sense significantly faster than the most efficient indulgent algorithm we know of, which requires 2t+2 rounds in synchronous runs. Our lower bound applies to round-based consensus algorithms with unreliable failure detectors such as ⋄P and ⋄S, and our matching algorithm can be adapted to such failure detectors. This work is partially supported by the Swiss National Science Foundation (project number 510-207).
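Summarizing the round complexities quoted in this abstract for synchronous runs (a restatement added here for readability, not new results):

```latex
\begin{align*}
\text{synchronous model, classical tight bound:} \quad & t+1 \text{ rounds}\\
\text{eventually synchronous, $t$-resilient lower bound (this paper):} \quad & t+2 \text{ rounds}\\
\text{matching algorithm in synchronous runs (this paper):} \quad & t+2 \text{ rounds}\\
\text{best previously known indulgent algorithm:} \quad & 2t+2 \text{ rounds}
\end{align*}
```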
