Search results: 7 articles in Automation Technology, published 2003–2015.
1.
We first define the basic notions of local and non-local tasks for distributed systems. Intuitively, a task is local if, in a system with no failures, each process can compute its output value locally by applying some local function to its own input value (so the output value of each process depends only on the process's own input value, not on the input values of the other processes); a task is non-local otherwise. All the interesting distributed tasks, including all those that have been investigated in the literature (e.g., consensus, set agreement, renaming, atomic commit, etc.), are non-local. In this paper we consider non-local tasks and determine the minimum information about failures that is necessary to solve such tasks in message-passing distributed systems. As part of this work, we also introduce weak set agreement, a natural weakening of set agreement, and show that, in some precise sense, it is the weakest non-local task in message-passing systems.
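To make the local/non-local distinction concrete, here is a minimal Python sketch, not taken from the paper: it contrasts a local task, where each process applies a function to its own input, with consensus, where every output depends on the other processes' inputs. The specific functions and values are illustrative assumptions.

```python
# A rough illustration (not from the paper) of the local / non-local distinction.
# In a failure-free run, a *local* task lets each process compute its output from
# its own input alone; consensus cannot be solved this way, because all outputs
# must be equal yet valid, so each output depends on other processes' inputs.

def local_task_output(my_input):
    # Example of a local task: each process outputs the parity of its own input.
    # No communication is needed in a failure-free run.
    return my_input % 2

def consensus_outputs(inputs):
    # Consensus (a non-local task): every process must decide the same value,
    # and that value must be someone's input. A purely local rule cannot
    # guarantee this, since agreement hinges on the inputs of *other* processes.
    decision = inputs[0]                 # e.g., everyone adopts process 0's input
    return [decision] * len(inputs)

print(local_task_output(5))              # 1 -- computed from the process's own input
print(consensus_outputs([5, 7, 7]))      # [5, 5, 5] -- identical decisions
```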
2.
At the heart of distributed computing lies the fundamental result that the level of agreement that can be obtained in an asynchronous shared memory model where t processes can crash is exactly t + 1. In other words, an adversary that can crash any subset of size at most t can prevent the processes from agreeing on t values. But what about all the other $2^{2^n - 1} - (n+1)$ adversaries that are not uniform in this sense and might crash certain combinations of processes and not others? This paper presents a precise way to classify all adversaries. We introduce the notion of disagreement power: the largest integer k for which the adversary can prevent processes from agreeing on k values. We show how to compute the disagreement power of an adversary and derive n equivalence classes of adversaries.
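As a rough numeric illustration of the count quoted above, the sketch below assumes that an adversary is modeled as an arbitrary collection of non-empty crash-prone subsets of the n processes, and that the n + 1 uniform adversaries are the threshold ones for t = 0, ..., n. This is one plausible reading of the formula $2^{2^n - 1} - (n+1)$, not the paper's formal definition.

```python
# Back-of-the-envelope count consistent with the formula 2^(2^n - 1) - (n + 1),
# under the assumed reading described above. Purely illustrative.

def count_nonuniform_adversaries(n):
    nonempty_subsets = 2 ** n - 1             # non-empty subsets of n processes
    all_adversaries = 2 ** nonempty_subsets   # any collection of faulty sets
    uniform = n + 1                           # threshold adversaries t = 0 .. n
    return all_adversaries - uniform

for n in (2, 3, 4):
    print(n, count_nonuniform_adversaries(n))
# Already for n = 3 there are 2^7 - 4 = 124 non-uniform adversaries.
```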
3.
We introduce a new model of partial synchrony for read-write shared memory systems. This model is based on the simple notion of set timeliness, a natural generalization of the seminal concept of timeliness in the partial synchrony model of Dwork et al. (J. ACM 35(2):288–323, 1988). Despite its simplicity, the concept of set timeliness is powerful enough to define a family of partially synchronous systems that closely match individual instances of the t-resilient k-set agreement problem among n processes, henceforth denoted (t, k, n)-agreement. In particular, we use it to give a partially synchronous system that is synchronous enough for solving (t, k, n)-agreement, but not enough for solving two incrementally stronger problems, namely (t + 1, k, n)-agreement, which has a slightly stronger resiliency requirement, and (t, k − 1, n)-agreement, which has a slightly stronger agreement requirement. This is the first partially synchronous system that separates these sub-consensus problems. The above results show that set timeliness can be used to study and compare the partial synchrony requirements of problems that are strictly weaker than consensus.
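The following sketch is only meant to make the (t, k, n) parameters concrete: it checks the two standard safety conditions of k-set agreement (every decided value was proposed, and at most k distinct values are decided) on sample runs. It says nothing about termination, resiliency, or set timeliness itself, and the sample values are invented.

```python
# Safety check for k-set agreement, to make the (t, k, n) parameters concrete:
# n processes propose values, at most t may crash, and the decided values must
# (a) all be proposed values and (b) contain at most k distinct values.

def satisfies_k_set_agreement(proposals, decisions, k):
    validity = all(d in proposals for d in decisions)   # every decision was proposed
    agreement = len(set(decisions)) <= k                # at most k distinct decisions
    return validity and agreement

# (t, k, n) = (1, 2, 4): 4 processes, up to 1 crash, at most 2 distinct decisions.
print(satisfies_k_set_agreement([3, 5, 5, 9], [3, 5, 5], k=2))  # True
print(satisfies_k_set_agreement([3, 5, 5, 9], [3, 5, 9], k=2))  # False: 3 values
```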
4.
The Global Data Computation problem consists of providing each process with the same vector (with one entry per process) such that each entry is filled with a value provided by the corresponding process. This paper presents a protocol that solves this problem in an asynchronous distributed system where processes can crash but that is equipped with a perfect failure detector. This protocol requires that processes execute asynchronous computation rounds. The number of rounds is upper bounded by min(f+2, t+1, n), where n, t, and f denote the total number of processes, the maximum number of processes that can crash, and the number of processes that actually crash, respectively. This value is a lower bound for the number of rounds when t …
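A quick numeric reading of the round bound stated in the abstract, assuming 0 ≤ f ≤ t ≤ n; the sample values below are arbitrary.

```python
# The protocol's round complexity, as stated above: at most min(f + 2, t + 1, n)
# asynchronous rounds with n processes, at most t allowed crashes, f actual crashes.

def round_bound(n, t, f):
    assert 0 <= f <= t <= n
    return min(f + 2, t + 1, n)

print(round_bound(n=10, t=5, f=0))   # 2 -- no crashes: two rounds suffice
print(round_bound(n=10, t=5, f=4))   # 6 -- capped by t + 1
print(round_bound(n=3,  t=3, f=3))   # 3 -- capped by n
```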
5.
So far, the distributed computing community has either assumed that all the processes of a distributed system have distinct identifiers or, more rarely, that the processes are anonymous and have no identifiers. These are two extremes of the same general model: namely, $n$ processes use $\ell$ different identifiers, where $1 \le \ell \le n$. In this paper, we ask how many identifiers are actually needed to reach agreement in a distributed system with $t$ Byzantine processes. We show that having $3t+1$ identifiers is necessary and sufficient for agreement in the synchronous case but, more surprisingly, the number of identifiers must be greater than $\frac{n+3t}{2}$ in the partially synchronous case. This demonstrates two differences from the classical model (which has $\ell = n$): there are situations where relaxing synchrony to partial synchrony renders agreement impossible; and, in the partially synchronous case, increasing the number of correct processes can actually make it harder to reach agreement. The impossibility proofs use the fact that a Byzantine process can send multiple messages to the same recipient in a round. We show that removing this ability makes agreement easier: then, $t+1$ identifiers are sufficient for agreement, even in the partially synchronous model, assuming processes can count the number of messages with the same identifier that they receive in a round.
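Plugging sample values into the two bounds quoted above; this is a small illustrative calculation, with n and t chosen arbitrarily.

```python
# Identifier bounds from the abstract: with n processes and t Byzantine ones,
# 3t + 1 identifiers are necessary and sufficient in the synchronous case,
# while the partially synchronous case needs strictly more than (n + 3t) / 2.

def identifier_bounds(n, t):
    synchronous = 3 * t + 1                   # exact threshold
    partially_synchronous = (n + 3 * t) / 2   # must be strictly exceeded
    return synchronous, partially_synchronous

n, t = 10, 2
sync_need, partial_lb = identifier_bounds(n, t)
print(sync_need)    # 7 identifiers suffice when the system is synchronous
print(partial_lb)   # > 8.0, i.e. at least 9 identifiers, under partial synchrony
```

For these values the partially synchronous case already needs at least 9 identifiers versus 7 in the synchronous case, which is exactly the counter-intuitive gap the abstract points to.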
6.
We study the feasibility and cost of implementing Ω, a fundamental failure detector at the core of many algorithms, in systems with weak reliability and synchrony assumptions. Intuitively, Ω allows processes to eventually elect a common leader. We first give an algorithm that implements Ω in a weak system S where (a) except for some unknown timely process s, all processes may be arbitrarily slow or may crash, and (b) only the output links of s are eventually timely (all other links can be arbitrarily slow and lossy). Previously known algorithms for Ω worked only in systems that are strictly stronger than S in terms of reliability or synchrony assumptions. We next show that algorithms that implement Ω in system S are necessarily expensive in terms of communication complexity: all correct processes (except possibly one) must send messages forever; moreover, a quadratic number of links must carry messages forever. This result holds even for algorithms that tolerate at most one crash. Finally, we show that with a small additional assumption to system S, namely the existence of some unknown correct process whose links can be arbitrarily slow and lossy but fair, there is a communication-efficient algorithm for Ω such that eventually only one process (the elected leader) sends messages. Some recent experimental results indicate that two of the algorithms for Ω described in this paper can be used in dynamically changing systems and work well in practice [Schiper, Toueg in Proceedings of the 38th International Conference on Dependable Systems and Networks, pp. 207–216 (2008)]. This paper was originally invited to the special issue of Distributed Computing based on selected papers presented at the 22nd ACM Symposium on Principles of Distributed Computing (PODC 2003); it appears separately due to publication delays. Research supported in part by the Natural Sciences and Engineering Research Council of Canada.
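For readers unfamiliar with Ω, here is a generic, timeout-based leader-oracle sketch in the style of textbook implementations. It is emphatically not the paper's algorithm for system S (which relies on far weaker assumptions); the heartbeat interface, timeout value, and smallest-identifier rule are invented for illustration.

```python
# Generic sketch of an Omega-like leader oracle (NOT the paper's algorithm for
# system S): each process records when it last heard from every other process,
# drops those that have been silent too long, and trusts the smallest identifier
# among the processes it still considers alive. Under enough synchrony the
# suspicions stabilize and all correct processes eventually trust the same leader.

import time

class OmegaSketch:
    def __init__(self, my_id, all_ids, timeout=2.0):
        self.my_id = my_id
        self.timeout = timeout
        self.last_heard = {p: time.monotonic() for p in all_ids}

    def on_heartbeat(self, sender_id):
        # Called whenever a heartbeat message from sender_id is delivered.
        self.last_heard[sender_id] = time.monotonic()

    def leader(self):
        now = time.monotonic()
        alive = [p for p, ts in self.last_heard.items() if now - ts <= self.timeout]
        alive.append(self.my_id)        # a process always trusts itself
        return min(alive)               # deterministic choice: smallest identifier

omega = OmegaSketch(my_id=2, all_ids=[0, 1, 2, 3])
omega.on_heartbeat(0)
print(omega.leader())                   # 0, as long as heartbeats from 0 keep arriving
```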