Similar Literature
20 similar documents found.
1.
So far, the distributed computing community has either assumed that all the processes of a distributed system have distinct identifiers or, more rarely, that the processes are anonymous and have no identifiers. These are two extremes of the same general model: namely, $n$ processes use $\ell$ different identifiers, where $1 \le \ell \le n$. In this paper, we ask how many identifiers are actually needed to reach agreement in a distributed system with $t$ Byzantine processes. We show that having $3t+1$ identifiers is necessary and sufficient for agreement in the synchronous case but, more surprisingly, the number of identifiers must be greater than $\frac{n+3t}{2}$ in the partially synchronous case. This demonstrates two differences from the classical model (which has $\ell = n$): there are situations where relaxing synchrony to partial synchrony renders agreement impossible; and, in the partially synchronous case, increasing the number of correct processes can actually make it harder to reach agreement. The impossibility proofs use the fact that a Byzantine process can send multiple messages to the same recipient in a round. We show that removing this ability makes agreement easier: then, $t+1$ identifiers are sufficient for agreement, even in the partially synchronous model, assuming processes can count the number of messages with the same identifier they receive in a round.
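As a quick, hedged illustration of the thresholds stated above (our own helper, not code from the paper; the function name is invented), the sketch below checks, for given n, t and a number of identifiers, which of the two quoted bounds is met.

```python
def agreement_feasible(n: int, t: int, num_ids: int) -> dict:
    """Check the identifier thresholds quoted in the abstract.

    n       -- total number of processes
    t       -- number of Byzantine processes
    num_ids -- number of distinct identifiers used (1 <= num_ids <= n)
    """
    assert 1 <= num_ids <= n
    return {
        # Synchronous case: 3t + 1 identifiers are necessary and sufficient.
        "synchronous": num_ids >= 3 * t + 1,
        # Partially synchronous case: strictly more than (n + 3t) / 2 identifiers are needed.
        "partially_synchronous": num_ids > (n + 3 * t) / 2,
    }

# With n = 10 and t = 2: 7 identifiers suffice synchronously, but (n + 3t)/2 = 8 requires 9.
print(agreement_feasible(10, 2, 7))   # {'synchronous': True, 'partially_synchronous': False}
print(agreement_feasible(10, 2, 9))   # {'synchronous': True, 'partially_synchronous': True}
```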

2.
Wireless ad-hoc networks are being increasingly used in diverse contexts, ranging from casual meetings to disaster recovery operations. A promising approach is to model these networks as distributed systems prone to dynamic communication failures. This captures transitory disconnections in communication due to phenomena like interference and collisions, and permits an efficient use of the wireless broadcasting medium. This model, however, is bound by the impossibility result of Santoro and Widmayer, which states that, even with strong synchrony assumptions, there is no deterministic solution to any non-trivial form of agreement if $n-1$ or more messages can be lost per communication round in a system with $n$ processes. In this paper we propose a novel way to circumvent this impossibility result by employing randomization. We present a consensus protocol that ensures safety in the presence of an unrestricted number of omission faults, and guarantees progress in rounds where such faults are bounded by $f \le \lceil \frac{n}{2} \rceil (n-k)+k-2$, where $k$ is the number of processes required to decide, eventually assuring termination with probability 1.
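The progress condition above is purely arithmetic, so a small hedged helper (ours, not part of the protocol) can evaluate it for a given round.

```python
import math

def progress_guaranteed(n: int, k: int, faults: int) -> bool:
    """True if a round's omission faults stay within the bound
    f <= ceil(n/2) * (n - k) + k - 2 quoted in the abstract,
    where n is the number of processes and k the number required to decide."""
    bound = math.ceil(n / 2) * (n - k) + k - 2
    return faults <= bound

# Example: n = 7, k = 4 gives a per-round bound of 4 * 3 + 4 - 2 = 14 lost messages.
print(progress_guaranteed(7, 4, 14))  # True
print(progress_guaranteed(7, 4, 15))  # False
```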

3.
In this paper we study the problem of building a constant-degree connected dominating set (CCDS), a network structure that can be used as a communication backbone, in the dual graph radio network model (Clementi et al. in J Parallel Distrib Comput 64:89–96, 2004; Kuhn et al. in Proceedings of the international symposium on principles of distributed computing 2009, Distrib Comput 24(3–4):187–206, 2011, Proceedings of the international symposium on principles of distributed computing 2010). This model includes two types of links: reliable, which always deliver messages, and unreliable, which sometimes fail to deliver messages. Real networks compensate for this differing quality by deploying low-layer detection protocols to filter unreliable from reliable links. With this in mind, we begin by presenting an algorithm that solves the CCDS problem in the dual graph model under the assumption that every process $u$ is provided with a local link detector set consisting of every neighbor connected to $u$ by a reliable link. The algorithm solves the CCDS problem in $O\left(\frac{\varDelta \log^2 n}{b} + \log^3 n\right)$ rounds, with high probability, where $\varDelta$ is the maximum degree in the reliable link graph, $n$ is the network size, and $b$ is an upper bound in bits on the message size. The algorithm works by first building a Maximal Independent Set (MIS) in $\log^3 n$ time, and then leveraging the local topology knowledge to efficiently connect nearby MIS processes. A natural follow-up question is whether the link detector must be perfectly reliable to solve the CCDS problem. With this in mind, we first describe an algorithm that builds a CCDS in $O(\varDelta \cdot \text{polylog}(n))$ time under the assumption of $O(1)$ unreliable links included in each link detector set. We then prove this algorithm to be (almost) tight by showing that the possible inclusion of only a single unreliable link in each process's local link detector set is sufficient to require $\varOmega(\varDelta)$ rounds to solve the CCDS problem, regardless of message size. We conclude by discussing how to apply our algorithm in the setting where the topology of reliable and unreliable links can change over time.
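For intuition about the two-phase structure (build an MIS, then connect nearby MIS nodes), here is a centralized, sequential sketch on the reliable-link graph. It is not the paper's randomized distributed algorithm, it does not enforce the constant-degree property, and the greedy order and three-hop connection rule are standard textbook simplifications.

```python
from collections import deque

def greedy_mis(adj):
    """Greedy maximal independent set on an undirected graph {node: set(neighbors)}."""
    mis, blocked = set(), set()
    for v in adj:                      # any fixed order; the distributed version is randomized
        if v not in blocked:
            mis.add(v)
            blocked |= adj[v] | {v}
    return mis

def connect_mis(adj, mis):
    """Connect MIS nodes that are within 3 hops by adding the internal nodes of a
    shortest path between them; on a connected graph this yields a connected
    dominating set, since the MIS already dominates every node."""
    cds = set(mis)
    for u in mis:
        parent, dist = {u: None}, {u: 0}     # BFS from u up to depth 3
        q = deque([u])
        while q:
            x = q.popleft()
            if dist[x] == 3:
                continue
            for y in adj[x]:
                if y not in dist:
                    dist[y], parent[y] = dist[x] + 1, x
                    q.append(y)
        for v in mis:
            if v != u and v in dist:         # add the internal path nodes between u and v
                w = parent[v]
                while w is not None and w != u:
                    cds.add(w)
                    w = parent[w]
    return cds

# Tiny example: the path 0-1-2-3-4.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
mis = greedy_mis(adj)
print(mis, connect_mis(adj, mis))            # {0, 2, 4} and the full vertex set
```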

4.
Set agreement is a fundamental problem in distributed computing in which processes collectively choose a small subset of values from a larger set of proposals. The impossibility of fault-tolerant set agreement in asynchronous networks is one of the seminal results in distributed computing. In synchronous networks, too, the complexity of set agreement has been a significant research challenge that has now been resolved. Real systems, however, are neither purely synchronous nor purely asynchronous. Rather, they tend to alternate between periods of synchrony and periods of asynchrony. Nothing specific is known about the complexity of set agreement in such a “partially synchronous” setting. In this paper, we address this challenge, presenting the first (asymptotically) tight bound on the complexity of set agreement in such systems. We introduce a novel technique for simulating, in a fault-prone asynchronous shared memory, executions of an asynchronous and failure-prone message-passing system in which some fragments appear synchronous to some processes. We use this simulation technique to derive a lower bound on the round complexity of set agreement in a partially synchronous system by a reduction from asynchronous wait-free set agreement. Specifically, we show that every set agreement protocol requires at least $\lfloor\frac{t}{k}\rfloor + 2$ synchronous rounds to decide. We present an (asymptotically) matching algorithm that relies on a distributed asynchrony detection mechanism to decide as soon as possible during periods of synchrony. From these two results, we derive the size of the minimal window of synchrony needed to solve set agreement. By relating synchronous, asynchronous and partially synchronous environments, our simulation technique is of independent interest. In particular, it allows us to obtain a new lower bound on the complexity of early-deciding k-set agreement complementary to that of Gafni et al. (in SIAM J. Comput. 40(1):63–78, 2011), and to re-derive the combinatorial topology lower bound of Guerraoui et al. (in Theor. Comput. Sci. 410(6–7):570–580, 2009) in an algorithmic way.

5.
Thanks to precision engineering technology, motors, especially the Permanent Magnet Synchronous Motor (PMSM), can now be made as micro-motors. However, any imprecision in the rotor parts of such a delicate motor can lead to undesirable vibration and acoustic noise (Yu et al. in 3D influence of unbalanced magnetic pull induced by misalignment rotor in PMSM, APMRC2012, 2012; Bi et al. in Influence of axial asymmetrical rotor in PMAC motor operation, ICEMS, 2011; Bi et al. in Influence of rotor eccentricity to unbalanced-magnetic-pull in pm synchronous motor, ICEMS06, 2006). This paper presents the design of five types of rotor faults and conducts a vibration study of these five fault types in the motor. Based on the vibration patterns, fuzzy mathematics is employed to classify the five types of rotor faults.
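As a toy, hedged illustration of fuzzy classification from vibration data, the sketch below scores a single hypothetical vibration feature against triangular membership functions, one per fault class, and picks the class with the highest membership. The feature, the fault labels and all membership parameters are invented for illustration; they are not the paper's models.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b and equal to zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical membership functions of one normalized vibration feature, one per rotor-fault class.
FAULT_MODELS = {
    "fault_type_1": (0.00, 0.10, 0.20),
    "fault_type_2": (0.15, 0.30, 0.45),
    "fault_type_3": (0.40, 0.55, 0.70),
    "fault_type_4": (0.65, 0.80, 0.95),
    "fault_type_5": (0.90, 1.05, 1.20),
}

def classify(feature_value):
    """Max-membership rule: return the fault class with the highest membership degree."""
    degrees = {name: tri(feature_value, *abc) for name, abc in FAULT_MODELS.items()}
    best = max(degrees, key=degrees.get)
    return best, round(degrees[best], 3)

print(classify(0.33))   # ('fault_type_2', 0.8)
```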

6.
We consider the problem of leader election (LE) in single-hop radio networks with synchronized time slots for transmitting and receiving messages. We assume that the actual number n of processes is unknown, while the size u of the ID space is known, but is possibly much larger. We consider two types of collision detection: strong (SCD), whereby all processes detect collisions, and weak (WCD), whereby only non-transmitting processes detect collisions. We introduce loneliness detection (LD) as a key subproblem for solving LE in WCD systems. LD informs all processes whether the system contains exactly one process or more than one. We show that LD captures the difference in power between SCD and WCD, by providing an implementation of SCD over WCD and LD. We present two algorithms that solve deterministic and probabilistic LD in WCD systems with time costs of $\mathcal{O}(\log \frac{u}{n})$ and $\mathcal{O}(\min(\log \frac{u}{n}, \frac{\log(1/\epsilon)}{n}))$, respectively, where $\epsilon$ is the error probability. We also provide matching lower bounds. Assuming LD is solved, we show that SCD systems can be emulated in WCD systems with factor-2 overhead in time. We present two algorithms that solve deterministic and probabilistic LE in SCD systems with time costs of $\mathcal{O}(\log u)$ and $\mathcal{O}(\min(\log u, \log \log n + \log(\frac{1}{\epsilon})))$, respectively, where $\epsilon$ is the error probability. We provide matching lower bounds.

7.
It is conjectured that the only way a failure detector (FD) can help solve n-process tasks is by providing k-set consensus for some $k\in\{1,\ldots,n\}$ among all the processes. It was recently shown by Zieliński that any FD that allows for solving a given n-process task that is unsolvable read-write wait-free, also solves $(n-1)$-set consensus. In this paper, we provide a generalization of Zieliński's result. We show that any FD that solves a colorless task that cannot be solved read-write k-resiliently, also solves k-set consensus. More generally, we show that every colorless task $\mathcal{T}$ can be characterized by its set consensus number: the largest $k\in\{1,\ldots,n\}$ such that $\mathcal{T}$ is solvable $(k-1)$-resiliently. A task $\mathcal{T}$ with set consensus number k is, in the failure detector sense, equivalent to k-set consensus, i.e., an FD solves $\mathcal{T}$ if and only if it solves k-set consensus. As a corollary, we determine the weakest FD for solving k-set consensus in every environment, i.e., for all assumptions on when and where failures might occur.

8.
Two identical (anonymous) mobile agents start from arbitrary nodes of an unknown tree and have to meet at some node. Agents move in synchronous rounds: in each round an agent can either stay at the current node or move to one of its neighbors. We consider deterministic algorithms for this rendezvous task. The main result of this paper is a tight trade-off between the optimal time of completing rendezvous and the size of memory of the agents. For agents with $k$ memory bits, we show that the optimal rendezvous time is $\Theta(n+n^2/k)$ in $n$-node trees. More precisely, if $k \ge c\log n$, for some constant $c$, we design agents accomplishing rendezvous in arbitrary trees of size $n$ (unknown to the agents) in time $O(n+n^2/k)$, starting with arbitrary delay. We also show that no pair of agents can accomplish rendezvous in time $o(n+n^2/k)$, even in the class of lines of known length and even with simultaneous start. Finally, we prove that at least logarithmic memory is necessary for rendezvous, even for agents starting simultaneously in an $n$-node line.

9.
Replication is a standard technique for fault tolerance in distributed systems modeled as deterministic finite state machines (DFSMs or machines). To correct \(f\) crash or \(\lfloor f/2 \rfloor\) Byzantine faults among \(n\) different machines, replication requires \(nf\) backup machines. We present a solution called fusion that requires just \(f\) backup machines. First, we build a framework for fault tolerance in DFSMs based on the notion of Hamming distances. We introduce the concept of an \((f, m)\)-fusion, which is a set of \(m\) backup machines that can correct \(f\) crash faults or \(\lfloor f/2 \rfloor\) Byzantine faults among a given set of machines. Second, we present an algorithm to generate an \((f, f)\)-fusion for a given set of machines. We ensure that our backups are efficient in terms of the size of their state and event sets. Third, we use locality sensitive hashing for the detection and correction of faults that incurs almost the same overhead as that for replication. We detect Byzantine faults with time complexity \(O(nf)\) on average, while we correct crash and Byzantine faults with time complexity \(O(n \rho f)\) with high probability, where \(\rho\) is the average state reduction achieved by fusion. Finally, our evaluation of fusion on the widely used MCNC’91 benchmarks for DFSMs shows that the average state space savings in fusion (over replication) is 38 % (range 0–99 %). To demonstrate the practical use of fusion, we describe its potential application to two areas: sensor networks and the MapReduce framework. In the case of sensor networks a fusion-based solution can lead to significantly fewer sensor-nodes than a replication-based solution. For the MapReduce framework, fusion can reduce the number of map-tasks compared to replication. Hence, fusion results in considerable savings in state space and other resources such as the power needed to run the backups.
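The Hamming-distance framework can be made concrete with a small, hedged helper (ours, not the paper's generation algorithm). Reading a global state as a tuple with one entry per machine (primaries plus backups), a minimum pairwise distance d over the reachable global states corrects d - 1 crash (erasure) faults and floor((d - 1)/2) Byzantine faults under the standard coding-theoretic argument.

```python
from itertools import combinations

def hamming(s, t):
    """Number of positions in which two global states (tuples of machine states) differ."""
    return sum(a != b for a, b in zip(s, t))

def min_distance(reachable_states):
    """Minimum pairwise Hamming distance over the reachable global states."""
    return min(hamming(s, t) for s, t in combinations(reachable_states, 2))

def tolerable_faults(reachable_states):
    """Standard coding-theoretic reading of the distance: a minimum distance d lets us
    recover from d - 1 crash (erasure) faults and (d - 1) // 2 Byzantine faults."""
    d = min_distance(reachable_states)
    return {"crash": d - 1, "byzantine": (d - 1) // 2}

# Toy example: three reachable global states of four machines (primaries plus backups),
# pairwise Hamming distance 4, so 3 crash faults or 1 Byzantine fault are correctable.
states = [(0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 2)]
print(min_distance(states), tolerable_faults(states))   # 4 {'crash': 3, 'byzantine': 1}
```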

10.
Given a distributed system of \(n\) balls and \(n\) bins, how evenly can we distribute the balls to the bins, minimizing communication? The fastest non-adaptive and symmetric algorithm achieving a constant maximum bin load requires \(\varTheta(\log \log n)\) rounds, and any such algorithm running for \(r \in \mathcal{O}(1)\) rounds incurs a bin load of \(\varOmega((\log n/\log \log n)^{1/r})\). In this work, we explore the fundamental limits of the general problem. We present a simple adaptive symmetric algorithm that achieves a bin load of 2 in \(\log^* n + \mathcal{O}(1)\) communication rounds using \(\mathcal{O}(n)\) messages in total. Our main result, however, is a matching lower bound of \((1-o(1))\log^* n\) on the time complexity of symmetric algorithms that guarantee small bin loads. The essential preconditions of the proof are (i) a limit of \(\mathcal{O}(n)\) on the total number of messages sent by the algorithm and (ii) anonymity of bins, i.e., the port numberings of balls need not be globally consistent. In order to show that our technique indeed yields tight bounds, we provide for each assumption an algorithm violating it, in turn achieving a constant maximum bin load in constant time.
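To make the round structure concrete, here is a hedged simulation of a simple adaptive symmetric protocol in this spirit, not the paper's algorithm and with no claim about matching its log* n round bound: unplaced balls contact a uniformly random bin each round, and bins accept requests only while their load is below 2.

```python
import random

def adaptive_balls_into_bins(n, capacity=2, seed=0):
    """Simulate rounds in which every unplaced ball contacts one uniformly random bin,
    and each bin accepts contacting balls only while its load is below `capacity`.
    Returns (number of rounds until all balls are placed, maximum bin load)."""
    rng = random.Random(seed)
    load = [0] * n
    unplaced = list(range(n))
    rounds = 0
    while unplaced:
        rounds += 1
        requests = {}
        for ball in unplaced:                     # symmetric: every ball acts identically
            requests.setdefault(rng.randrange(n), []).append(ball)
        still_unplaced = []
        for bin_id, balls in requests.items():
            free = max(0, capacity - load[bin_id])
            load[bin_id] += min(free, len(balls)) # bins accept arbitrarily among contenders
            still_unplaced.extend(balls[free:])
        unplaced = still_unplaced
    return rounds, max(load)

print(adaptive_balls_into_bins(1000))             # (a handful of rounds, 2)
```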

11.
In this paper we study gossip-based information spreading with bounded message sizes. We use algebraic gossip to disseminate $k$ distinct messages to all $n$ nodes in a network. For arbitrary networks we provide a new upper bound for uniform algebraic gossip of $O((k+\log n + D)\varDelta)$ rounds with high probability, where $D$ and $\varDelta$ are the diameter and the maximum degree in the network, respectively. For many topologies and selections of $k$ this bound improves previous results; in particular, for graphs with a constant maximum degree it implies that uniform gossip is order optimal and the stopping time is $\varTheta(k + D)$. To eliminate the factor of $\varDelta$ from the upper bound we propose a non-uniform gossip protocol, TAG, which is based on algebraic gossip and an arbitrary spanning tree protocol $\mathcal{S}$. The stopping time of TAG is $O(k+\log n + d(\mathcal{S}) + t(\mathcal{S}))$, where $t(\mathcal{S})$ is the stopping time of the spanning tree protocol, and $d(\mathcal{S})$ is the diameter of the spanning tree. We provide two general cases in which this bound leads to an order optimal protocol. The first is for $k=\varOmega(n)$, where, using a simple gossip broadcast protocol that creates a spanning tree in at most linear time, we show that TAG finishes after $\varTheta(n)$ rounds for any graph. The second uses a sophisticated, recent gossip protocol to build a fast spanning tree on graphs with large weak conductance. In turn, this leads to the optimality of TAG on these graphs for $k=\varOmega(\text{polylog}(n))$. The technique used in our proofs relies on queuing theory, which is an interesting approach that can be useful in future gossip analysis.
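As a hedged toy model of uniform algebraic gossip, the simulation below runs push-style random linear network coding over GF(2) with k = n initial messages, one per node, and tracks only the coefficient vectors (payloads omitted). GF(2), the push-only exchange and the stopping rule are simplifications of our own; this is neither the paper's analysis nor the TAG protocol.

```python
import random

def algebraic_gossip_gf2(adj, seed=0):
    """Uniform push algebraic gossip over GF(2) on an undirected graph adj = {node: [neighbors]}.
    Node i starts with message i, i.e. the unit coefficient vector 1 << i. Each round, every
    node sends the XOR of a random subset of its stored vectors to a uniformly random neighbor.
    A node can decode all n messages once its vectors reach full rank n. Returns the number of
    rounds until every node has full rank."""
    rng = random.Random(seed)
    n = len(adj)
    basis = {v: {v: 1 << v} for v in adj}        # per node: row-echelon basis, pivot bit -> vector

    def insert(pivots, x):
        while x:                                 # standard GF(2) (xor) basis insertion
            p = x.bit_length() - 1
            if p not in pivots:
                pivots[p] = x
                return
            x ^= pivots[p]

    rounds = 0
    while any(len(basis[v]) < n for v in adj):
        rounds += 1
        outgoing = []                            # choose all messages first: synchronous round
        for v in adj:
            combo = 0
            for vec in basis[v].values():
                if rng.random() < 0.5:
                    combo ^= vec
            outgoing.append((rng.choice(adj[v]), combo))
        for target, combo in outgoing:
            insert(basis[target], combo)
    return rounds

ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(algebraic_gossip_gf2(ring))                # a small number of rounds on an 8-node ring
```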

12.
This paper introduces the notion of distributed verification without preprocessing. It focuses on the Minimum-weight Spanning Tree (MST) verification problem and establishes tight upper and lower bounds for the time and message complexities of this problem. Specifically, we provide an MST verification algorithm that achieves simultaneously $\tilde{O}(m)$ messages and $\tilde{O}(\sqrt{n} + D)$ time, where m is the number of edges in the given graph G, n is the number of nodes, and D is G’s diameter. On the other hand, we show that any MST verification algorithm must send $\tilde{\varOmega}(m)$ messages and incur $\tilde{\varOmega}(\sqrt{n} + D)$ time in the worst case. Our upper bound result appears to indicate that the verification of an MST may be easier than its construction, since for MST construction, both lower bounds of $\tilde{\varOmega}(m)$ messages and $\tilde{\varOmega}(\sqrt{n} + D)$ time hold, but at the moment there is no known distributed algorithm that constructs an MST and achieves simultaneously $\tilde{O}(m)$ messages and $\tilde{O}(\sqrt{n} + D)$ time. Specifically, the best known time-optimal algorithm (using $\tilde{O}(\sqrt{n} + D)$ time) requires $O(m+n^{3/2})$ messages, and the best known message-optimal algorithm (using $\tilde{O}(m)$ messages) requires $O(n)$ time. On the other hand, our lower bound results indicate that the verification of an MST is not significantly easier than its construction.
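Setting the distributed model aside, the verification task itself has a short centralized check via the cycle property: a spanning tree is an MST if and only if every non-tree edge is at least as heavy as the heaviest tree edge on the path between its endpoints. The hedged sketch below implements that sequential check (it is not the paper's distributed algorithm, and it assumes `tree_edges` really is a spanning tree).

```python
from collections import deque

def is_mst(n, edges, tree_edges):
    """Cycle-property check: `tree_edges` (assumed to be a spanning tree of the graph
    `edges`; both are lists of (u, v, weight) over nodes 0..n-1) is an MST iff every
    non-tree edge is at least as heavy as the heaviest tree edge on the tree path
    between its endpoints. Naive O(n * m) running time."""
    tree_adj = {v: [] for v in range(n)}
    for u, v, w in tree_edges:
        tree_adj[u].append((v, w))
        tree_adj[v].append((u, w))

    def heaviest_on_path(src, dst):
        """BFS on the tree, tracking the heaviest edge weight on the path from src."""
        best = {src: 0}
        q = deque([src])
        while q:
            x = q.popleft()
            for y, w in tree_adj[x]:
                if y not in best:
                    best[y] = max(best[x], w)
                    q.append(y)
        return best.get(dst)

    tree_set = {frozenset((u, v)) for u, v, _ in tree_edges}
    for u, v, w in edges:
        if frozenset((u, v)) in tree_set:
            continue
        heaviest = heaviest_on_path(u, v)
        if heaviest is None or w < heaviest:     # endpoints disconnected, or a cheaper swap exists
            return False
    return True

# A 4-cycle with a chord: the light path tree is an MST, a tree using the weight-5 edge is not.
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (0, 3, 5), (0, 2, 4)]
print(is_mst(4, edges, [(0, 1, 1), (1, 2, 1), (2, 3, 1)]))   # True
print(is_mst(4, edges, [(0, 1, 1), (1, 2, 1), (0, 3, 5)]))   # False
```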

13.
We investigate a metric facility location problem in a distributed setting. In this problem, we assume that each point is a client as well as a potential location for a facility and that the opening costs for the facilities and the demands of the clients are uniform. The goal is to open a subset of the input points as facilities such that the accumulated cost for the whole point set, consisting of the opening costs for the facilities and the connection costs for the clients, is minimized. We present a randomized distributed algorithm that computes in expectation an $\mathcal{O}(1)$-approximate solution to the metric facility location problem described above. Our algorithm works in a synchronous message passing model, where each point is an autonomous computational entity that has its own local memory and that communicates with the other entities by message passing. We assume that each entity knows the distance to all the other entities, but does not know any of the other pairwise distances. Our algorithm uses three rounds of all-to-all communication with message sizes bounded to $\mathcal{O}(\log(n))$ bits, where n is the number of input points. We extend our distributed algorithm to constant powers of metric spaces. For a metric exponent $\ell \ge 1$, we obtain a randomized $\mathcal{O}(1)$-approximation algorithm that uses three rounds of all-to-all communication with message sizes bounded to $\mathcal{O}(\log(n))$ bits.
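To pin down the objective being approximated, the hedged helper below just evaluates the cost of a candidate facility set under uniform opening costs and unit demands, exactly as described above; it is a plain cost evaluator, not the randomized distributed algorithm.

```python
def facility_location_cost(dist, facilities, opening_cost=1.0):
    """Total cost of opening the points in `facilities`: one uniform opening cost per
    open facility plus, for every point, the distance to its nearest open facility.
    `dist` is a full n x n matrix of pairwise distances; every point is both a client
    and a potential facility."""
    assert facilities, "at least one facility must be open"
    opening = opening_cost * len(facilities)
    connection = sum(min(dist[p][f] for f in facilities) for p in range(len(dist)))
    return opening + connection

# Four points on a line at coordinates 0, 1, 2, 3.
coords = [0, 1, 2, 3]
dist = [[abs(a - b) for b in coords] for a in coords]
print(facility_location_cost(dist, {1}))      # 1 opening + (1+0+1+2) connection = 5.0
print(facility_location_cost(dist, {1, 3}))   # 2 openings + (1+0+1+0) connection = 4.0
```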

14.
Roughly speaking, a simplicial complex is shellable if it can be constructed by gluing a sequence of n-simplexes to one another along $(n-1)$-faces only. Shellable complexes have been widely studied because they have nice combinatorial properties. It turns out that several standard models of concurrent computation can be constructed from shellable complexes. We consider adversarial schedulers in the synchronous, asynchronous, and semi-synchronous message-passing models, as well as asynchronous shared memory. We show how to exploit their common shellability structure to derive new and remarkably succinct tight (or nearly so) lower bounds on connectivity of protocol complexes and hence on solutions to the $k$-set agreement task in these models. Earlier versions of material in this article appeared in the 2010 ACM Symposium on Principles of Distributed Computing (Herlihy and Rajsbaum 2010) and the International Conference on Distributed Computing (Herlihy and Rajsbaum 2010, doi:10.1145/1835698.1835724).

15.
A matching $E_\mathcal{M}$ of a graph G = (V, E) is a subset of the edges E such that no vertex in V is incident to more than one edge in $E_\mathcal{M}$. The matching $E_\mathcal{M}$ is maximum if there is no matching in G with size strictly larger than the size of $E_\mathcal{M}$. In this paper, we present a distributed stabilizing algorithm for finding a maximum matching in bipartite graphs, based on the stabilizing PIF algorithm of Cournier et al. (Proceedings of 21st IEEE international conference on distributed computing systems, 91–98, 2001). Since our algorithm is stabilizing, it does not require initialization and withstands transient faults. The complexity of the proposed algorithm is O(d × n) rounds, where d is the diameter of the communication network and n is the number of nodes in the network. The space complexity is O(log Δ + log d), where Δ is the largest degree of all the nodes in the communication network. In addition, an optimal version of the proposed algorithm finding a maximum matching in linear time is also presented.
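As a centralized reference point for the object being computed (not the self-stabilizing distributed algorithm above), the sketch below finds a maximum matching in a bipartite graph by repeatedly searching for augmenting paths, the classical Kuhn approach.

```python
def max_bipartite_matching(left, adj):
    """Maximum bipartite matching via augmenting paths (Kuhn's algorithm).
    `left` lists the left-side vertices; `adj[u]` lists the right-side vertices
    adjacent to u. Returns a dict mapping each matched right vertex to its partner."""
    match_right = {}

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched along another path.
            if v not in match_right or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    for u in left:
        try_augment(u, set())
    return match_right

adj = {0: ['a', 'b'], 1: ['a'], 2: ['b', 'c']}
matching = max_bipartite_matching([0, 1, 2], adj)
print(len(matching), matching)   # 3 {'a': 1, 'b': 0, 'c': 2}
```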

16.
Spatial reasoning with rectangular cardinal relations
Qualitative spatial representation and reasoning plays an important role in various spatial applications. In this paper we introduce a new formalism, which we name the RCD calculus, for qualitative spatial reasoning with cardinal direction relations between regions of the plane approximated by rectangles. We believe this calculus leads to an attractive balance between efficiency, simplicity and expressive power, which makes it adequate for spatial applications. We define a constraint algebra and we identify a convex tractable subalgebra allowing efficient reasoning with definite and imprecise knowledge about spatial configurations specified by qualitative constraint networks. For this tractable fragment, we propose several polynomial algorithms based on constraint satisfaction to solve the consistency and minimality problems. Some of them rely on a translation of qualitative networks of the RCD calculus to qualitative networks of the Interval or Rectangle Algebra, and back. We show that the consistency problem for convex networks can also be solved inside the RCD calculus, by applying a suitable adaptation of the path-consistency algorithm. However, path consistency cannot be applied to obtain the minimal network, contrary to what happens in the convex fragment of the Rectangle Algebra. Finally, we partially analyze the complexity of the consistency problem when adding non-convex relations, showing that it becomes NP-complete in the cases considered. This analysis may contribute to finding a maximal tractable subclass of the RCD calculus and of the Rectangle Algebra, which remains an open problem.
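The path-consistency step mentioned above can be sketched generically: a qualitative constraint network is an n x n matrix of sets of base relations, and the algorithm repeatedly intersects R[i][j] with the composition of R[i][k] and R[k][j] until a fixpoint or an empty constraint is reached. The base relations and composition table below come from the tiny point algebra, used here only as a placeholder; the RCD calculus would plug in its own relations and tables.

```python
from itertools import product

# Placeholder calculus: the point algebra with base relations {'<', '=', '>'}.
COMP = {
    ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
    ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
    ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'},
}
ALL = {'<', '=', '>'}

def compose(r1, r2):
    """Composition of two relation sets via the base-relation composition table."""
    return set().union(*(COMP[(a, b)] for a, b in product(r1, r2)))

def path_consistency(R):
    """Enforce path consistency on the constraint network R (an n x n matrix of
    relation sets, with R[j][i] the converse of R[i][j]). Returns False as soon as
    some constraint becomes empty, i.e. the network is inconsistent."""
    n = len(R)
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            if len({i, j, k}) < 3:
                continue
            refined = R[i][j] & compose(R[i][k], R[k][j])
            if not refined:
                return False
            if refined != R[i][j]:
                R[i][j] = refined
                changed = True
    return True

# x < y and y < z, with x vs z initially unconstrained: path consistency infers x < z.
R = [[{'='}, {'<'}, set(ALL)],
     [{'>'}, {'='}, {'<'}],
     [set(ALL), {'>'}, {'='}]]
print(path_consistency(R), R[0][2])   # True {'<'}
```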

17.
18.
We address the question of the weakest failure detector to circumvent the impossibility of $(2n-2)$-renaming in a system of up to $n$ participating processes. We derive that in a restricted class of eventual failure detectors there does not exist a single weakest oracle, but rather a weakest family of oracles $\zeta_n$: every two oracles in $\zeta_n$ are incomparable, and every oracle that allows for solving renaming provides at least as much information about failures as one of the oracles in $\zeta_n$. As a by-product, we obtain further evidence that renaming is strictly easier to solve than set agreement.

19.
We study broadcasting, also known as one-to-all communication, in synchronous radio networks with known topology modeled by undirected (symmetric) graphs, where the interference range of a node likely exceeds its transmission range. In this model, if two nodes are connected by a transmission edge they can communicate directly. On the other hand, if two nodes are connected by an interference edge they cannot communicate directly, and transmission of one node disables reception of any message at the other node. For a network $G$, we define its interference distance $d_I$ as the smallest integer $d$ such that, for any interference edge $e$, there exists a simple path formed of at most $d$ transmission edges connecting the endpoints of $e$. In this model the schedule of transmissions is precomputed in advance. It is based on the full knowledge of the size and the topology (including the location of transmission and interference edges) of the network. We are interested in the design of fast broadcasting schedules that are energy efficient, i.e., based on a bounded number of transmissions executed at each node. We denote by $n$ the number of nodes, by $D_T$ the diameter of the subnetwork induced by the transmission edges, and by $\varDelta$ the maximum combined degree (formed of transmission and interference edges) of the network. We contribute the following new results: (1) We prove that for networks with interference distance $d_I \ge 2$ any broadcasting schedule requires at least $D_T+\varOmega(\varDelta \cdot \frac{\log n}{\log \varDelta})$ rounds. (2) For networks modeled by bipartite graphs we provide an algorithm that computes $1$-shot (each node transmits at most once) broadcasting schedules of length $O(\varDelta \cdot \log n)$. (3) The main result of the paper is an algorithm that computes a $1$-shot broadcasting schedule of length at most $4 \cdot D_T + O(\varDelta \cdot d_I \cdot \log^4 n)$ for networks with arbitrary topology. Note that, in view of the lower bound from (1), if $d_I$ is polylogarithmic in $n$ this broadcast schedule is a polylogarithmic factor away from the optimal solution.
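The interference distance d_I defined above is straightforward to compute centrally: run a BFS in the transmission graph for every interference edge and take the maximum of the shortest-path lengths. The hedged helper below does exactly that; it illustrates the definition and is not one of the paper's scheduling algorithms.

```python
from collections import deque

def interference_distance(nodes, transmission_edges, interference_edges):
    """Smallest d such that the endpoints of every interference edge are joined by a
    path of at most d transmission edges (infinity if some pair is disconnected in
    the transmission graph)."""
    adj = {v: set() for v in nodes}
    for u, v in transmission_edges:
        adj[u].add(v)
        adj[v].add(u)

    def bfs_dist(src, dst):
        dist = {src: 0}
        q = deque([src])
        while q:
            x = q.popleft()
            if x == dst:
                return dist[x]
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        return float('inf')

    return max((bfs_dist(u, v) for u, v in interference_edges), default=0)

# A square of transmission edges with one interference edge across the diagonal.
nodes = [0, 1, 2, 3]
t_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
i_edges = [(0, 2)]
print(interference_distance(nodes, t_edges, i_edges))   # 2
```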

20.
Aggressive technology scaling has dramatically increased the power density and degraded the reliability of embedded real-time systems. The goal of our research in this paper is to develop effective scheduling methods that can minimize the energy consumption and, at the same time, tolerate up to \(K\) transient faults when executing a hard real-time system scheduled according to the EDF policy. Three scheduling algorithms are presented in this paper. The first algorithm is an extension of a well-known fault-oblivious low-power scheduling algorithm. The second algorithm aims to minimize the energy consumption in the fault-free case while reserving adequate resources for recovery when faults strike. The third algorithm improves upon the first two by sharing the reserved resources and thus can achieve better energy efficiency. Simulation results show that the proposed algorithms consistently outperform other related approaches in energy savings.
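For background on the setting, the hedged sketch below applies the classical EDF utilization test for periodic tasks with implicit deadlines: the lowest uniform speed that keeps the fault-free set schedulable equals the utilization, and whatever capacity is left over is the budget a fault-tolerant scheme could reserve for recovery executions. How that budget is actually reserved and shared is precisely what the paper's three algorithms address; this helper is not one of them.

```python
def edf_speed_and_slack(tasks):
    """For periodic tasks given as (worst-case execution time at full speed, period)
    with implicit deadlines, EDF is feasible iff the utilization U = sum(C/T) is at
    most 1. Running uniformly at normalized speed U keeps the fault-free set
    schedulable (execution times scale as C/speed), and 1 - U is the spare capacity
    available, at full speed, for recovery executions after transient faults."""
    utilization = sum(c / t for c, t in tasks)
    if utilization > 1:
        raise ValueError("task set is not EDF-schedulable even at full speed")
    speed = utilization          # lowest uniform speed: U / speed == 1
    slack = 1.0 - utilization    # bandwidth left over at full speed
    return speed, slack

tasks = [(1.0, 4.0), (2.0, 8.0), (1.0, 10.0)]   # (C_i, T_i) pairs
print(edf_speed_and_slack(tasks))               # (0.6, 0.4)
```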
