Similar Literature
20 similar documents found.
1.
Summary. We study the relation between knowledge and space. That is, we analyze how much shared memory space is needed in order to learn certain kinds of facts. Such results are useful tools for reasoning about shared memory systems. In addition we generalize a known impossibility result, and show that results about how knowledge can be gained and lost in message passing systems also hold for shared memory systems. Michael Merritt received a B.S. degree in Philosophy and in Computer Science from Yale College in 1978, and the M.S. and Ph.D. degrees in Information and Computer Science in 1980 and 1983, respectively, from the Georgia Institute of Technology. Since 1983 he has been a member of technical staff at AT&T Bell Laboratories, and has taught as an adjunct or visiting lecturer at Stevens Institute of Technology, the Massachusetts Institute of Technology, and Columbia University. In 1989 he was program chair for the ACM Symposium on Principles of Distributed Computing. His research interests include distributed and concurrent computation, both algorithms and formal methods for verifying their correctness, cryptography, and security. He is an editor for Distributed Computing and for Information and Computation, recently co-authored a book on database concurrency control algorithms, and is a member of the ACM and of Computer Professionals for Social Responsibility. Gadi Taubenfeld received the B.A., M.Sc. and Ph.D. degrees in Computer Science from the Technion (Israel Institute of Technology) in 1982, 1984 and 1988, respectively. From 1988 to 1990 he was a research scientist at Yale University. Since 1991 he has been a member of technical staff at AT&T Bell Laboratories. His primary research interests are in concurrent and distributed computing. A preliminary version of this work appeared in the Proceedings of the Tenth Annual ACM Symposium on Principles of Distributed Computing, pages 189–200, Montreal, Canada, August 1991.

2.
We present an N-process local-spin mutual exclusion algorithm, based on nonatomic reads and writes, in which each process performs Θ(log N) remote memory references to enter and exit its critical section. This algorithm is derived from Yang and Anderson's atomic tree-based local-spin algorithm in a way that preserves its time complexity. No atomic read/write algorithm with better asymptotic worst-case time complexity (under the remote-memory-references measure) is currently known. This suggests that atomic memory is not fundamentally required if one is interested in worst-case time complexity. The same cannot be said if one is interested in fast-path algorithms (in which contention-free time complexity is required to be O(1)) or adaptive algorithms (in which time complexity is required to depend only on the number of contending processes). We show that such algorithms fundamentally require memory accesses to be atomic. In particular, we show that for any N-process nonatomic algorithm, there exists a single-process execution in which the lone competing process accesses Ω(log N/log log N) distinct variables to enter its critical section. Thus, fast and adaptive algorithms are impossible even if caching techniques are used to avoid accessing the processors-to-memory interconnection network. This paper was invited for inclusion in the special issue of this journal based on selected papers presented at PODC '02 (Distributed Computing 18(1)); it appears separately because of a publication delay. Yong-Jik Kim received a B.S. degree in Physics/Computer Science from the Korea Advanced Institute of Science and Technology in 1998, and a Ph.D. degree in Computer Science from the University of North Carolina at Chapel Hill in 2003. He currently works for the RDBMS group at Tmax Soft, and is otherwise occupied with his newborn daughter Darum, which means "difference" in Korean. James H. Anderson is a professor in the Department of Computer Science at the University of North Carolina at Chapel Hill. He received a B.S. degree in Computer Science from Michigan State University in 1982, an M.S. degree in Computer Science from Purdue University in 1983, and a Ph.D. degree in Computer Sciences from the University of Texas at Austin in 1990. Before joining UNC-Chapel Hill in 1993, he was with the Computer Science Department at the University of Maryland between 1990 and 1993. Dr. Anderson's main research interests are within the areas of real-time systems and concurrent and distributed computing.
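To make the tree-based approach concrete, here is a minimal sketch of an N-process tournament lock that uses Peterson's two-process read/write lock at each tree node, so a process acquires O(log N) node locks on its leaf-to-root path. This illustrates only the general tree idea, not Yang and Anderson's local-spin algorithm and not the nonatomic variant of this paper; it assumes atomic reads and writes and that N is a power of two, and the class names are illustrative.

```python
class Peterson:
    """Two-process mutual exclusion from reads and writes only."""
    def __init__(self):
        self.flag = [False, False]
        self.turn = 0

    def lock(self, side):              # side is 0 or 1
        other = 1 - side
        self.flag[side] = True
        self.turn = other
        while self.flag[other] and self.turn == other:
            pass                       # busy-wait (not local-spin)

    def unlock(self, side):
        self.flag[side] = False


class TournamentLock:
    """N-process lock as a binary tree of Peterson locks (N a power of two).
    A process walks from its leaf to the root, acquiring O(log N) node locks."""
    def __init__(self, n):
        self.n = n
        self.node = [Peterson() for _ in range(n)]   # heap-style nodes 1..n-1

    def _path(self, pid):
        v, path = self.n + pid, []                   # conceptual leaf index
        while v > 1:
            path.append((v // 2, v % 2))             # (parent node, which child we are)
            v //= 2
        return path

    def lock(self, pid):
        for node, side in self._path(pid):           # climb leaf -> root
            self.node[node].lock(side)

    def unlock(self, pid):
        for node, side in reversed(self._path(pid)): # release root -> leaf
            self.node[node].unlock(side)
```

Two processes that meet at an internal node necessarily arrive from different subtrees, so they use different sides of that node's Peterson lock; processes from the same subtree are already serialized by the child lock they hold.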

3.
The queue-based mutual exclusion protocol establishes mutual exclusion for N > 1 threads by means of not necessarily atomic variables. In order to enter the critical section, a competing thread needs to traverse as many levels as there are currently competing threads. Competing threads can be overtaken by other competing threads. It is proved here, however, that every competing thread is overtaken fewer than N times, and that the overtaking threads were competing when the first one of them exits.

4.
The timestamp problem captures a fundamental aspect of asynchronous distributed computing. It allows processes to label events throughout the system with timestamps that provide information about the real-time ordering of those events. We consider the space complexity of wait-free implementations of timestamps from shared read-write registers in a system of n processes. We prove a lower bound on the number of registers required. If the timestamps are elements of a nowhere dense set, for example the integers, we prove a stronger, and tight, lower bound of n. However, if timestamps are not from a nowhere dense set, this bound can be beaten: we give an implementation that uses n − 1 (single-writer) registers. We also consider the special case of anonymous implementations, where processes are programmed identically and do not have unique identifiers. In contrast to the general case, we prove that anonymous timestamp implementations require n registers. We also give an implementation to prove that this lower bound is tight. This is the first anonymous timestamp implementation that uses a finite number of registers.
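For contrast with the n − 1 register result, the following is a minimal sketch of the folklore construction that obtains wait-free timestamps from n single-writer registers: collect all registers, pick a value above the maximum, and publish it. The class name and interface are illustrative, not taken from the paper.

```python
class Timestamps:
    """Wait-free timestamps from n single-writer read/write registers.
    If one get_timestamp() finishes before another starts, the later call
    returns a strictly larger timestamp (integers are a nowhere dense set)."""
    def __init__(self, n):
        self.reg = [0] * n                 # reg[i] is written only by process i

    def get_timestamp(self, i):
        collect = list(self.reg)           # read all n registers
        ts = max(collect) + 1              # exceed every completed timestamp
        self.reg[i] = ts                   # publish in own register
        return ts
```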

5.
The ability to cooperate on common tasks in a distributed setting is key to solving a broad range of computation problems, ranging from distributed search such as SETI to distributed simulation and multi-agent collaboration. Do-All, an abstraction of such cooperative activity, is the problem of performing N tasks in a distributed system of P failure-prone processors. Many distributed and parallel algorithms have been developed for this problem, and several algorithm simulations have been developed by iterating Do-All algorithms. The efficiency of solutions for Do-All is measured in terms of work complexity, where all processing steps taken by all processors are counted. Work is ideally expressed as a function of N, P, and f, the number of processor crashes. However, the known lower bounds and the upper bounds for extant algorithms do not adequately show how work depends on f. We present the first non-trivial lower bounds for Do-All that capture the dependence of work on N, P and f. For the model of computation where processors are able to make perfect load-balancing decisions locally, we also present matching upper bounds. We define the r-iterative Do-All problem, which abstracts the repeated use of Do-All as found in typical algorithm simulations. Our f-sensitive analysis enables us to derive tight bounds for r-iterative Do-All work (that are stronger than the r-fold work complexity of a single Do-All). Our approach, which models perfect load-balancing, allows the analysis of specific algorithms to be divided into two parts: (i) the analysis of the cost of tolerating failures while performing work under free load-balancing, and (ii) the analysis of the cost of implementing load-balancing. We demonstrate the utility and generality of this approach by improving the analysis of two known efficient algorithms. We give an improved analysis of an efficient message-passing algorithm. We also derive a tight and complete analysis of the best known Do-All algorithm for the synchronous shared-memory model. Finally, we present a new upper bound on simulations of synchronous shared-memory algorithms on crash-prone processors. Received: 15 May 2002, Accepted: 15 June 2003, Published online: 6 February 2004. This work is supported in part by the NSF TOC Grants 9988304 and 0311368, and the NSF ITR Grant 0121277. The work of the second author is supported in part by the NSF CAREER Award 0093065. The work of the third author is supported in part by the NSF CAREER Award 9984778.
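As a point of reference, here is a deliberately naive Do-All sketch (hypothetical code, not one of the algorithms analyzed in the paper): every live processor scans all N tasks and performs any it finds unfinished, so all tasks complete as long as one processor survives, at a worst-case work of O(N·P). The efficient algorithms studied in the paper reduce this cost through load balancing.

```python
import threading

def do_all(tasks, p, crashed=frozenset()):
    """Perform the N (idempotent) tasks with p processors, some of which
    may crash; a crash is simulated here by taking no steps at all."""
    done = [False] * len(tasks)                    # shared completion flags

    def worker(pid):
        if pid in crashed:                         # a crashed processor does no work
            return
        for i in range(len(tasks)):
            if not done[i]:                        # redundant executions are possible,
                tasks[i]()                         # so tasks must be idempotent
                done[i] = True

    threads = [threading.Thread(target=worker, args=(q,)) for q in range(p)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done

# Example: 8 idempotent tasks, 4 processors, processors 1 and 3 crash.
flags = [False] * 8
do_all([lambda i=i: flags.__setitem__(i, True) for i in range(8)], 4, crashed={1, 3})
assert all(flags)
```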

6.
Summary. Concurrent systems in which there is a known upper bound Δ on memory access time are considered. Two prototypical synchronization problems, mutual exclusion and consensus, are studied, and solutions that have constant (i.e., independent of Δ and the total number of processes) time complexity in the absence of contention are presented. For mutual exclusion, in the absence of contention, a process needs only five accesses to the shared memory to enter its critical section; in the presence of contention, the winning process may need to delay itself for 4⋅Δ time units. For consensus, in the absence of contention, a process decides after four accesses to the shared memory; in the presence of contention, it may need to delay itself for Δ time units. Received: July 1993 / Accepted: February 1996.
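A well-known way to exploit a known bound Δ on memory access time is Fischer's delay-based lock, sketched below. It is not the five-access algorithm of this paper, but it illustrates the same trade-off: a constant number of shared accesses without contention, plus an explicit delay proportional to Δ under contention. The class name and field names are illustrative.

```python
import time

EMPTY = -1

class FischerLock:
    """Fischer's timed mutual exclusion. `delta` must exceed the maximum time a
    process can take between reading `owner` as EMPTY and completing its own
    write of `owner` (guaranteed by the known bound on memory access time)."""
    def __init__(self, delta):
        self.owner = EMPTY
        self.delta = delta

    def lock(self, pid):
        while True:
            while self.owner != EMPTY:
                pass                       # wait until the lock appears free
            self.owner = pid               # claim it
            time.sleep(self.delta)         # let any concurrent claimant's write land
            if self.owner == pid:          # our claim survived, so we won
                return                     # (otherwise retry from the top)

    def unlock(self, pid):
        self.owner = EMPTY
```

In the contention-free case a process performs only a read, a write, and a confirming read; under contention the loser retries after the Δ delay.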

7.
Summary. The problem of fault-tolerant agreement is fundamental to distributed computing. When agreement is to be reached in spite of arbitrary behavior by faulty processors, this problem is called Distributed Consensus. By requiring a suitable bound on the number of faulty processors relative to n, the number of processors in the system, we are able to derive two new protocols for Distributed Consensus. Both are simple and use messages that are only one bit in length, and both provide for early stopping: the fewer failures there are, the fewer rounds of communication are required. One protocol is optimal with respect to the number of rounds of communication required, and the other is asymptotically optimal with respect to the total number of message bits exchanged. James E. Burns received the B.S. degree in mathematics from the California Institute of Technology, the M.B.I.S. degree from Georgia State University, and the M.S. and Ph.D. degrees in information and computer science from the Georgia Institute of Technology. He served on the faculty of Computer Science at Indiana University and the College of Computing at the Georgia Institute of Technology before joining Bellcore in 1993. He is currently a Member of Technical Staff in the Network Control Research Department, where he is studying the telephone control network with special interest in behavior when faults occur. He also has research interests in theoretical issues of distributed and parallel computing, especially relating to problems of synchronization and fault tolerance. Gil Neiger was born on February 19, 1957 in New York, New York. In June 1979, he received an A.B. in Mathematics and Psycholinguistics from Brown University in Providence, Rhode Island. In February 1985, he spent two weeks picking cotton in Nicaragua in a brigade of international volunteers. In January 1986, he received an M.S. in Computer Science from Cornell University in Ithaca, New York and, in August 1988, he received a Ph.D. in Computer Science, also from Cornell University. On August 20, 1988, Dr. Neiger married Hilary Lombard in Lansing, New York. Since August 1988, he has been an Assistant Professor in the College of Computing (formerly School of Information and Computer Science) at the Georgia Institute of Technology in Atlanta, Georgia. Dr. Neiger is a member of the editorial boards of the Chicago Journal of Theoretical Computer Science and the Journal of Parallel and Distributed Computing. This author was supported in part by the National Science Foundation under grants CCR-8909663, CCR-9106627, and CCR-9301454.

8.
The condition-based approach to consensus solvability consists of identifying sets of input vectors, called conditions, for which there exists an asynchronous protocol solving consensus despite the occurrence of up to f process crashes. This paper investigates the largest set of conditions which allow us to solve the consensus problem in an asynchronous shared memory system. The first part of the paper shows that this set is made up of a hierarchy of classes of conditions, where d is a parameter (called the degree of the condition), ending with d = 0. We prove that each class is strictly contained in the previous one. Various properties of the hierarchy are also derived. It is shown that a class can be characterized in two equivalent but complementary ways: one is convenient for designing protocols, while the other is for analyzing the class properties. The paper also defines a linear family of conditions that can be used to derive many specific conditions. In particular, for each d, two natural conditions are presented. The second part of the paper is devoted to the design of efficient condition-based protocols. A generic condition-based protocol is presented. This protocol can be instantiated with any condition C in the hierarchy, and requires a bounded number of shared memory read/write operations per process in the synchronization part of the protocol; the value (f−d) represents the difficulty of the class of degree d. An improvement of the protocol for the conditions in a particular class is also presented. Received: 15 November 2001, Accepted: 15 April 2003, Published online: 6 February 2004. Parts of it have previously appeared in [23] and [25].
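As an illustration of what a condition is, here is a small hypothetical example in the spirit of the "maximum occurs more than f times" conditions discussed in this line of work (not necessarily one of the paper's named conditions): a predicate on input vectors under which a simple deterministic rule can pick a decision value that remains visible despite up to f crashes.

```python
def in_condition_max(vector, f):
    """The input vector belongs to the condition iff its maximum value appears
    in more than f positions, so even after f crashes hide f entries, every
    process can still observe at least one copy of the maximum and decide on it."""
    return vector.count(max(vector)) > f

assert in_condition_max([3, 3, 1, 3], f=2)        # max 3 occurs 3 > 2 times
assert not in_condition_max([3, 2, 1, 0], f=2)    # max 3 occurs only once
```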

9.
We establish a lower bound on the number of remote memory references for N-process mutual exclusion algorithms based on reads, writes, or comparison primitives such as test-and-set and compare-and-swap. Our bound improves an earlier lower bound established by Cypher. Our lower bound is of importance for two reasons. First, it almost matches the time complexity of the best known algorithms based on reads, writes, or comparison primitives. Second, it suggests that, from an asymptotic standpoint, comparison primitives are likely no better than reads and writes when implementing local-spin mutual exclusion algorithms. Thus, comparison primitives may not be the best choice to provide in hardware if one is interested in scalable synchronization. Received: January 2002 / Accepted: September 2002. Work supported by NSF grants CCR 9732916, CCR 9972211, CCR 9988327, ITR 0082866, and CCR 0208289.

10.
This paper shows that shared-coin algorithms can be combined to optimize several complexity measures, even in the presence of a strong adversary. Combining the shared coins of Bracha and Rachman (1991) [10] and of Aspnes and Waarts (1996) [7] yields a shared-coin algorithm, and hence a randomized consensus algorithm, with O(n log²n) individual work and O(n² log n) total work, using single-writer registers. This improves upon each of the above shared coins (the former has a high cost for individual work, while the latter reduces it but pays in the total work), and is currently the best for this model.
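A minimal sketch of the voting idea behind such shared coins follows, in the spirit of the Bracha–Rachman coin but not either paper's algorithm: each process writes local ±1 votes into its own single-writer register, collects all registers, and outputs the sign of the total once enough votes have accumulated. The termination threshold and the collect-after-every-flip schedule are illustrative simplifications.

```python
import random

class SharedCoin:
    """Weak shared coin from single-writer registers (illustrative constants)."""
    def __init__(self, n):
        self.n = n
        self.reg = [(0, 0)] * n                 # reg[i] = (votes cast by i, their sum)

    def flip(self, i):
        votes = total = 0
        while True:
            v = random.choice((-1, 1))          # one local fair coin flip
            votes, total = votes + 1, total + v
            self.reg[i] = (votes, total)        # publish (single-writer write)
            collect = list(self.reg)            # read everyone's register
            if sum(c for c, _ in collect) > 4 * self.n ** 2:
                # enough votes overall: output the sign of the accumulated drift
                return 1 if sum(s for _, s in collect) >= 0 else 0
```

The real algorithms collect far less often and prune votes carefully; that is exactly where the individual-work versus total-work trade-off discussed above comes from.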

11.
A simple proof of the uniform consensus synchronous lower bound
We give a simple and intuitive proof of an f+2 round lower bound for uniform consensus. That is, we show that for every uniform consensus algorithm tolerating t failures, and for every f ≤ t−2, there is an execution with f failures that requires f+2 rounds.

12.
Some well-known primitive operations, such as compare-and-swap, can be used, together with read and write, to implement any object in a wait-free manner. However, this paper shows that, for a large class of objects, including counters, queues, stacks, and single-writer snapshots, wait-free implementations using only these primitive operations and a large class of other primitive operations cannot be space efficient: the number of base objects required is at least linear in the number of processes that share the implemented object. The same lower bounds are obtained for implementations of starvation-free mutual exclusion using only primitive operations from this class. For wait-free implementations of a closely related class of one-time objects, lower bounds on the tradeoff between time and space are presented.

13.
Vidyasankar introduced a combined problem of k-exclusion and group mutual exclusion, called the group k-exclusion problem, which occurs in a situation where philosophers with the same interest can attend a forum in a meeting room, and up to k meeting rooms are available. We propose an improvement to Vidyasankar's algorithm. Waiting times in the trying region in the original algorithm and in our algorithm are bounded by n(n−k)c + O(n³(n−k))l and (n−k)c + O(n(n−k)²)l, respectively, where n is the number of processes, l is an upper bound on the time between two successive atomic steps, and c is an upper bound on the time that any philosopher spends in a forum.

14.
Summary. The binary Byzantine Agreement problem requires n−1 receivers to agree on the binary value broadcast by a sender even when some of these n processes may be faulty. We investigate the message complexity of protocols that solve this problem in the case of crash failures. In particular, we derive matching upper and lower bounds on the total, worst and average case number of messages needed in the failure-free executions of such protocols. More specifically, we prove that any protocol that tolerates up to t faulty processes requires a total of at least n+t−1 messages in its failure-free executions — and, therefore, at least ⌈(n+t−1)/2⌉ messages in the worst case and min(P0, P1)·(n+t−1) messages in the average case, where Pv is the probability that the value of the bit that the sender wants to broadcast is v. We also give protocols that solve the problem using only the minimum number of messages for these three complexity measures. These protocols can be implemented by using 1-bit messages. Since a lower bound on the number of messages is also a lower bound on the number of message bits, this means that the above tight bounds on the number of messages are also tight bounds on the number of message bits. Vassos Hadzilacos received a BSE from Princeton University in 1980 and a PhD from Harvard University in 1984, both in Computer Science. In 1984 he joined the Department of Computer Science at the University of Toronto, where he is currently an Associate Professor. In 1990–1991 he was a visiting Associate Professor in the Department of Computer Science at Cornell University. His research interests are in the theory of distributed systems. Eugene Amdur obtained a B.Math from the University of Waterloo in 1986 and an M.Sc. from the University of Toronto in 1988. He is currently employed by the Vision and Robotics group at the University of Toronto in both technical and research capacities. His current areas of interest are vision, robotics, and networking. Samuel Weber received his B.Sc. in Mathematics and Computer Science and his M.Sc. in Computer Science from the University of Toronto. Currently, he is at Cornell University as a Ph.D. student in Computer Science with a minor in Psychology. His research interests include distributed systems, and the semantics of programming languages.

15.
The vast majority of papers on distributed computing assume that processes are assigned unique identifiers before computation begins. But is this assumption necessary? What if processes do not have unique identifiers or do not wish to divulge them for reasons of privacy? We consider asynchronous shared-memory systems that are anonymous. The shared memory contains only the most common type of shared objects, read/write registers. We investigate, for the first time, what can be implemented deterministically in this model when processes can fail. We give anonymous algorithms for some fundamental problems: time-stamping, snapshots and consensus. Our solutions to the first two are wait-free and the third is obstruction-free. We also show that a shared object has an obstruction-free implementation if and only if it satisfies a simple property called idempotence. To prove the sufficiency of this condition, we give a universal construction that implements any idempotent object.

16.
We consider the problem of computing Byzantine Agreement in a synchronous network with n processors, each with a private random string, where each pair of processors is connected by a private communication line. The adversary is malicious and non-adaptive, i.e., it must choose the processors to corrupt at the start of the algorithm. Byzantine Agreement is known to be computable in this model in an expected constant number of rounds. We consider a scalable model where in each round each uncorrupt processor can send to any set of log n other processors and listen to any set of log n processors. We define the loss of an execution to be the number of uncorrupt processors whose output does not agree with the output of the majority of uncorrupt processors. We show that if there are t corrupt processors, then any randomised protocol that has probability at least 1/2 + 1/log n of keeping the loss below a given threshold requires a corresponding minimum number of rounds. This also shows that lossless protocols require both a minimum number of rounds and that at least one uncorrupt processor sends a large number of messages during the protocol.

17.
We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing. There is a strong emphasis in our presentation on explaining the wide variety of techniques that are used to obtain the results described. Received: September 2001, Accepted: February 2003.

18.
Summary. A distributed system consists of a set of loosely connected machines that do not share a global memory. The system is self-stabilizing if it can be started in any global state and achieves consistency all by itself. This also means that the system can deal with infrequent errors. This paper presents self-stabilizing multi-token rings. A multi-token ring is a generalization of a (one-)token ring. The algorithms presented are generalizations of a self-stabilizing mutual exclusion algorithm by Dijkstra [5], which can also be viewed as a token ring. We develop the algorithms in a stepwise manner, to show how and why we arrived at the final multi-token rings. The final parameterized algorithm represents a set of algorithms, one for each choice of the parameter. This enables one to select the algorithm with an optimal trade-off in desired flexibility versus memory requirements and stabilization time. Mitchell Flatebo received the B.S. degree in Mathematics (1990), the B.S. degree in Computer Science (1990), the M.S. degree in Mathematics (1992), and the M.S. degree in Computer Science (1993) from the University of Nevada, Las Vegas. He is currently a software engineer for Loral Space and Range Systems. His research interests include distributed systems, fault-tolerant computing, and self-stabilization. Ajoy Kumar Datta received the Ph.D. degree in Computer Science from Jadavpur University, Calcutta, India, in 1983. He is currently an Associate Professor of Computer Science at the University of Nevada, Las Vegas. His area of research is distributed and fault-tolerant computing — algorithms and self-stabilization. Anneke Schoone received an M.Sc. degree in Biology in 1978, an M.Sc. degree in Mathematics in 1981, and a Ph.D. degree in Computer Science in 1991 from Utrecht University (The Netherlands). Currently she is a senior research associate at the Department of Computer Science of Utrecht University, supported by ESPRIT Basic Research Action No. 7141 (project ALCOM II: Algorithms and Complexity) of the EC. Her research interests include assertional verification of distributed algorithms and the concept of self-stabilization. The research of this author was supported partially by the ESPRIT Basic Research Action No. 7141 (project ALCOM II: Algorithms and Complexity), and partially by the Netherlands Organization for Scientific Research (NWO) under contract NF 62-376 (NFI project ALADDIN: Algorithmic Aspects of Parallel and Distributed Systems).
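For reference, here is a minimal sketch of Dijkstra's K-state one-token ring, the base case that the paper's multi-token algorithms generalize: machine 0 is distinguished, a machine is privileged (holds the token) when its guard is enabled, and from any initial assignment of states the ring converges to exactly one circulating privilege. The class and method names are illustrative.

```python
class DijkstraTokenRing:
    """Dijkstra's K-state self-stabilizing token ring (the one-token case).
    K >= n guarantees stabilization from an arbitrary starting configuration."""
    def __init__(self, n, k=None):
        self.n = n
        self.k = k if k is not None else n + 1
        self.state = [0] * n            # may be corrupted to any values in [0, k)

    def privileged(self, i):
        """True when machine i holds a token (its move is enabled)."""
        if i == 0:
            return self.state[0] == self.state[self.n - 1]
        return self.state[i] != self.state[i - 1]

    def move(self, i):
        """One atomic step of machine i; a no-op unless i is privileged."""
        if not self.privileged(i):
            return
        if i == 0:
            self.state[0] = (self.state[self.n - 1] + 1) % self.k
        else:
            self.state[i] = self.state[i - 1]

# Example: start from a corrupted configuration and let a central demon
# repeatedly move some privileged machine; the ring stabilizes to one token.
ring = DijkstraTokenRing(5)
ring.state = [3, 1, 4, 1, 5]
for _ in range(200):
    i = next(j for j in range(5) if ring.privileged(j))
    ring.move(i)
assert sum(ring.privileged(j) for j in range(5)) == 1
```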

19.
H. Wettstein. Software, 1977, 7(1): 115–126.
Since E. W. Dijkstra's work on the mutual interdependencies of co-operating sequential processes, there has been much discussion of various synchronizing mechanisms. This discussion usually leads to abstract definitions of synchronizing primitives. However, their realization proceeds differently in various environments: the length of critical regions, the number of processors, and the dispatching strategy have to be taken into account. This paper presents a family of synchronizing operations, the members of which are all based on the same fundamentals. They have been designed with the concept of structured programming in mind.

20.
Abstract. We present an improvement to the Disk Paxos protocol of Gafni and Lamport that utilizes the extended functionality and flexibility provided by Active Disks and supports unmediated concurrent data access by an unlimited number of processes. The solution facilitates coordination by an infinite number of clients using finite shared memory. It is based on a collection of read-modify-write objects with faults, which emulate a new, reliable shared-memory abstraction called a ranked register. The required read-modify-write objects are readily available in Active Disks and in Object Storage Device controllers, making our solution suitable for state-of-the-art Storage Area Network (SAN) environments. A preliminary version of this work appears in the Proceedings of the 21st ACM Symposium on Principles of Distributed Computing (PODC 2002), August 2002.
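A minimal sketch of the ranked-register idea follows, simplified from the abstraction described here; the method names and the exact commit rule are illustrative. It is a read-modify-write object, executed atomically at the disk, that tracks the highest rank it has seen, answers reads with the highest-ranked committed value, and aborts writes whose rank has been superseded.

```python
class RankedRegister:
    """Simplified ranked register, executed atomically at an active disk or
    object-storage controller (one read-modify-write operation per call)."""
    def __init__(self):
        self.promised = 0        # highest rank any rr_read has announced
        self.rank = 0            # rank of the highest-ranked committed write
        self.value = None        # its value

    def rr_read(self, r):
        """Announce rank r and return the highest-ranked committed (rank, value)."""
        self.promised = max(self.promised, r)
        return self.rank, self.value

    def rr_write(self, r, v):
        """Commit (r, v) unless an operation with a higher rank was already seen."""
        if r >= self.promised and r >= self.rank:
            self.rank, self.value = r, v
            return True          # commit
        return False             # abort
```

A client would then drive a Paxos-style round against such registers: read with a fresh rank, adopt any value returned, and attempt to write it back at that rank, deciding once the write commits on enough disks.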
