Similar Documents
20 similar documents found (search time: 31 ms)
1.
Summary We investigate systems where it is possible to access several shared registers in one atomic step. We characterize those systems in which the consensus problem can be solved in the presence of faults and give bounds on the space required. We also describe a fast solution to the mutual exclusion problem using atomic m-register operations. Michael Merritt received a B.S. degree in Philosophy and in Computer Science from Yale College in 1978, and the M.S. and Ph.D. degrees in Information and Computer Science in 1980 and 1983, respectively, from the Georgia Institute of Technology. Since 1983 he has been a member of technical staff at AT&T Bell Laboratories, and has taught as an adjunct or visiting lecturer at Stevens Institute of Technology and Columbia University. In 1989 he was program chair for the ACM Symposium on Principles of Distributed Computing. His research interests include distributed and concurrent computation, both algorithms and formal methods for verifying their correctness, cryptography, and security. He is an editor for Distributed Computing and for Information and Computation, recently coauthored a book on database concurrency control algorithms, and is a member of the ACM and of Computer Professionals for Social Responsibility. Gadi Taubenfeld received the B.A., M.Sc. and Ph.D. degrees in Computer Science from the Technion (Israel Institute of Technology) in 1982, 1984 and 1988, respectively. From 1988 to 1990 he was a research scientist at Yale University. Since 1991 he has been a member of technical staff at AT&T Bell Laboratories. His primary research interests are in concurrent and distributed computing. A preliminary version of this work appeared in the Proceedings of the Fifth International Workshop on Distributed Algorithms, Delphi, Greece, October 1991, pp 289–294.

2.
As a result of our experience, the SR distributed programming language has evolved. One change is that resources and processes are now dynamic rather than static. Another change is that operations and processes are now integrated in a novel way: all the mechanisms for process interaction — remote and local procedure call, rendezvous, dynamic process creation, and asynchronous message passing — are expressed in similar ways. This paper explains the rationale for these and other changes. We examine the fundamental issues faced by the designers of any distributed programming language and consider the ways in which these issues could be addressed. Special attention is given to the design objectives of expressiveness, simplicity, and efficiency. Gregory R. Andrews was born in Olympia, WA, in 1947. He received the B.S. degree in mathematics from Stanford University in 1969 and the Ph.D. degree in computer science from the University of Washington in 1974. From 1974–1979 he was an Assistant Professor of Computer Science at Cornell University. Since 1979 he has been an Associate Professor of Computer Science at the University of Arizona. During 1983–1984, he was a Visiting Associate Professor of Computer Science at the University of Washington. He has also consulted for the U.S. Army Electronics Command and Cray Laboratories. His research interests include concurrent programming languages and distributed operating systems; he is currently co-authoring (with Fred Schneider) a textbook on concurrent programming. Dr. Andrews is a member of the Association for Computing Machinery (ACM). From 1980–1983 he was Secretary-Treasurer of the ACM Special Interest Group on Operating Systems. He has also been on the Board of Editors of Information Processing Letters since 1979. Ronald A. Olsson was born in Huntington, NY, in 1955. He received B.A. degrees in mathematics and computer science and the M.A. 
degree in mathematics from the State University of New York, College at Potsdam, in 1977. In 1979, he received the M.S. degree in computer science from Cornell University. He was a Lecturer of Computer Science at the State University of New York, College at Brockport, from 1979 to 1981. Since 1981 he has been a graduate student in computer science at the University of Arizona and will complete his Ph.D. in June 1986. His research interests include programming languages, operating systems, distributed systems, and systems software. Mr. Olsson is a student member of the Association for Computing Machinery. This work is supported by NSF under Grant DCR-8402090, and by the Air Force Office of Scientific Research under Grant AFOSR-84-0072. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notices thereon.

3.
A technique is presented for constructing a finite state protocol from an originally given finite state specification of one process. We present three constructions, showing that they each provide send-receive symmetric solutions which are self-synchronizing. Two lemmas are proved that provide insight into the types of interactions that arise in these types of finite state protocols. In essence we show that interactions occur between the processes only through isomorphic transitions and that during any interaction between the processes at most one of the two FIFO queues of messages is nonempty. Raymond E. Miller has been Director and Professor of the School of Information and Computer Science at the Georgia Institute of Technology since 1980. Prior to that he was employed by IBM for over thirty years, most of this time as a Research Staff Member at the IBM Research Center in Yorktown Heights, N.Y., where he held a number of technical management positions. He received a B.S. in Mechanical Engineering from the University of Wisconsin, Madison, and a B.S. in Electrical Engineering, M.S. in Mathematics, and Ph.D. in Electrical Engineering, all from the University of Illinois, Urbana. His research areas of interest include theory of computation, machine organization, parallel computation, and communication protocols. He has written over sixty papers, authored a two-volume book on switching theory, served as editor for a book on computer complexity, and is editor of a book series on foundations of computer science. He is a Fellow of the IEEE, a member of ACM and AAAS, and has been active in numerous ACM capacities, including being a member of the ACM Council for six years, an ACM National Lecturer for 1982–83, and a member of the AFIPS Board of Directors for four years. He is a member of the Computer Science Board, and was a member of the NSF Advisory Committee for Computer Research from 1982 to 1985, serving as Chairman for 1983–84.
He has taught in a visiting or part-time capacity at numerous institutions including Cal Tech, New York University, Yale, the University of California at Berkeley, and the Polytechnic Institute of New York. This work was partially supported through a contract with GTE Laboratories.

4.
Summary A stabilizing system is one which, if started at any state, is guaranteed to reach a state after which the system cannot deviate from its intended specification. In this paper, we propose a new variation of this notion, called pseudo-stabilization. A pseudo-stabilizing system is one which, if started at any state, is guaranteed to reach a state after which the system does not deviate from its intended specification. Thus, the difference between the two notions comes down to the difference between cannot and does not, a difference that hardly matters in many practical situations. As it happens, a number of well-known systems, for example the alternating-bit protocol, are pseudo-stabilizing but not stabilizing. We conclude that one should not try to make any such system stabilizing, especially if stabilization comes at a high price. James E. Burns received the B.S. degree in mathematics from the California Institute of Technology, the M.B.I.S. degree from Georgia State University, and the M.S. and Ph.D. degrees in information and computer science from the Georgia Institute of Technology. He is currently an Associate Professor in the College of Computing at the Georgia Institute of Technology, having served previously on the faculty at Indiana University. He has broad research interests in theoretical issues of distributed and parallel computing, especially relating to problems of synchronization and fault tolerance. Mohamed Gawdat Gouda was born and raised in Egypt. His first bachelor's degree was in engineering, and his second was in mathematics. Both degrees are from Cairo University. After his graduation, he moved to Canada, where he obtained an M.A. in mathematics from York University, and a Master's and a Ph.D. in computing science from the University of Waterloo. Later, he moved to the United States of America, where he worked for the Honeywell Corporate Technology Center for three years.
In 1980, he moved to the University of Texas at Austin, and has settled there ever since, except for one summer at Bell Labs, one summer at MCC, and one winter at the Eindhoven Technical University. Gouda currently holds the Mike A. Myer Centennial Professorship in Computing Science at the University of Texas at Austin. Gouda's area of research is distributed and concurrent computing. In this area, he has been working on: abstraction, nondeterminism, atomicity, convergence, stability, formality, correctness, efficiency, scientific elegance, and technical beauty (not necessarily in that order). Gouda was the founding Editor-in-Chief of the journal Distributed Computing, published by Springer-Verlag in 1985. He was the program committee chairman of the 1989 SIGCOMM Conference sponsored by ACM. He was the first program committee chairman for the International Conference on Network Protocols, established by the IEEE Computer Society in 1993. Gouda is an original member of the Austin Tuesday Afternoon Club. In his spare time, he likes to design network protocols and prove them correct for fun. Raymond E. Miller received his Ph.D. in 1957 from the University of Illinois, Champaign-Urbana. He was a Research Staff Member at IBM, Thomas J. Watson Research Center, Yorktown Heights, N.Y., from 1957 until 1980, Director of the School of Information and Computer Science at Georgia Tech from 1980 until 1987, and is currently a professor of computer science at the University of Maryland, College Park, and Director of the NASA Center of Excellence in Space Data and Information Sciences at Goddard Space Flight Center. He has written over 90 technical papers in areas of theory of computation, machine organization, parallel computation, and communication protocols. He is a Fellow of the IEEE and a Fellow of the American Association for the Advancement of Science. He has been active in the ACM and IEEE/CS, and is a Board member of the Computing Research Association.
In the IEEE/CS, he is a member of the Board of Governors and the 1991 Vice President for Educational Activities.

5.
Summary We study the relation between knowledge and space. That is, we analyze how much shared memory space is needed in order to learn certain kinds of facts. Such results are useful tools for reasoning about shared memory systems. In addition we generalize a known impossibility result, and show that results about how knowledge can be gained and lost in message passing systems also hold for shared memory systems. Michael Merritt received a B.S. degree in Philosophy and in Computer Science from Yale College in 1978, and the M.S. and Ph.D. degrees in Information and Computer Science in 1980 and 1983, respectively, from the Georgia Institute of Technology. Since 1983 he has been a member of technical staff at AT&T Bell Laboratories, and has taught as an adjunct or visiting lecturer at Stevens Institute of Technology, the Massachusetts Institute of Technology, and Columbia University. In 1989 he was program chair for the ACM Symposium on Principles of Distributed Computing. His research interests include distributed and concurrent computation, both algorithms and formal methods for verifying their correctness, cryptography, and security. He is an editor for Distributed Computing and for Information and Computation, recently co-authored a book on database concurrency control algorithms, and is a member of the ACM and of Computer Professionals for Social Responsibility. Gadi Taubenfeld received the B.A., M.Sc. and Ph.D. degrees in Computer Science from the Technion (Israel Institute of Technology) in 1982, 1984 and 1988, respectively. From 1988 to 1990 he was a research scientist at Yale University. Since 1991 he has been a member of technical staff at AT&T Bell Laboratories. His primary research interests are in concurrent and distributed computing. A preliminary version of this work appeared in the Proceedings of the Tenth Annual ACM Symposium on Principles of Distributed Computing, pages 189–200, Montreal, Canada, August 1991.

6.
Summary A composite register is an array-like shared data object that is partitioned into a number of components. An operation of such a register either writes a value to a single component, or reads the values of all components. A composite register reduces to an ordinary atomic register when there is only one component. In this paper, we show that a composite register with multiple writers per component can be implemented in a wait-free manner from a composite register with a single writer per component. It has been previously shown that registers of the latter kind can be implemented from atomic registers without waiting. Thus, our results establish that any composite register can be implemented in a wait-free manner from atomic registers. We show that our construction has comparable space complexity and better time complexity than other constructions that have been presented in the literature. James H. Anderson received the B.S. degree in Computer Science from Michigan State University in 1982, the M.S. degree in Computer Science from Purdue University in 1983, and the Ph.D. degree in Computer Sciences from the University of Texas at Austin in 1990. He recently joined the Computer Science Department at the University of North Carolina at Chapel Hill, where he is now an Assistant Professor. Prior to joining the University of North Carolina, he was an Assistant Professor of Computer Science for three years at the University of Maryland at College Park. Professor Anderson's main research interests are within the area of concurrent and distributed computing. His current interests primarily involve the implementation of resilient and scalable synchronization mechanisms. A preliminary version was presented at the Ninth Annual ACM Symposium on Principles of Distributed Computing [2]. Much of the work described herein was completed while the author was with the University of Texas at Austin and the University of Maryland at College Park.
This work was supported at the University of Texas by ONR Contract N00014-89-J-1913, and at the University of Maryland by NSF Contract CCR 9109497 and by an award from the University of Maryland General Research Board.

7.
Summary The binary Byzantine Agreement problem requires n−1 receivers to agree on the binary value broadcast by a sender even when some of these n processes may be faulty. We investigate the message complexity of protocols that solve this problem in the case of crash failures. In particular, we derive matching upper and lower bounds on the total, worst-case, and average-case number of messages needed in the failure-free executions of such protocols. More specifically, we prove that any protocol that tolerates up to t faulty processes requires a total of at least n+t−1 messages in its failure-free executions, and, therefore, at least [(n+t−1)/2] messages in the worst case and min(P0, P1)·(n+t−1) messages in the average case, where Pv is the probability that the value of the bit that the sender wants to broadcast is v. We also give protocols that solve the problem using only the minimum number of messages for these three complexity measures. These protocols can be implemented by using 1-bit messages. Since a lower bound on the number of messages is also a lower bound on the number of message bits, this means that the above tight bounds on the number of messages are also tight bounds on the number of message bits. Vassos Hadzilacos received a BSE from Princeton University in 1980 and a PhD from Harvard University in 1984, both in Computer Science. In 1984 he joined the Department of Computer Science at the University of Toronto, where he is currently an Associate Professor. In 1990–1991 he was a visiting Associate Professor in the Department of Computer Science at Cornell University. His research interests are in the theory of distributed systems. Eugene Amdur obtained a B.Math from the University of Waterloo in 1986 and an M.Sc. from the University of Toronto in 1988. He is currently employed by the Vision and Robotics group at the University of Toronto in both technical and research capacities. His current areas of interest are vision, robotics, and networking.
Samuel Weber received his B.Sc. in Mathematics and Computer Science and his M.Sc. in Computer Science from the University of Toronto. Currently, he is at Cornell University as a Ph.D. student in Computer Science with a minor in Psychology. His research interests include distributed systems and the semantics of programming languages.
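As a quick illustration of how the three failure-free bounds in this abstract relate, the sketch below computes them for given n, t, and input-bit probabilities. It is illustrative only; in particular, reading the bracketed worst-case expression as a ceiling is our assumption.

```python
import math

def failure_free_bounds(n, t, p0, p1):
    """Illustrative computation of the failure-free message bounds
    stated in the abstract for crash-tolerant binary agreement.
    p0, p1 are the probabilities of the sender's bit being 0 or 1."""
    assert abs(p0 + p1 - 1.0) < 1e-9
    total = n + t - 1                 # lower bound on total messages
    worst = math.ceil(total / 2)      # assumed reading of [(n+t-1)/2]
    average = min(p0, p1) * total     # average-case lower bound
    return total, worst, average

# e.g. n = 10 processes, tolerating t = 3 crash failures, a fair input bit:
print(failure_free_bounds(10, 3, 0.5, 0.5))  # (12, 6, 6.0)
```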

8.
Summary We introduce a shared data object, called a composite register, that generalizes the notion of an atomic register. A composite register is an array-like shared data object that is partitioned into a number of components. An operation of a composite register either writes a value to a single component or reads the values of all components. A composite register reduces to an ordinary atomic register when there is only one component. In this paper, we show that multi-reader, single-writer atomic registers can be used to implement a composite register in which there is only one writer per component. In a related paper, we show how to use the composite register construction of this paper to implement a composite register with multiple writers per component. These two constructions show that it is possible to implement a shared memory that can be read in its entirety in a single snapshot operation, without using mutual exclusion. James H. Anderson received the B.S. degree in Computer Science from Michigan State University in 1982, the M.S. degree in Computer Science from Purdue University in 1983, and the Ph.D. degree in Computer Sciences from the University of Texas at Austin in 1990. Since August 1990, he has been with the University of Maryland at College Park, where he is now an Assistant Professor of Computer Science. Since January 1992, he has been a staff scientist at NASA's Center of Excellence in Space Data and Information Sciences, located at the Goddard Space Flight Center in Greenbelt, Maryland. Professor Anderson's primary research interests are within the area of concurrent and distributed computing. A preliminary version was presented at the Ninth Annual ACM Symposium on Principles of Distributed Computing [2]. Work supported, in part, at the University of Texas at Austin by Office of Naval Research Contract N00014-89-J-1913, and at the University of Maryland by an award from the University of Maryland General Research Board.
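To give a flavor of the snapshot problem this abstract addresses, the sketch below shows the classic "double collect" idea: collect all components twice and retry until the two collects agree, in which case the view was valid at some instant between the collects. This is only an illustration of the problem, not the paper's construction; double collect by itself can retry forever under continual writes, whereas the construction above is wait-free. The class and function names are our own.

```python
# Each component is a single-writer register holding (sequence, value);
# every write bumps the sequence number so a reader can detect change.

class Component:
    def __init__(self, value=None):
        self.seq, self.value = 0, value

    def write(self, value):            # single writer per component
        self.seq, self.value = self.seq + 1, value

    def read(self):
        return (self.seq, self.value)

def snapshot(components):
    """Retry until two consecutive collects are identical; return
    the values of that agreed-upon collect."""
    while True:
        first = [c.read() for c in components]
        second = [c.read() for c in components]
        if first == second:
            return [value for _, value in first]

regs = [Component(0), Component(0), Component(0)]
regs[1].write(7)
print(snapshot(regs))  # [0, 7, 0]
```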

9.
Summary A self-stabilizing program eventually resumes normal behavior even if execution begins in an abnormal initial state. In this paper, we explore the possibility of extending an arbitrary program into a self-stabilizing one. Our contributions are: (1) a formal definition of the concept of one program being a self-stabilizing extension of another; (2) a characterization of what properties may hold in such extensions; (3) a demonstration of the possibility of mechanically creating such extensions. The computational model used is that of an asynchronous distributed message-passing system whose communication topology is an arbitrary graph. We contrast the difficulties of self-stabilization in this model with those of the more common shared-memory models. Shmuel Katz received his B.A. in Mathematics and English Literature from U.C.L.A., and his M.Sc. and Ph.D. in Computer Science (1976) from the Weizmann Institute in Rehovot, Israel. From 1976 to 1981 he was a researcher at the IBM Israel Scientific Center. Presently, he is an Associate Professor in the Computer Science Department at the Technion in Haifa, Israel. In 1977–78 he visited for a year at the University of California, Berkeley, and in 1984–85 was at the University of Texas at Austin. He has been a consultant and visitor at the MCC Software Technology Program, and in 1988–89 was a visiting scientist at the IBM Watson Research Center. His research interests include the methodology of programming, specification methods, program verification and semantics, distributed programming, data structures, and programming languages. Kenneth J. Perry has performed research in the area of distributed computing since obtaining Master's and Doctorate degrees in Computer Science from Cornell University. His current interest is in studying problems of a practical nature in a formal context. He was graduated from Princeton University in 1979 with a B.S.E.
degree in Electrical Engineering and Computer Science. The research of this author was partially supported by Research Grant 120-749 and the Argentinian Research Fund at the Technion.

10.
On High Dimensional Projected Clustering of Data Streams   (Cited by 3: 0 self-citations, 3 others)
The data stream problem has been studied extensively in recent years, because of the great ease in collection of stream data. The nature of stream data makes it essential to use algorithms which require only one pass over the data. Recently, single-scan stream analysis methods have been proposed in this context. However, a lot of stream data is high-dimensional in nature. High-dimensional data is inherently more complex for clustering, classification, and similarity search. Recent research discusses methods for projected clustering over high-dimensional data sets. This method is, however, difficult to generalize to data streams because of the complexity of the method and the large volume of the data streams. In this paper, we propose a new, high-dimensional, projected data stream clustering method, called HPStream. The method incorporates a fading cluster structure and a projection-based clustering methodology. It is incrementally updatable and is highly scalable in both the number of dimensions and the size of the data streams, and it achieves better clustering quality in comparison with previous stream clustering methods. Our performance study with both real and synthetic data sets demonstrates the efficiency and effectiveness of our proposed framework and implementation methods. Charu C. Aggarwal received his B.Tech. degree in Computer Science from the Indian Institute of Technology (1993) and his Ph.D. degree in Operations Research from the Massachusetts Institute of Technology (1996). He has been a Research Staff Member at the IBM T. J. Watson Research Center since June 1996. He has applied for or been granted over 50 US patents, and has published over 75 papers in numerous international conferences and journals. He has twice been designated Master Inventor at IBM Research, in 2000 and 2003, for the commercial value of his patents.
His contributions to the Epispire project on real-time attack detection were awarded the IBM Corporate Award for Environmental Excellence in 2003. He was program chair of DMKD 2003, chair for all workshops organized in conjunction with ACM KDD 2003, and is also an associate editor of the IEEE Transactions on Knowledge and Data Engineering journal. His current research interests include algorithms, data mining, privacy, and information retrieval. Jiawei Han is a Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He has been working on research into data mining, data warehousing, stream and RFID data mining, spatiotemporal and multimedia data mining, biological data mining, social network analysis, text and Web mining, and software bug mining, with over 300 conference and journal publications. He has chaired or served on many program committees of international conferences and workshops, including ACM SIGKDD Conferences (2001 best paper award chair, 1996 PC co-chair), SIAM Data Mining Conferences (2001 and 2002 PC co-chair), ACM SIGMOD Conferences (2000 exhibit program chair), International Conferences on Data Engineering (2004 and 2002 PC vice-chair), and International Conferences on Data Mining (2005 PC co-chair). He has also served or is serving on the editorial boards of Data Mining and Knowledge Discovery, IEEE Transactions on Knowledge and Data Engineering, Journal of Computer Science and Technology, and Journal of Intelligent Information Systems. He is currently serving on the Board of Directors for the Executive Committee of the ACM Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD). Jiawei has received three IBM Faculty Awards, the Outstanding Contribution Award at the 2002 International Conference on Data Mining, the ACM Service Award (1999), and the ACM SIGKDD Innovation Award (2004). He has been an ACM Fellow since 2003.
He is the first author of the textbook "Data Mining: Concepts and Techniques" (Morgan Kaufmann, 2001). Jianyong Wang received the Ph.D. degree in computer science in 1999 from the Institute of Computing Technology, Chinese Academy of Sciences. Since then, he worked as an assistant professor in the Department of Computer Science and Technology, Peking (Beijing) University, in the areas of distributed systems and Web search engines (May 1999–May 2001), and visited the School of Computing Science at Simon Fraser University (June 2001–December 2001), the Department of Computer Science at the University of Illinois at Urbana-Champaign (December 2001–July 2003), and the Digital Technology Center and Department of Computer Science and Engineering at the University of Minnesota (July 2003–November 2004), mainly working in the area of data mining. He is currently an associate professor in the Department of Computer Science and Technology, Tsinghua University, Beijing, China. Philip S. Yu is the manager of the Software Tools and Techniques group at the IBM Thomas J. Watson Research Center. The current focuses of the project include the development of advanced algorithms and optimization techniques for data mining, anomaly detection and personalization, and the enabling of Web technologies to facilitate E-commerce and pervasive computing. Dr. Yu's research interests include data mining, Internet applications and technologies, database systems, multimedia systems, parallel and distributed processing, disk arrays, computer architecture, performance modeling, and workload analysis. Dr. Yu has published more than 340 papers in refereed journals and conferences. He holds or has applied for more than 200 US patents. Dr. Yu is an IBM Master Inventor. Dr. Yu is a Fellow of the ACM and a Fellow of the IEEE. He became the Editor-in-Chief of IEEE Transactions on Knowledge and Data Engineering in January 2001.
He is an associate editor of ACM Transactions on Internet Technology and also of the Knowledge and Information Systems journal. He is a member of the IEEE Data Engineering steering committee. He also serves on the steering committee of the IEEE International Conference on Data Mining. He received an IEEE Region 1 Award for "promoting and perpetuating numerous new electrical engineering concepts". Philip S. Yu received the B.S. degree in E.E. from National Taiwan University, Taipei, Taiwan, the M.S. and Ph.D. degrees in E.E. from Stanford University, and the M.B.A. degree from New York University.
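A core ingredient of HPStream, per the abstract above, is the fading cluster structure: each point's contribution to a cluster's statistics decays with age. The sketch below is our own simplified illustration, assuming a half-life style fading function f(Δt) = 2^(−λ·Δt); the paper's full structure also maintains per-dimension projection information, which we omit.

```python
class FadingClusterStats:
    """Decayed point count and per-dimension sums for one cluster.
    Simplified illustration of a fading cluster structure; the real
    structure also tracks squared sums and projected dimensions."""
    def __init__(self, dims, lam=0.5):
        self.lam = lam              # decay rate (assumption: 2**(-lam*dt))
        self.count = 0.0
        self.sums = [0.0] * dims
        self.last_t = 0.0

    def _fade(self, t):
        decay = 2.0 ** (-self.lam * (t - self.last_t))
        self.count *= decay
        self.sums = [s * decay for s in self.sums]
        self.last_t = t

    def add(self, point, t):
        self._fade(t)               # age existing statistics first
        self.count += 1.0
        self.sums = [s + x for s, x in zip(self.sums, point)]

    def centroid(self):
        return [s / self.count for s in self.sums]

c = FadingClusterStats(dims=2, lam=1.0)
c.add([0.0, 0.0], t=0.0)
c.add([4.0, 4.0], t=1.0)   # the older point now carries weight 0.5
print(c.centroid())        # [2.666..., 2.666...]
```

Because the statistics are just decayed sums, the structure is incrementally updatable in O(dims) per point, which is what makes a one-pass stream algorithm feasible.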

11.
Summary This paper focuses upon a particular conservative algorithm for parallel simulation, the Time of Next Event (TNE) suite of algorithms [13]. TNE relies upon a shortest path algorithm which is independently executed on each processor in order to unblock LPs in the processor and to increase the parallelism of the simulation. TNE differs fundamentally from other conservative approaches in that it takes advantage of having several LPs assigned to each processor, and does not rely upon message passing to provide lookahead. Instead, it relies upon a shortest path algorithm executed independently in each processor. A deadlock resolution algorithm is employed for interprocessor deadlocks. We describe an empirical investigation of the performance of TNE on the iPSC/i860 hypercube multiprocessor. Several factors which play an important role in TNE's behavior are identified, and the speedup relative to a fast uniprocessor-based event list algorithm is reported. Our results indicate that TNE yields good speedups and outperforms an optimized version of the Chandy-Misra null-message (CMB) algorithm. TNE was 2–5 times as fast as the CMB approach for fewer than 10 processors, and 1.5–3 times as fast when more than 10 processors were used for the same population of processes. Azzedine Boukerche received the State Engineer degree in Software Engineering from Oran University, Oran, Algeria, and the M.Sc. degree in Computer Science from McGill University, Montreal, Canada. He is a Ph.D. candidate at the School of Computer Science, McGill University. During 1991–1992, he was a visiting doctoral student at the California Institute of Technology. He has been employed as a Faculty Lecturer of Computer Science at McGill University since 1993. His research interests include parallel simulation, distributed algorithms, and system performance analysis. He is a student member of the IEEE and ACM. Carl Tropper is an Associate Professor of Computer Science at McGill University.
His primary area of research is parallel discrete event simulation. His general area of interest is parallel computing, and distributed algorithms in particular. Previously, he did research in the performance modeling of computer networks, having written a book, Local Computer Network Technologies, while active in the area. Before coming to university life, he worked for the BBN Corporation and the Mitre Corporation, both located in the Boston area. He spent the 1991–92 academic year on a sabbatical leave at the Jet Propulsion Laboratories of the California Institute of Technology, where he contributed to a project centered on the verification of flight control software. As part of this project he developed algorithms for the parallel simulation of communicating finite state machines. During winters he may be found hurtling down mountains on skis. This work was completed while the first author was a visiting doctoral student at the California Institute of Technology; the second author was on sabbatical leave at the Jet Propulsion Laboratories, California Institute of Technology.
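The shortest-path idea underlying TNE, as described above, can be illustrated roughly: treat the LPs on one processor as a graph whose edge weights are minimum propagation delays, run a multi-source Dijkstra-style relaxation from the LPs with known next-event times, and obtain for each LP a lower bound on the timestamp of any message that could still reach it; local events below that bound are safe to process. The sketch below is our own illustration of that principle, not the algorithm from the paper.

```python
import heapq

def earliest_influence(graph, next_event_time):
    """graph: adjacency dict {lp: [(neighbor, min_delay), ...]}.
    next_event_time: {lp: timestamp of its next known event}.
    Returns, for each reachable LP, a lower bound on the timestamp
    of any event that could still influence it (multi-source Dijkstra)."""
    bound = dict(next_event_time)
    heap = [(t, lp) for lp, t in next_event_time.items()]
    heapq.heapify(heap)
    while heap:
        t, lp = heapq.heappop(heap)
        if t > bound.get(lp, float("inf")):
            continue                      # stale heap entry
        for nbr, delay in graph.get(lp, []):
            if t + delay < bound.get(nbr, float("inf")):
                bound[nbr] = t + delay
                heapq.heappush(heap, (bound[nbr], nbr))
    return bound

# Three LPs in a line with unit link delays; A's next event is at time 5.
g = {"A": [("B", 1)], "B": [("C", 1)], "C": []}
print(earliest_influence(g, {"A": 5}))  # {'A': 5, 'B': 6, 'C': 7}
```

An LP may then safely process any local event whose timestamp is below its bound, without waiting for null messages from its neighbors.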

12.
The quantitative μ-calculus qMμ extends the applicability of Kozen's standard μ-calculus [D. Kozen, Results on the propositional μ-calculus, Theoretical Computer Science 27 (1983) 333–354] to probabilistic systems. Subsequent to its introduction [C. Morgan and A. McIver, A probabilistic temporal calculus based on expectations, in: L. Groves and S. Reeves, editors, Proc. Formal Methods Pacific '97 (1997), available at [PSG, Probabilistic Systems Group: Collected reports, http://web.comlab.ox.ac.uk/oucl/research/areas/probs/bibliography.html]; also appears at [A. McIver and C. Morgan, "Abstraction, Refinement and Proof for Probabilistic Systems," Technical Monographs in Computer Science, Springer, New York, 2005, Chap. 9]; M. Huth and M. Kwiatkowska, Quantitative analysis and model checking, in: Proceedings of the 12th Annual IEEE Symposium on Logic in Computer Science, 1997], it has been developed by us [A. McIver and C. Morgan, Games, probability and the quantitative μ-calculus qMu, in: Proc. LPAR, LNAI 2514 (2002), pp. 292–310, revised and expanded in [A. McIver and C. Morgan, Results on the quantitative μ-calculus qMμ (2005), to appear in ACM TOCL]; also appears at [A. McIver and C. Morgan, "Abstraction, Refinement and Proof for Probabilistic Systems," Technical Monographs in Computer Science, Springer, New York, 2005, Chap. 11]] and by others [L. de Alfaro and R. Majumdar, Quantitative solution of omega-regular games, Journal of Computer and System Sciences 68 (2004) 374–397]. Beyond its natural application to defining probabilistic temporal logic [C. Morgan and A. McIver, An expectation-based model for probabilistic temporal logic, Logic Journal of the IGPL 7 (1999), pp. 779–804; also appears at [A. McIver and C. Morgan, "Abstraction, Refinement and Proof for Probabilistic Systems," Technical Monographs in Computer Science, Springer, New York, 2005, Chap. 10]], there are a number of other areas that benefit from its use. One application is stochastic two-player games, and the contribution of this paper is to depart from the usual notion of "absolute winning conditions" and to introduce a novel game in which players can "draw". The extension is motivated by examples based on economic games: we propose an extension to qMμ so that they can be specified; we show that the extension can be expressed via a reduction to the original logic; and, via that reduction, we prove that the players can play optimally in the extended game using memoryless strategies.

13.
Summary We define interface, module, and the meaning of "M offers I", where M denotes a module and I an interface. For a module M and disjoint interfaces U and L, the meaning of "M using L offers U" is also defined. For a linear hierarchy of modules and interfaces, M1, I1, M2, I2, ..., Mn, In, we present the following composition theorem: If M1 offers I1 and, for i = 2, ..., n, Mi using Ii-1 offers Ii, then the hierarchy of modules offers In. Our theory is applied to solve a problem posed by Leslie Lamport at the 1987 Lake Arrowhead Workshop. We first present a formal specification of a serializable database interface. We then provide specifications of two modules, one based upon two-phase locking and the other upon multi-version timestamps; the two-phase locking module uses an interface offered by a physical database. We prove that each module offers the serializable interface. Simon S. Lam is Chairman of the Department of Computer Sciences at the University of Texas at Austin and holds an endowed professorship. His research interests are in the areas of computer networks, communication protocols, performance models, formal methods, and network security. He serves on the editorial boards of IEEE Transactions on Software Engineering and Performance Evaluation. He is an IEEE Fellow, and was a corecipient of the 1975 Leonard G. Abraham Prize Paper Award from the IEEE Communications Society. He organized and was program chairman of the first ACM SIGCOMM Symposium on Communications Architectures and Protocols in 1983. He received the BSEE degree (with Distinction) from Washington State University in 1969, and the M.S. and Ph.D. degrees from the University of California at Los Angeles in 1970 and 1974, respectively. Prior to joining the University of Texas faculty, he was with the IBM T.J. Watson Research Center from 1974 to 1977. A. Udaya Shankar received the B.Tech. degree in Electrical Engineering from the Indian Institute of Technology, Kanpur, in 1976, the M.S. degree in Computer Engineering from Syracuse University, Syracuse, NY, in 1978, and the Ph.D. degree in Electrical Engineering from the University of Texas at Austin, in 1982. Since January 1983, he has been with the University of Maryland, College Park, where he is now an Associate Professor of Computer Science. Since September 1985, he has also been with the Institute for Advanced Computer Studies at the University of Maryland. His current research interests include the modeling and analysis of distributed systems and network protocols, from both correctness and performance aspects. He is a member of IEEE and ACM. The work of Simon S. Lam was supported by National Science Foundation grants no. NCR-8613338 and no. NCR-9004464. The work of A. Udaya Shankar was supported by National Science Foundation grants no. ECS-8502113 and no. NCR-8904590.

14.
Summary The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by defining causal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that are causally related. Because causal memory is weakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory. Mustaque Ahamad is an Associate Professor in the College of Computing at the Georgia Institute of Technology. He received his M.S. and Ph.D. degrees in Computer Science from the State University of New York at Stony Brook in 1983 and 1985, respectively. His research interests include distributed operating systems, consistency of shared information in large scale distributed systems, and replicated data systems. James E. Burns received the B.S. degree in mathematics from the California Institute of Technology, the M.B.I.S. degree from Georgia State University, and the M.S. and Ph.D. degrees in information and computer science from the Georgia Institute of Technology. He served on the faculty of Computer Science at Indiana University and the College of Computing at the Georgia Institute of Technology before joining Bellcore in 1993. He is currently a Member of Technical Staff in the Network Control Research Department, where he is studying the telephone control network with special interest in behavior when faults occur.
He also has research interests in theoretical issues of distributed and parallel computing, especially relating to problems of synchronization and fault tolerance. This work was supported in part by the National Science Foundation under grants CCR-8619886, CCR-8909663, CCR-9106627, and CCR-9301454. Parts of this paper appeared in S. Toueg, P.G. Spirakis, and L. Kirousis, editors, Proceedings of the Fifth International Workshop on Distributed Algorithms, volume 579 of Lecture Notes in Computer Science, pages 9–30, Springer-Verlag, October 1991. The photograph of Professor J.E. Burns was published in Volume 8, No. 2, 1994, on page 59. This author's contributions were made while he was a graduate student at the Georgia Institute of Technology. No photograph or biographical information is available for P.W. Hutto. Gil Neiger was born on February 19, 1957 in New York, New York. In June 1979, he received an A.B. in Mathematics and Psycholinguistics from Brown University in Providence, Rhode Island. In February 1985, he spent two weeks picking cotton in Nicaragua in a brigade of international volunteers. In January 1986, he received an M.S. in Computer Science from Cornell University in Ithaca, New York and, in August 1988, he received a Ph.D. in Computer Science, also from Cornell University. On August 20, 1988, Dr. Neiger married Hilary Lombard in Lansing, New York. He is currently a Staff Software Engineer at Intel's Software Technology Lab in Hillsboro, Oregon. Dr. Neiger is a member of the editorial boards of the Chicago Journal of Theoretical Computer Science and the Journal of Parallel and Distributed Computing.
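The distinction the abstract draws between causally related and concurrent operations can be made concrete with vector clocks. The sketch below is illustrative only; the helper names (vc_leq, causally_ordered) and the example clocks are assumptions, not the implementation given in the paper.

```python
def vc_leq(a, b):
    """Vector-clock comparison: a happened before (or equals) b."""
    return all(x <= y for x, y in zip(a, b))

def causally_ordered(a, b):
    """Two writes are causally related iff their clocks are comparable;
    causal memory must deliver such writes in the same order everywhere."""
    return vc_leq(a, b) or vc_leq(b, a)

w_x = [1, 0]   # process 0 writes x
w_y = [1, 1]   # process 1 writes y after reading x: causally after w_x
w_z = [0, 1]   # process 1 writes z without having read x: concurrent with w_x

assert causally_ordered(w_x, w_y)       # same order at all processes
assert not causally_ordered(w_x, w_z)   # concurrent: orders may differ
```

Concurrent writes such as w_x and w_z are exactly the ones causal memory is free to reorder, which is the source of its extra concurrency over sequential consistency.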

15.
Appraising fairness in languages for distributed programming
The relations among various languages and models for distributed computation and various possible definitions of fairness are considered. Natural semantic criteria are presented which an acceptable notion of fairness should satisfy. These are then used to demonstrate differences among the basic models, the added power of the fairness notion, and the sensitivity of the fairness notion to irrelevant semantic interleavings of independent operations. These results are used to show that, of the considerable variety of commonly used possibilities, only strong process fairness is appropriate for CSP if these criteria are adopted. We also show that under these criteria, none of the commonly used notions of fairness is fully acceptable for a model with an n-way synchronization mechanism. The notion of fairness most often mentioned for Ada is shown to be fully acceptable. For a model with nonblocking send operations, some variants of common fairness definitions are appraised, and two are shown to satisfy the suggested criteria. Krzysztof R. Apt was born in 1949 in Poland. He received his Ph.D. in mathematical logic in 1974 from the Polish Academy of Sciences in Warsaw. From 1974 until 1981 he worked at various scientific institutions in the Netherlands, and from 1981 until 1987 at C.N.R.S. in Paris, France. He spent 1985 as a visiting scientist at the IBM Research Center in Yorktown Heights, U.S.A. He currently holds an Endowed Professorship at the Department of Computer Sciences at the University of Texas at Austin, and is also a senior research scientist at the Centre for Mathematics and Computer Science in Amsterdam, the Netherlands. His research interests include program correctness and semantics, the methodology of distributed computing, the use of logic as a programming language, and non-standard forms of reasoning. He has served on the editorial boards of a number of journals and the program committees of numerous conferences in computer science, and has lectured in a dozen countries on four continents.
Also, he has run two marathons and crossed Sumatra on a bicycle. Shmuel Katz received his B.A. in Mathematics and English Literature from U.C.L.A., and his M.Sc. and Ph.D. in Computer Science (1976) from the Weizmann Institute in Rehovot, Israel. From 1976 to 1981 he was a researcher at the IBM Israel Scientific Center. Presently, he is a Senior Lecturer in the Computer Science Department at the Technion in Haifa, Israel. In 1977–78, he visited for a year at the University of California, Berkeley, and in 1984–85 was at the University of Texas at Austin. He has also been a consultant for the MCC Software Technology Program. His research interests include the methodology of programming, specification methods, program verification and semantics, distributed programming, data structures, and programming languages. Nissim Francez received his B.A. in Mathematics and Philosophy from the Hebrew University in Jerusalem, and his M.Sc. and Ph.D. in computer science (1976) from the Weizmann Institute of Science, Rehovot, Israel. In 1976–77 he spent a postdoctoral year at Queen's University, Belfast, where he was introduced by C.A.R. Hoare to CSP. In 1977–78 he was an assistant professor at USC, Los Angeles. Since 1978 he has been with the Computer Science Department at the Technion. In 1982–83 he was on sabbatical leave at the IBM T.J. Watson Research Center. He has been a consultant for MCC's software technology program, working on multiparty activities in distributed systems. He has held summer appointments at Harvard University, the IBM T.J. Watson Research Center, Utrecht University, CWI (Amsterdam), and MCC. He has also served on several program committees. His research interests include program verification and the semantics of programming languages, mainly for concurrent and distributed programming. He is also interested in logic programming, recursive query evaluation, and compiler construction. He is the author of the first book on Fairness.
Unfortunately, he is incapable of marathon running...

16.
Multimedia presentations (e.g., lectures, digital libraries) normally include discrete media objects such as text and images along with continuous media objects such as video and audio. Objects composing a multimedia presentation need to be delivered based on the temporal relationships specified by the author(s). Hence, even discrete media objects (which do not normally have any real-time characteristics) have temporal constraints on their presentation. The composition of multimedia presentations may be light (without any accompanying video or large multimedia data) or heavy (accompanied by video for the entire presentation duration). The varying nature of the composition of multimedia presentations provides some flexibility for scheduling their retrieval. In this paper, we present a min-max skip round disk scheduling strategy that can admit multimedia presentations in a flexible manner depending on their composition. We also outline strategies for the storage of multimedia presentations on an array of disks as well as on multi-zone recording disks. Emilda Sindhu received the B.Tech. degree in Electrical & Electronics from the University of Calicut, India, in 1995 and the M.S. degree in Computer Science in 2003 from the National University of Singapore. This paper comprises part of her master's thesis work. She is presently employed as a Senior Research Officer at the A*STAR Institute of High Performance Computing (IHPC), Singapore. Her current research interests include distributed computing, particularly Grid computing. She is involved in the development of tools and components for distributed computing applications. Lillykutty Jacob obtained her B.Sc. (Engg.) degree in electronics and communication from Kerala University, India, in 1983, the M.Tech. degree in electrical engineering (communication) from the Indian Institute of Technology at Madras in 1985, and the Ph.D. degree in electrical communication engineering from the Indian Institute of Science in 1993.
She was with the Department of Computer Science, Korea Advanced Institute of Science and Technology, S. Korea, during 1996–97 for postdoctoral research, and with the Department of Computer Science, National University of Singapore, during 1998–2003 as visiting faculty. Since 1985 she has been with the National Institute of Technology at Calicut, India, where she is currently a professor. Her research interests include wireless networks, QoS issues, and performance analysis. Ovidiu Daescu received the B.S. in computer science and automation from the Technical Military Academy, Bucharest, Romania, in 1991, and the M.S. and Ph.D. degrees from the University of Notre Dame in 1997 and 2000. He is currently an Assistant Professor in the Department of Computer Science, University of Texas at Dallas. His research interests are in algorithm design, computational geometry, and geometric optimization. B. Prabhakaran is currently with the Department of Computer Science, University of Texas at Dallas. Dr. Prabhakaran has been working in the area of multimedia systems: multimedia databases, authoring and presentation, resource management, and scalable web-based multimedia presentation servers. He has published several research papers in prestigious conferences and journals in this area. Dr. Prabhakaran received the NSF CAREER Award in FY 2003 for his proposal on animation databases. He has served as an Associate Chair of ACM Multimedia 2003 (November 2003, California), ACM Multimedia 2000 (November 2000, Los Angeles), and ACM Multimedia '99 (Florida, November 1999). He has served as guest editor (special issue on Multimedia Authoring and Presentation) for the ACM Multimedia Systems journal, and serves on the editorial board of the Multimedia Tools and Applications journal, Kluwer Academic Publishers. He has also served as a program committee member for several multimedia conferences and workshops. Dr.
Prabhakaran has presented tutorials at several conferences on topics such as network resource management, adaptive multimedia presentations, and scalable multimedia servers. He has served as visiting research faculty with the Department of Computer Science, University of Maryland, College Park, and has also served as a faculty member in the Department of Computer Science, National University of Singapore, as well as at the Indian Institute of Technology, Madras, India.

17.
Summary We present a formal proof method for distributed programs. The semantics used to justify the proof method explicitly identifies equivalence classes of execution sequences which are equivalent up to permuting commutative operations. Each equivalence class is called an interleaving set, or a run. The proof rules allow concluding the correctness of certain classes of properties for all execution sequences, even though such properties are demonstrated directly only for a subset of the sequences. The subset used must include a representative sequence from each interleaving set, and the proof rules, when applicable, guarantee that this is the case. By choosing a subset with appropriate sequences, simpler intermediate assertions can be used than in previous formal approaches. The method employs proof lattices, and is expressed using the temporal logic ISTL. Shmuel Katz received his B.A. in Mathematics and English Literature from U.C.L.A., and his M.Sc. and Ph.D. in Computer Science (1976) from the Weizmann Institute in Rehovot, Israel. From 1976 to 1981 he was at the IBM Israel Scientific Center. Presently, he is on the faculty of the Computer Science Department at the Technion in Haifa, Israel. In 1977–1978 he visited for a year at the University of California, Berkeley, and in 1984–1985 was at the University of Texas at Austin. He has been a consultant and visitor at the MCC Software Technology Program, and in 1988–1989 was a visiting scientist at the I.B.M. Watson Research Center. His research interests include the methodology of programming, specification methods, program verification and semantics, distributed programming, data structures, and programming languages. Doron Peled was born in 1962 in Haifa. He received his B.Sc. and M.Sc. in Computer Science from the Technion, Israel, in 1984 and 1987, respectively. Between 1987 and 1991 he did his military service, and also completed his D.Sc. degree at the Technion during these years. Dr.
Peled was with the Computer Science Department at Warwick University in 1991–1992. He is currently a member of the technical staff at AT&T Bell Laboratories. His main research interests are the specification and verification of programs, especially as related to partial order models, fault-tolerance, and real-time. He is also interested in semantics and topology. This research was carried out while the second author was at the Department of Computer Science, The Technion, Haifa 32000, Israel.

18.
Summary Algorithms for mutual exclusion that adapt to the current degree of contention are developed. A filter and a leader election algorithm form the basic building blocks. The algorithms achieve system response times that are independent of the total number of processes and governed instead by the current degree of contention. The final algorithm achieves a constant amortized system response time. Manhoi Choy was born in 1967 in Hong Kong. He received his B.Sc. in Electrical and Electronic Engineering from the University of Hong Kong in 1989, and his M.Sc. in Computer Science from the University of California at Santa Barbara in 1991. Currently, he is working on his Ph.D. in Computer Science at the University of California at Santa Barbara. His research interests are in the areas of parallel and distributed systems, and distributed algorithms. Ambuj K. Singh is an Assistant Professor in the Department of Computer Science at the University of California, Santa Barbara. He received a Ph.D. in Computer Science from the University of Texas at Austin in 1989, an M.S. in Computer Science from Iowa State University in 1984, and a B.Tech. from the Indian Institute of Technology at Kharagpur in 1982. His research interests are in the areas of adaptive resource allocation, concurrent program development, and distributed shared memory. A preliminary version of the paper appeared in the 12th Annual ACM Symposium on Principles of Distributed Computing. Work supported in part by NSF grants CCR-9008628 and CCR-9223094.
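The "filter" building block named in the abstract is in the spirit of the classic n-process filter lock (Peterson's generalization). The sketch below shows that classic, non-adaptive lock, not the adaptive algorithm of the paper, and it assumes sequentially consistent shared reads and writes, which CPython's threading provides in practice.

```python
import threading

class FilterLock:
    """Classic n-process filter lock built from read/write variables only.
    Illustrative sketch; not the adaptive algorithm of the paper."""
    def __init__(self, n):
        self.n = n
        self.level = [0] * n    # level[i]: highest filter level process i has entered
        self.victim = [0] * n   # victim[l]: last process to enter level l

    def lock(self, me):
        for lvl in range(1, self.n):
            self.level[me] = lvl
            self.victim[lvl] = me
            # spin while we are the victim at this level and some other
            # process is at this level or higher
            while self.victim[lvl] == me and any(
                    k != me and self.level[k] >= lvl for k in range(self.n)):
                pass

    def unlock(self, me):
        self.level[me] = 0

counter = 0
lock = FilterLock(3)

def worker(me):
    global counter
    for _ in range(200):
        lock.lock(me)
        counter += 1            # critical section
        lock.unlock(me)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 600
```

At most n-1 processes pass level 1, at most n-2 pass level 2, and so on, so the top level admits one process at a time; the paper's contribution is arranging such filters so the work depends on contention rather than on n.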

19.
Summary Three self-stabilizing protocols for distributed systems in the shared memory model are presented. The first protocol is a mutual-exclusion protocol for tree structured systems. The second protocol is a spanning tree protocol for systems with any connected communication graph. The third protocol is obtained by use of fair protocol combination, a simple technique which enables the combination of two self-stabilizing dynamic protocols. The resulting protocol is a self-stabilizing, mutual-exclusion protocol for dynamic systems with a general (connected) communication graph. The presented protocols improve upon previous protocols in two ways: First, it is assumed that the only atomic operations are either read or write to the shared memory. Second, our protocols work for any connected network and even for dynamic networks, in which the topology of the network may change during the execution. Shlomi Dolev received his B.Sc. in Civil Engineering and B.A. in Computer Science in 1984 and 1985, and his M.Sc. and Ph.D. in Computer Science in 1989 and 1992 from the Technion, Israel Institute of Technology. He is currently a postdoctoral fellow in the Department of Computer Science at Texas A&M University. His current research interests include the theoretical aspects of distributed computing and communication networks. Amos Israeli received his B.Sc. in Mathematics and Physics from the Hebrew University in 1976, and his M.Sc. and D.Sc. in Computer Science from the Weizmann Institute in 1980 and the Technion in 1985, respectively. Currently he is a senior lecturer in the Electrical Engineering Department at the Technion. Prior to this he was a postdoctoral fellow at the Aiken Computation Laboratory at Harvard. His research interests are in parallel and distributed computing and in robotics. In particular, he has worked on the design and analysis of wait-free and self-stabilizing distributed protocols. Shlomo Moran received his B.Sc. and D.Sc.
degrees in mathematics from the Technion, Israel Institute of Technology, Haifa, in 1975 and 1979, respectively. From 1979 to 1981 he was an assistant professor and a visiting research specialist at the University of Minnesota, Minneapolis. From 1981 to 1985 he was a senior lecturer at the Department of Computer Science, Technion, and from 1985 to 1986 he visited the IBM Thomas J. Watson Research Center, Yorktown Heights. From 1986 to 1993 he was an associate professor at the Department of Computer Science, Technion. In 1992–93 he visited AT&T Bell Labs at Murray Hill and the Centrum voor Wiskunde en Informatica, Amsterdam. Since 1993 he has been a full professor at the Department of Computer Science, Technion. His research interests include distributed algorithms, computational complexity, combinatorics, and graph theory. Part of this research was supported by Technion V.P.R. Funds (Wellner Research Fund), and by the Foundation for Research in Electronics, Computers and Communications, administered by the Israel Academy of Sciences and Humanities.
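For a concrete feel of self-stabilization, the sketch below simulates Dijkstra's classic K-state token ring under a randomized daemon: started in an arbitrary (possibly corrupted) state, it converges to configurations with exactly one privilege. This is the canonical textbook example, not the read/write tree or spanning-tree protocols of the paper, and the step bound used is an assumption that is comfortably larger than the known stabilization time for this ring size.

```python
import random

def privileges(state):
    """Indices of privileged machines in Dijkstra's K-state token ring:
    machine 0 is privileged iff its state equals machine n-1's;
    machine i > 0 is privileged iff its state differs from machine i-1's."""
    n = len(state)
    priv = [0] if state[0] == state[-1] else []
    priv += [i for i in range(1, n) if state[i] != state[i - 1]]
    return priv

def step(state, K):
    """Let an arbitrarily chosen privileged machine move (the daemon's choice)."""
    i = random.choice(privileges(state))
    if i == 0:
        state[0] = (state[-1] + 1) % K
    else:
        state[i] = state[i - 1]

random.seed(1)
n, K = 5, 6                                       # K >= n guarantees stabilization
state = [random.randrange(K) for _ in range(n)]   # arbitrary initial state
for _ in range(200):                              # far more steps than needed for n = 5
    step(state, K)
assert len(privileges(state)) == 1                # legitimate: exactly one privilege
```

Note that some machine is always privileged (if no machine i > 0 is, all states are equal and machine 0 is), so the daemon never stalls; self-stabilization means that from any start, every execution reaches and stays in the one-privilege configurations.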

20.
We present a distributed algorithm for electing a leader (i.e., breaking symmetry) in bidirectional rings of N processors with no global sense of orientation, that uses at most 1.44...N log N + O(N) messages in the worst case. Jan van Leeuwen received his M.Sc. degree in 1969 (cum laude) and the Ph.D. degree in 1972 from the University of Utrecht, Utrecht, The Netherlands. He held a postdoctoral fellowship in computer science at the University of California at Berkeley (1972–1973), visiting assistant professorships in computer science at the State University of New York at Buffalo (1973–1974, 1975–1976), and a visiting associate professorship in computer science at The Pennsylvania State University, University Park (1976–1977). In 1977 he was appointed Associate Professor of Computer Science at the University of Utrecht and became head of the new Department of Computer Science at this university. He is presently Full Professor of Computer Science. Dr. van Leeuwen is active in many disciplines within computer science. His primary research interests are fundamental studies in varied areas of computer science, viz. the analysis and complexity of computer algorithms, in both a theoretical and an applied sense (e.g., data structures, machine models, VLSI, parallel and distributed computing, and cryptography). Richard B. Tan is an Associate Professor of Mathematics and Computer Science at the University of Sciences and Arts of Oklahoma. He spends his summers at the University of Utrecht, the Netherlands. His research interests are in distributed computation and graph algorithms. He received the B.Sc. in Physics from Beloit College, WI, the M.S. in Computer Science and the Ph.D. (in 1980) in Mathematics from the University of Oklahoma. This work was done while the second author was visiting the University of Utrecht, supported by a grant from the Netherlands Organization for the Advancement of Pure Research (ZWO).
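For comparison, the classic Chang–Roberts election on a unidirectional (oriented) ring is easy to simulate: each process forwards only identifiers larger than its own, and an identifier that travels all the way around wins. It uses O(N log N) messages on average but Θ(N²) in the worst case, whereas the bidirectional, orientation-free algorithm summarized above achieves at most about 1.44 N log N + O(N). The round-based simulation below is an illustrative sketch under the assumption of unique identifiers, not the paper's algorithm.

```python
def chang_roberts(ids):
    """Simulate Chang-Roberts leader election on a unidirectional ring of
    processes with unique ids. Returns (leader_id, messages_sent)."""
    n = len(ids)
    messages = 0
    in_flight = list(ids)           # token held at process i, bound for i+1
    leader = None
    while leader is None:
        nxt = [None] * n
        for i, tok in enumerate(in_flight):
            if tok is None:
                continue
            j = (i + 1) % n
            messages += 1
            if tok == ids[j]:       # own id came all the way around: j wins
                leader = tok
            elif tok > ids[j]:
                nxt[j] = tok        # forward the larger id
            # tokens smaller than the receiver's id are swallowed
        in_flight = nxt
    return leader, messages

leader, msgs = chang_roberts([3, 7, 1, 5, 2])
assert leader == 7
```

Counting `messages` this way makes the complexity gap concrete: on a ring sorted in decreasing order every id but the maximum travels far, giving the quadratic worst case that the bidirectional algorithm avoids.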
