Similar documents
20 similar documents found (search time: 171 ms)
1.
Vector and matrix clocks are extensively used in asynchronous distributed systems. This paper asks, how does the clock abstraction generalize? To address this problem, the paper motivates and proposes logical clocks of arbitrary dimensions. It then identifies and explores the conceptual link between such clocks and knowledge. It establishes the necessary and sufficient conditions on the size and dimension of clocks required to attain any specified level of knowledge about the timestamp of the most recent system state for which this is possible without using any messages in the clock protocol. The paper then gives algorithms to determine the timestamp of the latest system state about which a specified level of knowledge is attainable in a given system state, and to compute the timestamp of the earliest system state in which a specified level of knowledge about a given system state is attainable. The results are applicable to applications that deal with a certain class of properties, identified as monotonic properties. Received: 15 March 2002, Accepted: 15 January 2004, Published online: 9 June 2004. This material is based on work supported by the National Science Foundation under Grant No. 9875617. A conference version appeared in [14] and the full technical report is in [13].
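For reference, the one-dimensional case this paper generalizes is the standard vector clock. A minimal sketch in Python (the process API and names are illustrative, not the paper's notation):

```python
# Minimal sketch of the classic (one-dimensional) vector clock that the
# paper generalizes; process ids and this API are illustrative.

class VectorClock:
    def __init__(self, pid: int, n: int):
        self.pid = pid          # index of the owning process
        self.v = [0] * n        # one counter per process

    def local_event(self):
        self.v[self.pid] += 1   # tick own component

    def send(self):
        self.local_event()
        return list(self.v)     # timestamp piggybacked on the message

    def receive(self, ts):
        # merge the sender's knowledge component-wise, then tick
        self.v = [max(a, b) for a, b in zip(self.v, ts)]
        self.local_event()

def happened_before(u, w):
    # u -> w iff u <= w component-wise and u != w
    return all(a <= b for a, b in zip(u, w)) and u != w
```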

2.
3.
4.
Ensuring causal consistency in a Distributed Shared Memory (DSM) means all operations executed at each process will be compliant with a causality order relation. This paper first introduces an optimality criterion for a protocol P, based on complete replication of variables at each process and propagation of write updates, that enforces causal consistency. This criterion measures the capability of a protocol to update the local copy as soon as possible while respecting causal consistency. We then present an optimal protocol built on top of a reliable broadcast communication primitive and show that previous protocols based on complete replication presented in the literature are not optimal. Interestingly, we prove that the optimal protocol embeds a system of vector clocks which captures the read/write semantics of a causal memory. From an operational point of view, an optimal protocol strongly reduces its message buffer overhead. Simulation studies show that the optimal protocol buffers roughly an order of magnitude fewer messages than non-optimal ones based on the same communication primitive. R. Baldoni: Roberto Baldoni is a Professor of Distributed Systems at the University of Rome “La Sapienza”. He has published more than one hundred papers (from theory to practice) in the fields of distributed and mobile computing, middleware platforms, and information systems. He is the founder of the MIDdleware LABoratory (MIDLAB, www.dis.uniroma1.it/~midlab), whose members participate in national and European research projects. He regularly serves as an expert for the EU commission in the evaluation of EU projects. Roberto Baldoni chaired the program committee of the “distributed algorithms” track of the 19th IEEE International Conference on Distributed Computing Systems (ICDCS-99) and was PC Co-chair of the ACM International Workshop on Principles of Mobile Computing (POMC). He has also been involved in the organizing and program committees of many premier international conferences and workshops. A. Milani: Alessia Milani is currently pursuing a joint research doctorate between the Department of Computer and Systems Science of the University of Rome “La Sapienza” and the University of Rennes I, IRISA. She earned a Laurea degree in Computer Engineering at the University of Rome “La Sapienza” in May 2003. Her research activity is in the area of distributed systems. Her current research interests include communication paradigms, in particular distributed shared memories, replication, and consistency criteria. S. Tucci Piergiovanni: Sara Tucci Piergiovanni is currently a Ph.D. student at the Department of Computer and Systems Science of the University of Rome “La Sapienza”. She earned a Laurea degree in Computer Engineering at the University of Rome “La Sapienza” in March 2002 with marks 108/110. Her Laurea thesis was awarded the Italian national “Federcommin-AICA” prize 2002 for the best Laurea thesis in Information Technology. Her research activity is in the area of distributed systems. Early work involved fault tolerance in asynchronous systems and software replication. Currently, her main focus is on communication paradigms that provide “anonymous” communication, such as publish/subscribe and distributed shared memories. The core contributions are several papers published in international conferences and journals.
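The vector-clock machinery such protocols embed boils down to a delivery condition on write updates: an update is applied only once all of its causal predecessors have been applied locally. A hedged sketch of that standard condition (class and field names are invented for illustration; this is not the paper's optimal protocol itself):

```python
# Hedged sketch: the classic vector-clock delivery condition that
# complete-replication causal-memory protocols embed. The Update
# container and field names are illustrative, not the paper's notation.
from dataclasses import dataclass

@dataclass
class Update:
    sender: int
    ts: list        # sender's vector clock at broadcast time
    var: str
    value: object

class CausalReplica:
    def __init__(self, pid: int, n: int):
        self.pid, self.n = pid, n
        self.applied = [0] * n      # updates applied per origin
        self.store = {}             # local copy of every variable
        self.buffer = []            # updates waiting for predecessors

    def deliverable(self, u: Update) -> bool:
        # u is safe when it is the next update from its sender and every
        # update u causally depends on has already been applied here
        return (u.ts[u.sender] == self.applied[u.sender] + 1 and
                all(u.ts[k] <= self.applied[k]
                    for k in range(self.n) if k != u.sender))

    def on_broadcast(self, u: Update):
        self.buffer.append(u)
        progress = True
        while progress:             # apply everything newly unblocked
            progress = False
            for w in list(self.buffer):
                if self.deliverable(w):
                    self.store[w.var] = w.value
                    self.applied[w.sender] = w.ts[w.sender]
                    self.buffer.remove(w)
                    progress = True
```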

5.
POTENTIAL: A highly adaptive core of parallel database system (Total citations: 1; self-citations: 1; others: 0)
POTENTIAL is a virtual database machine based on general computing platforms, especially parallel computing platforms. It provides a complete solution to high-performance database systems via a 'virtual processor + virtual data bus + virtual memory' architecture. Virtual processors manage all CPU resources in the system, on which various operations run. The virtual data bus is responsible for managing data transmission between associated operations, forming the hinges of the entire system. Virtual memory provides efficient data storage and buffering mechanisms that conform to data reference behaviors in database systems. The architecture of POTENTIAL is very clear and has many good features, including high efficiency, high scalability, high extensibility, and high portability.

6.
An approach is presented for modeling networks of processes that communicate exclusively through message passing. A process (or a network) is defined by its set of possible behaviors, where each behavior is an abstraction of an infinite execution sequence of the process. The resulting model is simple and modular and facilitates information hiding. It can describe both synchronous and asynchronous networks. It supports recursively-defined networks and can characterize liveness properties such as progress of inputs and outputs, termination, and deadlock. A sound and complete temporal proof system based on the model is presented. It is compositional — a specification of a network is formed naturally from specifications of its components. Van Nguyen received a B.S. degree from Monash University in 1982, an M.S. degree from Cornell University in 1983, and a Ph.D. degree from Cornell University in 1985. He has accepted a research position at the IBM Thomas J. Watson Research Center. His research interests include logics and semantics of programs, programming languages, program synthesis, and distributed computing. David Gries received a Ph.D. (actually, a Dr. rer. nat.) from the Munich Institute of Technology (Germany) in 1966. He was an assistant professor at Stanford from 1966 to 1969 and has been on the faculty of Computer Science at Cornell since 1969, where he is presently chairman of the department. He is known for his research in compilers (he is a co-author of the Alcor-Illinois 7090 Algol compiler, finished in 1964), for his research in programming methodology, and for his texts Compiler Construction for Digital Computers (1971) and The Science of Programming (1981). He was a Guggenheim Fellow in 1984–85. Susan Owicki received the B.S. degree in mathematics from Michigan State University in 1968. She then attended Cornell University as an NSF Fellow, receiving the M.S. and Ph.D. degrees in computer science in 1970 and 1975, respectively. From 1975 to 1976 she was an Assistant Professor in the Department of Computer Science at Cornell University. Since then she has been a member of the Department of Electrical Engineering at Stanford University, where she is currently an Associate Professor. Dr. Owicki's research in the area of concurrent programming has included work in program verification, programming languages and methodology, and design of algorithms for concurrent systems. She has been particularly interested in problems in computer networks and distributed systems. This work was supported by the NSF under grants MCS-81-03605, DCR-83-202-74, and DCR-83-123-19; by NASA under contract NAGW419; and by the third author's Guggenheim Fellowship. This paper is based on part of the first author's Ph.D. thesis.

7.
With the explosive growth of the Internet and World Wide Web comes a dramatic increase in the number of users that compete for the shared resources of distributed system environments. Most implementations of application servers and distributed search software do not distinguish among requests to different web pages. This has the implication that the behavior of application servers is quite unpredictable. Applications that require timely delivery of fresh information consequently suffer the most in such competitive environments. This paper presents a model of quality of service (QoS) and the design of a QoS-enabled information delivery system that implements such a QoS model. The goal of this development is two-fold. On one hand, we want to enable users or applications to specify the desired quality of service requirements for their requests so that application-aware QoS adaptation is supported throughout the Web query and search processing. On the other hand, we want to enable an application server to customize how it should respond to external requests by setting priorities among query requests and allocating server resources using adaptive QoS control mechanisms. We introduce the Infopipe approach as the systems support architecture and underlying technology for building a QoS-enabled distributed system for fresh information delivery. Ling Liu, Ph.D.: She is an associate professor at the College of Computing, Georgia Institute of Technology. She received her Ph.D. from Tilburg University, The Netherlands, in 1993. Her research interests are in the area of large-scale data-intensive systems and their applications in distributed, mobile, multimedia, and Internet computing environments. Her work has focused on systems support for creating, searching, manipulating, and monitoring streams of information in wide-area networked information systems. She has published more than 70 papers in international journals and conferences, and has served on more than 20 program committees in the areas of data engineering, databases, and knowledge and information management. Calton Pu, Ph.D.: He is a Professor and John P. Imlay, Jr. Chair in Software at the College of Computing, Georgia Institute of Technology. Calton received his Ph.D. from the University of Washington in 1986. He leads the Infosphere expedition project, which is building the system software to support the next generation of information flow applications. Infosphere research includes adaptive operating system kernels, communications middleware, and distributed information flow applications. His past research included operating system projects such as Synthetix and Microfeedback, extended transaction projects such as Epsilon Serializability, and Internet data management. He has published more than 125 journal and conference papers, and served on more than 40 program committees. Karsten Schwan, Ph.D.: He is a professor in the College of Computing at the Georgia Institute of Technology. He received the M.Sc. and Ph.D. degrees from Carnegie Mellon University in Pittsburgh, Pennsylvania. He directs the IHPC project for high performance cluster computing at Georgia Tech. His current research addresses the interactive nature of modern high performance applications (i.e., online monitoring and computational steering), the development of efficient and object-based middleware, operating system support for distributed and parallel programs, and the online configuration of applications for distributed real-time applications and for communication protocols. Jonathan Walpole, Ph.D.: He is a Professor in the Computer Science and Engineering Department at Oregon Graduate Institute of Science and Technology. He received his Ph.D. in Computer Science from Lancaster University, U.K. in 1987. His research interests are in the area of adaptive systems software and its application in distributed, mobile, multimedia computing environments. His work has focused on quality of service specification, adaptive resource management and dynamic specialization for enhanced performance, survivability and evolvability of large software systems, and he has published extensively in these areas.
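As a rough illustration of the request-prioritization idea described above (not the Infopipe API itself; all names here are invented), a server can order incoming query requests by a client-specified QoS class:

```python
# Illustrative sketch only: one way a server can set priorities among
# query requests, as the abstract describes. These names are invented
# for the example; the Infopipe architecture itself is not shown.
import heapq
import itertools

class QoSRequestQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def submit(self, priority: int, request):
        # lower number = more urgent; seq breaks ties FIFO within a class
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def next_request(self):
        _, _, request = heapq.heappop(self._heap)
        return request

q = QoSRequestQueue()
q.submit(2, "background crawl refresh")
q.submit(0, "fresh-news query, tight deadline")
assert q.next_request() == "fresh-news query, tight deadline"
```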

8.
Computational modeling in the health sciences is still very challenging, and much of the success has come despite the difficulties involved in integrating all of the technologies, software, and other tools necessary to answer complex questions. Very large-scale problems are open to questions of spatio-temporal scale, and of whether physico-chemical complexity is matched by biological complexity. For example, for many reasons, many large-scale biomedical computations today still tend to use rather simplified physics/chemistry compared with the state of knowledge of the actual biology/biochemistry. Modern grid technologies offer the ability to create new paradigms for computing, enabling access to resources which facilitate spanning the biological scale. Wibke Sudholt: She is a postdoc with J. A. McCammon and K. Baldridge at the University of California, San Diego, and a fellow of the German Academic Exchange Service (DAAD). She received her diploma (Dipl. Chem.) at the University of Dortmund, Germany, in 1996, and her doctoral degree (Dr. rer. nat.) in 2001 at Heinrich-Heine-University Duesseldorf, Germany, with Wolfgang Domcke, on theoretical studies of a charge-transfer process. Her current research interests include the combination of quantum chemistry, molecular mechanics and continuum electrostatics to describe chemical reactions in complex molecular systems. Kim K. Baldridge: She is a theoretical and computational chemist with expertise in the design, development, and application of computational quantum chemical methodology for understanding chemical and biochemical reaction processes of broad interest. Efforts include development of computational tools and associated grid technologies for the broader scientific community. She is a Fellow of the APS and AAAS, and was the 2000 Agnes Fay Morgan Awardee for Research Achievement in Chemistry. She is the Program Director for Integrative Computational Sciences at SDSC, where she has worked since 1989, and additionally holds an adjunct professorship at UCSD. David Abramson: He is currently a professor of Computer Science in the School of Computer Science and Software Engineering (CSSE) at Monash University, Australia. He is a project leader in the Co-operative Research Centre for Distributed Systems Nimrod Project and Chief Investigator on two ARC-funded research projects. He is a co-founder of Active Tools P/L with Dr. Rok Sosic, established to commercialize the Nimrod project, and of Guardsoft, focused on commercializing the Guard project. Abramson’s current interests are in high performance computer systems design and software engineering tools for programming parallel, distributed supercomputers. Colin Enticott: He completed a BComp (Hons) degree in mid-2002 at Monash University, Australia. His project, done under the supervision of Professor David Abramson, “The Multi Site EnFuzion Client”, dealt with cluster-of-clusters computing, which has led him into Grid computing. He is currently employed by DSTC (Distributed Systems Technology Centre, Melbourne, Australia), working on the user front-end of Nimrod (the Nimrod Portal) and cluster implementations. Slavisa Garic: He completed a Bachelor of Computer Science (Hons) degree at Monash University, Australia, in November 2001. His project, “Suburban Area Networks: Security”, involved working on security aspects of wireless community and suburban networks. At the beginning of 2002, he joined the Distributed Systems Technology Centre, Melbourne, Australia, where he currently works as a core Nimrod/G developer.

9.
In this paper, genetic programming is used as an alternative means to automatically generate secure and minimal hardware designs of public-key cryptosystems such as the RSA cryptosystem. We evolve optimal hardware circuits for modular exponentiation, which is a cornerstone operation in many public-key cryptographic systems. The evolved circuits minimize both space (i.e. required gate number) and time (i.e. encryption and decryption time). The evolved designs are shielded against side-channel leakage and hence secure. The structure of the cryptographic circuit is random and so the private key cannot be deduced using known attacks. We compare our results against existing well-known designs, which were produced by human designers based on the binary method. Nadia Nedjah, Ph.D.: She is an associate professor in the Department of Electronics Engineering and Telecommunications at the Faculty of Engineering, State University of Rio de Janeiro, Brazil. Her research interests include functional programming, embedded systems and reconfigurable hardware design as well as cryptography. Nedjah received her Ph.D. in Computation from the University of Manchester — Institute of Science and Technology (UMIST), England, her M.Sc. in System Engineering and Computation from the University of Annaba, Algeria, and her Engineering degree in Computer Science also from the University of Annaba, Algeria. Luiza de Macedo Mourelle, Ph.D.: She is an associate professor in the Department of System Engineering and Computation at the Faculty of Engineering, State University of Rio de Janeiro, Brazil. Her research interests include computer architecture, embedded systems design, hardware/software codesign and reconfigurable hardware. She received her Ph.D. in Computation from the University of Manchester — Institute of Science and Technology (UMIST), England, her M.Sc. in System Engineering and Computation from the Federal University of Rio de Janeiro (UFRJ), Brazil, and her Engineering degree in Electronics also from UFRJ, Brazil.
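The binary (square-and-multiply) method mentioned as the basis of the human-produced reference designs computes a modular exponentiation in one pass over the exponent bits. A plain software rendition for illustration (note the key-dependent branch, which is exactly the kind of side channel the evolved, shielded circuits are meant to avoid):

```python
# The binary (square-and-multiply) method underlying the hand-made
# designs the evolved circuits are compared against. This naive form
# branches on exponent bits, which leaks timing information; it is
# shown only to illustrate the reference algorithm.
def mod_exp(base: int, exp: int, mod: int) -> int:
    result = 1
    base %= mod
    while exp:
        if exp & 1:                        # key-dependent branch (leaky)
            result = (result * base) % mod
        base = (base * base) % mod         # square every iteration
        exp >>= 1
    return result

assert mod_exp(7, 560, 561) == pow(7, 560, 561)
```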

10.
Constraint Satisfaction Problems (CSPs)17) are an effective framework for modeling a variety of real-life applications, and many techniques have been proposed for solving them efficiently. CSPs are based on the assumption that all constrained data (values in variable domains) are available at the beginning of the computation. However, many non-toy problems derive their parameters from an external environment. Data retrieval can be a hard task, because data can come from a third-party system that has to convert information encoded with signals (derived from sensors) into symbolic information (exploitable by a CSP solver). Also, data can be provided by the user or have to be queried from a database. For this purpose, we introduce an extension of the widely used CSP model, called the Interactive Constraint Satisfaction Problem (ICSP) model. Variable domain values can be acquired when needed during the resolution process by means of Interactive Constraints, which retrieve (possibly consistent) information. A general framework for constraint propagation algorithms is proposed which is parametric in the number of acquisitions performed at each step. Experimental results show the effectiveness of the proposed approach. Some applications which can benefit from the proposed solution are also discussed. This paper is an extended and revised version of the paper presented at IJCAI’99 (Stockholm, August 1999)4). Paola Mello, Ph.D.: She received her degree in Electronic Engineering from the University of Bologna, Italy, in 1982 and her Ph.D. degree in Computer Science in 1989. She has been a full Professor since 1994. She is at present with the Faculty of Engineering of the University of Bologna, where she teaches Artificial Intelligence. Her research activity focuses on: programming languages, with particular reference to logic languages and their extensions towards modular and object-oriented programming; artificial intelligence; knowledge representation; expert systems. Her research has covered implementation, application and theoretical aspects and is presented in several national and international publications. She took part in several national (Progetti Finalizzati e MURST) and international (UE) research projects in the context of computational logic. Michela Milano, Ph.D.: She is a Researcher in the Department of Electronics, Computer Science and Systems at the University of Bologna. From the same University she obtained her master's degree in 1994 and her Ph.D. in 1998. In 1999 she held a post-doc position at the University of Ferrara. Her research focuses on Artificial Intelligence, Constraint Satisfaction and Constraint Programming. In particular, she has worked on using and extending the constraint-based paradigm for solving real-life problems such as scheduling, routing, object recognition and planning. She has served on the program committees of several international conferences in the area of Constraint Satisfaction and Programming, and as a referee for several related international journals. Marco Gavanelli: He is currently a Ph.D. student in the Department of Engineering at the University of Ferrara, Italy. He graduated in Computer Science Engineering in 1998 at the University of Bologna, Italy. His research interests include Artificial Intelligence, Constraint Logic Programming, Constraint Satisfaction and visual recognition. He is a member of ALP (the Association for Logic Programming) and AI*IA (the Italian Association for Artificial Intelligence).
Evelina Lamma, Ph.D.: She received her degree in Electrical Engineering at the University of Bologna in 1985, and her Ph.D. in Computer Science in 1990. Her research activity centers on logic programming languages, Artificial Intelligence and software engineering. She was a co-organizer of the 3rd International Workshop on Extensions of Logic Programming (ELP92), held in Bologna in February 1992, and of the 6th Italian Congress on Artificial Intelligence, held in Bologna in September 1999. She is a member of the Executive Committee of the Italian Association for Artificial Intelligence (AI*IA). Currently, she is a Full Professor at the University of Ferrara, where she teaches Artificial Intelligence and Foundations of Computer Science. Massimo Piccardi, Ph.D.: He graduated in electronic engineering at the University of Bologna, Italy, in 1991, and received a Ph.D. in computer science and computer engineering from the same university in 1995. He is currently an assistant professor of computer science with the Faculty of Engineering at the University of Ferrara, Italy, where he teaches courses on computer architecture and microprocessor systems. Massimo Piccardi has participated in several research projects in the area of computer vision and pattern recognition. His research interests include architectures, algorithms and benchmarks for computer vision and pattern recognition. He is the author of more than forty papers in international scientific journals and conference proceedings. Dr. Piccardi is a member of the IEEE, the IEEE Computer Society, and the International Association for Pattern Recognition — Italian Chapter. Rita Cucchiara, Ph.D.: She is an associate professor of computer science at the Faculty of Engineering at the University of Modena and Reggio Emilia, Italy, where she teaches courses on computer architecture and computer vision. She graduated in electronic engineering at the University of Bologna, Italy, in 1989 and received a Ph.D. in electronic engineering and computer science from the same university in 1993. From 1993 to 1998 she was an assistant professor of computer science at the University of Ferrara, Italy. She participated in many research projects, including a SIMD parallel system for vision in the context of an Italian advanced research program in robotics, funded by CNR (the Italian National Research Council). Her research interests include architectures and algorithms for computer vision and multimedia systems. She is the author of several papers in scientific journals and conference proceedings. She is a member of the IEEE, the IEEE Computer Society, and the International Association for Pattern Recognition — Italian Chapter.
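The core ICSP move is to acquire domain values on demand during propagation instead of up front. A hedged sketch of that acquisition loop (the callback, batch size k, and class names are assumptions for illustration, not the paper's framework):

```python
# Hedged sketch of the ICSP idea: a variable's domain is acquired
# lazily, k values at a time, from an external source (sensor, DB,
# user) interleaved with consistency checking. All names are invented.
class InteractiveVariable:
    def __init__(self, name, acquire, k=3):
        self.name = name
        self.acquire = acquire    # callback that fetches up to k values
        self.k = k                # acquisitions performed per step
        self.domain = []          # values known so far
        self.exhausted = False    # no more values available externally

    def values(self, consistent_with):
        # serve already-known consistent values first...
        for v in self.domain:
            if consistent_with(v):
                yield v
        # ...then interleave further acquisitions with propagation
        while not self.exhausted:
            fresh = self.acquire(self.k)
            if not fresh:
                self.exhausted = True
                break
            self.domain.extend(fresh)
            for v in fresh:
                if consistent_with(v):
                    yield v
```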

11.
This paper studies the problem of balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations which result in unutilized capacity. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes. We show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates, and consequently requires significantly less update overhead. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations. Mema Roussopoulos is an Assistant Professor of Computer Science on the Gordon McKay Endowment at Harvard University. Before joining Harvard, she was a Postdoctoral Fellow in the Computer Science Department at Stanford University. She received her PhD and Master’s degrees in Computer Science from Stanford, and her Bachelor’s degree in Computer Science from the University of Maryland at College Park. Her interests are in the areas of distributed systems, networking, and mobile and wireless computing. Mary Baker is a Senior Research Scientist at HP Labs. Her research interests include distributed systems, networks, mobile systems, security, and digital preservation. Before joining HP Labs she was on the faculty of the computer science department at Stanford University where she ran the MosquitoNet project. She received her PhD from the University of California at Berkeley.
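The essence of Max-Cap as described above is that routing weights come from fixed maximum capacities rather than from periodic load reports. A minimal sketch (capacities and peer names are made up for the example):

```python
# Sketch of the core Max-Cap idea as the abstract states it: route
# requests in proportion to each replica's fixed maximum inherent
# capacity, so no timely load updates are needed. Numbers are invented.
import random

replicas = {"peer-a": 100.0,   # max inherent capacity (requests/sec)
            "peer-b": 25.0,
            "peer-c": 10.0}

def pick_replica() -> str:
    nodes, caps = zip(*replicas.items())
    return random.choices(nodes, weights=caps, k=1)[0]

# In expectation peer-a serves 100/135 of the requests; because the
# weights never change, stale load reports cannot cause oscillation.
print(pick_replica())
```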

12.
Distributed Mutual Exclusion algorithms have mainly been compared using the number of messages exchanged per critical section execution. In such algorithms, no attention has been paid to the serialization order of the requests; indeed, they adopt the FCFS discipline. Conversely, the insertion of priority serialization disciplines, such as Short-Job-First, Head-Of-Line, Shortest-Remaining-Job-First, etc., can be useful in many applications to optimize some performance indices. However, such priority disciplines are prone to starvation. The goal of this paper is to investigate and evaluate the impact of the insertion of a priority discipline in Maekawa-type algorithms. Priority serialization disciplines are inserted by means of a gated batch mechanism which avoids starvation. In a distributed algorithm, such a mechanism needs synchronizations among the processes. In order to highlight the usefulness of the priority-based serialization discipline, we show how it can be used to improve the average response time compared to the FCFS discipline. The gated batch approach exhibits other advantages: algorithms are inherently deadlock-free and messages do not need to piggyback timestamps. We also show that, under heavy demand, algorithms using gated batches exchange fewer messages per critical section execution than Maekawa-type algorithms. Roberto Baldoni was born in Rome on February 1, 1965. He received the Laurea degree in electronic engineering in 1990 and the Ph.D. degree in Computer Science in 1994, both from the University of Rome La Sapienza. Currently, he is a researcher in computer science at IRISA, Rennes (France). His research interests include operating systems, distributed algorithms, network protocols and real-time multimedia applications. Bruno Ciciani received the Laurea degree in electronic engineering in 1980 from the University of Rome La Sapienza. From 1983 to 1991 he was a researcher at the University of Rome Tor Vergata. He is currently full professor in Computer Science at the University of Rome La Sapienza. His research activities include distributed computer systems, fault-tolerant computing, languages for parallel processing, and computer system performance and reliability evaluation. He has published in IEEE Trans. on Computers, IEEE Trans. on Knowledge and Data Engineering, IEEE Trans. on Software Engineering and IEEE Trans. on Reliability. He is the author of a book titled Manufacturing Yield Evaluation of VLSI/WSI Systems, to be published by IEEE Computer Society Press. This research was supported in part by the Consiglio Nazionale delle Ricerche under grant 93.02294.CT12. This author is also supported by a grant of the Human Capital and Mobility project of the European Community under contract No. 3702 CABERNET.
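A gated batch of the kind described can be sketched as two queues: requests arriving while the current batch is being served wait behind the gate, so a priority order inside each batch cannot starve later arrivals. A hedged illustration (Shortest-Job-First is chosen here; the Maekawa quorum machinery and inter-process synchronization are omitted):

```python
# Hedged sketch of a gated batch serialization discipline. Requests
# that arrive during the current batch wait behind the "gate"; each
# batch is served in priority order (here SJF), so no request can be
# overtaken forever. The distributed quorum protocol is not shown.
import heapq

class GatedBatch:
    def __init__(self):
        self.gate = []            # requests waiting for the next batch
        self.batch = []           # current batch, kept as a heap

    def request(self, job_len: float, pid: int):
        self.gate.append((job_len, pid))   # arrivals wait at the gate

    def next_cs(self):
        if not self.batch:        # current batch done: open the gate
            self.batch, self.gate = self.gate, []
            heapq.heapify(self.batch)      # SJF order within the batch
        return heapq.heappop(self.batch) if self.batch else None

g = GatedBatch()
g.request(5.0, 1); g.request(2.0, 2)
assert g.next_cs() == (2.0, 2)    # shortest job first within the batch
```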

13.
The ability to cooperate on common tasks in a distributed setting is key to solving a broad range of computation problems, ranging from distributed search such as SETI to distributed simulation and multi-agent collaboration. Do-All, an abstraction of such cooperative activity, is the problem of performing N tasks in a distributed system of P failure-prone processors. Many distributed and parallel algorithms have been developed for this problem, and several algorithm simulations have been developed by iterating Do-All algorithms. The efficiency of solutions for Do-All is measured in terms of work complexity, where all processing steps taken by all processors are counted. Work is ideally expressed as a function of N, P, and f, the number of processor crashes. However, the known lower bounds and the upper bounds for extant algorithms do not adequately show how work depends on f. We present the first non-trivial lower bounds for Do-All that capture the dependence of work on N, P and f. For the model of computation where processors are able to make perfect load-balancing decisions locally, we also present matching upper bounds. We define the r-iterative Do-All problem that abstracts the repeated use of Do-All such as found in typical algorithm simulations. Our f-sensitive analysis enables us to derive tight bounds for r-iterative Do-All work (that are stronger than the r-fold work complexity of a single Do-All). Our approach of modeling perfect load-balancing allows the analysis of specific algorithms to be divided into two parts: (i) the analysis of the cost of tolerating failures while performing work under free load-balancing, and (ii) the analysis of the cost of implementing load-balancing. We demonstrate the utility and generality of this approach by improving the analysis of two known efficient algorithms. We give an improved analysis of an efficient message-passing algorithm. We also derive a tight and complete analysis of the best known Do-All algorithm for the synchronous shared-memory model. Finally we present a new upper bound on simulations of synchronous shared-memory algorithms on crash-prone processors. Received: 15 May 2002, Accepted: 15 June 2003, Published online: 6 February 2004. This work is supported in part by the NSF TOC Grants 9988304 and 0311368, and the NSF ITR Grant 0121277. The work of the second author is supported in part by the NSF CAREER Award 0093065. The work of the third author is supported in part by the NSF CAREER Award 9984778.
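Under the perfect (free) load-balancing model mentioned above, each round the surviving processors split the remaining tasks evenly, and work counts every processor-step. A toy simulation of that accounting (the fixed crash schedule is an arbitrary assumption, not the paper's adversary model):

```python
# Toy simulation, under stated assumptions, of the free (perfect)
# load-balancing model: every round each surviving processor performs
# one task and work counts every processor-step. The crash schedule
# (a fixed number of crashes per round) is an arbitrary assumption.
def do_all_work(n_tasks: int, p: int, crashes_per_round: int) -> int:
    alive, remaining, work = p, n_tasks, 0
    while remaining > 0 and alive > 0:
        work += alive                        # one step per live processor
        remaining -= min(remaining, alive)   # perfectly balanced progress
        alive -= min(crashes_per_round, alive - 1)  # keep one processor alive
    return work

print(do_all_work(n_tasks=1000, p=100, crashes_per_round=5))
```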

14.
In traditional distributed power control (DPC) algorithms, every user in the system is treated in the same way, i.e., the same power control algorithm is applied to every user. In this paper, we divide the users into different groups depending on their channel conditions and apply different DPC algorithms accordingly. Our motivation comes from the fact that each DPC algorithm has its own advantages and drawbacks, and our aim in this paper is to “combine” the advantages of different DPC algorithms; we use soft computing techniques for that. In the simulation results, we choose the Foschini and Miljanic algorithm [3], which has relatively fast convergence but is not robust against time-varying link gain changes and CIR estimation errors, and the fixed-step algorithm of Kim [3], which is robust but converges slowly. By “combining” these two algorithms using soft computing techniques, the resulting algorithm has fast convergence and is robust. Acknowledgments: This work was supported in part by GETA (Finnish Academy Graduate School on Electronics, Telecommunications and Automation), Finland.
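For concreteness, the Foschini–Miljanic update that the paper takes as its fast-converging component scales each user's transmit power by the ratio of target to measured CIR. A small sketch with invented gains and target (not the paper's simulation setup):

```python
# Sketch of the Foschini-Miljanic iteration p_i <- p_i * (target / CIR_i).
# The gain matrix, noise, and TARGET_CIR are illustrative numbers only.
TARGET_CIR = 5.0

def cir(i, powers, gain, noise=1e-3):
    # carrier-to-interference ratio seen by user i
    interference = sum(gain[i][j] * powers[j]
                       for j in range(len(powers)) if j != i) + noise
    return gain[i][i] * powers[i] / interference

def fm_step(powers, gain):
    measured = [cir(i, powers, gain) for i in range(len(powers))]
    return [p * TARGET_CIR / g for p, g in zip(powers, measured)]

gain = [[1.0, 0.1],
        [0.1, 1.0]]
powers = [1.0, 1.0]
for _ in range(30):              # converges fast when the target is feasible
    powers = fm_step(powers, gain)
print([round(cir(i, powers, gain), 3) for i in range(2)])  # both near 5.0
```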

15.
Probabilistic clock synchronization (Total citations: 18)
A probabilistic method is proposed for reading remote clocks in distributed systems subject to unbounded random communication delays. The method can achieve clock synchronization precisions superior to those attainable by previously published clock synchronization algorithms. Its use is illustrated by presenting a time service which maintains externally (and hence, internally) synchronized clocks in the presence of process, communication and clock failures. Flaviu Cristian is a computer scientist at the IBM Almaden Research Center in San Jose, California. He received his PhD from the University of Grenoble, France, in 1979. After carrying out research in operating systems and programming methodology in France, and working on the specification, design, and verification of fault-tolerant programs in England, he joined IBM in 1982. Since then he has worked in the area of fault-tolerant distributed protocols and systems. He has participated in the design and implementation of a highly available system prototype at the Almaden Research Center and has reviewed and consulted for several fault-tolerant distributed system designs, both in Europe and in the American divisions of IBM. He is now a technical leader in the design of a new U.S. Air Traffic Control System which must satisfy very stringent availability requirements.
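The probabilistic reading idea can be sketched as: estimate the remote clock as the returned timestamp plus half the round-trip time, and accept an attempt only when the round trip is short enough to bound the error; otherwise retry. A hedged illustration (the constants and query function are stand-ins, not the paper's published parameters):

```python
# Hedged sketch of probabilistic remote clock reading in the style the
# abstract describes. MAX_ERROR, MIN_DELAY, and query_server are
# illustrative stand-ins; delays are unbounded, so a read may fail.
import time

MAX_ERROR = 0.005    # required reading precision in seconds (assumed)
MIN_DELAY = 0.001    # known minimum one-way message delay (assumed)

def read_remote_clock(query_server, attempts: int = 10):
    for _ in range(attempts):
        t0 = time.monotonic()
        server_time = query_server()          # remote timestamp
        rtt = time.monotonic() - t0
        # the midpoint estimate errs by at most rtt/2 - MIN_DELAY
        if rtt / 2 - MIN_DELAY <= MAX_ERROR:
            return server_time + rtt / 2      # precise enough: accept
    return None     # no attempt met the bound; caller retries later
```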

16.
We show how synchronized clocks can be realized in a distributed system as a byproduct of a common communication paradigm where processors periodically perform broadcasts. Our approach decouples the precision concern of clock synchronization—limiting how much correct clocks can differ from each other—from the accuracy concern—limiting the rate at which any correct clock may drift from real time. Given a system that guarantees only precision, we develop a protocol whereby high accuracy can be achieved on demand. In this manner, the lazy protocol we obtain incurs the cost of high accuracy only when needed while keeping the basic synchronization procedure extremely simple and cheap. Rogério Drummond is Associate Professor of Computer Science at the Universidade de Campinas (Unicamp), Brazil. He received his Ph.D. in computer science from Cornell University in 1986. He has previously worked on distributed fault-tolerant computing, as in the present paper. Currently, he heads the A_HAND project, which aims to provide an object-oriented distributed programming environment for the development of very large software systems. Özalp Babaoğlu is Professor of Computer Science at the University of Bologna, Italy. His research interests include distributed algorithms, fault tolerance and parallel computing. He received a BS in electrical engineering from George Washington University, Washington, D.C. in 1976. From the University of California, Berkeley, he received an MS in 1977 and a Ph.D. in 1981, both in computer science. While at Berkeley, he designed and implemented the virtual memory extensions to BSD Unix. From 1981 to 1987 he was on the faculty of the Department of Computer Science, Cornell University. Partial support for this work was provided by the National Science Foundation under Grant DCR-86-01864, AT&T under a Foundation Grant, the Commission of the European Communities under the ESPRIT Programme Basic Research Action Number 3092 (Predictably Dependable Computing Systems), and the Italian Ministry of University and Research. Drummond was partially supported through a Fellowship from the CAPES Agency of the Government of Brazil.

17.
Methodological design of distributed programs is necessary if one is to master the complexity of parallelism. The class of control programs, whose purpose is to observe or detect properties of an underlying program, plays an important role in distributed computing. The detection of a property generally rests upon consistent evaluations of a predicate; such a predicate can be global, i.e. involve states of several processes and channels of the observed program. Unfortunately, in a distributed system, the consistency of an evaluation cannot be trivially obtained. This is a central problem in distributed evaluations. This paper addresses the problem of distributed evaluation, used as a basic tool for the solution of general distributed detection problems. A new evaluation paradigm is put forward, and a general distributed detection program is designed, introducing the iterative scheme of a guarded waves sequence. The case of distributed termination detection is then taken to illustrate the proposed methodological design. Jean-Michel Hélary is currently professor of Computer Science at the University of Rennes, France. He received a first Ph.D. degree in Numerical Analysis in 1968, then another Ph.D. degree in Computer Science in 1988. His research interests include distributed algorithms and protocols, especially their methodological aspects. He is a member of an INRIA research group working at IRISA (Rennes) on distributed algorithms and applications. Professor Hélary has published several papers on these subjects, and is co-author of a book with Michel Raynal. He has served as a PC member of international conferences. Michel Raynal is currently professor of Computer Science at the University of Rennes, France. He received the Ph.D. degree in 1981. His research interests include distributed algorithms, operating systems, protocols and parallelism. He is the head of an INRIA research group working at IRISA (Rennes) on distributed algorithms and applications. Professor Raynal has organized several international conferences and has served as a PC member of many international workshops, conferences and symposia. Over the past 9 years, he has written 7 books that constitute an introduction to distributed algorithms and distributed systems (among them: Algorithms for Mutual Exclusion, The MIT Press, 1986, and Synchronization and Control of Distributed Programs, Wiley, 1990, co-authored with J.M. Hélary). He is currently involved in two European Esprit projects devoted to large-scale distributed systems. This work was supported by the French Research Program C3 on Parallelism and Distributed Computing. An extended abstract was presented at ISTCS '92 [12].

18.
An improved version of Afek and Gafni's synchronous algorithm for distributed election in complete networks is given, and an O(n) expected message complexity is shown. M.Y. Chan received her Ph.D. in 1988 from the University of Hong Kong, and her M.S. and B.A. degrees in computer science from the University of California, San Diego in 1980 and 1981, respectively. She is currently an Assistant Professor at the University of Texas at Dallas. Francis Y.L. Chin (S'71-M'76-SM'85) received the B.Sc. degree in engineering science from the University of Toronto, Toronto, Canada, in 1972, and the M.S., M.A., and Ph.D. degrees in electrical engineering and computer science from Princeton University, New Jersey, in 1974, 1975, and 1976, respectively. Since 1975, he has taught at the University of Maryland, Baltimore County, the University of California, San Diego, the University of Alberta, and the Chinese University of Hong Kong. He is currently Head of the Department of Computer Science, University of Hong Kong. He has served as a program co-chairman of the 1988 International Conference on Computer Processing of Chinese and Oriental Languages (Toronto) and the International Computer Science Conference '88 (Hong Kong). His current research interests include algorithm design and analysis, and parallel and distributed computing.

19.
This paper introduces a model for parallel computation, called the distributed random-access machine (DRAM), in which the communication requirements of parallel algorithms can be evaluated. A DRAM is an abstraction of a parallel computer in which memory accesses are implemented by routing messages through a communication network. A DRAM explicitly models the congestion of messages across cuts of the network. We introduce the notion of a conservative algorithm as one whose communication requirements at each step can be bounded by the congestion of pointers of the input data structure across cuts of a DRAM. We give a simple lemma that shows how to shortcut pointers in a data structure so that remote processors can communicate without causing undue congestion. We give O(lg n)-step, linear-processor, linear-space, conservative algorithms for a variety of problems on n-node trees, such as computing treewalk numberings, finding the separator of a tree, and evaluating all subexpressions in an expression tree. We give O(lg² n)-step, linear-processor, linear-space, conservative algorithms for problems on graphs of size n, including finding a minimum-cost spanning forest, computing biconnected components, and constructing an Eulerian cycle. Most of these algorithms use as a subroutine a generalization of the prefix computation to trees. We show that any such treefix computation can be performed in O(lg n) steps using a conservative variant of Miller and Reif's tree-contraction technique. This research was supported in part by the Defense Advanced Research Projects Agency under Contract N00014-80-C-0622 and by the Office of Naval Research under Contract N00014-86-K-0593. Charles Leiserson is supported in part by an NSF Presidential Young Investigator Award with matching funds provided by AT&T Bell Laboratories and Xerox Corporation. Bruce Maggs is supported in part by an NSF Fellowship.
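The pointer-shortcutting lemma has the flavor of classic pointer jumping, whose textbook list-ranking form doubles the distance each pointer spans per round and finishes in O(lg n) rounds. A sequential simulation of that doubling pattern (it shows only the shortcutting idea, not the paper's DRAM congestion accounting):

```python
# Pointer shortcutting in its textbook list-ranking (Wyllie) form.
# This sequential simulation shows the O(lg n) doubling pattern only;
# the paper's lemma concerns doing this without congesting DRAM cuts.
def list_rank(next_ptr):
    n = len(next_ptr)
    # initial rank: 1 hop to go unless already at the tail
    rank = [0 if next_ptr[i] is None else 1 for i in range(n)]
    nxt = list(next_ptr)
    changed = True
    while changed:                     # O(lg n) synchronous rounds
        changed = False
        new_rank, new_nxt = list(rank), list(nxt)
        for i in range(n):             # all "processors" act in parallel
            if nxt[i] is not None:
                new_rank[i] = rank[i] + rank[nxt[i]]
                new_nxt[i] = nxt[nxt[i]]   # shortcut the pointer
                changed = True
        rank, nxt = new_rank, new_nxt
    return rank

# 5-element list 0 -> 1 -> 2 -> 3 -> 4 (node 4 is the tail)
print(list_rank([1, 2, 3, 4, None]))   # [4, 3, 2, 1, 0]
```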

20.