20 similar documents found (search time: 15 ms)
1.
Event fairness and non-interleaving concurrency
Marta Z. Kwiatkowska 《Formal Aspects of Computing》1989,1(1):213-228
Event fairness suitable for non-interleaving concurrency is proposed. Fairness is viewed with respect to concurrency, rather than non-determinism, in the sense that no concurrent component of the system should be delayed indefinitely. Shields' asynchronous transition systems and Mazurkiewicz's traces have been used; the model gives rise to a partial order. A class of generalised notions of (weak, strong and unconditional) event fairness relative to progress requirements is derived. The weakest fairness notion in this class is shown to coincide with maximality with respect to the partial order over traces.
2.
This paper describes the design and implementation of a kernel for the distributed programming language StarMod. The distributed programming kernel was written in a subset of StarMod supported by a concurrent programming kernel. Kernel issues addressed include process representation, I/O device management, signal semantics, system utilities, network communication and the implementation of high-level language communication primitives. We conclude with a summary of our experiences in the development of a ‘bare machine’ kernel for a network of microprocessors.
3.
The field of distributed computing started around 1970 when people began to imagine a future world of multiple interconnected computers operating collectively. The theoretical challenge was to define what a computational problem would be in such a setting and to explore what could and could not be accomplished in a realistic setting in which the different computers fell under different administrative structures, operated at different speeds under the control of uncoordinated clocks, and sometimes failed in unpredictable ways. Meanwhile, the practical problem was to turn the vision into reality by building networks and networking equipment, communication protocols, and useful distributed applications. The theory of distributed computing became recognized as a distinct discipline with the holding of the first ACM Principles of Distributed Computing conference in 1982. This paper reviews some of the accomplishments of the theoretical community during the past two decades, notes an apparent disconnect between theoretical and practical concerns, and speculates on future synergy between the two.
4.
《Journal of Parallel and Distributed Computing》2014,74(12):3228-3239
Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms, making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks the dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.
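The two runtime optimizations mentioned above can be pictured with a small, hypothetical sketch (illustrative names only, not libWater's actual API): commands record the events they wait on and the event they signal, the DAG follows those events, and a device-to-host copy whose only consumer is a host-to-device copy of the same buffer is dropped.

```python
# Hypothetical sketch of the two optimizations described above: building a
# dependency DAG from event-synchronized commands and dropping redundant
# device-host-device copies. Names are illustrative, not libWater's API.
from dataclasses import dataclass, field

@dataclass
class Command:
    name: str                 # e.g. "kernelA", "copy_out"
    kind: str                 # "kernel", "d2h" (device-to-host) or "h2d"
    buffer: str = ""          # buffer moved by a copy command
    waits_on: list = field(default_factory=list)   # events this command waits for
    signals: str = ""         # event signalled on completion

def build_dag(commands):
    """Adjacency list whose edges follow event dependencies."""
    producer = {c.signals: c.name for c in commands if c.signals}
    edges = {c.name: [] for c in commands}
    for c in commands:
        for ev in c.waits_on:
            if ev in producer:
                edges[producer[ev]].append(c.name)
    return edges

def remove_redundant_copies(commands):
    """Drop a device-to-host copy whose only consumer is a host-to-device copy
    of the same buffer (the data never needed to visit the host); a full
    implementation would also rewire the dependency to the copies' producer."""
    edges = build_dag(commands)
    by_name = {c.name: c for c in commands}
    redundant = set()
    for c in commands:
        if c.kind == "d2h":
            succ = edges[c.name]
            if (len(succ) == 1 and by_name[succ[0]].kind == "h2d"
                    and by_name[succ[0]].buffer == c.buffer):
                redundant.update({c.name, succ[0]})
    return [c for c in commands if c.name not in redundant]

cmds = [
    Command("kernelA", "kernel", signals="evA"),
    Command("copy_out", "d2h", buffer="buf0", waits_on=["evA"], signals="evB"),
    Command("copy_in", "h2d", buffer="buf0", waits_on=["evB"], signals="evC"),
    Command("kernelB", "kernel", waits_on=["evC"]),
]
print([c.name for c in remove_redundant_copies(cmds)])   # ['kernelA', 'kernelB']
```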
5.
Much progress has been made in distributed computing in the areas of distribution structure, open computing, fault tolerance, and security. Yet, writing distributed applications remains difficult because the programmer has to manage models of these areas explicitly. A major challenge is to integrate the four models into a coherent development platform. Such a platform should make it possible to cleanly separate an application’s functionality from the other four concerns. Concurrent constraint programming, an evolution of concurrent logic programming, has both the expressiveness and the formal foundation needed to attempt this integration. As a first step, we have designed and built a platform that separates an application’s functionality from its distribution structure. We have prototyped several collaborative tools with this platform, including a shared graphic editor whose design is presented in detail. The platform efficiently implements Distributed Oz, which extends the Oz language with constructs to express the distribution structure and with basic primitives for open computing, failure detection and handling, and resource control. Oz appears to the programmer as a concurrent object-oriented language with dataflow synchronization. Oz is based on a higher-order, state-aware, concurrent constraint computation model.
Seif Haridi, Ph.D.: He received his Ph.D. in computer science in 1981 from the Royal Institute of Technology, Sweden. After spending 18 months at IBM T. J. Watson Research Center, he moved to the Swedish Institute of Computer Science (SICS) to form a research lab on logic programming and parallel systems. Dr. Haridi is currently the research director of the Swedish Institute of Computer Science. He has been an active researcher in the area of logic and constraint programming and parallel processing since the beginning of the eighties. His earlier work includes contributions to the design of SICStus Prolog, various parallel Prolog systems and a class of scalable cache-coherent multiprocessors known as Cache-Only Memory Architecture (COMA). During the nineties most of his work focused on the design of multiparadigm programming systems based on Concurrent Constraint Programming (CCP). Currently, he is interested in programming systems and software methodology for distributed and agent-based applications.
Peter Van Roy, Ph.D.: He obtained an engineering degree from the Vrije Universiteit Brussel (1983), Masters and Ph.D. degrees from the University of California at Berkeley (1984, 1990), and the Habilitation à Diriger des Recherches from Paris VII Denis Diderot (1996). He has made major contributions to logic language implementation. His research showed for the first time that Prolog can be implemented with the same execution efficiency as C. He was principal developer or codeveloper of Aquarius Prolog, Wild_Life, Logical State Threads, and FractaSketch. He joined the Oz project in 1994 and is currently working on Distributed Oz. His research interests are motivated by the desire to provide increased expressivity and efficiency to application developers.
Per Brand: He is a researcher at the Swedish Institute of Computer Science. He has previously worked on the design and implementation of OR-parallel Prolog (the Aurora project) and optimized compilation techniques for Concurrent Constraint Programming languages (in particular, AKL). He has been a member of the Distributed Oz design team since the project began. His research interests are focused on techniques, languages, and methodology for distributed programming.
Christian Schulte: He studied computer science at the University of Karlsruhe, Germany, from 1987 to 1992, where he received his diploma. Since 1992 he has been a member of the Programming Systems Lab at DFKI. He is one of the principal designers of Oz. His research interests include design, implementation, and application of concurrent and distributed programming languages as well as constraint programming.
6.
In this paper, we propose an intelligent distributed query processing method considering the characteristics of a distributed ontology environment. We suggest more general models of the distributed ontology query and of the semantic mapping among distributed ontologies than previous works. Our approach rewrites a distributed ontology query into multiple distributed ontology queries using the semantic mapping, and the integrated answer is obtained through the execution of these queries. Furthermore, we propose a distributed ontology query processing algorithm with several query optimization techniques: pruning rules to remove unnecessary queries, a cost model considering site load balancing and caching, and a heuristic strategy for scheduling plans to be executed at a local site. Finally, experimental results show that our optimization techniques are effective in reducing the response time.
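As a rough illustration of the rewriting and pruning steps (my own sketch under assumed mapping tables, not the authors' algorithm), a query over a source ontology can be translated per site through a semantic mapping, and sites lacking a mapping for some queried concept can be pruned:

```python
# Illustrative sketch (not the authors' algorithm): rewrite a query over a
# source ontology into per-site queries using a semantic mapping, pruning
# sites that cannot contribute an answer.
semantic_mapping = {
    # source concept -> {site: equivalent local concept}
    "Professor": {"siteA": "FacultyMember", "siteB": "Academic"},
    "teaches":   {"siteA": "lectures",      "siteB": "gives_course"},
}

def rewrite(query, sites):
    """query: list of (concept, variable) atoms over the source ontology."""
    plans = {}
    for site in sites:
        atoms = []
        for concept, var in query:
            local = semantic_mapping.get(concept, {}).get(site)
            if local is None:          # pruning rule: no mapping, so skip this site
                atoms = None
                break
            atoms.append((local, var))
        if atoms:
            plans[site] = atoms
    return plans

q = [("Professor", "?x"), ("teaches", "?x")]
print(rewrite(q, ["siteA", "siteB", "siteC"]))
# {'siteA': [('FacultyMember', '?x'), ('lectures', '?x')],
#  'siteB': [('Academic', '?x'), ('gives_course', '?x')]}
```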
7.
Robert P. Cook 《Computer Languages, Systems and Structures》1981,6(3-4):131-138
The StarMod language is designed to provide its users with abstractions for distributed computations. The language is based on Wirth's definition of a “module” as implemented in Modula. The paper discusses abstraction mechanisms for distributed access control and scheduling; in addition, several examples are used to illustrate these concepts.
8.
We present a formal proof method for distributed programs. The semantics used to justify the proof method explicitly identifies equivalence classes of execution sequences which are equivalent up to permuting commutative operations. Each equivalence class is called an interleaving set or a run. The proof rules allow concluding the correctness of certain classes of properties for all execution sequences, even though such properties are demonstrated directly only for a subset of the sequences. The subset used must include a representative sequence from each interleaving set, and the proof rules, when applicable, guarantee that this is the case. By choosing a subset with appropriate sequences, simpler intermediate assertions can be used than in previous formal approaches. The method employs proof lattices, and is expressed using the temporal logic ISTL.
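A minimal sketch of the underlying idea, in my own illustrative terms rather than the paper's ISTL formalism: two execution sequences belong to the same interleaving set if one can be obtained from the other by repeatedly swapping adjacent independent (commutative) operations, so checking one representative per class suffices for properties insensitive to such swaps.

```python
# Illustration of interleaving sets: group the interleavings of two processes
# into equivalence classes under swaps of adjacent independent operations.
# (My own toy example; operations are tagged "process:variable".)
from itertools import permutations

def independent(a, b):
    pa, va = a.split(":")
    pb, vb = b.split(":")
    return pa != pb and va != vb        # different process AND different variable

def interleaving_set(seq):
    """All sequences reachable from seq by swapping adjacent independent ops."""
    seen, stack = {seq}, [seq]
    while stack:
        s = stack.pop()
        for i in range(len(s) - 1):
            if independent(s[i], s[i + 1]):
                t = s[:i] + (s[i + 1], s[i]) + s[i + 2:]
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
    return seen

ops = ("p1:x", "p1:y", "p2:y")          # p1 writes x then y; p2 writes y
interleavings = {p for p in permutations(ops)
                 if p.index("p1:x") < p.index("p1:y")}   # respect program order
classes = []
for s in sorted(interleavings):
    if not any(s in c for c in classes):
        classes.append(interleaving_set(s))
print(f"{len(interleavings)} interleavings, {len(classes)} interleaving sets")
# A property shown for one representative per class holds for the whole class.
```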
Shmuel Katz received his B.A. in Mathematics and English Literature from U.C.L.A., and his M.Sc. and Ph.D. in Computer Science (1976) from the Weizmann Institute in Rechovot, Israel. From 1976 to 1981 he was at the IBM Israel Scientific Center. Presently, he is on the faculty of the Computer Science Department at the Technion in Haifa, Israel. In 1977–1978 he visited for a year at the University of California, Berkeley, and in 1984–1985 was at the University of Texas at Austin. He has been a consultant and visitor at the MCC Software Technology Program, and in 1988–1989 was a visiting scientist at the I.B.M. Watson Research Center. His research interests include the methodology of programming, specification methods, program verification and semantics, distributed programming, data structures, and programming languages.
Doron Peled was born in 1962 in Haifa. He received his B.Sc. and M.Sc. in Computer Science from the Technion, Israel, in 1984 and 1987, respectively. Between 1987 and 1991 he did his military service; he also completed his D.Sc. degree at the Technion during these years. Dr. Peled was with the Computer Science department at Warwick University in 1991–1992. He is currently a member of the technical staff at AT&T Bell Laboratories. His main research interests are specification and verification of programs, especially as related to partial order models, fault-tolerance and real-time. He is also interested in semantics and topology. This research was carried out while the second author was at the Department of Computer Science, The Technion, Haifa 32000, Israel.
9.
This paper discusses issues of design for software systems for computer-controlled manipulators. A short review of the features which have become important in present software systems for industrial applications is presented, including how various desirable system capabilities can be introduced at reasonable computational cost. The paper is based mainly on the experiences obtained in designing and implementing MAL, a software system for controlling and programming an experimental robot, and VML, a machine-independent intermediate language to be used as a target for compilers of high-level programming languages for robots. The paper shows how management of multiprocess capabilities, synchronization of different devices, error handling, and other desirable features can be provided in a simple system, implemented on micro- and minicomputers, and made suitable for industrial applications.
10.
Jin-Long Wang 《Computers & Electrical Engineering》2004,30(3):183-205
In a typical distributed computing system (DCS), nodes consist of processing elements, memory units, shared resources, data files, and programs. For a distributed application, programs and data files are distributed among many processing elements that may exchange data and control information via communication links. The reliability of a DCS can be expressed by the analysis of distributed program reliability (DPR) and distributed system reliability (DSR). In this paper, two reliability measures, Markov-chain distributed program reliability (MDPR) and Markov-chain distributed system reliability (MDSR), are introduced to accurately model the reliability of a DCS. A discrete-time Markov chain with one absorbing state is constructed for this problem. The transition probability matrix represents the probability of moving from one state to another in a unit of time. In addition to the mathematical method for evaluating MDPR and MDSR, simulation results are also presented to confirm its correctness.
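The absorbing-chain computation behind such measures can be sketched generically (an illustrative model with made-up numbers, not the paper's DCS model): raising the transition matrix to the mission length gives the probability of not yet being absorbed, and the fundamental matrix N = (I - Q)^-1 gives the mean time to absorption.

```python
# Generic absorbing discrete-time Markov chain sketch (illustrative numbers,
# not the paper's concrete DCS model). States 0-2 are operational
# configurations; state 3 is the absorbing "system failed" state.
import numpy as np

P = np.array([
    [0.90, 0.05, 0.03, 0.02],   # one-time-unit transition probabilities
    [0.00, 0.85, 0.10, 0.05],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],   # absorbing state
])

def reliability(P, start, t):
    """Probability of not having been absorbed after t time units."""
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    dist = dist @ np.linalg.matrix_power(P, t)
    return 1.0 - dist[-1]

Q = P[:3, :3]                              # transitions among transient states
N = np.linalg.inv(np.eye(3) - Q)           # fundamental matrix
print("reliability over 10 steps:", round(reliability(P, 0, 10), 4))
print("mean time to failure:", round(N.sum(axis=1)[0], 2), "time units")
```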
11.
《International Journal of Computer Mathematics》2012,89(1-4):315-345
An operational model is described which allows the complete formal definition of the full syntax and, particularly, the semantics of programming languages. Both its syntactic and semantic parts are based on so-called linked-forest manipulation systems, which allow the definition of mappings on forests. The idea of “linking” is crucial for the given model: we represent not only abstract programs but also intermediate states of our system (abstract computer) by labelled forests with pointers.
12.
13.
This paper is concerned with the design, implementation, and evaluation of algorithms for communication partner identification in mobile agent-based distributed job workflow execution. We first describe a framework for distributed job workflow execution over the Grid: the Mobile Code Collaboration Framework (MCCF). Based on a study of agent communications during job workflow execution on MCCF, we identify the unnecessary agent communications that degrade system performance. We then design a novel subjob grouping algorithm for preprocessing the job workflow's static specification in MCCF. The obtained information is used in both static and dynamic algorithms to identify partners for agent communication. Mobile agent dynamic location and communication based on this approach is expected to reduce the agent communication overhead by removing unnecessary communication partners during dynamic job workflow execution. Proofs of the dynamic algorithm's correctness and effectiveness are elaborated. Finally, the algorithms are evaluated through a comparison study using simulated job workflows executed on a prototype implementation of the MCCF in a LAN environment and an emulated WAN setup. The results show the scalability and efficiency of the algorithms as well as the advantages of the dynamic algorithm over the static one.
14.
Practical uses of synchronized clocks in distributed systems
Barbara Liskov 《Distributed Computing》1993,6(4):211-219
Synchronized clocks are interesting because they can be used to improve the performance of a distributed system by reducing communication. Since they have only recently become a reality in distributed systems, their use in distributed algorithms has received relatively little attention. This paper discusses a number of distributed algorithms that make use of synchronized clocks and analyzes how clocks are used in these algorithms.
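A representative pattern from this line of work, sketched here as a generic illustration rather than an algorithm taken verbatim from the paper, is a lease: while the holder's clock, padded by the maximum clock skew, says the lease is still valid, cached data can be served without any message to the server.

```python
# Generic lease sketch (my illustration of the clocks-reduce-communication idea,
# not an algorithm quoted from the paper). The holder answers from its cache
# only while its clock, padded by the maximum skew EPSILON, is inside the lease
# granted by the server.
import time

EPSILON = 0.050          # assumed bound on clock skew between holder and server (s)
LEASE_DURATION = 5.0     # lease length granted by the server (s)

def fetch_from_server():
    """One round trip: returns the value and a lease expiry time."""
    return "fresh value", time.time() + LEASE_DURATION

class CachedEntry:
    def __init__(self):
        self.value, self.lease_expiry = fetch_from_server()

    def read(self):
        # Safe to answer locally only if even a clock running EPSILON fast
        # is still before the lease expiry.
        if time.time() + EPSILON < self.lease_expiry:
            return self.value                             # no message sent
        self.value, self.lease_expiry = fetch_from_server()   # revalidate
        return self.value

entry = CachedEntry()
print(entry.read())      # served from the cache, no communication needed
```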
Barbara Liskov received her B.A. in mathematics from the University of California at Berkeley and her M.S. and Ph.D. in computer science from Stanford University. She is currently a member of the faculty at the Massachusetts Institute of Technology, where she is NEC Professor of Software Science and Engineering. Her research and teaching interests include programming languages, programming methodology, distributed computing, and parallel computing. Her work on data abstraction led to the development of the CLU programming language and to a programming methodology based on data abstraction and specifications. This work is described in her book Abstraction and Specification in Program Development. Her subsequent research in distributed computing resulted in the Argus programming language, which supports robust distributed programs that survive hardware failures, and the Mercury communications mechanism, which supports efficient communication in a heterogeneous distributed system. At present Prof. Liskov is continuing her work in distributed computing, including development of replication algorithms for implementing highly-available systems. She is working on Harp, a replicated Unix file system for use via NFS, and on the design and implementation of Thor, a highly available object repository for use in a heterogeneous distributed environment. She is a member of ACM, IEEE, the National Academy of Engineering, and is a fellow of the American Academy of Arts and Sciences. This research was supported in part by the Advanced Research Projects Agency of the Department of Defense, monitored by the Office of Naval Research under contract N00014-89-J-1988, and in part by the National Science Foundation under grant CCR-8822158.
15.
Easy proofs are given of the impossibility of solving several consensus problems (Byzantine agreement, weak agreement, Byzantine firing squad, approximate agreement and clock synchronization) in certain communication graphs. It is shown that, in the presence of m faults, no solution to these problems exists for communication graphs with fewer than 3m+1 nodes or connectivity less than 2m+1. While some of these results had previously been proved, the new proofs are much simpler, provide considerably more insight, apply to more general models of computation, and (particularly in the case of clock synchronization) significantly strengthen the results. Michael J. Fischer is currently Professor of Computer Science at Yale University, New Haven, CT, where he heads the Theory of Computation Group. He is also Editor-in-Chief of the Journal of the Association for Computing Machinery. His research interests include the theory of distributed systems, cryptographic protocols, and computational complexity. Dr. Fischer received the B.S. degree in mathematics from the University of Michigan, Ann Arbor, in 1963, and the M.A. and Ph.D. degrees in applied mathematics from Harvard University, Cambridge, MA, in 1965 and 1968, respectively. He has previously taught at Carnegie-Mellon University, the Massachusetts Institute of Technology, and the University of Washington. Nancy Lynch is currently Associate Professor of Computer Science at M.I.T., and heads the Theory of Distributed Systems group in M.I.T.'s Laboratory for Computer Science. Her interests are in all aspects of distributed computing theory, including formal models, algorithms, analysis, and correctness proofs. Dr. Lynch received the B.S. degree in mathematics from Brooklyn College in 1968 and the Ph.D. degree in mathematics from M.I.T. in 1972. She has served on the faculty of Tufts University, the University of Southern California, Florida International University, and Georgia Tech. Michael Merritt is currently a member of the technical staff at AT&T Bell Laboratories. During the 1984–85 academic year, he was a visiting lecturer at M.I.T., sponsored by Bell Labs. His research interests include distributed computation, cryptography, and security. Dr. Merritt received the B.S. degree in computer science and philosophy from Yale in 1978 and the M.Sc. and Ph.D. degrees in 1980 and 1983, respectively, both in information and computer science from Georgia Tech. He is a member of SIGACT and of Computer Professionals for Social Responsibility. This paper appeared in the ACM Conference Proceedings of PODC 1985. © 1985, Association for Computing Machinery, reprinted by permission.
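The two thresholds stated in the abstract translate directly into a simple feasibility check (a trivial sketch with names of my own choosing):

```python
# Feasibility check for the bounds stated above: with m Byzantine faults,
# these agreement problems need at least 3m + 1 nodes and node connectivity
# at least 2m + 1 (illustrative helper, not from the paper).
def agreement_possible(num_nodes, connectivity, m):
    return num_nodes >= 3 * m + 1 and connectivity >= 2 * m + 1

print(agreement_possible(num_nodes=4, connectivity=3, m=1))   # True
print(agreement_possible(num_nodes=6, connectivity=3, m=2))   # False: needs >= 7 nodes
```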
16.
The number of mobile agents and the total execution time are two factors representing the system overhead that must be considered in mobile agent planning (MAP) for distributed information retrieval. In addition to these two factors, the time constraints at the nodes of an information repository must also be taken into account in order to improve the quality of information retrieval. In previous studies, MAP approaches could not consider dynamic network conditions, e.g., variable network bandwidth and disconnection, such as those found in peer-to-peer (P2P) computing. For better performance, mobile agents that are more sensitive to network conditions must be used. In this paper, we propose a new MAP approach that we have named Timed Mobile Agent Planning (Tmap). The proposed approach minimizes the number of mobile agents and the total execution time while keeping the turnaround time to a minimum, even if some nodes have a time constraint. It also takes dynamic network conditions into account in order to reflect the actual network state more accurately. Moreover, we incorporate a security and fault-tolerance mechanism into the planning approach to better adapt it to real network environments.
17.
Michael Elkin 《Journal of Computer and System Sciences》2006,72(8):1282-1308
This paper studies the problem of constructing a minimum-weight spanning tree (MST) in a distributed network. This is one of the most important problems in the area of distributed computing. There is a long line of gradually improving protocols for this problem, and the state of the art today is a protocol with running time O(Λ(G) + √n·log*n) due to Kutten and Peleg [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40-66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20-27], where Λ(G) denotes the diameter of the graph G. Peleg and Rubinovich [D. Peleg, V. Rubinovich, A near-tight lower bound on the time complexity of distributed MST construction, in: Proc. 40th IEEE Symp. on Foundations of Computer Science, 1999, pp. 253-261] have shown that Ω̃(√n) time is required for constructing an MST even on graphs of small diameter, and claimed that their result “establishes the asymptotic near-optimality” of the protocol of Kutten and Peleg. In this paper we refine this claim, and devise a protocol that constructs the MST in Õ(μ(G,ω) + √n) rounds, where μ(G,ω) is the MST-radius of the graph. The ratio between the diameter and the MST-radius may be as large as Θ(n), and, consequently, on some inputs our protocol is faster than the protocol of Kutten and Peleg by a factor of Ω̃(√n). Also, on every input, the running time of our protocol is never greater than twice the running time of the Kutten-Peleg protocol. As part of our protocol for constructing an MST, we develop a protocol for constructing neighborhood covers with a drastically improved running time. The latter result may be of independent interest.
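The fragment-merging idea that distributed MST protocols coordinate round by round can be illustrated with a sequential Borůvka-style sketch (my own illustration, assuming distinct edge weights, not the protocol from the paper): each fragment selects its minimum-weight outgoing edge, and fragments merge along the selected edges.

```python
# Sequential Borůvka sketch of the fragment-merging idea that distributed MST
# protocols coordinate in rounds (illustrative only; assumes distinct weights).
def boruvka_mst(n, edges):
    """edges: list of (weight, u, v); returns a list of MST edges."""
    comp = list(range(n))                        # fragment id of each node

    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]
            x = comp[x]
        return x

    mst = []
    while len(mst) < n - 1:
        best = {}                                # fragment -> its lightest outgoing edge
        for w, u, v in edges:
            cu, cv = find(u), find(v)
            if cu != cv:
                for c in (cu, cv):
                    if c not in best or w < best[c][0]:
                        best[c] = (w, u, v)
        if not best:                             # graph not connected
            break
        for w, u, v in best.values():            # merge fragments along chosen edges
            cu, cv = find(u), find(v)
            if cu != cv:
                comp[cu] = cv
                mst.append((w, u, v))
    return mst

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
print(boruvka_mst(4, edges))                     # [(1, 0, 1), (2, 1, 2), (4, 2, 3)]
```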
18.
Modern computer systems are becoming increasingly distributed and heterogeneous, comprising multi-core CPUs, GPUs, and other accelerators. Current programming approaches for such systems usually require the application developer to use a combination of several programming models (e.g., MPI with OpenCL or CUDA) in order to exploit the system’s full performance potential. In this paper, we present dOpenCL (distributed OpenCL), a uniform approach to programming distributed heterogeneous systems with accelerators. dOpenCL allows the user to run unmodified existing OpenCL applications in a heterogeneous distributed environment. We describe the challenges of implementing the OpenCL programming model for distributed systems, as well as its extension for running multiple applications concurrently. Using several example applications, we compare the performance of dOpenCL with MPI + OpenCL and with standard OpenCL implementations.
19.
On agents and grids: Creating the fabric for a new generation of distributed intelligent systems
The semantic grid is the result of semantic web and grid researchers building bridges in recognition of the shared vision and research agenda of both fields. This paper builds on prior experiences with both agents and grids to illustrate the benefits of bringing agents into the mix. Because semantic grids represent and reason about knowledge declaratively, additional capabilities typical of agents become possible, including learning, planning, self-repair, memory organization, meta-reasoning, and task-level coordination. These capabilities would turn semantic grids into cognitive grids. Only a convergence of these technologies will provide the ingredients needed to create the fabric for a new generation of distributed intelligent systems.
20.
We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing. There is a strong emphasis in our presentation on explaining the wide variety of techniques that are used to obtain the results described.