Similar Documents
1.
The language FCP(:,?) is the outcome of attempts to integrate the best of several flat concurrent logic programming languages, including Flat GHC, FCP(↓,|) and Flat Concurrent Prolog, in a single consistent framework. FCP(:) is a subset of FCP(:,?), which is a variant of FCP(↓,|) and employs concepts of the concurrent constraint framework of cc(↓,|). FCP(:,?) is a language strong enough to accommodate all useful concurrent logic programming techniques, including those which rely on atomic test unification and read-only variables, yet it incorporates the weaker languages mentioned above as subsets. This allows the programmer to remain within a simple subset of the language, such as Flat GHC, when the full power of atomic unification or read-only variables is not needed.

2.
Much progress has been made in distributed computing in the areas of distribution structure, open computing, fault tolerance, and security. Yet, writing distributed applications remains difficult because the programmer has to manage models of these areas explicitly. A major challenge is to integrate the four models into a coherent development platform. Such a platform should make it possible to cleanly separate an application’s functionality from the other four concerns. Concurrent constraint programming, an evolution of concurrent logic programming, has both the expressiveness and the formal foundation needed to attempt this integration. As a first step, we have designed and built a platform that separates an application’s functionality from its distribution structure. We have prototyped several collaborative tools with this platform, including a shared graphic editor whose design is presented in detail. The platform efficiently implements Distributed Oz, which extends the Oz language with constructs to express the distribution structure and with basic primitives for open computing, failure detection and handling, and resource control. Oz appears to the programmer as a concurrent object-oriented language with dataflow synchronization. Oz is based on a higher-order, state-aware, concurrent constraint computation model. Seif Haridi, Ph.D.: He received his Ph.D. in computer science in 1981 from the Royal Institute of Technology, Sweden. After spending 18 months at IBM T. J. Watson Research Center, he moved to the Swedish Institute of Computer Science (SICS) to form a research lab on logic programming and parallel systems. Dr. Haridi is currently the research director of the Swedish Institute of Computer Science. He has been an active researcher in the area of logic and constraint programming and parallel processing since the beginning of the eighties. His earlier work includes contributions to the design of SICStus Prolog, various parallel Prolog systems and a class of scalable cache-coherent multiprocessors known as Cache-Only Memory Architecture (COMA). During the nineties most of his work focused on the design of multiparadigm programming systems based on Concurrent Constraint Programming (CCP). Currently, he is interested in programming systems and software methodology for distributed and agent-based applications. Peter Van Roy, Ph.D.: He obtained an engineering degree from the Vrije Universiteit Brussel (1983), Masters and Ph.D. degrees from the University of California at Berkeley (1984, 1990), and the Habilitation à Diriger des Recherches from Paris VII Denis Diderot (1996). He has made major contributions to logic language implementation. His research showed for the first time that Prolog can be implemented with the same execution efficiency as C. He was principal developer or codeveloper of Aquarius Prolog, Wild_Life, Logical State Threads, and FractaSketch. He joined the Oz project in 1994 and is currently working on Distributed Oz. His research interests are motivated by the desire to provide increased expressivity and efficiency to application developers. Per Brand: He is a researcher at the Swedish Institute of Computer Science. He has previously worked on the design and implementation of OR-parallel Prolog (the Aurora project) and optimized compilation techniques for Concurrent Constraint Programming Languages (in particular, AKL). He has been a member of the Distributed Oz design team since the project began. 
His research interests are focused on techniques, languages, and methodology for distributed programming. Christian Schulte: He studied computer science at the University of Karlsruhe, Germany, from 1987 to 1992 where he received his diploma. Since 1992 he has been a member of the Programming Systems Lab at DFKI. He is one of the principal designers of Oz. His research interests include design, implementation, and application of concurrent and distributed programming languages as well as constraint programming.
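The dataflow synchronization mentioned above means that a thread reading an unbound logic variable simply blocks until some other thread binds it. The following is a minimal Python sketch of that idea, not Oz itself; the class name DataflowVar and the example value are illustrative only.

```python
import threading

class DataflowVar:
    """Single-assignment (dataflow) variable: readers block until it is bound."""
    def __init__(self):
        self._bound = threading.Event()
        self._lock = threading.Lock()
        self._value = None

    def bind(self, value):
        with self._lock:
            if self._bound.is_set():
                raise ValueError("dataflow variable already bound")
            self._value = value
            self._bound.set()          # wake every blocked reader

    def read(self):
        self._bound.wait()             # dataflow synchronization point
        return self._value

x = DataflowVar()
consumer = threading.Thread(target=lambda: print("consumer saw", x.read()))
consumer.start()                       # blocks inside read() until x is bound
x.bind(42)
consumer.join()
```

In Oz the binding comes from unification on logic variables rather than an explicit bind call; the sketch only mimics the blocking behaviour visible to the programmer.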

3.
Current implementation techniques for functional languages differ considerably from those for logic languages. This complicates the development of flexible and efficient abstract machines that can be used for the compilation of declarative languages combining concepts of functional and logic programming. We propose an abstract machine, called the JUMP-machine, which systematically integrates the operational concepts needed to implement the functional and logic programming paradigms. The use of a tagless representation for heap objects, which originates from the Spineless Tagless G-machine, supports the integration of different concepts. In this paper, we provide a functional logic kernel language and show how to translate it into the abstract machine language of the JUMP-machine. Furthermore, we define the operational semantics of the machine language formally and discuss the mapping of the abstract machine to concrete machine architectures. We tested the approach by writing a compiler for the functional logic language GTML. The performance results obtained indicate that the proposed method allows functional logic languages to be implemented efficiently.
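The "tagless" idea borrowed from the Spineless Tagless G-machine is that the evaluator never inspects a tag field on a heap object; every object instead carries its own entry code, and the machine simply enters it. A rough Python analogy follows; it is illustrative only and does not reflect the JUMP-machine's actual object layout.

```python
# Tagged style: the machine dispatches on an explicit tag stored in the object.
def eval_tagged(obj):
    if obj["tag"] == "int":
        return obj["value"]
    if obj["tag"] == "thunk":               # unevaluated suspension
        return eval_tagged(obj["code"]())
    raise ValueError("unknown tag")

# "Tagless" style: every heap object carries its own entry code; evaluation
# is always "enter the object", never "inspect the tag and branch".
class IntNode:
    def __init__(self, n):
        self.n = n
    def enter(self):
        return self.n

class Thunk:
    def __init__(self, code):
        self.code = code                    # code builds the next heap object
    def enter(self):
        return self.code().enter()

print(eval_tagged({"tag": "thunk", "code": lambda: {"tag": "int", "value": 7}}))
print(Thunk(lambda: IntNode(7)).enter())    # same result, no tag dispatch
```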

4.
Abstract: In this paper we take up the plight of the programmer of a rule based language. Our focus is on the type of development environment that is most supportive of such programmers. Our view is that programming is programming, whether it be with a rule based, functional or imperative language. While it is true that rule based languages have strong links to the Expert Systems field, our discussion in this paper has less to do with 'expert systems' per se, and more to do with the view of rule based languages as yet another computational paradigm, often included under the same roof with non-rule-based languages. Just as programming environment research has progressed for non-rule-based languages, we would like to build more powerful environments in the rule based world as well. We report here on an attempt to build such an environment.

5.
This paper presents a new language that integrates the real-time and distributed paradigms within the framework of a concurrent logic language. Concurrent logic languages (CLLs) are capable of expressing concurrency, communication and nondeterminism in a natural way. That is, the intrinsic parallel semantics of concurrent logic languages makes them well-suited for distributed programming. The proposed language is particularly suitable for loosely coupled systems and contains mechanisms for distributed and real-time process control. A new execution model for concurrent logic languages is presented, which enables efficient distributed execution and real-time control. The model is introduced by giving an operational semantics for the language, and its implementation is discussed, including the definition of a new abstract machine and its implementation on a network of Unix workstations. Although the sequential core is not optimized, some previous results are discussed, showing the feasibility of the language's execution model for distributed real-time systems. The language is currently being used as the kernel language for a distributed simulation and validation tool for communication protocols.

6.
One problem with debugging (committed choice) concurrent logic programs is that their behaviour may be non-deterministic, in that successive executions of the same program may produce different results. We describe a scheme, based on the ‘Instant Replay’ scheme developed for more conventional parallel languages, that allows us to reproduce the execution behaviour of a concurrent logic program on subsequent executions, so that the execution may be examined for debugging purposes. The properties of concurrent logic programming languages allow us to simplify our scheme greatly. We have demonstrated our scheme with KLIC, and KL1 on the PIM multiprocessors, but it can also be applied to other committed choice concurrent logic programming languages.
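The core of an Instant-Replay-style scheme is that the only nondeterminism worth recording is the order in which messages are consumed; if that order is logged during one run and enforced during the next, the execution becomes reproducible. Below is a small Python sketch of the idea, using threads and queues rather than KLIC or KL1; the producer names and the shared queue are invented for the example.

```python
import queue
import threading

PRODUCERS = ["p1", "p2", "p3"]

def producer(name, q):
    q.put((name, f"message from {name}"))

def record_run():
    """First run: let arrival order be nondeterministic, but log it."""
    q, log = queue.Queue(), []
    threads = [threading.Thread(target=producer, args=(p, q)) for p in PRODUCERS]
    for t in threads:
        t.start()
    for _ in PRODUCERS:
        name, _msg = q.get()
        log.append(name)            # the only nondeterministic choice we record
    for t in threads:
        t.join()
    return log

def replay_run(log):
    """Replay: force messages to be consumed in exactly the recorded order."""
    boxes = {p: queue.Queue() for p in PRODUCERS}
    threads = [threading.Thread(target=producer, args=(p, boxes[p])) for p in PRODUCERS]
    for t in threads:
        t.start()
    for name in log:
        print(boxes[name].get())    # blocks until that producer has delivered
    for t in threads:
        t.join()

replay_run(record_run())
```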

7.
Image processing applications require both computing and communication power. The goal of the GFLOPS project was to study all aspects of the design of such computers and to develop a parallel architecture, together with its software environment, in which these applications can be implemented efficiently. A development environment, in particular a C data-parallel language, has been built for this purpose. The parallel C language presented here simplifies the use of such architectures by providing the programmer with a global name space and a control mechanism to exploit the fine and medium grain parallelism of applications. The main advantage of our paradigm is that it allows both data and control parallelism to be expressed in a single framework. We have implemented this programming environment on the GFLOPS machine, which supports up to 512 processor nodes (PC motherboards) connected via the PCI bus over a scalable and cost-effective network at a constant cost per node. The aim is to obtain, at low cost, a scalable virtually shared memory machine. In this paper we discuss the design of the GFLOPS machine and its parallel C language, and evaluate the effectiveness of the mechanisms incorporated. The analysis of the architecture's behaviour was conducted with microbenchmarks and image processing algorithms written in C.
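As a rough illustration of expressing data parallelism and control parallelism in one framework, here is a sketch using ordinary Python futures rather than the GFLOPS C dialect; the row-scaling and checksum tasks are made up for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def scale_row(row):                     # data parallelism: same operation per row
    return [2 * x for x in row]

def image_checksum(image):              # control parallelism: an unrelated task
    return sum(sum(row) for row in image)

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
with ThreadPoolExecutor() as pool:
    checksum = pool.submit(image_checksum, image)   # runs alongside the map
    scaled = list(pool.map(scale_row, image))       # fine-grain data-parallel part
print(scaled, checksum.result())
```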

8.
The optimized handling of reductions on parallel supercomputers or clusters of workstations is critical to high performance because reductions are common in scientific codes and a potential source of bottlenecks. Yet in many high-level languages, a mechanism for writing efficient reductions remains surprisingly absent. Further, when such mechanisms do exist, they often do not provide the flexibility a programmer needs to achieve a desirable level of performance. In this paper, we present a new language construct for arbitrary reductions that lets a programmer achieve a level of performance equal to that achievable with the highly flexible, but low-level, combination of Fortran and MPI. We have implemented this construct in the ZPL language and evaluate it in the context of the initialization of the NAS MG benchmark. We show a 45-fold speedup over the same code written in ZPL without this construct. In addition, performance on a large number of processors surpasses that achieved in the NAS implementation, showing that our mechanism provides programmers with the needed flexibility.
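The key point of such a construct is that the programmer supplies the combining operator rather than picking from a fixed menu (sum, max, and so on). A minimal sketch of a user-defined parallel reduction in Python follows; the operator, chunking scheme, and worker count are illustrative, and ZPL and Fortran+MPI express the same idea with very different syntax.

```python
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

def combine(a, b):
    """Programmer-defined reduction operator: running maximum plus a count."""
    return (max(a[0], b[0]), a[1] + b[1])

def local_reduce(chunk):
    return reduce(combine, ((x, 1) for x in chunk))

def parallel_reduce(data, nworkers=4):
    chunks = [data[i::nworkers] for i in range(nworkers)]
    with ProcessPoolExecutor(nworkers) as pool:
        partials = pool.map(local_reduce, chunks)   # local reductions in parallel
    return reduce(combine, partials)                # combine the partial results

if __name__ == "__main__":
    print(parallel_reduce(list(range(1000))))       # -> (999, 1000)
```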

9.
Fault tolerance is an issue ignored in most parallel languages. The overhead of making parallel, high-performance programs resilient to processor crashes is often too high, given the low probability of such events. If parallel systems become more large-scaled, however, processor failures will become likely, so they should be dealt with. Two approaches to this problem are feasible. First, the system can make programs fault-tolerant transparently. It can log messages, make checkpoints, and so on. Second, the programmer can write explicit code for handling failures in an application-specific way. The latter approach is potentially more efficient, but also requires more work from the programmer. In this paper, we intend to get some initial insight into how hard and efficient explicit fault-tolerant parallel programming is. We do so by implementing four parallel applications in Argus, a language supporting parallelism as well as fault tolerance. Our experiences indicate that the extra effort needed for fault tolerance varies much between different applications. Also, trade-offs can frequently be made between programming effort and efficiency. One lesson we learned is that fault tolerance should not be added as an afterthought, but is best taken into account from the start. As another result, the ability to integrate transparent and explicit mechanisms for fault tolerance would sometimes be highly useful.  相似文献   
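One common form of application-specific (explicit) fault tolerance is to checkpoint exactly the state the application needs and to resume from it after a crash. A small Python sketch of that pattern follows; the file name, checkpoint interval, and summation workload are invented for the example, and Argus itself provides guardians with stable state and atomic actions rather than ad hoc files.

```python
import os
import pickle

CHECKPOINT = "state.ckpt"

def load_state():
    """Resume from the last checkpoint if one exists (explicit recovery path)."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"next_item": 0, "partial_sum": 0}

def save_state(state):
    """Write the checkpoint atomically so a crash cannot leave it half-written."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

state = load_state()
work = list(range(1000))
for i in range(state["next_item"], len(work)):
    state["partial_sum"] += work[i]
    state["next_item"] = i + 1
    if i % 100 == 0:                     # the application chooses how often to
        save_state(state)                # pay the checkpointing overhead
save_state(state)
print(state["partial_sum"])
```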

10.
Mohamed Hamada 《软件学报》2001,12(9):1279-1286
Functional and logic languages are complementary in the following sense: functional programming languages, which are based on reduction, offer properties such as determinacy and lazy evaluation, but lack desirable features such as existentially quantified variables and partial data structures. Conversely, logic programming languages, based on Horn clause logic and the resolution principle, allow existentially quantified variables and partial data structures but lack determinacy and lazy evaluation. From this point of view, it is natural to combine functional and logic programming languages into a single paradigm; such a combination yields a unified language more expressive than either logic or functional languages alone. This paper presents an operational semantics for functional logic languages and shows that this operational semantics is workable in practice.
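The logic-language features the abstract refers to, existentially quantified (logic) variables and partial data structures, both rest on unification. Below is a compact Python sketch of first-order unification over tuple-encoded terms; the variable representation and term encoding are invented for the example, and the sketch omits the occurs check.

```python
class Var:
    """A logic variable: carries no value until a substitution binds it."""
    _count = 0
    def __init__(self):
        Var._count += 1
        self.id = Var._count
    def __repr__(self):
        return f"_V{self.id}"

def walk(term, subst):
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return an extended substitution, or None if the terms do not unify."""
    a, b = walk(a, subst), walk(b, subst)
    if a is b:
        return subst
    if isinstance(a, Var):
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None

X, Y = Var(), Var()
# Partial data structures: unify cons(1, X) with cons(Y, cons(2, nil)).
print(unify(("cons", 1, X), ("cons", Y, ("cons", 2, "nil")), {}))
```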

11.
In this paper, we introduce Continuation Passing C (CPC), a programming language for concurrent systems in which native and cooperative threads are unified and presented to the programmer as a single abstraction. The CPC compiler uses a compilation technique, based on the CPS transform, that yields efficient code and an extremely lightweight representation for contexts. We provide a proof of the correctness of our compilation scheme. We show in particular that lambda-lifting, a common compilation technique for functional languages, is also correct in an imperative language like C, under some conditions enforced by the CPC compiler. The current CPC compiler is mature enough to write substantial programs such as Hekate, a highly concurrent BitTorrent seeder. Our benchmark results show that CPC is as efficient as the most efficient thread libraries available, while using significantly less space.
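The CPS transform the abstract relies on turns each blocking point into a closure that captures "the rest of the function", so a scheduler can park and resume the computation without a native stack. Here is a tiny direct-style versus CPS comparison in Python; CPC performs this transform over C, and the read/send operations below are stand-ins invented for the example.

```python
# Direct style: the control state between the two blocking calls lives on the
# native call stack.
def echo_upper(read, send):
    data = read()              # blocking point 1
    send(data.upper())         # blocking point 2

# After a CPS transform: every blocking point takes an explicit continuation,
# so the "stack" becomes a chain of small heap-allocated closures.
def echo_upper_cps(read_cps, send_cps, k):
    def after_read(data):          # continuation for blocking point 1
        send_cps(data.upper(), k)  # blocking point 2 gets the final continuation
    read_cps(after_read)

# A trivial "scheduler" that satisfies each request immediately.
def read_cps(k):
    k("hello")

def send_cps(message, k):
    print("sent:", message)
    k()

echo_upper(lambda: "hello", lambda m: print("sent:", m))
echo_upper_cps(read_cps, send_cps, lambda: print("done"))
```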

12.
There is increasing interest in the study of software architectures; however, it is still unclear which kinds of formalisms and techniques should be used in their design. We study the suitability of a rule-based, parallel logic language for specifying the architecture of a complex software system, namely a software development environment. We have used as a case study SMILE, an environment for programming-in-the-large. Because of the declarative, concurrent and object-oriented features of parallel logic programming, we have been able to design a software architecture that emphasizes the dynamics of co-ordination inside the software development environment. The result of this experience shows the usefulness, and some weaknesses, of logic languages for specifying and prototyping the software architecture of a distributed interactive system.

13.
This paper presents a practical evaluation and comparison of three state-of-the-art parallel functional languages. The evaluation is based on implementations of three typical symbolic computation programs, with performance measured on a Beowulf-class parallel architecture. We assess three mature parallel functional languages: PMLS, a system for implicitly parallel execution of ML programs; GPH, a mainly implicit parallel extension of Haskell; and Eden, a more explicit parallel extension of Haskell designed for both distributed and parallel execution. While all three languages employ a completely implicit approach to communication, each takes a different approach to specifying and controlling parallelism, ranging from explicit identification of processes as language constructs (Eden), through annotation of potential parallelism (GPH), to automatic detection of parallel skeletons in sequential code (PMLS). We present detailed performance measurements of all three systems on a widely available parallel architecture: a Beowulf cluster of low-cost commodity workstations. We use three representative symbolic applications: a matrix multiplication algorithm, an exact linear system solver, and a simple ray tracer. Our results show how moderate speedups can be achieved with little or no change to the sequential code, and that parallel performance can be significantly improved, even within our high-level model of parallel functional programming, by controlling key aspects of the program such as load distribution and thread granularity.
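One of the "key aspects" mentioned, thread granularity, can be illustrated independently of Haskell or ML: batching many tiny tasks into larger chunks trades scheduling overhead against load balance. A Python sketch follows; the per-pixel work function, worker count, and chunk sizes are invented for the example.

```python
from concurrent.futures import ProcessPoolExecutor

def trace_pixel(p):
    """Stand-in for a ray tracer's per-pixel work."""
    return sum(i * i for i in range(p % 257))

def parallel_map(f, xs, workers=4, chunksize=1):
    # chunksize is the granularity knob: 1 means one task per pixel (high
    # scheduling overhead), large values mean fewer, coarser tasks.
    with ProcessPoolExecutor(workers) as pool:
        return list(pool.map(f, xs, chunksize=chunksize))

if __name__ == "__main__":
    pixels = list(range(50_000))
    fine = parallel_map(trace_pixel, pixels, chunksize=1)
    coarse = parallel_map(trace_pixel, pixels, chunksize=2_000)
    assert fine == coarse        # same image, different parallel overheads
```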

14.
A common problem when writing compilers for programming languages or little, domain-specific languages is that an input token may have several interpretations, depending on context. Existing solutions to this problem demand programmer intervention, obfuscate the language's grammar, and may introduce subtle bugs. We present a simple technique without these drawbacks, which allows a token to have several types simultaneously, and show how it can be applied to areas such as little-language processing and fuzzy parsing. We also describe ways that compiler tools can support this technique. Copyright © 2001 John Wiley & Sons, Ltd.
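The idea of letting a token carry several types at once can be shown with a lexer that keeps every matching interpretation and leaves disambiguation to whoever consumes the token. A small Python sketch follows; the token categories and patterns are invented for the example.

```python
import re

TOKEN_TYPES = {
    "KEYWORD":    r"(?:if|then|else)",
    "IDENTIFIER": r"[A-Za-z_]\w*",
    "NUMBER":     r"\d+",
}

def lex(word):
    """Return the word together with *all* token types it could be."""
    return word, {name for name, pat in TOKEN_TYPES.items()
                  if re.fullmatch(pat, word)}

for word, types in (lex(w) for w in "if limit then 42 else".split()):
    # 'if' is both KEYWORD and IDENTIFIER here; a parser would later discard
    # whichever interpretation its grammar context cannot use.
    print(f"{word:>6}: {sorted(types)}")
```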

15.
Jonathan J. Cook 《Software》2004,34(9):815-845
We discuss P#, our implementation of a tool that allows interoperation between a concurrent superset of the Prolog programming language and C#. This enables Prolog to be used as a native implementation language for Microsoft's .NET platform. P# compiles a linear logic extension of Prolog to C# source code. We can thus create C# objects from Prolog and use C#'s graphical, networking and other libraries. We add language constructs on the Prolog side that allow concurrent Prolog code to be written. A primitive predicate is provided that evaluates a Prolog structure on a newly forked thread. Communication between threads is based on the unification of variables contained in such a structure. It is also possible for threads to communicate through a globally accessible table. All of the new features are available to the programmer through new built-in Prolog predicates. We discuss two software engineering tools implemented using P#. Copyright © 2004 John Wiley & Sons, Ltd.

16.
Gopal Gupta  Enrico Pontelli 《Software》2001,31(12):1143-1181
Naive parallel implementation of non-deterministic systems (such as a theorem proving system) and languages (such as logic, constraint, or concurrent constraint languages) can result in poor performance. We present three optimization schemas, based on flattening of the computation tree, procrastination of overheads, and sequentialization of computations, that can be systematically applied to parallel implementations of non-deterministic systems/languages to reduce the parallel overhead and to obtain improved efficiency of parallel execution. The effectiveness of these schemas is illustrated by applying them to the ACE parallel logic programming system. The performance data presented show that considerable improvement in execution efficiency can be achieved. Copyright © 2001 John Wiley & Sons, Ltd.

17.
Vienna Fortran, High Performance Fortran (HPF), and other data parallel languages have been introduced to allow the programming of massively parallel distributed-memory machines (DMMPs) at a relatively high level of abstraction, based on the SPMD paradigm. Their main features include directives to express the distribution of data and computations across the processors of a machine. In this paper, we use Vienna Fortran as a general framework for dealing with sparse data structures. We describe new methods for the representation and distribution of such data on DMMPs, and propose simple language features that permit the user to characterize a matrix as “sparse” and specify the associated representation. Together with the data distribution for the matrix, this enables the compiler and runtime system to translate sequential sparse code into explicitly parallel message-passing code. We develop new compilation and runtime techniques, which focus on achieving storage economy and reducing communication overhead in the target program. The overall result is a powerful mechanism for dealing efficiently with sparse matrices in data parallel languages and their compilers for DMMPs.
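To make the representation/distribution distinction concrete: a sparse matrix is typically stored in a compressed form such as CSR, and a data-parallel compiler then distributes whole rows, along with the corresponding slices of the compressed arrays, across processors. Below is a Python sketch of both steps; the BLOCK row distribution and the 4x4 example matrix are illustrative, and Vienna Fortran expresses this with directives rather than explicit code.

```python
def to_csr(dense):
    """Compressed Sparse Row: keep only the nonzeros plus the index structure."""
    values, colind, rowptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                colind.append(j)
        rowptr.append(len(values))
    return values, colind, rowptr

def block_row_owners(nrows, nprocs):
    """BLOCK distribution of rows: processor p owns a contiguous row range and
    would hold only its slice of values/colind locally."""
    per = (nrows + nprocs - 1) // nprocs
    return [(p, list(range(p * per, min((p + 1) * per, nrows))))
            for p in range(nprocs)]

dense = [[4, 0, 0, 1],
         [0, 0, 3, 0],
         [0, 2, 0, 0],
         [5, 0, 0, 6]]
values, colind, rowptr = to_csr(dense)
print("values:", values, "colind:", colind, "rowptr:", rowptr)
print("owners:", block_row_owners(nrows=len(dense), nprocs=2))
```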
