Similar Literature
20 similar documents were retrieved (search time: 31 ms).
1.
This paper describes theoretical and practical aspects of a partial evaluator that treats a parallel lambda language. The parallel language presented combines the lambda calculus with a message-passing communication mechanism. It can be used to write a programming language's denotational semantics in a way that extracts the parallelism in the program, and from this denotational definition the partial evaluator can generate a parallel compiler for the language by self-application. The key technique of partial evaluation is binding-time analysis, which determines in advance which parts of the source program can be evaluated during partial evaluation and which cannot. A binding-time analysis based upon type inference is described. A new type, chcode, is introduced into the type system to denote the type of expressions containing residual channel operations. A well-formedness criterion is given which ensures that partial evaluation neither commits type errors nor changes the sequence of channel operations. Before binding-time analysis, a channel analysis determines the communication relationships between send and receive processes.
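As a hedged illustration of the binding-time idea (in plain C rather than the paper's parallel lambda language), the sketch below treats the exponent as a static value that a partial evaluator could fold away, while the hypothetical receive_value function stands in for a residual channel receive that must remain dynamic.

```c
#include <stdio.h>

/* Hypothetical dynamic input: stands in for a residual channel receive,
 * which binding-time analysis must leave in the specialized program. */
static double receive_value(void) {
    double x = 0.0;
    if (scanf("%lf", &x) != 1) return 0.0;
    return x;
}

/* General program: the exponent n may be static or dynamic. */
double power(double x, int n) {
    double r = 1.0;
    for (int i = 0; i < n; i++) r *= x;   /* loop controlled by n */
    return r;
}

/* Residual program produced when n = 3 is known statically:
 * the loop is unfolded; only the dynamic receive and the
 * multiplications on the dynamic value remain. */
double power_3_residual(void) {
    double x = receive_value();           /* dynamic: cannot be evaluated early */
    return x * x * x;                     /* static control folded away */
}

int main(void) {
    printf("power(2,3) = %f\n", power(2.0, 3));       /* general program */
    printf("residual   = %f\n", power_3_residual());  /* reads x at run time */
    return 0;
}
```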

2.
Y. Tsujino, M. Ando, T. Araki, N. Tokura. Software, 1984, 14(11): 1061-1078
Recent advances in hardware technology have made the construction of multiprocessor systems economically feasible. This paper describes a new programming language (Concurrent C) suitable for distributed systems, which are networks of loosely connected processors, each with its own local storage. Concurrent C is an extended version of the programming language C, incorporating features for parallel processing and interprocess communication.
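Concurrent C's own process and transaction syntax is not reproduced here; as a rough analogue in plain POSIX C, the sketch below shows the kind of message passing between processes with separate address spaces that such language features abstract over.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int pipefd[2];
    if (pipe(pipefd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {                               /* child: own address space */
        close(pipefd[0]);
        const char *msg = "hello from child process";
        write(pipefd[1], msg, strlen(msg) + 1);   /* send a message */
        close(pipefd[1]);
        return 0;
    }

    /* parent: receives the message over the pipe */
    close(pipefd[1]);
    char buf[64];
    ssize_t n = read(pipefd[0], buf, sizeof(buf) - 1);
    if (n < 0) n = 0;
    buf[n] = '\0';
    close(pipefd[0]);
    wait(NULL);
    printf("received: %s\n", buf);
    return 0;
}
```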

3.
This paper describes, in an informal manner, the programming language ACTUS, which was designed to facilitate the programming of array-processing and vector-processing 'supercomputers'. ACTUS extends the program-structuring and data-structuring facilities of Pascal to the synchronous parallel environment represented by array and vector processor architectures. A knowledge of Pascal is assumed and only the parallel features of ACTUS are described.

4.
5.
This paper introduces the JStar parallel programming language, which is a Java-based declarative language aimed at discouraging sequential programming, encouraging massively parallel programming, and giving the compiler and runtime maximum freedom to try alternative parallelisation strategies. We describe the execution semantics and runtime support of the language, several optimisations and parallelism strategies, with some benchmark results.

6.
As the cost of processor hardware declines, multiprocessor architectures become increasingly cost-effective and represent an important area for future research. In order to exploit the full potential of multiprocessors, however, it is necessary to understand how to design software which can make effective use of the available parallelism. This paper considers the impact of multiprocessor architecture on the design of high-level programming languages and, in particular, evaluates the language Ada in the light of the special requirements of real-time multiprocessor systems. We conclude that Ada does not, as currently designed, meet the needs of real-time embedded systems.

7.
Opus is a new programming language designed to assist in coordinating the execution of multiple, independent program modules. With the help of Opus, coarse grained task parallelism between data parallel modules can be expressed in a clean and structured way. In this paper we address the problems of how to build a compilation and runtime support system that can efficiently implement the Opus constructs. Our design considers the often-conflicting goals of efficiency and modular construction through software re-use. In particular, we present the system requirements for an efficient Opus implementation, the Opus runtime system, and describe how they work together to provide the underlying services that the Opus compiler needs for a broad class of machines. Copyright © 2000 John Wiley & Sons, Ltd.

8.
The rapid rise of OpenMP as the preferred parallel programming paradigm for small-to-medium scale parallelism could slow unless OpenMP can show capabilities for becoming the model of choice for large-scale high-performance parallel computing in the coming decade. The main stumbling block for the adaptation of OpenMP to distributed shared memory (DSM) machines, which are based on architectures like cc-NUMA, stems from the lack of capabilities for data placement among processors and threads for achieving data locality. The absence of such a mechanism causes remote memory accesses and inefficient cache memory use, both of which lead to poor performance. This paper presents a simple software programming approach called copy-inside-copy-back (CC) that exploits the data privatization mechanism of OpenMP for data placement and replacement. This technique enables one to distribute data manually without taking away control and flexibility from the programmer, and is thus an alternative to the automatic and implicit approaches. Moreover, the CC approach improves on the OpenMP SPMD style of programming, making the development process of an OpenMP application more structured and simpler. The CC technique was tested and analyzed using the NAS Parallel Benchmarks on SGI Origin 2000 multiprocessor machines. This study shows that OpenMP improves performance of coarse-grained parallelism, although a fast copy mechanism is essential. Copyright © 2004 John Wiley & Sons, Ltd.
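A minimal sketch of the copy-inside-copy-back idea in C with OpenMP, assuming a simple block distribution (an illustration of the general technique, not the authors' benchmark code): each thread copies its chunk of a shared array into thread-private storage, computes on the private copy so accesses stay local, and then writes the results back.

```c
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 1000000

int main(void) {
    double *a = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++) a[i] = (double)i;

    #pragma omp parallel
    {
        int nth   = omp_get_num_threads();
        int tid   = omp_get_thread_num();
        int chunk = (N + nth - 1) / nth;
        int lo    = tid * chunk;
        int hi    = (lo + chunk < N) ? lo + chunk : N;
        int len   = hi - lo;

        /* copy inside: private copy placed in memory local to this thread */
        double *local = malloc(len * sizeof(double));
        for (int i = 0; i < len; i++) local[i] = a[lo + i];

        /* compute on the private copy: no remote accesses, good cache reuse */
        for (int i = 0; i < len; i++) local[i] = local[i] * 2.0 + 1.0;

        /* copy back: publish the results into the shared array */
        for (int i = 0; i < len; i++) a[lo + i] = local[i];
        free(local);
    }

    printf("a[N-1] = %f\n", a[N - 1]);
    free(a);
    return 0;
}
```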

9.
This paper describes the design and implementation of an Efficient Architecture for Running THreads (EARTH) runtime system for a multi-processor/multi-node cluster. The EARTH model was designed to support the efficient execution of parallel (multi-threaded) programs with irregular fine-grain parallelism using off-the-shelf computers. Implementing an EARTH runtime system requires an explicitly threaded runtime system. For portability, we built this runtime system on top of Pthreads under Linux and used sockets for inter-node communication. Moreover, in order to make the best use of the resources available on a cluster of symmetric multi-processors (SMP), this implementation enables the overlapping of communication and computation. We used Threaded-C, a language designed to implement the programming model supported by the EARTH architecture. This language allows the expression of various levels of parallelism and provides the primitives needed to manage the required communication and synchronization. The Threaded-C programming language supports irregular fine-grain parallelism through a two-level hierarchy of threads and fibers. It also provides various synchronization and communication constructs that reflect the nature of EARTH's fibers—non-preemptive execution with data-driven scheduling—as well as the extensive use of split-phase transactions on EARTH to execute long-latency operations. Copyright © 2003 John Wiley & Sons, Ltd.
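The Threaded-C primitives themselves are not shown here; the following is a hedged, plain C/Pthreads illustration of the data-driven scheduling idea behind EARTH-style fibers: a continuation carries a synchronization count, each arriving datum decrements it, and the body runs only once all inputs have arrived, so a long-latency operation can be split into an initiating phase and a continuation.

```c
#include <stdio.h>
#include <pthread.h>

/* A "fiber": a non-preemptive continuation that fires when its
 * synchronization count reaches zero (data-driven scheduling). */
typedef struct {
    pthread_mutex_t lock;
    int             sync_count;      /* inputs still outstanding */
    void          (*body)(void *);   /* continuation to run */
    void           *arg;
} fiber_t;

static void fiber_signal(fiber_t *f) {
    int ready;
    pthread_mutex_lock(&f->lock);
    ready = (--f->sync_count == 0);
    pthread_mutex_unlock(&f->lock);
    if (ready) f->body(f->arg);      /* all inputs arrived: run to completion */
}

/* Shared "frame" for the split-phase computation. */
static double part[2];
static fiber_t sum_fiber;

static void sum_body(void *arg) {
    (void)arg;
    printf("sum = %f\n", part[0] + part[1]);
}

/* Each worker plays the role of a long-latency operation: it produces
 * a datum and signals the waiting fiber instead of blocking it. */
static void *worker(void *arg) {
    int i = (int)(long)arg;
    part[i] = (i + 1) * 10.0;        /* stand-in for a remote fetch */
    fiber_signal(&sum_fiber);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    pthread_mutex_init(&sum_fiber.lock, NULL);
    sum_fiber.sync_count = 2;
    sum_fiber.body = sum_body;
    sum_fiber.arg = NULL;

    for (long i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    return 0;
}
```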

10.
11.
PRESTO is a programming system for writing object-oriented parallel programs in a multiprocessor environment. PRESTO provides the programmer with a set of pre-defined object types that simplify the construction of parallel programs. Examples of PRESTO objects are threads, which provide fine-grained control over a program's execution, and synchronization objects, which allow simultaneously executing threads to co-ordinate their activities. The goals of PRESTO are to provide a programming environment that makes it easy to express concurrent algorithms, to do so efficiently, and to do so in a manner that invites extensions and modifications. The first two goals, which are the focus of this paper, allow a programmer to use parallelism in a way that is naturally suited to the problem at hand, rather than being constrained by the limitations of a particular underlying kernel or hardware architecture. The third goal is touched upon but not emphasized in this paper. PRESTO is written in C++; it currently runs on the Sequent shared-memory multiprocessor on top of the Dynix operating system. In this paper we describe the system model, its applicability to parallel programming, experiences with the initial implementation, and some early performance measurements.

12.
FCL is a higher-order functional programming language which consolidates and extends a number of desirable features of existing languages. This paper describes the salient features of FCL and an algorithm for translation to highly parallel data flow graphs. The translation algorithm is based on a set of extended “combinators”. The relationship between functional programming languages and demand-driven or data-driven data flow architectures is established.

13.
Brinch Hansen. Software, 1981, 11(4): 325-359
This paper defines a programming language called Edison. The language is suitable both for teaching the principles of concurrent programming and for designing reliable real-time programs for multiprocessor systems. Edison is block structured and includes modules, concurrent statements, and when statements.

14.
Computer Languages, 1996, 22(2-3): 181-192
An effective resolution multiprocessor can be built from distributed processing, logic programming, and interface elements. Widely used, portable components can be modularly composed into a portable parallel system that displays good resistance to premature obsolescence through software evolution. A virtual multiprocessor offering common message-passing and configuration services integrates a distributed mesh of sequential resolution engines. Users configure and control the resolution engines and the virtual multiprocessor through a GUI, using an embedded command language to drive its facilities. Prolog programs either control parallel execution explicitly through message passing or must rely on program transformation techniques to extract parallelism implicitly.

15.
In international supercomputing, the architecture of parallel machines and the corresponding parallel programming languages have long been frontier topics and difficult problems, and changes in architecture inevitably bring improvements and developments in programming languages. Based on an SPP architecture with hypernodes, this paper discusses, at the language level, control (task) parallelism, data distribution, and synchronization under that architecture.

16.
A para-functional programming language is a functional language that has been extended with special annotations that provide an extra degree of control over parallel evaluation. Of most interest are annotations that allow one to express the dynamic mapping of a program onto a known multiprocessor topology. Since it is quite desirable to provide a precise semantics for any programming language, in this paper a denotational semantics is given for a simple para-functional programming language with mapping annotations. A precise meaning is given not only to the normal functional behavior of the program (i.e., the answer), but also to the operational notion of where (i.e., on what processor) expressions are evaluated. The latter semantics is accomplished through an abstract entity called an execution tree. This research was supported in part by the National Science Foundation under Grants DCR-8403304 and DCR-8451415, and the Department of Energy under Grant DE-FG02-86ER25012.

17.
It is currently possible to build multiprocessor systems which will support the tightly coupled activity of hundreds to thousands of different instruction streams, or processes. This can be done by coupling many monoprocessors, or a smaller number of pipelined multiprocessors, through a high concurrency switching network. The switching network may couple processors to memory modules, resulting in a shared memory multiprocessor system, or it may couple processor/memory pairs, resulting in a distributed memory system.

The need to direct the activity of very many processes simultaneously places qualitatively different demands on a programming language than the direction of a single process. In spite of the different requirements, most languages for multiprocessors have been simple extensions of conventional, single stream programming languages. The extensions are often implemented by way of subroutine calls and have little impact on the basic structure of the language. This paper attempts to examine the underlying conceptual structure of parallel languages for large-scale multiprocessors on the basis of an existing language for shared memory multiprocessors, known as the Force, and to extend the concepts in this language to distributed memory systems.
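The Force's own Fortran-based syntax is not reproduced; as a hedged illustration in C with Pthreads (assuming POSIX barriers are available), the sketch below shows the single-program, many-stream style such languages build on: every process executes the same program text, a process identifier selects its share of the work, and a barrier keeps the streams aligned between phases.

```c
#include <stdio.h>
#include <pthread.h>

#define NPROC 4
#define N     16

static pthread_barrier_t barrier;
static double a[N], b[N];

/* Every "process" runs the same program text (SPMD); its identifier
 * selects its share of the work, and barriers align the streams. */
static void *spmd_program(void *arg) {
    int me = (int)(long)arg;

    for (int i = me; i < N; i += NPROC)      /* prescheduled parallel loop */
        a[i] = i * 1.0;

    pthread_barrier_wait(&barrier);          /* all streams reach this point */

    for (int i = me; i < N; i += NPROC)      /* second phase reads phase-one data */
        b[i] = a[i] + a[(i + 1) % N];
    return NULL;
}

int main(void) {
    pthread_t t[NPROC];
    pthread_barrier_init(&barrier, NULL, NPROC);
    for (long p = 0; p < NPROC; p++)
        pthread_create(&t[p], NULL, spmd_program, (void *)p);
    for (int p = 0; p < NPROC; p++)
        pthread_join(t[p], NULL);
    printf("b[0] = %f\n", b[0]);
    pthread_barrier_destroy(&barrier);
    return 0;
}
```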


18.
Per Brinch Hansen. Software, 1989, 19(6): 579-592
Joyce is a programming language for parallel computers based on CSP and Pascal. A Joyce program defines concurrent agents which communicate through unbuffered channels. This paper describes a multiprocessor implementation of Joyce.
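Joyce's agent and channel syntax is not shown; the sketch below is a hedged, plain C/Pthreads analogue of an unbuffered channel, where a send does not complete until a matching receive takes the value, giving the CSP-style rendezvous the abstract describes.

```c
#include <stdio.h>
#include <pthread.h>

/* An unbuffered (synchronous) channel: send blocks until a receiver
 * has taken the value, so communication is a rendezvous. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             value;
    int             full;     /* 1 while a value is waiting to be taken */
} channel_t;

static void chan_init(channel_t *c) {
    pthread_mutex_init(&c->lock, NULL);
    pthread_cond_init(&c->cond, NULL);
    c->full = 0;
}

static void chan_send(channel_t *c, int v) {
    pthread_mutex_lock(&c->lock);
    while (c->full) pthread_cond_wait(&c->cond, &c->lock);  /* previous value not taken */
    c->value = v;
    c->full = 1;
    pthread_cond_broadcast(&c->cond);
    while (c->full) pthread_cond_wait(&c->cond, &c->lock);  /* wait for the rendezvous */
    pthread_mutex_unlock(&c->lock);
}

static int chan_receive(channel_t *c) {
    pthread_mutex_lock(&c->lock);
    while (!c->full) pthread_cond_wait(&c->cond, &c->lock);
    int v = c->value;
    c->full = 0;
    pthread_cond_broadcast(&c->cond);       /* release the blocked sender */
    pthread_mutex_unlock(&c->lock);
    return v;
}

static channel_t ch;

static void *agent(void *arg) {             /* a producer "agent" */
    (void)arg;
    for (int i = 1; i <= 3; i++) chan_send(&ch, i * i);
    return NULL;
}

int main(void) {
    pthread_t t;
    chan_init(&ch);
    pthread_create(&t, NULL, agent, NULL);
    for (int i = 0; i < 3; i++) printf("received %d\n", chan_receive(&ch));
    pthread_join(t, NULL);
    return 0;
}
```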

19.
This paper reports on the memory performance of parallel scientific algorithms, written in both pure and impure functional styles. The Id programming language is used, since it allows both pure and impure parallel functional programs to be expressed. The non-strict storage model of Id is introduced. The study focuses on two algorithms: the Dongarra-Sorensen eigensolver and the NAS FT three-dimensional heat equation solver, based on FFTs. This study verifies the claim that functional languages allow the composition of programs from modules, exploiting the inter- and intra-module parallelism without the need for rewriting these modules. But it also shows that the memory use of pure functional programs can be excessive, and that impure functional programs can be as memory-efficient as imperative implementations.

20.
The Hydra Parallel Programming System, a new parallel language extension to Java, and its supporting software are described. It is a fairly simple yet powerful language designed to address a number of areas that have not received much attention. One of these areas is the recompilation of parallel programs at runtime to allow a parallel program to adapt to the architecture it is executing on. The first version of this software system focuses on the smaller symmetric multiprocessing (SMP) and compatible architectures that are becoming more common, a class of machines for which the large community of Java programmers has few parallel programming options. Hydra programs run without modification as sequential Java on machines that lack parallel support or an implemented Hydra runtime system. This paper describes the language, compares it with other languages (specifically with JOMP, an OpenMP implementation for Java), presents a brief discussion of compiling and executing Hydra programs, presents some sample benchmarks and their performance on three platforms, and concludes with a discussion of issues and future directions for Hydra. Copyright © 2007 John Wiley & Sons, Ltd.
