Similar Literature
20 similar articles found.
1.
Two parallel implementations of a state-of-the-art ocean model are described and analyzed: one is written in the implicitly parallel language Id for the Monsoon multithreaded dataflow architecture, and the other in data-parallel CM Fortran for the CM-5. The multithreaded programming model is inherently more expressive than the data-parallel model but is not especially adapted to the regular data structures common to many scientific codes. One goal of this study is to understand what, if any, performance penalties multithreaded execution incurs when implementing a program that is well suited to data-parallel execution. To avoid technology and machine-configuration issues, the two implementations are compared in terms of overhead cycles per required floating-point operation. When flows in the complex geometries typical of ocean basins are simulated, the data-parallel model remains efficient only if redundant computations are performed over land. The generality of the Id programming model, however, allows one to easily and transparently implement a parallel code that computes only in the ocean. When ocean basins with complex, irregular geometry are simulated, the normalized performance on Monsoon is comparable with that of the CM-5. For more regular geometries that map well to the computational domain, the data-parallel approach proves to be the better match. We conclude by examining the extent to which clusters of mainstream symmetric multiprocessor (SMP) systems offer a scientific-computing environment that can capitalize on and combine the strengths of the two paradigms.
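To make the land/ocean trade-off concrete: a data-parallel code updates every cell of the rectangular domain and masks out land, while a more general model gathers and updates ocean points only. A minimal NumPy sketch of the two styles, with an invented basin mask and a generic five-point stencil rather than the paper's ocean kernel:

```python
import numpy as np

# Hypothetical rectangular domain: True = ocean, False = land.
mask = np.zeros((8, 8), dtype=bool)
mask[2:7, 1:6] = True            # an irregular ocean basin
field = np.random.rand(8, 8)

# Data-parallel style: compute everywhere, then discard land points.
# Work performed: one update per grid cell, land included.
smoothed_full = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                        np.roll(field, 1, 1) + np.roll(field, -1, 1))
result_dp = np.where(mask, smoothed_full, 0.0)

# "Compute only in the ocean" style: gather ocean points, update, scatter.
# Work performed: one update per ocean cell only.
ocean_idx = np.argwhere(mask)
result_gather = np.zeros_like(field)
for i, j in ocean_idx:
    result_gather[i, j] = 0.25 * (field[i - 1, j] + field[i + 1, j] +
                                  field[i, j - 1] + field[i, j + 1])

print("cells updated (data-parallel):", mask.size)      # 64
print("cells updated (ocean only):  ", len(ocean_idx))  # 25
```

On this toy mask the data-parallel style performs 64 updates where only 25 are required; that excess is the redundant computation over land the abstract refers to.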

2.
Lee, B.; Hurson, A. R. Computer, 1994, 27(8): 27-39
Contrary to initial expectations, implementing dataflow computers has presented a monumental challenge. Now, however, multithreading offers a viable alternative for building hybrid architectures that exploit parallelism. The eventual success of dataflow computers will depend on their programmability. Traditionally, they have been programmed in languages such as Id and SISAL (Streams and Iterations in a Single Assignment Language) that use functional semantics. These languages reveal high levels of concurrency and translate onto dataflow machines, and onto conventional parallel machines via the Threaded Abstract Machine (TAM). However, because their syntax and semantics differ from those of imperative counterparts such as Fortran and C, they have been slow to gain acceptance in the programming community. An alternative is to explore the use of established imperative languages to program dataflow machines; the difficulty, however, lies in analyzing data dependencies and extracting parallelism from source code that contains side effects. More research is therefore needed to develop compilers for conventional languages that can produce parallel code comparable to that of parallel functional languages.

3.
4.
Vienna Fortran, High Performance Fortran (HPF), and other data-parallel languages have been introduced to allow the programming of massively parallel distributed-memory machines (DMMPs) at a relatively high level of abstraction, based on the SPMD paradigm. Their main features include directives to express the distribution of data and computations across the processors of a machine. In this paper, we use Vienna Fortran as a general framework for dealing with sparse data structures. We describe new methods for the representation and distribution of such data on DMMPs, and propose simple language features that permit the user to characterize a matrix as "sparse" and specify the associated representation. Together with the data distribution for the matrix, this enables the compiler and runtime system to translate sequential sparse code into explicitly parallel message-passing code. We develop new compilation and runtime techniques, which focus on achieving storage economy and reducing communication overhead in the target program. The overall result is a powerful mechanism for dealing efficiently with sparse matrices in data-parallel languages and their compilers for DMMPs.
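As a concrete illustration of what such a translation must manage, the sketch below builds a CRS (compressed row storage) representation by hand and assigns contiguous row blocks to processors, roughly what a BLOCK distribution of a sparse matrix implies. The helper names and the Python setting are ours, not Vienna Fortran syntax:

```python
import numpy as np

def to_crs(dense):
    """Convert a dense matrix to CRS arrays (values, column indices, row pointers)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def block_row_partition(n_rows, n_procs):
    """Assign contiguous blocks of rows to processors, as a BLOCK distribution would."""
    base, extra = divmod(n_rows, n_procs)
    bounds, start = [], 0
    for p in range(n_procs):
        stop = start + base + (1 if p < extra else 0)
        bounds.append((start, stop))
        start = stop
    return bounds

A = np.array([[4.0, 0, 0, 1.0],
              [0, 3.0, 0, 0],
              [2.0, 0, 5.0, 0],
              [0, 0, 0, 6.0]])
values, col_idx, row_ptr = to_crs(A)
for p, (lo, hi) in enumerate(block_row_partition(len(A), 2)):
    # Each processor owns the CRS segment covering its rows only,
    # which is the storage economy the compiler is after.
    seg = slice(row_ptr[lo], row_ptr[hi])
    print(f"proc {p}: rows {lo}..{hi - 1}, values {values[seg]}")
```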

5.
Scientific computing is usually associated with compiled languages for maximum efficiency. However, in a typical application program, only a small part of the code is time-critical and requires the efficiency of a compiled language. It is often advantageous to use interpreted high-level languages for the remaining tasks, adopting a mixed-language approach. This will be demonstrated for Python, an interpreted object-oriented high-level language that is well suited for scientific computing. Particular attention is paid to high-level parallel programming using Python and the BSP model. We explain the basics of BSP and how it differs from other parallel programming tools like MPI. Thereafter we present an application of Python and BSP for solving a partial differential equation from computational science, utilizing high-level design of libraries and mixed-language (Python–C or Python–Fortran) programming.
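A minimal simulation of one BSP superstep, using plain Python data structures rather than the BSP library used in the paper: local computation, buffered communication, then a barrier after which all messages become visible at once.

```python
# Simulate a BSP superstep for P processes: local compute, message
# posting, barrier, then the received data becomes visible.
P = 4

def superstep(local_values):
    # 1) Local computation phase: each process squares its value.
    computed = [v * v for v in local_values]
    # 2) Communication phase: each process posts its result to the
    #    next process (a cyclic shift), but nothing is visible yet.
    outbox = {(p, (p + 1) % P): computed[p] for p in range(P)}
    # 3) Barrier: all messages are delivered at once. No process can
    #    observe a partially completed exchange, which is what makes
    #    BSP programs easier to reason about than arbitrary MPI sends.
    inbox = [None] * P
    for (src, dst), msg in outbox.items():
        inbox[dst] = msg
    return inbox

print(superstep([1, 2, 3, 4]))   # -> [16, 1, 4, 9]
```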

6.
This paper proposes extensions of sequential programming languages for parallel programming that have the following features: (1) dynamic structures: the process structure is dynamic; processes and variables can be created and deleted; (2) paradigm integration: the programming notation supports both shared-memory and message-passing models; (3) determinism: demonstrating that a program is deterministic (all executions with the same input produce the same output) is straightforward, and programs can be written so that compilers can verify that they are deterministic. Nondeterministic constructs can be introduced in a sequence of refinement steps to obtain greater efficiency if required. The ideas have been incorporated in an extension of Fortran, but the underlying sequential imperative language is not central to the ideas described here. A compiler for the Fortran extension, called Fortran M, is available by anonymous ftp from Argonne National Laboratory. Fortran M has been used for a variety of parallel applications.
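The determinism property can be sketched with single-writer/single-reader channels: if each channel has exactly one sender and one receiver, every execution delivers the same message sequence. The code below illustrates that channel discipline with Python's multiprocessing pipes; it is an analogy, not Fortran M syntax:

```python
from multiprocessing import Process, Pipe

def producer(conn):
    # Sole writer of the channel: the message order is fixed by
    # program order, independent of scheduling.
    for i in range(5):
        conn.send(i * i)
    conn.send(None)          # end-of-stream marker
    conn.close()

def consumer(conn):
    # Sole reader: with one writer and one reader per channel, the
    # sequence of received values is the same in every execution,
    # so the program is deterministic by construction.
    while (item := conn.recv()) is not None:
        print("received", item)

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=producer, args=(child,))
    c = Process(target=consumer, args=(parent,))
    p.start(); c.start()
    p.join(); c.join()
```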

7.
David R. Hanson. Software, 1977, 7(5): 625-630
Adaptable programs are one of the benefits of structured programming. The adaptability of a program is the degree to which it can be transformed into another program that performs a similar, but slightly different, function. While it is clear that the aims of structured programming are best satisfied by the use of modern programming languages, a great number of programmers must use languages such as Fortran. To alleviate this situation, a number of preprocessors have been introduced that give Fortran a more structured facade. This paper describes an experiment performed to test the adaptability of programs written in RATFOR, one of these preprocessors. Judging from the results, the use of a good preprocessor can significantly increase the adaptability of Fortran programs.

8.
The Portable Parallelizing Fortran Compiler (PPFC) is an additional component of the portable programming environment developed at Tel-Aviv University for scientific code. This environment supports portable and efficient programming of diverse MIMD multiprocessors, both distributed- and shared-memory. Until now this environment has consisted of two tools: the Virtual Machine for MultiProcessors (VMMP) and the Portable Parallelizing Pascal compiler (P3C). We have added the PPFC, an automatic parallelizing compiler for the Fortran language. The compiler is fully automatic (it requires no additional declarations to assist parallelization), targets code characterized by loops operating on regular data structures, and produces efficient and portable code for a variety of multiprocessors from the same serial code. The parallel implementation uses the VMMP, a software package that provides a coherent set of services for explicitly parallel application programs running on diverse MIMD multiprocessors. VMMP is intended to simplify parallel program writing and to promote portable and efficient programming. The PPFC parallelized 12 of the 24 Livermore Loops. It was also applied to all 14 Fortran application programs that were parallelized by the P3C, achieving the same speed-ups and efficiencies. In most examples the PPFC achieved high speed-ups and efficiencies on all target multiprocessors. The PPFC emphasizes efficiency and code portability. Although it employs a relatively simple data-flow analysis, it produces efficient code for various widely used application programs.

9.
There are many paradigms being promoted and explored for programming parallel computers, including modified sequential languages, new imperative languages, and applicative languages. SISAL is an applicative language designed by a consortium of industrial and research organizations for the specification and execution of parallel programs. It allows programs to be written with little concern for the structure of the underlying machine, so the programmer is free to explore different ways of expressing the parallelism. A major problem with applicative languages has been their poor efficiency at handling large data structures. To counter this problem, SISAL includes some advanced memory-management techniques for reducing the amount of data copying that occurs. In this paper we discuss the implementation of some image-processing benchmarks in SISAL and C to evaluate the effectiveness of the memory-management code. In general, the SISAL program was easier to code than the C version (augmented with the PARMACS macros) because we were not concerned with the parallel implementation details. We found that the SISAL performance was in general comparable to that of C, and that it could be brought in line with an efficient parallel C implementation by some programmer-specified code transformations.

10.
The mpF programming language, an extension of Fortran 90 for parallel systems with distributed memory, is described. This language was developed using the expertise obtained in the application and evolution of the mpC programming language. mpF is based on the explicit-parallelism approach and is an attempt to find a compromise between efficiency and expressive power on the one hand and convenience of use on the other. Basic concepts of the language are outlined. The efficiency of programs written in mpF is compared with that of programs written in C with calls to MPI functions.

11.
The Fortran language has been commonly used for many kinds of scientific computation. In this paper, we focus on the solution of an unsteady heat-conduction equation, one of the simplest problems in thermal dynamics. Recently, GPUs (graphics processing units) have been enhanced with a Fortran programming capability employing CUDA (compute unified device architecture), known as CUDA Fortran. We find that the performance of an ordinary CUDA Fortran implementation is lower than that of a CUDA C implementation, and that the intermediate PTX (parallel thread execution) assembly files of the two languages do not coincide. By comparing the PTX files produced by the two programs, we can detect the bottleneck that causes the slowdown. We propose three optimization techniques that bring the speed of CUDA Fortran in line with that of CUDA C. The optimizations are performed in the Fortran source, guided by analysis of the PTX files; it is thus possible to improve the performance of CUDA Fortran by corrections made entirely at the programming-language level.

12.
This paper reports on the memory performance of parallel scientific algorithms written in both pure and impure functional styles. The Id programming language is used, since it allows both pure and impure parallel functional programs to be expressed. The non-strict storage model of Id is introduced. The study focuses on two algorithms: the Dongarra-Sorensen eigensolver and the NAS FT three-dimensional heat-equation solver, based on FFTs. This study verifies the claim that functional languages allow the composition of programs from modules, exploiting inter- and intra-module parallelism without the need to rewrite these modules. But it also shows that the memory use of pure functional programs can be excessive, and that impure functional programs can be as memory-efficient as imperative implementations.

13.
Dynamic programming algorithms are traditionally expressed by a set of matrix recurrences, a low level of abstraction that renders the design of novel dynamic programming algorithms difficult and makes debugging cumbersome. Bellman's GAP is a declarative, domain-specific language that supports dynamic programming over sequence data. It implements the algebraic style of dynamic programming and allows one to specify algorithms by combining so-called yield grammars with evaluation algebras. Products on algebras allow the creation of novel types of analyses from existing ones, without modifying tested components. Bellman's GAP extends the previous concepts of algebraic dynamic programming in several respects, such as an "interleaved" product operation and the analysis of multi-track input. Extensive analysis of the yield grammar and the evaluation algebras is required for generating efficient imperative code from the algebraic specification. This article gives an overview of the analyses required and presents several of them in detail. Measurements with "real-world" applications demonstrate the quality of the code produced.
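The algebraic separation of search space and scoring can be sketched by hand: one recurrence, two interchangeable evaluation algebras. This is a plain Python illustration of the idea, not Bellman's GAP notation, and the lattice-path recurrence is our invented example:

```python
from functools import lru_cache

# One recurrence (the "grammar"): monotone lattice paths in a grid.
# Two evaluation algebras: one counts paths, one scores them.
count_algebra = {
    "base":   lambda cell: 1,
    "step":   lambda sub, cell: sub,
    "choice": sum,                      # alternatives add up
}
min_cost_algebra = {
    "base":   lambda cell: cell,
    "step":   lambda sub, cell: sub + cell,
    "choice": min,                      # keep the cheapest alternative
}

def solve(grid, algebra):
    n, m = len(grid), len(grid[0])
    @lru_cache(maxsize=None)
    def rec(i, j):
        if i == 0 and j == 0:
            return algebra["base"](grid[0][0])
        alts = []
        if i > 0:
            alts.append(algebra["step"](rec(i - 1, j), grid[i][j]))
        if j > 0:
            alts.append(algebra["step"](rec(i, j - 1), grid[i][j]))
        return algebra["choice"](alts)
    return rec(n - 1, m - 1)

grid = [[1, 3, 1],
        [1, 5, 1],
        [4, 2, 1]]
print(solve(grid, count_algebra))     # 6: number of paths in a 3x3 grid
print(solve(grid, min_cost_algebra))  # 7: cheapest path 1-1-3-1-1
```

Swapping the algebra yields a new analysis without touching the tested recurrence, which is the reuse property the products in Bellman's GAP generalize.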

14.
Current implementation techniques for functional languages differ considerably from those for logic languages. This complicates the development of flexible and efficient abstract machines that can be used for the compilation of declarative languages combining concepts of functional and logic programming. We propose an abstract machine, called the JUMP-machine, which systematically integrates the operational concepts needed to implement the functional and logic programming paradigms. The use of a tagless representation for heap objects, which originates from the Spineless Tagless G-machine, supports the integration of the different concepts. In this paper, we provide a functional logic kernel language and show how to translate it into the abstract machine language of the JUMP-machine. Furthermore, we define the operational semantics of the machine language formally and discuss the mapping of the abstract machine to concrete machine architectures. We tested the approach by writing a compiler for the functional logic language GTML. The performance results obtained indicate that the proposed method makes it possible to implement functional logic languages efficiently.

15.
In object-oriented programming languages, the Visitor design pattern allows separation of algorithms and data structures. When applying this pattern to tree-like structures, programmers are always confronted with the difficulty of making their code evolve. One reason is that the code implementing the algorithm is interwoven with the code implementing the traversal inside the visitor. When implementing algorithms such as data analyses or transformations, encoding the traversal directly into the algorithm turns out to be cumbersome, as this type of algorithm focuses on only a small part of the data-structure model (e.g., program optimization). Unfortunately, typed programming languages like Java do not offer simple solutions for expressing generic traversals. Rewrite-based languages like ELAN or Stratego have introduced the notion of strategies to express both generic traversal and rule-application control in a declarative way. Starting from this approach, our goal was to make the notion of strategic programming available in a widely used language such as Java and thus to offer generic traversals over typed Java structures. In this paper, we present the strategy language SL, which provides programming support for strategies in Java.
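The core idea of strategy languages is that the traversal and the rewrite rule are separate, composable values: the same rule can be applied top-down, bottom-up, once, or everywhere. A small Python sketch of a bottom-up traversal combinator over nested tuples (the rule and tree encoding are invented for illustration; SL itself works on typed Java structures):

```python
# A tree is a leaf value or a tuple (node_label, child, child, ...).
def bottomup(rule):
    """Traversal combinator: apply `rule` to every node, children first.
    The traversal is written once; algorithms only supply the rule."""
    def walk(tree):
        if isinstance(tree, tuple):
            label, *children = tree
            tree = (label, *[walk(c) for c in children])
        return rule(tree)
    return walk

# A rewrite rule, separate from any traversal: constant-fold additions.
def fold_add(node):
    if (isinstance(node, tuple) and node[0] == "+" and
            all(isinstance(c, int) for c in node[1:])):
        return sum(node[1:])
    return node

expr = ("+", ("+", 1, 2), ("*", ("+", 3, 4), 5))
print(bottomup(fold_add)(expr))   # ('+', 3, ('*', 7, 5))
```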

16.
17.
Parallel languages allow the programmer to express parallelism at a high level. The management of parallelism and the generation of interprocessor communication are left to the compiler and the runtime system. This approach to parallel programming is particularly attractive if a suitable, widely accepted parallel language is available. High Performance Fortran (HPF) has emerged as the first popular machine-independent parallel language, and remarkable progress has been made towards compiling HPF efficiently. However, the performance of HPF programs is often poor and unpredictable, and obtaining adequate performance is a major stumbling block that must be overcome if HPF is to gain widespread acceptance. The programmer is often in the dark about how to improve the performance of an HPF program, since poor performance can be attributed to a variety of reasons, including poor choice of algorithm, limited use of parallelism, or an inefficient data mapping. This paper presents a profiling tool that allows the programmer to identify the regions of the program that execute inefficiently and to focus on the potential causes of poor performance. The central idea is to distinguish the code that is executing efficiently from the code that is executing poorly. Efficient code uses all processors of a parallel system to make progress, while inefficient code causes processors to wait, execute replicated code, idle, communicate, or perform compiler bookkeeping. We designate the latter code as non-scalable, since adding more processors generally does not lead to improved performance for such code. By analogy, the former code is called scalable. The tool presented here separates a program into scalable and non-scalable components and identifies the causes of non-scalability of the different components. We show that compiler information is the key to dividing the execution times into logical categories that are meaningful to the programmer. We present the design and implementation of a profiler that is integrated with Fx, a compiler for a variant of HPF. The paper includes two examples that demonstrate how the data reported by the profiler are used to identify and resolve performance bugs in parallel programs.

18.
Ken Slonneger. Software, 1993, 23(12): 1379-1397
Several authors have suggested translating denotational semantics into prototype interpreters written in high-level programming languages to provide evaluation tools for language designers. These implementations have generally been understandable when restricted to direct denotational semantics. This paper considers using two declarative programming languages, Prolog and Standard ML, to implement an interpreter that follows the continuation semantics of a small imperative programming language, called Gull. Each of the two declarative languages presents certain difficulties related to evaluation strategies and expressiveness. The implementations are compared in terms of their ease of use for prototyping, their resemblance to the denotational definitions, and their efficiency.
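For readers unfamiliar with the style: continuation semantics threads "the rest of the computation" through every construct. Below is a compact continuation-passing interpreter for a small imperative fragment, written here in Python rather than Prolog or Standard ML; the AST encoding is ours and Gull's syntax is not reproduced:

```python
# Statements:  ("assign", var, expr) | ("seq", s1, s2) | ("while", expr, body)
# Expressions: int literal | variable name | ("-", e1, e2)
def eval_expr(expr, env):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, e1, e2 = expr                 # only "-" is needed for this sketch
    return eval_expr(e1, env) - eval_expr(e2, env)

def exec_stmt(stmt, env, k):
    """Execute stmt, then pass the resulting store to continuation k."""
    tag = stmt[0]
    if tag == "assign":
        _, var, expr = stmt
        return k({**env, var: eval_expr(expr, env)})
    if tag == "seq":
        _, s1, s2 = stmt
        # The continuation of s1 is "run s2, then do k".
        return exec_stmt(s1, env, lambda env1: exec_stmt(s2, env1, k))
    if tag == "while":
        _, cond, body = stmt
        def loop(env1):
            if eval_expr(cond, env1) != 0:
                return exec_stmt(body, env1, loop)   # loop is its own continuation
            return k(env1)
        return loop(env)
    raise ValueError(tag)

prog = ("seq", ("assign", "n", 3),
               ("while", "n", ("assign", "n", ("-", "n", 1))))
print(exec_stmt(prog, {}, lambda env: env))   # {'n': 0}
```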

19.
Batson, A. Computer, 1976, 9(11): 21-26
No application programmer writes machine-language programs, i.e., strings of ones and zeroes. That primitive pursuit has long been reserved for the few who create the very first modules of a software system for new hardware. Instead, programmers make use of a wide spectrum of symbolic programming languages, ranging from assembly code to high-level languages such as Fortran, Cobol, and the Algol family. Every programming language has semantics that define some abstract machine. For the assembly-language programmer this machine bears a great resemblance to the actual hardware on which the program will be interpreted, but even here the programmer will frequently use system-defined subroutines or macros that represent extensions of the base hardware facilities. The high-level language programmer's abstract machine reflects the control mechanisms and data structures characteristic of the language. The Fortran programmer, for example, can think in terms of multidimensional array structures, DO loops, subprogram facilities, and so on. In principle he need never be concerned with the manner in which his abstract Fortran machine is to be realized by a particular hardware and software system. The user of a modern electronic hand calculator needs no knowledge of the works inside the box, and a modern high-level language system should present to its users an equally consistent environment, completely defined in terms of the syntax and semantics of the source language.

20.
Grids are arrays that can have any shape. Grid elements need not be contiguous; i.e., grids can have holes in them. For example, grids can be trapezoidal, parabolic, rectangular with a hole, or pyramid-like. Programs written using grids and the grid iteration statement are more general than those written using arrays and/or functions to simulate non-array shapes. To alter an existing program to work for another grid shape, one need only modify the grid declaration suitably, leaving the rest of the program intact. Programs are smaller, semantically clearer, and have a more natural problem representation. Grids may also be used to represent sparse matrices. Using Fortran as the base language, we show by means of examples from numerical analysis and engineering that the use of grids simplifies programming considerably. Efficient ways of implementing grids are discussed. Grids have been implemented as an extension to Fortran.
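The grid abstraction can be approximated with a shape predicate: storage and iteration consult the predicate, so adapting a program to a new shape means changing only the declaration. A minimal sketch in Python rather than the paper's Fortran extension:

```python
class Grid:
    """A grid is an index set defined by a shape predicate; elements
    exist only where the predicate holds, so grids may have holes."""
    def __init__(self, rows, cols, in_shape):
        self.in_shape = in_shape
        self.cells = {(i, j): 0.0
                      for i in range(rows) for j in range(cols)
                      if in_shape(i, j)}
    def __iter__(self):                 # plays the role of the grid iteration statement
        return iter(sorted(self.cells))
    def __getitem__(self, ij):
        return self.cells[ij]
    def __setitem__(self, ij, v):
        if ij not in self.cells:
            raise IndexError(f"{ij} lies outside the grid shape")
        self.cells[ij] = v

# Adapting a program to a new shape means changing only the predicate:
trapezoid = Grid(4, 7, lambda i, j: i <= j <= 6 - i)   # trapezoidal grid
with_hole = Grid(5, 5, lambda i, j: (i, j) != (2, 2))  # rectangle with a hole

for i, j in trapezoid:          # visits only cells inside the shape
    trapezoid[i, j] = i + j
print(len(trapezoid.cells), "trapezoid cells;",
      len(with_hole.cells), "cells around the hole")
```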
