Similar documents
20 similar documents found (search time: 15 ms)
1.
Keith E. Gorlen 《Software》1987,17(12):899-922
The Object-Oriented Program Support (OOPS) class library is a portable collection of classes similar to those of Smalltalk-80 that has been developed using the C++ programming language under the UNIX operating system. The OOPS library includes generally useful data types, such as String, Date and Time, and most of the Smalltalk-80 collection classes such as OrderedCtn (indexed arrays), LinkedList (singly linked lists), Set (hash tables), and Dictionary (associative arrays). Arbitrarily complex data structures composed of OOPS and user-defined objects can be stored on disk files or moved between UNIX processes by means of an object I/O facility. The classes Process, Scheduler, Semaphore and SharedQueue provide multiprogramming with coroutines. This paper gives a brief introduction to object-oriented programming and how it is supported by the C++ programming language. An overview of the OOPS library is also presented, followed by a programming example. The implementation details of two of the class library's more interesting features, object I/O and processes, are described. The paper concludes with a discussion of the differences between the OOPS library and Smalltalk-80 and some observations based on our programming experience with C++ and OOPS.
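For readers unfamiliar with the programming style such a library encourages, the short C++ sketch below uses modern standard-library containers as stand-ins for the OOPS collection classes; it is purely illustrative and does not use the actual OOPS class interfaces.

// Illustrative only: standard-library analogues of the OOPS collection
// classes (Dictionary, Set, OrderedCtn); not the OOPS API itself.
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

int main() {
    std::map<std::string, std::string> dictionary;            // cf. OOPS Dictionary (associative array)
    dictionary["UNIX"] = "operating system";
    dictionary["C++"] = "programming language";

    std::set<std::string> keywords{"class", "object", "message"};  // cf. OOPS Set (hash table)

    std::vector<int> ordered{3, 1, 4, 1, 5};                   // cf. OOPS OrderedCtn (indexed array)
    ordered.push_back(9);

    for (const auto& [key, value] : dictionary)
        std::cout << key << " -> " << value << '\n';
    std::cout << "keywords: " << keywords.size()
              << ", ordered elements: " << ordered.size() << '\n';
}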

2.

Context

Writing software for the current generation of parallel systems requires significant programmer effort, and the community is seeking alternatives that reduce effort while still achieving good performance.

Objective

Measure the effect of parallel programming models (message-passing vs. PRAM-like) on programmer effort.

Design, setting, and subjects

One group of subjects implemented sparse-matrix dense-vector multiplication using message-passing (MPI), and a second group solved the same problem using a PRAM-like model (XMTC). The subjects were students in two graduate-level classes: one class was taught MPI and the other was taught XMTC.

Main outcome measures

Development time, program correctness.

Results

Mean XMTC development time was 4.8 h less than mean MPI development time (95% confidence interval, 2.0-7.7), a 46% reduction. XMTC programs were more likely to be correct, but the difference in correctness rates was not statistically significant (p = .16).

Conclusions

XMTC solutions for this particular problem required less effort than MPI equivalents, but further studies that examine different types of problems and different levels of programmer experience are needed.
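For context, the sketch below shows roughly what a message-passing solution to the study's task looks like: a sparse matrix in CSR form with its rows block-distributed across MPI ranks and the dense vector replicated. The data layout, values, and names are illustrative assumptions, not the study's materials.

// Minimal MPI sketch (illustrative): each rank owns a block of CSR rows and a
// replicated dense vector x, computes its partial y, and gathers the result.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Toy local data: every rank owns two rows of an identity-like matrix.
    const int local_rows = 2;
    std::vector<int>    row_ptr = {0, 1, 2};                 // CSR row pointers
    std::vector<int>    col_idx = {2 * rank, 2 * rank + 1};  // CSR column indices
    std::vector<double> val     = {1.0, 1.0};                // CSR values
    std::vector<double> x(2 * size, 1.0);                    // dense vector, replicated

    std::vector<double> y_local(local_rows, 0.0);
    for (int i = 0; i < local_rows; ++i)
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            y_local[i] += val[k] * x[col_idx[k]];

    std::vector<double> y(2 * size, 0.0);
    MPI_Gather(y_local.data(), local_rows, MPI_DOUBLE,
               y.data(), local_rows, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("y[0] = %g, y[%d] = %g\n", y[0], 2 * size - 1, y[2 * size - 1]);
    MPI_Finalize();
    return 0;
}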

3.
The Generate-Test-Aggregate (GTA for short) algorithm follows a simple and straightforward programming pattern for combinatorial problems. First, generate all candidates; second, test them and filter out invalid ones; finally, aggregate the valid ones to produce the final result. These three processing steps can be specified by three building blocks, namely a generator, a tester, and an aggregator. Despite the simplicity of the algorithm design, implementing the GTA algorithm naively by following the three processing steps, i.e., by brute force, results in an exponential-cost computation and is thus impractical for processing large data. The theory of GTA shows that if the definitions of the generator, tester, and aggregator satisfy certain conditions, an efficient (usually near-linear-cost) MapReduce program can be derived automatically from the GTA algorithm.
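The following is a minimal brute-force sketch of the GTA pattern in C++ for a toy knapsack-style problem (the problem and names are chosen purely for illustration; this is the naive exponential version, not the automatically derived MapReduce program).

// Illustrative brute-force GTA sketch: generate all subsets of a small list,
// test a validity predicate, and aggregate the valid ones -- exponential cost,
// exactly as the abstract warns.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const std::vector<int> items = {3, 7, 2, 8, 5};
    const int capacity = 12;                                   // hypothetical constraint
    const int n = static_cast<int>(items.size());

    int best = 0;                                              // aggregator state (maximum)
    for (unsigned mask = 0; mask < (1u << n); ++mask) {        // generator: all subsets
        int sum = 0;
        for (int i = 0; i < n; ++i)
            if (mask & (1u << i)) sum += items[i];
        if (sum <= capacity)                                   // tester: keep valid candidates
            best = std::max(best, sum);                        // aggregator: take the maximum
    }
    std::printf("best feasible subset sum = %d\n", best);
    return 0;
}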

4.
We present a parallel algorithm for computing an optimal sequence alignment in efficient space. The algorithm is intended for a message-passing architecture with a one-dimensional-array topology. The algorithm computes an optimal alignment of two sequences of lengths M and N in O((M+N)²/P) time and O((M+N)/P) space per processor, where the number of processors is P ≥ max(M, N). Thus, when P = max(M, N) it achieves linear speedup and requires constant space per processor. Some experimental results on an Intel hypercube are provided. This research was supported by NIH Grant LM05110 from the National Library of Medicine.
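The parallel distribution over a processor array is beyond a short excerpt, but the dynamic-programming recurrence that such alignment algorithms evaluate can be sketched sequentially as follows (the scoring values are illustrative; this is not the paper's space-efficient parallel algorithm).

// Sequential sketch of the global-alignment DP that the paper parallelizes
// (Needleman-Wunsch scoring only; the score parameters are illustrative).
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

int alignment_score(const std::string& a, const std::string& b) {
    const int match = 2, mismatch = -1, gap = -2;              // illustrative scores
    const size_t M = a.size(), N = b.size();
    std::vector<std::vector<int>> S(M + 1, std::vector<int>(N + 1, 0));
    for (size_t i = 1; i <= M; ++i) S[i][0] = static_cast<int>(i) * gap;
    for (size_t j = 1; j <= N; ++j) S[0][j] = static_cast<int>(j) * gap;
    for (size_t i = 1; i <= M; ++i)
        for (size_t j = 1; j <= N; ++j)
            S[i][j] = std::max({S[i-1][j-1] + (a[i-1] == b[j-1] ? match : mismatch),
                                S[i-1][j] + gap,
                                S[i][j-1] + gap});
    return S[M][N];
}

int main() {
    std::printf("score = %d\n", alignment_score("GATTACA", "GCATGCU"));
    return 0;
}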

5.
A truly parallel logic programming system is proposed. The system is based on the commercially available parallel logic programming language STRAND, which has been extended in order to overcome the inherent limitations of such systems, such as the restriction to AND-parallelism, the lack of backtracking, and limited unification. The system has been tested using an example from the area of natural language processing.

6.
In C++, multi-dimensional arrays are often used but the language provides limited native support for them. The language, in its Standard Library, supplies sophisticated interfaces for manipulating sequential data, but relies on its bare-bones C heritage for arrays. The MultiArray library, a part of the Boost library collection, enhances a C++ programmer's tool set with versatile multi-dimensional array abstractions. It includes a general array class template and native array adaptors that support idiomatic array operations and interoperate with C++ Standard Library containers and algorithms. The arrays share a common interface, expressed as a generic programming concept, in terms of which generic array algorithms can be implemented. We present the library design, introduce a generic interface for array programming, demonstrate how the arrays integrate with the C++ Standard Library, and discuss the essential aspects of their implementation. Copyright © 2004 John Wiley & Sons, Ltd.
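A minimal Boost.MultiArray usage sketch follows (assuming Boost is installed; the dimensions and values are arbitrary).

// Minimal boost::multi_array usage: declare a 3x4x2 array and access elements
// with the idiomatic A[i][j][k] syntax.
#include <boost/multi_array.hpp>
#include <cstdio>

int main() {
    using array3 = boost::multi_array<double, 3>;
    array3 A(boost::extents[3][4][2]);                          // 3x4x2 array

    for (array3::index i = 0; i < 3; ++i)
        for (array3::index j = 0; j < 4; ++j)
            for (array3::index k = 0; k < 2; ++k)
                A[i][j][k] = 100.0 * i + 10.0 * j + k;

    std::printf("A[2][3][1] = %g\n", A[2][3][1]);
    return 0;
}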

7.
A major problem for the integration of concurrency in object-oriented languages is the so-called inheritance anomaly, i.e. the conflicts between inheritance and concurrency that often force inherited methods to be redefined in order to maintain the integrity of objects. Several solutions have been proposed for resolving these conflicts. However, some of them are incomplete and do not solve all types of inheritance anomaly; others make the definition of classes complex and/or their implementation inefficient. This paper describes a C++ library for concurrent programming that provides a comprehensive framework particularly suitable for coarse-grained distributed applications. This library copes with the inheritance anomaly problem, presenting a solution that minimizes the redefinition of inherited methods without increasing the complexity of writing them. This solution is based on the use of a special set of methods, called interface methods, composed of a body and two sets of synchronization constraints. These two sets of synchronization constraints are used, respectively, to enable the execution of the method body and to disable the methods that cannot be executed after it. © 1998 John Wiley & Sons, Ltd.
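The paper's interface-method notation is specific to its library, but the flavour of attaching synchronization constraints to methods can be suggested with a plain standard-library bounded buffer, where each method waits until its enabling condition holds (an illustrative sketch only, not the library described in the paper).

// Illustrative sketch of per-method synchronization constraints using only the
// C++ standard library; not the paper's interface-method mechanism.
#include <condition_variable>
#include <cstdio>
#include <deque>
#include <mutex>
#include <thread>

class BoundedBuffer {
public:
    explicit BoundedBuffer(std::size_t capacity) : capacity_(capacity) {}

    void put(int v) {                                // constraint: enabled when not full
        std::unique_lock<std::mutex> lock(m_);
        not_full_.wait(lock, [&] { return q_.size() < capacity_; });
        q_.push_back(v);
        not_empty_.notify_one();
    }
    int get() {                                      // constraint: enabled when not empty
        std::unique_lock<std::mutex> lock(m_);
        not_empty_.wait(lock, [&] { return !q_.empty(); });
        int v = q_.front();
        q_.pop_front();
        not_full_.notify_one();
        return v;
    }
private:
    std::size_t capacity_;
    std::deque<int> q_;
    std::mutex m_;
    std::condition_variable not_full_, not_empty_;
};

int main() {
    BoundedBuffer buf(4);
    std::thread producer([&] { for (int i = 0; i < 8; ++i) buf.put(i); });
    std::thread consumer([&] { for (int i = 0; i < 8; ++i) std::printf("%d ", buf.get()); });
    producer.join();
    consumer.join();
    std::printf("\n");
    return 0;
}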

8.
9.
This paper advocates a configuration approach to parallel programming for distributed memory multicomputers, in particular, arrays of transputers. The configuration approach prescribes the rigorous separation of the logical structure of a program from its component parts. In the context of parallel programs, components are processes which communicate by exchanging messages. The configuration defines the instances of these processes which exist in the program and the paths by which they are interconnected.

The approach is demonstrated by a toolset (Tonic) that embodies the configuration paradigm. A separate configuration language is used to describe both the logical structure of the parallel program and the physical structure of the target multicomputer. Different logical-to-physical mappings can be obtained by applying different physical configurations to the same logical configuration. The toolset has been developed from the Conic system for distributed programming. The use of the toolset is illustrated through its application to the development of a parallel program to compute Mandelbrot sets.


10.
Large-scale scientific and engineering computation problems are usually complex, and consequently developing parallel programs to solve them is a difficult task. In this paper, we describe the graph-oriented programming (GOP) model and environment for building and evaluating parallel applications. The GOP model provides higher-level abstractions for message-passing parallel programming, and the software environment offers tools that ease the parallelization, writing, and deployment of scientific and engineering computing applications. We discuss the motivations and various issues in developing the model and the software environment, present the design of the system architecture and its components, and describe the evaluation of the environment, implemented on top of MPI, with a sample parallel scientific application program. With the support of the high-level abstractions provided by the proposed GOP environment, programming of parallel applications on various parallel architectures can be greatly simplified.

11.
Marco Vanneschi   《Parallel Computing》2002,28(12):595-1732
A software development system based upon integrated skeleton technology (ASSIST) is a proposal for a new programming environment oriented towards the development of parallel and distributed high-performance applications according to a unified approach. The main goals are: high-level programmability and software productivity for complex multidisciplinary applications, including data-intensive and interactive software; performance portability across different platforms, in particular large-scale platforms and grids; effective reuse of parallel software; efficient evolution of applications through versions that scale according to the underlying technologies.

The purpose of this paper is to show the principles of the proposed approach in terms of the programming model (successive papers will deal with the environment implementation and with performance evaluation). The features and the characteristics of the ASSIST programming model are described according to an operational semantics style and using examples to drive the presentation, to show the expressive power and to discuss the research issues.

Based on our previous experience in structured parallel programming, in ASSIST we wish to overcome some limitations of the classical skeletons approach in order to improve generality and flexibility, expressive power and efficiency for irregular, dynamic and interactive applications, as well as for complex combinations of task and data parallelism. A new paradigm, called “parallel module” (parmod), is defined which, in addition to expressing the semantics of several skeletons as particular cases, is able to express more general parallel and distributed program structures, including both data-flow and nondeterministic reactive computations. ASSIST allows the programmer to design applications in the form of generic graphs of parallel components. Another distinguishing feature is that ASSIST modules are able to utilize external objects, including shared data structures and abstract objects (e.g. CORBA), with standard interfacing mechanisms. In turn, an ASSIST application can be reused and exported as a component for other applications, possibly expressed in different formalisms.


12.
PARC++ is a system that supports object-oriented parallel programming in C++. PARC++ provides the user with a set of predefined C++ classes that can easily be used for the construction of parallel C++ programs. With the help of PARC++ objects, the programmer is able to create and start new processes (threads), to synchronize their activities (Blocklock, Monitor) and to manage communication via message passing (Mailbox). PARC++ is written in C++ and currently runs on top of the EMEX operating system on a FORCE machine with 11 processing elements and an EDS (European Declarative System) with 28 processing elements. The paper also contains information about the run-time system model, the implementation and some performance measurements.

13.
A macro package for expressing message-passing functions within parallel FORTRAN programs is presented. It makes user programs fully portable among all parallel computers on which the macros are implemented. The implementation on the Intel iPSC/2 hypercube is discussed in more detail. New message-passing primitives have been added to the iPSC/2 operating system, offering the user broader functionality at no loss of efficiency. The full macro set, using these primitives, delivers the same performance as the original Intel primitives.

14.
This paper presents the tuple channel model (TCM), a new coordination model for parallel and distributed programming. Our proposal is based on the use of tuple channels (TCs) to model the communication and synchronization of different activities. TCs are multi-point channels that allow complex data structures to be communicated among multiple producers and consumers. This communication model allows incremental and backward communication to be expressed, providing an elegant way to achieve implicit and direct communication and reactive control. TCs can be dynamically interconnected through the use of user-defined connectors, providing great flexibility for the definition of complex and dynamic interaction protocols. TCM also provides a simple service-management mechanism, by means of which open systems can be implemented in an appropriate way. The suitability, expressiveness and programming techniques of the model are presented by means of some illustrative examples. In addition, some implementation details of the developed prototypes are sketched, and preliminary results demonstrating the efficiency of the proposal are shown.

15.
This paper introduces NiHu, a C++ template library for boundary element methods (BEM). The library is capable of computing the coefficients of discretised boundary integral operators in a generic way with arbitrarily defined kernels and function spaces. NiHu’s template core defines the workflow of a general BEM algorithm independent of the specific application. The core provides expressive syntax, based on the operator notation of the BEM, reflecting the mathematics behind boundary elements in the C++ source code. The customisable Component library contains elements specific to particular applications such as different numerical integration techniques and regularisation methods. The library can be used for creating a standalone C++ application using external open source libraries, or compiling a Matlab toolbox through the MEX interface. By massively exploiting C++ template metaprogramming, NiHu generates optimised codes for specific applications, including heterogeneous problems. The paper introduces the main concepts of the novel development, demonstrates its versatility and flexibility and compares the implementation’s performance to that of other open source projects.

16.
From patterns to frameworks to parallel programs
Object-oriented programming, design patterns, and frameworks are abstraction techniques that have been used to reduce the complexity of sequential programming. This paper describes our approach to applying these three techniques to the more difficult parallel programming domain. The Parallel Design Patterns (PDP) process, the basis of the CO2P3S parallel programming system, combines these techniques in a layered development model. The result is a new approach to parallel programming that addresses correctness and openness in a unique way. At the topmost development layer, a customized framework is generated from a design pattern specification of the parallel structure of the program. This framework encapsulates all of the structural details of the pattern, including communication and synchronization, to prevent programmer errors and ensure correctness. Lower layers are used only for performance tuning, to make the code as efficient as necessary. This paper describes CO2P3S, based on the PDP process, and demonstrates it using an example application. We also provide results from a usability study of CO2P3S.

17.
We present a method to derandomize RNC algorithms, converting them to NC algorithms. Using it, we show how to approximate a class of NP-hard integer programming problems in NC, to within factors better than the current-best NC algorithms (of Berger and Rompel, and Motwani et al.); in some cases, the approximation factors are as good as those of the best-known sequential algorithms, due to Raghavan. This class includes problems such as global wire-routing in VLSI gate arrays and a generalization of telephone network planning in SONET rings. Also, for a subfamily of the “packing” integer programs, we provide the first NC approximation algorithms; this includes problems such as maximum matchings in hypergraphs, and generalizations. The key to the utility of our method is that it involves sums of superpolynomially many terms, which can nevertheless be computed in NC; this superpolynomiality is the bottleneck for some earlier approaches, due to Berger and Rompel, and Motwani et al. A preliminary version of this work appeared in Proc. International Colloquium on Automata, Languages and Programming, 1996, pages 562–573. Work done in parts at DIMACS (supported in part by NSF-STC91-19999 and by support from the N.J. Commission on Science and Technology), at the Institute for Advanced Study, Princeton (supported in part by Grant 93-6-6 of the Alfred P. Sloan Foundation), and at the National University of Singapore.

18.
N. H. Gehani  W. D. Roome 《Software》1988,18(12):1157-1177
C++ and Concurrent C are both upward-compatible supersets of C that provide data abstraction and parallel programming facilities, respectively. Although data abstraction facilities are important for writing concurrent programs, we did not provide data abstraction facilities in Concurrent C because we did not want to duplicate the C++ research effort. Instead, we decided that we would eventually integrate C++ and Concurrent C facilities to produce a language with both data abstraction and parallel programming facilities, namely, Concurrent C++. Data abstraction and parallel programming facilities are orthogonal. Despite this, the merger of Concurrent C and C++ raised several integration issues. In this paper, we will give introductions to C++ and Concurrent C, give two examples illustrating the advantages of using data abstraction facilities in concurrent programs, and discuss issues in integrating C++ and Concurrent C to produce Concurrent C++.

19.
Mixed-language programming in Visual C++ based on dynamic link libraries
To allow different programming languages to complement one another's strengths, this paper explores the many uses of dynamic link libraries (DLLs) in Visual C++ mixed-language programming. In the context of the Visual C++ programming environment, the technical characteristics of DLLs and the ways of calling them are analysed. Based on DLL technology, the paper discusses how Visual C++ can be combined with Visual FORTRAN, MATLAB and Visual C#, respectively, and illustrates each case with example code. Experimental results and analysis demonstrate the advantages of mixed-language programming with DLLs.
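As a rough illustration of the DLL-based approach (the file and function names below are hypothetical, not taken from the paper), a function can be exported from a Visual C++ DLL and loaded at run time through the Win32 API.

// --- mathdll.cpp: built as a DLL (illustrative; names are hypothetical) ---
extern "C" __declspec(dllexport) double add_numbers(double a, double b) {
    return a + b;
}

// --- client.cpp: loads the DLL at run time via the Win32 API ---
#include <windows.h>
#include <cstdio>

typedef double (*AddFn)(double, double);

int main() {
    HMODULE dll = LoadLibraryA("mathdll.dll");                 // hypothetical DLL name
    if (!dll) { std::printf("could not load DLL\n"); return 1; }
    AddFn add = reinterpret_cast<AddFn>(GetProcAddress(dll, "add_numbers"));
    if (add) std::printf("2 + 3 = %g\n", add(2.0, 3.0));
    FreeLibrary(dll);
    return 0;
}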

20.
A study of the exception handling mechanism of the C++ language
裘宗燕 《计算机科学》2003,30(11):155-156
This paper makes a detailed investigation of the exception handling mechanism of C++, discusses a number of design and implementation problems, and offers suggestions on its use.
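As a brief reminder of the mechanism under discussion, the following small C++ program throws and catches an exception and shows stack unwinding releasing a resource through a destructor (the file name is arbitrary).

// Small illustration of C++ exception handling: throw, catch by reference,
// and stack unwinding releasing a resource via a destructor (RAII).
#include <cstdio>
#include <stdexcept>

struct FileGuard {
    explicit FileGuard(const char* name) : f_(std::fopen(name, "w")) {}
    ~FileGuard() { if (f_) std::fclose(f_); std::puts("file closed by destructor"); }
    std::FILE* f_;
};

double safe_divide(double a, double b) {
    if (b == 0.0)
        throw std::invalid_argument("division by zero");
    return a / b;
}

int main() {
    try {
        FileGuard log("example.log");                          // released during unwinding
        std::printf("10 / 2 = %g\n", safe_divide(10.0, 2.0));
        std::printf("%g\n", safe_divide(1.0, 0.0));            // throws
    } catch (const std::exception& e) {                        // catch by const reference
        std::printf("caught exception: %s\n", e.what());
    }
    return 0;
}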
