Similar Literature
A total of 20 similar documents were found.
1.
2.
Distributed systems are an alternative to shared-memory multiprocessors for the execution of parallel applications. Panda is a run-time system that provides architectural support for efficient parallel and distributed programming. It supplies fast user-level threads and a means for transparent and coordinated sharing of objects across a homogeneous network. The paper motivates the major architectural choices that guided our design. The problem of sharing data in a distributed environment is discussed, and the performance of the mechanisms provided by the Panda prototype implementation is assessed.

3.
Recurrence formulations for various problems, such as finding an optimal order of matrix multiplication, finding an optimal binary search tree, and optimal triangulation of polygons, assume a similar form. A. Gibbons and W. Rytter (1988) gave a CREW PRAM algorithm to solve such dynamic programming problems. The algorithm uses O(n⁶/log n) processors and runs in O(log² n) time. In this article, a modified algorithm is presented that reduces the processor requirement to O(n⁶/log⁵ n) while maintaining the same time complexity of O(log² n).
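To make the recurrence class concrete, the sketch below gives the standard serial matrix-chain formulation in C; it illustrates only the kind of recurrence being parallelized, not the CREW PRAM algorithm of the paper, and the matrix dimensions are arbitrary example values.

```c
/* Serial form of the recurrence class cited above (optimal matrix-chain
 * multiplication):
 *   m[i][j] = min over i <= k < j of m[i][k] + m[k+1][j] + p[i-1]*p[k]*p[j]
 */
#include <stdio.h>
#include <limits.h>

#define N 5  /* number of matrices; the dimension array has N+1 entries */

int main(void) {
    /* matrix A_i has dimensions p[i-1] x p[i] (example values) */
    long p[N + 1] = {30, 35, 15, 5, 10, 20};
    long m[N + 1][N + 1] = {0};

    for (int len = 2; len <= N; len++) {          /* chain length */
        for (int i = 1; i + len - 1 <= N; i++) {  /* chain start  */
            int j = i + len - 1;
            m[i][j] = LONG_MAX;
            for (int k = i; k < j; k++) {         /* split point  */
                long cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                if (cost < m[i][j]) m[i][j] = cost;
            }
        }
    }
    printf("minimum scalar multiplications: %ld\n", m[1][N]);
    return 0;
}
```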

4.
5.
Factors driving the growth of parallel computing and those inhibiting its growth are identified. The impact of advances in communication, VLSI technology, and embedded systems on parallel architecture is discussed. The two categories of parallel programs, transformational and reactive, are examined. The use of templates and libraries is considered. Reasoning about parallel processes in the context of software design is discussed. Programming environments and operating-system support are addressed.

6.
Formal properties of logic languages have been widely studied; however, their impact on the practice of software design and programming is currently minimal. In this paper we survey some interesting representatives of the family of logic languages, aiming to compare the different capabilities they offer for designing and programming parallel systems. The logic languages Prolog, Aurora, Flat Concurrent Prolog, Parlog, GHC, and DeltaProlog were chosen because a suitable set of relevant examples has been published, mostly by the language designers themselves. A number of sample programs are used to expose and compare the languages with respect to their object-oriented programming capabilities for multiprocess coordination, interprocess communication, and resource management. Special attention is also devoted to metaprogramming, seen as a useful technique for specifying and building the operating environments of the languages themselves. The paper ends with a discussion of the positive and negative features found in comparing these languages, and indicates some guidelines to be followed in the design of new logic languages.

7.
Reeves, A.P. IEEE Software, 1991, 8(6): 51-59
Two Unix environments developed for programming parallel computers to handle image-processing and vision applications are described. Visx is a portable environment for the development of vision applications that has been used for many years on serial computers in research. Visx was adapted to run on a multiprocessor with modest parallelism by using functional decomposition and standard operating-system capabilities to exploit the parallel hardware. Paragon is a high-level environment for multiprocessor systems that has facilities for both functional decomposition and data partitioning. It provides primitives that will work efficiently on several parallel-processing systems. Paragon's primitives can be used to build special image-processing operations, allowing users to grow their own programming environments naturally.

8.
Parallel programming for multimedia applications
Computing capabilities are continuing to increase with the availability of multi-core and many-core processors. The wide availability of multi-core processors has made parallel programming possible for end-user applications running on desktops, workstations, and mobile devices. While parallel hardware has become common, software that exploits parallel capabilities is just beginning to take hold. Multimedia applications, with their data-parallel nature and large computing requirements, will benefit significantly from parallel programming. In this paper an overview of parallel programming is presented, and languages and tools for parallel programming, such as OpenMP and CUDA, are introduced within the scope of multimedia applications.
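As a minimal illustration of the kind of data-parallel multimedia kernel such tools target, the sketch below scales pixel brightness over a grayscale frame with an OpenMP loop; the image layout and gain value are assumptions for the example, not taken from the paper.

```c
/* Per-pixel brightness scaling over a grayscale image stored as a flat
 * byte buffer. Each pixel is independent, so the loop parallelizes
 * directly. Compile with OpenMP enabled, e.g. cc -fopenmp brighten.c
 */
#include <stdlib.h>

void scale_brightness(unsigned char *pixels, size_t n, float gain) {
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < (long)n; i++) {
        int v = (int)(pixels[i] * gain + 0.5f);
        pixels[i] = (unsigned char)(v > 255 ? 255 : v);
    }
}

int main(void) {
    size_t n = 1920UL * 1080UL;           /* one HD frame      */
    unsigned char *img = calloc(n, 1);    /* dummy black frame */
    if (!img) return 1;
    scale_brightness(img, n, 1.2f);       /* brighten by 20%   */
    free(img);
    return 0;
}
```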

9.
The Parallel Programming Interface for Distributed Data (PPIDD) library provides an interface, suitable for use in parallel scientific applications, that delivers communications and global data management. The library can be built either using the Global Arrays (GA) toolkit, or a standard MPI-2 library. This abstraction allows the programmer to write portable parallel codes that can utilise the best, or only, communications library that is available on a particular computing platform.

Program summary
Program title: PPIDD
Catalogue identifier: AEEF_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEF_1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 17 698
No. of bytes in distributed program, including test data, etc.: 166 173
Distribution format: tar.gz
Programming language: Fortran, C
Computer: Many parallel systems
Operating system: Various
Has the code been vectorised or parallelized?: Yes. 2–256 processors used
RAM: 50 Mbytes
Classification: 6.5
External routines: Global Arrays or MPI-2
Nature of problem: Many scientific applications require management and communication of data that is global, and the standard MPI-2 protocol provides only low-level methods for the required one-sided remote memory access.
Solution method: The Parallel Programming Interface for Distributed Data (PPIDD) library provides an interface, suitable for use in parallel scientific applications, that delivers communications and global data management. The library can be built either using the Global Arrays (GA) toolkit, or a standard MPI-2 library. This abstraction allows the programmer to write portable parallel codes that can utilise the best, or only, communications library that is available on a particular computing platform.
Running time: Problem dependent. The test provided with the distribution takes only a few seconds to run.
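The low-level one-sided MPI-2 access referred to under "Nature of problem" can be sketched as follows. This uses only standard MPI window calls and does not reproduce PPIDD's own API; each rank exposes a block of a "global" array and any rank can read a remote block.

```c
/* One-sided remote memory access with MPI-2 windows.
 * Build with an MPI compiler wrapper (e.g. mpicc) and run under mpirun.
 */
#include <stdio.h>
#include <mpi.h>

#define BLOCK 4   /* elements of the global array held by each rank */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local[BLOCK];
    for (int i = 0; i < BLOCK; i++)
        local[i] = rank * BLOCK + i;     /* this rank's slice of the global data */

    MPI_Win win;
    MPI_Win_create(local, BLOCK * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    double remote[BLOCK];
    int target = (rank + 1) % size;      /* fetch the next rank's slice */

    MPI_Win_fence(0, win);
    MPI_Get(remote, BLOCK, MPI_DOUBLE, target, 0, BLOCK, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    printf("rank %d read %g..%g from rank %d\n",
           rank, remote[0], remote[BLOCK - 1], target);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```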

10.
This paper develops some ideas expounded in [1]. It distinguishes a number of ways of using parallelism, including disjoint processes, competition, cooperation, and communication. In each case an axiomatic proof rule is given.
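As an illustration of the form such rules take (not necessarily the paper's exact formulation), the standard axiomatic rule for disjoint processes, i.e. processes sharing no variables, can be written as:

\[
\frac{\{P_1\}\; S_1 \;\{Q_1\} \qquad \{P_2\}\; S_2 \;\{Q_2\}}
     {\{P_1 \wedge P_2\}\; S_1 \parallel S_2 \;\{Q_1 \wedge Q_2\}}
\quad\text{provided } S_1 \text{ and } S_2 \text{ share no variables.}
\]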

11.
The two major design approaches taken to build distributed and parallel computer systems, multiprocessing and multicomputing, are discussed. A model that combines the best properties of both multiprocessor and multicomputer systems, easy-to-build hardware, and a conceptually simple programming model is presented. Using this model, a programmer defines and invokes operations on shared objects, the runtime system handles reads and writes on these objects, and the reliable broadcast layer implements indivisible updates to objects using the sequencing protocol. The resulting system is easy to program, easy to build, and has acceptable performance on problems with a moderate grain size in which reads are much more common than writes. Orca, a procedural language whose sequential constructs are roughly similar to those of C or Modula-2 but which also supports parallel processes and shared objects, is described; it has been used to develop applications for the prototype system.
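A hedged sketch of this shared-object model is given below. It is plain C, not Orca, and seq_broadcast() is a hypothetical stand-in for the reliable broadcast layer; the real runtime would deliver the update, in the same total order, at every replica.

```c
/* Shared-object model: reads are served locally, writes are funnelled
 * through a totally ordered (sequenced) broadcast so all replicas apply
 * them in the same order. Illustrative only.
 */
#include <stdio.h>

typedef struct {
    long value;                 /* replicated state of the shared object */
} SharedCounter;

/* Hypothetical reliable-broadcast hook; here it only updates the local replica. */
static void seq_broadcast(SharedCounter *obj, long delta) {
    obj->value += delta;
}

/* Write operation: indivisible update via the broadcast layer. */
void counter_add(SharedCounter *obj, long delta) {
    seq_broadcast(obj, delta);
}

/* Read operation: served from the local replica, no communication. */
long counter_read(const SharedCounter *obj) {
    return obj->value;
}

int main(void) {
    SharedCounter c = {0};
    counter_add(&c, 5);
    printf("counter = %ld\n", counter_read(&c));
    return 0;
}
```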

12.
This paper presents a parallel implementation in APL of an algorithm to set up a database for the KRK endgame in chess. It clearly shows the techniques necessary to achieve the parallelism and thereby proves that APL can be a valuable productivity-increasing aid in this kind of Artificial Intelligence (AI) research. The complete APL functions are given in the Appendix. Both reversed pigeon-hole and bit-map techniques are used. Move generation is table driven, with a new technique to cater for the blockage of sliding pieces such as a rook. In order to maintain parallelism, 'if' statements are avoided and extensive use is made of compression and identity elements.

13.
The paper is concerned with the design and implementation of a parallel dynamic programming algorithm for use in ship voyage management. The basic concepts are presented in terms of a simple model for weather routing. Other factors involved in voyage management, and their inclusion in a more comprehensive algorithm, are also discussed. The algorithms have been developed and implemented on a transputer-based distributed-memory parallel machine using the high-level communication harness CS Tools. Trial calculations over grids of up to 282 nodes have been carried out and the results are presented. Good speed-ups for the calculations have been attained, and the factors affecting the efficiency of the parallel computations are reviewed. These trial calculations indicate that a ship voyage management system based on parallel dynamic programming is likely to be beneficial.

14.
Motivated by biological inspiration and the issue of instruction disruption, we develop a new form of Linear Genetic Programming (LGP) called Parallel LGP (PLGP) for classification problems. PLGP programs consist of multiple lists of instructions. These lists are executed in parallel, after which the resulting vectors are combined to produce the classification result. PLGP limits the disruptive effects of crossover and mutation, which allows PLGP to significantly outperform regular LGP. Furthermore, PLGP programs are naturally suited to caching due to their parallel architecture. Although caching techniques have been used in tree-based GP, to our knowledge there are no caching techniques specifically developed for LGP. Thus, a novel caching technique is also developed with the intrinsic properties of PLGP in mind, which can decrease fitness evaluation time by almost an order of magnitude on the classification problems.
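The execution model can be sketched as follows; the instruction encoding and the summation used to combine the per-list vectors are simplifying assumptions for illustration, not the authors' exact representation.

```c
/* PLGP-style evaluation: a program is a set of independent instruction
 * lists, each run on its own register vector; the resulting vectors are
 * summed and the class with the largest combined output wins.
 */
#include <stdio.h>

#define LISTS     3   /* instruction lists per program       */
#define LIST_LEN  4   /* instructions per list               */
#define REGS      2   /* registers == number of classes here */

typedef struct { int dst, src; double k; } Instr;   /* r[dst] += k * r[src] */

static void run_list(const Instr *code, const double *features, double *out) {
    double r[REGS];
    for (int i = 0; i < REGS; i++) r[i] = features[i];   /* load inputs */
    for (int i = 0; i < LIST_LEN; i++)
        r[code[i].dst] += code[i].k * r[code[i].src];
    for (int i = 0; i < REGS; i++) out[i] = r[i];
}

int main(void) {
    Instr program[LISTS][LIST_LEN] = {{{0,1,0.5},{1,0,-0.2},{0,0,1.1},{1,1,0.9}},
                                      {{1,0,2.0},{0,1,0.3},{1,1,-0.4},{0,0,0.7}},
                                      {{0,0,1.5},{1,1,1.5},{0,1,-0.6},{1,0,0.2}}};
    double features[REGS] = {0.8, 0.1};
    double total[REGS] = {0};

    /* Each list is independent, so these calls could run in parallel and
     * per-list results could be cached across generations.              */
    for (int l = 0; l < LISTS; l++) {
        double out[REGS];
        run_list(program[l], features, out);
        for (int i = 0; i < REGS; i++) total[i] += out[i];
    }

    int cls = total[0] >= total[1] ? 0 : 1;
    printf("predicted class: %d\n", cls);
    return 0;
}
```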

15.
The standard DP (dynamic programming) algorithms are limited by the substantial computational demands they put on contemporary serial computers. In this work, the solution of serial monadic dynamic programming problems is examined, highlighting the theory and application of parallel dynamic programming on a general-purpose architecture (a cluster or network of workstations). A simple and well-known technique, message passing, is considered. Several parallel serial monadic DP algorithms are proposed, based on parallelization in the state variables and parallelization in the decision variables. Algorithms with no interpolation are also proposed. It is demonstrated how constraints introduce load imbalance, which affects scalability, and how this problem is inherent to DP.
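The idea of parallelizing in the state variables can be sketched as below: stages are processed sequentially, while the states within a stage are independent and can be distributed. OpenMP is used here purely for illustration (the paper itself uses message passing), and the transition cost is an arbitrary placeholder.

```c
/* Stage-by-stage DP with the state loop parallelized.
 * Compile with: cc -fopenmp dp_states.c -lm
 */
#include <stdio.h>
#include <math.h>

#define STAGES 8
#define STATES 1000

static double transition_cost(int stage, int from, int to) {
    return fabs(from - to) * 0.01 + stage * 0.001;   /* placeholder cost */
}

int main(void) {
    static double cost[2][STATES] = {{0}};   /* rolling best-cost table */
    int cur = 0;

    for (int stage = 1; stage < STAGES; stage++) {   /* stages: sequential */
        int nxt = 1 - cur;
        #pragma omp parallel for schedule(static)    /* states: parallel   */
        for (int s = 0; s < STATES; s++) {
            double best = 1e30;
            for (int p = 0; p < STATES; p++) {       /* min over predecessors */
                double c = cost[cur][p] + transition_cost(stage, p, s);
                if (c < best) best = c;
            }
            cost[nxt][s] = best;
        }
        cur = nxt;
    }
    printf("cost of reaching state 0 at the final stage: %f\n", cost[cur][0]);
    return 0;
}
```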

16.
Parallel and distributed methods for evolutionary algorithms have concentrated on maintaining multiple populations of genotypes, where each genotype in a population encodes a potential solution to the problem. In this paper, we investigate the parallelisation of the genotype itself into a collection of independent chromosomes which can be evaluated in parallel. We call this multi-chromosomal evolution (MCE). We test this approach using Cartesian Genetic Programming and apply MCE to a series of digital circuit design problems to compare the efficacy of MCE with a conventional single-chromosome approach (SCE). MCE can be readily used for many digital circuits because they have multiple outputs. In MCE, an independent chromosome is assigned to each output. When we compare MCE with SCE, we find that MCE allows us to evolve solutions much faster. In addition, in some cases we were able to evolve solutions with MCE that we were unable to with SCE. In a case study, we investigate how MCE can be applied to a single-objective problem in the domain of image classification, namely the classification of breast X-rays for cancer. To apply MCE to this problem, we identify regions of interest (RoI) from the mammograms, divide the RoI into a collection of sub-images, and use a chromosome to classify each sub-image. This problem allows us to evaluate various evolutionary mutation operators which can pairwise swap chromosomes either randomly or topographically, or reuse chromosomes in place of other chromosomes.

17.
Parallel solution and analysis of nonlinear programming problems
To meet the requirements of grid-based parallel computation for nonlinear programming problems, and building on an analysis of iterative methods for nonlinear least-squares problems, a parallel iterative method for nonlinear least-squares problems is proposed to increase the degree of parallelism. Test functions are then given, a parallel program for the nonlinear least-squares problem is written in C with MPI, the program is converted into an RSL job script, its status information is monitored in real time, and the returned information is analysed, demonstrating the advantages of the parallel iterative method.

18.
An algorithm based on parallel programming technology is proposed for solving coordination problems in decentralized local economic models. Examples of decomposition methods for linear distributed systems are considered. The software tools for the solution of these problems are supplied by the PARUS programming system. Translated from Kibernetika, No. 3, pp. 105–110, May–June 1990.

19.
Parallel machine scheduling with uncertain processing times based on rough programming
Yu Aiqing, Gu Xingsheng. Control and Decision, 2008, 23(12): 1427-1431
To handle uncertain job processing times in parallel machine scheduling, rough variables are used to represent the uncertain quantities, and a rough expected-value programming model of the problem is built on this basis. An evolutionary programming algorithm for the scheduling problem is proposed, with an improved encoding scheme and mutation method tailored to the parallel machine problem. Rough simulation is used to compute the fitness of each individual, i.e. the rough expected-value estimate, and simulation experiments are carried out on instances of different sizes. The results show that the solutions obtained by the improved evolutionary programming algorithm are better than those obtained by a genetic algorithm.

20.
This paper describes the architecture of DISC, a system for parallel software development. The system is designed for programming computer systems composed of several autonomous units that do not share memory and are linked by a communication network.

The system consists of three parts: the concurrent programming language DISC (DIStributed C), an extension of the C language based on the concurrency mechanisms envisaged by the CSP computational model; the programming environment, designed to promote software engineering techniques in the development of distributed programs; and the language run-time support, which provides for the distributed execution of programs.

