Similar Documents
20 similar documents found (search time: 0 ms)
1.
This paper presents an efficient parallel algorithm for sorting N data items on two-dimensional mesh-connected computers with multiple broadcasting (2-MCCMB). The algorithm uses N × N^(2/3) processors and takes O(N^(1/3)) time, whereas the previous algorithm by Chung-Horng Lung [3] uses N × N processors and takes O(N^(1/2)) time on 2-MCCMB.

2.
3.
A divisible load is an amount W of computational work that can be arbitrarily divided into chunks and distributed among a set P of worker processors to be processed in parallel. Divisible load applications occur in many fields of science and engineering. They can be parallelized in a master-worker fashion, but they pose several scheduling challenges. The divisible load scheduling problem consists in (a) selecting a subset of active workers, (b) defining the order in which the chunks will be transmitted to each of them, and (c) deciding the amount of load α_i that will be transmitted to each worker i, with Σ_i α_i = W, so as to minimize the makespan, i.e., the total elapsed time from when the master begins to send data to the first worker until the last worker stops its computations. In this work, we propose a biased random-key genetic algorithm for solving the divisible load scheduling problem. Computational results show that the proposed heuristic outperforms the best heuristic in the literature.
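The makespan objective in (c) can be made concrete with a small sketch. The single-port, linear-cost model below is an assumption for illustration (the rates, names, and the function itself are hypothetical, not the paper's exact formulation); a biased random-key genetic algorithm would search over the worker order and the load amounts that minimize this quantity.

# Hedged sketch: makespan of a divisible-load schedule under a simple linear
# cost model (hypothetical parameters, not the paper's exact formulation).
# The master sends chunks one after another (single port); each worker starts
# computing as soon as its own chunk has arrived.

def makespan(alphas, send_cost, compute_cost):
    """alphas[i]      : amount of load sent to the i-th worker, in send order
       send_cost[i]   : time to transmit one unit of load to worker i
       compute_cost[i]: time for worker i to process one unit of load"""
    finish_times = []
    master_busy_until = 0.0
    for a, s, c in zip(alphas, send_cost, compute_cost):
        master_busy_until += a * s                       # sequential sends
        finish_times.append(master_busy_until + a * c)   # compute after receive
    return max(finish_times)

# Example: 3 workers, total load W = 10 split as 4 + 3 + 3
print(makespan([4, 3, 3], send_cost=[0.1, 0.2, 0.1], compute_cost=[1.0, 0.8, 1.2]))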

4.
With the popularity of parallel database machines based on the shared-nothing architecture, it has become important to find external sorting algorithms which lead to a load-balanced computation, i.e., balanced execution, communication and output. If during the course of the sorting algorithm each processor is equally loaded, parallelism is fully exploited. Similarly, balanced communication will not congest the network traffic. Since sorting can be used to support a number of other relational operations (joins, duplicate elimination, building indexes, etc.), data skew produced by sorting can further lead to execution skew at later stages of these operations. In this paper we present a load-balanced parallel sorting algorithm for shared-nothing architectures. It is a multiple-input multiple-output algorithm with four stages, based on a generalization of Batcher's odd-even merge. At each stage the n keys are evenly distributed among the p processors (i.e., there is no final sequential merge phase) and the distribution of keys between stages ensures against network congestion. There is no assumption made on the key distribution and the algorithm performs equally well in the presence of duplicate keys. Hence our approach always guarantees its performance, as long as n is greater than p^3, which is the case of interest for sorting large relations. In addition, processors can be added incrementally. Recommended by: Patrick Valduriez
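For orientation, the building block that the four-stage algorithm generalizes, Batcher's odd-even merge, can be sketched in a few lines. This is the textbook serial recursive form for two equal-length power-of-two runs (my illustration, not the paper's multiple-input multiple-output variant).

# Hedged sketch: classical Batcher odd-even merge of two sorted runs of equal
# power-of-two length (serial form; the paper generalizes this idea).

def odd_even_merge(a, b):
    if len(a) == 1:
        return [min(a[0], b[0]), max(a[0], b[0])]
    evens = odd_even_merge(a[0::2], b[0::2])    # merge even-indexed elements
    odds = odd_even_merge(a[1::2], b[1::2])     # merge odd-indexed elements
    merged = [None] * (len(a) + len(b))
    merged[0::2], merged[1::2] = evens, odds    # interleave the two results
    for i in range(1, len(merged) - 1, 2):      # final compare-exchange pass
        if merged[i] > merged[i + 1]:
            merged[i], merged[i + 1] = merged[i + 1], merged[i]
    return merged

print(odd_even_merge([1, 4, 6, 9], [2, 3, 7, 8]))   # [1, 2, 3, 4, 6, 7, 8, 9]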

5.
A New Parallel Merge Sort Algorithm   (cited 5 times in total: 0 self-citations, 5 citations by others)
This paper proposes a new parallel merge sort algorithm. The algorithm exploits the property that, after the local sort, the sorted sequences held by the individual processors of the parallel system are of equal length: it computes the positions of the first and the last element of each pair of merge segments, and then performs the merge starting from those positions. The algorithm makes the distribution of the sorted data completely balanced, and therefore offers good load balance, scalability, and sorting stability. Finally, experimental results on a PC cluster are reported and compared with the PSRS algorithm.
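The balanced-split idea can be illustrated with a co-ranking search (my illustration of the principle, not the paper's exact position formulas): given two sorted runs, a binary search finds the pair of split positions such that each processor merges exactly half of the output.

# Hedged sketch: find (i, j) with i + j = k so that a[:i] and b[:j] together
# contain exactly the k smallest elements of the two sorted runs; two
# processors can then each merge an equal share of the output.

def corank(k, a, b):
    i, j = min(k, len(a)), max(0, k - len(a))
    i_low, j_low = max(0, k - len(b)), max(0, k - len(a))
    while True:
        if i > 0 and j < len(b) and a[i - 1] > b[j]:
            delta = (i - i_low + 1) // 2          # took too much from a
            j_low, j, i = j, j + delta, i - delta
        elif j > 0 and i < len(a) and b[j - 1] >= a[i]:
            delta = (j - j_low + 1) // 2          # took too much from b
            i_low, i, j = i, i + delta, j - delta
        else:
            return i, j

a, b = [1, 4, 6, 9], [2, 3, 7, 8]
i, j = corank((len(a) + len(b)) // 2, a, b)
low = sorted(a[:i] + b[:j])    # first half of the merged output
high = sorted(a[i:] + b[j:])   # second half, equal length -> balanced load
print(low, high)               # [1, 2, 3, 4] [6, 7, 8, 9]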

6.
7.
Multicore processors form the basis of most traditional high performance parallel processing architectures. Early experiences with these computers showed significant performance problems, both with regard to computation and inter‐process communication. The transition from Purple, an IBM POWER5‐based machine, to Cielo, a Cray XE6, as the main capability computing platform for the United States Department of Energy's Advanced Simulation and Computing campaign provides an opportunity to reexamine these issues after experiences with a few generations of multicore‐based machines. Experiences with Purple identified some important characteristics that led to strong performance of complex scientific application programs at very large scales. Herein, we compare the performance of some Advanced Simulation and Computing mission critical applications at capability scale across this transition to multicore processors. Copyright © 2012 John Wiley & Sons, Ltd.

8.
We present three parallel sorting algorithms suitable for implementation on tightly coupled multiprocessors and compare their performance on the Denelcor HEP. Two of the algorithms implemented, parallel Shellsort and quickmerge, are new. Shellsort is amenable to parallelization; however, since Shellsort has higher complexity than quicksort, parallel Shellsort is inferior to parallel quicksort. A second new parallel algorithm, called quickmerge, is based upon both quicksort and mergesort. Our implementation of quickmerge achieves significantly higher speedup than our implementation of parallel quicksort.
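The reason Shellsort is amenable to parallelization can be shown with a small sketch (an illustration of the idea only, not the paper's Denelcor HEP implementation): for each gap h the array decomposes into h interleaved chains that can be sorted independently, i.e. by h processors at once.

# Hedged sketch: gap-chain decomposition behind parallel Shellsort.

def sort_chain(a, start, gap):
    """Insertion sort of the chain a[start], a[start+gap], a[start+2*gap], ..."""
    for i in range(start + gap, len(a), gap):
        key, j = a[i], i
        while j >= gap and a[j - gap] > key:
            a[j] = a[j - gap]
            j -= gap
        a[j] = key

def shellsort(a, gaps=(8, 4, 2, 1)):
    for h in gaps:
        for start in range(h):       # these h calls are mutually independent,
            sort_chain(a, start, h)  # so each could run on its own processor
    return a

print(shellsort([9, 1, 8, 2, 7, 3, 6, 4, 5, 0]))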

9.
Inter‐iteration dependences in loops can hinder loop‐level parallelism. For some loops, existing thread‐level speculation techniques fail to expose their inherent loop‐level parallelism, because some inter‐iteration dependences are too costly to synchronize, predict, pre‐compute and isolate. This paper presents a compiler technique called loop recreation to change the nature of some dependences (by turning some inter‐iteration dependences into intra‐iteration ones and vice versa) in a loop so that the inter‐iteration dependences in the transformed loop are less costly to enforce at runtime than those in the original loop. We present an algorithm for finding an optimal loop recreation transformation with respect to a simple misspeculation cost model and demonstrate the performance advantages of loop recreation over two recent techniques for multicore systems running nine representative irregular applications. Copyright © 2009 John Wiley & Sons, Ltd.

10.
The early algorithms for in-place merging focused mainly on time complexity, while their structure was largely ignored. Most of them are therefore elusive and of only theoretical significance. For this reason, this paper simplifies the unstable in-place merge by Geffert et al. [V. Geffert, J. Katajainen, T. Pasanen, Asymptotically efficient in-place merging, Theoret. Comput. Sci. 237 (2000) 159-181]. The simplified algorithm is simple yet practical, and has a small time complexity.
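For orientation, a simple rotation-based in-place merge is sketched below. This is an illustration of what "in-place merging" means, not the Geffert et al. algorithm or the simplification proposed in the paper, and it is not asymptotically efficient.

# Hedged sketch: merge the sorted runs a[lo:mid] and a[mid:hi] inside a by
# rotating blocks into place.  Python slice rotation is used for brevity; a
# true constant-extra-space version would rotate via three reversals.

def rotate(a, lo, mid, hi):
    a[lo:hi] = a[mid:hi] + a[lo:mid]    # move a block of the right run left

def inplace_merge(a, lo, mid, hi):
    while lo < mid and mid < hi:
        if a[lo] <= a[mid]:
            lo += 1
        else:
            run = mid                    # count right-run elements < a[lo]
            while run < hi and a[run] < a[lo]:
                run += 1
            rotate(a, lo, mid, run)
            lo, mid = lo + (run - mid), run

data = [1, 4, 7, 9, 2, 3, 8, 10]
inplace_merge(data, 0, 4, len(data))
print(data)                              # [1, 2, 3, 4, 7, 8, 9, 10]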

11.
In this paper we consider the scalability of parallel space-filling curve generation as implemented through parallel sorting algorithms. Multiple sorting algorithms are studied and results show that space-filling curves can be generated quickly in parallel on thousands of processors. In addition, performance models are presented that are consistent with measured performance and offer insight into performance on still larger numbers of processors. At large numbers of processors, the scalability of adaptive mesh refinement codes depends on the individual components of the adaptive solver. One such component is the dynamic load balancer. In adaptive mesh refinement codes, the mesh is constantly changing, resulting in load imbalance among the processors and requiring a load-balancing phase. The load balancing may occur often, requiring the load balancer to perform quickly. One common method for dynamic load balancing is to use space-filling curves. Space-filling curves, in particular the Hilbert curve, generate good partitions quickly in serial. However, at tens and hundreds of thousands of processors, serial generation of space-filling curves will hinder scalability. In order to avoid this issue we have developed a method that generates space-filling curves quickly in parallel by reducing the generation to integer sorting. Copyright © 2007 John Wiley & Sons, Ltd.
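The reduction of curve generation to integer sorting can be sketched as follows. For brevity the sketch uses the Morton (Z-order) curve rather than the Hilbert curve discussed in the paper; the key computation is an assumption for illustration, and the parallel step would simply be a parallel integer sort of the computed keys.

# Hedged sketch: space-filling-curve ordering as integer sorting.

def morton2d(x, y, bits=16):
    """Interleave the bits of (x, y) into a single Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b)        # x bit -> even position
        key |= ((y >> b) & 1) << (2 * b + 1)    # y bit -> odd position
    return key

cells = [(3, 1), (0, 0), (2, 2), (1, 3)]            # mesh cell coordinates
order = sorted(cells, key=lambda c: morton2d(*c))   # curve order = key order
print(order)                                        # [(0, 0), (3, 1), (1, 3), (2, 2)]
# Sorting the keys with any parallel integer sort gives the same ordering;
# contiguous chunks of it then form the load-balanced partitions.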

12.
High‐performance application development remains challenging, particularly for scientists making the transition to a heterogeneous grid environment. In general areas of computing, virtual environments such as Java and .Net have proved to be successful in fostering application development, allowing users to target and compile to a single environment, rather than a range of platforms, instruction sets and libraries. However, existing runtime environments are focused on business and desktop computing and they do not support the necessary high‐performance computing (HPC) abstractions required by e‐Scientists. Our work is focused on developing an application‐runtime that can support these services natively. The result is a new approach to the development of an application‐runtime for HPC: the Motor system has been developed by integrating a high‐performance communication library directly within a virtual machine. The Motor message passing library is integrated alongside and in cooperation with other runtime libraries and services while retaining a strong message passing performance. As a result, the application developer is provided with a common environment for HPC application development. This environment supports both procedural languages, such as C, and modern object‐oriented languages, such as C#. This paper describes the unique Motor architecture, presents its implementation and demonstrates its performance and use. Copyright © 2008 John Wiley & Sons, Ltd.

13.
Sequence comparison leads to a combinatorial optimization problem of sorting permutations by reversals and transpositions. Namely, given any two permutations, find the shortest distance between them. This problem is related to genome rearrangement, in which genes are oriented in DNA sequences. The transpositions that have been studied in the literature can be viewed as operations working on two consecutive segments of the genome. In this paper, a new kind of transposition, which can work on two arbitrary segments of the genome, is proposed, and the sorting of signed permutations by reversals and this new kind of transposition is studied. After establishing a lower bound on the number of operations needed, a 2-approximation algorithm is presented for this problem, and an example is given to show that the performance ratio of the algorithm cannot be improved.
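The two operations can be made concrete on a signed permutation (illustrative definitions; the segment-indexing convention here is my assumption, not the paper's notation).

# Hedged sketch: a reversal and a generalized transposition (exchange of two
# arbitrary, non-overlapping segments) acting on a signed permutation.

def reversal(perm, i, j):
    """Reverse perm[i:j] and flip the sign of each element in it."""
    return perm[:i] + [-x for x in reversed(perm[i:j])] + perm[j:]

def swap_segments(perm, i, j, k, l):
    """Exchange perm[i:j] and perm[k:l]; classical transpositions are the
    special case of consecutive segments, i.e. j == k."""
    assert i <= j <= k <= l
    return perm[:i] + perm[k:l] + perm[j:k] + perm[i:j] + perm[l:]

p = [3, -1, 4, -2, 5]
print(reversal(p, 1, 4))             # [3, 2, -4, 1, 5]
print(swap_segments(p, 0, 2, 3, 5))  # [-2, 5, 4, 3, -1]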

14.
There has been an increasing research interest in extending the use of Java towards high‐performance demanding applications such as scalable Web servers, distributed multimedia applications, and large‐scale scientific applications. However, extending Java to a multicomputer environment and improving the low performance of current Java implementations pose great challenges to both the systems developer and application designer. In this survey, we describe and classify 14 relevant proposals and environments that tackle Java's performance bottlenecks in order to make the language an effective option for high‐performance network‐based computing. We further survey significant performance issues while exposing the potential benefits and limitations of current solutions in such a way that a framework for future research efforts can be established. Most of the proposed solutions can be classified according to some combination of three basic parameters: the model adopted for inter‐process communication, language extensions, and the implementation strategy. In addition, where appropriate to each individual proposal, we examine other relevant issues, such as interoperability, portability, and garbage collection. Copyright © 2002 John Wiley & Sons, Ltd.

15.
16.
The field of computational biology encompasses a wide range of optimization problems that exhibit non-deterministic polynomial-time hard complexity. Nowadays, phylogeneticians are dealing with a growing amount of biological data that must be analyzed to explain the origins of modern species. Evolutionary relationships among organisms are often described by means of tree-shaped structures known as phylogenetic trees. When inferring phylogenies, two main challenges must be addressed. First, the inference of reliable evolutionary trees on data sets where different optimality principles support conflicting evolutionary hypotheses. Second, the processing of enormous tree search spaces where traditional sequential strategies cannot be applied. In this sense, phylogenetic inference can benefit from the combination of high performance computing and evolutionary computation to carry out the reconstruction of complex evolutionary histories in reduced execution times. In this paper, we introduce multiobjective phylogenetics, a hybrid OpenMP/MPI approach to parallelize a well-known multiobjective metaheuristic, the fast non-dominated sorting genetic algorithm (NSGA-II). This algorithm has been designed to conduct phylogenetic analyses on multi-core clusters in accordance with two principles: maximum parsimony and maximum likelihood. The main goal is to combine the benefits of shared-memory and distributed-memory programming paradigms to efficiently infer a set of high-quality Pareto solutions. Experiments on six real nucleotide data sets and comparisons with other hybrid parallel approaches show that multiobjective phylogenetics is able to achieve significant performance in terms of parallel, multiobjective, and biological results. Copyright © 2014 John Wiley & Sons, Ltd.
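The serial kernel being parallelized, NSGA-II's fast non-dominated sorting, can be sketched directly (a plain Python illustration for two minimization objectives; the example scores standing in for parsimony and likelihood values are hypothetical).

# Hedged sketch: fast non-dominated sorting of a population of objective
# vectors into Pareto fronts (minimization in every objective).

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(points):
    fronts, dominated_by, counts = [[]], {}, {}
    for i, p in enumerate(points):
        dominated_by[i] = [j for j, q in enumerate(points) if dominates(p, q)]
        counts[i] = sum(dominates(q, p) for q in points)
        if counts[i] == 0:
            fronts[0].append(i)                  # Pareto-optimal solutions
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)                # all dominators already ranked
        fronts.append(nxt)
    return fronts[:-1]                           # drop the trailing empty front

scores = [(120, 3.1), (118, 3.4), (122, 3.5), (119, 3.2)]
print(fast_non_dominated_sort(scores))           # [[0, 1, 3], [2]]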

17.
A new parallel algorithm for transforming an arithmetic infix expression into a parse tree is presented. The technique is based on a result due to Fischer (1980) which enables the construction of the parse tree by appropriately scanning the vector of precedence values associated with the elements of the expression. The algorithm presented here is suitable for execution on a shared-memory model of an SIMD machine with no read/write conflicts permitted. It uses O(n) processors and has a time complexity of O(log^2 n), where n is the expression length. Parallel algorithms for generating code for an SIMD machine are also presented.
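One way to read the precedence-vector result (my interpretation for illustration, not necessarily Fischer's exact construction) is that the parse tree is a min-Cartesian tree over the operators' precedence values, where parenthesis nesting raises precedence; the sketch below builds that tree serially.

# Hedged sketch: parse tree from a precedence vector.  Among the operators of
# a range, the one with the lowest precedence (rightmost on ties, for left
# associativity) is the root of that range's subtree.

PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def build(tokens):
    ops, depth = [], 0
    for i, t in enumerate(tokens):               # precedence vector
        if t == '(':
            depth += 1
        elif t == ')':
            depth -= 1
        elif t in PREC:
            ops.append((PREC[t] + 10 * depth, i))
    def tree(lo, hi, cands):
        if not cands:                            # no operator: single operand
            return [t for t in tokens[lo:hi] if t not in '()'][0]
        _, r = min(cands, key=lambda c: (c[0], -c[1]))
        left = [c for c in cands if c[1] < r]
        right = [c for c in cands if c[1] > r]
        return (tokens[r], tree(lo, r, left), tree(r + 1, hi, right))
    return tree(0, len(tokens), ops)

print(build(list("a+b*(c-d)/e")))
# ('+', 'a', ('/', ('*', 'b', ('-', 'c', 'd')), 'e'))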

18.
Programming for large‐scale, multicore‐based architectures requires adequate tools that offer ease of programming and do not hinder application performance. StarSs is a family of parallel programming models based on automatic function‐level parallelism that targets productivity. StarSs deploys a data‐flow model: it analyzes dependencies between tasks and manages their execution, exploiting their concurrency as much as possible. This paper introduces Cluster Superscalar (ClusterSs), a new StarSs member designed to execute on clusters of SMPs (Symmetric Multiprocessors). ClusterSs tasks are asynchronously created and assigned to the available resources with the support of the IBM APGAS runtime, which provides an efficient and portable communication layer based on one‐sided communication. We present the design of ClusterSs on top of APGAS, as well as the programming model and execution runtime for Java applications. Finally, we evaluate the productivity of ClusterSs, both in terms of programmability and performance and compare it to that of the IBM X10 language. Copyright © 2012 John Wiley & Sons, Ltd.

19.
We have designed a family of parallel data flow analysis algorithms for execution on distributed-memory MIMD machines, based on general-purpose, hybrid algorithms for data flow analysis [Marlowe and Ryder 1990]. We exploit a natural partitioning of the hybrid algorithms and explore a static-mapping, dynamic-scheduling strategy. Alternative mapping-scheduling choices and refinements of the flow graph condensation used are discussed. Our parallel hybrid algorithm family is illustrated on Reaching Definitions, although parallel algorithms also exist for many interprocedural (e.g., Aliasing) and intraprocedural (e.g., Available Expressions) problems [Marlowe 1989]. We have implemented the parallel hybrid algorithm for Reaching Definitions on an Intel iPSC/2. Our empirical results suggest the practicality of parallel hybrid algorithms. An earlier version of this paper was presented at Supercomputing '90. The research reported here was supported, in part, by the New Jersey Commission on Science and Technology and the CAIP Center's Industrial Members, by Siemens Research Corporation, and by National Science Foundation grant CCR-8920078.
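For orientation, the underlying data-flow problem can be sketched as a plain serial worklist solver for Reaching Definitions (the block names and gen/kill sets below are hypothetical; the paper's contribution, the parallel hybrid algorithm, is not shown here).

# Hedged sketch: iterative worklist solution of Reaching Definitions.

def reaching_definitions(blocks, succ):
    """blocks: {name: (gen_set, kill_set)}; succ: {name: [successor names]}."""
    IN = {b: set() for b in blocks}
    OUT = {b: set() for b in blocks}
    pred = {b: [p for p in blocks if b in succ.get(p, [])] for b in blocks}
    worklist = list(blocks)
    while worklist:
        b = worklist.pop()
        IN[b] = set().union(*(OUT[p] for p in pred[b]))   # meet over predecessors
        gen, kill = blocks[b]
        new_out = gen | (IN[b] - kill)                    # transfer function
        if new_out != OUT[b]:
            OUT[b] = new_out
            worklist.extend(succ.get(b, []))              # re-process successors
    return IN, OUT

blocks = {'B1': ({'d1'}, {'d3'}), 'B2': ({'d2'}, set()), 'B3': ({'d3'}, {'d1'})}
succ = {'B1': ['B2'], 'B2': ['B3'], 'B3': ['B2']}
print(reaching_definitions(blocks, succ)[1])
# OUT sets: B1 -> {d1}, B2 -> {d1, d2, d3}, B3 -> {d2, d3}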

20.
The computational complexity of a parallel algorithm depends critically on the model of computation. We describe a simple and elegant rule-based model of computation in which processors apply rules asynchronously to pairs of objects from a global object space. Application of a rule to a pair of objects results in the creation of a new object if the objects satisfy the guard of the rule. The model can be efficiently implemented as a novel MIMD array processor architecture, the Intersecting Broadcast Machine. For this model of computation, we describe an efficient parallel sorting algorithm based on mergesort. The computational complexity of the sorting algorithm is O(n log^2 n), comparable to that for specialized sorting networks and an improvement on the O(n^1.5) complexity of conventional mesh-connected array processors.
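The rule-based mergesort can be sketched as a serial simulation of the asynchronous behaviour (the pairing policy and names below are my assumptions; the Intersecting Broadcast Machine itself is not modelled): processors repeatedly pick two objects (sorted runs) from the global object space and, when the guard holds, replace them with a newly created merged object.

# Hedged sketch: rule-based mergesort over a global object space.

import heapq, random

def guard(x, y):
    return True                          # any two sorted runs may be merged

def apply_rule(x, y):
    return list(heapq.merge(x, y))       # the created object: the merged run

def rule_based_sort(values):
    space = [[v] for v in values]        # initial objects: singleton runs
    while len(space) > 1:
        x, y = random.sample(space, 2)   # an idle "processor" grabs a pair
        if guard(x, y):
            space.remove(x)
            space.remove(y)
            space.append(apply_rule(x, y))
    return space[0]

print(rule_based_sort([5, 3, 9, 1, 4, 8, 2]))   # [1, 2, 3, 4, 5, 8, 9]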
