Similar Documents
20 similar documents found
1.
    
Sparse matrix vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high end multi-core architectures and we use a message passing interface + open multiprocessing (MPI+OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of the distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the 'CPU core hours' metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. We have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the 'CPU core hours' metric and significantly reduces data movement overheads. Copyright © 2015 John Wiley & Sons, Ltd.
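A minimal C sketch of the communication hiding this abstract describes, assuming the halo sends/receives were already posted with MPI_Isend/MPI_Irecv and the runtime was initialized with MPI_THREAD_FUNNELED; the CSR layout and all names are our illustration, not the authors' code:

#include <mpi.h>

typedef struct {                  /* one CSR block of the local matrix */
    int nrows;
    const int *rowptr, *colidx;
    const double *val;
} csr_t;

void spmv_overlap(const csr_t *Adiag,  /* columns owned locally        */
                  const csr_t *Aoff,   /* columns owned by other ranks */
                  const double *x, const double *xhalo, double *y,
                  MPI_Request *reqs, int nreqs)
{
    #pragma omp parallel
    {
        /* the master thread completes the halo exchange ...          */
        #pragma omp master
        MPI_Waitall(nreqs, reqs, MPI_STATUSES_IGNORE);

        /* ... while all threads multiply the purely local part       */
        #pragma omp for schedule(static) nowait
        for (int i = 0; i < Adiag->nrows; ++i) {
            double s = 0.0;
            for (int k = Adiag->rowptr[i]; k < Adiag->rowptr[i + 1]; ++k)
                s += Adiag->val[k] * x[Adiag->colidx[k]];
            y[i] = s;
        }

        #pragma omp barrier   /* halo values are now available */

        /* off-diagonal contribution uses the received halo entries   */
        #pragma omp for schedule(static)
        for (int i = 0; i < Aoff->nrows; ++i) {
            double s = 0.0;
            for (int k = Aoff->rowptr[i]; k < Aoff->rowptr[i + 1]; ++k)
                s += Aoff->val[k] * xhalo[Aoff->colidx[k]];
            y[i] += s;
        }
    }
}

With schedule(static) the master's chunk of the local loop simply waits until MPI_Waitall returns, so the overlap is partial by design; a real implementation would tune this split.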

2.
    
This paper focuses on parallel implementations of three two-dimensional explicit numerical methods on the Intel® Xeon® Scalable Processor and the Knights Landing coprocessor. In this study, the performance of a hybrid parallel implementation with message passing interface (MPI) and Open Multi-Processing (OpenMP), and of a pure MPI implementation used with two thread-binding policies, is compared with an improved OpenMP-based implementation in three explicit finite-difference methods for solving partial differential equations on shared-memory multicore and manycore systems. Specifically, the improved OpenMP-based version is a strategy that synchronizes adjacent threads and eliminates the implicit barriers of a naïve OpenMP-based implementation. The experiments show that the most suitable approach depends on several characteristics related to the nonuniform memory access (NUMA) effect and load balancing, such as the size of the MPI domain and the number of synchronization points used in the parallel implementation. In algorithms that use four and five synchronization points, hybrid MPI/OpenMP approaches yielded better speedups than the other versions in runs performed on both systems. The pure MPI-based strategy, however, achieved better results than the other proposed approaches in the method that employs only one synchronization point.
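A speculative C reconstruction of the adjacent-thread synchronization idea (it needs OpenMP 4.0 or later for the seq_cst atomics); the strip decomposition and all names are our assumptions, not the paper's code. Each thread publishes its finished time step in done[tid] and waits only for its two neighbors instead of crossing a full-team barrier:

#include <omp.h>
#include <stdlib.h>

void relax_adjacent_sync(double *u, double *v, int n, int steps)
{
    int nthr = 0;
    int *done = NULL;

    #pragma omp parallel
    {
        #pragma omp single
        {
            nthr = omp_get_num_threads();
            done = calloc(nthr, sizeof *done);
        }                                    /* implicit barrier here */

        int tid = omp_get_thread_num();
        long lo = 1 + (long)(n - 2) * tid / nthr;
        long hi = 1 + (long)(n - 2) * (tid + 1) / nthr;
        double *a = u, *b = v;               /* private buffer pointers */

        for (int t = 0; t < steps; ++t) {
            int d;
            if (tid > 0)                     /* wait for left strip only */
                do {
                    #pragma omp atomic read seq_cst
                    d = done[tid - 1];
                } while (d < t);
            if (tid < nthr - 1)              /* wait for right strip only */
                do {
                    #pragma omp atomic read seq_cst
                    d = done[tid + 1];
                } while (d < t);

            for (long i = lo; i < hi; ++i)   /* Jacobi-style update */
                b[i] = 0.5 * (a[i - 1] + a[i + 1]);

            #pragma omp atomic write seq_cst
            done[tid] = t + 1;               /* publish; no team barrier */

            double *tmp = a; a = b; b = tmp;
        }
    }
    free(done);
}

Waiting for done[neighbor] >= t before step t covers both the flow dependence (the neighbor's step t-1 results exist) and the anti-dependence (the neighbor no longer reads the buffer this thread is about to overwrite).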

3.
Multilevel parallel programming models and parallel optimization techniques for SMP clusters
This paper describes in detail the hybrid MPI/OpenMP parallel programming model suited to the multilevel parallel architecture of SMP clusters, which provides mechanisms for multilevel parallelism both between and within SMP nodes. On this basis, combined with a practical performance evaluation method, it presents common and effective parallel optimization techniques at three levels (MPI, OpenMP, and the single processor) and points out that single-processor performance optimization is a problem that must not be overlooked when improving the performance of a parallel program.
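The two-level structure described above condenses into a small, hedged C skeleton (the reduction stands in for real node-level work):

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nprocs;
    /* FUNNELED: only the main thread of each process makes MPI calls */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double local = 0.0;
    #pragma omp parallel reduction(+:local)   /* intra-node level */
    {
        int tid = omp_get_thread_num();
        local += (double)tid;                 /* stand-in for node work */
    }

    double global;                            /* inter-node level */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum over %d processes = %g\n", nprocs, global);

    MPI_Finalize();
    return 0;
}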

4.
Applied research on the MPI+OpenMP hybrid parallel programming model
Clusters of multiprocessor nodes are increasingly popular in the high-performance computing market, and writing efficient parallel code for multiprocessors has become a hot research topic. MPI+OpenMP provides an effective parallel strategy for clusters of multiprocessor nodes: the shared-memory programming model within a node suits OpenMP parallelism, while the message-passing model MPI is used between cluster nodes, yielding a hierarchically structured parallelization.

5.
The embedded zerotree wavelet (EZW) compression algorithm is an effective image compression algorithm, but its compression time is long. This paper studies the algorithm and implements parallel versions of it on a multicore cluster system, improving its performance. Two parallel algorithms, MPI and MPI+OpenMP, are implemented, and the serial algorithm, the MPI parallel algorithm, and the MPI+OpenMP parallel algorithm are compared. The results show that, as the data volume grows, both the MPI and the MPI+OpenMP parallel algorithms run markedly more efficiently than the serial algorithm, with the MPI+OpenMP version performing best.

6.
Research and analysis of parallel programming models on SMP cluster systems
Parallel computing is one of the important directions in the development of computer technology, and SMPs and clusters are the current mainstream parallel architectures. Current parallel programming mainly uses MPI, based on the message-passing model, and OpenMP, based on the shared-memory model; the two programming models have their own characteristics and scopes of application. This paper analyzes the characteristics of SMP clusters and of MPI and OpenMP, and introduces a feasible approach to hybrid MPI and OpenMP programming on SMP cluster systems.

7.
Hu Xiaoli, Tian Youxian. Microcomputer Information (《微计算机信息》), 2007, 23(33): 252-253, 233
This paper introduces the structure of a high-performance parallel computing cluster whose compute nodes are shared-memory multiprocessors (SMPs) built from dual-core processors. It studies the parallel computing granularity and optimization methods of such systems and describes how to build the cluster's hybrid MPI+OpenMP programming platform. On this platform, a Mann iteration algorithm for solving systems of linear equations was implemented; numerical tests show that such clusters deliver good computational performance. The system has been used in practical work with good results.
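A hedged sketch of one Mann iteration step for Ax = b: the fixed-point map T(x) = x + w(b - Ax) and the parameters w and a are our illustrative choices, and the full code would also distribute the rows of A across MPI ranks. The Mann iteration averages x with T(x): x <- (1-a)x + aT(x).

#include <omp.h>

void mann_step(const double *A, const double *b, const double *x,
               double *xnew, int n, double w, double a)
{
    #pragma omp parallel for schedule(static)   /* node-level parallelism */
    for (int i = 0; i < n; ++i) {
        double Ax_i = 0.0;
        for (int j = 0; j < n; ++j)
            Ax_i += A[i * n + j] * x[j];
        double Tx_i = x[i] + w * (b[i] - Ax_i); /* fixed-point map T   */
        xnew[i] = (1.0 - a) * x[i] + a * Tx_i;  /* Mann averaging step */
    }
}

The caller swaps x and xnew and repeats until the residual b - Ax falls below a tolerance.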

8.
Clusters of SMPs are hybrid-parallel architectures that combine the main concepts of distributed-memory and shared-memory parallel machines. Although SMP clusters are widely used in the high performance computing community, there exists no single programming paradigm that allows exploiting the hierarchical structure of these machines. Most parallel applications deployed on SMP clusters are based on MPI, the standard API for distributed-memory parallel programming, and thus may miss a number of optimization opportunities offered by the shared memory available within SMP nodes. In this paper we present extensions to the data parallel programming language HPF and associated compilation techniques for optimizing HPF programs on clusters of SMPs. The proposed extensions enable programmers to control key aspects of distributed-memory and shared-memory parallelization at a high-level of abstraction. Based on these language extensions, a compiler can adopt a hybrid parallelization strategy which closely reflects the hierarchical structure of SMP clusters by automatically exploiting shared-memory parallelism based on OpenMP within cluster nodes and distributed-memory parallelism utilizing MPI across nodes. We describe the implementation of these features in the VFC compiler and present experimental results which show the effectiveness of these techniques.

9.
This paper analyzes the parallelism available in solving the multigroup particle transport Sn equations on unstructured meshes and, matching the characteristics of multicore cluster systems, designs a hybrid MPI/OpenMP program. The spatial mesh cells are partitioned by domain decomposition: message-passing MPI programming is used between compute nodes, and whenever an MPI process reaches the computation over energy groups it spawns multiple OpenMP threads, so that within a node the energy groups are processed with thread-level parallelism. Numerical tests show that this hybrid parallelization of particle transport problems on unstructured meshes matches the hardware structure of multicore cluster systems well, exhibits good scalability, and scales up to 1024 CPU cores.
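A short C sketch of the decomposition this abstract describes, with an illustrative stand-in sweep kernel and a simplified ghost-cell exchange; cells 0 and ncells-1 are ghosts, and neighbor ranks may be MPI_PROC_NULL at the domain boundary. None of this is the authors' code.

#include <mpi.h>
#include <omp.h>

/* Illustrative stand-in for one discrete-ordinates sweep of energy
   group g over the local cells; a real kernel would walk the mesh. */
static void sweep_group(double *phig, int ncells, int g)
{
    for (int c = 1; c < ncells - 1; ++c)
        phig[c] = 0.5 * (phig[c] + 1.0 / (double)(g + 1));
}

void transport_step(double *phi, int ncells, int ngroups,
                    int left, int right, MPI_Comm comm)
{
    #pragma omp parallel for schedule(dynamic)  /* groups -> threads */
    for (int g = 0; g < ngroups; ++g)
        sweep_group(phi + (long)g * ncells, ncells, g);

    for (int g = 0; g < ngroups; ++g) {         /* subdomains -> MPI */
        double *p = phi + (long)g * ncells;
        MPI_Sendrecv(&p[ncells - 2], 1, MPI_DOUBLE, right, g,
                     &p[0],          1, MPI_DOUBLE, left,  g,
                     comm, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&p[1],          1, MPI_DOUBLE, left,  g + ngroups,
                     &p[ncells - 1], 1, MPI_DOUBLE, right, g + ngroups,
                     comm, MPI_STATUS_IGNORE);
    }
}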

10.
This article focuses on the effect of both process topology and load balancing on various programming models for SMP clusters and iterative algorithms. More specifically, we consider nested loop algorithms with constant flow dependencies that can be parallelized on SMP clusters with the aid of the tiling transformation. We investigate three parallel programming models, namely a popular message passing monolithic parallel implementation and two hybrid ones that employ both message passing and multi-threading. We conclude that the selection of an appropriate mapping topology for the mesh of processes has a significant effect on the overall performance, and provide an algorithm for the specification of such an efficient topology according to the iteration space and data dependencies of the algorithm. We also propose static load balancing techniques for the computation distribution between threads, which diminish the disadvantage of the master thread assuming all inter-process communication due to limitations often imposed by the message passing library. Both improvements are implemented as compile-time optimizations and are experimentally evaluated. Finally, an overall comparison of the above parallel programming styles on SMP clusters, based on micro-kernel experimental evaluation, is provided.
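For orientation, a minimal sketch of specifying a process-mesh topology with standard MPI, letting the library reorder ranks onto the hardware; the article's own topology-selection algorithm is more elaborate than this:

#include <mpi.h>

MPI_Comm make_process_mesh(MPI_Comm comm, int *coords /* out, 2 ints */)
{
    int nprocs, rank, dims[2] = {0, 0}, periods[2] = {0, 0};
    MPI_Comm cart;
    MPI_Comm_size(comm, &nprocs);
    MPI_Dims_create(nprocs, 2, dims);       /* balanced 2-D factorization */
    MPI_Cart_create(comm, 2, dims, periods, /*reorder=*/1, &cart);
    MPI_Comm_rank(cart, &rank);             /* rank may differ after reorder */
    MPI_Cart_coords(cart, rank, 2, coords); /* my tile in the process mesh  */
    return cart;
}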

11.
    
The purpose of this paper is to investigate the scalability and performance of seven simple OpenMP test programs and to compare their performance with equivalent MPI programs on an SGI Origin 2000. Data distribution directives were used to make sure that the OpenMP implementation had the same data distribution as the MPI implementation. For the matrix-times-vector (test 5) and the matrix-times-matrix (test 7) tests, the syntax allowed in OpenMP 1.1 does not allow OpenMP compilers to generate efficient code, since the reduction clause is not currently allowed for arrays. (This problem is corrected in OpenMP 2.0.) For the remaining five tests, the OpenMP version performed and scaled significantly better than the corresponding MPI implementation, except for the right shift test (test 2) for a small message. Copyright © 2001 John Wiley & Sons, Ltd.
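For reference, a hedged sketch of how the missing array reduction looks in today's OpenMP (array sections in reduction clauses require OpenMP 4.5 or later in C), here for y = A^T x, where every iteration scatters into all of y:

/* compile with an OpenMP 4.5+ compiler, e.g. gcc -fopenmp */
void matvec_transposed(const double *A, const double *x, double *y, int n)
{
    for (int j = 0; j < n; ++j)
        y[j] = 0.0;

    /* each row i contributes to every y[j] -> reduce over the array */
    #pragma omp parallel for reduction(+: y[0:n])
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            y[j] += A[i * n + j] * x[i];
}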

12.
    
Parallel loop self-scheduling on parallel and distributed systems has been a critical problem, and it is becoming more difficult to deal with in the emerging heterogeneous cluster computing environments. In the past, some self-scheduling schemes have been proposed as applicable to heterogeneous cluster computing environments. In recent years, multicore computers have been widely included in cluster systems. However, previous research into parallel loop self-scheduling did not consider certain aspects of multicore computers; for example, it is more appropriate for shared-memory multiprocessors to adopt Open Multi-Processing (OpenMP) for parallel programming. In this paper, we propose a performance-based approach using hybrid OpenMP and MPI parallel programming, which partitions loop iterations according to the performance weighting of the multicore nodes in a cluster. Because iterations assigned to one MPI process are processed in parallel by OpenMP threads run by the processor cores in the same computational node, the number of loop iterations allocated to one computational node at each scheduling step depends on the number of processor cores in that node. Experimental results show that the proposed approach performs better than previous schemes. Copyright © 2010 John Wiley & Sons, Ltd.
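A hedged C sketch of the performance-weighted partitioning idea: the weights array is an assumed input (e.g., from a benchmark run), the loop body is a stand-in, and rank comes from MPI_Comm_rank in the surrounding program.

#include <omp.h>

/* split niters iterations into per-process chunks proportional to weight */
void weighted_chunks(const double *weights, int nprocs, long niters,
                     long *start, long *count)
{
    double total = 0.0;
    for (int p = 0; p < nprocs; ++p)
        total += weights[p];

    long s = 0;
    for (int p = 0; p < nprocs; ++p) {
        count[p] = (long)(niters * weights[p] / total);
        start[p] = s;
        s += count[p];
    }
    count[nprocs - 1] += niters - s;   /* hand rounding leftovers to last */
}

void run_my_share(int rank, const long *start, const long *count,
                  double *data)
{
    /* within the node, the chunk is shared among the cores via OpenMP */
    #pragma omp parallel for schedule(static)
    for (long i = start[rank]; i < start[rank] + count[rank]; ++i)
        data[i] = 2.0 * data[i];       /* stand-in for the real loop body */
}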

13.
    
Workload-aware loop schedulers were introduced to deliver better performance than classical loop scheduling strategies. However, they presented limitations such as inflexible built-in workload estimators and suboptimal chunk scheduling. Targeting these challenges, we previously proposed a workload-aware scheduling strategy called BinLPT, which relies on three features: (i) user-supplied estimations of the workload of the loop; (ii) a greedy heuristic that adaptively partitions the iteration space in several chunks; and (iii) a scheduling scheme based on the Longest Processing Time (LPT) rule and an on-demand technique. In this paper, we present two new contributions to the state-of-the-art. First, we introduce a multiloop support feature to BinLPT, which enables the reuse of estimations across loops. Based on this feature, we integrated BinLPT into a real-world elastodynamics application, and we evaluated it running on a supercomputer. Second, we present an evaluation of BinLPT using simulations as well as synthetic and application kernels. We carried out this analysis on a large-scale NUMA machine under a variety of workloads. Our results revealed that BinLPT is better at balancing the workloads of the loop iterations, and this behavior improves as the algorithmic complexity of the loop increases. Overall, BinLPT delivers up to 37.15% and 9.11% better performance than well-known loop scheduling strategies, for the application kernels and the elastodynamics simulation, respectively.
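A minimal C sketch of the LPT rule at BinLPT's core: chunks, whose loads come from the user-supplied workload estimate, are visited from heaviest to lightest and each goes to the currently least-loaded thread. The data layout is our illustration, not BinLPT's implementation.

#include <stdlib.h>

typedef struct { int id; double load; } chunk_t;

static int by_load_desc(const void *a, const void *b)
{
    double d = ((const chunk_t *)b)->load - ((const chunk_t *)a)->load;
    return (d > 0) - (d < 0);
}

/* fills owner[c] with the thread that will run chunk c */
void lpt_assign(chunk_t *chunks, int nchunks, int nthreads, int *owner)
{
    double *assigned = calloc(nthreads, sizeof *assigned);

    qsort(chunks, nchunks, sizeof *chunks, by_load_desc);
    for (int c = 0; c < nchunks; ++c) {
        int best = 0;                       /* least-loaded thread so far */
        for (int t = 1; t < nthreads; ++t)
            if (assigned[t] < assigned[best])
                best = t;
        owner[chunks[c].id] = best;
        assigned[best] += chunks[c].load;
    }
    free(assigned);
}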

14.
A three-level CUDA+MPI+OpenMP parallel programming model is adopted, realizing coarse-grained parallelism between nodes, fine-grained parallelism within a node, and the CUDA programming model that uses the GPU as a parallel computing device. This new three-level hybrid parallel programming model provides a more efficient parallel strategy for SMP clusters. This paper discusses the rapid construction of the three-level parallel programming environment and the multi-granularity hybrid parallel programming method, and reports tests carried out on a multi-node cluster environment.

15.
Lü Hai, Di Ruihua, Gong Hua. Computer Science (《计算机科学》), 2012, 39(1): 305-310
By analyzing the performance of an open-source finite element analysis program, implemented with the MPI programming model, on a multicore cluster platform, this work identifies the program's bottlenecks and their causes, and optimizes the performance of the MPI-based parallel program in the multicore computing environment. Based on the bottleneck analysis, two optimization schemes are proposed: solving the large-scale linear/nonlinear systems of equations with a hybrid MPI/OpenMP programming model, and having multiple threads in multiple processes perform message communication simultaneously. Experimental results at different problem scales show that, on a multicore cluster platform, the large-scale nonlinear equation solver implemented with the hybrid MPI/OpenMP programming model achieves a two- to three-fold performance improvement over the parallel program based purely on MPI; the scheme of simultaneous message passing by multiple threads in multiple processes does improve performance, but it is not the best way to resolve the program's message communication bottleneck. Overall performance analysis of the two schemes shows that parallel programs implemented with the hybrid MPI/OpenMP programming model exploit the computing power of the hardware better on multicore cluster platforms.
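A hedged sketch of the "multiple threads communicating simultaneously" scheme: it requires initializing MPI with MPI_THREAD_MULTIPLE (and checking the provided level), and each thread then exchanges its own slice with the peer rank on a distinct tag. Buffer sizes and names are illustrative.

#include <mpi.h>
#include <omp.h>

/* buf holds nthreads slices of `slice` doubles each; MPI must have
   been initialized with MPI_Init_thread(..., MPI_THREAD_MULTIPLE, ...) */
void threaded_exchange(double *buf, int slice, int peer, MPI_Comm comm)
{
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        MPI_Sendrecv_replace(buf + (long)tid * slice, slice, MPI_DOUBLE,
                             peer, /*sendtag=*/tid,
                             peer, /*recvtag=*/tid,
                             comm, MPI_STATUS_IGNORE);
    }
}

As the abstract notes, whether this pays off depends on how well the MPI library handles concurrent calls; thread-multiple support often serializes internally.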

16.
    
In this paper we describe the development of a program that aims at achieving the optimal integration of observed data in an oceanographic model describing the water transport phenomena in the Agulhas area at the tip of South Africa. Two parallel implementations, MPI and OpenMP, are described and experiments with respect to speed and scalability on a Compaq AlphaServer SC and an SGI Origin3000 are reported. Copyright © 2003 John Wiley & Sons, Ltd.

17.
Nested OpenMP parallelism allows an application to spawn teams of nested threads. This hierarchical nature of thread creation and usage poses problems for performance measurement tools that must determine thread context to properly maintain per-thread performance data. In this paper we describe the problem and a novel solution for identifying threads uniquely. Our approach has been implemented in the TAU performance system and has been successfully used in profiling and tracing OpenMP applications with nested parallelism. We also describe how extensions to the OpenMP standard can help tool developers uniquely identify threads.
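A small self-contained example of the identification problem and its usual fix (a generic illustration, not TAU's implementation): the thread number alone repeats across nested teams, but the chain of ancestor thread numbers forms a unique path from the initial thread.

#include <omp.h>
#include <stdio.h>

int main(void)
{
    omp_set_max_active_levels(2);           /* allow one nested level */
    #pragma omp parallel num_threads(2)
    {
        #pragma omp parallel num_threads(2)
        {
            char path[64];
            int off = 0, lvl = omp_get_level();
            /* omp_get_thread_num() alone would print only 0 or 1 here;
               the ancestor chain, e.g. " 1 0", is unique per thread   */
            for (int l = 1; l <= lvl; ++l)
                off += snprintf(path + off, sizeof path - off, " %d",
                                omp_get_ancestor_thread_num(l));
            printf("thread path:%s\n", path);
        }
    }
    return 0;
}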

18.
Based on the structured-grid part of MuSiC-CCASSIM, a high-order computational fluid dynamics code for two-dimensional/axisymmetric compressible multiphase flows, a domain-decomposition parallel method was designed. For the communication of boundary data between processors, parallel algorithms with blocking and with non-blocking communication were designed; to reduce communication overhead, a hybrid MPI/OpenMP parallel optimization algorithm was designed. Tests were run on the Tianhe-2 supercomputer with a fixed grid size of 625*250 per core, using up to 8,192 cores. The test data show that the programs using the hybrid MPI/OpenMP algorithm, the pure-MPI non-blocking communication algorithm, and the pure-MPI blocking communication algorithm achieve average parallel efficiencies of 86%, 83%, and 77%, respectively; all three algorithms exhibit good scalability.
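Hedged C sketches of the two boundary-exchange variants compared above, for a 1-D row decomposition with ghost rows; the names and layout are our assumptions, not MuSiC-CCASSIM code. Neighbor ranks may be MPI_PROC_NULL at the domain edge.

#include <mpi.h>

/* blocking: each exchange completes before the next one starts */
void halo_blocking(double *row_top, double *row_bot,
                   double *ghost_top, double *ghost_bot,
                   int n, int up, int down, MPI_Comm comm)
{
    MPI_Sendrecv(row_top,   n, MPI_DOUBLE, up,   0,
                 ghost_bot, n, MPI_DOUBLE, down, 0, comm, MPI_STATUS_IGNORE);
    MPI_Sendrecv(row_bot,   n, MPI_DOUBLE, down, 1,
                 ghost_top, n, MPI_DOUBLE, up,   1, comm, MPI_STATUS_IGNORE);
}

/* non-blocking: both directions proceed concurrently */
void halo_nonblocking(double *row_top, double *row_bot,
                      double *ghost_top, double *ghost_bot,
                      int n, int up, int down, MPI_Comm comm)
{
    MPI_Request r[4];
    MPI_Irecv(ghost_bot, n, MPI_DOUBLE, down, 0, comm, &r[0]);
    MPI_Irecv(ghost_top, n, MPI_DOUBLE, up,   1, comm, &r[1]);
    MPI_Isend(row_top,   n, MPI_DOUBLE, up,   0, comm, &r[2]);
    MPI_Isend(row_bot,   n, MPI_DOUBLE, down, 1, comm, &r[3]);
    MPI_Waitall(4, r, MPI_STATUSES_IGNORE);
}

In the non-blocking version, interior computation can be slotted in before MPI_Waitall, which is where a hybrid MPI/OpenMP variant gains its communication/computation overlap.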

19.
This paper briefly reviews the ideas behind parallel computing and compares the characteristics of the distributed-memory and shared-memory architectures. It describes how to combine OpenMP and MPI in hybrid programming, together with the corresponding hybrid programming models. Through an example targeting a cluster of SMP nodes built from multicore CPUs, a performance comparison of hybrid OpenMP and MPI programming is carried out and conclusions are drawn from the analysis.

20.
Wang Jie, Zhong Lujie, Zeng Yu. Computer Science (《计算机科学》), 2011, 38(10): 281-284
The new features of multicore processors make the memory hierarchy of multicore clusters more complex, while also opening new optimization opportunities for MPI programs. Many optimization methods and techniques for MPI programs on multicore clusters have been proposed by researchers at home and abroad. This work tests the communication performance of three different multicore clusters and experimentally evaluates, on Intel and AMD multicore clusters respectively, several broadly applicable optimization techniques: hybrid MPI/OpenMP, tuning MPI runtime parameters, and optimizing MPI process placement; the experimental results and the optimization gains are analyzed.
