Similar Documents
17 similar documents found.
1.
To study the general-purpose computing power of GPUs and a programming model suited to SMP clusters, this paper proposes, for the first time, a multi-granularity MPI+CUDA hybrid parallel programming method: MPI provides coarse-grained parallelism between nodes, while CUDA provides fine-grained parallelism within each node. Using this method, the parallel computation of large-scale matrix multiplication was tested on a purpose-built three-node SMP cluster. The experimental results show that the method significantly improves parallel efficiency, and that the MPI+CUDA hybrid model fully exploits both the distributed memory across SMP cluster nodes and the shared memory within each node, providing an effective parallel strategy for SMP clusters equipped with CUDA-enabled GPUs.
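The coarse/fine split described above can be illustrated with a minimal sketch (our own, not the paper's code): each MPI rank owns a row slab of C = A×B and multiplies it on its node's GPU, here via cuBLAS as a stand-in for a hand-written CUDA kernel. The matrix size N and the row-block scatter are illustrative assumptions; error checking is omitted.

```cpp
// Hypothetical MPI+CUDA sketch: MPI for coarse-grained row-block decomposition
// of C = A*B across nodes, cuBLAS for fine-grained parallelism on each GPU.
#include <mpi.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int N = 4096;                      // assumed divisible by nprocs
    const int rows = N / nprocs;             // row slab owned by this rank
    std::vector<double> A(rank == 0 ? N * N : 0, 1.0);  // full A on root only
    std::vector<double> Aslab(rows * N), B(N * N, 1.0), Cslab(rows * N);

    // Coarse grain: distribute A by row blocks, replicate B on every node.
    MPI_Scatter(A.data(), rows * N, MPI_DOUBLE,
                Aslab.data(), rows * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Bcast(B.data(), N * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    // Fine grain: this rank's GPU multiplies its slab.
    double *dA, *dB, *dC;
    cudaMalloc(&dA, rows * N * sizeof(double));
    cudaMalloc(&dB, N * N * sizeof(double));
    cudaMalloc(&dC, rows * N * sizeof(double));
    cudaMemcpy(dA, Aslab.data(), rows * N * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B.data(), N * N * sizeof(double), cudaMemcpyHostToDevice);

    cublasHandle_t h;
    cublasCreate(&h);
    const double one = 1.0, zero = 0.0;
    // cuBLAS is column-major; computing C^T = B^T * A^T makes the row-major
    // slabs come out right without explicit transposes.
    cublasDgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, N, rows, N,
                &one, dB, N, dA, N, &zero, dC, N);
    cudaMemcpy(Cslab.data(), dC, rows * N * sizeof(double), cudaMemcpyDeviceToHost);

    // Gather the C slabs back on the root.
    std::vector<double> C(rank == 0 ? N * N : 0);
    MPI_Gather(Cslab.data(), rows * N, MPI_DOUBLE,
               C.data(), rows * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    cublasDestroy(h); cudaFree(dA); cudaFree(dB); cudaFree(dC);
    MPI_Finalize();
    return 0;
}
```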

2.
To speed up molecular dynamics simulation on symmetric multiprocessing (SMP) clusters, an MPI+TBB hybrid parallel programming model is introduced into the parallel molecular dynamics method. Based on this model, a hybrid parallel algorithm was designed and implemented in the molecular dynamics package LAMMPS: process-level parallelism across nodes uses MPI with spatial decomposition, and thread-level parallelism within a node uses TBB with critical sections. Tests on an SMP cluster show that for large systems on many nodes the method markedly reduces communication time, improving the speedup by 45% over the pure MPI model. The results indicate that the MPI+TBB hybrid parallel programming model benefits parallel molecular dynamics simulation with a clear efficiency gain.
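A minimal sketch of the layering this abstract describes (our illustration, not LAMMPS code): MPI ranks own spatial subdomains, and within a rank TBB threads share the force loop over local atoms. The Atom type, the toy force, and the elided ghost exchange are all assumptions.

```cpp
#include <mpi.h>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>

struct Atom { double x, y, z, fx, fy, fz; };

// Toy pairwise interaction; a real MD code would use neighbor lists and a
// physical potential.
static void accumulate_forces(Atom& a, const std::vector<Atom>& others) {
    for (const Atom& b : others) {
        double dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
        double r2 = dx * dx + dy * dy + dz * dz + 1e-9;  // softened
        a.fx += dx / r2; a.fy += dy / r2; a.fz += dz / r2;
    }
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    std::vector<Atom> local(1000);   // atoms in this rank's spatial subdomain
    std::vector<Atom> ghosts;        // halo atoms from neighboring subdomains
    // ... ghost exchange with neighbor ranks (MPI_Sendrecv) elided ...

    // Thread level: TBB workers split the local atoms; each iteration writes
    // only to its own atom, so this particular loop needs no locking (the
    // paper uses critical sections where threads do share updates).
    tbb::parallel_for(tbb::blocked_range<size_t>(0, local.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                accumulate_forces(local[i], ghosts);
        });

    MPI_Finalize();
    return 0;
}
```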

3.
A three-level CUDA+MPI+OpenMP parallel programming scheme is adopted: coarse-grained parallelism between nodes, fine-grained parallelism within a node, and the CUDA programming model treating the GPU as a parallel computing device. This new three-level hybrid programming scheme offers a more efficient parallel strategy for SMP clusters. The paper discusses how to set up the three-level parallel programming environment quickly, describes the multi-granularity hybrid programming method, and reports tests carried out on a multi-node cluster.

4.
Research on Hybrid Programming Models for SMP Clusters
This paper studies hybrid programming models suited to SMP clusters and divides them into two classes: OpenMP+MPI and Thread+MPI. The study shows that OpenMP+MPI is superior to Thread+MPI. On that basis, it examines the implementation mechanism of OpenMP+MPI, coarse-grained and fine-grained parallelization methods, loop selection, optimization measures, and practical caveats, and concludes that fine-grained OpenMP+MPI is a good choice of programming model for SMP clusters.
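As a concrete illustration of the fine-grained OpenMP+MPI style the study favors, here is a minimal sketch (our own, not from the paper): MPI handles data distribution and reduction across nodes, and a single OpenMP directive parallelizes the hot loop inside each process.

```cpp
#include <mpi.h>
#include <omp.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int n_local = 1 << 20;                 // elements owned by this rank
    std::vector<double> x(n_local, rank + 1.0);
    double local_sum = 0.0;

    // Fine-grained parallelization: one OpenMP directive on the hot loop,
    // leaving the surrounding MPI program structure unchanged.
    #pragma omp parallel for reduction(+ : local_sum)
    for (int i = 0; i < n_local; ++i)
        local_sum += x[i] * x[i];

    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
```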

5.
Research on a Hybrid Parallel Programming Model Based on SMP Clusters
A hybrid MPI+OpenMP parallel programming model suited to SMP clusters is proposed. The model matches the architecture of SMP clusters and combines the strengths of the message-passing and shared-memory programming models, yielding good performance. The implementation mechanism of the hybrid model and the characteristics of the MPI message-passing model are discussed. Experimental results show that, under certain conditions, this hybrid parallel programming model is the best choice for SMP clusters.

6.
For the sparse matrix-vector multiplication in parallel algebraic multigrid (AMG), this paper establishes a new distribution and data-storage scheme for sparse matrices and proposes a parallel sparse matrix-vector multiplication algorithm with minimal communication volume and communication hiding; a parallel solve-phase algorithm based on K-cycle iteration is also implemented. For modern multicore processors, a hybrid MPI+OpenMP parallel algorithm incorporating a fine-grained parallel programming model was implemented. In comparative tests against the hypre (high performance preconditioners) package solving the three-dimensional Laplace equation at a parallel scale of 512 cores, the solve-phase running time improved on hypre by 56% on the DeepComp 7000 (深腾7000) cluster and by 39% on the Yuan (元) cluster, validating the algorithm's effectiveness.
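The communication-hiding pattern this abstract describes can be sketched generically (our illustration, not the paper's algorithm or data layout): post the nonblocking halo exchange, multiply the interior rows that touch only local entries of x, then complete the exchange and finish the boundary rows. OpenMP threads share each row loop, reflecting the MPI+OpenMP layering.

```cpp
#include <mpi.h>
#include <omp.h>
#include <vector>

struct CsrMatrix {                       // standard CSR storage
    std::vector<int> row_ptr, col;
    std::vector<double> val;
};

// y[r0..r1) = (A*x)[r0..r1); OpenMP threads split the rows (thread level).
static void spmv_rows(const CsrMatrix& A, const std::vector<double>& x,
                      std::vector<double>& y, int r0, int r1) {
    #pragma omp parallel for schedule(static)
    for (int i = r0; i < r1; ++i) {
        double s = 0.0;
        for (int k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
            s += A.val[k] * x[A.col[k]];
        y[i] = s;
    }
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    CsrMatrix A;                 // this rank's block of rows (construction elided)
    std::vector<double> x, y;    // locally owned x entries plus halo entries
    int n_interior = 0;          // rows referencing only locally owned x
    int n_local = 0;             // all locally owned rows
    std::vector<MPI_Request> reqs;

    // 1) Post the nonblocking halo exchange for remote x entries
    //    (MPI_Irecv/MPI_Isend calls elided; requests collected in reqs).

    // 2) Hide the communication behind the interior SpMV.
    spmv_rows(A, x, y, 0, n_interior);

    // 3) Complete the exchange, then finish rows that need remote entries.
    MPI_Waitall((int)reqs.size(), reqs.data(), MPI_STATUSES_IGNORE);
    spmv_rows(A, x, y, n_interior, n_local);

    MPI_Finalize();
    return 0;
}
```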

7.
Based on the divide-and-conquer method for the symmetric tridiagonal eigenproblem, a multilevel hybrid parallel algorithm suited to SMP cluster environments is proposed. Within an SMP node, both coarse-grained and fine-grained OpenMP parallelism are used. To remedy the load imbalance of the pure MPI algorithm, the hybrid algorithm uses dynamic task assignment. Experiments on the DeepComp 6800 (深腾6800) show that the hybrid parallel algorithm achieves good scalability and speedup.
Keywords: SMP cluster; MPI+OpenMP; hybrid parallelism; parallel solver
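A minimal sketch of the dynamic task assignment idea (our illustration; the task sizes and the stand-in subproblem are invented): inside each MPI process, OpenMP's dynamic schedule lets idle threads pull the next unevenly sized subproblem instead of being bound to a static partition.

```cpp
#include <mpi.h>
#include <omp.h>
#include <vector>
#include <cmath>

// Stand-in for one divide-and-conquer subproblem; cost grows with size.
static double solve_subproblem(int size) {
    double s = 0.0;
    for (int i = 1; i <= size; ++i) s += std::sqrt((double)i);
    return s;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    // Uneven subproblem sizes, as arise when merging D&C subtrees.
    std::vector<int> tasks = {500, 120000, 3000, 90000, 40, 7000};
    double acc = 0.0;

    // schedule(dynamic) lets an idle thread grab the next task instead of
    // being stuck with a fixed, possibly unlucky, static partition.
    #pragma omp parallel for schedule(dynamic, 1) reduction(+ : acc)
    for (int t = 0; t < (int)tasks.size(); ++t)
        acc += solve_subproblem(tasks[t]);

    MPI_Finalize();
    return 0;
}
```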

8.
Zhou Jie, Li Wenjing. Computer Science (《计算机科学》), 2017, 44(Z11): 586-591, 595
To resolve the deadlocks that arise when MPI+OpenMP hybrid programming is used to implement synchronization in the parallelization of Petri nets on multicore clusters, a Petri net parallel algorithm based on a three-level hybrid programming model is proposed. First, exploiting the synchronization advantages of transactional memory, a three-level MPI+OpenMP+STM programming model is constructed for the multicore cluster environment. Then, the parallelization of the geometric and algebraic models of Petri nets is analyzed, a three-level MPI+OpenMP+STM Petri net parallel model is established, and the corresponding parallel algorithm is designed and analyzed. Finally, the algorithm is validated with example programs: its running efficiency is clearly better than that of the other programming schemes, and the larger the Petri net, the more pronounced the parallel gain. The algorithm is thus an efficient and feasible way to simulate the parallel execution of Petri nets on multicore clusters.

9.
To cope with the large data volumes of remote-sensing satellite imagery and the computational complexity of systematic geometric correction, a multilevel parallel algorithm for systematic geometric correction on SMP clusters is proposed. Using MPI+OpenMP hybrid programming, it achieves coarse-grained process-level parallelism between nodes and fine-grained thread-level parallelism within a node. A redundancy-based data-partitioning scheme balances the load across nodes and reduces the complexity of data localization; a parallel file system distributes the data, avoiding inter-node data movement and enabling parallel reads and writes, which further refines the algorithm's parallel granularity within each node. The algorithm was validated on an SMP cluster with nadir-camera images from the ZY-3 (资源三号) satellite. The results show that it makes full use of the cluster's computing resources and exhibits good parallel performance.

10.
This paper studies in depth the MPI+OpenMP hybrid programming model on SMP cluster systems whose compute nodes are multicore processors. Two hybrid parallel matrix-multiplication algorithms were built and compared experimentally with a pure MPI algorithm on a multicore cluster platform. The experiments show that the hybrid programs deliver better performance.

11.
The Earth Simulator (ES) is an SMP cluster system. Two types of parallel programming models are available on the ES. One is a flat programming model, in which a parallel program is implemented using only MPI, both within an SMP node and among nodes. The other is a hybrid programming model, in which a parallel program uses thread programming within an SMP node and MPI programming among nodes simultaneously. It is generally considered difficult to obtain the same high level of performance with the hybrid programming model as can be achieved with the flat programming model.

In this paper, we evaluate the scalability of a code for direct numerical simulation of the Navier–Stokes equations on the ES. The hybrid programming model achieves a sustained performance of 346.9 Gflop/s, while the flat programming model achieves 296.4 Gflop/s, with 16 PNs of the ES for a DNS problem size of 256³. For small-scale problems, however, the hybrid programming model is not as efficient because of microtasking overhead. We show that the hybrid programming model has an advantage on the ES for larger problem sizes.


12.
Hierarchical parallel programming models for multicore clusters have long been a research focus in high-performance computing. Taking SMP clusters as an example, the hardware forms a two-level architecture: inter-node and intra-node. This paper describes implementation techniques for hierarchical parallel programming and studies the parallelization of the N-body problem under a hybrid programming model. A block-synchronous, fine-grained MPI/OpenMP optimization of the N-body algorithm is proposed. On the Dawning TC5000A (曙光TC5000A) cluster, the algorithm is compared with the conventional parallel N-body algorithm in terms of execution time and speedup, and summary conclusions are drawn.
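One way the block-synchronous MPI/OpenMP structure can look for the N-body problem (a hedged sketch, not the paper's algorithm): each step, ranks exchange that step's positions once with MPI_Allgather, then OpenMP threads split the O(n²) force loop over the bodies the rank owns.

```cpp
#include <mpi.h>
#include <omp.h>
#include <vector>
#include <cmath>

struct Body { double x, y, z, fx, fy, fz; };

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int n = 1 << 12, n_local = n / nprocs;   // assume n divisible
    std::vector<double> pos(3 * n, 0.0);           // all positions (x,y,z)
    std::vector<Body> mine(n_local);               // bodies this rank owns

    // Block synchronization point: with MPI_IN_PLACE, each rank's own block
    // must already sit at its offset in pos before the call.
    MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                  pos.data(), 3 * n_local, MPI_DOUBLE, MPI_COMM_WORLD);

    // Fine grain: threads split the owned bodies; each body's force is
    // written by exactly one thread, so the loop body needs no locking.
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < n_local; ++i) {
        double fx = 0, fy = 0, fz = 0;
        for (int j = 0; j < n; ++j) {
            double dx = pos[3*j]   - mine[i].x;
            double dy = pos[3*j+1] - mine[i].y;
            double dz = pos[3*j+2] - mine[i].z;
            double r2 = dx*dx + dy*dy + dz*dz + 1e-9;  // softened
            double inv = 1.0 / (r2 * std::sqrt(r2));
            fx += dx * inv; fy += dy * inv; fz += dz * inv;
        }
        mine[i].fx = fx; mine[i].fy = fy; mine[i].fz = fz;
    }

    MPI_Finalize();
    return 0;
}
```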

13.
This article focuses on the effect of both process topology and load balancing on various programming models for SMP clusters and iterative algorithms. More specifically, we consider nested loop algorithms with constant flow dependencies that can be parallelized on SMP clusters with the aid of the tiling transformation. We investigate three parallel programming models: a popular monolithic message-passing parallel implementation, and two hybrid ones that employ both message passing and multi-threading. We conclude that the selection of an appropriate mapping topology for the mesh of processes has a significant effect on overall performance, and we provide an algorithm for specifying such an efficient topology according to the iteration space and data dependencies of the algorithm. We also propose static load-balancing techniques for distributing the computation between threads, which diminish the disadvantage of the master thread assuming all inter-process communication due to limitations often imposed by the message-passing library. Both improvements are implemented as compile-time optimizations and are further evaluated experimentally. Finally, an overall comparison of the above parallel programming styles on SMP clusters, based on micro-kernel experimental evaluation, is provided.

14.
Efficient Algorithms for a Parallel Matrix Eigenproblem Solver on SMP Cluster Systems
Tridiagonalization of a symmetric matrix and the eigensolution of the resulting symmetric tridiagonal matrix are the key steps of a parallel eigensolver for dense symmetric matrices. Targeting the multilevel architecture of SMP cluster systems, hybrid MPI+OpenMP parallel algorithms are given for Householder tridiagonalization and for the divide-and-conquer eigensolution of the tridiagonal matrix. The algorithmic work focuses on load balancing, communication overhead, and performance evaluation in the SMP cluster environment. The design of the hybrid parallel algorithms combines a coarse-grained thread-parallel mode with dynamically invoked task sharing, improving the load balance of the MPI algorithm and reducing communication overhead. Experiments on the DeepComp 6800 (深腾6800) show that the solver based on the hybrid parallel algorithms outperforms the pure MPI version in both performance and scalability.

15.
Underwater acoustic propagation models are computationally expensive, making it difficult to meet the demand for real-time, fine-grained underwater sound-propagation information. To address this, a hybrid parallel computing method for the WKBZ normal-mode model was developed using MPI+OpenMP hybrid parallel programming, realizing two-level hybrid parallel computation of the underwater sound field. By passing messages between nodes and sharing memory within a node, the method effectively overcomes the high communication overhead of the MPI parallel programming model and the limited scalability of the OpenMP parallel programming environment, and largely solves the problem of fast computation of underwater sound propagation. Test results show that the method exploits the inter-node and intra-node multilevel parallelism of SMP clusters, combines the respective strengths of the message-passing and shared-memory programming models, greatly reduces the time spent on communication between MPI processes, and effectively improves the program's scalability and parallel efficiency.

16.
In this paper, we propose a new algorithm that analyzes the data dependency pattern in the first-order linear recurrence (FOLR) and transforms it into an algebraically equivalent expanded form so that it can be processed in parallel using threads on symmetric multiprocessor (SMP) machines. The transformation aims to eliminate the data dependencies in the naive nested form of the FOLR. However, as this transformation may introduce extra multiplication operations, our algorithm examines the inherent overhead of the expanded form of the FOLR and generates a new hybrid form of the FOLR. The hybrid form combines the nested and appropriately expanded forms in order to make it suitable for parallel processing. The parallel algorithm based on the hybrid form of the FOLR is analytically examined and tested through implementation on SMP machines. Implementation details, such as workload balancing between processors and the optimization of cache performance, are also discussed. The experimental results show that the parallel algorithm based on the hybrid form of the FOLR considerably improves performance on SMP machines that have three or more processors.
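The hybrid idea can be made concrete with a sketch of one standard chunked scheme for x[i] = a[i]*x[i-1] + b[i] (our reconstruction, not necessarily the paper's exact algorithm): each thread evaluates its chunk in the cheap nested form while accumulating the chunk's expanded affine coefficients, the chunk boundary values are chained serially, and a second pass applies the correction for each chunk's true starting value.

```cpp
#include <omp.h>
#include <vector>

// Solves x[i] = a[i]*x[i-1] + b[i] with x[-1] = 0, in parallel.
void folr_hybrid(const std::vector<double>& a, const std::vector<double>& b,
                 std::vector<double>& x) {
    const int n = (int)a.size();
    const int T = omp_get_max_threads();
    std::vector<double> A(T, 1.0), B(T, 0.0);   // per-chunk affine maps

    #pragma omp parallel num_threads(T)
    {
        const int t  = omp_get_thread_num();
        const int lo = (int)((long long)n * t / T);
        const int hi = (int)((long long)n * (t + 1) / T);
        // Pass 1 (nested form): evaluate assuming x[lo-1] == 0, while also
        // accumulating the expanded coefficients A = prod(a), B = offset.
        double acc = 0.0, prod = 1.0;
        for (int i = lo; i < hi; ++i) {
            acc  = a[i] * acc + b[i];
            prod *= a[i];
            x[i] = acc;                          // provisional value
        }
        A[t] = prod; B[t] = acc;

        #pragma omp barrier
        #pragma omp single
        {   // Serial chain over T chunk maps: afterwards B[t] is the true
            // value of x at the end of chunk t.
            for (int k = 1; k < T; ++k) B[k] += A[k] * B[k - 1];
        }   // implicit barrier here

        // Pass 2: add the correction prod(a[lo..i]) * x[lo-1] to each entry.
        if (t > 0) {
            double carry = B[t - 1], scale = 1.0;
            for (int i = lo; i < hi; ++i) {
                scale *= a[i];
                x[i] += scale * carry;
            }
        }
    }
}

int main() {
    std::vector<double> a(1000, 0.99), b(1000, 1.0), x(1000);
    folr_hybrid(a, b, x);
    return 0;
}
```

The extra multiplications the abstract mentions are visible here: pass 1 maintains the running product and pass 2 rebuilds it, which is why the hybrid form only pays off once enough processors share the work.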

17.
Clusters of SMPs are hybrid-parallel architectures that combine the main concepts of distributed-memory and shared-memory parallel machines. Although SMP clusters are widely used in the high performance computing community, there exists no single programming paradigm that allows exploiting the hierarchical structure of these machines. Most parallel applications deployed on SMP clusters are based on MPI, the standard API for distributed-memory parallel programming, and thus may miss a number of optimization opportunities offered by the shared memory available within SMP nodes. In this paper we present extensions to the data parallel programming language HPF and associated compilation techniques for optimizing HPF programs on clusters of SMPs. The proposed extensions enable programmers to control key aspects of distributed-memory and shared-memory parallelization at a high-level of abstraction. Based on these language extensions, a compiler can adopt a hybrid parallelization strategy which closely reflects the hierarchical structure of SMP clusters by automatically exploiting shared-memory parallelism based on OpenMP within cluster nodes and distributed-memory parallelism utilizing MPI across nodes. We describe the implementation of these features in the VFC compiler and present experimental results which show the effectiveness of these techniques.
