Similar Articles
20 similar articles found (search time: 15 ms)
1.
This paper presents several algorithmic innovations and a hybrid programming style that lead to highly scalable performance using shared memory for a new computational fluid dynamics flow solver. This hybrid model is then converted to a strict message-passing implementation, and performance results for the two are compared. Results show that using this hybrid approach our OpenMP implementation is actually marginally faster than the MPI version, with parallel speedups of up to 599 out of 640 using OpenMP and 486 with MPI.
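The hybrid style described here typically keeps MPI for the coarse inter-node decomposition and OpenMP for loop-level work inside each rank. Below is a minimal sketch of that pattern, not the paper's solver; the problem size and loop body are placeholders.

```cpp
// Minimal MPI+OpenMP hybrid skeleton: one MPI rank per node,
// OpenMP threads share that rank's portion of the work.
#include <mpi.h>
#include <omp.h>
#include <vector>
#include <cstdio>

int main(int argc, char** argv) {
    int provided = 0;
    // FUNNELED is enough when only the master thread calls MPI.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_global = 1 << 20;              // illustrative problem size
    const int n_local  = n_global / size;      // block decomposition across ranks
    std::vector<double> u(n_local, 1.0);

    double local_sum = 0.0;
    // Threads split the rank-local loop; this is where the shared-memory
    // parallelism of the hybrid model lives.
    #pragma omp parallel for reduction(+ : local_sum)
    for (int i = 0; i < n_local; ++i) {
        u[i] = 2.0 * u[i];                     // stand-in for the real flow kernel
        local_sum += u[i];
    }

    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) std::printf("sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}
```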

2.
Dissipative particle dynamics (DPD) simulation is an important computational method for studying the hydrodynamic properties of fluids. A large-scale DPD simulation was designed and implemented on the Intel MIC platform, fully combining the characteristics of DPD simulation with the features of the MIC architecture. The key kernels for neighbor-list construction and short-range force computation were vectorized, a task-based load-balancing mechanism was adopted between the CPU and the MIC coprocessor, and load-balancing control of the number of threads within each MPI process was supported. Comparative performance analyses were carried out on a prototype program and on the LAMMPS integration. The experimental results demonstrate the effectiveness of the optimization techniques and lay a foundation for further molecular dynamics work targeting MIC many-core platforms.
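As a rough illustration of the kind of vectorization referred to here, the short-range pair force over a precomputed neighbor list can be written so the inner loop is SIMD-friendly. This is a hedged sketch: the conservative-force form and cutoff are the standard DPD ones, but the data layout and function names are assumptions, not the paper's code.

```cpp
#include <cmath>
#include <vector>

// Conservative DPD force on particle i from its neighbor list:
// F_c = a * (1 - r/rc) * r_hat for r < rc, zero otherwise.
// Structure-of-arrays layout keeps the inner loop vectorizable.
void dpd_conservative_force(int i,
                            const std::vector<double>& x,
                            const std::vector<double>& y,
                            const std::vector<double>& z,
                            const std::vector<int>& neighbors,  // neighbor indices of i
                            double a, double rc,
                            double& fx, double& fy, double& fz) {
    const double xi = x[i], yi = y[i], zi = z[i];
    const int nn = static_cast<int>(neighbors.size());
    double ax = 0.0, ay = 0.0, az = 0.0;

    #pragma omp simd reduction(+ : ax, ay, az)
    for (int k = 0; k < nn; ++k) {
        const int j = neighbors[k];
        const double dx = xi - x[j], dy = yi - y[j], dz = zi - z[j];
        const double r = std::sqrt(dx * dx + dy * dy + dz * dz);
        // Branch-free weight so the loop stays vectorized.
        const double w = (r < rc && r > 0.0) ? a * (1.0 - r / rc) / r : 0.0;
        ax += w * dx;
        ay += w * dy;
        az += w * dz;
    }
    fx = ax; fy = ay; fz = az;
}
```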

3.
Optimization techniques of the plasma turbulence simulation code GKV for improved strong scaling are presented. This work is motivated by multi-scale plasma turbulence extending over the multiple spatio-temporal scales of electrons and ions, whose simulations based on the gyrokinetic theory require huge calculations of five-dimensional (5D) computational fluid dynamics by means of spectral and finite difference methods. First, we present the multi-layer domain decomposition of the multi-dimensional and multi-species problem, and segmented MPI-process mapping on 3D torus interconnects, which fully utilizes the bisection bandwidth for data transpose and reduces conflicts among simultaneous point-to-point communications. These techniques drastically reduce the inter-node communication cost. Second, pipelined computation-communication overlaps are implemented using the OpenMP/MPI hybrid parallelization, which effectively masks the communication cost. Careful tuning of the pipeline length and of the thread granularity suppresses MPI latencies and OpenMP load imbalance and scheduling overheads, respectively. Thanks to the above optimizations, GKV achieves excellent strong scaling up to ∼600k cores with high computational performance of 782.4 TFlops (8.29% of the theoretical peak) and a high effective parallelization rate of ∼99.99994% on the K computer, which demonstrates its applicability and efficiency toward a million cores. The optimized code realizes multi-scale plasma turbulence simulations covering electron and ion scales, and reveals cross-scale interactions of electron- and ion-scale turbulence.
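The computation-communication overlap mentioned above can be sketched with non-blocking MPI: the transfer for the next pipeline stage is posted before OpenMP threads compute on the chunk that has already arrived. The sketch below is a generic illustration of that pipeline idea under assumed names and a toy kernel, not GKV's implementation.

```cpp
#include <mpi.h>
#include <omp.h>
#include <vector>

// Overlap pattern: split the local data into chunks; while chunk k is being
// exchanged with the neighbor rank, compute on chunk k-1 that already arrived.
void pipelined_exchange_and_compute(std::vector<double>& local,
                                    std::vector<double>& recv,
                                    int neighbor, int nchunks, MPI_Comm comm) {
    const int chunk = static_cast<int>(local.size()) / nchunks;

    MPI_Request reqs[2];
    // Post the first exchange before entering the pipeline.
    MPI_Isend(local.data(), chunk, MPI_DOUBLE, neighbor, 0, comm, &reqs[0]);
    MPI_Irecv(recv.data(),  chunk, MPI_DOUBLE, neighbor, 0, comm, &reqs[1]);

    for (int k = 1; k <= nchunks; ++k) {
        // Wait for the previous stage's transfer to finish...
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        // ...then post the next transfer (if any) before computing, so the
        // next stage's communication proceeds while this stage is computed.
        if (k < nchunks) {
            MPI_Isend(local.data() + k * chunk, chunk, MPI_DOUBLE, neighbor, 0, comm, &reqs[0]);
            MPI_Irecv(recv.data()  + k * chunk, chunk, MPI_DOUBLE, neighbor, 0, comm, &reqs[1]);
        }

        // Compute on the chunk whose data is now available; OpenMP threads
        // share this work while the outstanding requests progress.
        double* c = recv.data() + (k - 1) * chunk;
        #pragma omp parallel for
        for (int i = 0; i < chunk; ++i) {
            c[i] = 0.5 * (c[i] + local[(k - 1) * chunk + i]);  // stand-in kernel
        }
    }
}
```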

4.
In this paper we describe the parallelization of the multi-zone code versions of the NAS Parallel Benchmarks employing multi-level OpenMP parallelism. For our study, we use the NanosCompiler that supports nesting of OpenMP directives and provides clauses to control the grouping of threads, load balancing, and synchronization. We report the benchmark results, compare the timings with those of different hybrid parallelization paradigms (MPI+OpenMP and MLP) and discuss OpenMP implementation issues that affect the performance of multi-level parallel applications.
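Nested OpenMP parallelism of the kind studied here can be expressed with an outer parallel loop over zones and an inner parallel loop inside each zone. The following is a simplified sketch using standard OpenMP nesting controls; the zone and loop bodies are placeholders, not the NPB multi-zone code, and the NanosCompiler-specific grouping clauses are not shown.

```cpp
#include <omp.h>
#include <vector>
#include <cstdio>

int main() {
    omp_set_max_active_levels(2);          // allow two levels of parallelism

    const int nzones = 4;
    std::vector<double> zone_sum(nzones, 0.0);

    // Outer level: one thread per zone, mirroring coarse-grain
    // multi-zone parallelism.
    #pragma omp parallel for num_threads(nzones)
    for (int z = 0; z < nzones; ++z) {
        const int n = 100000;
        double s = 0.0;

        // Inner level: loop-level parallelism inside the zone.
        #pragma omp parallel for num_threads(4) reduction(+ : s)
        for (int i = 0; i < n; ++i) {
            s += static_cast<double>(i % 7);   // placeholder zone work
        }
        zone_sum[z] = s;
    }

    for (int z = 0; z < nzones; ++z)
        std::printf("zone %d: %f\n", z, zone_sum[z]);
    return 0;
}
```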

5.
To speed up molecular dynamics simulations on symmetric multiprocessing (SMP) clusters, a hybrid MPI+TBB parallel programming model is introduced into the parallel molecular dynamics method. Based on this model, a hybrid parallel algorithm is designed and implemented in the molecular dynamics package LAMMPS: process-level parallelism across nodes uses MPI with spatial decomposition, while thread-level parallelism within a node uses TBB with critical sections. Tests on an SMP cluster show that the method clearly reduces communication time for large systems and large node counts, improving the speedup by 45% over the pure MPI model. The results indicate that the MPI+TBB hybrid model benefits parallel molecular dynamics simulation with a clear efficiency gain.
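A hedged sketch of the MPI+TBB split described above: MPI handles the spatial sub-domains across nodes, while TBB's parallel_for threads the per-domain particle loop inside a rank. The force kernel and data layout are placeholders rather than the LAMMPS implementation.

```cpp
#include <mpi.h>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Particles owned by this rank's spatial sub-domain (placeholder data).
    std::vector<double> pos(100000, 1.0), force(100000, 0.0);

    // Thread-level parallelism inside the rank via TBB.
    tbb::parallel_for(tbb::blocked_range<size_t>(0, pos.size()),
                      [&](const tbb::blocked_range<size_t>& r) {
                          for (size_t i = r.begin(); i != r.end(); ++i)
                              force[i] = -2.0 * pos[i];   // stand-in force kernel
                      });

    // Process-level reduction across sub-domains via MPI.
    double local = 0.0, global = 0.0;
    for (double f : force) local += f;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0) std::printf("total force (toy): %f\n", global);

    MPI_Finalize();
    return 0;
}
```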

6.
Efficient Algorithms for a Parallel Eigenproblem Solver on SMP Cluster Systems
Tridiagonalization of a symmetric matrix and the eigenvalue solution of the resulting symmetric tridiagonal matrix are the key steps of a parallel eigensolver for dense symmetric matrices. Targeting the multi-level architecture of SMP clusters, hybrid MPI+OpenMP parallel algorithms are presented for Householder-based tridiagonalization and for the divide-and-conquer algorithm for the tridiagonal eigenvalue problem. The study focuses on load balancing, communication overhead, and performance evaluation in the SMP cluster environment. The hybrid design combines a coarse-grained thread-parallel mode with a dynamic task-sharing scheme, which improves the load balance of the MPI algorithm and reduces communication overhead. Experiments on the DeepComp 6800 show that the solver based on the hybrid algorithm achieves better performance and scalability than the pure MPI version.

7.
张丹丹, 徐莹, 徐磊. 《计算机科学》, 2012, 39(4): 296-298, 303
Several parallel programming models for heterogeneous CPU+GPU platforms are studied, and CUDA, MPI+CUDA, and MPI+OpenMP+CUDA multi-level parallel algorithms are implemented for the lattice Boltzmann method. The results show that the algorithms achieve good speedup. The proposed method of balancing the load between CPU and GPU with a workload-ratio parameter offers useful guidance for multi-level parallel processing and effective resource utilization on heterogeneous platforms.

8.
Objective: Spatial location retrieval is a key step in remote sensing image retrieval. To further improve the efficiency of locating catalog data for massive remote sensing imagery and to reduce the false detection rate, a multi-level parallel implementation of the ray casting method based on a hybrid MPI and OpenMP programming model is proposed. Methods: The traditional ray casting method is first refined to handle the cases where a point lies on a polygon edge and where the ray passes through an edge endpoint. MPI then provides program-level parallelism across machines, while OpenMP provides algorithm-level multi-threaded parallelism within a machine: multiple threads process the vertices of a polygon simultaneously to decide whether they lie inside another polygon. Results: The overall computation is fastest when the total number of threads opened across all nodes equals the optimal thread count of the master node. The hybrid parallel algorithm reduces retrieval time by more than 50% compared with the serial algorithm. Conclusion: Hybrid MPI+OpenMP parallelism runs the spatial location retrieval algorithm significantly more efficiently than serial execution, pure MPI, or pure OpenMP; the scheme applies broadly to parallel programs in cluster environments and can be extended to other image processing algorithms.
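The OpenMP level described in the Methods section can be illustrated with the standard even-odd ray casting point-in-polygon test, where the candidate points are tested concurrently. This is a generic sketch; the edge-case refinements the paper adds (point lying on an edge, ray through a vertex) are not reproduced.

```cpp
#include <omp.h>
#include <vector>

struct Point { double x, y; };

// Standard even-odd ray casting test: cast a horizontal ray from p and
// count crossings with polygon edges.
bool point_in_polygon(const Point& p, const std::vector<Point>& poly) {
    bool inside = false;
    const size_t n = poly.size();
    for (size_t i = 0, j = n - 1; i < n; j = i++) {
        const Point& a = poly[i];
        const Point& b = poly[j];
        const bool crosses = ((a.y > p.y) != (b.y > p.y)) &&
                             (p.x < (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x);
        if (crosses) inside = !inside;
    }
    return inside;
}

// OpenMP level: test many points against the polygon concurrently,
// one chunk of points per thread.
std::vector<char> points_in_polygon(const std::vector<Point>& pts,
                                    const std::vector<Point>& poly) {
    std::vector<char> result(pts.size(), 0);
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < static_cast<long>(pts.size()); ++i)
        result[i] = point_in_polygon(pts[i], poly) ? 1 : 0;
    return result;
}
```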

9.
Based on the structured-grid part of MuSiC-CCASSIM, a two-dimensional/axisymmetric high-order computational fluid dynamics method for compressible multiphase flow, a domain decomposition parallel scheme is designed. For the communication of boundary data between processors, both blocking and non-blocking parallel communication algorithms are designed, and a hybrid MPI/OpenMP parallel optimization algorithm is designed to reduce communication overhead. Tests were carried out on the Tianhe-2 supercomputer with a fixed grid size of 625*250 per core and up to 8,192 cores. The results show average parallel efficiencies of 86%, 83%, and 77% for the MPI/OpenMP hybrid algorithm, the pure MPI non-blocking algorithm, and the pure MPI blocking algorithm, respectively; all three algorithms scale well.
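For a structured grid, the difference between the blocking and non-blocking variants shows up in the boundary (halo) exchange. Below is a simplified one-dimensional-decomposition sketch of both, generic rather than the MuSiC-CCASSIM code; the array layout and neighbor ranks are assumptions.

```cpp
#include <mpi.h>
#include <vector>

// Exchange one layer of halo cells with left/right neighbors.
// u has layout [halo | interior ... interior | halo].
void halo_exchange_blocking(std::vector<double>& u, int left, int right, MPI_Comm comm) {
    const int n = static_cast<int>(u.size());
    // MPI_Sendrecv avoids deadlock but serializes the two directions.
    MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  0,
                 &u[n - 1], 1, MPI_DOUBLE, right, 0, comm, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[n - 2], 1, MPI_DOUBLE, right, 1,
                 &u[0],     1, MPI_DOUBLE, left,  1, comm, MPI_STATUS_IGNORE);
}

void halo_exchange_nonblocking(std::vector<double>& u, int left, int right, MPI_Comm comm) {
    const int n = static_cast<int>(u.size());
    MPI_Request reqs[4];
    // Post both directions at once; interior work could be overlapped here
    // before the MPI_Waitall.
    MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  1, comm, &reqs[0]);
    MPI_Irecv(&u[n - 1], 1, MPI_DOUBLE, right, 0, comm, &reqs[1]);
    MPI_Isend(&u[1],     1, MPI_DOUBLE, left,  0, comm, &reqs[2]);
    MPI_Isend(&u[n - 2], 1, MPI_DOUBLE, right, 1, comm, &reqs[3]);
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
}
```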

10.
Important components of molecular modeling applications are estimation and minimization of the internal energy of a molecule. For macromolecules such as proteins and amino acids, energy estimation is performed using empirical equations known as force fields. Over the past several decades, much effort has been directed towards improving the accuracy of these equations, and the resulting increased accuracy has come at the expense of greater computational complexity. For example, the interactions between a protein and surrounding water molecules have been modeled with improved accuracy using the generalized Born solvation model, which increases the computational complexity to O(n^3). Fortunately, many force-field calculations are amenable to parallel execution. This paper describes the steps that were required to transform the Born calculation from a serial program into a parallel program suitable for execution in both the OpenMP and MPI environments. Measurements of the parallel performance on a symmetric multiprocessor reveal that the Born calculation scales well for up to 144 processors. In some cases the OpenMP implementation scales better than the MPI implementation, and in other cases the reverse is true. However, in all cases the OpenMP implementation performs better than the MPI implementation, and requires less programming effort as well.
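Much of the force-field cost comes from all-pairs terms of this kind, and the parallelization described amounts to distributing such loops. The following is a hedged OpenMP sketch of a pairwise energy sum using a generic Coulomb-style pair term, not the actual generalized Born expression; names and data layout are assumptions.

```cpp
#include <cmath>
#include <vector>
#include <omp.h>

struct Atom { double x, y, z, q; };

// Toy pairwise energy: sum over all pairs of q_i * q_j / r_ij.
// The O(n^2) double loop is the part that benefits from threading;
// the real generalized Born term has the same loop structure but a
// more elaborate distance-dependent denominator.
double pairwise_energy(const std::vector<Atom>& atoms) {
    const long n = static_cast<long>(atoms.size());
    double energy = 0.0;

    // dynamic schedule evens out the triangular workload (row i has n-i-1 pairs).
    #pragma omp parallel for schedule(dynamic, 16) reduction(+ : energy)
    for (long i = 0; i < n; ++i) {
        for (long j = i + 1; j < n; ++j) {
            const double dx = atoms[i].x - atoms[j].x;
            const double dy = atoms[i].y - atoms[j].y;
            const double dz = atoms[i].z - atoms[j].z;
            const double r = std::sqrt(dx * dx + dy * dy + dz * dz);
            energy += atoms[i].q * atoms[j].q / r;
        }
    }
    return energy;
}
```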

11.
A Study of Hybrid Programming Models for SMP Clusters
Hybrid programming models suitable for SMP clusters are studied and divided into two classes: OpenMP+MPI and Thread+MPI. The study shows that OpenMP+MPI is superior to Thread+MPI. On that basis, the implementation mechanism of OpenMP+MPI, coarse-grained and fine-grained parallelization approaches, loop selection, optimization measures, and practical considerations are examined in detail, leading to the conclusion that fine-grained OpenMP+MPI is a good choice of programming model for SMP clusters.

12.
We propose and analyze threading algorithms for hybrid MPI/OpenMP parallelization of a molecular-dynamics simulation, which are scalable on large multicore clusters. Two data-privatization thread scheduling algorithms via nucleation-growth allocation are introduced: (1) compact-volume allocation scheduling (CVAS); and (2) breadth-first allocation scheduling (BFAS). The algorithms combine fine-grain dynamic load balancing and minimal-memory-footprint data-privatization threading. We show that the computational costs of CVAS and BFAS are bounded by Θ(n^(5/3) p^(-2/3)) and Θ(n), respectively, for p threads working on n particles on a multicore compute node. Memory consumption per node of both algorithms scales as O(n + n^(2/3) p^(1/3)), but CVAS has smaller prefactors due to a geometric effect. Based on these analyses, we derive the selection criterion between the two algorithms in terms of the granularity, n/p. We observe that memory consumption is reduced by 75% for p = 16 and n = 8,192 compared to a naïve data privatization, while maintaining thread imbalance below 5%. We obtain a strong-scaling speedup of 14.4 with 16-way threading on a node with four quad-core AMD Opteron processors. In addition, our MPI/OpenMP code achieves 2.58× and 2.16× speedups over the MPI-only implementation on 32,768 cores of BlueGene/P for 0.84 and 1.68 million particle systems, respectively.
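Data privatization in this context means each thread accumulates forces into its own buffer so no locks or atomics are needed on the shared arrays, with the buffers reduced afterwards; CVAS and BFAS differ in which particles each thread is given. The sketch below shows only the privatization-and-reduce step under assumed names; the allocation schedulers themselves are not reproduced.

```cpp
#include <omp.h>
#include <utility>
#include <vector>

// Each thread writes into its own private force array, then the arrays are
// reduced into the shared one. This trades memory (one copy per thread, the
// footprint the paper's allocation schemes try to shrink) for lock-free updates.
void accumulate_forces_privatized(int n_particles,
                                  const std::vector<std::pair<int, int>>& pairs,
                                  std::vector<double>& force /* size n_particles */) {
    const int nthreads = omp_get_max_threads();
    std::vector<std::vector<double>> priv(nthreads, std::vector<double>(n_particles, 0.0));

    #pragma omp parallel
    {
        std::vector<double>& f = priv[omp_get_thread_num()];
        #pragma omp for schedule(dynamic, 64)
        for (long k = 0; k < static_cast<long>(pairs.size()); ++k) {
            const int i = pairs[k].first, j = pairs[k].second;
            const double fij = 1.0;       // placeholder pair force magnitude
            f[i] += fij;                  // both updates go to the private copy,
            f[j] -= fij;                  // so no atomics are needed
        }
        // Implicit barrier here, then reduce the private copies into the shared array.
        #pragma omp for
        for (int i = 0; i < n_particles; ++i)
            for (int t = 0; t < nthreads; ++t)
                force[i] += priv[t][i];
    }
}
```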

13.
We discuss the computational bottlenecks in molecular dynamics (MD) and describe the challenges in parallelizing the computation-intensive tasks. We present a hybrid algorithm using MPI (Message Passing Interface) with OpenMP threads for parallelizing a generalized MD computation scheme for systems with short range interatomic interactions. The algorithm is discussed in the context of nano-indentation of Chromium films with carbon indenters using the Embedded Atom Method potential for Cr–Cr interaction and the Morse potential for Cr–C interactions. We study the performance of our algorithm for a range of MPI–thread combinations and find the performance to depend strongly on the computational task and load sharing in the multi-core processor. The algorithm scaled poorly with MPI and our hybrid schemes were observed to outperform the pure message passing scheme, despite utilizing the same number of processors or cores in the cluster. Speed-up achieved by our algorithm compared favorably with that achieved by standard MD packages.

14.
To analyze load balancing in OpenMP programs accurately, the appropriate measurement positions between synchronization points are analyzed in detail, a performance analysis unit is defined, a formula for quantifying the degree of load imbalance is given, and a method is proposed for measuring the load balance of OpenMP parallel programs with the performance analysis unit as the object of analysis. The method uses Opari to automatically instrument the OpenMP source code with POMP performance-monitoring functions, and collects per-unit load information by inserting timers into the relevant performance functions. The method has been implemented in an OpenMP performance analysis tool and effectively helps users locate load-imbalance bottlenecks in their programs.
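The measurement idea, timing each thread's work up to a synchronization point and comparing against the maximum, can be sketched directly with omp_get_wtime. The imbalance metric below, (t_max / t_mean) - 1, is one common formulation and only an assumption about the formula defined in the paper; a real tool would instrument via Opari/POMP rather than hand-inserted timers.

```cpp
#include <omp.h>
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Time each thread's work inside a parallel region and report an imbalance
// figure: (t_max / t_mean) - 1, so 0 means perfectly balanced.
int main() {
    const int nthreads = omp_get_max_threads();
    std::vector<double> t(nthreads, 0.0), sink(nthreads, 0.0);

    #pragma omp parallel
    {
        const int tid = omp_get_thread_num();
        const double t0 = omp_get_wtime();

        // Deliberately uneven work: higher thread ids do more iterations,
        // standing in for an imbalanced loop in a real program.
        double s = 0.0;
        for (long i = 0; i < 2000000L * (tid + 1); ++i)
            s += std::sqrt(static_cast<double>(i));

        t[tid] = omp_get_wtime() - t0;   // per-thread time up to the sync point
        sink[tid] = s;                   // keep the work from being optimized away
    }

    const double t_max = *std::max_element(t.begin(), t.end());
    double t_sum = 0.0;
    for (double x : t) t_sum += x;
    std::printf("load imbalance: %.1f%%\n", 100.0 * (t_max / (t_sum / nthreads) - 1.0));
    return 0;
}
```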

15.
In this paper, a new hybrid parallelisable low order algorithm, developed by the authors for multibody dynamics analysis, is implemented numerically on a distributed memory parallel computing system. The presented implementation can currently accommodate the general spatial motion of chain systems, but key issues for its extension to general tree and closed loop systems are discussed. Explicit algebraic constraints are used to increase coarse grain parallelism, and to study the influence of the dimension of system constraint load equations on the computational efficiency of the algorithm for real parallel implementation using the Message Passing Interface (MPI). The equation formulation parallelism and linear system solution strategies which are used to reduce communication overhead are addressed. Numerical results indicate that the algorithm is scalable, that significant speed-up can be obtained, and that a quasi-logarithmic relation exists between time needed for a function call and numbers of processors used. This result agrees well with theoretical performance predictions. Numerical comparisons with results obtained from independently developed analysis codes have validated the correctness of the new hybrid parallelisable low order algorithm, and demonstrated certain computational advantages.

16.
This paper introduces the MPI parallel programming environment and the characteristics of MPI parallel program design, discusses methods for achieving dynamic load balancing in MPI programs, and proposes a dynamic load balancing strategy that migrates tasks according to the computing capability and real-time load of each compute node.
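A common way to realize dynamic load balancing in plain MPI is a master-worker loop in which ranks pull the next task whenever they become idle, which naturally adapts to differences in node speed and load. The following is a minimal sketch of that pattern; the task body and the paper's capability-aware migration policy are not reproduced.

```cpp
#include <mpi.h>
#include <cstdio>

// Master-worker dynamic distribution: rank 0 hands out task ids on demand,
// so faster or less loaded ranks automatically pick up more work.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int ntasks = 1000;
    const int TAG_REQ = 1, TAG_TASK = 2;

    if (rank == 0) {
        int next = 0, done = 0;
        while (done < size - 1) {
            int dummy;
            MPI_Status st;
            // Wait for any worker to ask for work.
            MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, TAG_REQ, MPI_COMM_WORLD, &st);
            // Send the next task id, or -1 to tell the worker to stop.
            int task = (next < ntasks) ? next++ : -1;
            if (task < 0) ++done;
            MPI_Send(&task, 1, MPI_INT, st.MPI_SOURCE, TAG_TASK, MPI_COMM_WORLD);
        }
    } else {
        int processed = 0;
        while (true) {
            int dummy = 0, task = 0;
            MPI_Send(&dummy, 1, MPI_INT, 0, TAG_REQ, MPI_COMM_WORLD);
            MPI_Recv(&task, 1, MPI_INT, 0, TAG_TASK, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            if (task < 0) break;
            ++processed;                     // stand-in for the real task work
        }
        std::printf("rank %d processed %d tasks\n", rank, processed);
    }

    MPI_Finalize();
    return 0;
}
```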

17.
Mesh generation is a crucial step in computational fluid dynamics, and the higher mesh accuracy demanded by large-scale numerical simulation increases the time spent on mesh generation. Based on the mesh generation algorithms in the open-source OpenFOAM software, this paper focuses on the parallel generation of polyhedral meshes and proposes a hybrid OpenMP and MPI parallel method for polyhedral mesh generation. Theoretical analysis shows that, when generating meshes of the same quality, the hybrid parallel method generates …

18.
Research on a Hybrid Parallel Programming Model for SMP Clusters
A hybrid MPI+OpenMP parallel programming model suitable for SMP clusters is proposed. The model closely matches the architecture of SMP clusters and combines the advantages of the message-passing and shared-memory programming models, achieving good performance. The implementation mechanism of the hybrid model and the characteristics of the MPI message-passing model are discussed. Experimental results show that, under certain conditions, this hybrid parallel programming model is the best choice for SMP clusters.

19.
Zhu Zijie, Wang Yongxian, Zhu Xiaoqian, Liu Wei, Lan Qiang, Xiao Wenbin, Cheng Xinghua. The Journal of Supercomputing, 2021, 77(5): 4988-5018

The three-dimensional wedge-shaped underwater acoustic propagation model has an analytical solution, which provides a verification reference for models such as the FOR3D propagation model under certain conditions. However, solving a complex three-dimensional underwater sound field problem involves intensive computation and long calculation times. In this paper, we exploit a hybrid parallel programming model combining MPI and OpenMP to accelerate the computation, design various optimization methods to improve overall performance, and then carry out performance and optimization analysis on the Tianhe-2 platform. Experiments show that the optimized implementation of the three-dimensional wedge-shaped underwater acoustic propagation model achieves a 46.5× speedup over the original serial program, a substantial performance improvement. We also carried out scalability tests and parallel optimization experiments on large-scale practical examples.


20.
Based on the divide-and-conquer method for the symmetric tridiagonal eigenproblem, a multi-level hybrid parallel algorithm suited to SMP cluster environments is proposed. Within an SMP node, both coarse-grained and fine-grained OpenMP parallelism are used for the solution. To improve the load imbalance of the pure MPI algorithm, the hybrid parallel algorithm uses a dynamic task allocation method. Experiments on the DeepComp 6800 show that the hybrid parallel algorithm achieves good scalability and speedup. Keywords: SMP cluster; MPI+OpenMP; hybrid parallelism; parallel solver
