petaPar: A Highly Scalable and Fault-Tolerant Meshfree/Particle Code and Its Testing
Authors: Li Leisheng, Tian Rong
Affiliation: 1. Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; 2. University of Chinese Academy of Sciences, Beijing 100049, China
摘    要:petaPar 粒子模拟程序面向千万亿次级计算,在统一框架下实现两种广受关注的粒子模拟算法:光滑粒子流体动力学(Smoothed Particle Hydrodynamics,SPH)和物质点法(Material Point Method,MPM)。代码支持多种材料模型、强度模型和失效模型,适合模拟大变形、高应变率和流固耦合问题。支持纯 MPI 和 MPI+X 混合两种并行模型。系统具有可容错性,支持无人值守变进程重启。在Titan 上测试表明,petaPar 可线性扩展到 26 万 CPU 核,SPH 和 MPM 算法并行效率相对 8 192 核分别为 87% 和 90%。

Keywords: petascale computing  particle simulation  SPH  MPM  scalability  fault tolerance
Received: 2013-06-18

A Scalable and Fault Tolerant Particle Simulation Code
Authors: Li Leisheng, Tian Rong
Affiliation: 1. Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; 2. University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: Powered by petaflop supercomputers, numerical simulation is stepping into a completely new era, and a new generation of simulation codes is expected to exploit the parallelism of hundreds of thousands of processor cores. petaPar targets petascale particle simulation on petaflop systems. It unifies two of the most popular and powerful particle methods, Smoothed Particle Hydrodynamics (SPH) and the Material Point Method (MPM), within a single framework. The code supports a number of material models, strength models, and failure models, and is suitable for large deformation, high strain rate, and fluid-solid interaction problems. The parallel implementation supports both flat MPI and MPI+X hybrid parallel models. The code is highly fault tolerant in the sense that it supports unattended process restart from any time step. Scalability tests on Titan show that the code scales linearly up to 260K CPU cores and delivers 87% and 90% parallel efficiency relative to 8,192 CPU cores for MPM and SPH, respectively.
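The "MPI+X" hybrid model mentioned in the abstract usually means MPI ranks distributed across nodes with threads (for example, OpenMP) inside each rank. The sketch below shows that general pattern only and is not petaPar's actual implementation: an OpenMP-parallel update of rank-local particles, followed by an MPI reduction standing in for the halo/ghost-particle exchange a real code would perform; all array sizes and constants are invented for the example.

    // Hypothetical sketch of an MPI+OpenMP ("MPI+X") hybrid particle update loop.
    #include <mpi.h>
    #include <omp.h>
    #include <cstdio>
    #include <vector>

    int main(int argc, char** argv) {
        // Request thread support so OpenMP threads can coexist with MPI.
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        // Each rank owns a subdomain's worth of particles (size is illustrative).
        std::vector<double> velocity(100000, 0.0);
        const double dt = 1.0e-3, accel = 9.8;

        // "X" level: OpenMP threads update the rank-local particles in parallel.
        #pragma omp parallel for
        for (long i = 0; i < (long)velocity.size(); ++i)
            velocity[i] += accel * dt;

        // MPI level: a real code would exchange ghost particles between ranks
        // here; a simple global reduction stands in for that communication.
        double local_sum = 0.0, global_sum = 0.0;
        for (double v : velocity) local_sum += v;
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("ranks=%d threads=%d sum=%g\n",
                        nranks, omp_get_max_threads(), global_sum);
        MPI_Finalize();
        return 0;
    }

Such a sketch would be built with an MPI compiler wrapper and OpenMP enabled (for example, mpicxx -fopenmp hybrid.cpp), launched with mpirun, and the thread count per rank controlled via OMP_NUM_THREADS.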
Keywords: petascale computing  particle simulation  SPH  MPM  scalability  fault tolerance