     


MPI + MPI: a new hybrid approach to parallel programming with MPI plus shared memory
Authors:Torsten Hoefler  James Dinan  Darius Buntinas  Pavan Balaji  Brian Barrett  Ron Brightwell  William Gropp  Vivek Kale  Rajeev Thakur
Affiliation:1. ETH Zurich, Zurich, Switzerland
2. Argonne National Laboratory, Argonne, IL, USA
3. Sandia National Laboratories, Albuquerque, NM, USA
4. University of Illinois at Urbana-Champaign, Urbana, IL, USA
Abstract:Hybrid parallel programming with the message passing interface (MPI) for internode communication in conjunction with a shared-memory programming model to manage intranode parallelism has become a dominant approach to scalable parallel programming. While this model provides a great deal of flexibility and performance potential, it saddles programmers with the complexity of using two parallel programming systems in the same application. We introduce an MPI-integrated shared-memory programming model that is incorporated into MPI through a small extension to the one-sided communication interface. We discuss the integration of this interface with the MPI 3.0 one-sided semantics and describe solutions for providing portable and efficient data sharing, atomic operations, and memory consistency. We describe an implementation of the new interface in both MPICH2 and Open MPI and demonstrate an average performance improvement of 40% in the communication component of a five-point stencil solver.
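
The sketch below illustrates the style of MPI-3 shared-memory window usage the abstract refers to: ranks on the same node obtain a node-local communicator with MPI_Comm_split_type, allocate a jointly accessible segment with MPI_Win_allocate_shared, locate each other's portions with MPI_Win_shared_query, and order direct load/store accesses with MPI_Win_sync and a barrier. This is a minimal illustration assuming any MPI-3-compliant implementation (e.g., MPICH2 or Open MPI, as mentioned above), not code taken from the paper; error handling is omitted.

/* Minimal sketch of MPI-3 shared-memory windows (not from the paper). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Split COMM_WORLD into communicators whose ranks share memory. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Each rank contributes one double to a node-local shared segment. */
    double *my_slot;
    MPI_Win win;
    MPI_Win_allocate_shared(sizeof(double), sizeof(double), MPI_INFO_NULL,
                            node_comm, &my_slot, &win);

    /* Store directly into our own slot, then synchronize so that
       other ranks' loads observe it. */
    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    *my_slot = 100.0 + node_rank;
    MPI_Win_sync(win);          /* make our store visible */
    MPI_Barrier(node_comm);     /* order all stores before any loads */
    MPI_Win_sync(win);          /* pick up the other ranks' stores */

    /* Query the base address of the left neighbor's slot and load from it. */
    int left = (node_rank + node_size - 1) % node_size;
    MPI_Aint seg_size;
    int disp_unit;
    double *left_slot;
    MPI_Win_shared_query(win, left, &seg_size, &disp_unit, &left_slot);
    printf("rank %d sees %.1f from rank %d\n", node_rank, *left_slot, left);

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}

Compiled with an MPI C compiler wrapper (e.g., mpicc) and run with several ranks per node, each rank prints the value stored by its left neighbor, read through a plain pointer dereference rather than an MPI communication call.
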
Keywords:
This article is indexed by SpringerLink and other databases.