Using hybrid MPI and OpenMP programming to optimize communications in parallel loop self-scheduling schemes for multicore PC clusters
Authors:Chao-Chin Wu  Lien-Fu Lai  Chao-Tung Yang  Po-Hsun Chiu
Affiliation:1. Department of Computer Science and Information Engineering, National Changhua University of Education, Changhua City, 500, Taiwan
2. High-Performance Computing Laboratory, Department of Computer Science and Information Engineering, Tunghai University, Taichung, 40704, Taiwan
Abstract: Recently, a series of parallel loop self-scheduling schemes have been proposed, especially for heterogeneous cluster systems. However, these schemes employ the MPI programming model to construct applications without considering whether the computing nodes have multicore architectures. As a result, every processor core has to communicate directly with the master node to request new tasks, even though cores on the same node can communicate with one another through the underlying shared memory. To address this communication overhead, in this paper we propose a hybrid MPI and OpenMP programming model to design two-level parallel loop self-scheduling schemes. At the first level, each computing node runs a single MPI process for inter-node communication. At the second level, each processor core runs an OpenMP thread to execute the iterations assigned to its resident node. Experimental results show that our method outperforms previous schemes.
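The sketch below illustrates the kind of two-level scheme the abstract describes, not the authors' actual implementation. One MPI process per node repeatedly requests a chunk of loop iterations from the master, then splits that chunk across the node's cores with OpenMP. The chunk size, iteration count, tags, and the compute() loop body are all illustrative assumptions.

```c
/* Minimal sketch of two-level parallel loop self-scheduling, assuming a
 * simple fixed-chunk master/worker protocol (not the paper's exact scheme).
 * Level 1: one MPI process per node requests iteration chunks from rank 0.
 * Level 2: the chunk is split across the node's cores with OpenMP.
 * TOTAL_ITERS, CHUNK_SIZE, and compute() are hypothetical placeholders. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define TOTAL_ITERS 100000   /* hypothetical total loop iterations */
#define CHUNK_SIZE  1000     /* hypothetical chunk granted per request */
#define TAG_REQ  1
#define TAG_WORK 2

static double compute(int i) {        /* placeholder loop body */
    return (double)i * 0.5;
}

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                          /* master: hands out chunks */
        int next = 0, active = size - 1, range[2];
        while (active > 0) {
            MPI_Status st;
            MPI_Recv(NULL, 0, MPI_INT, MPI_ANY_SOURCE, TAG_REQ,
                     MPI_COMM_WORLD, &st);
            if (next < TOTAL_ITERS) {         /* grant iterations [start, end) */
                range[0] = next;
                range[1] = (next + CHUNK_SIZE < TOTAL_ITERS)
                               ? next + CHUNK_SIZE : TOTAL_ITERS;
                next = range[1];
            } else {                          /* no work left: signal stop */
                range[0] = range[1] = -1;
                active--;
            }
            MPI_Send(range, 2, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                     MPI_COMM_WORLD);
        }
    } else {                                  /* one worker MPI process per node */
        int range[2];
        double local_sum = 0.0;
        for (;;) {
            MPI_Send(NULL, 0, MPI_INT, 0, TAG_REQ, MPI_COMM_WORLD);
            MPI_Recv(range, 2, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            if (range[0] < 0) break;          /* master signalled completion */
            /* second level: spread the chunk over this node's cores */
            #pragma omp parallel for schedule(dynamic) reduction(+:local_sum)
            for (int i = range[0]; i < range[1]; i++)
                local_sum += compute(i);
        }
        printf("rank %d done, partial sum %f\n", rank, local_sum);
    }

    MPI_Finalize();
    return 0;
}
```

Under this structure, only one process per node talks to the master over the network; intra-node work distribution happens through shared memory, which is the communication saving the abstract refers to. Compile with an MPI wrapper and OpenMP enabled, e.g. mpicc -fopenmp, and launch one MPI process per node.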