Similar Documents
19 similar documents found (search time: 125 ms)
1.
Existing GPU-accelerated implementations of the High-Performance Linpack benchmark (HPL) generally use a dynamic load-balancing algorithm driven by measured computing capability. That algorithm performs poorly on single-node multi-GPU platforms, however, because each GPU's share of the computation there is small and the aggregate performance gap between the GPUs and the CPU is large. To address this, an experience-guided dynamic load-balancing algorithm and a multi-GPU self-adaptive load-balancing algorithm are proposed and validated on a single-node multi-GPU platform, showing a 6.3% speedup over the existing HPL for NVIDIA Fermi GPUs.
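A minimal sketch of the adaptive idea (hypothetical code, not the paper's implementation): each device's share of the next trailing-matrix update is set in proportion to the throughput it achieved on the previous one.

```cuda
// Hypothetical sketch: proportional rebalancing for HPL-style updates.
#include <vector>

struct DeviceStats {
    double cols_done;   // columns processed in the last iteration
    double seconds;     // wall time spent on them
};

// Returns each device's fraction of the next iteration's columns,
// proportional to its measured throughput (columns per second).
std::vector<double> rebalance(const std::vector<DeviceStats>& devs) {
    std::vector<double> frac(devs.size());
    double total = 0.0;
    for (size_t i = 0; i < devs.size(); ++i) {
        frac[i] = devs[i].cols_done / devs[i].seconds;  // throughput
        total += frac[i];
    }
    for (double& f : frac) f /= total;  // normalize so fractions sum to 1
    return frac;
}
```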

2.
The N-body problem, one of the fundamental problems of mechanics, studies the laws of motion of N mutually interacting particles. Taking two widely used parallel N-body codes, the molecular dynamics package LAMMPS and the astrophysical N-body code Gadget-2, this paper analyzes their basic algorithms and implementations and discusses general approaches for porting these two representative parallel codes to GPUs and other accelerators.

3.
Accelerating the Computation of Protein Molecular Fields with a GPU Cluster
To address the enormous computational cost of computing protein molecular fields with quantum-chemical methods in biochemistry, a GPU cluster system was built to accelerate quantum-chemistry-based protein molecular field calculations. The system uses the Message Passing Interface (MPI) to connect cluster nodes, OpenMP as the multi-core CPU programming environment, and CUDA as the GPU programming environment, and implements an optimized parallel architecture in which the GPU and the multi-core CPU on each node compute cooperatively. While maintaining high numerical accuracy, the combined MPI, OpenMP, and CUDA hybrid programming model greatly improves system performance; molecular field simulations of proteins of different types and sizes were computed and analyzed. Compared with the corresponding CPU cluster, single-GPU, and single-CPU implementations, the GPU cluster substantially improves the efficiency of high-resolution simulations of complex protein molecular fields, with an average speedup 7.5 times that of the CPU cluster.
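The three-level layering described above (MPI across nodes, OpenMP across CPU cores, CUDA on the GPU) can be sketched as follows; this is a generic illustration of the hybrid model with a placeholder kernel, not the paper's quantum-chemistry code. Compile with nvcc plus an MPI wrapper and OpenMP enabled.

```cuda
#include <mpi.h>
#include <omp.h>
#include <cuda_runtime.h>
#include <cstdio>

__global__ void gpu_part(float* x, int n) {          // GPU share of the work
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int ngpu; cudaGetDeviceCount(&ngpu);
    cudaSetDevice(rank % ngpu);                      // one GPU per MPI rank
    const int n = 1 << 20;
    float* d; cudaMalloc(&d, n * sizeof(float));
    gpu_part<<<(n + 255) / 256, 256>>>(d, n);        // GPU slice, asynchronous
    float cpu_sum = 0.0f;                            // placeholder CPU work,
    #pragma omp parallel for reduction(+ : cpu_sum)  // done while GPU runs
    for (int i = 0; i < n; ++i) cpu_sum += 1.0f;
    cudaDeviceSynchronize();                         // join the GPU slice
    MPI_Allreduce(MPI_IN_PLACE, &cpu_sum, 1, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0) printf("done, cpu_sum=%f\n", cpu_sum);
    cudaFree(d);
    MPI_Finalize();
}
```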

4.
On a heterogeneous parallel architecture combining a multi-core central processing unit (CPU) and a graphics processing unit (GPU), a protein molecular dynamics simulation program based on the AMBER force field was implemented with OpenMP and the Compute Unified Device Architecture (CUDA). By appropriately dividing the program into single-threaded CPU, multi-threaded CPU, and multi-threaded GPU parts, the machine's processing power is used efficiently. Performance tests show that, relative to an optimized serial CPU implementation, the multi-core CPU-GPU heterogeneous parallel computing model has a large performance advantage; in particular, moving the force calculation, which accounts for 90% of total execution time, onto the GPU yields a speedup of up to 12x.
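As a generic illustration of the offloaded force loop (not the paper's AMBER implementation), a minimal non-bonded kernel assigns one thread per atom and accumulates pairwise Lennard-Jones forces; a production code would add bonded terms, electrostatics, cutoffs, and neighbor lists.

```cuda
#include <cuda_runtime.h>

// Illustrative O(N^2) Lennard-Jones force kernel, one thread per atom.
__global__ void lj_forces(const float4* pos, float4* force, int n,
                          float eps, float sigma2) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 pi = pos[i];
    float fx = 0.f, fy = 0.f, fz = 0.f;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pi.x - pos[j].x;
        float dy = pi.y - pos[j].y;
        float dz = pi.z - pos[j].z;
        float r2 = dx * dx + dy * dy + dz * dz;
        float s2 = sigma2 / r2;                  // (sigma/r)^2
        float s6 = s2 * s2 * s2;                 // (sigma/r)^6
        float f = 24.f * eps * s6 * (2.f * s6 - 1.f) / r2;  // F/|r|
        fx += f * dx; fy += f * dy; fz += f * dz;
    }
    force[i] = make_float4(fx, fy, fz, 0.f);
}
```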

5.
Molecular dynamics simulation is commonly used to study the thermodynamic properties of crystalline silicon. Because the interatomic interactions are modeled with complex many-body potentials, such simulations carry a high computational load, which limits the accessible time and length scales. Graphics processing units (GPUs), with their massively multi-threaded parallelism, are well suited to compute-intensive workloads and show great potential for molecular dynamics simulation. Exploiting the GPU hardware architecture to extend the time and length scales of molecular dynamics simulations of solid covalent crystalline silicon is therefore important for studying its heat-conduction mechanisms. Based on a molecular dynamics algorithm for solid covalent crystalline silicon, a fixed-neighbor algorithm designed and optimized for GPU platforms is proposed. Data-structure and branch-structure optimizations address the cost of global memory accesses and branching in the fixed-neighbor algorithm, reducing memory traffic and branch divergence, and a revised thread-scheduling scheme achieves high-performance parallel execution on the GPU, effectively addressing the computational load. Experimental results show that the double-precision fixed-neighbor algorithm achieves a speedup of 11.62 over double-precision LAMMPS simulations of solid crystalline silicon, while the double- and single-precision fixed-neighbor algorithms achieve speedups of 9.39 and 12.18, respectively, over double-precision HOOMD-blue simulations.
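A hedged sketch of what a fixed-neighbor layout can look like (my illustration, not the paper's code): every atom gets the same padded number of neighbor slots, so each thread runs a fixed-trip-count loop with no divergence, and a slot-major layout keeps warp accesses coalesced. The per-pair work here is a simple distance sum standing in for the real many-body potential.

```cuda
#include <cuda_runtime.h>

#define MAX_NEIGHBORS 16   // few neighbors in covalent silicon; pad to fixed size

// nbr is laid out [slot][atom] so threads in a warp reading the same slot
// access consecutive atoms: a coalesced pattern. Empty slots hold the
// atom's own index, so their contribution is zero without a branch.
__global__ void sum_neighbor_distances(const float4* pos, const int* nbr,
                                       float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 pi = pos[i];
    float acc = 0.f;
    for (int s = 0; s < MAX_NEIGHBORS; ++s) {   // fixed trip count, no divergence
        int j = nbr[s * n + i];
        float dx = pi.x - pos[j].x;
        float dy = pi.y - pos[j].y;
        float dz = pi.z - pos[j].z;
        acc += sqrtf(dx * dx + dy * dy + dz * dz);  // zero when j == i
    }
    out[i] = acc;
}
```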

6.
On a heterogeneous parallel platform consisting of a cluster and GPUs, the calculation of charge distributions in molecular dynamics simulations based on the ABEEMσπ model was implemented with a hybrid MPI+CUDA programming model. The computational part of the charge-distribution solver was moved onto the GPU, and the problems of high communication overhead and under-utilized resources were addressed by asynchronous, concurrent execution on the heterogeneous platform, improving solver efficiency. Performance tests show that, compared with a pure MPI parallel algorithm, the optimized GPU-accelerated heterogeneous parallel algorithm has a clear performance advantage for computing the charge distributions of large chemical molecular models.
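The asynchronous concurrency mentioned above can be sketched with CUDA streams (a generic pattern, not the paper's ABEEMσπ code): chunked host-device transfers on alternating streams overlap with kernel execution, hiding communication behind computation. The kernel body is a placeholder, and true overlap requires pinned host memory.

```cuda
#include <cuda_runtime.h>

__global__ void solve_chunk(float* q, int n) {       // placeholder charge update
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) q[i] *= 0.5f;
}

// Processes the array in 'chunks' pieces on two streams, so chunk k+1's
// transfer overlaps chunk k's kernel. h_q must be pinned (cudaMallocHost).
void pipelined_solve(float* h_q, float* d_q, int n, int chunks) {
    cudaStream_t s[2];
    cudaStreamCreate(&s[0]); cudaStreamCreate(&s[1]);
    int m = n / chunks;                              // assumes chunks divides n
    for (int k = 0; k < chunks; ++k) {
        cudaStream_t st = s[k % 2];
        cudaMemcpyAsync(d_q + k * m, h_q + k * m, m * sizeof(float),
                        cudaMemcpyHostToDevice, st);
        solve_chunk<<<(m + 255) / 256, 256, 0, st>>>(d_q + k * m, m);
        cudaMemcpyAsync(h_q + k * m, d_q + k * m, m * sizeof(float),
                        cudaMemcpyDeviceToHost, st);
    }
    cudaStreamSynchronize(s[0]); cudaStreamSynchronize(s[1]);
    cudaStreamDestroy(s[0]); cudaStreamDestroy(s[1]);
}
```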

7.
Molecular dynamics (MD) simulation is the main method for studying the thermodynamic properties of silicon nano-films, but it involves large data volumes, intensive computation, and complex interatomic interaction models, which limit its deeper application. To address the wasted parallel resources and thread stalls caused by non-contiguous data accesses and heavy branching in crystalline-silicon MD simulation algorithms, an MD algorithm is designed around the architectural characteristics of the NVIDIA Tesla V100 GPU. Through optimizations such as coalesced global memory accesses, loop unrolling, and atomic operations, the GPU's massive parallelism and floating-point throughput are exploited to reduce memory traffic, branch divergence, and conditional instructions, improving overall performance. Test results show that the optimized crystalline-silicon MD algorithm achieves speedups of 1.69-1.97 over the unoptimized version, 3.20-3.47 over HOOMD-blue, and 17.40-38.04 over LAMMPS, two mainstream GPU-accelerated MD packages, demonstrating good acceleration.
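The three named optimizations (coalesced access, loop unrolling, atomic operations) can be shown together in one small kernel; this is an illustrative pattern, not the paper's simulation code, and the histogram task and parameter names are hypothetical.

```cuda
#include <cuda_runtime.h>

// Consecutive threads read consecutive elements (coalesced), the compiler
// unrolls the grid-stride loop, and concurrent bin updates use atomics
// instead of serialization. Assumes energies are non-negative.
__global__ void energy_histogram(const float* energy, int n,
                                 float* bins, int nbins, float bin_w) {
    int stride = gridDim.x * blockDim.x;
    #pragma unroll 4
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride) {
        int b = min((int)(energy[i] / bin_w), nbins - 1);
        atomicAdd(&bins[b], energy[i]);
    }
}
```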

8.
Molecular dynamics simulation, an important computational means of obtaining the properties of liquids and solids, is widely applied in chemistry, physics, biology, medicine, materials science, and many other fields. The complexity and accuracy requirements of the simulated systems make the computations enormous and time-consuming, and parallel computing is an important way to accelerate large-scale molecular dynamics simulation. With hundreds of GFlops or even TFlops of computing power, GPUs offer a new acceleration option for compute-intensive applications such as molecular dynamics. This paper proposes a GPU-based parallel molecular dynamics algorithm, oApT-AD, implemented under both OpenCL and CUDA. Performance tests show that on a Tesla C1060 card, the OpenCL implementation achieves up to a 120x speedup over the serial CPU implementation, and its CUDA performance is essentially on par with OpenCL. The algorithm also extends to two or more GPUs with good scalability.

9.
As a typical application of high-performance scientific computing, GPU-accelerated molecular dynamics (MD) simulation has been a hot topic in computational chemistry since 2007. This paper surveys the features and progress of MD packages with GPU support, focusing on the single- and multi-GPU performance of three representative packages: Amber, GROMACS, and ACEMD. The results show that, for the same number of GPU cards, a single node outperforms multiple nodes, and a desktop workstation with several GPU cards is a relatively cost-effective configuration for MD simulation. The paper also examines the accuracy of single- and double-precision GPU-accelerated MD results against CPU results, finding the GPU results to be generally trustworthy. Finally, the current state of GPU-parallel MD simulation is summarized and future developments are discussed.

10.
Molecular dynamics simulation models the motion of microscopic molecular and atomic systems in time and space, and is a powerful way to understand macroscopic properties from their microscopic origins. Addressing how to improve the performance of parallel molecular dynamics simulation, this paper takes the well-known package GROMACS as an example, analyzes its parallelization strategies, and, combining key molecular dynamics principles with benchmark cases, proposes strategies for optimizing performance in an MPI+OpenMP parallel environment, providing theoretical and practical guidance for achieving optimal simulation performance on parallel systems. Guidance, with theory and examples, is also given on choosing the combination of MPI ranks, OpenMP threads, and GPUs for best performance in heterogeneous GPU environments.

11.
GPUs and integrated CPU-GPU architectures, with their powerful parallel processing capability and programmable pipelines, have become a research focus in the database field. To fully exploit the parallel computing power of heterogeneous platforms and improve query performance in column stores, this paper, building on an analysis of heterogeneous platform architectures, first proposes ICMD (Improved CMD), a data-partitioning strategy for joins on multi-threaded GPU platforms in which the GPU's streaming multiprocessors process the joins of the subspaces in parallel; a task-evaluation and assignment model then distributes the query load dynamically so that query operators execute efficiently in parallel on the multi-core CPU and the GPU. The ICMD join algorithm is further optimized with an on-chip global synchronization mechanism and local-memory reuse. Tests on the SSB benchmark show that parallel join queries on an Intel HD Graphics 4600 platform run 35% faster than the CPU version and 18% faster than Ocelot, a GPU query engine.
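The partition step that any CMD-style join strategy rests on can be sketched as a GPU histogram-then-scatter pass (a generic radix partition, not the paper's ICMD code); each resulting partition's join can then run independently on a streaming multiprocessor.

```cuda
#include <cuda_runtime.h>

// Pass 1: histogram of partition sizes (partition = key mod p).
__global__ void count_partitions(const int* keys, int n, int* hist, int p) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(&hist[keys[i] % p], 1);
}

// Pass 2: scatter keys into their partitions. 'offsets' must hold the
// exclusive prefix sum of 'hist', computed between the two passes (e.g.
// with thrust::exclusive_scan); atomicAdd hands out slots within each
// partition. Keys are assumed non-negative.
__global__ void scatter_partitions(const int* keys, int n,
                                   int* offsets, int* out, int p) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        int slot = atomicAdd(&offsets[keys[i] % p], 1);
        out[slot] = keys[i];
    }
}
```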

12.
CPU/GPU heterogeneous systems have great development potential; in-depth study of parallel optimization on CPU/GPU platforms can maximize the overall computing power of such systems. Optimizing the division of tasks between CPU and GPU balances their loads, improves resource utilization, and shortens execution time; optimizing the division of work among GPU threads makes the GPU itself faster. Together these improve overall system performance.

13.
Gadget is a simulation application for N-body and smoothed particle hydrodynamics (SPH) problems in cosmology, and it is widely applied to a range of cosmological problems. The N-body method models the motion of N mutually interacting particles, and SPH is a fluid simulation algorithm that studies fluid motion through particle simulation. Most prior work has focused on accelerating Gadget on multi-core CPUs or on graphics processing units (GPUs) alone; such approaches do not achieve CPU-GPU hybrid computing and leave CPU computing resources largely idle. In this paper, we propose a CPU-GPU hybrid parallel strategy to accelerate Gadget-2, a massively parallel structure formation code for cosmological simulations. This strategy uses both the CPU and the GPU to compute the short-range force. To keep the CPU and GPU workloads balanced, a dynamic task allocation scheme is proposed based on the measured performance difference between the CPU and the GPU. Experimental results show that, for particle counts in the millions, our CPU-GPU hybrid parallel strategy achieved an overall speedup of 18.6 and a speedup of 28.35 for the short-range force calculation compared with a single-core CPU implementation. Moreover, compared with a GPU platform containing 12 CPU cores and one GPU, the hybrid strategy improved overall and short-range-force performance by 6% and 20%, respectively. The hybrid strategy also scales well: its advantage grows with problem size. Its limitation is that the performance gain shrinks as the ratio of CPU cores to GPU cards decreases. Finally, under our hybrid strategy, CPU utilization improved by at least 17.14%.
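The dynamic task allocation scheme can be reduced to one idea (a hypothetical sketch, not Gadget-2's code): split each short-range force pass at index `split`, giving the GPU particles [0, split) and the CPU the rest, then move the split toward the measured speed ratio so both sides finish together.

```cuda
// Hypothetical split update: given the time the GPU took on its 'split'
// particles and the CPU on the remaining n - split, choose the next split
// so that both sides are expected to finish at the same time.
int update_split(int split, int n, double t_gpu, double t_cpu) {
    double r_gpu = split / t_gpu;          // GPU throughput, particles/s
    double r_cpu = (n - split) / t_cpu;    // CPU throughput, particles/s
    return (int)(n * r_gpu / (r_gpu + r_cpu));
}
```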

14.
The use of Graphics Processing Units (GPUs) for high-performance computing has gained growing momentum in recent years. Unfortunately, GPU programming platforms like the Compute Unified Device Architecture (CUDA) are complex and user-unfriendly, and they increase the difficulty of developing high-performance parallel applications. In addition, the runtime systems that execute those applications often fail to fully utilize the parallelism of modern CPU-GPU systems: typically, parallel kernels run entirely on the most powerful device available, leaving other devices idle. These observations sparked research in two directions: (1) high-level approaches to software development for GPUs, which strike a balance between performance and ease of programming; and (2) task partitioning to fully utilize the available devices. In this paper, we propose a framework, called PSkel, that provides a single high-level abstraction for stencil programming on heterogeneous CPU-GPU systems, while allowing the programmer to partition and assign data and computation to both the CPU and the GPU. Our current implementation uses parallel skeletons to transparently leverage Intel Threading Building Blocks and NVIDIA CUDA. In our experiments, we observed that parallel applications with task partitioning can improve average performance by up to 76% and 28% compared with CPU-only and GPU-only parallel applications, respectively.
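Independent of PSkel's actual API (not reproduced here), the underlying task-partitioning idea for a stencil sweep can be sketched as follows: the GPU takes the first part of the domain and CPU threads take the rest, with both sides reading the same immutable input. Function and parameter names are mine, not the framework's.

```cuda
#include <cuda_runtime.h>

// GPU half of one sweep: averages interior points in [1, hi).
__global__ void stencil_gpu(const float* in, float* out, int hi, int n) {
    int i = 1 + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < hi && i < n - 1)
        out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0f;
}

// One sweep partitioned between devices: GPU computes [1, split), CPU
// threads compute [split, n-1). Both sides read the full immutable input
// (h_in and d_in hold the same data), so no halo exchange is needed within
// a sweep. Assumes 1 <= split <= n-1; compile with OpenMP enabled.
void partitioned_sweep(const float* d_in, float* d_out,
                       const float* h_in, float* h_out, int n, int split) {
    stencil_gpu<<<(split + 255) / 256, 256>>>(d_in, d_out, split, n);  // async
    #pragma omp parallel for
    for (int i = split; i < n - 1; ++i)                 // CPU half, overlapped
        h_out[i] = (h_in[i - 1] + h_in[i] + h_in[i + 1]) / 3.0f;
    cudaDeviceSynchronize();                            // join the GPU half
    cudaMemcpy(h_out + 1, d_out + 1, (split - 1) * sizeof(float),
               cudaMemcpyDeviceToHost);                 // merge GPU results
}
```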

15.
In this paper, we present two new parallel algorithms to solve large instances of the independent-task scheduling problem. First, we describe a parallel version of the Min–min heuristic. Second, we present GraphCell, an advanced parallel cellular genetic algorithm (CGA) for the GPU, together with two new generic recombination operators that exploit the GPU's massive parallelism. A speedup study shows the high performance of the parallel Min–min algorithm on the GPU versus several CPU versions of the algorithm (both sequential and parallel using multiple threads). GraphCell improves state-of-the-art solutions, especially for larger problems, and provides an alternative to our GPU Min–min heuristic when more accurate solutions are needed, at the expense of increased runtime.
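For reference, the Min–min heuristic itself in plain sequential form (a sketch of the textbook algorithm, not the paper's parallel version): repeatedly pick the unassigned task whose earliest completion time is smallest and assign it to the machine that achieves it.

```cuda
#include <vector>
#include <limits>

// Sequential Min-min: etc[t][m] is the estimated time to compute task t on
// machine m; ready[m] is machine m's current finish time. Returns the
// chosen machine per task. The parallel version accelerates this search.
std::vector<int> min_min(const std::vector<std::vector<double>>& etc,
                         int machines) {
    int tasks = (int)etc.size();
    std::vector<double> ready(machines, 0.0);
    std::vector<int> assign(tasks, -1);
    std::vector<bool> done(tasks, false);
    for (int step = 0; step < tasks; ++step) {
        int bt = -1, bm = -1;
        double best = std::numeric_limits<double>::max();
        for (int t = 0; t < tasks; ++t) {         // min over tasks...
            if (done[t]) continue;
            for (int m = 0; m < machines; ++m) {  // ...of min completion time
                double c = ready[m] + etc[t][m];
                if (c < best) { best = c; bt = t; bm = m; }
            }
        }
        done[bt] = true; assign[bt] = bm; ready[bm] = best;
    }
    return assign;
}
```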

16.
Reduction is widely used in scientific computing, image processing, and other fields, and is one of the basic parallel algorithms, so accelerating it matters. To fully exploit the computing power of GPUs on heterogeneous platforms, this paper proposes reduction optimizations at three levels: within a thread, within a work-group, and across work-groups. Breaking with prior work that concentrated its optimization effort on intra-work-group reduction, it argues that intra-thread reduction is the real bottleneck. Experimental results show that, across data sizes, the proposed reduction achieves speedups of 3.91-15.93 and 2.97-20.24 over the carefully optimized CPU version of the OpenCV library on the AMD W8000 and NVIDIA Tesla K20M platforms, respectively; speedups of 2.25-5.97 and 1.25-1.75 over the CUDA and OpenCL versions of OpenCV on the NVIDIA Tesla K20M; and a speedup of 1.24-5.15 over the OpenCL version on the AMD W8000. The work achieves not only high reduction performance on GPU platforms but also performance portability across different GPUs.
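The three levels map directly onto CUDA terms (the paper uses OpenCL, where a work-group corresponds to a thread block). In the hedged sketch below, the grid-stride thread-level loop consumes almost all of the input, which is exactly why that level dominates.

```cuda
#include <cuda_runtime.h>

// Three-level sum reduction (blockDim.x must be a power of two; *out must
// be zeroed before launch).
__global__ void reduce_sum(const float* in, int n, float* out) {
    extern __shared__ float sh[];
    // Level 1: thread-level reduction over a grid-stride range.
    float acc = 0.0f;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        acc += in[i];
    sh[threadIdx.x] = acc;
    __syncthreads();
    // Level 2: intra-block (work-group) tree reduction in shared memory.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) sh[threadIdx.x] += sh[threadIdx.x + s];
        __syncthreads();
    }
    // Level 3: inter-block (inter-work-group) combination, one atomic each.
    if (threadIdx.x == 0) atomicAdd(out, sh[0]);
}
// Launch: reduce_sum<<<blocks, threads, threads * sizeof(float)>>>(in, n, out);
```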

17.
The widespread adoption of traditional heterogeneous systems has substantially improved the available computing power and, at the same time, raised optimisation issues related to processing task streams across both CPU and GPU cores. Mirroring this development, cloud computing has started to add heterogeneity support, typically through GPU instances, to conventional CPU-based cloud resources. This optimisation of cloud resources can have real impact when running on-demand, computationally intensive applications. In this work, we investigate the scaling of pattern-based parallel applications from physical, "local" mixed CPU/GPU clusters to a public cloud CPU/GPU infrastructure. Such parallel patterns are deployed via algorithmic skeletons, which exploit a particular parallel behaviour while hiding implementation details. We propose a systematic methodology that exploits approximated analytical performance/cost models, together with an integrated programming framework suitable for targeting both local and remote resources, to support offloading computations from structured parallel applications to heterogeneous cloud resources, so that performance levels not attainable on local resources alone may actually be achieved with remote resources. The amount of remote resources necessary to achieve a given performance target is calculated through the performance models, allowing any user to hire exactly the cloud resources needed to reach that target. It is therefore expected that such models can also be used to devise the optimal proportion of computations to allocate to different remote nodes for Big Data computations. We present experiments run with a proof-of-concept implementation based on FastFlow, on small departmental clusters as well as on a public CPU/GPU cloud infrastructure using the Amazon Elastic Compute Cloud. In particular, we show how CPU-only and mixed CPU/GPU computations can be offloaded to remote cloud resources with predictable performance, and how data-intensive applications can be mapped to a mix of local and remote resources to guarantee optimal performance.

18.
Conflict detection for control tower simulators is a time-consuming parallel algorithm that places a heavy burden on the CPU of the simulation system's core server. Building on common conflict-detection algorithms, this paper proposes a CUDA-based implementation of tower-simulator conflict detection. It first introduces the architectural basis of GPU parallel computing and ports a hierarchical conflict-detection algorithm, based on Kalman-filter target tracking, to the GPU; it then compares results on a CPU and a GPU of the same price. Experiments show that the GPU implementation is 10-50 times faster than the CPU implementation of the same algorithm, greatly relieving the core server and qualitatively improving tower simulator performance.
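The core detection step is an all-pairs proximity test over predicted positions (a generic sketch, not the paper's hierarchical algorithm; names are mine). Each thread scans the targets after its own and records the first predicted separation below the threshold.

```cuda
#include <cuda_runtime.h>

// Flags targets whose predicted positions (e.g. from the Kalman-filter
// extrapolation) come within a separation threshold of another target.
// flag[i] == 0 means no conflict; otherwise it holds 1 + the index of the
// first conflicting partner. Only thread i writes flag[i], so no atomics
// are needed. sep2 is the squared separation threshold.
__global__ void detect_conflicts(const float3* pred, int n, float sep2,
                                 int* flag) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float3 p = pred[i];
    flag[i] = 0;
    for (int j = i + 1; j < n; ++j) {
        float dx = p.x - pred[j].x;
        float dy = p.y - pred[j].y;
        float dz = p.z - pred[j].z;
        if (dx * dx + dy * dy + dz * dz < sep2) { flag[i] = j + 1; return; }
    }
}
```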

19.
Face tracking is an important computer vision technology that has been widely adopted in many areas, from cell phone applications to industrial robots. In this paper, we introduce a novel way to parallelize a face contour detection application based on the color-entropy preprocessed Chan–Vese model utilizing a total variation G-norm. This particular application is a complicated, unsupervised computational method requiring a large amount of calculation, and several core parts are difficult to parallelize because of heavily correlated data processing among iterations and pixels. We develop a novel approach to parallelize these data-dependent core parts and significantly improve the runtime performance of the model computation. We implement the parallelized program in OpenCL for both multi-core CPUs and GPUs. For 640 * 480 input images, the parallelized program on an NVIDIA GTX970 GPU, an NVIDIA GTX660 GPU, and an AMD FX8530 8-core CPU is on average 18.6, 12.0, and 4.40 times faster, respectively, than its single-thread C version on the AMD FX8530 CPU. Some parallelized routines show much higher gains than the whole program; for instance, on the NVIDIA GTX970 GPU, the parallelized entropy filter routine is on average 74.0 times faster than its single-thread C version on the AMD FX8530 8-core CPU. We discuss the parallelization methodologies in detail, including scalability, thread models, and synchronization methods for both multi-core CPUs and GPUs.
