Similar Documents
1.
Molecular dynamics simulation, an important computational means of obtaining the properties of liquids and solids, is widely used in chemistry, physics, biology, medicine, materials science, and many other fields. The complexity of the simulated systems and the demand for accuracy make the computations enormous and time-consuming. Parallel computing is an important way to accelerate large-scale molecular dynamics simulations. With computing power of hundreds of GFlops or even TFlops, GPUs offer a new acceleration option for compute-intensive applications such as molecular dynamics simulation. This paper proposes a GPU-based parallel algorithm for molecular dynamics simulation, oApT-AD, and implements it under both the OpenCL and CUDA frameworks. Performance tests show that on a Tesla C1060 card, the OpenCL implementation of the algorithm achieves a speedup of up to 120 over a serial CPU implementation. Comparison shows that the algorithm's performance under CUDA is essentially equivalent to that under OpenCL. The algorithm also extends to two or more GPUs with good scalability.
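The abstract gives no implementation details of oApT-AD itself; as a minimal sketch of the kind of compute-intensive kernel such GPU MD codes parallelize, here is a hypothetical brute-force Lennard-Jones force kernel in CUDA (all names and parameters are illustrative, not from the paper):

#include <cuda_runtime.h>

// Hypothetical brute-force Lennard-Jones force kernel: one thread per
// particle, O(N^2) pair loop; eps and sigma2 = sigma^2 are potential
// parameters. Real MD codes add cutoffs and neighbor lists.
__global__ void ljForces(const float4* pos, float4* force, int n,
                         float eps, float sigma2) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 pi = pos[i];
    float fx = 0.f, fy = 0.f, fz = 0.f;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pi.x - pos[j].x;
        float dy = pi.y - pos[j].y;
        float dz = pi.z - pos[j].z;
        float r2 = dx*dx + dy*dy + dz*dz;
        float s2 = sigma2 / r2;                              // (sigma/r)^2
        float s6 = s2 * s2 * s2;                             // (sigma/r)^6
        float f  = 24.f * eps * s6 * (2.f * s6 - 1.f) / r2;  // -dU/dr / r
        fx += f * dx; fy += f * dy; fz += f * dz;
    }
    force[i] = make_float4(fx, fy, fz, 0.f);
}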

2.
The subset‐sum problem is a well‐known non‐deterministic polynomial‐time complete (NP‐complete) decision problem. This paper proposes a novel and efficient implementation of a parallel two‐list algorithm for solving the problem on a graphics processing unit (GPU) using Compute Unified Device Architecture (CUDA). The algorithm is composed of a generation stage, a pruning stage, and a search stage. It is not easy to effectively implement the three stages of the algorithm on a GPU. Ways to achieve better performance, reasonable task distribution between CPU and GPU, effective GPU memory management, and CPU–GPU communication cost minimization are discussed. The generation stage of the algorithm adopts a typical recursive divide‐and‐conquer strategy. Because recursion cannot be well supported by current GPUs with compute capability less than 3.5, a new vector‐based iterative implementation mechanism is designed to replace the explicit recursion. Furthermore, to optimize the performance of the GPU implementation, this paper improves the three stages of the algorithm. The experimental results show that the GPU implementation has much better performance than the CPU implementation and can achieve high speedup on different GPU cards. The experimental results also illustrate that the improved algorithm can bring significant performance benefits for the GPU implementation. Copyright © 2014 John Wiley & Sons, Ltd.
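As a rough illustration of how explicit recursion in the generation stage can be replaced by iteration, here is a hypothetical CUDA sketch of doubling a subset-sum list one item at a time; the buffer names and ping-pong scheme are assumptions, not the paper's actual vector-based mechanism:

#include <cuda_runtime.h>
#include <utility>

// Hypothetical iterative list doubling: adding item i doubles the list
// of partial subset sums, one thread per existing entry, no recursion.
__global__ void doubleList(const long long* in, long long* out,
                           int curLen, long long w) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= curLen) return;
    out[k]          = in[k];        // subsets that exclude item i
    out[k + curLen] = in[k] + w;    // subsets that include item i
}

// Host loop: listA starts as {0} (the empty subset); both device buffers
// must hold 2^half entries. After `half` rounds, listA holds every sum.
void generate(long long* listA, long long* listB,
              const long long* w, int half) {
    int len = 1;
    for (int i = 0; i < half; ++i) {
        doubleList<<<(len + 255) / 256, 256>>>(listA, listB, len, w[i]);
        std::swap(listA, listB);    // ping-pong the buffers
        len *= 2;
    }
    cudaDeviceSynchronize();
}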

3.
Graphics processing units (GPUs) that were originally designed for graphics rendering have emerged as massively-parallel “co-processors” to the central processing unit (CPU). Small-footprint multi-GPU workstations with hundreds of processing elements can accelerate compute-intensive simulation science applications substantially. In this study, we describe the implementation of an incompressible flow Navier–Stokes solver for multi-GPU workstation platforms. A shared-memory parallel code with identical numerical methods is also developed for multi-core CPUs to provide a fair comparison between CPUs and GPUs. Specifically, we adopt NVIDIA’s Compute Unified Device Architecture (CUDA) programming model to implement the discretized form of the governing equations on a single GPU. Pthreads are then used to enable communication across multiple GPUs on a workstation. We use separate CUDA kernels to implement the projection algorithm to solve the incompressible fluid flow equations. Kernels are implemented on different memory spaces on the GPU depending on their arithmetic intensity. The memory hierarchy specific implementation produces significantly faster performance. We present a systematic analysis of speedup and scaling using two generations of NVIDIA GPU architectures and provide a comparison of single and double precision computational performance on the GPU. Using a quad-GPU platform for single precision computations, we observe two orders of magnitude speedup relative to a serial CPU implementation. Our results demonstrate that multi-GPU workstations can serve as a cost-effective small-footprint parallel computing platform to accelerate computational fluid dynamics (CFD) simulations substantially.
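As an illustration of the memory-hierarchy-aware kernel style described above, the following hypothetical CUDA sketch stages a grid tile in shared memory for one Jacobi sweep of the pressure Poisson equation arising in the projection step; the layout, names, and solver choice are illustrative, not the paper's code:

// Launch with dim3 block(TILE, TILE) and grid(nx/TILE, ny/TILE);
// nx and ny are assumed to be multiples of TILE.
#define TILE 16

__global__ void jacobiPressure(const float* p, const float* rhs,
                               float* pNew, int nx, int ny, float h2) {
    __shared__ float tile[TILE + 2][TILE + 2];   // TILE x TILE plus halo
    int i = blockIdx.x * TILE + threadIdx.x;
    int j = blockIdx.y * TILE + threadIdx.y;
    int li = threadIdx.x + 1, lj = threadIdx.y + 1;
    tile[lj][li] = p[j * nx + i];                // interior load
    // edge threads also load the halo, clamped at the domain boundary
    if (threadIdx.x == 0)        tile[lj][0]        = p[j * nx + max(i - 1, 0)];
    if (threadIdx.x == TILE - 1) tile[lj][TILE + 1] = p[j * nx + min(i + 1, nx - 1)];
    if (threadIdx.y == 0)        tile[0][li]        = p[max(j - 1, 0) * nx + i];
    if (threadIdx.y == TILE - 1) tile[TILE + 1][li] = p[min(j + 1, ny - 1) * nx + i];
    __syncthreads();
    // 5-point Jacobi update reads only on-chip shared memory
    pNew[j * nx + i] = 0.25f * (tile[lj][li - 1] + tile[lj][li + 1] +
                                tile[lj - 1][li] + tile[lj + 1][li] -
                                h2 * rhs[j * nx + i]);
}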

4.
Developed on the basis of the theory of surfaces, the high accuracy surface modeling method (HASM) can build digital elevation models with high accuracy, but it requires solving the large-scale linear systems that arise from discretizing partial differential equations; the enormous computational cost severely limits its application to large data sets. Meanwhile, the development of modern GPU technology has made GPUs increasingly popular for general-purpose computing acceleration. To speed up HASM, this paper combines high accuracy surface modeling with general-purpose GPU computing and proposes a GPU-accelerated HASM method. Both the finite-difference discretization in the HASM simulation process and the solution of the resulting large-scale linear system are decomposed onto the GPU: the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods split the solve into independent tasks that can be processed in parallel, with large numbers of threads running concurrently, each executing an independent task, thereby fully exploiting the powerful general-purpose computing capability of modern GPUs. The parallelized, accelerated high accuracy surface modeling algorithm was implemented with NVIDIA's Compute Unified Device Architecture (CUDA) on a Quadro 2000 GPU. The algorithm was evaluated in numerical experiments and in digital elevation model (DEM) simulation experiments on a real project area. The results show that, while preserving the same surface simulation accuracy, the GPU-accelerated HASM method achieves more than an order of magnitude speedup over the traditional CPU method.
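Besides the matrix-vector product, every CG/PCG iteration needs dot products, which decompose naturally into the kind of independent parallel tasks the abstract mentions. A hypothetical CUDA sketch of the standard block reduction (the names and the fixed 256-thread block size are assumptions):

// *result must be zeroed before launch; launch with 256 threads/block.
__global__ void dotKernel(const float* a, const float* b,
                          float* result, int n) {
    __shared__ float partial[256];
    int tid = threadIdx.x;
    float s = 0.f;
    for (int i = blockIdx.x * blockDim.x + tid; i < n;
         i += gridDim.x * blockDim.x)
        s += a[i] * b[i];                  // grid-stride partial sums
    partial[tid] = s;
    __syncthreads();
    for (int stride = 128; stride > 0; stride >>= 1) {  // tree reduction
        if (tid < stride) partial[tid] += partial[tid + stride];
        __syncthreads();
    }
    if (tid == 0) atomicAdd(result, partial[0]);  // one add per block
}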

5.
张丹丹, 徐莹, 徐磊. Computer Science (《计算机科学》), 2012, 39(4): 296-298, 303
Several parallel programming models for heterogeneous CPU+GPU platforms are studied, and multilevel parallel algorithms using CUDA, MPI+CUDA, and MPI+OpenMP+CUDA are implemented for the lattice Boltzmann method. The results show that the algorithms achieve good speedup; the proposed method of balancing the load between CPU and GPU by tuning a computation-ratio parameter provides a useful reference for implementing multilevel parallelism and effective resource utilization on heterogeneous platforms.
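A minimal sketch of ratio-based CPU/GPU load splitting of the kind described, assuming a tunable fraction gpuShare and a placeholder per-site update rather than the actual lattice Boltzmann collision step:

#include <omp.h>
#include <cuda_runtime.h>

__global__ void relaxGpu(float* f, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) f[i] = 0.5f * f[i];        // placeholder site update
}

void hybridStep(float* devF, float* hostF, int n, float gpuShare) {
    int nGpu = (int)(n * gpuShare);       // GPU's share of the lattice sites
    relaxGpu<<<(nGpu + 255) / 256, 256>>>(devF, nGpu);  // asynchronous launch
    #pragma omp parallel for              // CPU threads handle the rest
    for (int i = nGpu; i < n; ++i)
        hostF[i] = 0.5f * hostF[i];
    cudaDeviceSynchronize();              // join before exchanging halo data
}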

6.
The one‐step leapfrog alternating‐direction‐implicit finite‐difference time‐domain (ADI‐FDTD) method, free from the Courant‐Friedrichs‐Lewy (CFL) stability condition and sub‐step computations, is efficient when dealing with fine‐grid problems. However, solving the numerous tridiagonal systems still imposes a great computational burden and makes the method hard to execute in parallel. In this paper, we propose an efficient graphics processing unit (GPU)‐based parallel implementation of the one‐step leapfrog ADI‐FDTD for far‐field EM scattering simulation of objects, in which we present and analyze schemes for dividing the calculation area and allocating threads, and propose a data layout transformation of the z components to achieve a better memory access pattern, a key factor affecting GPU execution efficiency. A simulation experiment is carried out to verify the accuracy and efficiency of the GPU‐based implementation. The simulation results show good agreement between the proposed one‐step leapfrog ADI‐FDTD method and Yee's FDTD in solving the far‐field scattering problem, and substantial performance gains when the method is accelerated using GPU technology.
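As a generic illustration of why such a data layout transformation matters, the following hypothetical CUDA fragment contrasts array-of-structures with structure-of-arrays field storage; with SoA, consecutive threads read consecutive addresses, so a warp's loads coalesce (the field names are made up, not the paper's):

// With AoS, thread i reads f[i].ez at a stride of sizeof(FieldAoS);
// with SoA, a warp reads 32 consecutive floats in one transaction.
struct FieldAoS { float ex, ey, ez, hx, hy, hz; };         // strided access

struct FieldSoA { float *ex, *ey, *ez, *hx, *hy, *hz; };   // coalesced access

__global__ void updateEzSoA(FieldSoA f, int n, float c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) f.ez[i] += c * (f.hx[i] - f.hy[i]);  // placeholder update
}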

7.
Efficient CUDA Implementation Techniques for the RSA Algorithm
As a new computing architecture supporting general-purpose computation on GPUs, CUDA (Compute Unified Device Architecture) has been widely applied to large-scale data-parallel computing. RSA is a compute-intensive public-key cryptographic algorithm. This paper presents techniques for an efficient parallel CUDA implementation of RSA, the key being the introduction of a large number of independent, concurrent Montgomery modular multiplication threads; the specific thread organization, data storage structures, and shared-memory-based performance optimizations are also given. Using this CUDA implementation, the computational performance and throughput of RSA were measured on a particular GPU. The experimental results show that, compared with a general-purpose CPU implementation of RSA, the CUDA implementation achieves a performance speedup of more than 40x.
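A hypothetical single-word sketch of the Montgomery modular multiplication each concurrent thread performs, and of the one-thread-per-exponentiation organization the abstract describes; real RSA uses multi-word (1024-bit or larger) operands, and every name here is illustrative:

// Montgomery multiplication with R = 2^32. Assumes the modulus n is odd
// and n < 2^31 so the 64-bit accumulation below cannot overflow;
// n0inv = -n^{-1} mod 2^32 and rr = R^2 mod n are precomputed on the host.
__device__ unsigned int montMul32(unsigned int a, unsigned int b,
                                  unsigned int n, unsigned int n0inv) {
    unsigned long long t = (unsigned long long)a * b;
    unsigned int m = (unsigned int)t * n0inv;              // m = t * n' mod R
    unsigned long long u = (t + (unsigned long long)m * n) >> 32;
    return (u >= n) ? (unsigned int)(u - n) : (unsigned int)u;
}

// One independent modular exponentiation per thread (square-and-multiply
// in the Montgomery domain), mirroring the thread organization described.
__global__ void modexpBatch(const unsigned int* msg, unsigned int* out,
                            unsigned int e, unsigned int n,
                            unsigned int n0inv, unsigned int rr, int count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;
    unsigned int base = montMul32(msg[i], rr, n, n0inv);   // to Montgomery form
    unsigned int acc  = montMul32(1u, rr, n, n0inv);       // Montgomery one
    for (unsigned int exp = e; exp; exp >>= 1) {
        if (exp & 1u) acc = montMul32(acc, base, n, n0inv);
        base = montMul32(base, base, n, n0inv);
    }
    out[i] = montMul32(acc, 1u, n, n0inv);                 // leave Montgomery form
}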

8.
Using the finite-difference time-domain (FDTD) method and Compute Unified Device Architecture (CUDA) technology, the propagation characteristics of electromagnetic waves in left‐handed materials (LHM) are studied in this paper. The LHM slab was matched to free space and the secondary focusing phenomenon of the LHM was simulated; comparison against the serial FDTD program showed that the method has high accuracy. The phase compensation effect and the inverse Snell effect of the LHM were also examined using the CUDA-based parallel FDTD method, further confirming that the results are consistent with theoretical studies. Comparing the calculation time of the traditional FDTD program with that of the CUDA-based parallel FDTD program shows that the latter is considerably more efficient, making this parallel method an efficient way to study LHM.

9.
Sorting is a very important task in computer science and becomes a critical operation for programs making heavy use of sorting algorithms. General‐purpose computing has been successfully used on Graphics Processing Units (GPUs) to parallelize some sorting algorithms. Two GPU‐based implementations of quicksort have been presented in the literature: GPU‐quicksort, a compute‐unified device architecture (CUDA) iterative implementation, and the CUDA dynamic parallel (CDP) quicksort, a recursive implementation provided by NVIDIA Corporation. We propose CUDA‐quicksort, an iterative GPU‐based implementation of the sorting algorithm. CUDA‐quicksort has been designed starting from GPU‐quicksort. Unlike GPU‐quicksort, it uses atomic primitives to perform inter‐block communications while ensuring an optimized access to the GPU memory. Experiments performed on six sorting benchmark distributions show that CUDA‐quicksort is up to four times faster than GPU‐quicksort and up to three times faster than CDP‐quicksort. An in‐depth analysis of the performance between CUDA‐quicksort and GPU‐quicksort shows that the main improvement is related to the optimized GPU memory access rather than to the use of atomic primitives. Moreover, in order to assess the advantages of using CUDA dynamic parallelism, we implemented a recursive version of CUDA‐quicksort. Experimental results show that CUDA‐quicksort is faster than the CDP‐quicksort provided by NVIDIA, with better performance achieved using the iterative implementation. Copyright © 2015 John Wiley & Sons, Ltd.
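A hypothetical sketch of the atomic-primitive pattern for inter-block communication that the abstract credits: cooperating blocks partition a sequence around a pivot by reserving output slots with atomicAdd on two global counters (this is the generic technique, not CUDA-quicksort's actual code):

// Counters must be zeroed before launch. Every thread classifies one
// element against the pivot and reserves a unique output slot with
// atomicAdd, letting all blocks cooperate within a single pass.
__global__ void partitionStep(const int* in, int* out, int n, int pivot,
                              int* leftCount, int* rightCount) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int v = in[i];
    if (v < pivot)
        out[atomicAdd(leftCount, 1)] = v;             // fill left side forward
    else
        out[n - 1 - atomicAdd(rightCount, 1)] = v;    // fill right side backward
}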

10.
The numerical solution of shallow water systems is useful for several applications related to geophysical flows, but the large dimensions of the domains suggest the use of powerful accelerators to obtain numerical results in reasonable times. This paper addresses how to speed up the numerical solution of a first order well-balanced finite volume scheme for 2D one-layer shallow water systems by using modern Graphics Processing Units (GPUs) supporting the NVIDIA CUDA programming model. An algorithm which exploits the potential data parallelism of this method is presented and implemented using the CUDA model in single and double floating point precision. Numerical experiments show the high efficiency of this CUDA solver in comparison with a CPU parallel implementation of the solver and with respect to a previously existing GPU solver based on a shading language.

11.
We present initial comparative performance results for the Intel many integrated core (MIC), Sandy Bridge (SB), and graphical processing unit (GPU) architectures. A 1D explicit electrostatic particle‐in‐cell code is used to simulate a two‐stream instability in plasma. We compare the computation times for various numbers of cores/threads and compiler options. The parallelization is implemented via OpenMP with a maximum thread number of 128. Parallelization and vectorization on the GPU are achieved by modifying the code syntax for compatibility with CUDA. We assess the speedup due to various auto‐vectorization and optimization-level compiler options. Our results show that the MIC is several times slower than SB for a single thread, and it becomes faster than SB when the number of cores increases with vectorization switched on. The compute times for the GPU are consistently about six to seven times faster than those for the MIC. Compared with SB, the GPU is about two times faster for a single thread and about an order of magnitude faster for 128 threads. The net speedups for the MIC and GPU, however, are almost the same. An initial attempt to offload parts of the code to the MIC coprocessor shows that there is an optimal number of threads where the speedup reaches a maximum. Copyright © 2014 John Wiley & Sons, Ltd.

12.
A parallel implementation via CUDA of the dynamic programming method for the knapsack problem on an NVIDIA GPU is presented. A GTX 260 card with 192 cores (1.4 GHz) is used for computational tests, and processing times obtained with the parallel code are compared with those of the sequential code on a CPU with an Intel Xeon 3.0 GHz processor. The results show a speedup factor of 26 for large problem instances. Furthermore, in order to limit communication between the CPU and the GPU, a compression technique is presented which significantly decreases memory occupancy.
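A hypothetical sketch of the data-parallel structure of 0/1-knapsack dynamic programming: for each item, every capacity entry of the new DP row depends only on the previous row, so one thread per capacity works while the host loops over items (the names are illustrative, and the paper's compression technique is not shown):

__global__ void dpRow(const int* prev, int* cur, int capacity,
                      int weight, int value) {
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c > capacity) return;
    int best = prev[c];                            // skip the item
    if (c >= weight && prev[c - weight] + value > best)
        best = prev[c - weight] + value;           // take the item
    cur[c] = best;
}

// Host side: ping-pong the two rows over all items.
// for (int i = 0; i < nItems; ++i) {
//     dpRow<<<(capacity + 256) / 256, 256>>>(d_prev, d_cur,
//                                            capacity, w[i], v[i]);
//     std::swap(d_prev, d_cur);
// }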

13.
Classification using Ant Programming is a challenging data mining task which demands a great deal of computational resources when handling data sets of high dimensionality. This paper presents a new parallelization approach for an existing multi-objective Ant Programming model for classification, using GPUs and the NVIDIA CUDA programming model. The computational costs of the different steps of the algorithm are evaluated and it is discussed how best to parallelize them. The features of both the CPU-parallel and GPU versions of the algorithm are presented. An experimental study is carried out to evaluate the performance and efficiency of the interpreter of the rules, and reports the execution times and speedups with respect to varying population size, complexity of the rules mined, and dimensionality of the data sets. Experiments measure the original single-threaded times and the new multi-threaded CPU and GPU times with different numbers of GPU devices. The results are reported in terms of the number of giga GP operations per second of the interpreter (up to 10 billion GPops/s) and the speedup achieved (up to 834x vs. CPU, 212x vs. 4-threaded CPU). The proposed GPU model is demonstrated to scale efficiently to larger datasets and to multiple GPU devices, which allows the expansion of its applicability to significantly more complicated data sets, previously unmanageable by the original algorithm in reasonable time.

14.
This paper addresses the speedup of the numerical solution of shallow-water systems in 2D domains by using modern graphics processing units (GPUs). A first order well-balanced finite volume numerical scheme for 2D shallow-water systems is considered. The potential data parallelism of this method is identified and the scheme is efficiently implemented on GPUs for one-layer shallow-water systems. Numerical experiments performed on several GPUs show the high efficiency of the GPU solver in comparison with a highly optimized implementation of a CPU solver.

15.
陈颖, 林锦贤, 吕暾. Journal of Computer Applications (《计算机应用》), 2011, 31(3): 851-855
With the substantial improvement in graphics processing unit (GPU) performance and the development of its programmability, many algorithms have been successfully ported to the GPU. LU decomposition and the Laplace algorithm are at the core of scientific computing but are often computationally expensive, so a method of accelerating their computation on the GPU is proposed. The two algorithms are implemented using NVIDIA's Compute Unified Device Architecture (CUDA) programming model; by dividing tasks between the CPU and the GPU, while also exploiting the GP...

16.
For microdosimetric calculations, event-by-event Monte Carlo (MC) methods are considered the most accurate. Their main shortcoming is the extensive computational time they require. In this work we present an event-by-event MC code of low-projectile-energy electron and proton tracks for accelerated microdosimetric MC simulations on a graphics processing unit (GPU). Additionally, a hybrid implementation scheme was realized by employing OpenMP and CUDA in such a way that both the GPU and the multi-core CPU are utilized simultaneously. The two implementation schemes were tested and compared with the sequential single-threaded MC code on the CPU. The performance comparison was established on the speedup for a set of benchmark cases of electron and proton tracks. A maximum speedup of 67.2 was achieved with the GPU-based MC code, and a further improvement of up to 20% in speedup was achieved with the hybrid approach. The results indicate the capability of our CPU–GPU implementation to accelerate MC microdosimetric calculations of both electron and proton tracks without loss of accuracy.

17.
The hybrid implicit‐explicit (HIE) finite‐difference time‐domain (FDTD) method with the convolutional perfectly matched layer (CPML) is extended to a full three‐dimensional scheme in this article. To better demonstrate the application of the CPML, the entire derivation process is presented, in which the fine-scale structure is changed from the y‐direction to the z‐direction of propagation. Numerical examples are used to verify the efficiency and accuracy of the proposed method. The numerical results show that HIE‐FDTD with CPML truncation has a relative reflection error similar to that of the FDTD method with CPML, and much better than methods with the Mur absorbing boundary. Even when the Courant‐Friedrichs‐Lewy number climbs to 8, the maximum relative error of the proposed HIE‐CPML remains below -71 dB, and the CPU time is nearly 72.1% less than that of FDTD‐CPML. As an example, a low‐pass filter is simulated using the FDTD‐CPML and HIE‐CPML methods. The curves obtained by the two methods agree closely, with maximum errors below -79 dB. Furthermore, the CPU time is much lower, amounting to only 26.8% of that of the FDTD‐CPML method for the same example.

18.
Wang–Landau sampling is implemented on the Graphics Processing Unit (GPU) with the Compute Unified Device Architecture (CUDA). Performance on three different GPU cards, including the new-generation Fermi architecture card, is compared with that on a Central Processing Unit (CPU). The parameters for massively parallel Wang–Landau sampling are tuned in order to achieve fast convergence. For simulations of water cluster systems, we obtain an average speedup of over 50 for a given workload.
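A hypothetical sketch of massively parallel Wang–Landau sampling with binned energies: many walkers propose moves, apply the acceptance rule on the running ln g(E) estimate, and update the shared density-of-states and histogram arrays with atomics (the move set and all names are toy illustrations, not the paper's water-cluster code):

#include <curand_kernel.h>

// rng states are assumed pre-initialized with curand_init;
// lnG (running ln g(E)) and hist live in global memory.
__global__ void wlSweep(curandState* rng, int* ebin, float* lnG,
                        unsigned int* hist, int nBins, float lnF,
                        int nWalkers) {
    int w = blockIdx.x * blockDim.x + threadIdx.x;
    if (w >= nWalkers) return;
    curandState s = rng[w];
    int cur  = ebin[w];
    int prop = min(nBins - 1, max(0, cur + (int)(curand(&s) % 3u) - 1)); // +-1 bin
    // accept with probability min(1, g(cur)/g(prop))
    if (logf(curand_uniform(&s)) < lnG[cur] - lnG[prop]) cur = prop;
    atomicAdd(&lnG[cur], lnF);     // refine ln g(E) by the modification factor
    atomicAdd(&hist[cur], 1u);     // histogram used for the flatness check
    ebin[w] = cur;
    rng[w]  = s;
}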

19.
The use of Graphics Processing Units (GPUs) for high‐performance computing has gained growing momentum in recent years. Unfortunately, GPU‐programming platforms like Compute Unified Device Architecture (CUDA) are complex, user unfriendly, and increase the complexity of developing high‐performance parallel applications. In addition, runtime systems that execute those applications often fail to fully utilize the parallelism of modern CPU‐GPU systems. Typically, parallel kernels run entirely on the most powerful device available, leaving other devices idle. These observations sparked research in two directions: (1) high‐level approaches to software development for GPUs, which strike a balance between performance and ease of programming; and (2) task partitioning to fully utilize the available devices. In this paper, we propose a framework, called PSkel, that provides a single high‐level abstraction for stencil programming on heterogeneous CPU‐GPU systems, while allowing the programmer to partition and assign data and computation to both CPU and GPU. Our current implementation uses parallel skeletons to transparently leverage Intel Threading Building Blocks (Intel Corporation, Santa Clara, CA, USA) and NVIDIA CUDA (Nvidia Corporation, Santa Clara, CA, USA). In our experiments, we observed that parallel applications with task partitioning can improve average performance by up to 76% and 28% compared with CPU‐only and GPU‐only parallel applications, respectively. Copyright © 2015 John Wiley & Sons, Ltd.

20.
A Fast CUDA-Based Training Method for GMM Models and Its Application
Because it can closely approximate any distribution, the Gaussian mixture model (GMM) is widely used in the field of pattern recognition. GMM parameters are usually trained with the iterative expectation-maximization (EM) algorithm; when the amount of training data is very large and the number of mixtures is high, training takes a very long time. NVIDIA's Compute Unified Device Architecture (CUDA) technology enables large-scale parallel computation by concurrently executing many threads on the graphics processing unit (GPU). This paper proposes a fast CUDA-based GMM training method suitable for very large data sets, including a fast implementation of the K-means algorithm used for model initialization and a fast implementation of the EM algorithm used for parameter estimation. The method is also applied to training language GMM models. Experimental results show that, compared with a single core of an Intel dual-core Pentium IV 3.0 GHz CPU, language GMM training on an NVIDIA GTS250 GPU is about 26 times faster.
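A hypothetical sketch of the data-parallel core of the E-step: the posterior (responsibility) of each mixture component for each feature frame is independent, so one thread per frame works; diagonal covariances and all names are assumptions, not the paper's implementation:

// The constant term -0.5*dim*log(2*pi) is omitted because it cancels
// in the per-frame normalization below.
__global__ void eStep(const float* x,     // frames, n x dim
                      const float* mean,  // m x dim
                      const float* var,   // m x dim, diagonal covariance
                      const float* logw,  // log mixture weights, length m
                      float* resp,        // output posteriors, n x m
                      int n, int m, int dim) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= n) return;
    float maxLog = -1e30f;
    for (int k = 0; k < m; ++k) {         // log w_k + log N(x_t; mu_k, var_k)
        float s = logw[k];
        for (int d = 0; d < dim; ++d) {
            float diff = x[t * dim + d] - mean[k * dim + d];
            s -= 0.5f * (diff * diff / var[k * dim + d]
                         + logf(var[k * dim + d]));
        }
        resp[t * m + k] = s;
        maxLog = fmaxf(maxLog, s);
    }
    float denom = 0.f;                    // log-sum-exp normalization
    for (int k = 0; k < m; ++k)
        denom += expf(resp[t * m + k] - maxLog);
    for (int k = 0; k < m; ++k)
        resp[t * m + k] = expf(resp[t * m + k] - maxLog) / denom;
}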
