Similar Documents
20 similar documents found.
1.
Solving the Helmholtz equation is a core component of the dynamical framework of the GRAPES numerical weather prediction system. It can be transformed into the solution of a large-scale sparse linear system, but, constrained by hardware resources and data size, its solution efficiency has become a bottleneck limiting the computational performance of the system. The generalized conjugate residual method for large sparse linear systems was implemented with three parallelization approaches, MPI, MPI+OpenMP, and CUDA, and an incomplete LU (ILU) preconditioner was used to improve the condition number of the coefficient matrix and accelerate the convergence of the iterative method. In the CPU schemes, MPI handles coarse-grained inter-process parallelism and communication, while OpenMP with shared memory provides fine-grained parallelism within each process; in the GPU scheme, the CUDA implementation applies optimizations for data transfer, coalesced memory access, and shared memory. Experimental results show that reducing the number of iterations through preconditioning improves performance markedly: the MPI+OpenMP hybrid version is about 35% faster than the MPI version, and the CUDA version is about 50% faster than the MPI+OpenMP hybrid version, giving the best performance.

2.
In the ABEEMσπ (Atom-Bond Electronegativity Equalization σπ Model) model, the original serial method for computing the electrostatic interaction energy is very time-consuming, which lowers research efficiency. For the nested loops in the original program, an MPI (Message Passing Interface) parallelization with a banded round-robin distribution of loop iterations is adopted, and the electrostatic interaction energies among all atoms, σ bonds, lone pairs, and π-bond sites in the system are computed with multithreaded CUDA (Compute Unified Device Architecture) parallelization. In a conventional MPI+CUDA environment, the large data-transfer overhead between GPU and CPU degrades overall performance, and invoking CUDA serially for the interactions between the different kinds of particles wastes time. To address this, the caching mechanism of the GPU cores is used to reduce the transfer overhead, and multiple CUDA streams are used to run the loops asynchronously, shortening the run time. Tests on several large molecular systems of different types show that dynamics simulation with the improved MPI+CUDA parallel model achieves a markedly higher parallel speedup, greatly reduces the time needed to compute the electrostatic interaction energy, and produces results consistent with the serial version.

3.
Design and implementation of a parallel particle swarm optimization algorithm based on CUDA
To address the long computation times of particle swarm optimization (PSO) when processing large amounts of data and solving large, complex problems, a fine-grained parallel PSO algorithm was implemented on the graphics processing unit (GPU). Based on an analysis of the traditional PSO algorithm, combined with the widely used GPU parallel computing techniques, a parallel PSO method was designed and implemented. It runs on the Compute Unified Device Architecture (CUDA) and uses a large number of GPU threads to process the search of the individual particles in parallel, accelerating the convergence of the whole swarm. The program makes full use of the mathematical libraries shipped with CUDA, which ensures its stability and ease of coding. Results on several benchmark optimization test functions show that, with identical convergence, the CUDA-based parallel PSO method achieves speedups of up to 90x over the CPU-based serial method.
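The one-thread-per-particle mapping described above can be illustrated with a minimal CUDA sketch of the velocity/position update step. This is not the paper's code: the problem dimensionality, particle count, inertia weight, acceleration coefficients, and the hash-based random number generator (standing in for cuRAND) are all assumptions chosen to keep the example self-contained.

```cuda
// pso_update.cu -- illustrative sketch, not the paper's implementation.
// One CUDA thread updates the velocity and position of one particle across
// all dimensions; r1/r2 come from a simple xorshift hash standing in for cuRAND.
#include <cstdio>
#include <cuda_runtime.h>

#define DIM 32            // problem dimensionality (assumption)
#define N_PARTICLES 1024  // swarm size (assumption)

__device__ float hash_rand(unsigned int &state) {
    // tiny xorshift PRNG: good enough for a sketch, use cuRAND in practice
    state ^= state << 13; state ^= state >> 17; state ^= state << 5;
    return (state & 0x00FFFFFF) / 16777216.0f;
}

__global__ void pso_update(float *pos, float *vel,
                           const float *pbest, const float *gbest,
                           unsigned int seed) {
    int p = blockIdx.x * blockDim.x + threadIdx.x;   // one particle per thread
    if (p >= N_PARTICLES) return;
    unsigned int rng = (seed ^ (p * 2654435761u)) | 1u;
    const float w = 0.729f, c1 = 1.49445f, c2 = 1.49445f; // common PSO constants
    for (int d = 0; d < DIM; ++d) {
        int i = p * DIM + d;
        float r1 = hash_rand(rng), r2 = hash_rand(rng);
        vel[i] = w * vel[i]
               + c1 * r1 * (pbest[i] - pos[i])
               + c2 * r2 * (gbest[d] - pos[i]);
        pos[i] += vel[i];
    }
}

int main() {
    size_t bytes = N_PARTICLES * DIM * sizeof(float);
    float *pos, *vel, *pbest, *gbest;
    cudaMalloc(&pos, bytes);   cudaMalloc(&vel, bytes);
    cudaMalloc(&pbest, bytes); cudaMalloc(&gbest, DIM * sizeof(float));
    cudaMemset(pos, 0, bytes); cudaMemset(vel, 0, bytes);
    cudaMemset(pbest, 0, bytes); cudaMemset(gbest, 0, DIM * sizeof(float));
    pso_update<<<(N_PARTICLES + 255) / 256, 256>>>(pos, vel, pbest, gbest, 12345u);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(pos); cudaFree(vel); cudaFree(pbest); cudaFree(gbest);
    return 0;
}
```

In a full PSO loop, a fitness-evaluation kernel and updates of pbest/gbest would run between successive calls of this kernel.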

4.
Parallel solution of large-scale dense linear systems on the CUDA architecture
Based on Gauss-Jordan elimination, an improved parallel Gauss-Jordan elimination algorithm adapted to the CUDA architecture is presented. By analyzing the processing steps of the method and the corresponding constraints of the CUDA architecture, a five-layer grid-strip-group-block-thread structure is proposed on top of CUDA's three-layer grid-block-thread organization, the concepts of base row and global base row are introduced, and a parallel version of Gauss-Jordan elimination suited to CUDA is constructed. On test cases of large dense linear systems with up to 4,000 unknowns, the algorithm is compared with serial Gauss-Jordan elimination; the results show that it exploits the hardware characteristics of the GPU effectively and significantly reduces the solution time of large-scale dense linear systems.
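For reference, the following is a minimal CUDA sketch of plain Gauss-Jordan elimination on an augmented matrix, using only the standard grid-block-thread layout and no pivoting. The paper's five-layer grid-strip-group-block-thread decomposition and base-row scheme are not reproduced here, and the 3x3 test system is an illustrative assumption.

```cuda
// gauss_jordan.cu -- illustrative sketch of Gauss-Jordan elimination in CUDA.
// The matrix is stored row-major as an n x (n+1) augmented system.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void normalize_row(float *A, int n, int k, float pivot) {
    int j = blockIdx.x * blockDim.x + threadIdx.x;          // column index
    if (j <= n) A[k * (n + 1) + j] /= pivot;                // scale pivot row so A[k][k] == 1
}

__global__ void eliminate(float *A, int n, int k) {
    int i = blockIdx.y * blockDim.y + threadIdx.y;          // row index
    int j = blockIdx.x * blockDim.x + threadIdx.x + k + 1;  // columns right of the pivot
    if (i >= n || i == k || j > n) return;
    A[i * (n + 1) + j] -= A[i * (n + 1) + k] * A[k * (n + 1) + j];
}

__global__ void clear_pivot_column(float *A, int n, int k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && i != k) A[i * (n + 1) + k] = 0.0f;
}

int main() {
    const int n = 3;
    // 3x3 system with augmented right-hand-side column, row-major
    std::vector<float> h = { 2, 1, 1, 5,
                             1, 3, 2, 10,
                             1, 0, 0, 1 };
    float *dA; cudaMalloc(&dA, h.size() * sizeof(float));
    cudaMemcpy(dA, h.data(), h.size() * sizeof(float), cudaMemcpyHostToDevice);

    dim3 blk(16, 16);
    for (int k = 0; k < n; ++k) {
        float pivot;                                        // no partial pivoting in this sketch
        cudaMemcpy(&pivot, dA + k * (n + 1) + k, sizeof(float), cudaMemcpyDeviceToHost);
        normalize_row<<<(n + 1 + 255) / 256, 256>>>(dA, n, k, pivot);
        dim3 grd((n - k + blk.x - 1) / blk.x, (n + blk.y - 1) / blk.y);
        eliminate<<<grd, blk>>>(dA, n, k);
        clear_pivot_column<<<(n + 255) / 256, 256>>>(dA, n, k);
    }
    cudaMemcpy(h.data(), dA, h.size() * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)                             // solution is the last column
        printf("x[%d] = %f\n", i, h[i * (n + 1) + n]);
    cudaFree(dA);
    return 0;
}
```

Launching one small kernel per pivot row, as above, is the simplest correct decomposition; the paper's layered structure targets much larger systems where this naive scheme underuses the GPU.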

5.
韩琪, 蔡勇. 《计算机仿真》, 2015, 32(4): 221-226, 304
To address the huge computational cost and low efficiency of large-scale topology optimization, a parallel topology optimization method based on the graphics processing unit (GPU) was designed and implemented. Bi-directional evolutionary structural optimization (BESO) is used as the underlying optimization algorithm, and a node-based conjugate gradient method is used to solve the finite element equations. Building on a study of the original serial algorithm and the computational characteristics of the GPU, the entire iteration process is parallelized. The programs are written with the Compute Unified Device Architecture (CUDA), and two parallelization strategies, element-based and node-based, are proposed. The mathematical libraries shipped with CUDA are used extensively, ensuring the stability and usability of the code. Numerical examples show that the parallel method is stable and efficient; with identical optimization results, a GTX 580 graphics card achieves a very large speedup.

6.
赵巍, 苏明. 《计算机仿真》, 2009, 26(7): 306-310
To facilitate the modeling and simulation of hybrid systems, a framework that can be implemented on general-purpose development platforms, the agile simulation application framework (AAFS), is proposed. AAFS is based on equation-oriented modeling, which removes causality from the models and improves their reusability. It extracts the three abstractions that make up a simulation, namely model, problem, and numerical solver, which lowers the coupling among the components of AAFS. Models are composed and systems are built through associations between models and connectors, so that the concepts and equations making up a system can be managed hierarchically. The design of AAFS is given in UML notation, showing that it can easily be implemented in current object-oriented programming languages. The interface between models and solvers is defined, adding flexibility in the choice of numerical methods. A modeling and simulation example of a spring-mass system implemented with AAFS illustrates how the framework is used and the convenience it offers its users.

7.
The unsteady flow field around a moving airfoil section is obtained by solving the Euler equations, and the flow solver is parallelized on the GPU with CUDA. An ARMA (auto-regressive moving-average) model is used to identify the unsteady aerodynamic forces, and the results of the identified model agree very well with the full-order CFD results. By coupling the reduced-order aerodynamic model with the structure, the transonic flutter of the Isogai wing, a standard aeroelastic test case with an S-shaped flutter boundary, is computed. The method presented here greatly improves computational efficiency while maintaining the accuracy of the aeroelastic analysis.

8.
To address the high time complexity of modern optimization algorithms on relatively complex problems, GPU-based parallel processing is introduced. The parallel programming model based on the Compute Unified Device Architecture (CUDA) is first explained from a high-level perspective, and then the CUDA-based parallel implementations of five typical modern optimization algorithms (simulated annealing, tabu search, the genetic algorithm, particle swarm optimization, and artificial neural networks) are described in the GPU environment. By comparing and analyzing experimental results measured in different environments, the advantages of the GPU-based single-instruction multiple-thread parallel optimization strategy and its future development trends are pointed out.

9.
CUDA parallel technology and geometric transformations of digital images
CUDA is a technology through which the GPU achieves large-scale, fast parallel computation by executing many threads concurrently, and it makes GPU programming much easier. The basic features of CUDA and its main programming model are introduced, and on this basis fast geometric transformations of images based on NVIDIA CUDA are proposed and implemented. Position offset increments replace the large number of multiplications in the original transformation algorithm, and the fast parallel computing capability of CUDA is applied to geometric transformations of digital images, solving the low efficiency of traditional CPU-based image geometric transformations. Experimental results show that with CUDA, as the size of the processed image grows, the efficiency of geometric transformations of digital images can be improved by up to nearly 100x.
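As an illustration of the per-pixel parallelism described above, here is a minimal CUDA sketch of a rotation transform using inverse mapping and nearest-neighbor sampling, one output pixel per thread. The image size, angle, and kernel names are assumptions for a self-contained example; the offset-increment optimization mentioned in the abstract is a serial-loop technique and is not reproduced in this per-thread form.

```cuda
// rotate_image.cu -- illustrative sketch: one CUDA thread per output pixel,
// inverse-mapping rotation about the image center with nearest-neighbor sampling.
#include <cstdio>
#include <cmath>
#include <vector>
#include <cuda_runtime.h>

__global__ void rotate_nn(const unsigned char *src, unsigned char *dst,
                          int w, int h, float cosA, float sinA) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    float cx = 0.5f * w, cy = 0.5f * h;
    // inverse-map the output pixel back into the source image
    float xs =  cosA * (x - cx) + sinA * (y - cy) + cx;
    float ys = -sinA * (x - cx) + cosA * (y - cy) + cy;
    int xi = __float2int_rn(xs), yi = __float2int_rn(ys);
    dst[y * w + x] = (xi >= 0 && xi < w && yi >= 0 && yi < h)
                   ? src[yi * w + xi] : 0;                 // black outside the source
}

int main() {
    const int w = 512, h = 512;
    std::vector<unsigned char> img(w * h, 128);            // flat gray test image
    unsigned char *dsrc, *ddst;
    cudaMalloc(&dsrc, w * h); cudaMalloc(&ddst, w * h);
    cudaMemcpy(dsrc, img.data(), w * h, cudaMemcpyHostToDevice);
    float angle = 30.0f * 3.14159265f / 180.0f;
    dim3 blk(16, 16), grd((w + 15) / 16, (h + 15) / 16);
    rotate_nn<<<grd, blk>>>(dsrc, ddst, w, h, cosf(angle), sinf(angle));
    cudaMemcpy(img.data(), ddst, w * h, cudaMemcpyDeviceToHost);
    printf("center pixel after rotation: %d\n", img[(h / 2) * w + w / 2]);
    cudaFree(dsrc); cudaFree(ddst);
    return 0;
}
```

Other geometric transforms (translation, scaling, shear) differ only in the two inverse-mapping lines of the kernel.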

10.
Efficient CUDA implementation techniques for the RSA algorithm
CUDA (Compute Unified Device Architecture), a computing architecture that supports general-purpose computation on GPUs, is widely used for large-scale data-parallel computation. RSA is a computationally intensive public-key cryptographic algorithm. Efficient CUDA-based parallel implementation techniques for RSA are presented; the key is to introduce a large number of independent, concurrent Montgomery modular-multiplication threads, together with the specific thread organization, data storage layout, and shared-memory performance optimizations. Using this CUDA implementation, the performance and throughput of RSA were measured on a GPU. Experimental results show that, compared with a general-purpose CPU implementation of RSA, the CUDA implementation achieves a speedup of more than 40x.
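The core idea, many independent modular-arithmetic threads, can be sketched with a deliberately tiny example in which each thread performs a 32-bit square-and-multiply modular exponentiation. This is not the paper's implementation: real RSA operates on 1024-2048-bit integers via Montgomery multiplication, and the toy key (e = 17, m = 3233) is purely illustrative.

```cuda
// modexp_toy.cu -- toy sketch of the "one independent modular-arithmetic thread
// per message block" layout. Operands are 32-bit with 64-bit intermediates;
// multi-precision Montgomery multiplication is beyond this sketch.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void modexp(const unsigned int *base, unsigned int e,
                       unsigned int m, unsigned int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    unsigned long long result = 1, b = base[i] % m;
    unsigned int exp = e;
    while (exp > 0) {                       // square-and-multiply
        if (exp & 1u) result = (result * b) % m;
        b = (b * b) % m;
        exp >>= 1;
    }
    out[i] = (unsigned int)result;
}

int main() {
    const int n = 4;
    std::vector<unsigned int> msg = { 2, 3, 10, 65537 }, res(n);
    unsigned int *dmsg, *dres;
    cudaMalloc(&dmsg, n * sizeof(unsigned int));
    cudaMalloc(&dres, n * sizeof(unsigned int));
    cudaMemcpy(dmsg, msg.data(), n * sizeof(unsigned int), cudaMemcpyHostToDevice);
    // toy "public key": e = 17, modulus m = 3233 (= 61 * 53)
    modexp<<<1, 256>>>(dmsg, 17u, 3233u, dres, n);
    cudaMemcpy(res.data(), dres, n * sizeof(unsigned int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("%u^17 mod 3233 = %u\n", msg[i], res[i]);
    cudaFree(dmsg); cudaFree(dres);
    return 0;
}
```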

11.
The numerical solution of shallow water systems is useful for several applications related to geophysical flows, but the large dimensions of the domains suggest the use of powerful accelerators to obtain numerical results in reasonable times. This paper addresses how to speed up the numerical solution of a first-order well-balanced finite volume scheme for 2D one-layer shallow water systems by using modern Graphics Processing Units (GPUs) supporting the NVIDIA CUDA programming model. An algorithm which exploits the potential data parallelism of this method is presented and implemented using the CUDA model in single and double floating point precision. Numerical experiments show the high efficiency of this CUDA solver in comparison with a CPU parallel implementation of the solver and with respect to a previously existing GPU solver based on a shading language.

12.
邓亮, 徐传福, 刘巍, 张理论. 《计算机应用》, 2013, 33(10): 2783-2786
The alternating direction implicit (ADI) scheme is one of the common discretization schemes for partial differential equations, but little GPU parallelization work has been done so far on ADI schemes in practical computational fluid dynamics (CFD) applications. Starting from a finite-volume CFD application, the characteristics and computational workflow of the ADI solver were analyzed, two fine-grained GPU parallel algorithms, one based on grid points and one based on grid lines, were designed with the Compute Unified Device Architecture (CUDA) programming model, and several performance optimizations are discussed. On the Tianhe-1A system, with a single-block structured grid of 128x128x128 cells, the GPU parallel computations of the inviscid terms, the viscous terms, and the ADI iteration achieve speedups of 100.1x, 40.1x, and 10.3x, respectively, over a single CPU core, and the overall ADI CFD solver achieves a GPU speedup of 17.3x.

13.
CUDA-based solver for large-scale groundwater flow simulation
This article presents a parallel simulation solver for groundwater flow on CUDA. A preconditioned conjugate gradient (PCG) algorithm is used to solve the large linear systems arising from the finite-difference discretization of three-dimensional groundwater flow problems. CUDA implementations of the two most time-consuming operations in PCG, sparse matrix-vector multiplication and the vector inner product, are given. The experimental results show that CUDA can speed up the solving process of the groundwater simulation significantly: a speedup of 1.8-3.7x is achieved with different GPUs for a transient groundwater flow problem.
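The two operations the abstract singles out can be sketched with generic CUDA kernels: a scalar CSR sparse matrix-vector product (one thread per row) and an inner product using a shared-memory tree reduction with one atomicAdd per block. These are textbook kernels under assumed data layouts, not the paper's implementation.

```cuda
// pcg_kernels.cu -- illustrative sketches of two PCG building blocks:
// CSR sparse matrix-vector multiplication and a vector inner product.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void spmv_csr(int nrows, const int *rowPtr, const int *colIdx,
                         const float *val, const float *x, float *y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;     // one thread per matrix row
    if (row >= nrows) return;
    float sum = 0.0f;
    for (int k = rowPtr[row]; k < rowPtr[row + 1]; ++k)
        sum += val[k] * x[colIdx[k]];
    y[row] = sum;
}

__global__ void dot(int n, const float *a, const float *b, float *result) {
    __shared__ float cache[256];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    cache[threadIdx.x] = (i < n) ? a[i] * b[i] : 0.0f;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {       // tree reduction in shared memory
        if (threadIdx.x < s) cache[threadIdx.x] += cache[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0) atomicAdd(result, cache[0]);   // one atomic per block
}

int main() {
    // 3x3 tridiagonal test matrix [[2,-1,0],[-1,2,-1],[0,-1,2]] in CSR form
    int   hRowPtr[] = { 0, 2, 5, 7 };
    int   hColIdx[] = { 0, 1, 0, 1, 2, 1, 2 };
    float hVal[]    = { 2, -1, -1, 2, -1, -1, 2 };
    float hX[]      = { 1, 1, 1 };
    int *dRowPtr, *dColIdx; float *dVal, *dX, *dY, *dDot;
    cudaMalloc(&dRowPtr, sizeof(hRowPtr)); cudaMalloc(&dColIdx, sizeof(hColIdx));
    cudaMalloc(&dVal, sizeof(hVal));       cudaMalloc(&dX, sizeof(hX));
    cudaMalloc(&dY, sizeof(hX));           cudaMalloc(&dDot, sizeof(float));
    cudaMemcpy(dRowPtr, hRowPtr, sizeof(hRowPtr), cudaMemcpyHostToDevice);
    cudaMemcpy(dColIdx, hColIdx, sizeof(hColIdx), cudaMemcpyHostToDevice);
    cudaMemcpy(dVal, hVal, sizeof(hVal), cudaMemcpyHostToDevice);
    cudaMemcpy(dX, hX, sizeof(hX), cudaMemcpyHostToDevice);
    cudaMemset(dDot, 0, sizeof(float));
    spmv_csr<<<1, 256>>>(3, dRowPtr, dColIdx, dVal, dX, dY);   // y = A*x
    dot<<<1, 256>>>(3, dX, dY, dDot);                          // x . y
    float hDot; cudaMemcpy(&hDot, dDot, sizeof(float), cudaMemcpyDeviceToHost);
    printf("x' * A * x = %f (expected 2)\n", hDot);
    cudaFree(dRowPtr); cudaFree(dColIdx); cudaFree(dVal);
    cudaFree(dX); cudaFree(dY); cudaFree(dDot);
    return 0;
}
```

In the full PCG loop, these two kernels are called once (SpMV) and a few times (dot products) per iteration, with vector updates handled by simple axpy-style kernels.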

14.
Hybrid CUDA, OpenMP, and MPI parallel programming on multicore GPU clusters
Nowadays, NVIDIA's CUDA is a general-purpose scalable parallel programming model for writing highly parallel applications. It provides several key abstractions: a hierarchy of thread blocks, shared memory, and barrier synchronization. This model has proven quite successful at programming multithreaded many-core GPUs and scales transparently to hundreds of cores; scientists throughout industry and academia are already using CUDA to achieve dramatic speedups on production and research codes. In this paper, we propose a parallel programming approach using hybrid CUDA, OpenMP, and MPI programming, which partitions loop iterations according to the number of C1060 GPU nodes in a GPU cluster consisting of one C1060 and one S1070. Loop iterations assigned to an MPI process are processed in parallel by CUDA, run by the processor cores in the same computational node.
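A minimal sketch of the hybrid layout follows: each MPI rank binds to one GPU, takes a contiguous slice of the global loop iterations, offloads it as a CUDA kernel, and uses OpenMP threads for a host-side reduction. The build line, the work split, and the kernel itself are assumptions for a self-contained example; the paper's C1060/S1070 node layout is not modeled.

```cuda
// hybrid_sketch.cu -- minimal MPI + OpenMP + CUDA loop-partitioning sketch.
// Assumed build line: nvcc -Xcompiler -fopenmp hybrid_sketch.cu -lmpi -o hybrid
#include <cstdio>
#include <vector>
#include <mpi.h>
#include <omp.h>
#include <cuda_runtime.h>

__global__ void square(float *data, int n, int offset) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = (float)(offset + i) * (float)(offset + i);
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int devCount = 0;
    cudaGetDeviceCount(&devCount);
    if (devCount > 0) cudaSetDevice(rank % devCount);   // one GPU per MPI rank

    const int N = 1 << 20;                              // global loop iterations
    int chunk = (N + size - 1) / size;                  // contiguous slice per rank
    int begin = rank * chunk;
    int local = (begin < N) ? ((N - begin < chunk) ? N - begin : chunk) : 0;

    float *d = nullptr;
    std::vector<float> h(local);
    if (local > 0) {
        cudaMalloc(&d, local * sizeof(float));
        square<<<(local + 255) / 256, 256>>>(d, local, begin);  // GPU stage
        cudaMemcpy(h.data(), d, local * sizeof(float), cudaMemcpyDeviceToHost);
    }

    double localSum = 0.0;
    #pragma omp parallel for reduction(+:localSum)      // host-side OpenMP stage
    for (int i = 0; i < local; ++i) localSum += h[i];

    double globalSum = 0.0;
    MPI_Reduce(&localSum, &globalSum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum of squares 0..%d = %.0f\n", N - 1, globalSum);

    cudaFree(d);
    MPI_Finalize();
    return 0;
}
```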

15.
The exact evaluation of the disconnected-diagram contributions to the flavor-singlet pseudoscalar meson mass, the nucleon σ-term, and the nucleon electromagnetic form factors is carried out using GPGPU technology with the NVIDIA CUDA platform. The disconnected loops are also computed using stochastic methods with several noise-reduction techniques. Various dilution schemes as well as the truncated solver method are studied. We compare these stochastic techniques to the exact results and show that the number of noise vectors required depends on the operator insertion in the fermion loop.

16.
As a general-purpose scalable parallel programming model for coding highly parallel applications, CUDA from NVIDIA provides several key abstractions: a hierarchy of thread blocks, shared memory, and barrier synchronization. It has proven to be rather effective at programming multithreaded many-core GPUs that scale transparently to hundreds of cores; as a result, scientists throughout industry and academia are using CUDA to achieve dramatic speedups on production and research codes. GPU-based clusters are likely to play an essential role in future cloud computing centers, because some computation-intensive applications may require GPUs as well as CPUs. In this paper, we adopted PCI pass-through technology and set up virtual machines in a virtual environment, so that the NVIDIA graphics card and CUDA high-performance computing could be used inside them. In this way, the virtual machine has not only virtual CPUs but also a real GPU for computing, and its performance is expected to increase dramatically. This paper measures the performance difference between physical and virtual machines using CUDA and investigates how the number of CPUs assigned to a virtual machine affects CUDA performance. Finally, we compare the CUDA performance of two open-source virtualization hypervisor environments, with and without PCI pass-through. The experimental results show which environment is the most efficient for CUDA in a virtual environment.

17.
In this paper, we present a novel method to couple Smoothed Particle Hydrodynamics (SPH) and nonlinear FEM to animate the interaction of fluids and deformable solids in real time. To model the coupling accurately, we generate proxy particles over the boundary of the deformable solids to facilitate the interaction with fluid particles, and develop an efficient method to distribute the coupling forces of the proxy particles to the FEM nodal points. Specifically, we employ the Total Lagrangian Explicit Dynamics (TLED) finite element algorithm for the nonlinear FEM because of its many attractive properties, such as support for massive parallelism, avoidance of dynamic stiffness-matrix updates, and an efficient solver. Based on a predictor-corrector scheme for both velocity and position, different normal and tangential conditions can be realized even for shell-like thin solids. Our coupling method is implemented entirely on modern GPUs using CUDA. We demonstrate the advantage of our two-way coupling method in computer animation via various virtual scenarios.

18.
This paper first gives a brief introduction to CUDA, a GPGPU model, then describes the process of generating the encryption and decryption keys of the IDEA cipher, and finally implements the generation of the IDEA encryption and decryption keys on the GPU using the CUDA architecture.

19.
Graphics processing units (GPUs), originally designed for graphics rendering, have emerged as massively parallel "co-processors" to the central processing unit (CPU). Small-footprint multi-GPU workstations with hundreds of processing elements can substantially accelerate compute-intensive simulation science applications. In this study, we describe the implementation of an incompressible-flow Navier-Stokes solver for multi-GPU workstation platforms. A shared-memory parallel code with identical numerical methods is also developed for multi-core CPUs to provide a fair comparison between CPUs and GPUs. Specifically, we adopt NVIDIA's Compute Unified Device Architecture (CUDA) programming model to implement the discretized form of the governing equations on a single GPU. Pthreads are then used to enable communication across multiple GPUs on a workstation. We use separate CUDA kernels to implement the projection algorithm to solve the incompressible fluid flow equations. Kernels are placed in different memory spaces on the GPU depending on their arithmetic intensity, and this memory-hierarchy-specific implementation produces significantly faster performance. We present a systematic analysis of speedup and scaling using two generations of NVIDIA GPU architectures and provide a comparison of single- and double-precision computational performance on the GPU. Using a quad-GPU platform for single-precision computations, we observe two orders of magnitude speedup relative to a serial CPU implementation. Our results demonstrate that multi-GPU workstations can serve as a cost-effective, small-footprint parallel computing platform to accelerate computational fluid dynamics (CFD) simulations substantially.

20.
To parallelize the decoding of H.264 video streams, a cooperative CPU/GPU algorithm is proposed. With the Compute Unified Device Architecture (CUDA) language as the GPU programming model, the inverse DCT and intra prediction are accelerated on the GPU. While maintaining high computational accuracy, mixed CUDA programming improves the computational performance of the system. Using NVIDIA's CUDA, the inverse DCT and intra prediction in the decoding process are executed in parallel on the GPU; the parallel algorithm is compared with a single-CPU implementation, and the speedup of the parallel decoding algorithm is verified with different numbers of video streams. Experimental results show that the algorithm greatly improves the encoding and decoding efficiency of video streams, with an average speedup of about 10x over the single-CPU implementation.
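The inverse-transform stage offloaded to the GPU can be sketched as follows: one CUDA thread per 4x4 block applies the standard H.264 inverse core-transform butterflies (horizontal pass, vertical pass, then the +32, >>6 rounding). Dequantization, clipping, reconstruction, and the intra-prediction stage are omitted, and the DC-only test block is an illustrative assumption rather than the paper's data.

```cuda
// idct4x4.cu -- illustrative sketch of the H.264 4x4 inverse core transform
// (the "inverse DCT" stage): one CUDA thread per 4x4 block of coefficients.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__device__ void idct_1d(int *d) {
    // H.264 inverse core-transform butterfly for one row/column d[0..3]
    int e0 = d[0] + d[2];
    int e1 = d[0] - d[2];
    int e2 = (d[1] >> 1) - d[3];
    int e3 = d[1] + (d[3] >> 1);
    d[0] = e0 + e3;  d[1] = e1 + e2;
    d[2] = e1 - e2;  d[3] = e0 - e3;
}

__global__ void idct4x4(const short *coeff, short *residual, int nBlocks) {
    int b = blockIdx.x * blockDim.x + threadIdx.x;      // one 4x4 block per thread
    if (b >= nBlocks) return;
    int t[16], line[4];
    for (int i = 0; i < 16; ++i) t[i] = coeff[b * 16 + i];
    for (int r = 0; r < 4; ++r) {                       // horizontal pass
        for (int c = 0; c < 4; ++c) line[c] = t[4 * r + c];
        idct_1d(line);
        for (int c = 0; c < 4; ++c) t[4 * r + c] = line[c];
    }
    for (int c = 0; c < 4; ++c) {                       // vertical pass + rounding
        for (int r = 0; r < 4; ++r) line[r] = t[4 * r + c];
        idct_1d(line);
        for (int r = 0; r < 4; ++r)
            residual[b * 16 + 4 * r + c] = (short)((line[r] + 32) >> 6);
    }
}

int main() {
    const int nBlocks = 1;
    std::vector<short> c(16, 0), out(16);
    c[0] = 64;                                          // DC-only block: residual should be all 1s
    short *dc, *dr;
    cudaMalloc(&dc, 16 * sizeof(short)); cudaMalloc(&dr, 16 * sizeof(short));
    cudaMemcpy(dc, c.data(), 16 * sizeof(short), cudaMemcpyHostToDevice);
    idct4x4<<<1, 32>>>(dc, dr, nBlocks);
    cudaMemcpy(out.data(), dr, 16 * sizeof(short), cudaMemcpyDeviceToHost);
    for (int i = 0; i < 16; ++i) printf("%d%c", out[i], (i % 4 == 3) ? '\n' : ' ');
    cudaFree(dc); cudaFree(dr);
    return 0;
}
```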

