Similar Documents
Found 20 similar documents (search time: 125 ms)
1.
GPGPU-Based Parallel Preprocessing of Digital Images   (Cited by: 2; self-citations: 0; external citations: 2)
This paper first briefly introduces the background, features, and memory model of the Compute Unified Device Architecture (CUDA), then uses general-purpose GPU (GPGPU) computing with CUDA to parallelize image histogram equalization and thin-cloud removal. Compared with traditional CPU-based methods, the two GPGPU-based preprocessing operations run roughly 40 and 80 times faster, respectively, which makes them highly practical for large-scale real-time image processing.
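As a rough illustration of the kind of parallelization the abstract describes, here is a minimal sketch (not the paper's code) of GPU histogram equalization for an 8-bit grayscale image; kernel and variable names are illustrative, and the 256-entry equalization lookup table is assumed to be built on the host from the histogram's CDF.

```cuda
// Hedged sketch: two kernels of a histogram-equalization pass.
#include <cuda_runtime.h>

__global__ void histKernel(const unsigned char *img, int n, unsigned int *hist) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(&hist[img[i]], 1u);   // per-pixel histogram bin update
}

__global__ void remapKernel(unsigned char *img, int n, const unsigned char *lut) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) img[i] = lut[img[i]];           // apply equalization lookup table
}
```

The host would compute the cumulative distribution over the 256 histogram bins, upload it as `lut`, and launch `remapKernel`; both kernels are embarrassingly parallel, which is why speedups of this magnitude are plausible for per-pixel operations.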

2.
A Preliminary Study of GPU Acceleration of the BLAST Algorithm   (Cited by: 1; self-citations: 1; external citations: 0)
This paper presents a new method for accelerating the BLAST algorithm using general-purpose GPU programming. BLAST is currently the most widely used algorithm and software package for biological sequence alignment queries; its processing speed is limited by serialized execution and disk I/O. We profiled BLASTN, a representative program in the BLAST package, identified its runtime hotspots, and parallelized the key hotspot module with CUDA. Comparative experiments show that, for sequence databases with longer average sequence lengths, GPGPU parallelization significantly shortens the module's running time, achieving a speedup of more than 35x. This suggests that GPGPU acceleration of BLAST can meet the demands of high-performance biological sequence querying.
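The abstract does not say which module was parallelized; as a hedged, generic illustration of how BLASTN-style work maps onto CUDA, the sketch below parallelizes the seeding step, where every database offset is tested for an exact word match against a query word. The word length `w` and byte-per-base encoding are assumptions for illustration only.

```cuda
// Hedged sketch (not the paper's implementation): one thread per database
// offset, each testing for an exact w-mer match with the query word.
__global__ void seedScan(const unsigned char *db, int dbLen,
                         const unsigned char *word, int w,
                         unsigned char *hit) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i + w > dbLen) return;
    for (int j = 0; j < w; ++j)
        if (db[i + j] != word[j]) return;  // mismatch: no seed at this offset
    hit[i] = 1;  // exact match; a later pass would extend these seeds
}
```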

3.
As the bridge between application software models and computer hardware, the programming model is of self-evident importance in computing. But as graphics processing units (GPUs) with fine-grained parallel computing capability entered the mainstream market, the development of matching programming models lagged behind. NVIDIA's Compute Unified Device Architecture (CUDA), introduced with the GeForce 8 series, freed general-purpose GPU (GPGPU) computing from the graphics hardware pipeline and high-level shading languages: developers can perform high-performance parallel computation in a single-instruction, multiple-data (SIMD) style without mastering graphics programming. This paper studies the CUDA parallel computing model in terms of its features, components, and parallel architecture, and argues that GPU-based high-performance parallel computing is an important way to meet today's large-scale computing demands.
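To make the thread-block hierarchy the paper discusses concrete, here is a minimal sketch (not from the paper): a grid of blocks, each block a group of threads, mapped one thread per array element. The kernel and sizes are illustrative assumptions.

```cuda
// Hedged sketch of CUDA's grid/block/thread hierarchy.
#include <cuda_runtime.h>

__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) x[i] *= a;                           // one element per thread
}

int main() {
    const int n = 1 << 20;
    float *d_x;
    cudaMalloc(&d_x, n * sizeof(float));
    dim3 block(256);                          // threads per block
    dim3 grid((n + block.x - 1) / block.x);   // blocks per grid
    scale<<<grid, block>>>(d_x, 2.0f, n);
    cudaDeviceSynchronize();
    cudaFree(d_x);
    return 0;
}
```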

4.
Exploiting the massive parallel processing power of GPGPUs (General Purpose GPUs), this paper parallelizes an existing sparse MRI reconstruction algorithm on the NVIDIA CUDA framework so that it can meet practical requirements. Sparse MRI reconstruction involves a large amount of floating-point computation and is far too time-consuming for practical use, so acceleration and optimization are essential. Experimental results show that an NVIDIA GTX 275 GPU reduces the computation time from over four minutes to about 3.4 seconds, a 76x speedup over an Intel Q8200 CPU.
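The abstract does not detail the reconstruction's inner loops; as one hedged example of the kind of elementwise floating-point step that appears in many sparse-MRI/compressed-sensing reconstructions, here is a soft-thresholding kernel. It is a generic component, not the paper's algorithm.

```cuda
// Hedged sketch: elementwise soft thresholding of transform coefficients.
__global__ void softThreshold(float *x, float lambda, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float v = x[i];
    float m = fabsf(v) - lambda;                // shrink magnitude by lambda
    x[i] = (m > 0.0f) ? copysignf(m, v) : 0.0f; // zero out small coefficients
}
```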

5.
To address the enormous runtime of the finite-difference time-domain (FDTD) method when simulating electrically large targets, this paper exploits the inherent parallelism of FDTD together with GPGPU technology to implement a three-dimensional parallel FDTD method based on CUDA. A convolutional perfectly matched layer (CPML) absorbing boundary condition is used to model open-region problems, and targets with different mesh sizes are simulated. Further optimizations combine the characteristics of FDTD and CUDA: when the computational volume reaches hundreds of thousands of cells or more, the GPU achieves speedups of more than 10x before optimization and more than 25x after, relative to a contemporary CPU. The results show that the method is well suited to practical electromagnetic simulation.
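FDTD parallelizes naturally because each cell's field update depends only on neighboring cells from the previous half-step. Below is a hedged sketch (not the paper's kernels) of the Yee update for one electric field component; the coefficients `ca`/`cb` are assumed to fold in the material constants and time step.

```cuda
// Hedged sketch: update of Ex over a 3-D grid, one thread per cell.
__global__ void updateEx(float *ex, const float *hy, const float *hz,
                         float ca, float cb, int nx, int ny, int nz) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int k = blockIdx.z;
    if (i >= nx || j < 1 || j >= ny || k < 1 || k >= nz) return;
    int id = (k * ny + j) * nx + i;
    float curlH = (hz[id] - hz[id - nx])          // dHz/dy
                - (hy[id] - hy[id - nx * ny]);    // dHy/dz
    ex[id] = ca * ex[id] + cb * curlH;            // Yee time-stepping update
}
```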

6.
As the web becomes ever more social and pervasive, online communities keep growing, which makes computing social network relationships extremely expensive. These computations include computing and generating individual relationships, computing and generating global relationships, and relationship mining. Although the workload is heavy, it is well suited to parallel processing. This paper analyzes the high-performance computation offered by GPUs and its concrete realization in the CUDA programming model, and discusses using CUDA-based GPUs to compute community user relationships in parallel.

7.
Using the CUDA platform, this paper proposes a basic method for solving systems of linear equations with parallel Gauss-Jordan elimination with full pivoting, performing the pivot selection, normalization, and elimination steps in parallel on a GPGPU; recovery of the solution vector is done on the CPU. Through a series of optimizations tailored to NVIDIA's latest Fermi-architecture GPUs, the GPGPU achieved a speedup of more than 6.5x over Intel's latest-generation CPU and more than 10x over the previous generation.
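As a hedged sketch of the elimination step (not the paper's code): the host loops over pivots, and for each pivot launches kernels over an n x (n+1) augmented matrix stored row-major. The pivot row `p` is assumed already selected and normalized; multipliers are saved first so the column-`p` writes cannot race with reads.

```cuda
// Hedged sketch: row elimination of Gauss-Jordan, one matrix element per thread.
__global__ void saveFactors(const float *a, float *f, int n, int p) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n) f[row] = a[row * (n + 1) + p];   // multiplier for each row
}

__global__ void eliminate(float *a, const float *f, int n, int p) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n || col > n || row == p) return;
    // Subtract factor * (normalized pivot row) from every other row.
    a[row * (n + 1) + col] -= f[row] * a[p * (n + 1) + col];
}
```

Full-pivot selection (a max-reduction over the remaining submatrix) and the final unscrambling of the solution vector would remain on the host, as the abstract indicates.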

8.
This paper applies general-purpose GPU programming to accelerate prestack time migration (PSTM), a routine step in seismic exploration processing whose core algorithm is compute-intensive, highly data-independent, and highly parallel. Profiling identified the computational hotspots, which were then parallelized with CUDA, and CUDA streams were used to make CPU-to-GPU transfers asynchronous. Performance tests in a cluster environment show that the GPU-parallelized PSTM program significantly reduces running time.
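The stream technique the abstract mentions overlaps data transfer for one chunk with kernel execution on another. Here is a hedged sketch of that pattern (not the paper's code); `process` stands in for the migration kernel, the host buffer is assumed pinned via `cudaMallocHost`, and `chunks` is assumed to divide `n`.

```cuda
// Hedged sketch: double-buffered copy/compute overlap with two CUDA streams.
#include <cuda_runtime.h>

__global__ void process(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;   // placeholder for the real PSTM computation
}

void pipeline(float *h_data, int n, int chunks) {
    cudaStream_t s[2];
    cudaStreamCreate(&s[0]); cudaStreamCreate(&s[1]);
    int c = n / chunks;
    float *d; cudaMalloc(&d, n * sizeof(float));
    for (int k = 0; k < chunks; ++k) {
        cudaStream_t st = s[k % 2];
        cudaMemcpyAsync(d + k * c, h_data + k * c, c * sizeof(float),
                        cudaMemcpyHostToDevice, st);          // copy chunk k
        process<<<(c + 255) / 256, 256, 0, st>>>(d + k * c, c); // compute chunk k
    }
    cudaDeviceSynchronize();
    cudaFree(d); cudaStreamDestroy(s[0]); cudaStreamDestroy(s[1]);
}
```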

9.
To overcome the low speed and poor scalability of FIR filtering on ordinary DSPs, this paper proposes and implements a CUDA-based FIR filtering algorithm. Because CUDA programs operate on data directly, without going through a graphics API, developers can build highly efficient data-intensive computing on top of the GPU's computing power. The algorithm uses CUDA to evaluate the FIR input-output relation, casting it as a parallel matrix multiplication to build a parallel filtering model on the GPU, and is then further optimized. Experiments on a Tesla C1060 show a speedup approaching 30x over a traditional DSP-based FIR implementation, resolving the speed and scalability problems of DSP-based FIR filtering.
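The paper formulates the filter as a matrix multiplication; as a simpler hedged illustration of the same input-output relation, the sketch below computes each output sample with one thread, evaluating y[i] = sum over k of h[k] * x[i-k].

```cuda
// Hedged sketch (not the paper's code): direct-form FIR, one output per thread.
__global__ void fir(const float *x, const float *h, float *y,
                    int n, int taps) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float acc = 0.0f;
    for (int k = 0; k < taps && k <= i; ++k)
        acc += h[k] * x[i - k];   // convolution sum over the filter taps
    y[i] = acc;
}
```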

10.
An Introduction to the CUDA Hardware and Software Environment   (Cited by: 1; self-citations: 0; external citations: 1)
Programmer, 2008, (3): 36-37
CUDA is a development environment for GPU computing. It is an entirely new hardware and software architecture that treats the GPU as a parallel data-computing device and allocates and manages the computations performed on it. Under CUDA, computations no longer have to be mapped onto graphics APIs (OpenGL and Direct3D) as in earlier GPGPU approaches, so the barrier to entry for developers is greatly lowered. CUDA's GPU programming language is based on standard C, so any user with a C background can easily develop CUDA applications.
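To show the "standard C plus a few extensions" flavor the article describes, here is a minimal complete program (an illustrative sketch, not from the article) that adds two vectors on the GPU.

```cuda
// Hedged sketch: a complete CUDA C program - allocate, copy, launch, copy back.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float h_a[n], h_b[n], h_c[n];
    for (int i = 0; i < n; ++i) { h_a[i] = i; h_b[i] = 2.0f * i; }
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);
    add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %f\n", h_c[10]);   // expect 30.0
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
```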

11.
Research on a Hadoop-Based High-Performance Platform for Massive Data Processing   (Cited by: 2; self-citations: 0; external citations: 2)
High-performance computing over massive data holds enormous application value, but current cloud computing systems provide only massive data processing capacity, not sufficient high-performance computing capability. This paper fuses the GPU's massive parallel computing power with cloud computing and proposes a heterogeneous high-performance cloud computing architecture based on CPU/GPU cooperation. Building on open-source Hadoop, the parts of MapReduce functions that should run in parallel are marked with annotations; a custom GPU class loader converts the marked code to CUDA and compiles and runs it dynamically. The platform integrates GPU computing power into the MapReduce framework and can process massive data efficiently.

12.
This paper reports an application-dependent network design for extreme-scale high performance computing (HPC) applications. Traditional scalable network designs focus on fast point-to-point transmission of generic data packets; the proposed network instead focuses on the sustainability of HPC applications through statistical multiplexing of semantic data objects. For HPC applications using data-driven parallel processing, a tuple is a semantic object. We report the design and implementation of a tuple switching network for data-parallel HPC applications that gains performance and reliability at the same time as computing and communication resources are added. We describe a sustainability model and a simple computational experiment demonstrating an extreme-scale application's sustainability under decreasing system mean time between failures (MTBF). Assuming a threefold slowdown from statistical multiplexing and a 35% time loss per checkpoint, a two-tier tuple switching framework would produce sustained performance and energy savings for extreme-scale HPC applications using more than 1024 processors or with an MTBF below 6 hours. Higher processor counts or higher checkpoint overheads amplify the benefits.

13.
As a general-purpose, scalable parallel programming model for coding highly parallel applications, NVIDIA's CUDA provides several key abstractions: a hierarchy of thread blocks, shared memory, and barrier synchronization. It has proven effective at programming multithreaded many-core GPUs that scale transparently to hundreds of cores; as a result, scientists across industry and academia use CUDA to dramatically expedite production and research codes. GPU-based clusters are likely to play an essential role in future cloud computing centers, because some computation-intensive applications require GPUs as well as CPUs. In this paper, we adopted PCI pass-through technology to set up virtual machines in a virtualized environment, enabling use of NVIDIA graphics cards and CUDA high performance computing from within them. In this way, a virtual machine has not only virtual CPUs but also a real GPU for computing, and its performance can be expected to increase dramatically. We measured the performance difference between physical and virtual machines using CUDA, and investigated how the number of virtual CPUs assigned to a machine affects CUDA performance. Finally, we compared the CUDA performance of two open-source virtualization hypervisor environments, with and without PCI pass-through. The experimental results show which environment is most efficient for CUDA in a virtualized setting.
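A first sanity check in such a setup is verifying that the passed-through GPU is visible to the CUDA runtime inside the guest. Here is a hedged sketch (not from the paper) using the standard device-query calls.

```cuda
// Hedged sketch: enumerate CUDA devices visible inside a virtual machine.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA device visible (PCI pass-through not working?)\n");
        return 1;
    }
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("Device %d: %s, %d multiprocessors, %.1f GB memory\n",
               d, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```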

14.
Power-efficiency investigation is required at every level of a High Performance Computing (HPC) system because of the increasing computational demands of scientific and engineering applications. Focusing on the critical design constraints at the software level, for programs that run across a parallel system composed of huge numbers of power-hungry components, we optimize HPC program design to achieve the best possible power performance on the target hardware platform. The power performance of a CUDA Processing Element (PE) is determined both by hardware factors, including the power characteristics of each component (CPU, GPU, main memory, and PCI buses) and their interconnection architecture, and by software factors, including the algorithm design and the character of the executable instructions performed on it. In this paper, approaches to modeling and evaluating the power consumption of large-scale SIMD computation by CUDA PEs on multi-core and GPU platforms are introduced. The model yields design-characteristic values at an early programming stage, benefiting programmers by providing the environment information needed to choose the most power-efficient alternative. Based on the model, CPU dynamic frequency scaling (DFS) can be applied to the CUDA PE architecture, adjusting the CPU frequency to enhance the power efficiency of the entire PE without compromising its computing performance. The power model and the power-efficiency improvements of the new designs have been validated by measuring the new programs on a real GPU multiprocessing system.
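The paper's actual model is not reproduced in the abstract; the sketch below is only a toy stand-in showing the general shape of such an estimate, summing per-component power over phase durations and scaling CPU dynamic power with frequency and voltage (the classic CMOS relation P ~ C·V²·f). Every constant and name here is an illustrative assumption.

```cuda
// Hedged toy sketch of a per-PE energy estimate under CPU frequency scaling.
#include <cstdio>

double cpuDynamicPower(double cap, double volt, double freqGHz) {
    return cap * volt * volt * freqGHz;   // CMOS dynamic-power model
}

int main() {
    // Assumed phase durations (s) and component powers (W) for one CUDA PE.
    double tTransfer = 2.0, tKernel = 8.0;
    double pGpu = 180.0, pMem = 20.0, pPci = 10.0;
    for (double f = 1.0; f <= 3.0; f += 1.0) {    // candidate CPU frequencies
        double pCpu = cpuDynamicPower(15.0, 1.1, f);
        // During the kernel phase the CPU mostly idles, so lowering its
        // frequency saves energy without slowing the GPU computation.
        double energy = (pCpu + pPci + pMem) * tTransfer
                      + (pCpu * 0.3 + pGpu + pMem) * tKernel;
        printf("CPU @ %.0f GHz: estimated PE energy %.0f J\n", f, energy);
    }
    return 0;
}
```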

15.
A Brief Analysis of the Needs and Development of High-Performance Computing Applications   (Cited by: 3; self-citations: 0; external citations: 3)
Supported by high-performance computing technology, HPC applications have made great contributions to scientific and technological innovation, and the two have developed in tandem. Since 2004, the Supercomputing Center of the Computer Network Information Center, Chinese Academy of Sciences (CAS), has conducted several academy-wide surveys of CAS's HPC needs for the 11th Five-Year Plan period, yielding a fairly complete picture of overall demand and its distribution across application areas; the results provide useful guidance for building the CAS HPC environment and developing HPC applications during that period. This paper first reviews the state of HPC applications at home and abroad, then analyzes CAS's HPC application needs for the 11th Five-Year Plan in light of the academy's HPC environment construction and application development, and finally looks ahead to the prospects for HPC applications in China.

16.
GPGPUs are increasingly being used as performance accelerators for HPC (High Performance Computing) applications in CPU/GPU heterogeneous computing systems, including TianHe-1A, the world's fastest supercomputer in the TOP500 list, built at NUDT (National University of Defense Technology) last year. However, despite their performance advantages, GPGPUs do not provide built-in fault-tolerance mechanisms offering the reliability guarantees required by many HPC applications. By analyzing the SIMT (single-instruction, multiple-thread) characteristics of programs running on GPGPUs, we have developed PartialRC, a new checkpoint-based, compiler-directed partial recomputing method that achieves efficient fault recovery by leveraging the phenomenal computing power of GPGPUs. In this paper, we introduce the PartialRC method, which recovers from errors detected in a code region by partially re-computing the region, describe a checkpoint-based fault-tolerance framework built on PartialRC, and discuss an implementation on the CUDA platform. Validation using a range of representative CUDA programs on NVIDIA GPGPUs against FullRC (a traditional full-recomputing Checkpoint-Rollback-Restart fault recovery method for CPUs) shows that PartialRC significantly reduces the fault recovery overheads incurred by FullRC: on average by 73.5% when errors occur early in execution and by 74.6% when they occur late. In addition, PartialRC reduces the error detection overheads incurred by FullRC during fault recovery while incurring negligible performance overhead when no fault occurs.

17.

Applications that exploit the architectural details of high-performance computing (HPC) systems have become increasingly invaluable in academia and industry over the past two decades. The most important hardware development of the last decade in HPC has been the general purpose graphics processing unit (GPGPU), a class of massively parallel devices that now contributes the majority of computational power in the top 500 supercomputers. As these systems grow, small costs such as latency, due to the fixed cost of memory accesses and communication, accumulate in a large simulation and become a significant barrier to performance. The swept time-space decomposition rule is a communication-avoiding technique for time-stepping stencil update formulas that attempts to reduce latency costs. This work extends the swept rule to heterogeneous CPU/GPU architectures representative of current and future HPC systems. We compare our approach to a naive decomposition scheme on two test equations, using an MPI+CUDA pattern on 40 processes over two nodes containing one GPU. With the best possible configurations, the swept rule produces speedups of 1.9x to 23x for the heat equation and 1.1x to 2.0x for the Euler equations, using the same processors and work distribution. These results show the potential effectiveness of the swept rule for different equations and numerical schemes on massively parallel compute systems that incur substantial latency costs.

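For context on what a "naive" stencil update looks like per time step (the baseline the swept rule is measured against), here is a hedged sketch, not the paper's code, of one explicit finite-difference step of the 1-D heat equation u_t = alpha * u_xx.

```cuda
// Hedged sketch: forward-Euler heat-equation step, one thread per grid point.
__global__ void heatStep(const float *u, float *uNew, float r, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i <= 0 || i >= n - 1) return;             // fixed boundary points
    // r = alpha * dt / (dx * dx); the explicit scheme is stable for r <= 0.5.
    uNew[i] = u[i] + r * (u[i - 1] - 2.0f * u[i] + u[i + 1]);
}
```

Each step needs only nearest-neighbor data, which is exactly why per-step communication latency, rather than arithmetic, dominates at scale and motivates the swept rule.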

18.
ZHANG Jia-kang, CHEN Qing-kui. Computer Engineering, 2010, 36(15): 179-181
Addressing the suitability of GPUs, stream-processing devices with high floating-point throughput, for neural networks, this paper proposes a parallel recognition algorithm for convolutional neural networks. Using the Compute Unified Device Architecture (CUDA), it defines the parallel data structures involved and describes how the computational tasks are mapped onto CUDA. Experimental results show that the parallel recognition algorithm implemented on a GTX 200-architecture GPU reaches a peak average floating-point throughput nearly 60x that of the serial CPU algorithm, making GPUs well suited to neural network applications.
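The dominant computation in such a network is the convolution layer; below is a hedged sketch (not the paper's code) of a naive CUDA convolution in which each thread produces one output pixel. Single channel, "valid" borders, and the omission of bias/activation are simplifying assumptions.

```cuda
// Hedged sketch: naive 2-D convolution for a CNN layer, one output per thread.
__global__ void conv2d(const float *in, const float *w, float *out,
                       int inW, int inH, int kW, int kH) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int outW = inW - kW + 1, outH = inH - kH + 1;
    if (x >= outW || y >= outH) return;
    float acc = 0.0f;
    for (int j = 0; j < kH; ++j)
        for (int i = 0; i < kW; ++i)
            acc += in[(y + j) * inW + (x + i)] * w[j * kW + i];
    out[y * outW + x] = acc;   // one output pixel of the feature map
}
```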

19.
First-principles software plays an important role in the development of density functional theory. Compared with plane-wave methods, local-orbital methods are better suited to large-scale many-body problems. As problem sizes grow and computing power increases, parallel acceleration of such software becomes an important topic, and heterogeneous parallelism combining MPI (message passing interface) with GPUs (graphics processing units) offers a new solution. MESIA (massive electronic simulation based on systematically improvable atomic bases), a local-orbital first-principles code, was parallelized at three levels with MPI+OpenMP+CUDA; a single GPU achieved a speedup of about 15x with good scalability, and the tests also verified that GPU computation preserves numerical accuracy.

20.
Recent developments in Graphics Processing Units (GPUs) have enabled inexpensive high performance computing for general-purpose applications. The Compute Unified Device Architecture (CUDA) programming model gives programmers C-like APIs to better exploit the parallel power of the GPU. Data mining is widely used and has significant applications in various domains; however, current data mining toolkits cannot meet the speed requirements of applications with large-scale databases. In this paper, we propose three techniques to speed up fundamental problems in data mining algorithms on the CUDA platform: a scalable thread scheduling scheme for irregular patterns, a parallel distributed top-k scheme, and a parallel high-dimension reduction scheme. They play a key role in our CUDA-based implementations of three representative data mining algorithms: CU-Apriori, CU-KNN, and CU-K-means. These parallel implementations significantly outperform other state-of-the-art implementations on an HP xw8600 workstation with a Tesla C1060 GPU and a quad-core Intel Xeon CPU. Our results show that the GPU + CUDA parallel architecture is feasible and promising for data mining applications.
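As one hedged example of how such algorithms map onto CUDA (not the CU-K-means implementation itself), here is the assignment step of k-means: one thread per point, scanning k centroids in d dimensions.

```cuda
// Hedged sketch: k-means cluster assignment, one data point per thread.
__global__ void assignClusters(const float *pts, const float *centroids,
                               int *label, int n, int k, int d) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int best = 0;
    float bestDist = 3.4e38f;                 // ~FLT_MAX sentinel
    for (int c = 0; c < k; ++c) {
        float dist = 0.0f;
        for (int j = 0; j < d; ++j) {
            float diff = pts[i * d + j] - centroids[c * d + j];
            dist += diff * diff;              // squared Euclidean distance
        }
        if (dist < bestDist) { bestDist = dist; best = c; }
    }
    label[i] = best;                          // index of nearest centroid
}
```

The centroid-update step would follow as a separate reduction pass; keeping the two phases as distinct kernels is the usual design, since the update depends on all assignments being complete.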
