Similar Documents
19 similar documents found; search took 203 ms.
1.
Contiguous data independence means that the source-matrix elements used to compute contiguous elements of the target matrix are themselves contiguous and mutually unrelated; memory-access-intensive means that a function performs little computation but a large amount of data transfer. Under the OpenCL framework, and taking the bitwise function as an example, this work studies and implements the parallelization and optimization of contiguous, data-independent, memory-access-intensive functions on GPU platforms. After examining how vectorization, work-group organization, instruction selection, and other optimizations affect performance on different GPU hardware platforms, cross-platform performance portability of the function is achieved. Experimental results show that, excluding data-transfer time, the optimized function achieves an average speedup of 40× over the CPU version of the same function in the OpenCV library on an AMD HD 5850 GPU, 90× on an AMD HD 7970 GPU, and 60× on an NVIDIA Tesla C2050 GPU; compared with the CUDA implementation of the same function in OpenCV, it also achieves a 1.5× speedup on the NVIDIA Tesla C2050 platform.
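To make the kind of kernel described above concrete, here is a minimal sketch of a contiguous, data-independent, memory-access-intensive bitwise operation, written in CUDA for consistency with the other examples in this listing (the paper itself targets OpenCL; the kernel name, the uchar4 vectorized loads, and the launch configuration are illustrative assumptions, not the paper's code):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Element-wise bitwise AND of two byte matrices, reading four bytes per thread
// (uchar4) so every load/store is vectorized. Each output element depends on
// exactly one element of each input, so all threads are independent and the
// kernel is purely bandwidth-bound.
__global__ void bitwise_and_u8(const uchar4* a, const uchar4* b,
                               uchar4* dst, int n4) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n4) {
        uchar4 x = a[i], y = b[i];
        dst[i] = make_uchar4(x.x & y.x, x.y & y.y, x.z & y.z, x.w & y.w);
    }
}

int main() {
    const int n = 1 << 20;            // number of bytes (assumed divisible by 4)
    const int n4 = n / 4;             // number of uchar4 elements
    unsigned char *ha = new unsigned char[n], *hb = new unsigned char[n];
    for (int i = 0; i < n; ++i) { ha[i] = i & 0xFF; hb[i] = (i * 7) & 0xFF; }

    uchar4 *da, *db, *dc;
    cudaMalloc(&da, n); cudaMalloc(&db, n); cudaMalloc(&dc, n);
    cudaMemcpy(da, ha, n, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, n, cudaMemcpyHostToDevice);

    int block = 256;                  // work-group / block size is one tuning knob
    int grid = (n4 + block - 1) / block;
    bitwise_and_u8<<<grid, block>>>(da, db, dc, n4);
    cudaDeviceSynchronize();

    unsigned char *hc = new unsigned char[n];
    cudaMemcpy(hc, dc, n, cudaMemcpyDeviceToHost);
    printf("dst[42] = %u\n", hc[42]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```

Because there is almost no arithmetic, the only meaningful tuning knobs are access width (vectorization), work-group size, and instruction selection, which is exactly the optimization space the paper explores across platforms.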

2.
Design and Implementation of a High-Speed Data Transfer Card Based on the PCI Express Bus
A PCI Express high-speed data transfer card was designed to provide high-speed data transfer between a ground control console and a computer. The card uses the PLX PEX8311 interface chip to implement the PCI Express bus interface logic, and data are transferred by DMA; self-testing against a signal source verified that the card transfers data in real time without errors. The hardware design section describes the differential transmission design, the PCI Express interface circuit, and the FPGA logic-control module.

3.
GPU-accelerated algorithms for extracting the Normalized Difference Vegetation Index (NDVI) usually adopt a multi-threaded GPU parallel model, in which weakly correlated computations and CPU-GPU data transfers consume considerable time, limiting further speedup. To address this, and based on the characteristics of the NDVI extraction algorithm, this paper proposes an NDVI extraction algorithm built on a GPU multi-stream concurrent parallel model. Using CUDA streams and the Hyper-Q feature, the multi-stream model overlaps data transfers with weakly correlated computations and overlaps weakly correlated computations with one another, further improving parallelism and GPU resource utilization. The NDVI extraction algorithm is first optimized with the multi-threaded GPU model, and the optimized computation is decomposed to identify the parts that contain data transfers and weakly correlated computations; those parts are then restructured and optimized with the multi-stream concurrent model so that they overlap. Finally, remote-sensing images taken by the GF-1 (Gaofen-1) satellite are used to evaluate the two GPU-based NDVI extraction algorithms. Experimental results show that for images larger than 12000*12000 pixels the proposed algorithm is on average about 1.5× faster than the conventional multi-threaded GPU implementation and about 260× faster than the serial implementation, offering better acceleration and parallelism.
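A minimal sketch of the multi-stream overlap described above, using CUDA streams (the chunking scheme, stream count, buffer names, and the standard NDVI formula (NIR − Red)/(NIR + Red) are illustrative assumptions; Hyper-Q simply allows the independent streams to be scheduled concurrently on the device):

```cuda
#include <cuda_runtime.h>

// NDVI per pixel: ndvi = (nir - red) / (nir + red).
__global__ void ndvi_kernel(const float* red, const float* nir,
                            float* ndvi, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float s = nir[i] + red[i];
        ndvi[i] = (s != 0.0f) ? (nir[i] - red[i]) / s : 0.0f;
    }
}

// Split the image into chunks and issue copy-in / compute / copy-out for each
// chunk on its own stream, so the transfer of one chunk overlaps with the
// kernel of another. Host buffers must be pinned for true async copies
// (done in main via cudaMallocHost).
void ndvi_multistream(const float* h_red, const float* h_nir, float* h_ndvi,
                      size_t n_pixels, int n_streams) {
    size_t chunk = (n_pixels + n_streams - 1) / n_streams;
    cudaStream_t streams[16];                        // assume n_streams <= 16
    for (int s = 0; s < n_streams; ++s) cudaStreamCreate(&streams[s]);

    float *d_red, *d_nir, *d_ndvi;
    cudaMalloc(&d_red,  n_pixels * sizeof(float));
    cudaMalloc(&d_nir,  n_pixels * sizeof(float));
    cudaMalloc(&d_ndvi, n_pixels * sizeof(float));

    for (int s = 0; s < n_streams; ++s) {
        size_t off = (size_t)s * chunk;
        if (off >= n_pixels) continue;
        size_t len = (off + chunk <= n_pixels) ? chunk : n_pixels - off;
        cudaMemcpyAsync(d_red + off, h_red + off, len * sizeof(float),
                        cudaMemcpyHostToDevice, streams[s]);
        cudaMemcpyAsync(d_nir + off, h_nir + off, len * sizeof(float),
                        cudaMemcpyHostToDevice, streams[s]);
        int block = 256, grid = (int)((len + block - 1) / block);
        ndvi_kernel<<<grid, block, 0, streams[s]>>>(d_red + off, d_nir + off,
                                                    d_ndvi + off, (int)len);
        cudaMemcpyAsync(h_ndvi + off, d_ndvi + off, len * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[s]);
    }
    cudaDeviceSynchronize();
    for (int s = 0; s < n_streams; ++s) cudaStreamDestroy(streams[s]);
    cudaFree(d_red); cudaFree(d_nir); cudaFree(d_ndvi);
}

int main() {
    const size_t n = 4096 * 4096;
    float *red, *nir, *out;
    cudaMallocHost(&red, n * sizeof(float));         // pinned host memory
    cudaMallocHost(&nir, n * sizeof(float));
    cudaMallocHost(&out, n * sizeof(float));
    for (size_t i = 0; i < n; ++i) { red[i] = 0.2f; nir[i] = 0.6f; }
    ndvi_multistream(red, nir, out, n, 4);           // out[i] should be 0.5
    cudaFreeHost(red); cudaFreeHost(nir); cudaFreeHost(out);
    return 0;
}
```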

4.
《计算机教育》2005,(5):71-71
On April 11, 2005, the PCI-SIG announced an extension of the PCI Express architecture's performance and released the PCI Express "ExpressModule" specification. The PCI-SIG, originally the PCI Special Interest Group, was founded in 1992, became a non-profit organization in 2000, and was formally renamed "PCI-SIG". As local I/O requirements have evolved, the PCI-SIG has defined and implemented new industry-standard I/O (input/output) specifications; more than 900 companies worldwide are active members. The PCI Express architecture is the industry-standard, high-performance, general-purpose serial I/O interconnect; traditional parallel buses such as PCI find it increasingly difficult to meet the interconnect demands of emerging computing and communication platforms, for example CPU speeds of 10 GHz and above, fast memory, high-end graphics, gigabit networking and storage interfaces, and high-performance personal…

5.
Accelerating the Computation of Protein Molecular Fields with a GPU Cluster
To cope with the huge computational cost of evaluating protein molecular fields with quantum-chemistry theory in biochemical computing, a GPU cluster system was built to accelerate quantum-chemistry-based protein molecular field computation. The system connects the cluster nodes with the message-passing parallel programming environment (MPI), uses the OpenMP multi-threading standard as the multi-core CPU programming environment, and uses CUDA as the GPU programming environment; an optimized parallel acceleration architecture in which the GPU and the multi-core CPU within each node compute cooperatively is proposed and implemented. While maintaining high numerical accuracy, the hybrid MPI, OpenMP, and CUDA programming model greatly improves system performance, and protein molecular field simulations of different systems and scales are computed and analyzed. Compared with the corresponding CPU cluster, single-GPU, and single-CPU implementations, the GPU cluster substantially improves the efficiency of high-resolution simulations of complex protein molecular fields, achieving an average speedup 7.5× higher than that of the CPU cluster.
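A skeleton of the node-level MPI + OpenMP + CUDA cooperation described above (the rank-to-GPU binding, the half-and-half split between the GPU and the CPU threads, and the toy reduction are illustrative assumptions, not the paper's actual quantum-chemistry code):

```cuda
#include <mpi.h>
#include <omp.h>
#include <cuda_runtime.h>
#include <cstdio>

// Toy kernel standing in for the GPU share of the per-node workload.
__global__ void gpu_partial_sum(const float* x, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(out, x[i]);
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    // Bind each MPI rank to one of the GPUs on its node.
    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);
    if (ngpus > 0) cudaSetDevice(rank % ngpus);

    const int n = 1 << 20;
    float* h = new float[n];
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    // GPU handles the first half of this rank's slice ...
    int half = n / 2;
    float *d_x, *d_sum, gpu_sum = 0.0f;
    cudaMalloc(&d_x, half * sizeof(float));
    cudaMalloc(&d_sum, sizeof(float));
    cudaMemset(d_sum, 0, sizeof(float));
    cudaMemcpy(d_x, h, half * sizeof(float), cudaMemcpyHostToDevice);
    gpu_partial_sum<<<(half + 255) / 256, 256>>>(d_x, d_sum, half);

    // ... while OpenMP threads on the multi-core CPU handle the second half.
    float cpu_sum = 0.0f;
    #pragma omp parallel for reduction(+ : cpu_sum)
    for (int i = half; i < n; ++i) cpu_sum += h[i];

    cudaMemcpy(&gpu_sum, d_sum, sizeof(float), cudaMemcpyDeviceToHost);

    // Combine the node-local results across the cluster with MPI.
    float local = gpu_sum + cpu_sum, global = 0.0f;
    MPI_Reduce(&local, &global, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("global sum = %f across %d ranks\n", global, nranks);

    cudaFree(d_x); cudaFree(d_sum); delete[] h;
    MPI_Finalize();
    return 0;
}
```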

6.
As a third-generation high-performance I/O interconnect technology, PCI Express offers many advantages, such as packet-based switching, point-to-point connections, LVDS high-speed serial links, and high bandwidth. However, PCI Express is used mostly in general-purpose high-performance computing, and examples of its application in embedded system design are rare. Based on YHFT-QDSP, a self-developed embedded multi-core SoC, and guided by the system design requirements and the characteristics of PCI Express, this paper applies PCI Express to the design of the chip-to-chip interconnect module using a rapid design method based on IP trimming, shortening the design cycle and achieving good results. Implemented with a 0.13 μm process cell library, the PCI Express chip-to-chip interconnect module occupies a total area of 0.65 mm2, of which the protocol-conversion module occupies 0.12 mm2, and the effective chip-to-chip data-transfer bandwidth reaches 1.63 Gb/s.

7.
亮亮 《电脑》2004,(6):28-29
From ISA to PCI, and from PCI to successive generations of AGP, the bus has quietly been upgraded step by step. Yet compared with the pace of innovation in CPUs, GPUs, and other technologies, bus evolution clearly lags behind and has to a large extent become the main bottleneck of system performance. The industry has proposed several technologies to change this situation, and PCI Express, backed by giants such as Intel, IBM, and ATi, undoubtedly has a bright future. Facing the now inevitable PCI Express era, nVIDIA and ATi have taken markedly different attitudes.

8.
Molecular dynamics (MD) simulation is a widely used molecular-simulation method and an important tool for simulating biological systems. Because of its heavy computational load, the time and length scales that MD can currently reach do not yet satisfy the needs of real physical processes. As accelerators for CPUs, GPUs have in recent years offered new possibilities for increasing MD computing power. The difficulty of GPU programming lies mainly in decomposing and mapping computational tasks onto the GPU, organizing threads and memory sensibly, and carefully balancing data transfer against instruction throughput to extract the GPU's peak performance. Electrostatics is a long-range interaction that pervades biological phenomena, and treating it accurately is an essential part of MD. The Particle-Mesh-Ewald (PME) method is one of the accepted algorithms for treating electrostatics accurately. Building on GMD, the GPU-accelerated molecular dynamics program developed in our laboratory, this paper presents a strategy for implementing the PME algorithm on the GPU with NVIDIA CUDA; for the three components of the electrostatic interaction (the real-space term, the Fourier-space term, and the energy-correction term) different task-organization strategies are adopted to improve overall performance. Tests with the de facto standard benchmark dhfr show that GMD with PME is 3.93× as fast as single-core CPU Gromacs 4.5.3, 1.5× as fast as 8-core CPU Gromacs 4.5.3, and 1.87× as fast as the OpenMM 2.0-accelerated GPU version of Gromacs 4.5.3.
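For orientation, here is a heavily simplified CUDA sketch of only the real-space component of the Ewald electrostatics mentioned above; a real PME implementation adds the reciprocal-space (FFT) term and the correction terms, uses neighbour lists and a cutoff rather than an O(N²) all-pairs loop, and tunes thread and memory organization as the abstract describes. The splitting parameter beta and the toy particle data are assumptions:

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <math.h>

// Real-space Ewald term only: E_i = 0.5 * q_i * sum_{j != i} q_j * erfc(beta * r_ij) / r_ij.
__global__ void ewald_real_space(const float4* pos_q,   // xyz = position, w = charge
                                 float* energy, int n, float beta) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 pi = pos_q[i];
    float e = 0.0f;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float4 pj = pos_q[j];
        float dx = pi.x - pj.x, dy = pi.y - pj.y, dz = pi.z - pj.z;
        float r = sqrtf(dx * dx + dy * dy + dz * dz);
        e += pj.w * erfcf(beta * r) / r;                 // screened Coulomb pair term
    }
    energy[i] = 0.5f * pi.w * e;
}

int main() {
    const int n = 1024;
    const float beta = 0.3f;                             // Ewald splitting parameter (illustrative)
    float4* h_pos = new float4[n];
    for (int i = 0; i < n; ++i)                          // toy coordinates, alternating charges
        h_pos[i] = make_float4(i * 0.1f, (i % 7) * 0.2f, (i % 3) * 0.3f,
                               (i % 2) ? 1.0f : -1.0f);

    float4* d_pos; float* d_e;
    cudaMalloc(&d_pos, n * sizeof(float4));
    cudaMalloc(&d_e, n * sizeof(float));
    cudaMemcpy(d_pos, h_pos, n * sizeof(float4), cudaMemcpyHostToDevice);
    ewald_real_space<<<(n + 255) / 256, 256>>>(d_pos, d_e, n, beta);

    float* h_e = new float[n];
    cudaMemcpy(h_e, d_e, n * sizeof(float), cudaMemcpyDeviceToHost);
    double total = 0.0;
    for (int i = 0; i < n; ++i) total += h_e[i];
    printf("real-space energy (arbitrary units) = %f\n", total);

    cudaFree(d_pos); cudaFree(d_e); delete[] h_pos; delete[] h_e;
    return 0;
}
```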

9.
On a heterogeneous parallel architecture combining a multi-core central processing unit (CPU) and a graphics processing unit (GPU), a protein molecular dynamics simulation program based on the AMBER force field was implemented with OpenMP and the Compute Unified Device Architecture (CUDA). By partitioning the program sensibly into single-threaded CPU, multi-threaded CPU, and multi-threaded GPU parts, the machine's processing power is used efficiently. Performance tests show that, compared with the optimized serial CPU computation, the multi-core CPU-GPU heterogeneous parallel model has a strong performance advantage; in particular, porting the force computation, which accounts for 90% of total execution time, to the GPU yields a speedup of up to 12×.

10.
Fast Biological Sequence Alignment Based on GPGPU
An efficient biological sequence alignment scheme is proposed for CPU-GPU heterogeneous platforms. The scheme exploits the parallel processing power of the GPU and restructures the Smith-Waterman algorithm under the OpenCL framework, optimizing read latency, write latency, data-recombination functions, and data transfer to speed up sequence alignment. Experimental results show that, compared with the traditional serial algorithm on the CPU, the algorithm achieves a performance improvement of up to about 100×.
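As a reference for what is being accelerated, here is a minimal anti-diagonal (wavefront) Smith-Waterman sketch in CUDA: cells on one anti-diagonal depend only on the two previous diagonals, so each diagonal can be filled by one kernel launch. The paper's OpenCL kernels, scoring parameters, and latency and data-transfer optimizations are not reproduced here; the scoring values below are illustrative:

```cuda
#include <cuda_runtime.h>
#include <algorithm>
#include <cstdio>
#include <cstring>
#include <vector>

// Smith-Waterman with a linear gap penalty. H is the (m+1) x (n+1) score matrix;
// one thread computes one cell (i, j) on the current anti-diagonal d = i + j.
__global__ void sw_diagonal(const char* a, const char* b, int m, int n,
                            int* H, int d, int match, int mismatch, int gap) {
    int i = blockIdx.x * blockDim.x + threadIdx.x + max(1, d - n);
    int j = d - i;
    if (i > m || j < 1) return;
    int s    = (a[i - 1] == b[j - 1]) ? match : mismatch;
    int diag = H[(i - 1) * (n + 1) + (j - 1)] + s;
    int up   = H[(i - 1) * (n + 1) + j] - gap;
    int left = H[i * (n + 1) + (j - 1)] - gap;
    H[i * (n + 1) + j] = max(0, max(diag, max(up, left)));
}

int main() {
    const char* a = "GGTTGACTA";
    const char* b = "TGTTACGG";
    int m = (int)strlen(a), n = (int)strlen(b);

    char *da, *db; int *dH;
    cudaMalloc(&da, m); cudaMalloc(&db, n);
    cudaMalloc(&dH, (m + 1) * (n + 1) * sizeof(int));
    cudaMemcpy(da, a, m, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, n, cudaMemcpyHostToDevice);
    cudaMemset(dH, 0, (m + 1) * (n + 1) * sizeof(int));   // row/column 0 stay 0

    for (int d = 2; d <= m + n; ++d) {                     // sweep diagonals in order
        int cells = std::min(m, d - 1) - std::max(1, d - n) + 1;
        if (cells <= 0) continue;
        sw_diagonal<<<(cells + 127) / 128, 128>>>(da, db, m, n, dH,
                                                  d, 2, -1, 1);
    }

    std::vector<int> H((m + 1) * (n + 1));
    cudaMemcpy(H.data(), dH, H.size() * sizeof(int), cudaMemcpyDeviceToHost);
    printf("best local alignment score = %d\n",
           *std::max_element(H.begin(), H.end()));

    cudaFree(da); cudaFree(db); cudaFree(dH);
    return 0;
}
```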

11.
Despite increasing investment in integrated GPUs and next-generation interconnect research, discrete GPUs connected by PCIe still dominate the market, and the management of data communication between CPU and GPU continues to evolve. Initially, the programmer explicitly controlled data transfer between CPU and GPU. To simplify programming and enable system-wide atomic memory operations, GPU vendors have developed a programming model that provides a single virtual address space for accessing all CPU and GPU memories in the system. The page-migration engine in this model automatically migrates pages between CPU and GPU on demand. To meet the needs of high-performance workloads, page sizes tend to become larger. Limited by interconnects with lower bandwidth and higher latency than GDDR, larger page migrations take longer, which may reduce the overlap of computation and transfer, waste time migrating unrequested data, block subsequent requests, and cause serious performance degradation. In this paper, we propose partial page migration, which migrates only the requested part of a page, to reduce the migration unit, shorten migration latency, and avoid the performance degradation of full page migration as pages grow larger. We show that partial page migration can largely hide the performance overheads of full page migration. Compared with programmer-controlled data transfer, when the page size is 2 MB and the PCIe bandwidth is 16 GB/s, full page migration is 72.72× slower, while our partial page migration achieves a 1.29× speedup. When the PCIe bandwidth is raised to 96 GB/s, full page migration is 18.85× slower, while our partial page migration provides a 1.37× speedup. Additionally, we examine the performance impact that PCIe bandwidth and migration unit size have on execution time, enabling designers to make informed decisions.
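The single-virtual-address-space model with on-demand page migration that the paper builds on can be illustrated with CUDA unified memory; this sketch only shows the two baseline behaviours (fault-driven migration versus up-front prefetch), not the proposed partial page migration, which is a hardware/runtime change rather than application code:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float* x, int n, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;            // first touch of each page triggers a migration
}

int main() {
    const int n = 1 << 24;
    float* x = nullptr;
    // One virtual address space visible to both CPU and GPU; pages migrate on demand.
    cudaMallocManaged(&x, n * sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 1.0f;     // pages now resident on the CPU side

    int dev = 0;
    cudaGetDevice(&dev);

    // Variant A: rely on the on-demand page-migration engine (fault-driven, per page).
    scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f);
    cudaDeviceSynchronize();

    // Variant B: prefetch the whole range up front, behaving more like the
    // programmer-controlled transfer used as the other baseline in the paper.
    cudaMemPrefetchAsync(x, n * sizeof(float), dev);
    scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f);
    cudaDeviceSynchronize();

    printf("x[0] = %f\n", x[0]);     // accessed pages migrate back to the CPU
    cudaFree(x);
    return 0;
}
```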

12.
Graphics processing units (GPUs) have taken an important role in the general-purpose computing market in recent years. At present, the common approach to programming GPUs is to write GPU-specific code…

13.
The InfiniBand architecture is an industry standard that offers low latency and high bandwidth as well as advanced features such as remote direct memory access (RDMA), atomic operations, multicast, and quality of service. InfiniBand products can achieve a latency of several microseconds for small messages and a bandwidth of 700 to 900 Mbytes/s. As a result, it is becoming increasingly popular as a high-speed interconnect technology for building high-performance clusters. The Peripheral Component Interconnect (PCI) has been the standard local-I/O-bus technology for the last 10 years. However, more applications require lower latency and higher bandwidth than what a PCI bus can provide. As an extension, PCI-X offers higher peak performance and efficiency. InfiniBand host channel adapters (HCAs) with PCI Express achieve 20 to 30 percent lower latency for small messages compared with HCAs using 64-bit, 133-MHz PCI-X interfaces. PCI Express also improves performance at the MPI level, achieving a latency of 4.1 μs for small messages. It can also improve MPI collective communication and bandwidth-bound MPI application performance.

14.
Heterogeneous platforms composed of multiple different types of computing devices (such as CPUs, GPUs, and Intel MICs) have been widely used recently. However, most parallel applications developed on such heterogeneous platforms utilize only one kind of computing device, owing to the lack of easy-to-use heterogeneous cooperative parallel programming models. To reduce the difficulty of heterogeneous cooperative parallel programming, a directive-based heterogeneous cooperative parallel programming framework called HeteroPP is proposed. HeteroPP provides an easier way for programmers to fully exploit multiple different types of computing devices to concurrently and cooperatively perform data-parallel applications on heterogeneous platforms. An extension to OpenMP directives and clauses is proposed so that programmers can easily offload a data-parallel compute kernel to multiple different types of computing devices, and a source-to-source compiler is designed to automatically generate the device-specific compute kernels that are then performed concurrently and cooperatively on the heterogeneous platform. Experiments with 12 typical data-parallel applications implemented with HeteroPP on a heterogeneous CPU-GPU-MIC platform show that HeteroPP not only greatly simplifies heterogeneous cooperative parallel programming but also fully utilizes the CPUs, GPU, and MIC to perform these applications efficiently.
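HeteroPP's own directives and clauses are not given in the abstract; as a generic illustration of the directive-based offload style it extends, here is a standard OpenMP 4.5 target construct (this is not HeteroPP syntax; the array names and sizes are illustrative, and the loop falls back to the host if no offload device or offload-capable compiler is available):

```cuda
#include <omp.h>
#include <cstdio>

// A data-parallel loop offloaded with standard OpenMP target directives.
// Directive-based frameworks such as HeteroPP extend this idea so that one
// annotated loop can be split cooperatively across CPUs, GPUs, and MICs.
int main() {
    const int n = 1 << 20;
    float* a = new float[n];
    float* b = new float[n];
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    #pragma omp target teams distribute parallel for map(tofrom: a[0:n]) map(to: b[0:n])
    for (int i = 0; i < n; ++i)
        a[i] += b[i];                 // runs on the default offload device if present

    printf("a[0] = %f\n", a[0]);
    delete[] a; delete[] b;
    return 0;
}
```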

15.
In light of GPUs' powerful floating-point operation capacity, heterogeneous parallel systems incorporating general-purpose CPUs and GPUs have become a highlight in the research field of high performance computing (HPC). However, due to the complexity of programming on GPUs, porting a large number of existing scientific computing applications to heterogeneous parallel systems remains a big challenge. The OpenMP programming interface is widely adopted on multi-core CPUs in the field of scientific computing. To effectively inherit existing OpenMP applications and reduce the transplant cost, we extend OpenMP with a group of compiler directives, which explicitly divide tasks among the CPU and the GPU, and map time-consuming computing fragments to run on the GPU, thus dramatically simplifying the transplantation. We have designed and implemented MPtoStream, a compiler of the extended OpenMP for AMD's stream-processing GPUs. Our experimental results show that programming with the extended directives deviates from programming with OpenMP by less than 11% modification and achieves significant speedups ranging from 3.1 to 17.3 on a heterogeneous system, incorporating an Intel Xeon E5405 CPU and an AMD FireStream 9250 GPU, over execution on the Xeon CPU alone.

16.
For a wideband direction-finding system based on the PCI Express bus, a WDM driver for Windows XP was designed around the PEX8114 interface chip, with emphasis on the key to high-speed data transfer: interrupt-driven DMA. Practical verification shows that the system sustains a stable data-transfer rate of over 160 MBps, and the driver can easily be ported to other high-speed data-transfer systems.

17.
The ever-increasing performance demands of scientific and engineering applications have driven the rapid development of heterogeneous computing; however, the lack of shared memory between the CPU and the accelerator makes heterogeneous programming more difficult, because programmers must explicitly specify how data are transferred between devices. The Global Arrays (GA) model, built on the Aggregate Remote Memory Copy Interface (ARMCI), provides distributed-memory systems with a shared-memory programming environment with asynchronous one-sided communication, but the complexity of extending the ARMCI interface means that GA cannot be quickly implemented on a particular computing platform according to that platform's characteristics. CoGA is a heterogeneous extension of the GA model that aims to provide a global-array structure for CPU + Intel Xeon Phi (MIC) heterogeneous systems, hiding data-transfer details and thereby simplifying heterogeneous programming. CoGA manages CPU and MIC memory on top of the Symmetric Communications Interface (SCIF) of the MIC and exploits the characteristics of SCIF remote memory access to optimize CPU-MIC data-transfer performance. Finally, tests of data-transfer bandwidth, communication latency, and a sparse matrix multiplication problem demonstrate that CoGA is effective and practical in simplifying programming and optimizing data-transfer performance.

18.
As a general-purpose, scalable parallel programming model for coding highly parallel applications, CUDA from NVIDIA provides several key abstractions: a hierarchy of thread blocks, shared memory, and barrier synchronization. It has proven to be rather effective at programming multithreaded many-core GPUs that scale transparently to hundreds of cores; as a result, researchers across industry and academia are using CUDA to dramatically expedite production and research codes. GPU-based clusters are likely to play an essential role in future cloud computing centers, because some computation-intensive applications require GPUs as well as CPUs. In this paper, we adopted PCI pass-through technology and set up virtual machines in a virtualized environment, allowing the virtual machines to use the NVIDIA graphics card and CUDA high-performance computing. In this way, a virtual machine has not only virtual CPUs but also a real GPU for computing, so its performance is expected to increase dramatically. This paper measures the performance difference between physical and virtual machines using CUDA and investigates how the number of CPUs assigned to a virtual machine influences CUDA performance. Finally, we compared the CUDA performance of two open-source virtualization hypervisor environments, with and without PCI pass-through. The experimental results indicate which environment is most efficient for CUDA in a virtualized setting.
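The physical-versus-virtual comparison described can be sketched as a simple CUDA event-timed test run in both environments; the saxpy kernel, the array size, and the choice to time the host-to-device copy separately (the part most sensitive to PCI pass-through) are illustrative assumptions, not the paper's actual benchmark:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 24;
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }
    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);

    // Time host-to-device transfer: with PCI pass-through the guest talks to the
    // real PCIe device, so this number should stay close to the bare-metal one.
    cudaEventRecord(t0);
    cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms_copy = 0.0f;
    cudaEventElapsedTime(&ms_copy, t0, t1);

    // Time the kernel itself.
    cudaEventRecord(t0);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms_kernel = 0.0f;
    cudaEventElapsedTime(&ms_kernel, t0, t1);

    printf("H2D copy: %.3f ms, saxpy kernel: %.3f ms\n", ms_copy, ms_kernel);

    cudaEventDestroy(t0); cudaEventDestroy(t1);
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
```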

19.
Although designed as a cross-platform parallel programming model, OpenCL remains mainly used for GPU programming. Nevertheless, a large number of applications have been parallelized, implemented, and eventually optimized in OpenCL. Thus, in this paper, we focus on the potential that these parallel applications have to exploit the performance of multi-core CPUs. Specifically, we analyze how to systematically reuse and adapt OpenCL code from GPUs to CPUs. We claim that this work is a necessary step for enabling inter-platform performance portability in OpenCL.
