Similar Documents
20 similar documents found.
1.
Parallel Cube Computation on Graphics Processing Units  (Cited by: 1)
Cube computation is a core problem in data warehousing and online analytical processing (OLAP), and improving its performance has drawn wide attention from both academia and industry, yet most existing cube algorithms ignore recent processor architectures. In recent years processors have evolved from a single computing core to several or many cores, e.g. multi-core CPUs and graphics processing units (GPUs). To fully exploit the multi-core resources of modern processors, this paper proposes a GPU-based parallel cube algorithm, GPU-Cubing. The algorithm adopts a bottom-up, breadth-first partitioning strategy, computing and outputting one cuboid per pass; within each pass, multiple partitions are processed simultaneously, with multi-threaded parallelism inside each partition. GPU-Cubing suits the GPU architecture and exhibits a high degree of parallelism. Compared with the BUC algorithm on real datasets, it achieves more than an order-of-magnitude speedup for full cube computation and at least a 2x speedup for iceberg cubes.
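The GPU-Cubing kernel itself is not reproduced in the abstract. As a minimal illustrative sketch (plain sequential Python, with hypothetical names, not the paper's code), the bottom-up, breadth-first cuboid enumeration it parallelizes can be written as follows; each cuboid's group-by is the unit of work a GPU pass would distribute across partitions and threads:

```python
from itertools import combinations
from collections import defaultdict

def compute_cube(rows, n_dims):
    """Enumerate all 2^n cuboids of an n-dimensional fact table breadth-first.

    rows: list of (dims_tuple, measure). Returns {dims_subset: {group_key: sum}}.
    A GPU version would aggregate each cuboid's groups in parallel; here the
    per-cuboid aggregation is a plain sequential group-by.
    """
    cube = {}
    # Breadth-first: level k enumerates every cuboid grouping by k dimensions.
    for k in range(n_dims + 1):
        for dims in combinations(range(n_dims), k):
            groups = defaultdict(float)
            for values, measure in rows:
                key = tuple(values[d] for d in dims)
                groups[key] += measure
            cube[dims] = dict(groups)
    return cube

rows = [(("cn", "2024", "web"), 3.0),
        (("cn", "2024", "app"), 2.0),
        (("us", "2023", "web"), 5.0)]
cube = compute_cube(rows, 3)
# The apex cuboid () holds the grand total over all rows.
assert cube[()][()] == 10.0
```

An iceberg cube would simply drop groups whose aggregate falls below a threshold before output.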

2.
As a basic computational building block, QR decomposition is widely used in image processing, signal processing, communications engineering, and many other fields. Traditional parallel QR decomposition algorithms exploit only the data-level parallelism of the computation. Based on an analysis of the fast Givens rotation, this paper proposes a multi-level parallel algorithm that exploits both task-level and data-level parallelism, making it well suited to massively parallel processors such as graphics processing units (GPUs). The GPU-based parallel QR decomposition can also serve as a basic module that other applications on the GPU platform call directly. Experimental results show that, compared with an OpenMP implementation on the CPU, the multi-level parallel GPU algorithm achieves more than a 5x speedup, while a singular value decomposition (SVD) application built on the QR module achieves more than a 3x speedup.
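As an illustration of the underlying operation (a sequential sketch using standard, not fast, Givens rotations; not the paper's code), QR decomposition by Givens rotations looks like this. Rotations that touch disjoint row pairs are independent, and that independence is the task-level parallelism a multi-level GPU scheme exploits:

```python
import math

def givens_qr(A):
    """QR decomposition via Givens rotations. A is an m x n list of lists, m >= n.
    Returns (Q, R) with A = Q R, Q orthogonal (m x m), R upper triangular (m x n).
    Each rotation zeroes one subdiagonal entry of R.
    """
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]
    Q = [[float(i == j) for j in range(m)] for i in range(m)]
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[i - 1][j], R[i][j]
            r = math.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            # Rotate rows i-1 and i of R: zeroes R[i][j], sets R[i-1][j] = r.
            for k in range(n):
                R[i - 1][k], R[i][k] = (c * R[i - 1][k] + s * R[i][k],
                                        -s * R[i - 1][k] + c * R[i][k])
            # Accumulate the transposed rotation into columns i-1 and i of Q.
            for k in range(m):
                Q[k][i - 1], Q[k][i] = (c * Q[k][i - 1] + s * Q[k][i],
                                        -s * Q[k][i - 1] + c * Q[k][i])
    return Q, R
```

The fast Givens variant analyzed in the paper avoids the square root in `hypot` by carrying a diagonal scaling matrix, but the dependency structure, and hence the available parallelism, is the same.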

3.
A Fine-Grained Parallel Ant Colony Algorithm Accelerated on the GPU  (Cited by: 1)
To improve the performance of ant colony optimization on large traveling salesman problems, this paper proposes a fine-grained parallel ant colony algorithm accelerated on a graphics processing unit (GPU). The solution process is mapped onto parallel thread blocks under the Compute Unified Device Architecture (CUDA), so that the algorithm runs accelerated on the GPU. Experimental results show that the algorithm improves global search ability and supports larger ant populations in the fine-grained parallel setting, thereby increasing computation speed.

4.
The lifting wavelet transform is widely used in image denoising, but on massive data streams it is too slow for real-time use. To increase its speed, this paper proposes a parallel computing strategy based on the graphics processing unit (GPU): the classical lifting wavelet transform is mapped onto the CUDA programming model, the massively parallel GPU serves as the compute device, and a sliding-window parallel lifting wavelet transform is implemented that exploits the strengths of the GPU memory hierarchy. Test results show that, under the experimental conditions used and as image size grows, the parallel algorithm speeds up the computation by up to 50x, a clear gain in efficiency. The proposed approach can also be applied to parallelize other image-processing algorithms.
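To make the lifting structure concrete (an illustrative sketch of the simplest case, one-level Haar lifting in 1D; the paper's transform and windowing are not specified in the abstract), the split/predict/update steps look like this. Every even/odd pair is independent, which is why one GPU thread per pair (or per sliding window) parallelizes cleanly:

```python
def lifting_haar(signal):
    """One level of the Haar wavelet via lifting: split, predict, update.
    signal must have even length. Returns (approx, detail)."""
    evens = signal[0::2]
    odds = signal[1::2]
    detail = [o - e for e, o in zip(evens, odds)]          # predict step
    approx = [e + d / 2.0 for e, d in zip(evens, detail)]  # update step
    return approx, detail

def inverse_lifting_haar(approx, detail):
    """Invert by running the lifting steps backwards with signs flipped."""
    evens = [a - d / 2.0 for a, d in zip(approx, detail)]
    odds = [e + d for e, d in zip(evens, detail)]
    out = []
    for e, o in zip(evens, odds):
        out += [e, o]
    return out
```

The in-place, add-only structure of lifting (no convolution buffers) is what makes the GPU memory optimizations mentioned above effective.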

5.
In object recognition, the most accurate algorithm currently available for human detection is the deformable part model (DPM). To address its heavy computational cost, this paper proposes a parallel solution based on the graphics processing unit (GPU). Using the OpenCL programming model, the implementation of the entire DPM pipeline is redesigned with parallelism in mind, and the memory model and thread assignment of the implementation are optimized. Compared against the OpenCV library implementation, the GPU version achieves nearly an 8x improvement in execution efficiency while preserving detection quality.

6.

To improve the performance of ant colony optimization on large traveling salesman problems, this paper proposes a fine-grained parallel ant colony algorithm accelerated on a graphics processing unit (GPU). The solution process is mapped onto parallel thread blocks under the Compute Unified Device Architecture (CUDA), so that the algorithm runs accelerated on the GPU. Experimental results show that the algorithm improves global search ability and supports larger ant populations in the fine-grained parallel setting, thereby increasing computation speed.


7.
An Implementation of a Fine-Grained Parallel Genetic Algorithm Accelerated on the GPU  (Cited by: 1)
To improve the performance of genetic algorithms on large multi-variable problems, this paper proposes an implementation of a fine-grained parallel genetic algorithm accelerated on a graphics processing unit (GPU). The solution process is mapped onto the GPU's texture rendering pipeline, so that the genetic algorithm runs accelerated on the GPU. Experimental results show that the algorithm suppresses premature convergence, supports larger populations, runs faster, and offers ordinary users a practical way to study parallel genetic algorithms.

8.
Fast Image Compression Based on CUDA  (Cited by: 1)
To further raise JPEG encoding throughput, this paper studies the JPEG compression algorithm and shows that its core steps can be parallelized. The natural implementation platform is therefore the GPU, whose strength is parallel computation, rather than the largely serial CPU. NVIDIA's recently introduced CUDA (Compute Unified Device Architecture), a development platform for general-purpose computing on the GPU, is well suited to large-scale data-parallel computation and provides the necessary hardware and software environment. The encoder is parallelized with CUDA on the GPU's streaming-processor architecture, with memory reads and writes and other aspects optimized for that architecture, raising JPEG encoding speed. Experimental results demonstrate CUDA's advantage in parallel processing: JPEG encoding efficiency improves dramatically.
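The core parallelizable step is the DCT. As an illustrative sketch (naive sequential Python, not the paper's CUDA code), here is the 8-point DCT-II that JPEG applies to each row and then each column of an 8x8 block; every coefficient of every block is independent, which is exactly what a CUDA grid of thread blocks exploits:

```python
import math

def dct_1d(block):
    """Naive 8-point DCT-II, as applied per row/column of a JPEG 8x8 block.
    Each output coefficient depends only on the input block, so a CUDA kernel
    can assign one thread per coefficient and one thread block per image block.
    """
    N = len(block)
    out = []
    for k in range(N):
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        s = sum(block[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        out.append(c * s)
    return out
```

A flat (constant) block produces only a DC coefficient, which is the property quantization and entropy coding then exploit.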

9.
Cai Yong, Li Sheng. 《计算机应用》 (Journal of Computer Applications), 2016, 36(3): 628-632
To address the high hardware cost and low development efficiency of conventional parallel approaches to fast structural topology optimization, this paper proposes an end-to-end parallel computing strategy for the bi-directional evolutionary structural optimization (BESO) method based on Matlab and the graphics processing unit (GPU). It first examines the strengths, weaknesses, and applicability of the three routes to GPU parallel computing available in the Matlab programming environment. It then parallelizes the vector and dense-matrix computations of the topology optimization algorithm directly with built-in functions, solves the sparse-format finite element systems quickly through a MEX function calling the cuSOLVER library, and parallelizes the element sensitivity analysis and other optimization decisions with parallel thread execution (PTX) code. Numerical examples show that developing GPU parallel programs directly in Matlab is not only efficient to program but also avoids numerical-precision discrepancies between programming languages, so the GPU program attains a considerable speedup while leaving the computed results unchanged.

10.
Parallel Explicit Finite Element Computation with a Central Difference Scheme on a GPU General-Purpose Computing Platform  (Cited by: 3)
The explicit finite element method is an effective approach to planar nonlinear dynamic problems, but because the explicit scheme is only conditionally stable, solving large finite element models takes a long time. The graphics processing unit (GPU), a highly parallel general-purpose processor, can address the speed of large-scale scientific computing, and the Compute Unified Device Architecture (CUDA) provides an efficient, convenient path to general-purpose GPU computing. This paper therefore builds a parallel explicit finite element method with a central difference scheme on a GPU general-purpose computing platform. Tailored to the characteristics of GPU computation, the serial workflow is reorganized and optimized, and a one-to-one mapping between threads and elements or nodes makes the iteration fully parallel. Numerical examples show that, on an NVIDIA GTX 460 and with accuracy matching the serial version, the method improves computational efficiency substantially, making it an efficient and convenient numerical method for planar nonlinear dynamic problems.
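To illustrate the time-stepping scheme (a minimal sketch on a single undamped degree of freedom, not the paper's FEM code), the central difference update advances each displacement using only the two previous values. That locality is what allows the one-thread-per-node mapping described above, and the fictitious starting step shows where the conditional stability constraint enters:

```python
def central_difference(m, k, u0, v0, dt, steps):
    """Explicit central-difference integration of m*u'' + k*u = 0.
    Conditionally stable: requires dt <= 2/omega with omega = sqrt(k/m).
    Returns the displacement history [u(0), u(dt), ..., u(steps*dt)].
    """
    a0 = -k * u0 / m                              # initial acceleration
    u_prev = u0 - dt * v0 + 0.5 * dt * dt * a0    # fictitious step u(-dt)
    u = u0
    history = [u0]
    for _ in range(steps):
        a = -k * u / m
        u_next = 2.0 * u - u_prev + dt * dt * a   # central difference update
        u_prev, u = u, u_next
        history.append(u)
    return history
```

In an FEM setting the scalar update becomes a vector update per node, with the element force evaluation supplying `a`; since no global system is solved, every node's update is independent within a step.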

11.
Real-Time Weighted Pose-Space Deformation on the GPU  (Cited by: 1)

12.
View-dependent multiresolution rendering places a heavy load on the CPU. This paper presents a new method for view-dependent refinement of multiresolution meshes that uses the computational power of modern programmable graphics hardware (the GPU). The method uses two rendering passes. In the first pass, level-of-detail selection is performed in the fragment shaders. The resulting buffer is fed to the second pass as an input texture via vertex texturing, where node culling and triangulation are performed in the vertex shaders. Our approach generates adaptive meshes in real time and can be implemented entirely on the GPU. It improves the efficiency of mesh simplification and significantly alleviates the computational load on the CPU.

13.
We present a method for stochastic fiber tract mapping from diffusion tensor MRI (DT-MRI) implemented on graphics hardware. From the simulated fibers we compute a connectivity map that gives an indication of the probability that two points in the dataset are connected by a neuronal fiber path. A Bayesian formulation of the fiber model is given and it is shown that the inversion method can be used to construct plausible connectivity. An implementation of this fiber model on the graphics processing unit (GPU) is presented. Since the fiber paths can be stochastically generated independently of one another, the algorithm is highly parallelizable. This allows us to exploit the data-parallel nature of the GPU fragment processors. We also present a framework for the connectivity computation on the GPU. Our implementation allows the user to interactively select regions of interest and observe the evolving connectivity results during computation. Results are presented from the stochastic generation of over 250,000 fiber steps per iteration at interactive frame rates on consumer-grade graphics hardware.

14.
We develop a novel approach for computing the circle Hough transform entirely on graphics hardware (GPU). A primary role is assigned to vertex processors and the rasterizer, overshadowing the traditional foreground of pixel processors and enhancing parallel processing. Resources like the vertex cache or blending units are studied too, with our set of optimizations leading to extraordinary peak gain factors exceeding 358x over a typical CPU execution. Software optimizations, like the use of precomputed tables or gradient information and hardware improvements, like hyperthreading and multicores are explored on CPUs as well. Overall, the GPU exhibits better scalability and much greater parallel performance to become a solid alternative for computing the classical circle Hough transform versus those optimal methods run on emerging multicore architectures.
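For readers unfamiliar with the transform itself (an illustrative CPU sketch with a known radius, not the paper's vertex-processor implementation), the circle Hough transform has each edge point vote for every candidate centre lying at the given radius from it; the votes are mutually independent, which is what maps so well onto GPU processors and blending units:

```python
import math
from collections import Counter

def circle_hough(edge_points, radius, step_deg=10):
    """Circle Hough transform with a known radius: each edge point votes for
    the candidate centres on a circle of that radius around it. The centre of
    a real circle accumulates votes from every point on its perimeter.
    """
    votes = Counter()
    for (x, y) in edge_points:
        for deg in range(0, 360, step_deg):
            t = math.radians(deg)
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            votes[(cx, cy)] += 1
    return votes

# Four exact points on a circle of radius 5 centred at (10, 10).
pts = [(15, 10), (10, 15), (5, 10), (10, 5)]
votes = circle_hough(pts, 5, step_deg=90)
# The true centre (10, 10) collects one vote from each edge point.
assert votes.most_common(1)[0] == ((10, 10), 4)
```

On the GPU described above, each vote becomes a rasterized primitive and the accumulator becomes a render target with additive blending.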

15.
Simulating 2D seismic elastic/viscoelastic wavefields with a staggered-grid finite difference method consumes a large amount of computation time. This paper exploits the GPU's parallel processing characteristics and rendering pipeline: the computational domain is divided into an interior region and a PML boundary-handling region, the entire computation is carried out by vertex and fragment programs, and FBO (framebuffer object) techniques convert the finite-difference iteration results between textures. Experimental results show that, compared with a CPU implementation, the GPU method improves simulation efficiency, with the advantage growing as the grid size increases, enabling efficient large-scale simulation.

16.
This paper presents a Graphics Processing Unit (GPU)-based implementation of a Bellman-Ford (BF) routing algorithm using NVIDIA’s Compute Unified Device Architecture (CUDA). In the proposed GPU-based approach, multiple threads run concurrently over numerous streaming processors in the GPU to dynamically update routing information. Instead of computing the individual vertex distances one-by-one, a number of threads concurrently update a larger number of vertex distances, and an individual vertex distance is represented in a single thread. This paper compares the performance of the GPU-based approach to an equivalent CPU implementation while varying the number of vertices. Experimental results show that the proposed GPU-based approach outperforms the equivalent sequential CPU implementation in terms of execution time by exploiting the massive parallelism inherent in the BF routing algorithm. In addition, the reduction in energy consumption (about 99 %) achieved by using the GPU is reflective of the overall merits of deploying GPUs across the entire landscape of IP routing for emerging multimedia communications.
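For reference (a sequential sketch, not the paper's CUDA code), the Bellman-Ford structure being parallelized is the repeated relaxation of every edge; within one round all relaxations are independent, which is the massive parallelism the GPU threads exploit:

```python
def bellman_ford(n, edges, src):
    """Bellman-Ford single-source shortest paths on n vertices.
    edges: list of (u, v, w) directed edges. Returns the distance list,
    with float('inf') marking unreachable vertices. In the GPU formulation,
    each relaxation inside a round is handled by its own thread.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0.0
    for _ in range(n - 1):              # at most n-1 rounds of relaxation
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:   # relax edge (u, v)
                dist[v] = dist[u] + w
                changed = True
        if not changed:                 # early exit once distances settle
            break
    return dist
```

The parallel version must make the concurrent `dist[v]` updates safe, e.g. with atomic minimum operations, but the round structure is unchanged.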

17.
This paper presents a fast, high-quality, GPU-based isosurface rendering pipeline for implicit surfaces defined by a regular volumetric grid. GPUs are designed primarily for use with polygonal primitives, rather than volume primitives, but here we directly treat each volume cell as a single rendering primitive by designing a vertex program and fragment program on a commodity GPU. Compared with previous raycasting methods, ours has a more effective memory footprint (cache locality) and better coherence between multiple parallel SIMD processors. Furthermore, we extend and speed up our approach by introducing a new view-dependent sorting algorithm that takes advantage of the early-z-culling feature of the GPU to gain a significant performance speed-up. As another advantage, this sorting algorithm makes rendering multiple transparent isosurfaces available almost for free. Finally, we demonstrate the effectiveness and quality of our techniques in several real-time rendering scenarios and include analysis and comparisons with previous work.

18.
A Flexible Kernel for Adaptive Mesh Refinement on GPU  (Cited by: 3)
We present a flexible GPU kernel for adaptive on-the-fly refinement of meshes with arbitrary topology. By simply reserving a small amount of GPU memory to store a set of adaptive refinement patterns, on-the-fly refinement is performed by the GPU, without any preprocessing or additional topology data structure. The level of adaptive refinement can be controlled by specifying a per-vertex depth tag, in addition to the usual position, normal, color, and texture coordinates. This depth tag is used by the kernel to instantiate the correct refinement pattern, which maps a refined connectivity onto the input coarse polygon. Finally, the refined patch produced for each triangle can be displaced by the vertex shader using any kind of geometric refinement, such as Bezier patch smoothing, scalar-valued displacement, procedural geometry synthesis, or subdivision surfaces. This refinement engine requires neither multipass rendering, nor fragment processing, nor any special preprocessing of the input mesh structure, and it can be implemented on any GPU with vertex shading capabilities.

19.
Research on General-Purpose Computing Applications of GPUs  (Cited by: 9)
With the rapid development of graphics processing units (GPUs) in recent years, researchers at home and abroad have taken up general-purpose computing on the GPU as a new research area. Building on a study of the latest international literature, this paper analyzes the characteristics of the GPU itself, explains the structure of GPU-based applications, examines how GPU programming differs from ordinary CPU programming, and describes the GPU programming method and process in detail using Gaussian filtering as a worked example.

20.
Exploiting the GPU's powerful floating-point arithmetic and parallel processing capability, this paper proposes a fast subdivision method built on a view-dependent adaptive subdivision kernel that runs entirely on the GPU. The core steps, computing view-dependent subdivision depth values per patch, evaluating subdivision surface vertices from basis function tables, and rendering the subdivided surface, are all carried out on the GPU in turn, with no geometry data exchanged with CPU-side system memory. The view-dependent adaptive subdivision criterion effectively lowers the subdivision depth and the amount of subdivision computation while keeping surface rendering accuracy unchanged; on this basis, the fully GPU-based framework makes surface subdivision fast and efficient. The method can also adaptively subdivide locally important details to greater depths in real time, so as to approach the limit surface.
