Similar Documents
 20 similar documents found (search time: 15 ms)
1.
To address the difficulty of developing and porting applications to the GPU, a method is proposed for mapping serial source programs to parallel source programs. The method extracts the nesting structure of parallelizable loops from the serial source program, establishes a correspondence between loop bodies and GPU threads, and generates the GPU-side kernel code; CPU-side control code is generated according to the read/write attributes of variable references. A prototype compiler based on this method automatically translates C source programs into CUDA source programs. Functional and performance tests of the prototype show that the generated CUDA programs are functionally equivalent to the original C programs while achieving significant performance improvements, easing, to a certain extent, the difficulty of porting compute-intensive applications to CPU-GPU heterogeneous many-core systems.
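As a minimal sketch of the kind of loop-to-kernel mapping such a translator produces (all names are illustrative, not taken from the paper): a parallelizable loop becomes a kernel with one iteration per thread, and the read/write attributes of the referenced variables decide which arrays are copied in and which are copied back.

```cuda
// Original serial loop:  for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];
#include <cuda_runtime.h>

__global__ void add_kernel(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one loop iteration per thread
    if (i < n) c[i] = a[i] + b[i];
}

// CPU-side control code derived from read/write attributes:
// a and b are read-only (copied in), c is write-only (copied out).
void add_on_gpu(const float *a, const float *b, float *c, int n) {
    float *da, *db, *dc;
    size_t bytes = n * sizeof(float);
    cudaMalloc((void**)&da, bytes); cudaMalloc((void**)&db, bytes); cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);
    add_kernel<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);
    cudaFree(da); cudaFree(db); cudaFree(dc);
}
```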

2.
Design and implementation of a CUDA-based parallel cuckoo search algorithm
Cuckoo search (CS) is an intelligent metaheuristic algorithm developed in recent years that has been applied successfully to many optimization problems. To address the long computation time of CS on large-scale, complex problems with big data, a parallel cuckoo search algorithm based on the compute unified device architecture (CUDA) is proposed. The parallel implementation combines task parallelism and data parallelism: graphics processing unit (GPU) thread blocks are mapped to individual cuckoos and threads to the dimensions of each individual, so that the nest position update, fitness evaluation, nest rebuilding, and best-individual search of CS are all executed in parallel. The entire iterative optimization process runs on the GPU, reducing the CPU-GPU communication overhead during computation. Simulation experiments on four classic benchmark functions show that, with comparable convergence, the CUDA-based parallel CS algorithm achieves a computational speedup of up to 110 times over the standard CS algorithm.
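A minimal sketch of the block-per-individual, thread-per-dimension mapping described above; the update rule is simplified (the Lévy-flight step is assumed precomputed into a step array) and all names are illustrative rather than taken from the paper.

```cuda
__global__ void update_nests(float *nests, const float *best, const float *step,
                             int dim, float alpha) {
    int i = blockIdx.x;    // one thread block per cuckoo/nest
    int d = threadIdx.x;   // one thread per dimension of that nest
    if (d < dim) {
        int idx = i * dim + d;
        // move toward the best nest, scaled by a (precomputed) Levy-flight step
        nests[idx] += alpha * step[idx] * (nests[idx] - best[d]);
    }
}
// Launch: update_nests<<<numNests, dim>>>(d_nests, d_best, d_step, dim, 0.01f);
```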

3.
A GPU implementation for LBG and SOM training
Vector quantization (VQ) is an effective technique applicable in a wide range of areas, such as image compression and pattern recognition. The most time-consuming procedure of VQ is codebook training, and two of the frequently used training algorithms are LBG and the self-organizing map (SOM). Nowadays, desktop computers are usually equipped with programmable graphics processing units (GPUs), whose parallel data-processing ability is ideal for accelerating codebook training. Although there are some GPU algorithms for LBG training, their implementations suffer from a large amount of data transfer between CPU and GPU and a large number of rendering passes within a training iteration. This paper presents a novel GPU-based implementation for LBG and SOM training. More specifically, we utilize the random write ability of the vertex shader to reduce the overheads mentioned above. Our experimental results show that our approach can run four times faster than the previous approach.
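The paper's implementation relies on vertex-shader random writes; as a rough modern CUDA analogue (not the paper's method), the inner step that dominates LBG training, assigning each training vector to its nearest codeword, parallelizes naturally with one thread per training vector:

```cuda
__global__ void assign_nearest(const float *vectors, const float *codebook,
                               int *assignment, int numVectors, int numCodewords, int dim) {
    int v = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per training vector
    if (v >= numVectors) return;
    float bestDist = 3.4e38f;                        // ~FLT_MAX
    int best = 0;
    for (int c = 0; c < numCodewords; ++c) {
        float dist = 0.0f;
        for (int d = 0; d < dim; ++d) {
            float diff = vectors[v * dim + d] - codebook[c * dim + d];
            dist += diff * diff;                     // squared Euclidean distance
        }
        if (dist < bestDist) { bestDist = dist; best = c; }
    }
    assignment[v] = best;                            // nearest codeword index
}
```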

4.
张珩  崔强  侯朋朋  武延军  赵琛 《软件学报》2020,31(4):1225-1239
In complex network theory, core decomposition is one of the most fundamental methods for measuring the "importance" of network nodes and analyzing core subgraphs. It is widely used in applications such as user behavior analysis in social networks, visualization of complex networks, and static analysis of large code bases. As the scale and complexity of complex-network graph data grow, existing parallel core decomposition algorithms designed for multi-core CPUs can no longer meet the high-performance requirements of large data volumes, owing to the limited number of CPU cores and limited memory bandwidth, which seriously hampers complex-network analysis. General-purpose GPUs offer highly parallel computation with more than ten thousand threads and memory bandwidth above 100 GB/s, and have been widely used for efficient parallel analysis of large-scale graph data, e.g., breadth-first search and shortest-path algorithms. To achieve more efficient core decomposition, two parallel strategies for core decomposition of complex networks on GPU platforms are proposed. The first, RLCore, is based on graph traversal: it uses the GPU's massively parallel computing power to traverse the network graph bottom-up and iteratively assign each node to its core level. The second, ESCore, is based on local convergence: each node repeatedly aggregates the current values of its neighbors to update its own value until convergence. Compared with RLCore, ESCore greatly reduces the synchronization overhead incurred when GPU threads update the same node during traversal, while its number of iterations depends on the convergence rate. Experimental results on real network graph data show that the two proposed strategies substantially outperform existing methods in efficiency and scalability, achieving up to a 33.6x speedup over a single-threaded algorithm and a traversed-edges-per-second (TEPS) throughput of 4.06 million edges/s; the per-iteration execution efficiency of ESCore is higher than that of RLCore.
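One well-known local-convergence formulation of core decomposition (the h-index iteration; not necessarily ESCore's exact update rule) has each node repeatedly set its estimate to the h-index of its neighbors' current estimates until no estimate changes. A minimal sketch, assuming a CSR graph and estimates initialized to node degrees:

```cuda
// CSR graph: rowPtr[n+1], colIdx[m]; est[] initialized to node degrees.
__global__ void local_core_update(const int *rowPtr, const int *colIdx,
                                  const int *est, int *nextEst, int *changed, int n) {
    int u = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per node
    if (u >= n) return;
    int begin = rowPtr[u], end = rowPtr[u + 1];
    // h-index of neighbor estimates: largest h with at least h neighbors whose est >= h
    int h = 0;
    for (int cand = end - begin; cand >= 1; --cand) {
        int count = 0;
        for (int e = begin; e < end; ++e)
            if (est[colIdx[e]] >= cand) ++count;
        if (count >= cand) { h = cand; break; }
    }
    nextEst[u] = h;
    if (h != est[u]) atomicExch(changed, 1);         // flag that another iteration is needed
}
```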

5.
A survey of graphics processor applications in data management
This survey compares the architectural similarities and differences between central processing units and graphics processors, and briefly introduces the latest general-purpose GPU computing platforms and the differences among parallel algorithms across architectures. It describes in detail the technical characteristics of GPU applications in spatial databases, relational databases, data streams, data mining, and information retrieval; discusses various GPU-based in-memory and external sorting algorithms and their performance; describes GPU-based data structures and indexing techniques; and reviews work on GPU algorithm optimization. Finally, it looks ahead to the prospects of applying graphics processors to data management and analyzes the future challenges in this area.

6.
Recently, compressive sensing (CS) has offered a new framework whereby a signal can be recovered from a small number of noisy non-adaptive samples. This is now an active area of research in many image-processing applications, especially super-resolution. CS algorithms are widely known to be computationally expensive. This paper studies a real-time super-resolution reconstruction method based on the compressive sampling matching pursuit (CoSaMP) algorithm for hyperspectral images. CoSaMP is an iterative compressive sensing method based on orthogonal matching pursuit (OMP). Multi-spectral images record enormous volumes of data that are required in practical modern remote-sensing applications. A proposed implementation based on the graphical processing unit (GPU) has been developed for CoSaMP using the compute unified device architecture (CUDA) and the cuBLAS library. The CoSaMP algorithm is divided into interdependent parts with respect to complexity and potential for parallelization. The proposed implementation is evaluated in terms of reconstruction error against different state-of-the-art super-resolution methods. Various experiments were conducted using real hyperspectral images collected by Earth Observing-1 (EO-1), and the experimental results demonstrate the speedup of the proposed GPU implementation over the sequential CPU implementation and state-of-the-art techniques. The GPU-based implementation is up to approximately 70 times faster than the corresponding optimized CPU implementation.
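One of the CoSaMP building blocks that maps directly onto cuBLAS is forming the signal proxy from the current residual, a dense matrix-vector product. A minimal sketch (illustrative, not the paper's code), assuming the m x n sensing matrix Phi is stored column-major on the device:

```cuda
#include <cublas_v2.h>

// p = Phi^T * r, where r is the current residual of length m and p has length n.
void signal_proxy(cublasHandle_t handle, const float *dPhi, const float *dResidual,
                  float *dProxy, int m, int n) {
    const float alpha = 1.0f, beta = 0.0f;
    // CUBLAS_OP_T gives the transposed product: p = 1.0 * Phi^T * r + 0.0 * p
    cublasSgemv(handle, CUBLAS_OP_T, m, n, &alpha, dPhi, m,
                dResidual, 1, &beta, dProxy, 1);
}
```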

7.
Support Vector Machine (SVM) regression is an important technique in data mining. SVM training is expensive and its cost is dominated by: (i) the kernel value computation, and (ii) a search operation that finds extreme training data points for adjusting the regression function in every training iteration. Existing training algorithms for SVM regression are not scalable to large datasets because: (i) each training iteration repeatedly performs expensive kernel value computations, which is inefficient and requires holding the whole training dataset in memory; (ii) the search operation used in each training iteration considers the whole search space, which is very expensive. In this article, we significantly improve the scalability and efficiency of SVM regression by exploiting the high performance of Graphics Processing Units (GPUs) and solid state drives (SSDs). Our key ideas are as follows. (i) To reduce the cost of repeated kernel value computations and avoid holding the whole training dataset in the GPU memory, we precompute all the kernel values and store them in the CPU memory extended by the SSD; together with an efficient strategy for reading the precomputed kernel values, reusing them is much faster than computing them on the fly. This also alleviates the restriction that the training dataset has to fit into the GPU memory, and hence makes our algorithm scalable to large datasets, especially datasets with very high dimensionality. (ii) To enhance the performance of the frequently used search operation, we design an algorithm that minimizes the search space and the number of accesses to the GPU global memory; this optimized search algorithm also avoids branch divergence (one of the causes of poor performance) among GPU threads to achieve high utilization of the GPU resources. Our proposed techniques together form a scalable solution to SVM regression, which we call SIGMA. Our extensive experimental results show that SIGMA is highly efficient and can handle very large datasets which the state-of-the-art GPU-based algorithm cannot handle. On datasets of a size that the state-of-the-art GPU-based algorithm can handle, SIGMA consistently outperforms it by an order of magnitude and achieves up to 86 times speedup.
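A minimal sketch (illustrative, not SIGMA's actual code) of the streaming pattern such a design relies on: precomputed kernel rows are read from SSD-extended host storage into a pinned buffer and copied to the GPU with an asynchronous transfer; a real implementation would double-buffer so transfers overlap the training-iteration kernels.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

void stream_kernel_rows(FILE *kernelFile, float *dRow, int n, int numRows,
                        cudaStream_t stream) {
    float *hRow;
    cudaMallocHost((void**)&hRow, n * sizeof(float));     // pinned host buffer
    for (int r = 0; r < numRows; ++r) {
        if (fread(hRow, sizeof(float), n, kernelFile) != (size_t)n) break;
        cudaMemcpyAsync(dRow, hRow, n * sizeof(float),
                        cudaMemcpyHostToDevice, stream);
        cudaStreamSynchronize(stream);   // single-buffer sketch; double-buffering would overlap
        // ... launch the training-iteration kernels that consume dRow ...
    }
    cudaFreeHost(hRow);
}
```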

8.
Driven by the increasing requirements of high-performance computing applications, supercomputers tend to contain more and more computing nodes. Applications running on such a large-scale computing system are likely to spawn millions of parallel processes, which usually generate bursts of I/O requests and pose a great challenge to the metadata management of the underlying parallel file systems. The traditional way to overcome this challenge is to adopt multiple metadata servers in a scale-out manner, which inevitably runs into serious network and consistency problems. This work instead pursues metadata performance in a scale-up manner. Specifically, we propose to improve the performance of each individual metadata server by employing the GPU to handle metadata requests in parallel. Our proposal is a novel metadata server architecture in which the CPU interacts with file system clients while the metadata computing tasks are offloaded to the GPU. To take full advantage of the parallelism available in the GPU, we redesign the in-memory data structure for the file system name space. The new data structure fits the GPU memory architecture well and thus helps exploit the large number of parallel threads within the GPU to serve bursty metadata requests concurrently. We implement a prototype based on BeeGFS and conduct extensive experiments to evaluate our proposal; the experimental results demonstrate that our GPU-based solution outperforms the CPU-based scheme by more than 50% on typical metadata operations. The advantage grows further in highly concurrent scenarios, e.g., high-performance computing systems supporting millions of parallel threads.

9.
10.
The brute-force (BF) algorithm is one of the classic string matching algorithms, but it does not suit the parallel architecture of the GPU. A solution based on the compute unified device architecture (CUDA) is proposed: by adding a certain proportion of redundant information to the data to be processed, a parallel BF algorithm is designed that fits CUDA's requirement for data independence. Experimental results show that the CUDA-based parallel string matching algorithm achieves a speedup of about 10 times over the equivalent CPU algorithm. The factors affecting the algorithm's performance are also analyzed.
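A minimal sketch of a thread-per-start-position brute-force matcher; the layout and boundary handling are illustrative rather than the paper's exact scheme. When the text is split into independent chunks, each chunk would carry (patLen - 1) redundant overlap characters so that matches crossing a chunk boundary are still found.

```cuda
__global__ void bf_match(const char *text, int textLen, const char *pattern,
                         int patLen, int *matchFlags) {
    int pos = blockIdx.x * blockDim.x + threadIdx.x;  // candidate start position
    if (pos > textLen - patLen) return;
    int j = 0;
    while (j < patLen && text[pos + j] == pattern[j]) ++j;
    matchFlags[pos] = (j == patLen);                  // 1 if a full match starts at pos
}
```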

11.
In this paper, we revisit the design and implementation of Branch-and-Bound (B&B) algorithms for solving large combinatorial optimization problems on GPU-enhanced multi-core machines. B&B is a tree-based optimization method that uses four operators (selection, branching, bounding and pruning) to build and explore a highly irregular tree representing the solution space. In our previous works, we proposed a GPU-accelerated approach in which only a single CPU core is used and only the bounding operator is performed on the GPU device. Here, we extend the approach (LL-GB&B) in order to minimize the CPU-GPU communication latency and thread divergence. This objective is achieved through a GPU-based fine-grained parallelization of the branching and pruning operators in addition to the bounding one. The second contribution consists in investigating the combination of a GPU with multi-core processing. Two scenarios have been explored, leading to two approaches: a concurrent one (RLL-GB&B) and a cooperative one (PLL-GB&B). In the first, the exploration process is performed concurrently by the GPU and the CPU cores. In the cooperative approach, the CPU cores prepare pools of tree nodes and off-load them to the GPU using data streaming while the GPU performs the exploration. The different approaches have been extensively evaluated on the Flowshop scheduling problem. Compared to a single CPU-based execution, LL-GB&B achieves accelerations of up to x160 for large problem instances. Moreover, when combining multi-core and GPU, we find that using RLL-GB&B is not beneficial, while PLL-GB&B yields an improvement of up to 36% over LL-GB&B.

12.
Implementations of relational operators on GPU processors have resulted in order-of-magnitude speedups compared to their multicore CPU counterparts. Here we focus on the efficient implementation of string matching operators common in SQL queries. Due to different architectural features, the optimal algorithm for CPUs might be suboptimal for GPUs. GPUs achieve high memory bandwidth by running thousands of threads, so it is not feasible to keep the working set of all threads in the cache in a naive implementation. In GPUs the unit of execution is a group of threads, and in the presence of loops and branches, threads in a group have to follow the same execution path; if some threads diverge, the different paths are serialized. We study the cache memory efficiency of single- and multi-pattern string matching algorithms for conventional and pivoted string layouts in the GPU memory. We evaluate the memory efficiency in terms of memory access pattern and achieved memory bandwidth for different parallelization methods. To reduce thread divergence, we split string matching into multiple steps. We evaluate the different matching algorithms in terms of average- and worst-case performance and compare them against state-of-the-art CPU and GPU libraries. Our experimental evaluation shows that thread and memory efficiency affect performance significantly and that our proposed methods outperform previous CPU and GPU algorithms in terms of raw performance and power efficiency. The Knuth–Morris–Pratt algorithm is a good choice for GPUs because its regular memory access pattern makes it amenable to several GPU optimizations.
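A minimal sketch of the idea behind a pivoted string layout: strings of equal padded length are stored character-position-major, so at each step the threads of a warp read consecutive addresses and the access is coalesced. The layout details and the single-character search used here are illustrative, not taken from the paper.

```cuda
__global__ void contains_char(const char *pivoted, int numStrings, int strLen,
                              char needle, int *found) {
    int s = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per string
    if (s >= numStrings) return;
    int hit = 0;
    for (int j = 0; j < strLen; ++j) {
        // pivoted layout: character j of string s lives at pivoted[j * numStrings + s],
        // so adjacent threads (s, s+1, ...) touch adjacent bytes at every step j.
        if (pivoted[j * numStrings + s] == needle) { hit = 1; break; }
    }
    found[s] = hit;
}
```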

13.
Branch-and-Bound (B&B) algorithms are tree-based exploratory methods for solving combinatorial optimization problems to optimality. These problems are often large and known to be NP-hard. The construction and exploration of the B&B tree are performed using four operators: branching, bounding, selection and pruning. Such algorithms are irregular, which makes their parallel design and implementation on GPU challenging. Existing GPU-accelerated B&B algorithms perform only a part of the algorithm on the GPU and rely on the transfer of pools of subproblems across the PCI Express bus to the device. To the best of our knowledge, the algorithm presented in this paper is the first GPU-based B&B algorithm that performs all four operators on the device and thereby avoids the data transfer bottleneck between CPU and GPU. The implementation on GPU is based on the Integer–Vector–Matrix (IVM) data structure, which is used instead of a conventional linked list to store and manage the pool of subproblems. This paper revisits the IVM-based B&B algorithm on the GPU, addressing the irregularity of the algorithm in terms of workload, memory access patterns and control flow. In particular, the focus is put on reducing thread divergence by making a judicious choice for the mapping of threads onto the data. Compared to a GPU-accelerated B&B based on a linked list, the algorithm presented in this paper solves a set of standard flowshop instances on average 3.3 times faster.

14.
Research on GPU-based string matching algorithms
The BF algorithm is the most basic string matching algorithm, but it is a serial algorithm and does not fit the architecture of the graphics processing unit (GPU). Taking the GPU's particular architecture into account, the BF algorithm is implemented on the GPU by improving the data access patterns and the computation strategy so as to fully exploit the GPU's parallel processing capability. Experimental results show that the GPU-based parallel algorithm achieves a good speedup; the bottlenecks of implementing general-purpose computation efficiently on current GPU architectures are also identified.

15.
Hardware/software partitioning is an essential step in hardware/software co-design. For large problems it is difficult to achieve both good solution quality and short solving time. This paper presents an efficient GPU-based parallel tabu search algorithm (GPTS) for HW/SW partitioning. A single GPU kernel that compacts the neighborhood is proposed to reduce the amount of GPU global memory accesses, with theoretical justification. A kernel fusion strategy is further proposed to reduce the GPU global memory accesses of GPTS. To further minimize the CPU-GPU transfer overhead of GPTS, an optimized transfer strategy for GPU-based tabu evaluation is proposed, which handles the case in which all the candidates fail to satisfy the given constraint. Experiments show that GPTS outperforms state-of-the-art tabu search work and is competitive with other methods for HW/SW partitioning. The proposed parallelization brings significant gains even on an ordinary GPU platform.

16.
The Signature Quadratic Form Distance on feature signatures represents a flexible distance-based similarity model for effective content-based multimedia retrieval. Although metric indexing approaches are able to speed up query processing by two orders of magnitude, their applicability to large-scale multimedia databases containing billions of images is still a challenging issue. In this paper, we propose a parallel approach that balances the utilization of the CPU and many-core GPUs for efficient similarity search with the Signature Quadratic Form Distance. In particular, we show how to process multiple distance computations and other parts of the search procedure in parallel, achieving maximal performance of the combined CPU/GPU system. The experimental evaluation demonstrates that our approach implemented on a common workstation with 2 GPU cards outperforms a traditional parallel implementation on a high-end 48-core NUMA server in terms of efficiency by almost an order of magnitude. If we also consider that the price of the high-end server is ten times higher than that of the GPU workstation, then, based on the price/performance ratio, the GPU-based similarity search beats the CPU-based solution by almost two orders of magnitude. Although proposed for the SQFD, our approach to fast GPU-based similarity search is applicable to any distance function that is efficiently parallelizable in the SIMT execution model.

17.
In recent years, general-purpose computing on graphics processors has received wide attention and made progress in many fields. In-memory OLAP reduces disk I/O, but the computing power of single- or multi-core CPUs and cache misses become the new performance bottleneck, so good efficiency cannot be guaranteed. Graphics processors, with their many cores and high memory bandwidth, are well suited to the characteristics of OLAP computation. The computation of any cuboid is accelerated on the graphics processor, thereby improving the performance of the whole in-memory OLAP system. A GPU-based chunked parallel algorithm is proposed and optimized, and variants are discussed for conditions such as sparse data and skewed data distributions. With extensions, the algorithm can break through the memory limit by forming a three-level disk/main-memory/GPU-memory pipeline to handle massive data; it can also serve as the basis for computing the whole cube. Experimental comparison shows that the GPU-based algorithm clearly outperforms the quad-core CPU algorithm.
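A minimal sketch (illustrative, not the paper's algorithm) of the kind of chunked aggregation a GPU cuboid computation relies on: each launch processes one chunk of fact rows and accumulates a SUM measure per group key with atomic additions.

```cuda
__global__ void aggregate_chunk(const int *groupKey, const float *measure,
                                int numRows, float *groupSums) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = gridDim.x * blockDim.x;
    for (int r = i; r < numRows; r += stride) {          // grid-stride loop over the chunk
        atomicAdd(&groupSums[groupKey[r]], measure[r]);  // accumulate the measure per group
    }
}
// The host side would stream chunks disk -> main memory -> GPU memory, launching this
// kernel per chunk and merging the partial groupSums arrays.
```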

18.
19.
Multiple-precision (big-number) multiplication is one of the core computational steps in public-key cryptography, and a fast big-number multiplication unit is urgently needed by cryptosystems such as RSA, ElGamal, and fully homomorphic encryption. Although the C/C++ NTL and GMP library functions can perform high-precision big-number multiplication on the CPU, they still cannot meet the real-time requirements of encryption. Targeting the needs of fully homomorphic encryption applications, this paper proposes a GPU acceleration method for big-number multiplication based on the Schönhage-Strassen algorithm. The correctness and effectiveness of the design are verified by comparing big-number multiplications implemented with the CPU alone and with the heterogeneous GPU-CPU method on the same experimental platform. Experimental results show that the same big-number multiplication implemented with the proposed method runs more than 12 times faster than on a multi-core CPU platform.
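A minimal sketch (illustrative, not the paper's implementation) of the heart of an FFT-based big-number multiply in the Schönhage-Strassen style: after transforming the two digit sequences (e.g., with cuFFT), their product reduces to a pointwise complex multiplication; the inverse transform and carry propagation, omitted here, complete the multiplication.

```cuda
#include <cufft.h>

__global__ void pointwise_mul(const cufftComplex *a, const cufftComplex *b,
                              cufftComplex *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // complex multiply: c[i] = a[i] * b[i]
        c[i].x = a[i].x * b[i].x - a[i].y * b[i].y;
        c[i].y = a[i].x * b[i].y + a[i].y * b[i].x;
    }
}
```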

20.
This paper presents a novel GPU-based multiresolution rendering method on sole-cube maps (SCMs), a variant of geometry images built upon spherical parameterization. Given a spherical parametrization of a manifold mesh, the sphere domain is gnomonically projected onto a closed cube, which constitutes the 6-chart sole-cube maps. A quadtree structure of SCMs and a normal-map atlas are then constructed by regular re-sampling. Then, by packing the quadtree nodes into the SCMs texture atlas, a new parallel multiresolution rendering is performed on the latest GPUs in two rendering passes: multiresolution node selection in the fragment shader, and triangulation in the vertex shader followed by node culling in the geometry shader. The proposed approach generates adaptive mesh surfaces dynamically and can be implemented fully in parallel on the GPU. The scheme relieves the CPU of the computing load of multiresolution mesh refinement, and our GPU-based multiresolution rendering is demonstrated with a variety of examples. Our user study confirmed the visual quality of the SCMs multiresolution rendering compared with mesh/geometry-image rendering, and the approach is highly efficient, especially for complex models in large-scale virtual environments.
