Similar Documents
20 similar documents found (search time: 406 ms)
1.
尤国华  刘媛  高东 《计算机应用研究》2020,37(12):3667-3670
To meet the ever-growing demand for server-side computing, coprocessors such as GPUs and MICs are increasingly added to servers to participate in server-side computation, but traditional server-side software (e.g., Web server software) cannot fully exploit the performance of these coprocessors. To make full use of the MIC and improve the quality of service of a single Web server, a new dynamic-request processing model is proposed for the CPU+MIC heterogeneous hardware architecture. Based on the event-driven model and the thread-pool model, the proposed model dispatches part of the dynamic requests to the MIC for execution, processes dynamic requests in parallel, and balances the load between the CPU and the MIC. Simulation experiments show that the model outperforms existing Web server software models in terms of average response time, throughput, and 99th-percentile response time.
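A minimal sketch of the kind of load-balanced dispatch the abstract describes, not the authors' implementation; the capacity values and the load metric are assumptions:

```python
# Hypothetical sketch of a CPU/MIC dynamic-request dispatcher: route each
# dynamic request to whichever side currently has the lighter relative load.
from dataclasses import dataclass

@dataclass
class Side:
    name: str
    capacity: float       # relative processing capability (assumed, e.g. benchmarked)
    outstanding: int = 0  # requests currently queued or executing

    def relative_load(self) -> float:
        return self.outstanding / self.capacity

def dispatch(cpu: Side, mic: Side) -> str:
    """Send the next dynamic request to the less loaded side, return its name."""
    target = cpu if cpu.relative_load() <= mic.relative_load() else mic
    target.outstanding += 1
    return target.name

if __name__ == "__main__":
    cpu, mic = Side("CPU", capacity=1.0), Side("MIC", capacity=2.5)
    print([dispatch(cpu, mic) for _ in range(10)])  # requests spread across CPU and MIC
```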

2.
A fairness-aware shortest-job-first memory scheduling policy   (total citations: 2; self-citations: 0; citations by others: 2)
To address the unfairness and inefficiency of multi-threaded access to shared memory resources on chip multi-core platforms, a fairness-aware shortest-job-first memory scheduling policy is proposed. It guarantees fair access across threads by imposing a maximum waiting time on requests, uses a shortest-job-first policy to reduce the average request waiting time, and takes the inherent parallelism of each thread into account. Experimental results show that the policy clearly improves IPC under multi-threaded access, with a peak improvement of 43%.
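A minimal sketch of the described selection rule (shortest job first, overridden by a maximum-waiting-time cap); the service-time estimate and the MAX_WAIT value are assumptions, not the paper's parameters:

```python
# Hypothetical sketch: pick the shortest memory request, unless some request
# has waited past a fairness cap, in which case serve the oldest one first.
from dataclasses import dataclass

@dataclass
class MemRequest:
    thread_id: int
    service_time: int   # estimated cycles to serve (the "job length")
    arrival: int        # cycle at which the request arrived

MAX_WAIT = 200  # assumed fairness cap, in cycles

def pick_next(queue: list[MemRequest], now: int) -> MemRequest:
    # Fairness first: any request that waited too long is served oldest-first.
    starved = [r for r in queue if now - r.arrival >= MAX_WAIT]
    if starved:
        return min(starved, key=lambda r: r.arrival)
    # Otherwise shortest-job-first to minimise the average waiting time.
    return min(queue, key=lambda r: r.service_time)

if __name__ == "__main__":
    q = [MemRequest(0, 50, 0), MemRequest(1, 10, 150), MemRequest(2, 80, 290)]
    print(pick_next(q, now=300).thread_id)  # thread 0: it has waited >= MAX_WAIT
```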

3.
Heterogeneous multi-core platforms provide design flexibility by integrating different types of processing cores, allowing applications to dynamically select the core type that best fits their needs and thus run efficiently. With advances in semiconductor technology, the number of cores integrated on a single chip keeps growing, giving modern multi-core processors higher power density; this raises chip temperature and ultimately has a negative impact on system performance. To fully exploit the performance advantages of heterogeneous multi-core systems, a dynamic mapping method is proposed that maximizes system performance while respecting the thermal safe power constraint. The method considers two heterogeneity metrics when determining the mapping. The first is core type: different types of cores have different characteristics and are therefore suited to different applications. The second is thermal susceptibility: different core positions on the chip have different thermal susceptibility, and cores closer to the center receive more heat transferred from other cores and thus run hotter. To this end, a neural-network-based performance predictor is used to match threads with core types, and the thermal safe power (TSP) model is used to map the matched threads to specific positions on the chip. Experimental results show that, compared with common round-robin scheduling (RRS), the proposed method improves the average number of instructions executed per clock cycle (IPC) by about 53% while guaranteeing the thermal safety constraint.
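A toy sketch of the two-step mapping idea (predictor-based core-type matching, then placement on the least thermally susceptible free core); the stand-in predict_ipc model and the core data are assumptions, not the paper's neural network or TSP model:

```python
# Hypothetical sketch: match each thread to its best core type, then place it
# on the coolest (least thermally coupled) free core of that type.
def predict_ipc(thread_profile: dict, core_type: str) -> float:
    mem = thread_profile["mem_intensity"]
    # Toy stand-in for the neural-network predictor: big cores are assumed to
    # help compute-bound threads far more than memory-bound ones.
    return 2.0 * (1 - 0.8 * mem) if core_type == "big" else 1.0 * (1 - 0.2 * mem)

def map_threads(threads: dict, cores: list) -> dict:
    """cores: (core_id, core_type, thermal_susceptibility); coolest used first."""
    placement = {}
    free = sorted(cores, key=lambda c: c[2])              # least susceptible first
    for name, prof in threads.items():
        best_type = max(("big", "little"), key=lambda t: predict_ipc(prof, t))
        idx = next((i for i, c in enumerate(free) if c[1] == best_type), 0)
        placement[name] = free.pop(idx)[0]
    return placement

if __name__ == "__main__":
    threads = {"t0": {"mem_intensity": 0.1}, "t1": {"mem_intensity": 0.9}}
    cores = [(0, "big", 0.9), (1, "big", 0.4), (2, "little", 0.2)]
    print(map_threads(threads, cores))   # {'t0': 1, 't1': 2}
```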

4.
Efficient use of computing resources in heterogeneous environments is key to improving task efficiency and cluster utilization. Kubernetes is the de facto choice for container orchestration, but in heterogeneous resource scheduling its scheduler lacks fine-grained GPU information and cannot satisfy user-defined requirements; moreover, with mixed deployment of CPU and GPU nodes the scheduler is unaware of heterogeneous resources, which leads to resource contention. Taking into account the distribution of heterogeneous resources across nodes and their hardware state, a fine-grained CPU/GPU heterogeneous resource scheduling strategy based on Kubernetes is proposed. The device-plugin mechanism is used to collect detailed GPU information on each node and submit the GPU resource metrics to the scheduling algorithm. On top of the original CPU and memory filtering, filtering on user-defined GPU information is added to select nodes that meet fine-grained user requirements. For mixed deployments of CPU and GPU nodes, the scheduler's scoring algorithm is improved to dynamically detect the application type and apply a load-balancing algorithm to CPU applications and a best-fit algorithm to GPU applications, ensuring that the heterogeneous scheduling strategy places each type of application correctly and that fragmented resources on GPU nodes are fully utilized when CPU resources are insufficient. Experiments on fine-grained GPU scheduling and on mixed CPU/GPU node deployment show that the strategy schedules GPUs effectively and avoids resource contention.
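A minimal sketch of the filter-then-score flow described above, written as plain functions rather than the actual Kubernetes scheduler extension points; node fields and thresholds are assumptions:

```python
# Hypothetical sketch: filter nodes by fine-grained GPU requirements reported
# by a device plugin, then score them per application type (load balancing for
# CPU apps, best fit for GPU apps).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpu: float          # cores
    free_gpu_mem: int        # MiB, as reported by a device plugin (assumed)
    gpu_model: str

def filter_nodes(nodes, need_gpu_mem=0, need_model=None):
    """Keep nodes that satisfy the pod's fine-grained GPU requirements."""
    return [n for n in nodes
            if n.free_gpu_mem >= need_gpu_mem
            and (need_model is None or n.gpu_model == need_model)]

def score(node, app_type, need_gpu_mem=0):
    """Load balancing for CPU apps, best-fit (least leftover) for GPU apps."""
    if app_type == "cpu":
        return node.free_cpu                      # prefer the emptiest node
    return -(node.free_gpu_mem - need_gpu_mem)    # prefer the tightest fit

if __name__ == "__main__":
    nodes = [Node("n1", 8, 16000, "V100"), Node("n2", 2, 6000, "V100")]
    fit = filter_nodes(nodes, need_gpu_mem=4000, need_model="V100")
    best = max(fit, key=lambda n: score(n, "gpu", need_gpu_mem=4000))
    print(best.name)  # n2: best fit leaves the least fragmented GPU memory
```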

5.
The domestically produced DCU adopts the single-instruction multiple-thread (SIMT) execution model. During program execution, non-uniform control flow arises within kernels, forcing part of the threads in a warp to execute serially, i.e., warp divergence. To address the severe performance limitation that warp divergence imposes on kernels, a compiler optimization that reduces warp-divergence time, partial control-flow merging (PCFM), is proposed. First, divergence analysis is used to find fusible divergent regions that are isomorphic and contain many identical or similar instructions; next, the percentage of instruction cycles saved by merging is counted to evaluate the fusion profit of each region; finally, alignment sequences are searched for and the profitable fusible divergent regions are merged. PCFM was evaluated on a DCU with test cases selected from the GPU benchmark suite Rodinia and from classic sorting algorithms. Experimental results show that PCFM achieves an average speedup of 1.146 on the test cases, and improves the speedup by 5.72% on average compared with the branch-fusion plus tail-merging approach, indicating a better reduction of warp divergence.

6.
张延松  张宇  王珊 《软件学报》2018,29(3):883-895
Graph-analytics database systems represented by MapD use emerging many-core processors such as GPUs and Phis to support high-performance analytical processing; when facing complex data schemas, the join operation remains an important performance bottleneck. In recent years heterogeneous processors have become a mainstream platform for high-performance computing, and research on in-memory join performance has expanded from multi-core CPU platforms to emerging many-core processors, but the many published results have not systematically revealed the intrinsic relationship among join algorithm performance, join data-set size, and hardware architecture, making it hard to provide platform-selection strategies for databases on future heterogeneous processor platforms. Targeting in-memory join optimization on multi-core CPU, Xeon Phi, and GPU platforms, this paper optimizes the in-memory hash table design by replacing hash mapping with vector mapping, eliminating the influence of hashing cost on in-memory join algorithms, so as to measure more accurately the performance characteristics of in-memory joins under hardware-related factors such as the cache size of multi-core CPUs, the cache size of the Xeon Phi, the concurrent multi-threading of the Xeon Phi, and the SIMT (single-instruction multiple-thread) mechanism of the GPU. The experimental results show that caching and concurrent multi-threading are the key factors for in-memory join performance: the cache mechanism is advantageous for joins that fit in the cache, the GPU's concurrent multi-threading performs better for joins on larger tables, and the Xeon Phi achieves the highest performance for joins that fit in its L2 cache. The results reveal the connection between in-memory join performance and the hardware characteristics of heterogeneous processors, providing optimization strategies for the query optimizers of in-memory databases on future heterogeneous processor platforms.
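A small illustration of the vector-mapping idea (using a dense join key directly as an array index in place of a hash table); this is a generic sketch, not the paper's implementation:

```python
# Hypothetical illustration of replacing hash mapping with vector mapping in
# an in-memory join: when join keys are dense integers, the build side can be
# a plain array indexed by key, avoiding hashing cost and collision handling.
def vector_join(build_keys, build_payloads, probe_keys, key_range):
    # Build: one store per tuple, index = key (the "vector mapping").
    table = [None] * key_range
    for k, p in zip(build_keys, build_payloads):
        table[k] = p
    # Probe: one load per tuple, no hash computation.
    return [(k, table[k]) for k in probe_keys if table[k] is not None]

if __name__ == "__main__":
    out = vector_join([0, 2, 5], ["a", "b", "c"], [5, 1, 2, 2], key_range=8)
    print(out)  # [(5, 'c'), (2, 'b'), (2, 'b')]
```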

7.
魏赟  丁宇琛 《计算机应用研究》2020,37(10):3043-3047
Because cache replacement in the parallel computing framework Spark is coarse-grained, the LRU (least recently used) algorithm does not consider how often RDDs are reused, so highly reused data blocks are easily evicted from memory and job execution efficiency is low. By optimizing the weight model and improving the replacement policy, an efficient RDD automatic cache replacement strategy (efficient RDD automatic cache, ERAC) is proposed, consisting of a high-reuse automatic caching algorithm and a tiered cache replacement algorithm, which enables automatic caching of high-value RDDs and tiered replacement of cache targets. Finally, ERAC is compared experimentally with LRU, RA (register allocation), and other algorithms on standard data sets provided by SNAP (Stanford Network Analysis Project); the results show that ERAC effectively improves Spark's memory utilization and task execution efficiency.
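A minimal sketch of weight-based, tiered cache replacement in the spirit of ERAC; the weight formula and block attributes are assumptions, not the paper's model:

```python
# Hypothetical sketch: blocks with high expected reuse and high recompute cost
# are kept in the cache; low-weight blocks are evicted first.
from dataclasses import dataclass

@dataclass
class RddBlock:
    name: str
    size_mb: float
    future_uses: int       # remaining references in the DAG (assumed known)
    recompute_cost: float  # estimated seconds to rebuild the partition

def weight(b: RddBlock) -> float:
    # Favour small blocks that will be reused often and are expensive to rebuild.
    return b.future_uses * b.recompute_cost / b.size_mb

def admit(cache: list[RddBlock], new: RddBlock, capacity_mb: float) -> list[RddBlock]:
    candidates = sorted(cache + [new], key=weight, reverse=True)  # tiers: best first
    kept, used = [], 0.0
    for b in candidates:
        if used + b.size_mb <= capacity_mb:
            kept.append(b)
            used += b.size_mb
    return kept  # everything else is evicted (or never admitted)

if __name__ == "__main__":
    cache = [RddBlock("lineage", 300, 1, 2.0), RddBlock("features", 200, 5, 8.0)]
    print([b.name for b in admit(cache, RddBlock("labels", 150, 4, 6.0), 500)])
```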

8.
To effectively improve the computing performance of heterogeneous CPU/GPU clusters, a two-level dynamic scheduling algorithm supporting cooperative CPU and GPU computation in heterogeneous clusters is proposed. Data are distributed dynamically according to the measured computing capability of each node and its task requests; within each node, tasks are dynamically scheduled between the CPU and the GPU, and a dual-queue mechanism for data caching and data processing improves the transfer and processing efficiency of the heterogeneous cluster. The algorithm lets each node take on work in proportion to its capability and avoids the long-tail effect caused by single-node performance bottlenecks. Experimental results show that the algorithm improves performance by a factor of 11 over traditional MPI/GPU parallel computing.
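A minimal sketch of the first-level, capability-proportional data distribution (each node takes on work in proportion to its measured capability); the capability numbers are assumed to come from a benchmarking pass:

```python
# Hypothetical sketch: hand out data chunks in proportion to each node's
# measured capability, giving any remainder to the fastest nodes.
def distribute(num_chunks: int, capability: dict[str, float]) -> dict[str, int]:
    total = sum(capability.values())
    shares = {n: int(num_chunks * c / total) for n, c in capability.items()}
    leftover = num_chunks - sum(shares.values())
    for n in sorted(capability, key=capability.get, reverse=True)[:leftover]:
        shares[n] += 1
    return shares

if __name__ == "__main__":
    # Capabilities would come from a benchmark pass on each node (assumed values).
    print(distribute(100, {"node-cpu": 1.0, "node-gpu1": 4.5, "node-gpu2": 3.5}))
    # {'node-cpu': 11, 'node-gpu1': 51, 'node-gpu2': 38}
```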

9.
贾刚勇  万健  李曦  蒋从锋  代栋 《软件学报》2014,25(7):1403-1415
In multi-core systems, the memory subsystem consumes a large amount of energy and its share will keep growing, so reducing memory power consumption is key to system-wide power optimization. Threads are partitioned into groups according to their memory address spaces and a load-balancing policy; threads in the same group are allocated physical pages from the same memory rank, and the scheduler then schedules threads group by group. On this basis, a memory power optimization method combining page allocation and group scheduling (CAS) is proposed. CAS periodically activates only the memory ranks currently needed, so that temporarily unused ranks can be put into a low-power state and the idle time of low-power ranks is extended. Simulation results show that, compared with similar methods, CAS reduces memory power consumption by 10% on average while improving performance by 8%.
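A toy sketch of the CAS idea of grouping threads by the memory rank that holds their pages and scheduling group by group so idle ranks can be powered down; the thread-to-rank mapping data is assumed:

```python
# Hypothetical sketch: group threads by memory rank, run one group at a time,
# and mark every other rank as eligible for a low-power state meanwhile.
from collections import defaultdict

def group_by_rank(thread_rank: dict[str, int]) -> dict[int, list[str]]:
    groups = defaultdict(list)
    for tid, rank in thread_rank.items():
        groups[rank].append(tid)
    return dict(groups)

def schedule(groups: dict[int, list[str]], all_ranks: set[int]):
    for active_rank, threads in groups.items():
        low_power = sorted(all_ranks - {active_rank})
        # In hardware this would put the idle ranks into self-refresh/power-down.
        print(f"run {threads} | rank {active_rank} active | ranks {low_power} low-power")

if __name__ == "__main__":
    mapping = {"t0": 0, "t1": 0, "t2": 1, "t3": 1}  # page placement per thread (assumed)
    schedule(group_by_rank(mapping), all_ranks={0, 1, 2, 3})
```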

10.
GPU clusters have become a mainstream component in high-performance computing (HPC). With the development of processing units and the expansion of cluster nodes, GPU clusters are becoming heterogeneous at the node level. An energy-efficient scheduling scheme for heterogeneous tasks on node-heterogeneous GPU clusters is proposed, together with a formal description of its task and resource models and its energy evaluation model. A dedicated node-selection policy reduces the energy wasted in the idle state, while task-type classification, combined assignment, and DVFS increase CPU resource utilization. The scheme works at the system level and is compatible with existing algorithm-level and instruction-level optimization methods.

11.
GPUs provide megabytes of registers and shared memory to maintain the contexts of thousands of threads and to enable fast data sharing among the threads of a thread block, respectively. Besides, GPUs employ an L1 cache to serve memory requests with high bandwidth. However, the average L1 cache capacity per thread is very limited, resulting in cache thrashing that in turn impairs performance. Meanwhile, many registers and shared memories are not assigned to any warp or thread block, and registers and shared memories that are assigned can sit idle once their warps or thread blocks have finished. Exploiting these insights, we propose Virtual-Cache to cost-effectively increase the effective size of the L1 cache by utilizing the unassigned and released registers and shared memories as cache lines. Specifically, we leverage unassigned registers and shared memories to serve cache requests directly. Registers assigned to a warp can work as cache lines after the warp completes execution and before they are accessed again by a newly launched warp. Likewise, the shared memories of a thread block can serve cache requests from when the thread block finishes until they are referenced by shared-memory instructions of a relaunched thread block. The register file, shared memory, and L1 cache are physically independent but logically unified into a large virtual cache with redesigned cache-line management. We develop the control and data path for the register file, making it accessible to cache requests by borrowing an operand collector to serve them, and we likewise expand the control and data path of the shared memory. Our evaluation results show that Virtual-Cache improves performance by 28% over the previously proposed cache management technique for cache-sensitive applications.

12.
General-purpose graphics processing units (GPGPUs) play an important role in massive parallel computing nowadays. A GPGPU core typically holds thousands of threads, with hardware threads organized into warps. With the single instruction multiple thread (SIMT) pipeline, GPGPUs can achieve high performance, but threads taking different branches within the same warp violate the SIMD execution style and cause branch divergence. To support this, a hardware stack is used to execute all branches sequentially, so branch divergence leads to performance degradation. This article represents the PDOM (post dominator) stack as a binary tree in which each leaf corresponds to a branch target. We propose a new PDOM stack, called PDOM-ASI, that can schedule all the tree leaves. The new stack can hide more long-operation latencies with more schedulable warps, without the problem of warp over-subdivision. Besides, a multi-level warp scheduling policy is proposed that lets part of the warps run ahead, creating more opportunities to hide latencies. The simulation results show that our policies achieve a 10.5% performance improvement over the baseline policies with only 1.33% hardware area overhead.

13.
High single instruction multiple data (SIMD) efficiency and low power consumption have made graphics processing units (GPUs) an ideal platform for many complex computational applications. Thousands of threads can be created by programmers and grouped into fixed-size SIMD batches known as warps; high throughput is then achieved by concurrently executing such warps with minimal control overhead. However, if a branch instruction assigns different paths to different threads, one warp is broken into multiple warps that have to be executed serially, reducing the efficiency advantage of SIMD. In this paper, the contemporary fixed-size warp design is abandoned in favor of a hybrid warp size (HWS) mechanism. Mixed-size warps are generated according to HWS and are scheduled and issued flexibly. The simulation results show that this mechanism yields an average speedup of 1.20 over the baseline architecture for a wide variety of general-purpose GPU applications. The paper also integrates HWS with dynamic warp formation (DWF), a well-known branch handling mechanism that improves SIMD utilization by forming new warps out of split warps in real time. The simulation results show that the combination of DWF and HWS generates an average speedup of 1.27 over the DWF-only platform, with an estimated area increase of about 1% of DWF.

14.
Hardware parallelism should be exploited to improve the performance of computing systems. Single instruction multiple data (SIMD) architecture has been widely used to maximize the throughput of computing systems by exploiting hardware parallelism. Unfortunately, branch divergence caused by branch instructions leads to underutilization of computational resources and degrades the performance of SIMD architectures. The graphics processing unit (GPU) is a representative parallel architecture based on SIMD. In recent computing systems, GPUs can process general-purpose applications as well as graphics applications with the help of convenient APIs; however, contrary to graphics applications, general-purpose applications include many branch instructions, resulting in serious GPU performance degradation due to branch divergence. In this paper, we propose the concurrent warp execution (CWE) technique to reduce this degradation by increasing resource utilization. CWE selects co-warps to activate more threads in a warp, leading to concurrent execution of the combined warps. According to our simulation results, the proposed architecture provides a significant performance improvement (5.85 % over PDOM, 91 % over DWF) with little hardware overhead.

15.
As the speed gap between processors and memory keeps widening, memory access instructions, especially those that frequently miss in the cache, have become an important performance bottleneck. Because the compiler cannot know how many cycles a memory access actually takes at run time, it usually assumes these instructions have either the cache-hit or the cache-miss latency, which is inaccurate. We introduce cache profiling to collect run-time cache miss and hit information for memory access instructions and use it to compute access latencies. On out-of-order machines the hardware scheduler dynamically schedules instructions within the issue window very well, while the compiler has the advantage of scheduling over a much longer range. Once a cache miss occurs, the reorder buffer easily fills up and the pipeline stalls. Scheduling miss-prone instructions so that they execute in parallel hides their long miss latencies and improves program performance. We therefore target load instructions: on one hand we adjust the latency of frequently missing instructions, and on the other hand we modify the scheduling policy to improve memory-level parallelism. Experiments show that our scheduling yields improvements of up to 4.8% for bzip2 and 4% for art, with an overall average improvement of 1.5%.
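A minimal sketch of profile-guided load latencies feeding a greedy scheduling order; the latency constants and the miss-rate threshold are assumptions, not the paper's values:

```python
# Hypothetical sketch: assign each load a latency based on its profiled miss
# rate, then issue the longest-latency loads first so their misses overlap
# (better memory-level parallelism).
HIT_LATENCY, MISS_LATENCY = 4, 200   # assumed cycles

def load_latency(miss_rate: float) -> int:
    """Latency the scheduler should assume for this load, from cache profiling."""
    return MISS_LATENCY if miss_rate > 0.2 else HIT_LATENCY

def order_loads(loads: dict[str, float]) -> list[str]:
    """Greedy hoisting: loads with the largest assumed latency are issued first."""
    return sorted(loads, key=lambda name: load_latency(loads[name]), reverse=True)

if __name__ == "__main__":
    profiled = {"load_a": 0.02, "load_b": 0.65, "load_c": 0.40}  # profiled miss rates
    print(order_loads(profiled))  # miss-prone load_b and load_c scheduled first
```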

16.
《Parallel Computing》2014,40(5-6):59-69
We present a cache-aware method for accelerating texture-based volume rendering on a graphics processing unit (GPU). Because a GPU has a hierarchical architecture in terms of processing and memory units, cache optimization is important to maximize performance for memory-intensive applications. Our method localizes texture memory references according to the location of the viewpoint and dynamically selects the width and height of thread blocks (TBs) so that each warp, a series of 32 threads processed simultaneously, can minimize memory access strides. We also incorporate transposed indexing of threads to perform TB-level cache optimization for specific viewpoints. Furthermore, we maximize TB size to exploit spatial locality with fewer resident TBs. For viewpoints with relatively large strides, we synchronize threads of the same TB at regular intervals to realize synchronous ray propagation. Experimental results indicate that our cache-aware method doubles the worst-case rendering performance compared to that obtained with the CUDA and OpenCL software development kits.

17.
Fang  Juan  Zhang  Xibei  Liu  Shijian  Chang  Zeqing 《The Journal of supercomputing》2019,75(8):4519-4528

When multiple processor (CPU) cores and a GPU integrated on the same chip share the last-level cache (LLC), competition for the LLC becomes more serious. CPUs and GPUs have different memory access characteristics, so their sensitivity to LLC capacity differs. For many CPU applications, a reduced share of the LLC can lead to significant performance degradation; GPU applications, on the contrary, have a high number of concurrent threads and can tolerate access latency. Taking the memory-latency tolerance of GPU programs into account, we propose an LLC buffer management strategy (buffer-for-GPU, BFG) for heterogeneous multi-cores. A buffer is added beside the LLC to filter the streaming requests of the GPU. Cache-insensitive GPU requests access the buffer directly instead of the LLC, thereby filtering GPU traffic and freeing LLC space for CPU applications. Then, to account for the different characteristics of CPU and GPU applications, an improved LRU replacement policy is adopted that takes into account both the recent access time and the access frequency of each cache block. A cache-miss-aware algorithm dynamically selects either the improved LRU or the standard LRU algorithm to fit the current operating state by comparing the miss rate of the cache in the buffer, so that system performance is improved significantly.
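A minimal sketch of the miss-rate-driven switch between plain LRU and a recency-plus-frequency victim score; the scoring weights and threshold are assumptions, not BFG's actual parameters:

```python
# Hypothetical sketch: choose a victim either by plain LRU or by a combined
# recency+frequency score, depending on the observed buffer miss rate.
from dataclasses import dataclass

@dataclass
class Line:
    tag: int
    last_access: int   # cycle of the most recent access
    accesses: int      # access-frequency counter

def victim_lru(set_lines):
    return min(set_lines, key=lambda l: l.last_access)

def victim_improved(set_lines, now, alpha=0.5):
    # Lower score = older and less frequently used = better victim (assumed weighting).
    return min(set_lines, key=lambda l: alpha * l.accesses - (1 - alpha) * (now - l.last_access))

def pick_victim(set_lines, now, buffer_miss_rate, threshold=0.3):
    if buffer_miss_rate > threshold:
        return victim_improved(set_lines, now)
    return victim_lru(set_lines)

if __name__ == "__main__":
    lines = [Line(0xA, 90, 12), Line(0xB, 40, 2), Line(0xC, 95, 1)]
    print(hex(pick_victim(lines, now=100, buffer_miss_rate=0.5).tag))  # 0xb
```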


18.
The hierarchical cache memories equipped in GPUs are intended to manage the irregular memory access patterns of general-purpose workloads. The level-1 data cache (L1D) of the GPU plays an important role in providing high-bandwidth, low-latency data accesses. Unfortunately, the GPU L1D may become a performance bottleneck due to challenges such as cache contention and resource congestion, which stem from the large number of simultaneous requests issued by the SIMT cores to the limited-capacity L1D. We observe that many applications issue a large number of requests with a very low reuse probability, resulting in GPU performance degradation. To overcome these challenges, we propose an efficient cache bypassing mechanism that periodically filters the access stream and makes accurate bypassing decisions to improve the efficiency of the L1D. The proposed technique uses a small amount of storage to keep the tag array of the L1D for early miss prediction before making the bypassing decision. The experimental results reveal that the proposed technique significantly increases cache efficiency and GPU performance.
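A toy sketch of tag-array-based early miss prediction driving a bypass decision; the geometry, reuse heuristic, and counters are assumptions, not the paper's mechanism:

```python
# Hypothetical sketch: a small shadow copy of the L1D tag array is probed
# first; addresses that would miss and show no prior reuse bypass the cache.
from collections import deque

SETS, WAYS, LINE = 32, 4, 128          # assumed toy cache geometry

class ShadowTags:
    def __init__(self):
        self.sets = [deque(maxlen=WAYS) for _ in range(SETS)]
        self.touch_count = {}           # per-line reuse counter

    def predict_and_update(self, addr: int) -> str:
        line = addr // LINE
        s = self.sets[line % SETS]
        self.touch_count[line] = self.touch_count.get(line, 0) + 1
        if line in s:
            s.remove(line)
            s.append(line)
            return "use-L1D"            # predicted hit: let it use the cache
        s.append(line)                  # shadow-allocate for future prediction
        # First-touch, low-reuse streams are bypassed rather than cached.
        return "bypass" if self.touch_count[line] == 1 else "use-L1D"

if __name__ == "__main__":
    st = ShadowTags()
    stream = [0x0, 0x1000, 0x2000, 0x0, 0x3000]
    print([st.predict_and_update(a) for a in stream])
    # ['bypass', 'bypass', 'bypass', 'use-L1D', 'bypass']
```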

19.
GPUs are widely used in today's high-performance computing systems, but their performance is severely constrained by divergent control-flow directions at run time. This problem is usually addressed by dynamic warp reformation, which combines threads from one or more warps that follow the same control-flow path into a new warp. However, such methods commonly perform unnecessary reformation and introduce considerable overhead. This paper analyzes the performance overhead of thread reformation and proposes a performance optimization called partial reformation, which, while preserving reformation efficiency, avoids reforming warps that already contain a large number of active threads, thereby effectively reducing the overhead introduced by reformation. Test results show that partial reformation delivers a clear performance improvement while maintaining reformation efficiency.
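A minimal sketch of the partial-reformation decision (leave dense warps alone, repack only sparse ones); the threshold and packing are assumptions, not the paper's design:

```python
# Hypothetical sketch: warps that still have many active threads are not
# touched; only sparse warps on the same control-flow path are merged.
WARP_SIZE = 32
REFORM_THRESHOLD = 16   # assumed cut-off on active threads per warp

def partial_reform(warps: list[list[int]]) -> list[list[int]]:
    """warps: lists of active thread ids on the same control-flow path."""
    keep = [w for w in warps if len(w) >= REFORM_THRESHOLD]   # dense: not touched
    pool = sorted(t for w in warps if len(w) < REFORM_THRESHOLD for t in w)
    # Pack the active threads of sparse warps into as few new warps as possible.
    merged = [pool[i:i + WARP_SIZE] for i in range(0, len(pool), WARP_SIZE)]
    return keep + merged

if __name__ == "__main__":
    warps = [list(range(0, 30)), list(range(32, 37)), list(range(64, 71))]
    print([len(w) for w in partial_reform(warps)])  # [30, 12]: dense warp untouched
```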

20.
Cache Profiling techniques   (total citations: 1; self-citations: 0; citations by others: 1)
How to reduce and hide the latency of cache misses is a topic of wide interest. To obtain cache hit/miss information, compilers often have to run the program through a simulator, which is very slow. To overcome this drawback, cache profiling is performed inside the compiler to obtain cache access information. Similar to value profiling and stride profiling, cache profiling instruments memory access instructions, which effectively improves speed and requires only compiler support. The information obtained by cache profiling can be used to improve instruction scheduling, software prefetching, cache-hint generation, and helper threads.
