Similar Articles
1.
A Survey of Helper-Thread Prefetching Techniques   Cited by: 1 (self-citations: 0; by others: 1)
张建勋  古志民 《计算机科学》2013,40(7):19-23,39
Helper-thread prefetching is one of the key techniques for improving the prefetch effectiveness, and hence the performance, of irregular data-intensive applications on multicore platforms, and it has become a research hotspot in recent years. Targeting the discontinuous locality that characterizes the memory access patterns of irregular data-intensive applications, helper-thread prefetching uses the last-level shared cache (LLC) of a CMP platform to convert an application's discontinuous locality into transient, contiguous spatial-temporal locality (immediate locality), thereby improving program performance through thread-level data prefetching. This paper classifies helper-thread prefetching techniques, summarizes and compares the strengths and limitations of the different implementation approaches, and analyzes in depth the prefetch control strategies of several representative helper-thread techniques. Finally, it points out future research directions in real-time helper-thread control and in the dynamic selection and optimization of parameters.
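To make the mechanism concrete, here is a minimal C sketch of a helper thread for linked-list traversal, assuming GCC/Clang (__builtin_prefetch) and POSIX threads; the node layout and the sum_list workload are hypothetical illustrations, not code from the surveyed papers:

    #include <pthread.h>
    #include <stddef.h>

    /* Hypothetical linked-list node; payload is what the main thread uses. */
    struct node {
        struct node *next;
        long payload;
    };

    /* Helper thread: traverse the same list ahead of the main thread and
     * issue non-binding prefetches so nodes tend to be resident in the
     * shared LLC by the time the main thread dereferences them. */
    static void *helper_prefetch(void *arg)
    {
        for (struct node *p = arg; p; p = p->next) {
            __builtin_prefetch(p->next, 0, 1);      /* read, low temporal locality */
            __builtin_prefetch(&p->payload, 0, 1);
        }
        return NULL;
    }

    long sum_list(struct node *head)
    {
        pthread_t helper;
        long sum = 0;

        pthread_create(&helper, NULL, helper_prefetch, head);
        for (struct node *p = head; p; p = p->next)
            sum += p->payload;                      /* main-thread delinquent load */
        pthread_join(&helper, NULL);
        return sum;
    }

Because the helper executes only the traversal and no computation, it tends to run ahead of the main thread and warm the shared cache.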

2.
The gene expression profile is partitioned into blocks that match the thread-structure characteristics of GPU parallel computing; a two-level parallel scheme is designed around the GPU thread hierarchy, and the texture cache is used for efficient memory access. According to the CPU's L2 cache capacity, basic blocks are further subdivided into sub-blocks to raise the cache hit rate; data prefetching reduces the number of memory accesses, and thread binding reduces thread migration across cores. The gene mutual-information computation is apportioned between the multicore CPU and the GPU according to their respective computing power, balancing their loads. On the basis of a newly designed threshold-computation algorithm, a memory-efficient CPU/GPU parallel algorithm for constructing global gene regulatory networks is designed and implemented. Experimental results show that, compared with existing algorithms, the proposed algorithm achieves greater speedup and can construct global gene regulatory networks at larger scale.
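The thread-binding step mentioned above can be sketched in C for Linux (assuming the GNU pthreads affinity extension; the core-numbering policy is a hypothetical choice):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    /* Pin the calling thread to one core so it is not migrated,
     * keeping its working set warm in that core's private caches. */
    static int bind_to_core(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }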

3.
A Prefetching Policy Incorporating Memory Miss Queue State   Cited by: 1 (self-citations: 0; by others: 1)
As the gap between memory access speed and processor computing speed grows ever wider, memory performance has become the bottleneck of overall system performance. Based on an analysis of instruction-cache and data-cache miss behavior, this paper proposes a prefetching policy that takes the state of the memory miss queue into account. The policy preserves the order of instruction and data accesses, which facilitates the extraction of prefetch streams, and it separates instruction-stream prefetching from data-stream prefetching to avoid mutual eviction. When choosing the moment to launch a prefetch, it considers not only whether the bus is currently idle but also the state of the miss queue, reducing interference with the processor's normal memory requests. A stream-filtering mechanism improves prefetch accuracy and lowers the bandwidth demand of prefetching. Results show that with this policy, the processor's average memory latency drops by 30% and the IPC of the SPEC CPU2000 programs rises by 8.3% on average.
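Since the paper describes a hardware policy, the launch condition can only be sketched abstractly; the following C fragment models it with hypothetical names for the queue occupancy, threshold, and bus state:

    #include <stdbool.h>

    /* Hypothetical model of the hardware state consulted by the prefetcher. */
    struct mem_state {
        int  miss_queue_occupancy;   /* outstanding demand misses            */
        int  miss_queue_threshold;   /* issue prefetches only below this     */
        bool bus_idle;               /* no demand traffic currently on bus   */
    };

    /* Issue a prefetch only when it cannot delay normal demand requests. */
    static bool may_issue_prefetch(const struct mem_state *s)
    {
        return s->bus_idle &&
               s->miss_queue_occupancy < s->miss_queue_threshold;
    }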

4.
To address the memory-wall problem in modern computer systems, this paper proposes a data prefetching method suited to linked data structures: the pure-traversal push method. Using multithreading on a CMP platform with a shared cache, a push thread is split off while the main program runs; it fetches the data the main thread will need into the processor's shared cache ahead of time, hiding the main thread's memory latency. Experimental results show that on a CMP architecture this method yields a measurable performance improvement for memory-bound programs dominated by linked structures.
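A minimal sketch of the pure-traversal push idea, assuming POSIX threads (the node type is hypothetical): the push thread performs plain loads, so every node it touches is pulled into the shared cache as a side effect of the traversal itself:

    #include <pthread.h>
    #include <stddef.h>

    struct node { struct node *next; int data; };

    /* Push thread: plain traversal, no computation and no prefetch
     * intrinsics. The loads it issues bring each node into the shared
     * cache before the main thread, which does real work per node,
     * reaches it. */
    static void *push_traverse(void *arg)
    {
        volatile int sink = 0;          /* keep the loads from being optimized away */
        for (struct node *p = arg; p; p = p->next)
            sink += p->data;
        (void)sink;
        return NULL;
    }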

5.
Hardware data prefetching can effectively improve a processor's memory performance, but conventional stream prefetching suffers from prefetches that arrive too late. This paper therefore proposes a double-stride stream prefetching policy and designs the corresponding prefetch unit. The unit automatically detects a stream's constant stride and doubles it when computing prefetch addresses. Experimental results show that with this prefetch unit, processor performance on the SPEC2006 integer and floating-point workloads improves by up to 45% and 57% respectively, and for applications with high cache miss rates the unit effectively hides memory latency.
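The address computation can be illustrated with a small C model (hypothetical structure and field names; the real mechanism is a hardware table):

    #include <stdint.h>
    #include <stdbool.h>

    /* Per-stream state for a simple stride detector, assumed zero-initialized. */
    struct stream_entry {
        uintptr_t last_addr;
        intptr_t  last_stride;
        bool      stride_confirmed;
    };

    /* On each demand miss, confirm the stride and, once it repeats, return
     * the prefetch address at twice the detected stride (0 = no prefetch). */
    static uintptr_t on_miss(struct stream_entry *e, uintptr_t addr)
    {
        intptr_t stride = (intptr_t)(addr - e->last_addr);
        e->stride_confirmed = (stride != 0 && stride == e->last_stride);
        e->last_stride = stride;
        e->last_addr = addr;
        return e->stride_confirmed ? addr + 2 * stride : 0;
    }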

6.
Measuring memory access latency provides important guidance for optimizing an application's access patterns and data placement, yet data caches, multithreading, and data prefetching seriously disturb the accuracy of latency measurements. This paper designs and implements a variable-stride latency measurement model: within a region of memory, a ring of accesses is built at a user-specified stride, and the ring is traversed repeatedly to obtain the average time per access, which is taken as the memory latency. Finally, latencies on an Intel general-purpose processor and a Phytium (飞腾) processor are measured and compared across data sizes, strides, and thread counts; the model reveals the memory hierarchy and measures latency accurately.
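The measurement loop is the classic pointer-chasing ring; a minimal C sketch, assuming POSIX clock_gettime and a stride of at least one pointer (buffer size, stride, and iteration count are the parameters a user would sweep):

    #include <stdlib.h>
    #include <time.h>

    /* Build a ring of pointers at the given stride and chase it; the data
     * dependence between consecutive loads defeats out-of-order overlap,
     * so the mean time per iteration approximates load-to-use latency. */
    static double measure_latency(size_t bytes, size_t stride, long iters)
    {
        size_t n = bytes / sizeof(void *);
        size_t step = stride / sizeof(void *);   /* assume stride >= sizeof(void *) */
        void **buf = malloc(n * sizeof(void *));
        size_t i;

        for (i = 0; i + step < n; i += step)     /* link each slot to the next */
            buf[i] = &buf[i + step];
        buf[i] = &buf[0];                        /* close the ring */

        void **p = &buf[0];
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long k = 0; k < iters; k++)
            p = (void **)*p;                     /* dependent load chain */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        free(buf);
        return p ? ns / iters : 0.0;             /* use p so the chase is kept */
    }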

7.
Under static analysis alone, a compiler can hardly insert correct data prefetches for memory accesses with nonlinear patterns. Profiling, however, captures the program's memory access behavior at run time, and this information allows prefetch instructions to be inserted precisely. Building on stride profiling, this paper proposes a new collection type, stride-iterative, which reflects the actual run-time behavior of memory instructions more precisely, and combines it with alias-analysis results to adjust prefetches targeting the same cache line, achieving better prefetch performance than ordinary data prefetching. On Itanium 2, the 12 integer benchmarks of CPU2000 gain 8.54% on average, with mcf improving by 77.87%.
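The kind of prefetch a compiler would insert for a profiled constant-stride load can be sketched in C; DIST stands for a hypothetical distance that profiling would choose, and __builtin_prefetch stands in for the machine's prefetch instruction:

    /* Profile data says a[i] misses with a constant one-element stride,
     * so a prefetch is inserted DIST iterations ahead of the use. */
    #define DIST 16   /* hypothetical distance chosen from profile data */

    long sum_array(const long *a, long n)
    {
        long sum = 0;
        for (long i = 0; i < n; i++) {
            __builtin_prefetch(&a[i + DIST], 0, 3);  /* read, keep in cache */
            sum += a[i];
        }
        return sum;
    }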

8.
Non-Recursive Parallel Matrix Multiplication on Multicore Computers   Cited by: 1 (self-citations: 1; by others: 0)
A latency-hiding data prefetching model is proposed that overlaps computation with memory access, aiming at zero misses in the shared L2 cache; the notion of a basic block is introduced to simplify the algorithm's data structures and reduce storage overhead; matrix elements are stored contiguously by basic block, optimizing the algorithm at the memory-hierarchy level and markedly reducing TLB misses; and a non-recursive basic-block scheduling strategy makes full use of the multicore machine's shared L2 cache to reduce main-memory accesses while remaining independent of any particular storage structure, making the algorithm cache-oblivious. Experiments on a multicore computer show that the proposed non-recursive thread-level parallel matrix multiplication algorithm is efficient and scalable.
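The basic-block (tiling) idea can be illustrated with a serial C sketch; the tile edge B is a hypothetical value that would be sized so the working tiles fit in the shared L2 cache:

    #define B 64   /* hypothetical tile edge: 3*B*B doubles should fit in L2 */

    /* Tiled (blocked) matrix multiply: C += A * X, all n x n, row-major.
     * Each tile triple is reused from cache instead of streaming whole
     * rows and columns through main memory. */
    void matmul_tiled(int n, const double *A, const double *X, double *C)
    {
        for (int i0 = 0; i0 < n; i0 += B)
            for (int k0 = 0; k0 < n; k0 += B)
                for (int j0 = 0; j0 < n; j0 += B)
                    for (int i = i0; i < i0 + B && i < n; i++)
                        for (int k = k0; k < k0 + B && k < n; k++) {
                            double a = A[i * n + k];
                            for (int j = j0; j < j0 + B && j < n; j++)
                                C[i * n + j] += a * X[k * n + j];
                        }
    }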

9.
漆锋滨  王飞  李中升 《软件学报》2009,20(Z1):34-39
Traditional static-compilation-based data prefetching mostly targets array accesses, while today's applications contain many pointer-based linked data structures, for which traditional compilation methods are hard-pressed to generate prefetches. Feedback-directed data prefetching is a state-of-the-art compiler optimization in high-performance computing that handles the prefetching of linked structures well. Based on a study of the ORC compiler's feedback-directed optimization, this paper tailors feedback-directed prefetching for linked structures to the characteristics of the Alpha architecture. SPEC2000 tests show an average performance improvement of 4.1%.

10.
Helper-threaded prefetching based on chip multiprocessors has been shown to reduce memory latency and improve overall system performance, and has been explored for linked data structure accesses. In our earlier work, we proposed an effective threaded prefetching technique that balances delinquent loads between the main thread and the helper thread to improve prefetching effectiveness. In this paper, we analyze the memory access characteristics of specific applications to estimate the effective prefetch distance range for our threaded prefetching technique. The effect of hardware prefetchers on the estimation is also examined. We discuss key design issues of the proposed method and present preliminary experimental results. Our evaluations indicate that a bounded range of effective prefetch distances can be determined with our method, and that optimal prefetch distances can be chosen from the estimated range with a few trial runs.
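For context, the classical relation between prefetch distance and memory latency used throughout the software-prefetching literature (standard notation, not a formula quoted from this paper) is, in LaTeX:

    d = \left\lceil \frac{l}{s} \right\rceil

where l is the memory latency in cycles, s is the cycle count of the shortest path through one loop iteration, and d is the minimum number of iterations by which the prefetch must run ahead of the use.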

11.
A Thread-Based Data Prefetching Method   Cited by: 1 (self-citations: 0; by others: 1)
The adoption of multithreaded and multicore processors is limited by applications: most applications today, desktop applications in particular, are single-threaded and cannot exploit the multiple hardware contexts such processors provide for parallel execution. Using idle contexts to accelerate single-threaded applications is an active research topic, focused mainly on improving the memory access efficiency and branch prediction accuracy of traditional serial applications. In thread-based data prefetching, prefetch threads are extracted from the main thread's execution trace; they run on idle contexts in parallel with the main thread and fetch data into memory levels closer to the processor before the main thread needs it. Thread-based prefetching can effectively handle many cases that traditional prefetching copes with poorly, such as irregular memory access patterns. This paper analyzes the memory access behavior of applications, designs a thread-based data prefetching method, TDP, that incorporates control-flow handling, and validates it. Simulation results show that TDP delivers a performance gain of about 7%.

12.
This paper studies a memory-side prefetching technique to hide the latency incurred by inherently serial accesses to linked data structures (LDS). A programmable engine sits close to memory and traverses LDS independently of the processor. The engine can run ahead of the processor because of its low-latency path to memory, allowing it to initiate data transfers earlier than the processor and to pipeline multiple transfers over the network. We evaluate the proposed memory-side prefetching scheme for the Olden benchmarks on a processor-in-memory system. For the six benchmarks where LDS memory stall time is significant, the memory-side scheme reduces execution time by an average of 27% compared to a system without any prefetching. Compared to a state-of-the-art processor-side software prefetching scheme, the memory-side scheme reduces execution time by 20-50% for three of the six applications, is about the same for two applications, and is worse by 18% for one application. We conclude that our memory-side scheme is effective, but that a combination of the processor- and memory-side prefetching schemes is best, and we provide a qualitative framework to determine when each scheme should be used.

13.
Helper-threaded prefetching based on chip multiprocessors is a well-known approach to reducing memory latency and has been explored for linked data structure accesses. However, conventional helper-threaded prefetching often suffers from useless prefetches and cache thrashing, which limit its effectiveness. In this paper, we first analyze the shortcomings of conventional helper-threaded prefetching for linked data structures. We then propose an improved scheme, Skip Helper-Threaded Prefetching, for hotspots with two-level data traversals. Our solution profiles the application and balances delinquent loads between the main thread and the prefetching thread based on the characteristics of the operations in its hotspots. Evaluations show that the proposed solution improves average performance by 8.9% (-O2) and 8.5% (-O3) over conventional helper-threaded prefetching, which greedily prefetches all delinquent loads. We also compare our proposal with active threaded prefetching, which synchronizes with the main thread via semaphores, and find that ours provides better performance for the targeted applications.

14.
Lee Minsuk, Min Sang Lyul, Shin Heonshik, Kim Chong Sang, Park Chang Yun 《Real-Time Systems》1997,13(1):47-65
Cache memories have been extensively used to bridge the speed gap between high-speed processors and relatively slow main memory. However, they are not widely used in real-time systems due to their unpredictable performance. This paper proposes an instruction prefetching scheme called threaded prefetching as an alternative to instruction caching in real-time systems. In the proposed scheme, an instruction block pointer called a thread is assigned to each instruction memory block and made to point to the next block on the worst-case execution path, which is determined by compile-time analysis. The thread is not updated during program execution, which guarantees predictability. This paper also compares the worst-case performance of various previous instruction prefetching schemes with that of the proposed threaded prefetching. By analyzing several benchmark programs, we show that the worst-case performance of the proposed scheme is significantly better than that of previous instruction prefetching schemes. The results also show that when the block size is large enough, the worst-case performance of the proposed scheme is almost as good as that of an instruction cache with a 100% hit ratio.
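The per-block thread pointer can be sketched as a compile-time table in C (hypothetical types; per the paper, the pointer is fixed from the worst-case execution path and never updated at run time):

    /* One descriptor per instruction memory block. The thread field points
     * to the block that follows on the worst-case execution path; it is
     * set at compile time and never updated, keeping behavior predictable. */
    struct block_desc {
        const void *addr;                 /* start address of this block */
        const struct block_desc *thread;  /* next block on the WCET path */
    };

    /* While a block executes, the prefetch unit fetches its thread target. */
    static const void *next_prefetch(const struct block_desc *b)
    {
        return b->thread ? b->thread->addr : 0;
    }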

15.
Prefetching in High-Bandwidth Remote Memory Architectures   Cited by: 1 (self-citations: 0; by others: 1)
Advances in high-speed circuits and optical interconnects have greatly increased network speed and bandwidth, making it possible to break with the traditional tightly coupled CPU-memory architecture of high-performance computers: the coupling of CPU and memory is no longer distance-limited, which is bound to change system architecture. Reference [1] proposes the DSAG architecture, in which CPU and memory are physically separated: each CPU node keeps only a small amount of local memory, while bulk memory is managed centrally on remote memory servers, and CPU nodes and memory servers are connected by a high-speed network. This new architecture offers better sharing and scalability, but it also challenges us to address the imbalance between CPU and memory. To reduce the extra memory latency introduced by DSAG's remote memory, we observe that normal CPU accesses do not saturate the network's high bandwidth, so the spare bandwidth can be used to prefetch data from remote memory. This paper records the addresses of local (as opposed to remote-memory) misses during application execution, analyzes, at page granularity, the statistical properties of the page frame streams they contain, and argues that a prefetching mechanism based on page frame streams can reduce memory latency and improve system performance. Simulation confirms the feasibility and correctness of this view; we further propose three prefetching policies and compare and analyze the factors that influence prefetching effectiveness.

16.
Due to cluster resource competition and task scheduling policy, some map tasks are assigned to nodes without their input data, which causes significant data access delay. Data locality is becoming one of the most critical factors affecting the performance of MapReduce clusters. As machines in MapReduce clusters have large memory capacities, which are often underutilized, prefetching input data into memory is an effective way to improve data locality. However, deciding what and when to prefetch still poses serious challenges to cluster designers. To use prefetching effectively, we have built HPSO (High Performance Scheduling Optimizer), a prefetching-service-based task scheduler that improves data locality for MapReduce jobs. The basic idea is to predict the most appropriate nodes for future map tasks based on the current pending tasks and then preload the needed data into memory without delaying the launch of new tasks. To this end, we have implemented HPSO in Hadoop-1.1.2. The experimental results show that the method reduces the number of map tasks suffering remote data delay and improves the performance of Hadoop clusters.

17.
Cache performance is strongly influenced by the type of locality embodied in programs. In particular, multimedia programs handling images and videos are characterized by a bidimensional spatial locality, which is not adequately exploited by standard caches. In this paper we propose novel cache prefetching techniques for image data, called neighbor prefetching, able to better exploit bidimensional spatial locality. A performance comparison is provided against other established prefetching techniques on a multimedia workload (with MPEG-2 and MPEG-4 decoding, image processing, and visual object segmentation), including a detailed evaluation of both miss rate and memory access time. Results show that neighbor prefetching achieves a significant reduction in the time due to delayed memory cycles (more than 97% on MPEG-4, against 75% for the second-best technique). This reduction leads to a substantial speedup in overall memory access time (up to 140% for MPEG-4). Performance was measured with the PRIMA trace-driven simulator, specifically devised to support cache prefetching.
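A minimal software model of the neighbor idea for a row-major image, assuming GCC/Clang's __builtin_prefetch (the paper evaluates a hardware prefetcher; the layout and names here are hypothetical):

    /* Read pixel (x, y) of a row-major image and hint its right and lower
     * neighbor blocks, exploiting bidimensional spatial locality. */
    static unsigned char read_with_neighbors(const unsigned char *img,
                                             int width, int x, int y, int block)
    {
        __builtin_prefetch(&img[y * width + x + block]);     /* right neighbor */
        __builtin_prefetch(&img[(y + block) * width + x]);   /* lower neighbor */
        return img[y * width + x];                           /* demand access  */
    }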

18.
Data prefetching is an effective data access latency hiding technique to mask the CPU stall caused by cache misses and to bridge the performance gap between processor and memory. With hardware and/or software support, data prefetching brings data closer to a processor before it is actually needed. Many prefetching techniques have been developed for single-core processors. Recent developments in processor technology have brought multicore processors into the mainstream. While some of the single-core prefetching t...

19.
Real-time multimedia applications operate under strict timing constraints, and memory access latency is one of the most critical factors affecting system performance. For real-time multimedia data, prefetching and replacement policies based on LRU and its variants, designed for conventional data, can no longer meet the required QoS. Targeting real-time media streams, this paper builds a task-path-tree model of real-time media objects in a terminal environment and presents MO-VFT, a prefetching and replacement algorithm for media streams under this model. Simulation results show that the model is sound, the algorithm's overhead is small, and performance improves by 20-35%.
