Similar Documents
18 similar documents found.
1.
An Adaptive Cache Write-Allocation Policy   (Total citations: 1; self-citations: 0; citations by others: 1)
The effective bandwidth a processor can deliver is currently a key factor limiting further performance gains. Based on an analysis of cache write-miss behavior, a new write-miss handling policy that improves bandwidth utilization is proposed: the cache adaptive write-allocation policy. The policy collects fully modified cache blocks in the memory-access miss queue, applies a no-write-allocate policy to those blocks, and can adaptively switch back to write-allocate. Compared with traditional write-miss handling policies, the adaptive policy has a small hardware cost, avoids unnecessary data transfers, reduces cache pollution, and lowers the frequency of memory-management-queue stalls. The results show that with the adaptive write-allocation policy, the bandwidth of the STREAM benchmarks improves by 62.6% on average and the IPC of the SPEC CPU2000 programs improves by 5.9% on average.
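A minimal C sketch of the decision rule described above, assuming a hypothetical miss-queue entry that tracks which words of the pending block have already been written (the names and the 8-word block size are illustrative, not the paper's hardware):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WORDS_PER_BLOCK 8

/* Hypothetical miss-queue entry: records which words of the missing
 * cache block were overwritten while the write miss was pending. */
typedef struct {
    uint64_t block_addr;
    uint8_t  written_mask;                 /* one bit per word in the block */
} miss_entry_t;

typedef enum { WRITE_ALLOCATE, NO_WRITE_ALLOCATE } wmiss_policy_t;

/* A block is "fully modified" when every word has been written, so fetching
 * the stale copy from memory would only waste bandwidth. */
static bool fully_modified(const miss_entry_t *e) {
    return e->written_mask == (uint8_t)((1u << WORDS_PER_BLOCK) - 1);
}

/* Adaptive choice: no-write-allocate for fully modified blocks,
 * conventional write-allocate (fetch, then merge) for everything else. */
static wmiss_policy_t choose_policy(const miss_entry_t *e) {
    return fully_modified(e) ? NO_WRITE_ALLOCATE : WRITE_ALLOCATE;
}

int main(void) {
    miss_entry_t full    = { 0x1000, 0xFF };   /* all eight words written */
    miss_entry_t partial = { 0x2000, 0x0F };   /* only half written       */
    printf("fully modified block: %s\n",
           choose_policy(&full) == NO_WRITE_ALLOCATE ? "no-write-allocate" : "write-allocate");
    printf("partially written   : %s\n",
           choose_policy(&partial) == NO_WRITE_ALLOCATE ? "no-write-allocate" : "write-allocate");
    return 0;
}
```

The no-write-allocate branch is what saves bandwidth: a fully modified block can be written out without first reading its stale contents from memory.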

2.
Hardware data prefetching can effectively improve a processor's memory access performance, but the traditional stream prefetching policy suffers from prefetches that arrive too late. To address this, a double-stride stream prefetching policy is proposed, together with the design of the corresponding prefetch unit. The prefetch unit automatically detects the fixed stride of a data stream and doubles that stride when computing prefetch addresses. Experimental results show that with this prefetch unit, processor performance on the SPEC2006 integer and floating-point applications improves by up to 45% and 57% respectively, and for applications with high cache miss rates the unit effectively hides memory access latency.
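As a rough illustration of the policy (not the paper's prefetch unit), the following C sketch advances the prefetch address by twice the detected stride once the stride has been confirmed, so the prefetched block has more time to arrive before the demand access catches up:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-stream tracking entry. */
typedef struct {
    uint64_t last_addr;     /* address of the last access in this stream       */
    int64_t  stride;        /* stride observed between the last two accesses   */
    bool     confirmed;     /* the same stride was seen twice in a row         */
} stream_entry_t;

/* Update the stream on a demand access.  Once the stride is confirmed,
 * return a prefetch address two strides ahead instead of one.
 * Returns 0 when no prefetch should be issued. */
uint64_t update_and_prefetch(stream_entry_t *s, uint64_t addr) {
    int64_t new_stride = (int64_t)(addr - s->last_addr);
    s->confirmed = (new_stride != 0 && new_stride == s->stride);
    s->stride    = new_stride;
    s->last_addr = addr;
    if (!s->confirmed)
        return 0;
    return (uint64_t)((int64_t)addr + 2 * s->stride);   /* double stride */
}
```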

3.
The "memory wall" has become the main obstacle to improving processor performance, and the cache pollution caused by data preloaded when the core speculatively executes memory instructions on predicted paths can seriously degrade performance. This paper proposes CSDA, a cache pollution control method for data preloaded during speculative execution. First, confidence estimation is used to separate out the predicted paths most likely to be wrong. Then, using a history table that identifies low-confidence pollution-type memory instructions, the memory instructions on low-confidence predicted paths are classified as prefetch-type or pollution-type; a low-priority load/store queue is set up for the pollution-type instructions, and a dedicated pollution-data cache stores the polluted data. Simulation results show that in dual-core mode, compared with the baseline, CSDA reduces the L1 D-cache miss rate by 9%-23% (17% on average), reduces the L2 cache miss rate by 1.02%-14.39% (5.67% on average), and improves IPC by 0.19%-5.59% (2.21% on average).
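The classification step can be sketched in C as follows; the confidence threshold, table fields, and steering targets are assumptions used for illustration, not the actual CSDA structures:

```c
#include <stdbool.h>
#include <stdint.h>

/* Where a speculatively issued memory access is steered. */
typedef enum {
    NORMAL_LSQ,        /* high-confidence path: normal load/store queue      */
    PREFETCH_TYPE,     /* low confidence, but its data tends to be reused    */
    POLLUTION_TYPE     /* low confidence and rarely reused: low-priority     */
                       /* queue plus the dedicated pollution-data cache      */
} steer_t;

/* Hypothetical inputs: a branch-confidence estimate (0..15) and one bit of
 * history saying whether this instruction polluted the cache in the past. */
typedef struct {
    uint8_t branch_confidence;
    bool    polluted_before;
} spec_info_t;

#define LOW_CONFIDENCE_THRESHOLD 4

steer_t classify_speculative_access(const spec_info_t *info) {
    if (info->branch_confidence >= LOW_CONFIDENCE_THRESHOLD)
        return NORMAL_LSQ;                         /* path is probably correct   */
    return info->polluted_before ? POLLUTION_TYPE  /* isolate its data           */
                                 : PREFETCH_TYPE;  /* may act as a free prefetch */
}
```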

4.
High-performance processors commonly integrate large, structurally complex L1 caches on chip to improve performance, but as cache capacity and complexity grow, the latency and power consumed by cache accesses increase noticeably. Building on the store queue, this work proposes a method that reduces power and latency by cutting the number of cache accesses: the store queue buffers the data of load/store instructions, and when the queue is not full, its free entries temporarily hold data from already completed memory accesses, raising the reuse rate of consecutively accessed data and reducing the number of cache accesses. Simulation results show that with only a small amount of added control logic, the method significantly reduces cache accesses, lowers cache power consumption, shortens memory access latency, and speeds up execution.
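A behavioral C sketch of the idea, assuming a simple store-queue entry format: a load first searches the store queue, whose free entries also park data from completed accesses, and only goes to the data cache on a miss:

```c
#include <stdbool.h>
#include <stdint.h>

#define SQ_ENTRIES 16

typedef struct {
    bool     valid;
    uint64_t addr;
    uint64_t data;
} sq_entry_t;

typedef struct {
    sq_entry_t entry[SQ_ENTRIES];
} store_queue_t;

/* A load first searches the store queue.  Entries may be pending stores or
 * completed accesses parked in otherwise free slots; a hit here avoids a
 * data cache access entirely. */
bool sq_lookup(const store_queue_t *sq, uint64_t addr, uint64_t *data) {
    for (int i = 0; i < SQ_ENTRIES; i++) {
        if (sq->entry[i].valid && sq->entry[i].addr == addr) {
            *data = sq->entry[i].data;
            return true;
        }
    }
    return false;                       /* miss: fall back to the data cache */
}

/* When a cache access completes and the queue is not full, park the returned
 * data in a free entry so that nearby loads can reuse it. */
void sq_park_completed(store_queue_t *sq, uint64_t addr, uint64_t data) {
    for (int i = 0; i < SQ_ENTRIES; i++) {
        if (!sq->entry[i].valid) {
            sq->entry[i] = (sq_entry_t){ true, addr, data };
            return;
        }
    }
    /* queue full: drop the opportunistic copy */
}
```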

5.
DOOC: A Software/Hardware Cooperatively Managed Cache that Effectively Eliminates Thrashing   (Total citations: 3; self-citations: 0; citations by others: 3)
As the bridge over the huge speed gap between processor and main memory, the cache has become an indispensable part of modern processors. Research shows that a traditional cache, managed by hardware alone with a fixed cache policy and coherence protocol, has difficulty adapting to the diversity of data access patterns in programs, which easily leads to cache thrashing and hurts performance. A new software/hardware cooperatively managed cache, the data-object oriented cache (DOOC), is proposed. DOOC dynamically allocates cache segments to the data objects of a program and dynamically varies segment capacity, intra-segment associativity, block size, and coherence protocol, thereby adapting to diverse access patterns. The compilation method for DOOC's software management and a data-object-oriented prefetching mechanism are also presented. The hardware overhead of DOOC is evaluated with CACTI and on a LEON3-based experimental platform, verifying its hardware feasibility, and software simulation is used to test DOOC's performance on single-core and multi-core platforms. Evaluation of 15 benchmark programs on a single-core processor shows that, compared with a traditional cache, DOOC reduces the miss rate by 44.98% on average (93.02% at most) with an average speedup of 1.20 (2.36 at most). On a 4-core platform running the OpenMP version of the NPB benchmarks, the miss rate drops by 49.69% on average (73.99% at most).
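To make the software-managed side concrete, here is a hypothetical sketch of what a segment-per-data-object interface could look like in C; the function names, parameters, and configuration fields are illustrative assumptions (declarations only), not the actual DOOC interface:

```c
#include <stddef.h>

/* Per-object segment parameters the software side would choose. */
typedef enum {
    COHERENCE_NONE,
    COHERENCE_WRITE_INVALIDATE,
    COHERENCE_WRITE_UPDATE
} coherence_t;

typedef struct {
    size_t      capacity_bytes;
    unsigned    associativity;
    size_t      block_bytes;
    coherence_t protocol;
} segment_cfg_t;

/* Illustrative software-managed interface (declarations only): the compiler
 * or runtime binds the address range of one data object to its own cache
 * segment, so its accesses cannot thrash other objects. */
int  dooc_alloc_segment(const void *obj_base, size_t obj_size,
                        const segment_cfg_t *cfg);      /* returns a segment id */
void dooc_resize_segment(int segment_id, const segment_cfg_t *cfg);
void dooc_free_segment(int segment_id);

/* Example: give a large streaming array a small direct-mapped segment with
 * big blocks, so it does not evict the rest of the working set. */
static double big_array[1 << 20];

void bind_streaming_array(void) {
    segment_cfg_t cfg = { 8 * 1024, 1, 128, COHERENCE_NONE };
    int id = dooc_alloc_segment(big_array, sizeof big_array, &cfg);
    /* ... run the streaming phase ... */
    dooc_free_segment(id);
}
```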

6.
Research on Low-Power Design of the Memory Access Unit in an Embedded Processor   (Total citations: 2; self-citations: 0; citations by others: 2)
Taking the Loongson-1 (龙芯1号) processor as the subject of study, this work explores low-power design methods for the memory access unit of an embedded processor. By analyzing the structure, power consumption, and critical path of the memory access unit, and exploiting the principle of locality, a method is proposed that makes decisions based on a history of virtual addresses, significantly reducing the number of accesses the TLB and cache make to their RAM blocks. TLB power drops by 28.1% on average, cache power by 54.3%, and total processor power by 23.2%, while the critical-path delay actually decreases and processor performance improves slightly.
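A rough C sketch of the virtual-address-history idea: the unit remembers the last virtual page it translated and skips the power-hungry TLB RAM lookup when the new access falls on the same page. The structure and function names are hypothetical, and tlb_ram_lookup is only a stand-in for the real RAM access:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12                        /* 4 KB pages */
#define PAGE_MASK  ((UINT64_C(1) << PAGE_SHIFT) - 1)

/* Remembered result of the most recent translation. */
typedef struct {
    bool     valid;
    uint64_t last_vpn;                       /* virtual page of the last access */
    uint64_t last_pfn;                       /* its translation                 */
} va_history_t;

/* Stand-in for the full TLB RAM lookup the hardware wants to avoid. */
uint64_t tlb_ram_lookup(uint64_t vpn);

/* Translate a virtual address.  When it falls on the same page as the
 * previous access, reuse the stored translation and leave the TLB RAM
 * disabled, which is where most of the lookup power goes. */
uint64_t translate(va_history_t *h, uint64_t vaddr) {
    uint64_t vpn = vaddr >> PAGE_SHIFT;
    if (!(h->valid && vpn == h->last_vpn)) {
        h->last_pfn = tlb_ram_lookup(vpn);   /* the costly RAM access */
        h->last_vpn = vpn;
        h->valid    = true;
    }
    return (h->last_pfn << PAGE_SHIFT) | (vaddr & PAGE_MASK);
}
```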

7.
Because linked data structures lack spatial locality in their memory layout, accesses to linked data during program execution suffer severe cache misses. Through a performance analysis of thread-based prefetching for linked structures, this work studies how the ratio of computation work to memory access work in the hot loops of linked-data-structure programs affects thread prefetching performance. Exploiting the characteristics of multi-core platforms, a helper-thread interval prefetching method suited to linked data structures is implemented. The experimental results further confirm the influence of the computation-to-memory ratio on interval prefetching performance and show that interval prefetching has a clear performance advantage over traditional thread prefetching.
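One plausible reading of the helper-thread scheme, sketched in C (an assumption-laden illustration, not the paper's implementation): a helper thread on another core chases the same list ahead of the main thread, keeping roughly a fixed interval of nodes between them so that prefetched nodes are neither too late nor evicted before use:

```c
#include <stddef.h>
#include <stdatomic.h>

typedef struct node {
    struct node *next;
    int payload[14];                 /* roughly one cache block of data */
} node_t;

/* Position of the main thread, published so the helper does not run
 * uselessly far ahead; the gap between the two threads is the "interval". */
_Atomic long main_progress;

/* Helper-thread body (runs on another core): chase the same list ahead of
 * the main thread, touching each node so the pointer chain is warm in the
 * shared cache, and throttle itself to stay about `interval` nodes ahead. */
void helper_prefetch(const node_t *head, long interval) {
    long pos = 0;
    for (const node_t *p = head; p != NULL; p = p->next, pos++) {
        __builtin_prefetch(p, 0, 1);                   /* GCC/Clang hint */
        while (pos > atomic_load(&main_progress) + interval) {
            /* too far ahead: spin so prefetched nodes are not evicted */
        }
    }
}
```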

8.
Hardware data prefetching can effectively improve a processor's memory access performance and is a technique that urgently needs a breakthrough in the performance optimization of the Sunway (Shenwei) processor. Hardware cost and the constraints of the processor architecture are the main difficulties in implementing hardware prefetching. Drawing on academic research on hardware prefetching and its current industrial use, and closely following the structural characteristics of the Sunway processor, this work studies how to implement hardware prefetching for it. Taking stream prefetching as an example, with a 0.97% increase in core area, hardware prefetching improves the integer performance of the current Sunway processor by 5.17% on average (28.88% at most) and floating-point performance by 6.39% on average (30.11% at most).

9.
The design and management of the L2 cache in a chip multiprocessor is one of the key factors affecting its performance. Building on private L2 caches, a cooperative cache design based on a centralized coherence directory is proposed, which optimizes processor performance by managing on-chip storage resources effectively; the cooperative cache thus offers low average memory access latency, a low cache miss rate, and good scalability. Experimental results show that compared with a shared L2 design, the cooperative cache improves the throughput of a 4-core processor by 13.5% on average, with a hardware overhead of about 8.1%.

10.
To improve the memory access performance of network memory, this work proposes, on top of a page-level stream cache and prefetch structure, a variable stride stream (VSS) banded stream detection algorithm and a clock-stride-based stream prefetch optimization. The banded stream detection algorithm solves the problem of jumps in virtual page addresses during loop accesses under fixed-stride detection, eliminating broken streams and effectively raising stream detection coverage. The clock-stride-based prefetch optimization dynamically adjusts the prefetch length, solving the problem that some prefetches cannot be returned in time and further improving prefetch performance. Comparison with a sequential prefetching algorithm shows that VSS achieves prefetching with high accuracy and low communication overhead. Simulation of this stream cache and prefetch mechanism in a network memory system verifies the feasibility of trading a small performance loss for flexible remote memory expansion.
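As an illustration of band-based stream detection (a simplified reading, not the exact VSS algorithm): instead of requiring an exact repeat of one stride, the detector accepts any stride within a band around the reference stride, so an occasional page-number jump at the end of a loop row does not break the stream:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Per-stream state for band-based detection. */
typedef struct {
    uint64_t last_page;
    int64_t  ref_stride;          /* reference stride, in pages               */
    int64_t  band;                /* tolerated deviation from the reference   */
    int      hits;                /* consecutive accesses matching the band   */
} vss_stream_t;

/* Returns true when the stream is confident enough to drive prefetching. */
bool vss_update(vss_stream_t *s, uint64_t page) {
    int64_t stride = (int64_t)(page - s->last_page);
    s->last_page = page;
    if (llabs(stride - s->ref_stride) <= s->band) {
        s->hits++;                            /* still the same banded stream */
    } else {
        s->ref_stride = stride;               /* start tracking a new stride  */
        s->hits = 0;
    }
    return s->hits >= 2;
}
```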

11.
As the speed gap between processor and memory keeps widening, memory instructions, especially those that frequently miss the cache, have become an important performance bottleneck. Since the compiler cannot know how many cycles a memory instruction takes at run time, it usually assumes either the cache-hit or the cache-miss latency for these instructions, which is inaccurate. We introduce cache profiling to collect run-time hit/miss information for memory instructions and use it to compute their latencies. Hardware instruction scheduling in an out-of-order machine handles instructions within the issue window well dynamically, while the compiler has the advantage when scheduling over a longer range. Once a cache miss occurs, the reorder buffer easily fills up and the pipeline stalls. Scheduling miss-prone instructions so that they execute in parallel hides the long miss latency and improves program performance. Therefore, for load instructions, we both adjust the assumed latency of frequently missing instructions and modify the scheduling policy to raise memory-level parallelism. Experiments show that our scheduling yields up to a 4.8% improvement on bzip2, 4% on art, and 1.5% overall on average.

12.
The speed gap between processor and main memory is the major performance bottleneck of modern computer systems. As a result, today's microprocessors suffer from frequent cache misses and lose many CPU cycles due to pipeline stalling. Although traditional data prefetching methods considerably reduce the number of cache misses, most of them strongly rely on the predictability of future accesses and often fail when memory accesses do not contain much locality. To solve the long latency problem of current memory systems, this paper presents the design and evaluation of our high-performance decoupled architecture, the HiDISC (Hierarchical Decoupled Instruction Stream Computer). The motivation for the design originated from the traditional decoupled architecture concept and its limits. The HiDISC approach implements an additional prefetching processor on top of a traditional access/execute architecture. Our design aims at providing low memory access latency by separating and decoupling otherwise sequential pieces of code into three streams and executing each stream on three dedicated processors. The three streams act in concert to mask the long access latencies by providing the necessary data to the upper level on time. This is achieved by separating the access-related instructions from the main computation and running them early enough on the two dedicated processors. Detailed hardware design and performance evaluation are performed with the development of an architectural simulator and compilation tools. Our performance results show that the proposed HiDISC model reduces cache misses by 19.7% and improves the overall IPC (Instructions Per Cycle) by 15.8%. With a slower memory model assuming 200 CPU cycles as memory access latency, HiDISC improves performance by 17.2%.

13.
The on-chip memory performance of embedded systems directly affects the system designers' decision about how to allocate expensive silicon area. A novel memory architecture, flexible sequential and random access memory (FSRAM), is investigated for embedded systems. To realize sequential accesses, small "links" are added to each row in the RAM array to point to the next row to be prefetched. The potential cache pollution is ameliorated by a small sequential access buffer (SAB). To evaluate the architecture-level performance of FSRAM, we ran the Mediabench benchmark programs on a modified version of the SimpleScalar simulator. Our results show that the FSRAM improves the performance of a baseline processor with a 16 KB data cache by up to 55%, with an average of 9%; furthermore, the FSRAM reduces the data cache miss count by 53.1% on average due to its prefetching effect. We also designed RTL and SPICE models of the FSRAM, which show that the FSRAM significantly improves memory access time, while reducing power consumption, with negligible area overhead.
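A behavioral C model of the linked-row idea (a concept illustration, not the RTL or SPICE model): each RAM row carries a small link naming the row to prefetch next, and the prefetched row is placed in a tiny sequential access buffer (SAB) so it does not pollute the data cache:

```c
#include <stdint.h>
#include <string.h>

#define ROWS      256
#define ROW_BYTES 32
#define SAB_ROWS  4                 /* small sequential access buffer */

typedef struct {
    uint8_t data[ROW_BYTES];
    int     link;                   /* row to prefetch after this one; -1 = none */
} fsram_row_t;

typedef struct {
    int     row_id[SAB_ROWS];       /* which rows the buffer currently holds */
    uint8_t data[SAB_ROWS][ROW_BYTES];
    int     next_victim;
} sab_t;

/* On a demand access to `row`, also copy the linked row into the SAB so a
 * later sequential access finds it there instead of paying another array
 * access or polluting the data cache. */
void fsram_read(fsram_row_t ram[ROWS], sab_t *sab, int row, uint8_t out[ROW_BYTES]) {
    memcpy(out, ram[row].data, ROW_BYTES);
    int next = ram[row].link;
    if (next >= 0) {
        int v = sab->next_victim;
        sab->row_id[v] = next;
        memcpy(sab->data[v], ram[next].data, ROW_BYTES);
        sab->next_victim = (v + 1) % SAB_ROWS;
    }
}
```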

14.
As the gap between memory system access speed and processor computation speed becomes more and more pronounced, memory access performance has become the bottleneck for improving processor performance. Based on an analysis of program memory access behavior, an adaptive stack cache scheme with fast address calculation is proposed. The scheme separates stack accesses from data cache accesses and fully exploits the characteristics of stack data accesses to raise instruction-level parallelism, reduce data cache pollution, and lower the data cache miss rate; a fast address calculation strategy shortens the hit time of stack accesses. The stack cache can adaptively shut itself off when stack overflow occurs, avoiding the performance impact of stack switching. A process identifier is added to the stack cache tags so that data need not be written back to the lower-level memory system on a process switch, making the scheme suitable for multi-process environments. Results on SPEC CPU2000 show that with the adaptive stack cache with fast address calculation, 25.8% of memory instructions can execute in parallel, the data cache miss rate drops by 9.4% on average, and IPC improves by 6.9% on average.
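A simplified C sketch of the steering and fast-address-calculation ideas; the window size and the exact speculation/verification scheme are assumptions for illustration. Because stack accesses are small constant offsets from the stack pointer, the stack-cache index can be formed from the low-order bits without waiting for the full effective-address add:

```c
#include <stdbool.h>
#include <stdint.h>

#define STACK_CACHE_BYTES (4 * 1024)     /* illustrative window size */

/* Steering: an access goes to the stack cache when it is stack-pointer
 * relative and lands inside the cached window of the stack. */
bool goes_to_stack_cache(uint64_t sp, uint64_t addr) {
    return addr >= sp && addr < sp + STACK_CACHE_BYTES;
}

/* Fast address calculation: for an SP-relative access with a small
 * immediate offset, the stack-cache index only needs the low-order bits of
 * sp + offset, so it can be produced by a narrow adder before the full
 * 64-bit effective address is available (and verified afterwards). */
uint32_t fast_stack_index(uint64_t sp, uint16_t offset) {
    return (uint32_t)((sp + offset) & (STACK_CACHE_BYTES - 1));
}
```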

15.
Cache performance is strongly influenced by the type of locality embodied in programs. In particular, multimedia programs handling images and videos are characterized by a bidimensional spatial locality, which is not adequately exploited by standard caches. In this paper we propose novel cache prefetching techniques for image data, called neighbor prefetching, able to improve exploitation of bidimensional spatial locality. A performance comparison is provided against other assessed prefetching techniques on a multimedia workload (with MPEG-2 and MPEG-4 decoding, image processing, and visual object segmentation), including a detailed evaluation of both the miss rate and the memory access time. Results prove that neighbor prefetching achieves a significant reduction in the time due to delayed memory cycles (more than 97% on MPEG-4, compared with 75% for the second-best technique). This reduction leads to a substantial speedup in the overall memory access time (up to 140% for MPEG-4). Performance has been measured with the PRIMA trace-driven simulator, specifically devised to support cache prefetching.
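A small C sketch of neighbor prefetching for row-major image data (an illustration, not the paper's exact policy): on an access to pixel (x, y), the blocks holding the horizontal and vertical neighbors are also hinted, capturing the bidimensional locality that a one-dimensional sequential prefetcher misses:

```c
#include <stddef.h>
#include <stdint.h>

/* On an access to pixel (x, y) of a row-major image, hint the block that
 * holds its right-hand neighbor and the pixel directly below.  The vertical
 * neighbor is a whole row (`width` bytes) away, which a one-dimensional
 * sequential prefetcher would normally not cover. */
void neighbor_prefetch(const uint8_t *image, size_t width, size_t height,
                       size_t x, size_t y, size_t block) {
    if (x + block < width)
        __builtin_prefetch(&image[y * width + x + block], 0, 1);   /* right */
    if (y + 1 < height)
        __builtin_prefetch(&image[(y + 1) * width + x], 0, 1);     /* below */
}
```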

16.
In this paper we propose and evaluate a new data-prefetching technique for cache coherent multiprocessors. Prefetches are issued by a functional unit called a prefetch engine which is controlled by the compiler. We let second-level cache misses generate cache miss traps and start the prefetch engine in a trap handler. The trap handler is fast (40–50 cycles) and does not normally delay the program beyond the memory latency of the miss. Once started, the prefetch engine executes on its own and causes no instruction overhead. The only instruction overhead in our approach occurs when a trap handler completes after the data arrives. The advantages of this technique are (1) it exploits static compiler analysis to determine what to prefetch, which is hard to do in hardware, (2) it uses prefetching with very little instruction overhead, which is a limitation for traditional software-controlled prefetching, and (3) it is accurate in the sense that it generates very little useless traffic while maintaining high prefetching coverage. We also study whether one could emulate the prefetch engine in software, which would not require any additional hardware beyond support for generating cache miss traps and ordinary prefetch instructions. In this paper we present the functionality of the prefetch engine and a compiler algorithm to control it. We evaluate our technique on six parallel scientific and engineering applications using an optimizing compiler with our algorithm and a simulated multiprocessor. We find that the prefetch engine removes up to 67% of the memory access stall time at an instruction overhead of less than 0.42%. The emulated prefetch engine generally removes less stall time at a higher instruction overhead.
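In rough C terms (stand-in declarations, not the real hardware/compiler interface), the flow described above looks like this: an L2 miss raises a trap, the short trap handler looks up the compiler-prepared prefetch descriptor for the missing data and starts the engine, which then issues prefetches on its own with no instruction overhead in the application:

```c
#include <stddef.h>
#include <stdint.h>

/* Compiler-generated descriptor telling the engine what to fetch next for a
 * given data structure (fields are illustrative). */
typedef struct {
    uint64_t base;          /* start of the region to prefetch      */
    int64_t  stride;        /* distance between consecutive blocks  */
    int      count;         /* how many blocks to run ahead         */
} prefetch_desc_t;

/* Stand-ins for the hardware interface: start the engine on a descriptor,
 * and map a missing address to the descriptor the compiler prepared. */
void engine_start(const prefetch_desc_t *d);
const prefetch_desc_t *lookup_descriptor(uint64_t miss_addr);

/* The short miss-trap handler: it only dispatches to the engine.  Once
 * started, the engine issues its prefetches with no further instruction
 * overhead in the application. */
void l2_miss_trap_handler(uint64_t miss_addr) {
    const prefetch_desc_t *d = lookup_descriptor(miss_addr);
    if (d != NULL)
        engine_start(d);
    /* return from trap: the missing load then completes normally */
}
```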

17.
The increasing gap in performance between processors and main memory has made effective instruction prefetching techniques more important than ever. A major deficiency of existing prefetching methods is that most of them require an extra port to the I-cache. A recent study by Rivers et al. [19] shows that this factor alone explains why most modern microprocessors do not use such hardware-based I-cache prefetch schemes. The contribution of this paper is two-fold. First, we present a method that does not require an extra port to the I-cache. Second, the performance improvement for our method is greater than the best competing method, BHGP [23], even disregarding the improvement from not having an extra port. The three key features of our method that prevent the above deficiencies are as follows. First, late prefetching is prevented by correlating misses to dynamically preceding instructions. For example, if the I-cache miss latency is 12 cycles, then the instruction that was fetched 12 cycles prior to the miss is used as the prefetch trigger. Second, the miss history table is kept to a reasonable size by grouping contiguous cache misses together and associating them with one preceding instruction, and therefore one table entry. Third, the extra I-cache port is avoided through efficient prefetch filtering methods. Experiments show that for our benchmarks, chosen for their poor I-cache performance, an average improvement of 9.2% in runtime is achieved versus the BHGP method [23], while the hardware cost is also reduced. The improvement will be greater if the runtime impact of avoiding an extra port is considered. When compared to the original machine without prefetching, our method improves performance by about 35% for our benchmarks.
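A behavioral C sketch of the correlation mechanism (table sizes and names are assumptions): the PC fetched `latency` cycles before an I-cache miss becomes the trigger, the missing block is recorded against it, and the next time that trigger PC is fetched the prefetch is issued early enough to hide the miss latency:

```c
#include <stdint.h>

#define HISTORY_DEPTH 16       /* >= I-cache miss latency, in fetch cycles */
#define TABLE_SIZE    256

/* Ring buffer of recently fetched PCs, so the instruction fetched `latency`
 * cycles before a miss can be named as the trigger. */
static uint64_t fetch_history[HISTORY_DEPTH];
static int      head;

/* Correlation table: trigger PC -> block that missed shortly after it. */
static struct { uint64_t trigger_pc; uint64_t miss_block; } table[TABLE_SIZE];

void prefetch_block(uint64_t block);   /* stand-in for the I-cache fill port */

void on_fetch(uint64_t pc) {
    fetch_history[head] = pc;
    head = (head + 1) % HISTORY_DEPTH;
    /* If this PC is a known trigger, start the prefetch now, roughly one
     * miss latency before the miss would otherwise occur. */
    unsigned idx = (unsigned)(pc >> 2) % TABLE_SIZE;
    if (table[idx].trigger_pc == pc)
        prefetch_block(table[idx].miss_block);
}

void on_icache_miss(uint64_t miss_block, int latency) {
    /* The trigger is the instruction fetched `latency` cycles earlier. */
    int slot = (head - (latency % HISTORY_DEPTH) + HISTORY_DEPTH) % HISTORY_DEPTH;
    uint64_t trigger = fetch_history[slot];
    unsigned idx = (unsigned)(trigger >> 2) % TABLE_SIZE;
    table[idx].trigger_pc = trigger;
    table[idx].miss_block = miss_block;
}
```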

18.
穆雅莉, 杨兵, 喻明艳. 《计算机工程》 (Computer Engineering), 2012, 38(7): 273-275, 278
The average miss penalty of the L1 instruction cache is usually quantified as the access time of the next level of the memory hierarchy, but such a simple quantification introduces large errors in processor performance bottleneck analysis. To address this, an interval model is applied to analyze the front-end factors that influence the average L1 I-cache miss penalty, supported by simulation experiments. The results show that besides the access time of the next memory level, fetch bandwidth, fetch-queue size, the L1 I-cache miss rate, and program characteristics all affect the average L1 I-cache miss penalty.
