Similar Articles
19 similar articles found
1.
A Low-Power-Oriented Shared Cache Partitioning Scheme for Multicore Processors   (Cited: 1; self-citations: 0; by others: 1)
As multicore processors evolve, on-chip cache capacity keeps growing, and the cache accounts for an ever larger share of total chip power. Reducing cache power consumption has therefore become a hot topic in cache design. This paper studies a low-power-oriented shared cache partitioning technique (LP-CP) for multicore processors. It proposes a cache partitioning framework that adds miss-rate monitors to the processor to collect program miss rates dynamically, then applies a low-power-oriented shared cache partitioning algorithm to compute a partitioning policy that stays within a performance-loss threshold. We evaluated the scheme on a dual-core system with a shared L2 cache using multiprogrammed workloads: with performance-loss thresholds of 1% and 3%, the fraction of the cache turned off reached 20.8% and 36.9%, respectively.
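A minimal sketch of the idea in this abstract, assuming per-core miss-rate curves supplied by the miss-rate monitors and way-granularity shutdown; the curves, threshold handling, and greedy search below are illustrative guesses, not the paper's LP-CP algorithm:

```python
# Hypothetical sketch of a low-power shared-cache partitioning pass.
# miss_curves[core][w] is the miss rate a miss-rate monitor observed
# for that core when it owns w ways.

def lp_cp_partition(miss_curves, total_ways, loss_threshold):
    """Return (ways_per_core, ways_turned_off)."""
    n = len(miss_curves)
    # Baseline: split all ways evenly (assumes total_ways divisible by n).
    alloc = [total_ways // n] * n
    base_miss = sum(c[w] for c, w in zip(miss_curves, alloc))

    # Greedily take one way from the core that suffers least, turning the
    # freed way off, while the aggregate miss-rate increase (our stand-in
    # for performance loss) stays within the threshold.
    while True:
        best = None
        for i in range(n):
            if alloc[i] <= 1:
                continue
            trial = alloc[:]
            trial[i] -= 1
            miss = sum(c[w] for c, w in zip(miss_curves, trial))
            if best is None or miss < best[0]:
                best = (miss, trial)
        if best is None or (best[0] - base_miss) / base_miss > loss_threshold:
            break
        alloc = best[1]
    return alloc, total_ways - sum(alloc)

# Toy miss-rate monitor output for two cores sharing a 16-way L2:
# curve[w] = miss rate when a core owns w ways (flat beyond ~5 ways).
curve = [1.0, .50, .30, .20, .18, .17, .168, .167, .166]
print(lp_cp_partition([curve, curve], 16, 0.03))   # -> ([5, 5], 6)
```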

2.
方娟  郭媚  杜文娟  雷鼎 《计算机应用》2013,33(9):2404-2409
A low-power cache design scheme (LPD) is proposed for the shared L2 cache of multicore processors. LPD lowers cache power through three algorithms, a low-power hybrid shared-cache partitioning algorithm (LPHP), a reconfigurable cache algorithm (CRA), and a partitioning-based way-prediction algorithm for the L2 cache (WPP-L2), while keeping system performance good. In LPHP and CRA, idle cache ways are turned off dynamically at run time, saving the power spent accessing them. In WPP-L2, way prediction supplies the predicted way before a cache access: on a correct prediction the access completes with the shortest latency and lowest energy; on a misprediction, the cache partitioning policy limits the extra power overhead the misprediction causes. Experiments with SPEC2000 show that, compared with a conventional shared L2 cache using least-recently-used (LRU) replacement, the three algorithms save 20.5%, 17%, and 64.6% of average L2 access power, respectively, at a slight cost in execution time, and even improve system throughput. The results show the proposed methods can significantly reduce multicore power while maintaining system performance.
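The way-prediction part (WPP-L2) can be illustrated with a toy model; the MRU-style per-set predictor and the partition-limited fallback search below are assumptions in the spirit of the abstract, not the paper's implementation:

```python
# Illustrative way-prediction sketch: each set remembers the last way
# that hit; a predicted hit probes one way, a mispredict probes only
# the ways in the requesting core's partition.

class WayPredictedCache:
    def __init__(self, num_sets, num_ways, partition):
        self.tags = [[None] * num_ways for _ in range(num_sets)]
        self.predicted_way = [0] * num_sets
        self.partition = partition          # core -> list of way indices
        self.ways_probed = 0                # stand-in for access energy

    def access(self, core, set_idx, tag):
        ways = self.tags[set_idx]
        pred = self.predicted_way[set_idx]
        self.ways_probed += 1               # probe the predicted way first
        if ways[pred] == tag:
            return "predicted-hit"
        for w in self.partition[core]:      # slow path: partition ways only
            if w == pred:
                continue
            self.ways_probed += 1
            if ways[w] == tag:
                self.predicted_way[set_idx] = w
                return "hit-after-mispredict"
        victim = self.partition[core][0]    # naive fill policy (no LRU here)
        ways[victim] = tag
        self.predicted_way[set_idx] = victim
        return "miss"

c = WayPredictedCache(num_sets=4, num_ways=8,
                      partition={0: [0, 1, 2, 3], 1: [4, 5, 6, 7]})
print(c.access(0, 0, 0xA), c.access(0, 0, 0xA), c.ways_probed)
# -> miss predicted-hit 5
```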

3.
A Reconfiguration Algorithm for a Low-Power Reconfigurable Cache   (Cited: 4; self-citations: 0; by others: 4)
As semiconductor technology advances, on-chip power density keeps rising, making power an increasingly important concern in chip design. The on-chip cache is one of the main power consumers in a processor, so a low-power cache can effectively reduce overall processor power. This paper studies low-power cache design: it surveys the main current techniques, describes the architecture of a low-power reconfigurable data cache and its reconfiguration algorithm, and presents a new reconfiguration algorithm, the Low-High Boundary (LHB) algorithm. Experiments show that LHB outperforms the original algorithm in both performance and power.

4.
A Low-Power Data Cache Scheme Based on Very Narrow Values   (Cited: 2; self-citations: 0; by others: 2)
Reducing power consumption has become one of today's most important design problems. Modern microprocessors rely on on-chip caches to bridge the large speed gap between main memory and the CPU, but the cache has also become a major source of processor power, making low-power cache data arrays increasingly important. Very narrow values (VNV), which need only a few bits to store, account for a large share of cache storage and accesses. Based on this observation, a low-power cache structure built on very narrow values (VNVC) is proposed. In VNVC, the data array is split into a low-order and a high-order subarray; under the control of a flag bit, the high-order cells that would otherwise hold a very narrow value are shut off, saving both their dynamic and static power. VNVC obtains its savings purely by modifying the data array: it needs no extra auxiliary hardware and does not affect cache performance, so it suits any cache organization. Simulations of 12 SPEC2000 benchmarks show that 4-bit very narrow values give the largest savings, reducing dynamic power by 29.85% and static power by 29.94% on average.
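The core test, whether a word is just a sign-extension of its low k bits, is concrete enough to sketch. The 4-bit width follows the abstract, while the energy split between low and high subarrays is an illustrative assumption:

```python
# Sketch of the very-narrow-value (VNV) idea: if a word sign-extends
# from its low k bits, only the low subarray needs to be accessed and
# the high one can be gated off. Energy numbers are invented.

K = 4                      # VNV width in bits (the paper's best width)
WORD = 32

def is_vnv(value, k=K, word=WORD):
    """True if `value` (two's complement, `word` bits) sign-extends from k bits."""
    value &= (1 << word) - 1
    low = value & ((1 << k) - 1)
    sign = -(low >> (k - 1))                  # 0 or -1
    extended = (sign << k | low) & ((1 << word) - 1)
    return value == extended

LOW_FRACTION = K / WORD    # assumed share of array energy in the low part

def access_energy(values, e_full=1.0):
    """Relative data-array energy with and without VNV gating."""
    gated = sum(e_full * LOW_FRACTION if is_vnv(v) else e_full for v in values)
    return gated, e_full * len(values)

vals = [0, 1, -3, 7, -8, 123456, -70000, 5]
gated, base = access_energy(vals)
print(f"VNV words: {sum(map(is_vnv, vals))}/{len(vals)}, "
      f"energy: {gated:.2f} vs {base:.2f}")    # 6/8 words are VNV
```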

5.
A Low-Power Instruction Cache Scheme Based on a Record Buffer   (Cited: 1; self-citations: 1; by others: 1)
Most modern microprocessors use on-chip caches to ease the large speed gap between main memory and the CPU, but the cache has also become a major source of processor power, much of it in the instruction cache. A small buffer can filter out most instruction-cache accesses and thereby save power, yet a considerable number of unnecessary accesses to the storage array remain. To address this, a record-buffer-based low-power instruction cache structure, RBC, is proposed. With a record buffer and a modified storage array, RBC filters out most unnecessary array accesses and effectively cuts cache power. Simulations of 10 SPEC2000 benchmarks show that, compared with a conventional buffer-based cache, the scheme saves 24.33% of instruction-cache power at a cost of only 6.01% in processor performance and 3.75% in area.
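A toy model of the buffer-filtering idea this entry builds on (the paper's record buffer also modifies the storage array to skip probes on buffer misses, which this sketch omits; the FIFO buffer and its size are assumptions):

```python
# Filter/record-buffer sketch: fetches that hit a tiny buffer of recent
# lines never touch the instruction-cache bank, saving access energy.

class RecordBuffer:
    def __init__(self, entries=4):
        self.lines = [None] * entries
        self.ptr = 0
        self.bank_accesses = 0

    def fetch(self, line_addr):
        if line_addr in self.lines:          # buffer hit: I-cache stays idle
            return "buffer"
        self.bank_accesses += 1              # fall back to the cache bank
        self.lines[self.ptr] = line_addr     # FIFO refill of the buffer
        self.ptr = (self.ptr + 1) % len(self.lines)
        return "bank"

rb = RecordBuffer()
seq = [0, 0, 0, 1, 1, 0, 2, 2]               # a tight loop's fetch stream
print([rb.fetch(a) for a in seq], "bank accesses:", rb.bank_accesses)
# -> only 3 of 8 fetches reach the bank
```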

6.
In embedded processors, the cache accounts for a growing share of total power. This paper proposes a reconfigurable low-power data cache that exploits the spatial and temporal locality of running programs, as well as frequent-value locality, to save power. Simulations with MiBench and MediaBench show that for multimedia-dominated workloads, the frequent-value-based reconfigurable data cache reduces average energy by 34.45% and average energy-delay product by 27.50% compared with a conventional cache.

7.
In embedded systems, the on-chip cache is the dominant component of microprocessor power. This paper proposes a new low-power technique that combines the Filter Cache approach with a Loop Table: it requires no new instructions or complex hardware, and the processor architecture can be customized for a specific application.

8.
Dynamically reconfigurable caches for multicore processors are an important approach to the cache power problem, but existing cache power simulators do not support them well. By studying a power model for multicore dynamically reconfigurable caches, this work derives a way to compute reconfigurable cache power, using CACTI to build power models for each component structure so that the power of a reconfigurable cache can be estimated fairly accurately. A dynamically reconfigurable cache built in the Simics simulator and run on benchmarks reduces cache power by 10.4% compared with a conventional architecture. The experiments also show that the reduction does not come from the reconfigurable cache alone but from the system as a whole; low-power design must therefore weigh whole-system power and performance rather than focus narrowly on the cache structure, which can raise total power.

9.
As feature sizes shrink, leakage power is becoming one of the main constraints on microprocessor design. Sleep Cache and Drowsy Cache are two important techniques for reducing cache leakage. The statistics-based cache leakage power estimation method (SB_CLPE) estimates the leakage of a Sleep Cache or Drowsy Cache, and a cache architecture built on it can estimate cache leakage in real time while a program runs. By gathering statistics on the access intervals of all cache blocks, SB_CLPE can estimate cache leakage under different decay intervals and thus find the optimal decay interval that minimizes leakage. Experiments show that for Sleep Cache, SB_CLPE's leakage estimates deviate from those obtained with the HotLeakage leakage simulator by only 3.16% on average, and the optimal decay intervals it finds also agree well. A cache architecture using SB_CLPE can estimate the optimal decay interval in real time during execution and adjust the decay interval dynamically to achieve the best power reduction.
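The statistics-to-estimate step can be sketched directly: given the inter-access gaps of cache blocks, estimate leakage for each candidate decay interval and keep the best. The power and wake-up numbers below are invented for illustration; the paper's exact formulas are not reproduced here:

```python
# Spirit of SB_CLPE as an offline model: a block leaks at full power
# until the decay interval expires, then at reduced power, and pays a
# wake-up cost when re-accessed after decaying.

def leakage_for_decay(gaps, decay, p_on=1.0, p_sleep=0.1, wake_cost=50.0):
    """Estimated leakage energy (arbitrary units) for one decay interval.

    gaps   : inter-access gaps in cycles, one per block re-access
    decay  : cycles a block stays fully powered after its last access
    """
    energy = 0.0
    for g in gaps:
        if g <= decay:
            energy += g * p_on                       # never decayed
        else:
            energy += decay * p_on + (g - decay) * p_sleep + wake_cost
    return energy

gaps = [10] * 800 + [5000] * 200                     # toy bimodal workload
candidates = [64, 256, 1024, 4096, 16384]
for d in candidates:
    print(f"decay={d:6d}  leakage={leakage_for_decay(gaps, d):12.1f}")
print("best decay interval:",
      min(candidates, key=lambda d: leakage_for_decay(gaps, d)))
```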

10.
方娟  王帅  于璐 《计算机科学》2014,41(7):36-39,73
Improving multicore performance while lowering cache power has become a research focus for next-generation multicore processors. To reduce the power of a chip multiprocessor, a new way-adaptive dynamic partitioning mechanism can be used, consisting of a way-allocation module and a dynamic power-control module. The way-allocation module adjusts the cache ways assigned to each core at run time according to the working-set size of the threads it runs. The power-control module exploits program locality to confine each core's working space to a few cache ways and shuts off the remaining ways, thereby reducing cache power. The mechanism was evaluated on the Simics full-system simulation platform with the SpecOMP benchmarks: compared with a conventional L2 cache (C-L2), IPC improves by 9.27% and power drops by 10.95%.

11.
Cache partitioning is an important way to resolve conflicting accesses to a shared cache, but existing partitioning techniques suffer from high overhead and difficulty in deciding when to repartition. This paper proposes an application-oriented cache partitioning framework (ACP). ACP uses programmer-supplied boundary information about an application's outermost loops to obtain better miss-rate information, so the partitioning algorithm is more accurate, repartitioning is less frequent, and system performance improves. Experimental results show that ACP outperforms traditional fixed-period cache partitioning.

12.
Weighted Shared Cache Partitioning for Multithreaded Multiprogrammed Workloads   (Cited: 5; self-citations: 1; by others: 4)
When parallel applications run on a multicore processor with a shared cache, conflicting accesses to that cache cause performance loss and unpredictable execution times. Shared cache partitioning, which assigns disjoint portions of the shared cache to individual processes, is an effective remedy. Because threads share data, applications with different thread counts use the shared cache with different efficiency, yet traditional partitioning algorithms that minimize overall miss rate (e.g., UCP) do not distinguish applications by thread count. This paper designs a weighted shared cache partitioning framework (Weighted Cache Partitioning, WCP) for multithreaded multiprogrammed workloads, comprising application-oriented miss-rate monitors and a weighted partitioning algorithm. The miss-rate monitor dynamically tracks, per process, the application's miss rate at different cache capacities; the weighted partitioning algorithm extends the traditional miss-rate-optimal algorithm by assigning applications different weights according to their thread counts, so that applications with more threads receive more of the shared cache, improving overall system performance. Experimental results show that although the weighted algorithm raises the miss rate somewhat, it improves IPC throughput, weighted speedup, and fairness. On multiprogrammed workloads composed of scientific and engineering applications, the IPC throughput of WCP-1 is up to 10.8% higher, and on average 5.5% higher, than that of a shared cache partitioning algorithm whose objective is minimum miss rate.
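A sketch of the weighted variant of UCP-style greedy way allocation described above; the miss curves, the weights-equal-thread-counts rule, and way granularity are illustrative assumptions:

```python
# UCP-style greedy allocation with per-application weights (the gist of
# WCP). misses[a][w] = misses of application a when given w ways.

def weighted_partition(misses, weights, total_ways):
    alloc = [1] * len(misses)                 # every app starts with one way
    for _ in range(total_ways - sum(alloc)):
        # Give the next way to the app with the best weighted miss
        # reduction for one extra way (weighted marginal utility).
        def gain(a):
            w = alloc[a]
            return weights[a] * (misses[a][w] - misses[a][w + 1])
        best = max(range(len(misses)), key=gain)
        alloc[best] += 1
    return alloc

# App 0 runs 4 threads, app 1 is single-threaded (weights 4 and 1).
m0 = [100, 80, 64, 52, 44, 39, 36, 34, 33]
m1 = [100, 70, 55, 46, 40, 36, 33, 31, 30]
print(weighted_partition([m0, m1], weights=[4, 1], total_ways=8))
# -> [6, 2]: the multithreaded app gets most of the shared cache
```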

13.
To provide robustness and a high quality of service, modern computing architectures running real-time applications should provide high system performance and high timing predictability. Cache memory is used to improve performance by bridging the speed gap between the main memory and the CPU. However, the cache introduces timing unpredictability, creating serious challenges for real-time applications. Herein, we introduce a miss table (MT) based cache locking scheme at the level-2 (L2) cache to improve both timing predictability and the system performance/power ratio. The MT holds the addresses of the blocks of the application being processed that would cause the most cache misses if not locked. Information in the MT is used for efficient selection of the blocks to be locked and of victim blocks to be replaced. This MT-based approach improves timing predictability by keeping the blocks with the highest miss counts locked in the cache for the entire execution time. In addition, the technique decreases the average delay per task and total power consumption by reducing cache misses and avoiding unnecessary data transfers. The MT-based solution is effective for both uniprocessors and multicores. We evaluate the proposed MT-based cache locking scheme by simulating an 8-core processor with two levels of caches using MPEG4 decoding, H.264/AVC decoding, FFT, and MI workloads. Experimental results show that, in addition to improving predictability, a 21% reduction in mean delay per task and an 18% reduction in total power consumption are achieved for MPEG4 (and H.264/AVC) by using the MT and locking 25% of the L2. The MT yields about 5% delay and power reductions on these video applications, possibly more on applications with worse cache behavior. For the FFT and MI (and other) applications whose code fits inside the level-1 instruction (I1) cache, the mean delay per task increases by only 3% and total power consumption by 2% due to the addition of the MT.
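A sketch of how a miss table could drive lock selection and victim choice, under an assumed profiling input and lock budget (the paper's MT hardware and replacement details are not reproduced here):

```python
# Miss-table-driven locking: rank blocks by the misses they caused in a
# profiling pass, lock the top ones, and never evict a locked block.
from collections import Counter

def build_miss_table(miss_trace, table_size=64):
    """Rank block addresses by how many misses they caused when unlocked."""
    return [addr for addr, _ in Counter(miss_trace).most_common(table_size)]

def pick_locked_set(miss_table, cache_blocks, lock_fraction=0.25):
    """Lock the top miss-causing blocks, up to a fraction of the cache."""
    budget = int(cache_blocks * lock_fraction)
    return set(miss_table[:budget])

def choose_victim(set_blocks, locked, lru_order):
    """Replacement: evict the least-recently-used block that is not locked."""
    for addr in lru_order:                 # oldest first
        if addr in set_blocks and addr not in locked:
            return addr
    return None                            # all candidates locked: bypass

trace = [0x40, 0x80, 0x40, 0xC0, 0x40, 0x80, 0x100, 0x40]
mt = build_miss_table(trace)
locked = pick_locked_set(mt, cache_blocks=8, lock_fraction=0.25)
print("miss table:", [hex(a) for a in mt], "locked:", [hex(a) for a in locked])
```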

14.
Advancement in semiconductor technology increases power density in recent Chip Multi-Processors (CMPs), which significantly increases the leakage energy consumption of on-chip Last Level Caches (LLCs). Performance-linked dynamic tuning of LLC size is a promising option for reducing cache leakage. This paper reduces static power consumption by dynamically shutting down or turning on cache banks based on system performance and cache bank usage statistics. Shutting down a cache bank remaps its future requests to another active bank, called the target bank. The proposed method is evaluated under three implementation policies: (1) the system may shut down or turn on cache banks periodically throughout process execution; (2) the system may shut down banks initially, but once bank restarting begins, no further shutdowns are permitted; (3) the cache is resized as in the first policy, but with predefined time slices during which it cannot be resized. For a 4MB 4-way set-associative L2 cache, experimental analysis shows a 66% reduction in static energy with a 29% gain in Energy Delay Product (EDP) for the first strategy; for the second policy, static power is reduced by 59% with 27% savings in EDP; finally, the last policy saves 65% in static power and 30% in EDP with minimal performance penalty.
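The first policy might look roughly like the sketch below; the epoch boundaries, usage thresholds, and remapping rule are guesses, not the paper's parameters:

```python
# Periodic bank on/off tuning in the spirit of policy (1): at each epoch
# end, shut a cold bank when there is performance headroom, restart one
# when performance drops; requests to dark banks go to a target bank.

class BankManager:
    def __init__(self, num_banks, min_active=1):
        self.active = list(range(num_banks))
        self.off = []
        self.min_active = min_active

    def route(self, bank):
        """Requests to a dark bank are remapped to an active target bank."""
        if bank in self.active:
            return bank
        return self.active[bank % len(self.active)]

    def end_of_epoch(self, usage, ipc, ipc_target, low=0.05, high=0.60):
        """usage[b]: fraction of this epoch's accesses that hit bank b."""
        if ipc >= ipc_target:              # headroom: try to save power
            cold = min(self.active, key=lambda b: usage.get(b, 0.0))
            if usage.get(cold, 0.0) < low and len(self.active) > self.min_active:
                self.active.remove(cold)
                self.off.append(cold)
        elif self.off and max(usage.values(), default=0.0) > high:
            self.active.append(self.off.pop())   # restart a bank

mgr = BankManager(num_banks=4)
mgr.end_of_epoch({0: .50, 1: .30, 2: .17, 3: .03}, ipc=1.2, ipc_target=1.0)
print("active:", mgr.active, "| request to bank 3 goes to:", mgr.route(3))
```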

15.
The power consumed by memory systems accounts for 45% of the total power consumed by an embedded system, and the power consumed during a memory access is 10 times higher than during a cache access. Thus, increasing the cache hit rate can effectively reduce the power consumption of the memory system and improve system performance. In this study, we increased the cache hit rate and reduced cache-access power consumption by developing a new cache architecture known as a single linked cache (SLC) that stores frequently executed instructions. SLC has the low power consumption and low access delay of a direct-mapped cache and, by adding a new link field, a cache hit rate close to that of a two-way set-associative cache. In addition, we developed another design known as multiple linked caches (MLC) to further reduce the power consumption of each cache access and to avoid unnecessary cache accesses when the requested data is absent from the cache. In MLC, the linked cache is split into several small linked caches that store frequently executed instructions, reducing the power consumed by each access. To avoid unnecessary cache accesses when a requested instruction is not in the linked caches, the addresses of frequently executed blocks are recorded in the branch target buffer (BTB). By consulting the BTB, the processor can access memory directly to obtain the requested instruction if it is not in the cache. In our simulations, this method outperformed selective compression, a traditional cache, and a filter cache in terms of cache hit rate, power consumption, and execution time.

16.
Multicomputer cache simulation results derived from address traces collected from an Intel iPSC/2 hypercube multicomputer are presented. The primary emphasis is on examining how increasing the number of processor nodes executing a parallel application affects overall multicomputer cache performance. The effects on multicomputer direct-mapped cache performance of application-specific data partitioning, data access patterns, communication distribution, and communication frequency are illustrated. The effects of system accesses on total cache performance are explored, as well as the reasons for application-specific differences in cache behavior between system and user accesses. Comparing user-code results with combined user and system code analysis reveals the significant effect of system accesses, an effect that increases with multicomputer size. The time distribution of an application's message-passing operations is found to affect cache performance more strongly than the total amount of time spent in message-passing code.

17.
With the rapid development of multi/many-core processors, contention in the shared cache has become more and more serious, restricting the performance improvement of parallel programs. Recent research has employed the page coloring mechanism to realize cache partitioning on real systems and to reduce contention in the shared cache. However, page coloring-based cache partitioning has side effects: page coloring restricts the memory space an application can allocate, which may lead to memory pressure, and changing the cache partition dynamically requires massive page copying, which incurs large overhead. To make page coloring-based cache partitioning more practical, this paper proposes a malloc allocator-based dynamic cache partitioning mechanism with page coloring. Memory allocated by our malloc allocator can be dynamically partitioned among different applications according to the partitioning policy. Coloring only the dynamically allocated pages relieves memory pressure and reduces the page-copying overhead of re-coloring compared with all-page coloring. To further reduce the overhead, we introduce a minimum-distance page-copying strategy and a lazy flush strategy. We conduct experiments on a real system to evaluate these strategies, and the results show that they work well for reducing cache misses and re-coloring overhead.
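The bookkeeping behind page coloring is easy to sketch: a page's color is a slice of its physical page number, and a partition is the set of colors an application may draw pages from. The allocator below is a toy model with assumed parameters; real implementations need OS support to control physical page placement:

```python
# Page-coloring bookkeeping: pages of the same color map to the same
# slice of cache sets, so giving apps disjoint color sets partitions
# the shared cache.

PAGE_SHIFT = 12            # 4 KiB pages
NUM_COLORS = 16            # ~ cache_size / (associativity * page_size)

def page_color(phys_addr):
    return (phys_addr >> PAGE_SHIFT) % NUM_COLORS

class ColoredAllocator:
    def __init__(self, free_pages, colors_of):
        # colors_of: app -> set of colors it may use (its partition)
        self.pool = {c: [] for c in range(NUM_COLORS)}
        for p in free_pages:
            self.pool[page_color(p)].append(p)
        self.colors_of = colors_of

    def alloc_page(self, app):
        for c in self.colors_of[app]:
            if self.pool[c]:
                return self.pool[c].pop()
        # This is exactly the 'memory pressure' side effect noted above.
        raise MemoryError("partition exhausted")

pages = [n << PAGE_SHIFT for n in range(64)]
alloc = ColoredAllocator(pages, {"A": {0, 1, 2, 3}, "B": {4, 5, 6, 7}})
p = alloc.alloc_page("A")
print(hex(p), "color", page_color(p))
```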

18.
A new cache architecture based on temporal and spatial locality   (Cited: 5; self-citations: 0; by others: 0)
A data cache system is designed as a low-power/high-performance cache structure for embedded processors. A direct-mapped cache is a favorite choice for short cycle times but suffers from a high miss rate. The proposed dual data cache is an approach to improving the miss ratio of a direct-mapped cache without affecting its access time. The proposed cache system can exploit temporal and spatial locality effectively by maximizing the effective cache memory space for any given cache size. It consists of two caches: a direct-mapped cache with a small block size and a fully associative spatial buffer with a large block size. Temporal locality is exploited by selectively caching candidate small blocks in the direct-mapped cache, while spatial locality is exploited aggressively by fetching multiple neighboring small blocks whenever a cache miss occurs. According to the results of our comparison and analysis, similar performance can be achieved with a cache four times smaller than a conventional direct-mapped cache, and the power consumption of the proposed cache is reduced by around 4% compared with a victim cache configuration.
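A toy model of the dual lookup path: probe the direct-mapped cache, then the spatial buffer; a small block re-referenced in the buffer is promoted to the direct-mapped side. Sizes and the promotion rule are simplified assumptions, not the paper's exact policy:

```python
# Dual data cache sketch: direct-mapped cache of small blocks plus a
# fully associative spatial buffer of large blocks.
from collections import OrderedDict

SMALL, LARGE = 8, 32        # small-block and spatial-buffer block sizes

class DualDataCache:
    def __init__(self, dm_sets, buf_entries):
        self.dm = [None] * dm_sets               # direct-mapped tags
        self.dm_sets = dm_sets
        self.buf = OrderedDict()                 # fully associative, FIFO
        self.buf_entries = buf_entries

    def access(self, addr):
        small = addr // SMALL
        idx, tag = small % self.dm_sets, small // self.dm_sets
        if self.dm[idx] == tag:
            return "dm-hit"
        big = addr // LARGE
        if big in self.buf:
            # Re-reference shows temporal locality: promote the small
            # block into the direct-mapped cache.
            self.dm[idx] = tag
            return "buffer-hit"
        if len(self.buf) >= self.buf_entries:
            self.buf.popitem(last=False)
        self.buf[big] = True                     # fetch LARGE bytes: spatial
        return "miss"

c = DualDataCache(dm_sets=16, buf_entries=4)
print([c.access(a) for a in (0, 8, 8, 0)])
# -> ['miss', 'buffer-hit', 'dm-hit', 'buffer-hit']
```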

19.
This paper presents a cache structure that uses a pre-comparison method: tags are pre-compared so that accesses to irrelevant tag and data arrays are avoided, reducing access power. An inverted clock is introduced to optimize the access timing, bringing the average access latency below one cycle. Experiments show that, with the hit rate preserved, the memory-access optimization behaves consistently across benchmarks, and the power advantage grows with associativity. Compared with a prediction-based structure, power is reduced by 28.5% on average at 8-way associativity.
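The power saving of pre-comparison can be illustrated by counting how many ways survive a partial-tag check; the number of pre-compared bits below is an assumption, and the growth of the saving with associativity matches the trend reported above:

```python
# Toy model of tag pre-comparison: a few low tag bits are checked first,
# so only matching ways pay a full tag+data probe.

PRE_BITS = 4

def probes_saved(ways_tags, lookup_tag):
    """Return (full probes with pre-compare, probes without)."""
    mask = (1 << PRE_BITS) - 1
    survivors = [t for t in ways_tags if (t & mask) == (lookup_tag & mask)]
    return len(survivors), len(ways_tags)

ways = [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xF2]   # 8-way set
full, base = probes_saved(ways, 0x12)
print(f"probes with pre-compare: {full}, without: {base}")  # 2 vs 8
```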
