Similar Documents
19 similar documents found.
1.
Caches greatly improve the performance of embedded processors, but the cache, and the instruction cache in particular, accounts for a large share of total processor power; disabling unnecessary tag SRAM and data SRAM accesses can therefore reduce power substantially. This paper proposes a pipelined instruction cache access mechanism that disables unnecessary data SRAM accesses, and builds a sliding window of cache lines, by recording information about the current instruction cache line and predicting the next one, to disable unnecessary tag SRAM accesses. The proposed method incurs no performance loss; power analysis under the SMIC 90 nm process shows a 50% reduction in instruction-access power.
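The following minimal sketch illustrates the sliding-window idea behind this abstract; the window size, line size, and all identifiers are assumptions for illustration, not details from the paper. In a real design the predicted line's tag would be checked off the critical path; here the model only counts how often the tag SRAM must be activated.

```python
# Sketch: gate tag-SRAM lookups when sequential fetch stays inside a
# "sliding window" of cache lines whose hit status is already known.
LINE_BYTES = 32

class SlidingWindowICache:
    def __init__(self):
        self.window = set()      # lines whose way/hit status is known
        self.tag_lookups = 0     # tag-SRAM activations (to be minimized)

    def fetch(self, pc):
        line = pc // LINE_BYTES
        if line not in self.window:
            self.tag_lookups += 1           # full tag compare needed
            # record this line and predict the next sequential line, so
            # fetches that fall inside the window skip the tag SRAM
            self.window = {line, line + 1}

cache = SlidingWindowICache()
for pc in range(0, 4096, 4):                # straight-line 4-byte fetches
    cache.fetch(pc)
print("tag lookups:", cache.tag_lookups)    # 64 lookups for 1024 fetches
```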

2.
A low-power data cache scheme based on very narrow values
Reducing power consumption has become one of the most important design issues today. Modern microprocessors rely on on-chip caches to bridge the huge speed gap between main memory and the CPU, but the cache has also become a major source of processor power, so designing low-power cache arrays matters more and more. Very narrow values (VNV), which need only a few bits to store, account for a large proportion of both cache contents and cache accesses. Based on this observation, a low-power cache architecture built on very narrow values (VNVC) is proposed. In VNVC, the data array is split into a low-order part and a high-order part; under the control of flag bits, the high-order cells that would otherwise hold a very narrow value are shut down, saving both dynamic and static power. VNVC achieves low power purely by modifying the data array, requires no extra auxiliary hardware, and does not affect cache performance, so it suits any cache organization. Simulations with 12 SPEC2000 benchmarks show that a 4-bit narrow-value width yields the largest savings: 29.85% of dynamic power and 29.94% of static power on average.
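A hedged sketch of the narrow-value test this scheme depends on: the 4-bit width comes from the abstract, while the field names and the software model of the split array are invented for illustration.

```python
# Sketch: detect "very narrow" words (a sign-extension of their low 4 bits)
# and store them with the high-order subarray gated off.
VNV_BITS = 4
WORD_BITS = 32
LOW_MASK = (1 << VNV_BITS) - 1
WORD_MASK = (1 << WORD_BITS) - 1

def is_very_narrow(value: int) -> bool:
    """True if the 32-bit word is just a sign-extension of its low bits."""
    low = value & LOW_MASK
    if low & (1 << (VNV_BITS - 1)):      # sign bit of the narrow field
        low -= 1 << VNV_BITS
    return (value & WORD_MASK) == (low & WORD_MASK)

def store_word(value: int) -> dict:
    if is_very_narrow(value):
        # flag set: high-order cells are shut down, saving their power
        return {"vnv_flag": 1, "low": value & LOW_MASK}
    return {"vnv_flag": 0, "low": value & LOW_MASK,
            "high": (value & WORD_MASK) >> VNV_BITS}

print(store_word(3))       # narrow: only the low subarray is written
print(store_word(-2))      # narrow negative value (sign-extended)
print(store_word(1000))    # wide: both subarrays stay active
```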

3.
A low-power design scheme for hybrid caches
In embedded processors the cache accounts for an ever-growing share of power. To reduce the power of the hybrid (unified) cache in embedded systems, a program-phase-based reconfiguration algorithm, PPBRA, is introduced, together with a new reconfigurable hybrid cache architecture with classified access. The scheme dynamically allocates the instruction ways and data ways of the hybrid cache according to each program phase's capacity demand, and classifies accesses so that unneeded ways are filtered out, thereby lowering hybrid-cache power. MiBench simulation results show that the scheme reduces cache power effectively while also improving overall cache performance.

4.
王冶  张盛兵  王党辉 《计算机工程》2012,38(1):268-269,272
To reduce the energy of the on-chip cache in a microprocessor, an instruction cache based on a prebuffering mechanism is designed. Guided by the predictions of a prebuffer control unit, the instructions the processor needs hit in the buffer as often as possible, avoiding the power cost of accessing the instruction cache. Simulation on seven benchmarks shows that the prebuffering mechanism saves 23.23% of processor power while improving program execution performance by 7.53% on average.

5.
High-performance processors commonly integrate large, structurally complex L1 caches on chip to improve performance, but as cache capacity and complexity grow, the latency and power of each cache access rise noticeably. Building on the store queue, this paper proposes a method that lowers power and latency by reducing the number of cache accesses: the store queue buffers the data of load/store instructions, and while the queue is not full its free entries temporarily hold the data of completed memory accesses, raising the reuse rate of consecutively accessed data and cutting cache accesses. Simulation results show that, at the cost of a small amount of extra control logic, the method significantly reduces cache accesses, lowers cache power, shortens memory-access latency, and speeds up execution.
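A minimal sketch of the reuse idea, with all interfaces assumed: free store-queue entries double as a tiny forwarding buffer, so repeated loads to the same address skip the data cache.

```python
# Sketch: a store queue whose free entries also hold completed load data,
# so reuse is served by forwarding instead of another cache access.
from collections import OrderedDict

class StoreQueue:
    def __init__(self, entries=8):
        self.entries = entries
        self.buf = OrderedDict()           # addr -> data
        self.cache_accesses = 0

    def store(self, addr, data):
        self.buf[addr] = data
        self.buf.move_to_end(addr)
        while len(self.buf) > self.entries:
            self.buf.popitem(last=False)   # drain oldest entry to the cache

    def load(self, addr):
        if addr in self.buf:               # forwarded: no cache access
            return self.buf[addr]
        self.cache_accesses += 1           # real data-cache access
        data = 0                           # placeholder for the cache fill
        if len(self.buf) < self.entries:   # free entry? keep data for reuse
            self.buf[addr] = data
        return data

sq = StoreQueue()
for _ in range(4):
    sq.load(0x100)
print("data-cache accesses:", sq.cache_accesses)   # 1 instead of 4
```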

6.
As feature sizes shrink and frequencies rise, leakage energy will become the main source of energy consumption in future microprocessors, and the on-chip cache hierarchy will be an important part of total processor energy. To reduce leakage energy, the set-associative data cache adopts a banked organization and uses bit-line isolation to isolate the bit lines of banks that are not being accessed, putting them into a low-energy state. This paper proposes a new data-cache replacement policy, ELSS. The policy exploits the good spatial locality of data-cache addresses and, in particular, adds recognition of stride access patterns in the data address stream to guide block replacement. By placing blocks that follow sequential or stride patterns in the same bank whenever possible, the number of bank switches is reduced. Experiments show that, relative to LRU, ELSS removes a further 9% of the bank switches of a bit-line-isolated data cache and saves an additional 8% of data-cache energy, with a smaller impact on performance than LRU.
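A sketch of the stride-pattern recognition ELSS adds; the two-confirmation rule and all names are assumptions for illustration.

```python
# Sketch: recognize "addr, addr+s, addr+2s, ..." streams in the data
# address sequence, so their blocks can be steered into one cache bank.
class StrideDetector:
    def __init__(self):
        self.last_addr = None
        self.stride = None
        self.confirmed = False

    def observe(self, addr):
        if self.last_addr is not None:
            s = addr - self.last_addr
            # the same non-zero stride seen twice in a row -> stride stream
            self.confirmed = (s != 0 and s == self.stride)
            self.stride = s
        self.last_addr = addr
        return self.confirmed

d = StrideDetector()
for a in [100, 164, 228, 292]:            # stride-64 stream
    recognized = d.observe(a)
print("stride stream:", recognized, "stride:", d.stride)
# Keeping such a stream inside one bank cuts the bank switches that a
# bit-line-isolated cache pays for on every bank transition.
```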

7.
To address the increasingly pronounced instruction-cache leakage power in embedded processors, a drowsy sub-block wake-up method is proposed, based on branch prediction from the current instruction's status flag bits and on return-target register mapping. Using the instruction status bits during execution, the method determines a branch instruction's target sub-block ahead of time, and a return-address target-register mapping structure likewise determines a function return's target sub-block in advance. Beyond eliminating the performance loss caused by wake-up latency, the method improves processor performance: in experimental comparisons it reduces instruction-cache static power by 36% while raising processor performance by 13% on average.

8.
方娟  郭媚  杜文娟  雷鼎 《计算机应用》2013,33(9):2404-2409
For the shared L2 cache of multi-core processors, a low-power-oriented cache design scheme (LPD) is proposed. LPD lowers cache power through three algorithms, a low-power hybrid shared-cache partitioning algorithm (LPHP), a cache reconfiguration algorithm (CRA), and a partition-based way-prediction algorithm (WPP-L2), while keeping system performance good. In LPHP and CRA, idle cache columns are switched off dynamically as the program runs, saving the power of accessing them. In WPP-L2, way prediction supplies the predicted way before the cache access: a correct prediction completes the access with the shortest latency and the least power, while on a misprediction the cache-partitioning policy bounds the extra power overhead. Evaluated with SPEC2000 benchmarks against a traditional shared L2 cache using least-recently-used (LRU) replacement, the three algorithms save 20.5%, 17%, and 64.6% of average L2 cache access power respectively, with only a slight effect on execution time and even an improvement in system throughput. The experiments show that the approach significantly reduces multi-core processor power while preserving system performance.
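A hedged sketch of way prediction as used in WPP-L2; the MRU ("last hit way") predictor and all sizes are assumptions, since the abstract only states that a predicted way is supplied before the access.

```python
# Sketch: probe the predicted way first; only a misprediction pays for
# activating the full set, so correct predictions save access power.
import random

WAYS = 8

class WayPredictedCache:
    def __init__(self, sets=256):
        self.mru = [0] * sets              # predicted (last-hit) way per set
        self.ways_probed = 0               # proxy for dynamic access energy

    def access(self, set_idx, actual_way):
        if self.mru[set_idx] == actual_way:
            self.ways_probed += 1          # first hit: one way activated
        else:
            self.ways_probed += WAYS       # retry activates the other ways
        self.mru[set_idx] = actual_way

cache = WayPredictedCache()
random.seed(0)
for _ in range(10000):                     # reuse skewed toward the MRU way
    s = random.randrange(256)
    w = 0 if random.random() < 0.8 else random.randrange(WAYS)
    cache.access(s, w)
print("ways probed:", cache.ways_probed, "vs all-way:", 10000 * WAYS)
```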

9.
Research on low-power design of the memory-access unit in embedded processors
Taking the Loongson-1 (龙芯1号) processor as the research vehicle, this paper explores low-power design methods for the memory-access unit of embedded processors. After analyzing the unit's structure, power, and critical path, and exploiting the principle of locality, a method is proposed that makes its decisions from a history of virtual addresses, markedly reducing the number of RAM-block accesses issued by the TLB and the cache. TLB power drops by 28.1% on average, cache power by 54.3%, and total processor power by 23.2%, while the critical-path delay actually shrinks and processor performance improves slightly.
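A minimal sketch of the virtual-address-history idea (the single-entry "last page" filter below is an assumed simplification): if the new virtual address falls in the same page as the previous one, the previous translation is reused and the TLB RAM is not activated.

```python
# Sketch: reuse the last translation when the virtual page number repeats,
# so sequential accesses activate the TLB RAM only once per page.
PAGE_SHIFT = 12
PAGE_MASK = (1 << PAGE_SHIFT) - 1

class LastPageFilter:
    def __init__(self):
        self.last_vpn = None
        self.last_pfn = None
        self.tlb_reads = 0

    def translate(self, vaddr):
        vpn = vaddr >> PAGE_SHIFT
        if vpn != self.last_vpn:
            self.tlb_reads += 1                             # real TLB RAM access
            self.last_vpn, self.last_pfn = vpn, vpn + 0x100 # dummy mapping
        return (self.last_pfn << PAGE_SHIFT) | (vaddr & PAGE_MASK)

mmu = LastPageFilter()
for va in range(0x4000, 0x5000, 4):                 # 1024 accesses, one page
    mmu.translate(va)
print("TLB RAM reads:", mmu.tlb_reads)              # 1 instead of 1024
```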

10.
李勇  胡慧俐  杨焕荣 《计算机应用》2014,34(4):1005-1009
Loop code accounts for a large fraction of execution time in digital-signal-processing software, and holding loop code in an instruction buffer reduces program-memory accesses and improves processor performance. A buffer that supports loop instructions is added to the instruction pipeline of a VLIW processor: it caches the loop body and dispatches its instructions to the function units in software-pipelined fashion, so loop code is fetched from memory once but executed many times, greatly reducing memory accesses. While a loop is running, the buffer also signals the program memory to enter a sleep state, lowering processor power. Tests with typical application programs show that, with the loop buffer, the fetch pipeline is idle more than 90% of the time, overall processor performance rises by about 10%, and the loop buffer's hardware area is roughly 9% of the fetch pipeline.
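A minimal sketch of the loop-buffer idea with assumed sizes and interfaces; dispatch and software pipelining are abstracted away to keep the fetch-count arithmetic visible.

```python
# Sketch: fetch a loop body from program memory once, then replay it from
# the buffer while the program memory sleeps.
class LoopBuffer:
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.mem_fetches = 0

    def run_loop(self, body, trip_count):
        if len(body) <= self.capacity:
            self.mem_fetches += len(body)        # fill once from memory
            for _ in range(trip_count):          # replay: memory sleeps
                for inst in body:
                    pass                         # dispatch to function units
        else:                                    # loop too big: fetch always
            self.mem_fetches += len(body) * trip_count

lb = LoopBuffer()
lb.run_loop(["mac r0,r1,r2", "add r3,r3,#4", "bne loop"], trip_count=1000)
print("program-memory fetches:", lb.mem_fetches)  # 3 instead of 3000
```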

11.
The power consumed by memory systems accounts for 45% of the total power consumed by an embedded system, and the power consumed during a memory access is 10 times higher than during a cache access. Thus, increasing the cache hit rate can effectively reduce the power consumption of the memory system and improve system performance. In this study, we increased the cache hit rate and reduced the cache-access power consumption by developing a new cache architecture known as a single linked cache (SLC) that stores frequently executed instructions. SLC has the features of low power consumption and low access delay, similar to a direct-mapped cache, and a high cache hit rate similar to a two-way set-associative cache, by adding a new link field. In addition, we developed another design known as multiple linked caches (MLC) to further reduce the power consumption during each cache access and avoid unnecessary cache accesses when the requested data is absent from the cache. In MLC, the linked cache is split into several small linked caches that store frequently executed instructions to reduce the power consumption during each access. To avoid unnecessary cache accesses when a requested instruction is not in the linked caches, the addresses of the frequently executed blocks are recorded in the branch target buffer (BTB). By consulting the BTB, a processor can access the memory to obtain the requested instruction directly if the instruction is not in the cache. In the simulation results, our method performed better than selective compression, traditional cache, and filter cache in terms of the cache hit rate, power consumption, and execution time.
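A speculative sketch of how a link field can give a direct-mapped array a second chance at hits; the relocation policy below is invented for illustration and is certainly simpler than the paper's.

```python
# Sketch: a direct-mapped cache whose per-line "link" field points at one
# alternate line, approximating two-way hit rates at direct-mapped cost.
class SingleLinkedCache:
    def __init__(self, lines=16):
        self.lines = lines
        self.tags = [None] * lines
        self.link = [None] * lines        # extra field: alternate line index

    def _split(self, addr):
        return addr % self.lines, addr // self.lines

    def lookup(self, addr):
        idx, tag = self._split(addr)
        if self.tags[idx] == tag:
            return "hit (direct)"
        alt = self.link[idx]
        if alt is not None and self.tags[alt] == tag:
            return "hit (via link)"       # second chance, like a second way
        return "miss"

    def fill(self, addr):
        idx, tag = self._split(addr)
        if self.tags[idx] is not None and self.tags[idx] != tag:
            alt = (idx + 1) % self.lines  # naive relocation of the victim
            self.tags[alt] = self.tags[idx]
            self.link[idx] = alt
        self.tags[idx] = tag

c = SingleLinkedCache()
c.fill(0); c.fill(16)                     # two blocks that conflict on line 0
print(c.lookup(16), "/", c.lookup(0))     # hit (direct) / hit (via link)
```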

12.
The L1 cache in today’s high-performance processors accesses all ways of a selected set in parallel. This constitutes a major source of energy inefficiency: at most one of the N fetched blocks can be useful in an N-way set-associative cache; the other N-1 cachelines will all be tag mismatches and subsequently discarded. We propose to eliminate unnecessary associative fetches by exploiting certain software semantics in cache design, thus reducing dynamic power consumption. Specifically, we use memory region information to eliminate unnecessary fetches in the data cache, and ring level information to optimize fetches in the instruction cache. We present a design that is performance-neutral, transparent to applications, and incurs a space overhead of a mere 0.41% of the L1 cache. We show significantly reduced cache lookups with benchmarks including SPEC CPU, SPECjbb, SPECjAppServer, PARSEC, and Apache. For example, for SPEC CPU 2006, the proposed mechanism helps to reduce cache block fetches from the data and instruction caches by an average of 29% and 53% respectively, resulting in power savings of 17% and 35% in the caches, compared to the aggressively clock-gated baselines.
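A hedged sketch of region-aided way filtering; the coarse region encoding and per-way metadata below are illustrative assumptions, not the paper's design.

```python
# Sketch: each way remembers the coarse memory region of its block, so a
# request probes only the ways whose region matches, not all N in parallel.
REGIONS = {"stack": 0, "heap": 1, "global": 2}

class RegionFilteredSet:
    def __init__(self, ways=8):
        self.region_of_way = [None] * ways
        self.ways_probed = 0

    def fill(self, way, region):
        self.region_of_way[way] = REGIONS[region]

    def lookup(self, region):
        r = REGIONS[region]
        hits = [w for w, wr in enumerate(self.region_of_way) if wr == r]
        self.ways_probed += len(hits)      # only these ways are fetched
        return hits

s = RegionFilteredSet()
s.fill(0, "stack"); s.fill(1, "heap"); s.fill(2, "heap")
s.lookup("stack")                          # probes 1 way instead of 8
print("ways probed:", s.ways_probed)
```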

13.
In multiprocessor systems-on-chip (MPSoCs) that use snoop-based cache coherency protocols, a miss in the data cache triggers the broadcast of a coherency request to all the remote caches, to keep all data coherent. However, the majority of these requests are unnecessary because the remote caches do not have the matching blocks, so their tag lookups fail. Both the coherency requests and the tag lookups corresponding to a remote miss consume unnecessary energy. We propose an architecture-level technique for snoop energy reduction, called broadcast filtering, which prevents unnecessary coherency requests from being broadcast to remote caches, and thus reduces the snoop energy consumed by both the cache and the bus. Broadcast filtering is implemented using a snooping cache and a split bus. The snooping cache checks whether a block that cannot be obtained locally exists in remote caches before broadcasting a coherency request. If no remote cache has the matching block, there is no broadcast; and if broadcasting is necessary, the split bus allows coherency requests to be broadcast selectively to the remote caches which have matching blocks. Experimental results show a reduction of 90% in cache lookups, 60% in bus usage, and 40% in snoop energy consumption, at a small cost in reduced performance. An analysis based on the energy model shows the broadcast filtering technique can reduce energy consumption per cache coherency operation by up to 55%.
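A minimal sketch of the filtering decision; the exact organization of the snooping cache is not given in the abstract, so an idealized set-based summary stands in for it here.

```python
# Sketch: consult a local summary of remote cache contents before deciding
# whether a miss needs a coherency broadcast at all.
class BroadcastFilter:
    def __init__(self):
        self.remote_blocks = set()    # summary of blocks held remotely
        self.broadcasts = 0

    def remote_fill(self, addr):      # observed: a remote cache got addr
        self.remote_blocks.add(addr)

    def on_local_miss(self, addr):
        if addr in self.remote_blocks:
            self.broadcasts += 1      # split bus: snoop only likely holders
            return "snoop remote caches"
        return "fetch from memory"    # no broadcast, no remote tag lookups

f = BroadcastFilter()
f.remote_fill(0x80)
print(f.on_local_miss(0x80))          # shared block -> coherency request
print(f.on_local_miss(0x200))         # private block -> broadcast skipped
print("broadcasts issued:", f.broadcasts)
```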

14.
Recently, EDRAM cells have gained much attention as a promising alternative for constructing on-chip memories. However, due to the inherent characteristics of DRAM cells, they need to be refreshed periodically, causing a huge refresh energy burden. In particular, employing EDRAM cells in large-scale last-level caches makes the refresh burden much heavier because of their large capacity. In this paper, we propose a selective fine-grain round-robin refresh scheme for both performance improvement and refresh energy reduction. To reduce bank conflicts between normal cache accesses and refresh operations, we employ a refresh scheme which refreshes cache lines in a bank-wise round-robin fashion. We also apply a selective refresh depending on the inclusion information in the cache hierarchy. For data which reside in both the LLC and an upper-level cache (i.e., the L2 cache), accesses will be filtered by the upper-level cache. Based on this insight, we skip the refresh of any block in the EDRAM-based LLC that also exists in the upper-level caches. By doing so, we can reduce unnecessary refresh operations in EDRAM-based LLCs. According to our evaluation, the proposed scheme improves performance by 7.3% and reduces energy per instruction by 13.3% compared to the baseline all-bank refresh scheme.
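A hedged sketch of the inclusion-based refresh skip; dirty-line handling and the bank-wise sweep details are omitted, and the interfaces are assumed.

```python
# Sketch: during a refresh sweep, skip LLC lines that are duplicated in an
# upper-level cache, since accesses to them are filtered above the LLC.
class EDRAMLLC:
    def __init__(self, lines):
        self.lines = lines
        self.in_upper = set()          # lines also present in L1/L2 above
        self.refreshes = 0

    def refresh_sweep(self):           # bank-wise round-robin in the paper
        for line in range(self.lines):
            if line in self.in_upper:
                continue               # inclusion makes this skip safe
            self.refreshes += 1

llc = EDRAMLLC(lines=1024)
llc.in_upper.update(range(256))        # 25% of LLC lines cached above
llc.refresh_sweep()
print("refreshes this sweep:", llc.refreshes)   # 768 instead of 1024
```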

15.
The instruction cache is a significant power consumer in modern embedded processors. To address this, a low-power strategy for set-associative instruction caches based on way-access traces is proposed: a modified instruction cache and branch target buffer build and maintain, at run time, a trace of the ways the instruction cache accesses, eliminating hit detection and accesses to irrelevant ways. A trace-maintenance policy based on cross-line predecessor pointers, branch-predecessor state, branch-predecessor pointers, and branch-target indices is further proposed to lower the frequency of trace rebuilding, so that already-established way-access traces are exploited more effectively. Experimental results show that, with the optimized way-access-trace strategy, the instruction cache's tag-memory accesses and data-memory accesses fall to 3.60% and 27.70%, respectively, of those of a conventional instruction cache.

16.
This paper focuses on energy consumption, a major problem in the dark-silicon era. As energy consumption becomes a key issue in the operation and maintenance of cloud data centers, cloud computing providers are becoming significantly concerned. Here, we show how spin-transfer torque random access memory (STT-RAM) can be used as an on-chip L2 cache to obtain lower energy than conventional L2 caches such as SRAM. High density, fast read access, and non-volatility make STT-RAM a significant technology for on-chip memories. Previous studies have mainly examined specific schemes based on common applications and do not provide a thorough analysis of emerging scale-out applications with multiple design options. Here, we discuss different outlooks, covering performance and energy efficiency, for cloud processors running emerging scale-out workloads. Experimental results on the CloudSuite benchmarks show that the proposed method reduces energy by 51% (on average) and improves energy-delay product by 37% (on average), while instructions-per-cycle degradation is only 22% (on average) compared to the SRAM method.

17.
The on-chip instruction cache is a potentially power-hungry component in embedded systems due to its large chip area and high access frequency. Aiming at reducing the power consumption of the on-chip cache, we propose a Reduced One-Bit Tag Instruction Cache (ROBTIC), where the cache size is judiciously reduced and the cache tag field contains only the least significant bit of the full tag. We develop a cache operational control scheme for ROBTIC so that, with the one-bit cache tag, program locality can still be efficiently exploited. For applications where most of the memory accesses are localized, our cache achieves performance similar to a traditional full-tag cache; however, its power consumption is significantly reduced due to the much smaller cache size, the narrower tag array (just one bit), and the tinier tag-comparison circuit. Experiments on a set of benchmarks implemented in a CMOS 180 nm process demonstrate that the proposed design can reduce the dynamic power consumption of a traditional cache by up to 27.3% and its area by 30.9% when the cache size is fixed at 32 instructions, outperforming the existing partial-tag based cache design. With cache-size customization, a further 47.8% power saving can be achieved. Our experimental results also show that when implemented in deep sub-micron technologies where leakage power is not negligible, the design remains efficient: a consistent power-saving trend (about 22%) is observed for technologies from 130 nm down to 65 nm.

18.
The cache memory consumes a large proportion of the energy used by a processor. Within the on-chip cache, the translation lookaside buffer (TLB) accounts for 20–50% of energy consumption. To reduce the energy consumed by TLB accesses, a virtual cache can be accessed directly with the virtual addresses issued by the processor. However, a virtual cache may suffer from the synonym problem. In this paper, we propose low-cost synonym detection hardware and a synonym data coherence mechanism. These reduce the energy consumption incurred by TLB lookups and maintain synonym data consistency in the virtual cache. The proposed synonym detection hardware efficiently reduces the number of blocks that must be looked up in the virtual cache, saving energy. In addition, the proposed synonym data coherence mechanism reduces the number of invalidated blocks in the virtual cache to prevent the destruction of cache locality. The simulation results show that our proposed energy-aware virtual cache consumes 51%, 27%, and 20% less energy than the traditional physical cache, the traditional virtual cache, and the synonym lookaside buffer (SLB), respectively. In addition, our design shows almost the same static energy consumption as the SLB, and reduces static energy consumption by about 20% compared with the traditional physical cache and virtual cache.

19.
A low-power instruction cache is designed: a line buffer inserted between the CPU and the L1 instruction cache reduces the number of CPU accesses to the instruction cache and thereby its power. In addition, a refill control unit is added to the line-buffer controller so that, on an instruction-cache miss, instructions from the off-chip memory are delivered to the CPU directly, minimizing the fetch delay the miss would otherwise cause. Verification shows that the design improves instruction-cache performance while reducing power.
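A minimal sketch of the line-buffer effect described above, with assumed sizes; the refill path is abstracted into a comment.

```python
# Sketch: a one-line buffer in front of the L1 instruction cache absorbs
# sequential fetches, so the cache RAM wakes up only on a new line.
LINE_WORDS = 8

class LineBufferedFetch:
    def __init__(self):
        self.buffered_line = None
        self.icache_accesses = 0

    def fetch(self, pc):
        line = pc // LINE_WORDS
        if line != self.buffered_line:
            self.icache_accesses += 1   # refill the buffer from the I-cache
            # (on an I-cache miss, a refill unit would also stream the
            # instructions from external memory straight to the CPU)
            self.buffered_line = line
        # else: served from the line buffer; the I-cache stays idle

fe = LineBufferedFetch()
for pc in range(256):                   # straight-line code, word-indexed
    fe.fetch(pc)
print("I-cache activations:", fe.icache_accesses)   # 32 instead of 256
```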
