Similar Documents
Found 20 similar documents.
1.
This paper proposes a simple compression method based on frequent values and frequent patterns, and presents a chip multiprocessor architecture that combines cache compression with interface compression. Full-system simulation shows that cache compression and interface compression increase the effective cache capacity and the effective pin bandwidth of a chip multiprocessor, thereby improving system performance. Experiments show an average performance gain of 10% with cache compression alone, 5.5% with interface compression alone, and 12% with both techniques applied together.
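As an illustration of the frequent-value idea described above, here is a minimal Python sketch: a small table of assumed "hot" 32-bit values is kept, words matching the table are encoded with a short index, and all other words are stored verbatim. The value table, field widths, and size accounting are illustrative assumptions, not the paper's encoding.

```python
# Minimal sketch of frequent-value compression for a cache line (not the paper's
# exact encoding). Words matching a small table of frequent 32-bit values are
# replaced by a short index; all other words are kept verbatim.

FREQUENT_VALUES = [0x00000000, 0xFFFFFFFF, 0x00000001]  # hypothetical hot values

def compress_line(words):
    """Encode each 32-bit word as ('hit', index) or ('miss', value)."""
    encoded = []
    for w in words:
        if w in FREQUENT_VALUES:
            encoded.append(('hit', FREQUENT_VALUES.index(w)))   # a few index bits
        else:
            encoded.append(('miss', w))                         # full 32 bits
    return encoded

def compressed_bits(encoded, index_bits=2, word_bits=32):
    """Rough size estimate: one flag bit per word plus index or full word."""
    return sum(1 + (index_bits if kind == 'hit' else word_bits)
               for kind, _ in encoded)

line = [0, 0, 1, 0xDEADBEEF, 0, 0xFFFFFFFF, 0, 0]
enc = compress_line(line)
print(compressed_bits(enc), "bits vs", 32 * len(line), "bits uncompressed")
```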

2.
The L2 cache in a CMP is usually distributed and managed in one of two basic ways: shared or private. The shared organization makes the best use of L2 capacity but suffers a high average access latency; the private organization offers low access latency, but replicated blocks reduce the effective cache capacity and thus raise the miss rate. This paper proposes a dynamic replica-control policy on top of the private organization that adjusts the number of replicated blocks according to the actual behavior of the running application, thereby improving L2 cache performance.
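A minimal sketch of the replica-control idea, assuming a simple per-block reuse counter and an illustrative threshold (the paper's actual control policy is not reproduced here): a remote block is replicated into the local private bank only after enough remote hits justify spending local capacity on it.

```python
# Minimal sketch of dynamic replica control for private L2 banks; the names
# and the reuse threshold are illustrative assumptions.

from collections import defaultdict

REUSE_THRESHOLD = 2   # hypothetical: replicate after this many remote hits

class ReplicaController:
    def __init__(self):
        self.remote_hits = defaultdict(int)   # block address -> remote-hit count
        self.local_replicas = set()

    def on_remote_hit(self, addr):
        """Called when the local core hits a block held in a remote L2 bank."""
        self.remote_hits[addr] += 1
        if self.remote_hits[addr] >= REUSE_THRESHOLD:
            self.local_replicas.add(addr)     # worth spending local capacity on

    def should_serve_locally(self, addr):
        return addr in self.local_replicas

ctrl = ReplicaController()
for _ in range(3):
    ctrl.on_remote_hit(0x80)
print(ctrl.should_serve_locally(0x80))        # True once reuse passes the threshold
```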

3.
The "memory wall" has become the main obstacle to further processor performance gains, and the cache pollution caused by data preloaded when a core speculatively executes memory instructions on predicted paths can severely hurt performance. This paper proposes CSDA, a cache-pollution control method for data preloaded during speculative execution. First, confidence estimation is used to separate the paths with a high probability of misprediction from all predicted paths. Then, a history table that identifies low-confidence polluting memory instructions classifies the memory instructions on low-confidence predicted paths as either prefetching or polluting; polluting memory instructions are placed in a low-priority load/store queue, and a dedicated pollution-data cache stores the polluted data. Simulation results show that, in dual-core mode and relative to the baseline, CSDA reduces the L1 D-Cache miss rate by 9%-23% (17% on average), reduces the L2 cache miss rate by 1.02%-14.39% (5.67% on average), and improves IPC by 0.19%-5.59% (2.21% on average).
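The classification step can be pictured with a small PC-indexed table of saturating counters; the table size, counter width, and threshold below are assumptions for illustration, not CSDA's actual parameters.

```python
# Minimal sketch of a PC-indexed history table that classifies speculative loads
# behind low-confidence branches as prefetching or polluting.

TABLE_SIZE = 256
POLLUTE_THRESHOLD = 2          # 2-bit saturating counter; >= 2 means polluting
table = [0] * TABLE_SIZE       # one saturating counter per load PC

def index(pc):
    return (pc >> 2) % TABLE_SIZE

def classify(pc, branch_confidence_low):
    """Only loads behind low-confidence branches are candidates for demotion."""
    if not branch_confidence_low:
        return 'normal'
    return 'polluting' if table[index(pc)] >= POLLUTE_THRESHOLD else 'prefetching'

def update(pc, data_was_used):
    """Train the counter when the speculative load resolves."""
    i = index(pc)
    if data_was_used:
        table[i] = max(0, table[i] - 1)
    else:
        table[i] = min(3, table[i] + 1)

update(0x4000, data_was_used=False)
update(0x4000, data_was_used=False)
print(classify(0x4000, branch_confidence_low=True))   # 'polluting'
```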

4.
Design of an L2 Cache Prefetch Structure for Multi-core, Multi-threaded Processors
A well-designed L2 cache is an effective way to reduce memory access latency in multi-core, multi-threaded processors. For existing multi-core, multi-threaded processors, this paper discusses design options for a hybrid prefetch structure in the L2 cache. Detailed design and simulation analysis show that the hybrid prefetch structure effectively improves overall processor performance. In particular, an L2 cache using a miss-triggered hybrid prefetch structure performs best and is well suited to the needs of such multi-core, multi-threaded processors.

5.
Rapid progress in fabrication technology has opened up vast room for integrated-circuit design, while more slowly evolving design capability makes it hard to use on-chip resources efficiently. On-chip caches now commonly occupy more than half the die area of a high-performance processor, so using that cache space efficiently and intelligently to build a high-performance memory system is an important topic in processor microarchitecture research. This paper analyzes the impact of cache data pollution and speculative execution on processor performance and, on that basis, proposes Pease, a pollution-free cache access-control technique based on splitting the data-valid bit in the tag: the single data-valid bit in each D-Cache tag is extended into a read-valid bit (RVB) and a write-valid bit (WVB), and read/write accesses are controlled according to the combination of RVB and WVB values. The technique fully preserves the data-prefetching benefit of speculative execution, makes polluted data transparent, avoids replacement operations on polluted data when writing, and eliminates the effect of polluted data on cache efficiency. Relative to the baseline, Pease improves IPC by 1.05%-8.40% (4.04% on average) and reduces the L1 D-Cache miss rate by 19.05%-48.16% (29.66% on average).
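A toy model of the split valid bits: speculative fills set only the read-valid bit, so the line can still serve reads but never needs to be replaced on a write. The state encoding and method names below are assumptions for illustration, not the paper's exact control logic.

```python
# Toy model of a D-Cache tag with split valid bits, loosely following the Pease
# idea: RVB and WVB gate read and write accesses separately, so speculatively
# prefetched ("polluted") data stays transparent to writes.

class TagEntry:
    def __init__(self):
        self.rvb = 0   # read-valid bit: data may satisfy reads
        self.wvb = 0   # write-valid bit: data may be overwritten in place

    def fill_speculative(self):
        # Prefetched data: keep it available for reads, but do not let it
        # occupy the line for writes.
        self.rvb, self.wvb = 1, 0

    def fill_committed(self):
        self.rvb, self.wvb = 1, 1

    def allow_read(self):
        return self.rvb == 1

    def allow_write_in_place(self):
        return self.wvb == 1

e = TagEntry()
e.fill_speculative()
print(e.allow_read(), e.allow_write_in_place())   # True False
```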

6.
《电子技术应用》2013,(1):128-131
Sequential request streams are extracted by analyzing the data-request patterns of multiple users and decomposing the random request sequence in real time. Based on command pre-decomposition and hit-rate statistics for the sequential requests of multiple users, the read-prefetch length is learned automatically. The relationship between the multi-user prefetch rate, the system load, and the cost of failed prefetches is analyzed, and the conventional adaptive cache policy is optimized by choosing suitable prefetch thresholds and other parameters. Compared with the conventional adaptive prefetch policy, the dynamically adjusted cache policy improves the prefetch hit rate by 30%, effectively addressing the high prefetch-miss rate seen when multiple users access a shared storage system.
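A minimal sketch of the self-learning prefetch length, assuming a simple hit-rate threshold and multiplicative adjustment; the control law and constants are illustrative, not the paper's parameters.

```python
# Minimal sketch of self-tuning read-prefetch length driven by prefetch hit rate.

class AdaptivePrefetcher:
    def __init__(self, length=4, min_len=1, max_len=64, threshold=0.5):
        self.length = length          # blocks prefetched per sequential stream
        self.threshold = threshold    # grow only while the prefetch hit rate is high
        self.min_len, self.max_len = min_len, max_len
        self.prefetched = 0
        self.hits = 0

    def record(self, was_prefetch_hit):
        self.prefetched += 1
        self.hits += int(was_prefetch_hit)

    def adjust(self):
        """Periodically re-tune the prefetch length from the observed hit rate."""
        if self.prefetched == 0:
            return
        hit_rate = self.hits / self.prefetched
        if hit_rate >= self.threshold:
            self.length = min(self.max_len, self.length * 2)
        else:
            self.length = max(self.min_len, self.length // 2)
        self.prefetched = self.hits = 0

p = AdaptivePrefetcher()
for _ in range(10):
    p.record(True)
p.adjust()
print(p.length)   # grows to 8 when prefetches are paying off
```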

7.
Analysis of the Memory Hierarchy of the Pentium 4 Processor
吴金  齐欢 《微机发展》2004,14(7):47-48,51
The efficiency of a processor's memory system plays a crucial role in its overall performance. This paper describes the memory architecture of the Pentium 4 processor, which includes the L1 data cache, the L2 cache, and the trace cache; it covers the function of each part and the prefetching mechanisms used to raise hit rates and lower access times. The Pentium 4 mainly relies on a hierarchical memory design, a large L2 cache, and prefetching in the trace cache to raise cache hit rates and reduce miss penalties, shortening processor access time and ultimately improving overall processor performance.

8.
Research and Implementation of Cache Techniques for a CD-ROM Mirror Server
This paper proposes a two-level cache structure for a CD-ROM mirror server system: a small cache is built on the client side, where a prefetch mechanism enlarges the size of each request, while a large cache designed on the server side speeds up the response to data requests. Experiments show that the two-level cache structure greatly improves the data transfer rate of the CD-ROM mirror server system.

9.
This paper proposes a hybrid cache coherence protocol suited to multi-core environments. The protocol adopts a hybrid value-propagation strategy and introduces a small directory cache (D-Cache) to overcome the blind broadcasting that traditional snooping protocols perform when sending data requests; by extending the block states, it effectively avoids ping-pong behavior. Simulation results show that the protocol shortens benchmark run time, lowers the miss rate of the private L1 caches in a multi-core processor, and improves system performance.

10.
A Low-Power, High-Performance Sliding Cache Scheme
Cache memory accounts for a major share of total chip power. Because different kinds of applications place different real-time demands on instruction- and data-cache capacity, a sliding cache organization is proposed. It balances the instruction- and data-cache demands, dynamically adjusting the capacity and configuration of the L1 cache and eliminating the power consumed by idle portions of the cache. SPEC95 simulation results show that the sliding cache not only reduces the dynamic power and static leakage power of the L1 cache but also lowers the dynamic power of the whole processor and improves performance. Compared with two conventional cache structures and the DRI structure, the sliding cache reduces average L1 dynamic power by 21.3%, 19.52%, and 20.62%, respectively; it reduces average processor dynamic power by 8.84%, 8.23%, and 10.31%, and improves the average energy-delay product by 12.25%, 7.02%, and 13.39%.

11.
Modern chip multiprocessors are vulnerable to transient faults caused either by deliberate attacks or by system errors, especially those with large, multi-level caches in cloud servers. In this paper, we propose a modified/shared replication cache that keeps a redundant copy of the most recently accessed modified/shared L2 cache lines. According to experiments based on Multi2Sim, this cache, when properly sized, can provide considerable data reliability. In addition, it can reduce the average memory-hierarchy latency for error correction, at only about 20.2% of the L2 cache's energy cost and 2% of its silicon area.

12.
This research explores a compressed memory hierarchy model which can increase both the effective memory space and the bandwidth of each level of the memory hierarchy. It is well known that decompression time has a critical effect on memory access time, and that variable-sized compressed blocks tend to increase the design complexity of compressed memory systems. This paper proposes a selective compressed memory system (SCMS) incorporating a compressed cache architecture and its management method. To reduce or hide decompression overhead, the SCMS employs several effective techniques, including selective compression, parallel decompression, and the use of a decompression buffer. In addition, a fixed memory-space allocation method is used to achieve efficient management of the compressed blocks. Trace-driven simulation shows that the SCMS approach can not only reduce the on-chip cache miss ratio and data traffic by about 35% and 53%, respectively, but also achieve a 20% reduction in average memory access time (AMAT) over conventional memory systems (CMS). Moreover, with some architectural enhancement, this approach can provide lower memory traffic at a lower cost than CMS. Most importantly, the SCMS is an attractive approach for future computer systems because it offers high performance in cases of long DRAM latency and limited bus bandwidth.
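The selective-compression decision can be sketched as follows; zlib stands in for the actual compressor, and the half-line threshold only loosely mirrors the fixed-slot allocation idea, so the constants and helper names are assumptions.

```python
# Minimal sketch of a selective-compression decision in the spirit of SCMS:
# a block is stored compressed only when it shrinks to at most half a line,
# so two compressed blocks can share one fixed-size slot.

import zlib

LINE_SIZE = 64   # bytes

def store_block(data: bytes):
    """Return ('compressed', payload) or ('raw', payload)."""
    assert len(data) == LINE_SIZE
    compressed = zlib.compress(data)
    if len(compressed) <= LINE_SIZE // 2:      # fits a half-line slot
        return ('compressed', compressed)
    return ('raw', data)                       # not worth the decompress latency

zero_line = bytes(LINE_SIZE)                   # highly compressible
random_line = bytes(range(64))                 # barely compressible
print(store_block(zero_line)[0], store_block(random_line)[0])
```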

13.
Aiming to optimize the replacement policy of a compressed cache, this paper proposes MLRU-C, a compressed-cache replacement policy based on a modified LRU. MLRU-C uses the spare tag resources in a compressed cache to form a shadow-tag mechanism that detects and corrects the wrong replacement decisions made by LRU, thereby improving the performance of the compressed-cache replacement policy. Experimental results show that, compared with conventional LRU, MLRU-C lowers the L2 compressed-cache miss rate by 12.3% on average.
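A minimal sketch of the shadow-tag mechanism: tags of recent victims are retained, and a later hit on a shadow tag reveals that LRU evicted a still-live block. The shadow depth and the reaction to a detected mistake are assumptions, not MLRU-C's exact policy.

```python
# Minimal sketch of a shadow-tag check layered on LRU: the set keeps tags of
# recently evicted blocks and counts how often LRU's choice turns out wrong.

from collections import deque

class SetWithShadowTags:
    def __init__(self, ways=4, shadow_depth=4):
        self.lines = deque(maxlen=ways)           # MRU at the right
        self.shadow = deque(maxlen=shadow_depth)  # tags of recent victims
        self.bad_evictions = 0                    # LRU decisions proven wrong

    def access(self, tag):
        if tag in self.lines:                     # hit: move to MRU position
            self.lines.remove(tag)
        elif tag in self.shadow:                  # miss, but shadow-tag hit
            self.bad_evictions += 1
            self.shadow.remove(tag)
        if len(self.lines) == self.lines.maxlen:  # a real eviction will happen
            self.shadow.append(self.lines[0])     # record the LRU victim's tag
        self.lines.append(tag)

s = SetWithShadowTags()
for t in [1, 2, 3, 4, 5, 1]:                      # block 1 is evicted, then reused
    s.access(t)
print(s.bad_evictions)                            # 1
```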

14.
High-performance processors employ aggressive branch prediction and prefetching techniques to increase performance. Speculative memory references caused by these techniques sometimes bring data into the caches that are not needed by correct execution. This paper proposes the use of the first-level caches as filters that predict the usefulness of speculative memory references. With the proposed technique, speculative memory references bring data only into the first-level caches rather than all levels in the cache hierarchy. The processor monitors the use of the cache blocks in the first-level caches and decides which blocks to keep in the cache hierarchy based on the usefulness of cache blocks. It is shown that a simple implementation of this technique usually outperforms inclusive and exclusive baseline cache hierarchies commonly used by today’s processors and results in IPC performance improvements of up to 10% on the SPEC CPU2000 integer benchmarks.
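A minimal sketch of the filtering idea, with illustrative names: speculative fills are confined to L1 and are promoted into the lower levels on eviction only if a committed access actually used them. The structure below is an assumption for illustration, not the paper's design.

```python
# Minimal sketch of using the L1 as a filter for speculative fills.

class FilteringL1:
    def __init__(self):
        self.blocks = {}            # addr -> {'speculative': bool, 'used': bool}
        self.l2 = set()             # stands in for the rest of the hierarchy

    def fill(self, addr, speculative):
        self.blocks[addr] = {'speculative': speculative, 'used': not speculative}
        if not speculative:
            self.l2.add(addr)       # demand misses fill the whole hierarchy

    def committed_access(self, addr):
        if addr in self.blocks:
            self.blocks[addr]['used'] = True

    def evict(self, addr):
        info = self.blocks.pop(addr)
        if info['speculative'] and info['used']:
            self.l2.add(addr)       # proved useful: keep it below L1
        # otherwise the speculative block is dropped without polluting L2

l1 = FilteringL1()
l1.fill(0xA0, speculative=True)     # wrong-path prefetch, never used
l1.fill(0xB0, speculative=True)
l1.committed_access(0xB0)
l1.evict(0xA0); l1.evict(0xB0)
print(sorted(l1.l2))                # [176] == [0xB0]; only the useful block reaches L2
```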

15.
As multi-core processors scale up, the average distance between the core requesting data and the home node of that data grows, and the uneven distribution of accesses across the distributed shared cache banks creates network hotspots. Both effects increase L1 miss latency. To address this, every four cores are grouped together and a neighbor-data probe is designed within each group. The probe determines whether a miss can be satisfied from a neighboring core's L1 cache, exploiting the inter-core locality of data accesses that parallel programs exhibit on multi-core processors. The cache coherence protocol is also optimized accordingly for the new structure. Experiments show that this on-chip memory optimization improves system performance, reduces on-chip network traffic, and saves energy.

16.
This paper explores the potential for the RAMpage memory hierarchy to use a microkernel with a small memory footprint, held in a specialized cache-speed static RAM (tightly-coupled memory, TCM). Dreamy memory is DRAM kept in low-power mode unless referenced. Simulations show that a small microkernel suits RAMpage well, in that it achieves significantly better speed and energy gains from adding TCM than a standard hierarchy does. RAMpage, in its best 128KB L2 case, gained 11% speed using TCM and reduced energy by 14%; the equivalent conventional hierarchy gains were under 1%. While a 1MB L2 was significantly faster than the lower-energy configurations with the smaller L2, the larger SRAM's energy does not justify the speed gain. Using a 128KB L2 cache in a conventional architecture resulted in a best-case overall run time of 2.58s, compared with the best dreamy-mode run time (RAMpage without context switches on misses) of 3.34s, a speed penalty of 29%. Energy in the fastest 128KB L2 case was 2.18J vs. 1.50J, a reduction of 31%. The same RAMpage configuration without dreamy mode took 2.83s as simulated and used 2.39J, an acceptable trade-off (penalty under 10%) for being able to switch easily to a lower-energy mode.

17.
On-board disk cache is an effective approach to improving disk performance by reducing the number of physical accesses to the magnetic media. Disk drive manufacturers have been increasing on-board disk cache size to match the capacity growth of the backend magnetic media; some disk drives nowadays have a cache of 32 MB. Modern computer systems use large amounts of memory to improve performance, so any data brought into host memory will be re-accessed there, not in the on-board disk cache. This feature has a significant impact on the behavior of the disk cache, because computer systems are complex systems whose components are correlated with each other; a specific component cannot be isolated from the overall system when analyzing its performance behavior. This paper employs four block-level real traces to explore the performance behavior of the on-board disk cache while considering the impact of the cache hierarchy in the host system. The analysis yields three major implications: (1) the I/O stream at block level contains negligible temporal locality, so a read/write cache can achieve only marginal benefits; (2) a static write cache does not achieve performance gains, since the write stream does not interfere much with the read stream, so it is better to leave the on-board disk cache shared by both the write and read streams; (3) besides prefetching, the read cache dominates the contribution to the hit ratio, so it is better to focus on improving the read performance of the disk cache rather than its write performance.

18.
ARP: An Adaptive Runtime Partitioning Mechanism for Shared Caches in Simultaneous Multithreading Processors
Simultaneous multithreading is a latency-tolerant architecture that uses a shared L2 cache and can execute multiple instructions from multiple threads every cycle, which increases the pressure on the memory hierarchy. This paper studies how to partition the shared cache among the concurrently executing threads of an SMT processor, in particular the fairness of cache sharing and its relation to throughput. The conventional LRU policy partitions the shared cache implicitly according to thread demand, giving more cache space to the more demanding threads; such unfair management can cause thread starvation, priority inversion, and related problems. An adaptive runtime partitioning mechanism (ARP) is implemented to manage the shared cache. ARP uses fairness as the partitioning metric and applies a dynamic partitioning algorithm to optimize it; the algorithm is easy to implement and requires little profiling. In hardware, classical monitors collect stack-distance information for each thread, with a storage overhead of less than 0.25%. Experimental results show that, compared with LRU-based cache partitioning, ARP improves the fairness of a 2-way SMT processor by a factor of 2.26 while raising throughput by 14.75% on average.
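The monitoring-plus-partitioning loop can be sketched as follows for two threads; the stack-distance histogram is standard, but the fairness score and the exhaustive way search below are simplified stand-ins for ARP's actual metric and algorithm.

```python
# Minimal sketch of per-thread stack-distance monitoring and a fairness-driven
# way partition for two threads sharing a set-associative cache.

def stack_distance_histogram(trace, ways):
    """hist[d] = hits that would land at LRU-stack depth d (0 = MRU)."""
    stack, hist, misses = [], [0] * ways, 0
    for addr in trace:
        if addr in stack:
            d = stack.index(addr)
            if d < ways:
                hist[d] += 1
            stack.remove(addr)
        else:
            misses += 1
        stack.insert(0, addr)                  # move/insert at MRU position
    return hist, misses

def misses_with_ways(hist, base_misses, ways_given, total_ways):
    """References deeper than the allocated ways become misses."""
    return base_misses + sum(hist[ways_given:total_ways])

def fair_partition(hist_a, miss_a, hist_b, miss_b, total_ways):
    """Pick the split that keeps the two threads' miss counts closest."""
    best = None
    for wa in range(1, total_ways):
        ma = misses_with_ways(hist_a, miss_a, wa, total_ways)
        mb = misses_with_ways(hist_b, miss_b, total_ways - wa, total_ways)
        score = abs(ma - mb)
        if best is None or score < best[0]:
            best = (score, wa, total_ways - wa)
    return best[1], best[2]

WAYS = 8
ha, ma = stack_distance_histogram([1, 2, 3, 1, 2, 3] * 20, WAYS)   # small working set
hb, mb = stack_distance_histogram(list(range(50)) * 3, WAYS)       # streaming thread
print(fair_partition(ha, ma, hb, mb, WAYS))
```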

19.
In this paper, we evaluate the compressibility of the L1 data caches and the L2 cache in general-purpose graphics processing units (GPGPUs). Our proposed scheme is geared toward improving the performance and power of GPGPUs through cache compression. GPGPUs are throughput-oriented devices which execute thousands of threads simultaneously. To handle the working set of this massive number of threads, modern GPGPUs exploit several levels of caches, and the design trend shows that cache sizes continue to grow to support even more thread-level parallelism. We propose using cache compression to increase effective cache capacity, improve performance, and reduce power consumption in GPGPUs. Our work is motivated by the observation that the values within a cache block are similar, i.e., the arithmetic difference of two successive values within a cache block is small. To reduce data redundancy in the L1 data caches and the L2 cache, we use the low-cost and implementation-efficient base-delta-immediate (BDI) algorithm. BDI replaces a cache block with a base and an array of deltas, where the combined size of the base and deltas is less than the original cache block. We also study the locality of fields in integer and floating-point numbers and find that the entropy of fields varies across data types; based on entropy, we offer different BDI compression schemes for integer and floating-point numbers. We add a simple yet effective predictor that determines the type of values dynamically in hardware, without the help of a compiler or a programmer. Evaluation results show that, on average, cache compression improves performance by 8% and reduces cache energy by 9%.
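A minimal sketch of the base-delta idea for one block of 4-byte words (single base, fixed delta width); the per-type schemes and the hardware type predictor described in the abstract are not modeled here.

```python
# Minimal sketch of base-delta-immediate (BDI) compression for one cache block
# of 4-byte words: if every word fits base + small signed delta, store the base
# and the narrow deltas instead of the full words.

def bdi_compress(words, delta_bytes=1):
    """Return (base, deltas) if every word fits base + small delta, else None."""
    base = words[0]
    limit = 1 << (8 * delta_bytes - 1)          # signed delta range
    deltas = [w - base for w in words]
    if all(-limit <= d < limit for d in deltas):
        return base, deltas
    return None                                 # block stays uncompressed

def compressed_size(words, delta_bytes=1, word_bytes=4):
    result = bdi_compress(words, delta_bytes)
    if result is None:
        return word_bytes * len(words)
    return word_bytes + delta_bytes * len(words)   # one base + narrow deltas

block = [0x1000, 0x1004, 0x1008, 0x100C, 0x1010, 0x1014, 0x1018, 0x101C]
print(compressed_size(block), "bytes instead of", 4 * len(block))   # 12 vs 32
```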

20.
王朝  高岭 《计算机应用研究》2020,37(12):3739-3743
To address the limited storage capacity of edge-computing servers, a game-theoretic cooperative data-caching strategy is proposed. The strategy divides the edge-computing environment into regions according to base-station coverage, and each region caches data resources cooperatively with its neighboring regions. Within each region, the caching value of every data block to the local region and to the neighboring regions is computed, and caching decisions are made according to the value of the candidate resources so as to minimize the latency with which users obtain data. Simulation results show that, compared with an existing non-cooperative caching strategy, the proposed strategy lowers the average data-retrieval latency by 36.55%, effectively reducing the average latency of obtaining data resources.
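A minimal sketch of a value-driven caching decision for one region; the popularity-times-latency-saving score and the greedy selection below are illustrative stand-ins for the paper's game-theoretic formulation, and all latencies and names are assumptions.

```python
# Minimal sketch of value-based cooperative caching for one edge region:
# score each candidate block by the latency it would save for local and
# neighboring requests, then greedily cache the highest-value blocks.

def caching_value(local_requests, neighbor_requests,
                  remote_latency=100.0, local_latency=5.0, neighbor_latency=20.0):
    """Expected latency saved per period if this block is cached locally."""
    local_gain = local_requests * (remote_latency - local_latency)
    neighbor_gain = neighbor_requests * (remote_latency - neighbor_latency)
    return local_gain + neighbor_gain

def choose_blocks(candidates, capacity):
    """Greedily cache the highest-value blocks that fit the capacity (in blocks)."""
    ranked = sorted(candidates,
                    key=lambda c: caching_value(c['local'], c['neighbor']),
                    reverse=True)
    return [c['id'] for c in ranked[:capacity]]

candidates = [
    {'id': 'A', 'local': 50, 'neighbor': 5},
    {'id': 'B', 'local': 10, 'neighbor': 80},
    {'id': 'C', 'local': 2,  'neighbor': 1},
]
print(choose_blocks(candidates, capacity=2))   # ['B', 'A'] by value
```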
