Similar Documents
17 similar documents found (search time: 187 ms)
1.
方娟  郭媚  杜文娟 《计算机科学》2013,40(8):34-37,42
To address the energy consumption of the shared L2 cache in on-chip multi-core processors, a way-predicting cache organization based on cache partitioning, WPP-L2, is proposed. The structure first partitions the shared cache fairly among cores and then applies way prediction to lower the energy overhead incurred on both prediction hits and prediction misses. Experiments show that, while essentially preserving multi-core performance, WPP-L2 on an 8-core system reduces the Energy Delay Product (EDP) by 24.7% on average compared with a purely way-predicted L2 cache, and by 66.1% on average compared with a conventional L2 cache, greatly lowering L2 cache power consumption.
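To make the EDP figure of merit concrete, the sketch below (Python, with hypothetical energy and delay numbers that are not taken from the paper) shows how an energy-delay product and its relative reduction would be computed:

    # Minimal sketch of the Energy Delay Product metric; the energy and delay
    # numbers are hypothetical and are not taken from the paper.
    def edp(energy_joules: float, delay_seconds: float) -> float:
        """Energy-Delay Product: a combined energy-efficiency figure of merit."""
        return energy_joules * delay_seconds

    def edp_reduction(baseline: float, improved: float) -> float:
        """Percentage reduction of EDP relative to a baseline design."""
        return (baseline - improved) / baseline * 100.0

    baseline_edp = edp(energy_joules=2.0, delay_seconds=1.0e-3)   # hypothetical L2
    wpp_edp = edp(energy_joules=1.4, delay_seconds=1.05e-3)       # hypothetical WPP-L2
    print(f"EDP reduction: {edp_reduction(baseline_edp, wpp_edp):.1f}%")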

2.
State of the art in cache management techniques for multi-core processor systems   Cited: 1 (self: 0, others: 0)
The design and management of the cache hierarchy in multi-core processors is an important problem in microprocessor design. Mainstream commercial microprocessors today adopt an organization in which the last-level cache is shared, and the performance of the on-chip last-level cache usually has a large impact on overall processor performance, so shared-cache management has become an active research topic. This paper first introduces current mainstream multi-core processors and their design issues, then describes three key techniques for shared-cache management: thread scheduling, NUCA, and cache partitioning, and finally outlines directions for the future development of multi-core cache management.

3.
王冶  张盛兵  王党辉 《计算机工程》2012,38(1):268-269,272
To reduce the energy consumed by the on-chip cache of a microprocessor, an instruction cache based on a prefetch-buffer mechanism is designed. Guided by the prediction of a prefetch-buffer control unit, the instructions the processor needs hit in the buffer as often as possible, avoiding the power cost of accessing the instruction cache. Simulation of seven benchmark programs shows that the prefetch-buffer mechanism saves 23.23% of processor power while improving program execution performance by 7.53% on average.
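The general idea of serving fetches from a small buffer in front of the I-Cache can be illustrated with the following sketch; it assumes a purely sequential buffer and a hypothetical fetch trace, whereas the paper relies on a dedicated prediction unit:

    # Sketch of a small buffer in front of the I-Cache. Assumption (not from the
    # paper): the buffer holds the next few sequential instruction lines; the
    # real design steers it with a dedicated prediction unit.
    BUFFER_LINES = 4     # lines held by the buffer
    LINE_WORDS = 8       # instruction words per line

    def run(fetch_trace):
        buffer_base = None                     # first line currently buffered
        buffer_hits = icache_accesses = 0
        for addr in fetch_trace:
            line = addr // LINE_WORDS
            if buffer_base is not None and buffer_base <= line < buffer_base + BUFFER_LINES:
                buffer_hits += 1               # served by the low-power buffer
            else:
                icache_accesses += 1           # fall back to the I-Cache
                buffer_base = line             # refill the buffer from here
        return buffer_hits, icache_accesses

    # Hypothetical, mostly sequential fetch trace with an occasional taken branch.
    trace = list(range(0, 64)) + list(range(200, 232)) + list(range(64, 96))
    hits, accesses = run(trace)
    print(f"buffer hits: {hits}, I-Cache accesses: {accesses}")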

4.
Design of an L2 cache prefetching structure for multi-core multi-threaded processors   Cited: 1 (self: 1, others: 0)
A well-designed L2 cache is an effective way to reduce memory access latency in multi-core multi-threaded processors. For existing multi-core multi-threaded processors, design schemes for a hybrid prefetching structure in the L2 cache are discussed. Detailed design and simulation analysis show that the hybrid prefetching structure effectively improves overall processor performance; in particular, an L2 cache that prefetches on misses performs best and fits the needs of multi-core multi-threaded processors of this kind.
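A minimal sketch of the prefetch-on-miss idea mentioned above, assuming a small fully associative LRU cache and a hypothetical access trace (the paper's hybrid prefetching structure is more elaborate):

    # Sketch of prefetch-on-miss for an L2 cache: when a block misses, the next
    # sequential block is fetched as well. A small fully associative LRU cache
    # and a hypothetical trace are assumed; this is not the paper's hybrid design.
    from collections import OrderedDict

    def simulate(blocks, capacity, prefetch_on_miss):
        cache, misses = OrderedDict(), 0

        def install(b):
            cache[b] = True
            cache.move_to_end(b)
            if len(cache) > capacity:
                cache.popitem(last=False)      # evict the LRU block

        for b in blocks:
            if b in cache:
                cache.move_to_end(b)           # hit: refresh LRU position
            else:
                misses += 1
                install(b)
                if prefetch_on_miss:
                    install(b + 1)             # prefetch the next sequential block
        return misses

    trace = [i // 2 for i in range(64)] * 4    # hypothetical, mostly sequential trace
    print("misses, no prefetch  :", simulate(trace, capacity=8, prefetch_on_miss=False))
    print("misses, with prefetch:", simulate(trace, capacity=8, prefetch_on_miss=True))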

5.
The efficiency of a processor's memory system plays a very important role in its overall performance. This paper introduces the memory architecture of the P4 (Pentium 4) processor, which includes the L1 data cache, the L2 cache, and the Trace Cache; the function of each part; and the prefetching mechanisms adopted to raise hit rates and reduce access time. The P4 relies mainly on a hierarchical memory design, a large L2 cache, and prefetching in the Trace Cache to raise cache hit rates and reduce miss penalties, shortening memory access time and ultimately improving overall processor performance.

6.
A study of a dynamic implicit isolation mechanism for the shared cache in on-chip many-core architectures   Cited: 2 (self: 0, others: 2)
Memory bandwidth is the key limit on performance scaling in many-core processors, so it is necessary to design the on-chip last-level cache to be shared by all cores. Isolating conflicting data within the shared cache is key to improving its performance. This paper proposes a hardware cache-block linking method for isolating the data of different threads in the shared cache. The mechanism is evaluated on a cycle-accurate on-chip many-core simulator using the Splash-2 suite and bioinformatics workloads. Experimental results show that, compared with a conventional shared cache, cache-block linking reduces the conflict miss rate of the shared cache by about 20% and raises IPC by about 10% on average.

7.
A low-power-oriented shared cache partitioning scheme for multi-core processors   Cited: 1 (self: 0, others: 1)
As multi-core processors develop, on-chip cache capacity keeps growing, and its share of total chip power keeps rising; reducing cache power has therefore become a hot topic in cache design. This paper studies low-power-oriented shared cache partitioning for multi-core processors (LP-CP). It proposes a cache partitioning framework in which miss-rate monitors added to the processor collect program miss rates dynamically, and a low-power-oriented shared-cache partitioning algorithm then computes a partitioning policy that stays within a performance-loss threshold. On a dual-core system with a shared L2 cache, the scheme was tested with multiprogrammed workloads: with performance-loss thresholds of 1% and 3%, the fraction of the cache that could be switched off reached 20.8% and 36.9%, respectively.
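The following sketch illustrates the flavor of such a partitioning algorithm: each core gets the fewest ways that keep its miss rate within the performance-loss threshold, and the remaining ways can be powered off. The miss-rate curves are hypothetical and the code is not the paper's LP-CP algorithm; it also ignores the case where per-core demands exceed the total number of ways:

    # Sketch of low-power-oriented way partitioning in the spirit of LP-CP: each
    # core gets the fewest ways that keep its miss rate within the loss threshold,
    # and leftover ways can be power-gated. Miss-rate curves are hypothetical; the
    # sketch ignores the case where per-core demands exceed the total way count.
    def partition_ways(miss_curves, total_ways, loss_threshold):
        allocation = {}
        for core, curve in miss_curves.items():
            best = curve[total_ways]           # miss rate with all ways available
            allocation[core] = next(
                w for w in range(1, total_ways + 1)
                if curve[w] <= best * (1 + loss_threshold))
        ways_off = max(total_ways - sum(allocation.values()), 0)
        return allocation, ways_off

    curves = {  # miss rate as a function of allocated ways (hypothetical monitor data)
        "core0": {1: .20, 2: .10, 3: .06, 4: .051, 5: .0505, 6: .0502, 7: .0501, 8: .05},
        "core1": {1: .02, 2: .0105, 3: .0101, 4: .01, 5: .01, 6: .01, 7: .01, 8: .01},
    }
    alloc, off = partition_ways(curves, total_ways=8, loss_threshold=0.03)
    print(alloc, "- ways that can be powered off:", off)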

8.
Rapid advances in fabrication technology give integrated circuit design ample room to grow, while design capability advances more slowly, making it hard to use on-chip resources efficiently. In today's high-performance processors, the on-chip cache commonly occupies more than half of the total die area, so using cache space efficiently and intelligently to build a high-performance memory system is an important topic in processor microarchitecture research. This paper analyzes the impact of cache data pollution and speculative execution on processor performance and, on that basis, proposes Pease, a pollution-free cache access control technique based on splitting the data-Tag valid bit: the single data valid bit in the D-Cache tag is extended into a read valid bit (RVB) and a write valid bit (WVB), and read/write accesses are controlled according to the combination of RVB and WVB values. The technique fully preserves the data-prefetching benefit of speculative execution while making polluted data transparent; writes need not replace polluted data, eliminating the impact of pollution on cache efficiency. Compared with the baseline, Pease improves IPC by 1.05% to 8.40% (4.04% on average) and reduces the L1 D-Cache miss rate by 19.05% to 48.16% (29.66% on average).

9.
Analysis of the memory hierarchy of the Pentium 4 processor   Cited: 2 (self: 0, others: 2)
吴金  齐欢 《微机发展》2004,14(7):47-48,51
The efficiency of a processor's memory system plays a very important role in its overall performance. This paper introduces the memory architecture of the P4 (Pentium 4) processor, which includes the L1 data cache, the L2 cache, and the Trace Cache; the function of each part; and the prefetching mechanisms adopted to raise hit rates and reduce access time. The P4 relies mainly on a hierarchical memory design, a large L2 cache, and prefetching in the Trace Cache to raise cache hit rates and reduce miss penalties, shortening memory access time and ultimately improving overall processor performance.

10.
As a key component of embedded processors, the on-chip cache can account for more than 50% of total processor power; a well-designed on-chip data storage unit can effectively reduce processor power and improve overall system performance. Scratchpad memory (SPM) occupies little on-chip area, consumes little power, and has deterministic access latency, which has made it a research focus in embedded systems. Building on SPM, this paper presents a design method for a dynamically configurable on-chip data storage unit and proposes SPM operation functions to simplify application development. Experimental results show that the data storage unit reduces energy consumption by more than 35% and shortens benchmark run time by 20.3% on average.

11.
Performance trade-offs between fast data access by local data replication and cache capacity maximization by global data sharing have been extensively studied for many-core Chip Multiprocessors (CMPs). Costly simulations over a wide spectrum of the design space are generally required to gain insight for a sound design. To lower the cost, we develop an abstract model for understanding the performance impact of data replication on CMP caches. To overcome the lack of real-time interactions among multiple cores in the model, we further develop an efficient single-pass stack simulation to study the performance of CMP cache organizations with various degrees of data replication. The global stack logically incorporates a shared stack and per-core private stacks; shared/private reuse (stack) distances can be collected in a single-pass simulation. With the reuse distances, one can calculate the performance of CMP cache organizations with various degrees of data replication. We verify both the model and the stack simulation against execution-driven simulations with commercial multithreaded workloads. The results show that the abstract model provides accurate information about performance trade-offs of data replication. The stack simulation accurately predicts the performance of various cache organizations with 2-9 percent error margins using only about 8 percent of the simulation time.
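The core statistic behind such a stack simulation, the LRU reuse (stack) distance, can be computed in a single pass as in the sketch below; this simplified version keeps one global stack, not the paper's combination of shared and per-core private stacks:

    # Single-pass LRU reuse (stack) distance profiling, the statistic such a stack
    # simulation collects. A reuse distance of d means the access would hit in any
    # LRU cache holding more than d blocks. Simplified single-stack version; the
    # paper's combined shared and per-core private stacks are not modeled.
    from collections import Counter

    def reuse_distances(trace):
        stack, histogram = [], Counter()
        for block in trace:
            if block in stack:
                d = stack.index(block)         # depth from the top = reuse distance
                histogram[d] += 1
                stack.remove(block)
            else:
                histogram["inf"] += 1          # cold (first-time) access
            stack.insert(0, block)             # move the block to the top
        return histogram

    def misses_for_capacity(histogram, capacity_blocks):
        """Misses an LRU cache of the given size would take on the same trace."""
        return sum(count for d, count in histogram.items()
                   if d == "inf" or d >= capacity_blocks)

    hist = reuse_distances([1, 2, 3, 1, 2, 4, 1, 2, 3, 4])
    print(hist)
    print("misses @ 2 blocks:", misses_for_capacity(hist, 2),
          "| @ 4 blocks:", misses_for_capacity(hist, 4))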

12.
A cache organization trading off latency and capacity in chip multiprocessors   Cited: 1 (self: 0, others: 1)
L2 cache design in chip multiprocessors faces a tension between latency and capacity that cannot both be satisfied at once: a private organization has low hit latency but reduces the cache's effective capacity, while a shared organization increases effective capacity but has higher hit latency. This paper proposes a cache organization for CMPs that trades off latency and capacity (TCLC). It is a hybrid of private and shared organizations whose core idea is to dynamically identify the sharing type of each cache block and optimize each type separately: private blocks are migrated, shared read-only blocks are replicated, and shared read-write blocks are placed centrally, so that access latency approaches that of a private organization while effective capacity approaches that of a shared one, mitigating the impact of wire delay and reducing average memory access latency. Full-system simulation shows that TCLC improves performance by 13.7% on average over a private organization and by 12% on average over a shared organization.
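The classification step that drives this kind of placement can be sketched as follows, here from a hypothetical offline access log rather than the dynamic hardware detection the paper uses:

    # Sketch of the sharing-type classification behind TCLC-style placement:
    # private blocks migrate to their owner, shared read-only blocks are
    # replicated, shared read-write blocks are placed centrally. The access log is
    # hypothetical; the paper detects sharing types dynamically in hardware.
    def classify(accesses):
        """accesses: iterable of (core_id, block, is_write) tuples."""
        readers, writers = {}, {}
        for core, block, is_write in accesses:
            readers.setdefault(block, set()).add(core)
            if is_write:
                writers.setdefault(block, set()).add(core)
        policy = {}
        for block, cores in readers.items():
            if len(cores) == 1:
                policy[block] = "private: migrate toward its only user"
            elif not writers.get(block):
                policy[block] = "shared read-only: replicate near each reader"
            else:
                policy[block] = "shared read-write: place in a central bank"
        return policy

    log = [(0, "A", False), (0, "A", True),      # block A touched only by core 0
           (0, "B", False), (1, "B", False),     # block B read by two cores
           (0, "C", False), (1, "C", True)]      # block C read and written
    print(classify(log))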

13.
This paper presents a helper thread prefetching scheme that is designed to work on loosely coupled processors, such as in a standard chip multiprocessor (CMP) system or an intelligent memory system. Loosely coupled processors have an advantage in that resources such as processor and L1 cache resources are not contended by the application and helper threads, hence preserving the speed of the application. However, interprocessor communication is expensive in such a system. We present techniques to alleviate this. Our approach exploits large loop-based code regions and is based on a new synchronization mechanism between the application and helper threads. This mechanism precisely controls how far ahead the execution of the helper thread can be with respect to the application thread. We found that this is important in ensuring prefetching timeliness and avoiding cache pollution. To demonstrate that prefetching in a loosely coupled system can be done effectively, we evaluate our prefetching by simulating a standard unmodified CMP system and an intelligent memory system where a simple processor in memory executes the helper thread. Evaluating our scheme with nine memory-intensive applications with the memory processor in DRAM achieves an average speedup of 1.25. Moreover, our scheme works well in combination with a conventional processor-side sequential L1 prefetcher, resulting in an average speedup of 1.31. In a standard CMP, the scheme achieves an average speedup of 1.33. Using a real CMP system with a shared L2 cache between two cores, our helper thread prefetching plus hardware L2 prefetching achieves an average speedup of 1.15 over the hardware L2 prefetching for the subset of applications with high L2 cache misses per cycle.
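The run-ahead control described above can be sketched with an ordinary counting semaphore: the helper thread may lead the application thread by at most a fixed number of iterations. The distance, data, and loop bodies below are hypothetical placeholders, not the paper's mechanism:

    # Sketch of bounding a helper thread's run-ahead distance with a counting
    # semaphore: the helper may lead the application by at most MAX_AHEAD
    # iterations. The distance, data, and loop bodies are hypothetical
    # placeholders, not the paper's synchronization mechanism.
    import threading

    MAX_AHEAD = 8                                   # allowed run-ahead slack
    data = list(range(64))
    slack = threading.Semaphore(MAX_AHEAD)          # tokens = iterations of slack

    def helper():
        for i in range(len(data)):
            slack.acquire()                         # block if too far ahead
            _ = data[i]                             # touch data[i] to warm the cache

    def application():
        total = 0
        for i in range(len(data)):
            total += data[i]                        # the real computation
            slack.release()                         # let the helper advance one step
        return total

    t = threading.Thread(target=helper)
    t.start()
    print("sum =", application())
    t.join()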

14.
This paper addresses cache organization in chip multiprocessors (CMPs). We show that in CMP systems it is valuable to distinguish between shared data, which is accessed by multiple cores, and private data accessed by a single core. We introduce Nahalal, an architecture whose novel floorplan topology partitions cached data according to its usage (shared versus private data), and thus enables fast access to shared data for all processors while preserving the vicinity of private data to each processor. Nahalal exhibits significant improvements in cache access latency compared to a traditional cache design.

15.
In this paper, we propose a novel on-chip L2 cache organization for chip multiprocessors (CMPs) with private L2 caches. The proposed approach, called reusability-aware cache sharing (RACS), combines the advantages of both a private L2 cache and a shared L2 cache. Since a private L2 cache organization has a short access latency, the RACS scheme employs a private L2 cache organization. However, when a cache block in a private L2 cache is selected for eviction, RACS first evaluates its reusability. If the block is likely to be reused in the near future, it may be saved to a peer L2 cache which has space available. In this way, the RACS scheme effectively simulates the larger capacity of a shared L2 cache. Simulation results show that RACS reduced the number of off-chip memory accesses by 24% compared to a pure private L2 cache organization on average for the SPLASH 2 multi-threaded benchmarks, and by 16% for multi-programmed benchmarks.
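A rough sketch of the eviction path described above: a victim judged reusable is spilled to a peer private L2 that has room, so a later access can hit in the peer instead of going off chip. The reusability test (the block was hit at least once) and the victim choice are simple placeholders, not the paper's actual policies:

    # Sketch of the RACS eviction path: a victim judged reusable is spilled to a
    # peer private L2 with room, so a later access can hit in the peer instead of
    # going off chip. The reusability test (block was hit at least once) and the
    # victim choice are simple placeholders, not the paper's actual policies.
    class PrivateL2:
        def __init__(self, capacity):
            self.capacity, self.lines = capacity, {}    # block -> hit count

        def has_room(self):
            return len(self.lines) < self.capacity

        def access(self, block, peers):
            if block in self.lines:
                self.lines[block] += 1                  # local hit
                return "hit"
            for peer in peers:
                if block in peer.lines:
                    return "peer hit"                   # served by a peer L2
            if not self.has_room():
                victim, hits = min(self.lines.items(), key=lambda kv: kv[1])
                del self.lines[victim]
                if hits > 0:                            # deemed reusable: spill it
                    for peer in peers:
                        if peer.has_room():
                            peer.lines[victim] = 0
                            break
            self.lines[block] = 0
            return "miss"

    l2_a, l2_b = PrivateL2(capacity=4), PrivateL2(capacity=4)
    for blk in [1, 1, 2, 2, 3, 3, 4, 4, 5, 1]:          # hypothetical access stream
        print(blk, l2_a.access(blk, peers=[l2_b]))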

16.
Shared memory provides an attractive and intuitive programming model for large-scale parallel computing, but requires a coherence mechanism to allow caching for performance while ensuring that processors do not use stale data in their computation. Implementation options range from distributed shared memory emulations on networks of workstations to tightly coupled fully cache-coherent distributed shared memory multiprocessors. Previous work indicates that performance varies dramatically from one end of this spectrum to the other. Hardware cache coherence is fast, but also costly and time-consuming to design and implement, while DSM systems provide acceptable performance on only a limited class of applications. We claim that an intermediate hardware option, memory-mapped network interfaces that support a global physical address space without cache coherence, can provide most of the performance benefits of fully cache-coherent hardware, at a fraction of the cost. To support this claim we present a software coherence protocol that runs on this class of machines, and use simulation to conduct a performance study. We look at both programming and architectural issues in the context of software and hardware coherence protocols. Our results suggest that software coherence on NCC-NUMA machines is a more cost-effective approach to large-scale shared-memory multiprocessing than either pure distributed shared memory or hardware cache coherence.

17.
Design and implementation of a cooperative caching strategy for opportunistic network nodes   Cited: 1 (self: 1, others: 0)
陈果  叶晖  赵明 《计算机工程》2010,36(18):85-87
To make effective use of the cooperative relationships between nodes in an opportunistic network and of each node's limited cache resources, while avoiding congestion and improving data transfer performance, this paper proposes HMP-Cache, a cooperative cache optimization strategy for opportunistic networks. Based on the characteristics of nodes in different motion states, the strategy selects cooperative caching nodes using a destination-address matching criterion, and synchronizes cache data tables so that caching information is shared within a local domain. Simulation results show that the strategy effectively controls the network overhead of data access and reduces the access latency for hot data in the network.
