Similar Documents
1.
As integrated-circuit fabrication technology advances, integrating large-capacity caches on chip has become the trend in microprocessor design. However, wire delay accounts for an ever larger share of access latency and has become the performance bottleneck of large caches, so new cache architectures are needed to overcome this problem. Non-uniform cache architecture (NUCA) reduces cache hit time and improves performance by supporting multiple latency levels and block migration inside the cache, thereby overcoming the wire-delay constraint on large caches, and it has become a research focus in on-chip memory hierarchies. This paper reviews progress on NUCA models, gives a detailed introduction to NUCA models for chip multiprocessors in particular, compares the contributions and shortcomings of the different models, and closes with an outlook on the future development of NUCA.

2.
Research on a Multi-Level Cache for RAID Controllers   Cited 1 time (0 self-citations, 1 by others)
This paper presents a two-level cache structure for RAID controllers. Physically, the cache is split into a read cache and a write cache, and the read cache itself has two levels: a set-associative cache holding small blocks and a fully associative "space" cache holding large blocks. Performance tests show that both the hit ratio and the number of hits improve under the two-level structure.

3.
To reduce the bus traffic and bus access time of write-back caches, this paper proposes a cache management scheme that performs updates at word granularity. It describes the scheme's organization, management algorithm, and workflow, and simulation analysis shows that, compared with a cache that writes back whole blocks, the scheme greatly reduces the number of main-memory accesses and the bus traffic.
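The abstract leaves the bookkeeping unspecified, so the following is only a minimal sketch of the idea under assumed parameters (8 words per line; the names `Line` and `WORDS_PER_LINE` are ours, not the paper's): each line carries a dirty bit per word, and eviction puts only the dirty words on the bus.

```python
WORDS_PER_LINE = 8                              # assumed line size

class Line:
    """One cache line with per-word dirty bits (word-granularity write-back)."""
    def __init__(self, tag, data):
        self.tag = tag
        self.data = list(data)                  # one entry per word
        self.dirty = [False] * WORDS_PER_LINE

    def write_word(self, offset, value):
        self.data[offset] = value
        self.dirty[offset] = True               # mark only this word dirty

    def write_back(self, memory, base_addr):
        # Block-granularity write-back would transfer all WORDS_PER_LINE
        # words; here only the dirty ones cross the bus.
        for i, is_dirty in enumerate(self.dirty):
            if is_dirty:
                memory[base_addr + i] = self.data[i]
        return sum(self.dirty)                  # words actually transferred

mem = {}
line = Line(tag=0x40, data=[0] * WORDS_PER_LINE)
line.write_word(3, 0xABCD)
assert line.write_back(mem, base_addr=0x400) == 1   # 1 word on the bus, not 8
```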

4.
An Adaptive Cache Write-Allocation Policy   Cited 1 time (0 self-citations, 1 by others)
The effective bandwidth a processor can deliver is currently a key factor limiting performance. Based on an analysis of cache write-miss behavior, this paper proposes a new write-miss handling policy that improves bandwidth utilization: adaptive write allocation. The policy detects fully modified cache blocks in the miss queue, applies no-write-allocate to those blocks, and can adaptively switch back to write allocation. Compared with traditional write-miss handling, adaptive write allocation has a small hardware cost, avoids unnecessary data transfers, reduces cache pollution, and lowers how often the memory-management queues stall. Results show that under this policy the bandwidth of the STREAM benchmarks improves by 62.6% on average and the IPC of the SPEC CPU2000 programs improves by 5.9% on average.
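As a rough illustration of the full-modification test (not the authors' hardware design), the sketch below coalesces write misses per block in a miss-queue entry and skips the memory fill once every byte of the block is covered; the byte-granularity bitmap and 64-byte line are assumptions.

```python
LINE_BYTES = 64                                  # assumed cache-line size

class WriteMissEntry:
    """Pending write miss: records which bytes of the block were written."""
    def __init__(self, block_addr):
        self.block_addr = block_addr
        self.written = bytearray(LINE_BYTES)     # 1 = byte already written

    def record_write(self, offset, length):
        for i in range(offset, offset + length):
            self.written[i] = 1

    def fully_modified(self):
        return all(self.written)

def handle_write_miss(entry, fetch_block):
    if entry.fully_modified():
        # No-write-allocate: every byte will be overwritten anyway, so
        # fetching the stale block from memory would waste bandwidth.
        return bytearray(LINE_BYTES)
    return fetch_block(entry.block_addr)         # ordinary write allocation

entry = WriteMissEntry(0x1000)
entry.record_write(0, LINE_BYTES)                # stores covered the whole line
assert entry.fully_modified()                    # fill skipped, bandwidth saved
```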

5.
吴柯 《电脑学习》2007,(2):49-50
Describes the design of a demonstration system for cache data consistency that can show the dynamic read/write behavior of a cache under different mapping rules and write policies.

6.
A Survey of Cache Attacks on Block Ciphers   Cited 2 times (0 self-citations, 2 by others)
In recent years, cache attacks have become the biggest security threat to block-cipher implementations on microprocessors, and they are a hot topic in cryptographic side-channel research. This paper surveys cache attacks on block ciphers. It explains how caches work and the difference in side-channel information between cache hits and misses, analyzes the cache-access characteristics of block-cipher table lookups and the information they leak, reviews representative cache-attack techniques from three angles (attack model, analysis method, and research progress), summarizes how cache attacks have developed, and finally points out open problems and future research directions in this area.

7.
Caches are one of the effective means by which high-performance microprocessors bridge the speed gap between the CPU and memory. In a shared-memory multiprocessor environment, shared data is distributed across the on-chip caches of several processors, so maintaining data consistency among the caches becomes critical. This paper discusses the design and implementation of the cache of the "Longteng R2" (龙腾R2) 32-bit embedded microprocessor, along with the cache-coherence mechanism that supports multiprocessor environments, and presents the implementation results.

8.
Finding new storage materials to replace DRAM main memory is a current research focus. Phase-change memory (PCM) has attracted wide attention for its low power consumption, high storage density, and non-volatility; however, PCM endures only a limited number of writes, so using it as main memory requires reducing the writes issued to it. One effective approach to this problem is to optimize the cache replacement policy so that fewer dirty blocks are evicted from the cache. Existing work mainly protects dirty blocks by giving them higher priority on insertion and on hits, but it no longer distinguishes dirty from clean blocks during demotion, so the cache may still evict a dirty block first even when many clean blocks are present. This paper proposes a new cache replacement policy, MAC, which uses a multi-level structure to place a strict boundary between dirty and clean blocks, giving dirty blocks stronger protection. Simulations show that, compared with LRU, MAC reduces memory writes by about 25.12% on average at a low hardware cost, with almost no impact on program performance.
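The abstract does not spell out MAC's multi-level structure, but its guiding rule (never evict a dirty block while any clean block remains) can be sketched on top of a plain LRU list; the single-list organization below is our simplification, not the paper's design.

```python
from collections import OrderedDict

class CleanFirstCache:
    """LRU cache that victimizes clean blocks before dirty ones (PCM-friendly)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()               # addr -> dirty flag, LRU -> MRU

    def access(self, addr, is_write):
        if addr in self.lines:
            dirty = self.lines.pop(addr) or is_write
            self.lines[addr] = dirty             # move to MRU position
            return
        if len(self.lines) >= self.capacity:
            self._evict()
        self.lines[addr] = is_write

    def _evict(self):
        # Prefer the least-recently-used *clean* block: evicting it costs
        # no PCM write. Fall back to the LRU dirty block only if forced.
        for addr, dirty in self.lines.items():   # iterates LRU -> MRU
            if not dirty:
                del self.lines[addr]
                return
        self.lines.popitem(last=False)           # all dirty: evict LRU
```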

9.
王冶  张盛兵  王党辉 《计算机工程》2012,38(1):268-269,272
To lower the energy consumed by the on-chip cache of a microprocessor, this paper designs an instruction cache based on a prefetch-buffer mechanism. Guided by the predictions of a prefetch-buffer control unit, the instructions the processor needs hit in the buffer as often as possible, avoiding the power cost of accessing the instruction cache. Simulations of seven benchmark programs show that the prefetch-buffer mechanism saves 23.23% of processor power while improving program performance by 7.53% on average.
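The abstract gives few details of the predictor, so the following is only a toy software model under assumed parameters: a small buffer staged with sequentially predicted lines, where buffer hits bypass the instruction cache (and its access energy).

```python
LINE_WORDS = 4                                   # instructions per line (assumed)

class PrefetchBuffer:
    def __init__(self, depth=2):
        self.depth = depth
        self.buffered = set()                    # line numbers held in the buffer

    def fetch(self, pc, icache_read):
        line = pc // LINE_WORDS
        hit = line in self.buffered
        if not hit:
            icache_read(line)                    # only misses touch the I-cache
        # Straight-line prediction: keep this line and the next `depth`
        # lines staged so upcoming fetches can bypass the I-cache.
        self.buffered = {line + i for i in range(self.depth + 1)}
        return hit

reads, buf = [], PrefetchBuffer()
hits = [buf.fetch(pc, reads.append) for pc in range(32)]  # straight-line code
assert len(reads) == 1                           # only the very first line missed
```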

10.
An Analysis of the Memory Hierarchy of the Pentium 4 Processor   Cited 2 times (0 self-citations, 2 by others)
吴金  齐欢 《微机发展》2004,14(7):47-48,51
The efficiency of a processor's memory system plays a very important role in its overall performance. This paper introduces the memory architecture of the P4 processor, comprising the L1 data cache, the L2 cache, and the trace cache; the function each part performs; and the prefetch mechanisms adopted to raise the hit ratio and reduce access time. The P4 mainly relies on a hierarchical memory design, a large L2 cache, and prefetching in the trace cache to raise cache hit ratios and lower the miss penalty, thereby shortening access time and improving overall processor performance.

11.
We describe a data deduplication system for backup storage of PC disk images, named in-RAM metadata utilizing deduplication (IR-MUD). In-RAM hash granularity adaptation and miniLZO-based data compression are first proposed to reduce the in-RAM metadata size and thereby the space overhead of the in-RAM metadata caches. Second, an in-RAM metadata write cache, as opposed to the traditional metadata read cache, is proposed to further reduce metadata-related disk I/O operations and improve deduplication throughput. During deduplication, the metadata write cache is managed under an LRU policy: for each manifest that hits in the write cache, an expensive manifest reload from disk is avoided. After deduplication, all manifests in the write cache are cleared and stored on disk. Our experimental results using a 1.5 TB real-world disk-image dataset show that 1) IR-MUD achieves about 95% size reduction for the deduplication metadata at the cost of a small time overhead; 2) without the metadata write cache, and with the same RAM budget for the metadata read cache, IR-MUD achieves a 400% higher RAM hit ratio and a 50% higher deduplication throughput than the classic Sparse Indexing deduplication system, which uses no metadata utilization approaches; and 3) with the metadata write cache and enough RAM, IR-MUD achieves a 500% higher RAM hit ratio than Sparse Indexing and a 70% higher deduplication throughput than IR-MUD with only a single metadata read cache. The in-RAM metadata harnessing and metadata write caching approaches of IR-MUD can be applied in most parallel deduplication systems to improve metadata caching efficiency.
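A minimal sketch of the manifest write cache as described (LRU-managed, hits avoid reloading a manifest from disk, everything flushed after deduplication); `load_manifest` and `store_manifest` are placeholder callbacks, not IR-MUD's actual API.

```python
from collections import OrderedDict

class ManifestWriteCache:
    def __init__(self, capacity, load_manifest, store_manifest):
        self.capacity = capacity
        self.load = load_manifest                # expensive disk read
        self.store = store_manifest              # disk write
        self.cache = OrderedDict()               # manifest_id -> manifest

    def get(self, manifest_id):
        if manifest_id in self.cache:            # hit: no disk reload needed
            self.cache.move_to_end(manifest_id)
            return self.cache[manifest_id]
        manifest = self.load(manifest_id)        # miss: expensive reload
        self.cache[manifest_id] = manifest
        if len(self.cache) > self.capacity:
            old_id, old = self.cache.popitem(last=False)   # LRU eviction
            self.store(old_id, old)
        return manifest

    def flush(self):
        # After deduplication: persist and clear every cached manifest.
        for manifest_id, manifest in self.cache.items():
            self.store(manifest_id, manifest)
        self.cache.clear()
```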

12.
Caches are essential to bridge the gap between high-latency main memory and the fast processor pipeline. Standard processor architectures implement two first-level caches to avoid a structural hazard in the pipeline: an instruction cache and a data cache. For tight worst-case execution times it is important to classify memory accesses as either cache hits or cache misses. The addresses of instruction fetches are known statically, so static hit/miss classification is possible for the instruction cache. Accesses to data cached in the data cache are harder to predict statically: several different data areas, such as the stack, global data, and heap-allocated data, share the same cache, and while some addresses are known statically, others are known only at runtime. With a standard cache organization, all of these data areas must be considered by worst-case execution time analysis. In this paper we propose splitting the data cache across the different data areas, so that data-cache analysis can be performed individually for each area; an access to an unknown heap address then cannot destroy the abstract cache state of the other data areas. Furthermore, we propose a small, highly associative cache for the heap area. We designed and implemented a static analysis for this cache and integrated it into a worst-case execution time analysis tool.
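A toy model of the proposed split: each access is routed to a per-area cache, so an access to an unknown heap address can only perturb the heap cache's state, never the stack or global areas. The area boundaries, line size, and set-based cache model are illustrative assumptions.

```python
STACK_BASE, GLOBAL_TOP = 0xF000_0000, 0x1000_0000   # assumed address map

def classify(addr):
    if addr >= STACK_BASE:
        return "stack"
    if addr < GLOBAL_TOP:
        return "global"
    return "heap"                                # paper: small, highly associative

class SplitDataCache:
    def __init__(self):
        self.areas = {"stack": set(), "global": set(), "heap": set()}

    def access(self, addr):
        area = classify(addr)
        tag = addr // 32                         # 32-byte lines (assumed)
        hit = tag in self.areas[area]
        self.areas[area].add(tag)                # toy model: eviction not shown
        return area, hit
```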

13.
Cache acceleration techniques can exploit the high random-access performance of solid-state disks (SSDs) to improve the random read/write performance of mechanical hard disks. Traditional cache acceleration techniques struggle to cope with hot-data access patterns typical of big data, such as high concurrency and intermittent but frequent accesses. To raise overall cache performance, this paper proposes CVSL (cache policy based on the virtual storage layer), which combines caching with tiered storage: through heat statistics and logical data migration, it realizes cache control based on logical data tiers. Experimental results show that, compared with traditional cache policies, CVSL improves random read/write performance by 9%-10% without noticeable fluctuation and performs well on cache hit ratio, meeting its design goals.
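The "heat statistics plus logical migration" loop can be sketched as a periodic pass that promotes the hottest blocks into the SSD tier; the counter-based heat metric and rebalancing scheme below are assumptions, not CVSL's published design.

```python
from collections import Counter

class VirtualStorageLayer:
    def __init__(self, ssd_capacity):
        self.heat = Counter()                    # block -> access count
        self.ssd_capacity = ssd_capacity
        self.ssd_tier = set()                    # blocks logically on SSD

    def record(self, block):
        self.heat[block] += 1                    # heat statistics per access

    def rebalance(self):
        # Logical migration: the hottest blocks belong on the SSD tier,
        # everything else stays on (or returns to) the mechanical disks.
        hottest = {b for b, _ in self.heat.most_common(self.ssd_capacity)}
        promote = hottest - self.ssd_tier
        demote = self.ssd_tier - hottest
        self.ssd_tier = hottest
        return promote, demote
```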

14.
This paper designs and implements a classified write-behind technique based on how active the cached data in the buffer is. By caching data of different activity levels in separate classes and delaying the flush to disk, disk writes are reduced and merged, improving file-system write performance. Preliminary results show that classified write-behind gives shorter system response times than an LRU caching policy, cutting write feedback time by 11.3%, and the cache hit ratio under the RWB policy is higher than under LRU.
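A minimal sketch of classified write-behind under assumed thresholds: repeated writes to the same page merge in memory, pages are classified by write activity, and only cold pages that have aged out are flushed, so disk writes are both reduced and merged. The class boundary and age limit are ours, not the paper's.

```python
import time

class WriteBehindBuffer:
    def __init__(self, flush_fn, hot_threshold=3):
        self.flush_fn = flush_fn                 # the actual disk write
        self.hot_threshold = hot_threshold       # assumed class boundary
        self.pages = {}                          # page -> [data, writes, last_ts]

    def write(self, page, data):
        entry = self.pages.get(page)
        if entry:                                # merge with the pending write
            entry[0], entry[1], entry[2] = data, entry[1] + 1, time.time()
        else:
            self.pages[page] = [data, 1, time.time()]

    def flush_cold(self, max_age=5.0):
        # Active (hot) pages stay buffered; inactive pages old enough are
        # written out once, no matter how many writes they absorbed.
        now = time.time()
        for page in list(self.pages):
            data, writes, last = self.pages[page]
            if writes < self.hot_threshold and now - last > max_age:
                self.flush_fn(page, data)
                del self.pages[page]
```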

15.
Future storage systems are expected to contain a wide variety of storage media and layers due to the rapid development of NVM (non-volatile memory) techniques. For NVM-based read caches, many kinds of NVM devices cannot stand frequent data updates, owing to limited write endurance or the high energy cost of writing. However, traditional cache algorithms have to update cached blocks frequently, because it is difficult for them to predict long-term popularity from the limited information they keep about data blocks, such as a single value or a queue reflecting frequency or recency. In this paper, we propose a new MacroTrend (macroscopic trend) prediction method that discovers long-term hot blocks through the macro trends illustrated by their access-count histograms. A new cache replacement algorithm is then designed based on the MacroTrend prediction to greatly reduce the write amount while improving the hit ratio. We conduct extensive experiments driven by a series of real-world traces and find that, compared with LRU, MacroTrend reduces the write amounts of NVM cache devices significantly at similar hit ratios, leading to a longer NVM lifetime or lower energy consumption.
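A rough sketch of histogram-based trend detection in this spirit: keep a short per-block histogram of access counts over recent windows and treat a rising histogram as a long-term-hot signal. The window length and the monotonicity test are assumptions, not the paper's exact predictor.

```python
from collections import defaultdict, deque

class TrendPredictor:
    def __init__(self, windows=4, window_len=1000):
        self.windows = windows
        self.window_len = window_len
        self.hist = defaultdict(lambda: deque([0] * windows, maxlen=windows))
        self.ticks = 0

    def record_access(self, block):
        self.hist[block][-1] += 1                # count within current window
        self.ticks += 1
        if self.ticks % self.window_len == 0:
            for h in self.hist.values():
                h.append(0)                      # roll all histograms forward

    def is_long_term_hot(self, block):
        h = list(self.hist[block])
        # A non-decreasing, non-trivial histogram suggests a rising macro
        # trend, so admitting the block should cause few future NVM writes.
        return h[-1] > 0 and all(a <= b for a, b in zip(h, h[1:]))
```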

16.
Solid-state drives (SSDs) have been widely used as a caching tier for disk-based RAID systems to speed up data-intensive applications. However, traditional cache schemes fail to effectively boost parity-based RAID storage systems (e.g., RAID-5/6), which have poor random write performance due to the small-write problem. Worse, intensive cache writes can wear out the SSD quickly, causing performance degradation and higher cost. In this article, we present the design and implementation of KDD, an efficient SSD-based caching system that Keeps Data and Deltas in SSD. When write requests hit in the cache, KDD dispatches the data to the RAID storage without updating the parity blocks, to mitigate the small-write penalty, and compactly stores the compressed deltas in the SSD to reduce cache write traffic while guaranteeing reliability in case of disk failures. In addition, KDD organizes the metadata partition on the SSD as a circular log to make the cache persistent with low overhead. We evaluate the performance of KDD via both simulations and a prototype implementation. Experimental results show that KDD effectively reduces the small-write penalty while extending the lifetime of the SSD-based cache by up to 6.85 times.
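The "compressed deltas" idea is easy to illustrate with an XOR delta plus a general-purpose compressor; `zlib` here merely stands in for whatever codec KDD uses, which the abstract does not name.

```python
import zlib

def make_delta(old_block: bytes, new_block: bytes) -> bytes:
    # XOR delta: zero wherever the block is unchanged, so it compresses well.
    xor = bytes(a ^ b for a, b in zip(old_block, new_block))
    return zlib.compress(xor)

def apply_delta(old_block: bytes, delta: bytes) -> bytes:
    xor = zlib.decompress(delta)
    return bytes(a ^ b for a, b in zip(old_block, xor))

# On a write hit, persist the small delta instead of rewriting the block.
old = bytes(4096)
new = bytes(4095) + b"\x01"
delta = make_delta(old, new)
assert apply_delta(old, delta) == new
assert len(delta) < len(new)                     # far less SSD write traffic
```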

17.
An on-board disk cache is an effective way to improve disk performance by reducing the number of physical accesses to the magnetic media. Disk drive manufacturers keep increasing the on-board cache size to match the capacity growth of the backend magnetic media; some disk drives nowadays have a 32 MB cache. Modern computer systems use large amounts of memory to improve performance; any data brought into host memory will be re-accessed there, not in the on-board disk cache. This has a significant impact on the behavior of the disk cache, because computer systems are complex systems whose components are correlated with one another: a specific component cannot be isolated from the overall system when its performance behavior is analyzed. This paper employs four block-level real traces to explore the performance behavior of the on-board disk cache, taking into account the cache hierarchy of the host system. The analysis yields three major implications: (1) the I/O stream at block level contains negligible temporal locality, so a read/write cache achieves only marginal benefits; (2) a static write cache brings no performance gain, since the write stream does not interfere much with the read stream, so it is better to leave the on-board disk cache shared by both the write and read streams; (3) aside from prefetching, the read cache dominates the contribution to the hit ratio, so it is better to focus on improving the read performance of the disk cache rather than its write performance.

18.
Hash tables are a data-indexing structure that provides efficient access to data by key; they are widely used in computer applications, especially in system software, databases, and high-performance computing. In networking, cloud computing, and IoT services, hash tables have become core components of cache systems. However, as data volumes grow, hash-table designs centered on multi-core CPUs have gradually become a performance bottleneck, and there is an urgent need to further improve hash tables' performance and scalability. With the growing popularity of general-purpose graphics processing units (GPUs) and the substantial improvement in their computing power and concurrency, many system-software tasks built around parallel computation have been ported to and optimized on GPUs, with considerable performance gains. Because hash-table accesses are sparse and random, directly reusing existing parallel hash-table structures on GPUs inevitably causes high-frequency memory accesses and frequent bus transfers, which limits hash-table performance on GPUs. This study analyzes the memory accesses, hit ratios, and index overheads of hash-table indexes in cache systems, and proposes CCHT (Cache Cuckoo Hash Table), a hybrid-access cache-indexing framework adapted to GPUs. Its cache strategies suit different requirements on hit ratio and index overhead, allow write and query operations to execute concurrently, make full use of the computing power and concurrency of GPU hardware, and reduce memory-access and bus-transfer overheads. A GPU hardware implementation and experimental verification show that CCHT outperforms other cache-indexing hash tables while maintaining the cache hit ratio.
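CCHT itself is a GPU design, but the cuckoo hashing it builds on is easy to sketch on the CPU: every key has two candidate slots, and insertion kicks occupants along a chain of alternates. The table below is a generic two-choice cuckoo table, not CCHT's cache-aware GPU layout; the second hash mix is an arbitrary choice.

```python
class CuckooHashTable:
    def __init__(self, capacity=1024, max_kicks=32):
        self.capacity = capacity
        self.max_kicks = max_kicks
        self.slots = [None] * capacity           # each slot: (key, value) or None

    def _positions(self, key):
        h1 = hash(key) % self.capacity
        h2 = hash((key, 0x9E3779B9)) % self.capacity   # second hash (assumed mix)
        return h1, h2

    def get(self, key):
        for pos in self._positions(key):
            item = self.slots[pos]
            if item is not None and item[0] == key:
                return item[1]
        return None

    def put(self, key, value):
        h1, h2 = self._positions(key)
        for pos in (h1, h2):
            if self.slots[pos] is None or self.slots[pos][0] == key:
                self.slots[pos] = (key, value)
                return True
        pos, entry = h1, (key, value)            # both slots full: start kicking
        for _ in range(self.max_kicks):
            self.slots[pos], entry = entry, self.slots[pos]
            alt1, alt2 = self._positions(entry[0])
            pos = alt2 if pos == alt1 else alt1  # displaced key's other slot
            if self.slots[pos] is None:
                self.slots[pos] = entry
                return True
        return False                             # would trigger a rehash in practice

table = CuckooHashTable()
table.put("block:42", "cached-value")
assert table.get("block:42") == "cached-value"
```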
