Similar Articles
Found 20 similar articles (search time: 71 ms)
1.
Client-Side Cache Management for Embedded Mobile Databases   Cited 7 times (self: 0, others: 7)
Embedded mobile databases are an emerging technology with broad application prospects. They combine the characteristics of embedded systems and mobile computing, and many of their key techniques remain to be studied and solved. This paper focuses on client-side cache management under data broadcasting: by examining cache replacement policies, data prefetching strategies, and data consistency in mobile database applications, it explores how to reduce the average response time of client access requests.

2.
Design and Implementation of the Cache Mechanism in the GridDaen Data Grid   Cited 2 times (self: 0, others: 2)
黄斌, 彭小宁, 肖侬, 刘波. 《计算机工程》, 2005, 31(10): 119-120
A data grid is a typical distributed system, and accessing the massive, wide-area-distributed data it holds incurs large time overheads. This paper presents the cache technique that gives the GridDaen data grid system fast, uniform data access. GridDaen adopts a two-level cache mechanism, using two data buffer tables to quickly locate cached data and control access to it; it gives the data replacement algorithm for each cache level and provides flexible configuration options, allowing the cache to be deployed independently of clients and servers and thus making the cache scalable.
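The abstract names the two-table mechanism but not its layout, so the following C sketch is only one plausible reading, with hypothetical types and field names: one table maps a data identifier to its position in the buffer (fast location), while a second table carries the usage metadata that gates access and drives replacement.

```c
#include <string.h>
#include <time.h>

#define TABLE_SIZE 1024

typedef struct {            /* table 1: where a cached object lives */
    char   key[64];
    size_t offset, length;  /* position in the buffer area */
    int    in_use;
} location_entry_t;

typedef struct {            /* table 2: access control / replacement state */
    time_t last_access;
    int    readers;         /* >0 blocks eviction of the entry */
} control_entry_t;

static location_entry_t loc_tab[TABLE_SIZE];
static control_entry_t  ctl_tab[TABLE_SIZE];

/* Locate a cached object by key; a real implementation would hash the
 * key instead of scanning linearly. Returns its slot, or -1 on a miss. */
int cache_locate(const char *key)
{
    for (int i = 0; i < TABLE_SIZE; i++) {
        if (loc_tab[i].in_use && strcmp(loc_tab[i].key, key) == 0) {
            ctl_tab[i].last_access = time(NULL);  /* feed replacement policy */
            ctl_tab[i].readers++;                 /* pin while being read */
            return i;
        }
    }
    return -1;  /* miss: fetch from the grid and install in a free slot */
}
```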

3.
A Low-Power Design Scheme for Hybrid Caches   Cited 1 time (self: 0, others: 1)
In embedded processors, caches account for a growing share of power consumption. To reduce the power consumed by the hybrid cache in embedded systems, this paper introduces PPBRA, a program-phase-based reconfiguration algorithm, and proposes a new reconfigurable hybrid cache organization with classified access. The scheme dynamically allocates the instruction ways and data ways of the hybrid cache according to each program phase's capacity demand, and classifies accesses so that unnecessary ways are not probed, thereby reducing the hybrid cache's power consumption. Mibench simulation results show that the scheme lowers cache power effectively while also improving overall cache performance.
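The abstract does not spell out PPBRA itself; the C sketch below (hypothetical sizes and names) only illustrates the classified-access idea the scheme relies on: with the ways of the hybrid cache partitioned between instructions and data, a lookup probes only the ways assigned to its class, so the remaining ways are never activated.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_WAYS 8
#define NUM_SETS 64

typedef enum { ACC_INSTR, ACC_DATA } access_class_t;

typedef struct {
    bool     valid;
    uint32_t tag;
} line_t;

static line_t  cache[NUM_SETS][NUM_WAYS];
static uint8_t instr_ways = 4;  /* ways [0, instr_ways) hold instructions,
                                   ways [instr_ways, NUM_WAYS) hold data */

/* Reconfiguration hook: a phase detector (PPBRA's role) would call this. */
void set_partition(uint8_t new_instr_ways) { instr_ways = new_instr_ways; }

/* Probe only the ways belonging to the access class; the other ways are
 * never activated, which is where the dynamic-power saving comes from. */
bool lookup(access_class_t cls, uint32_t addr)
{
    uint32_t set = (addr / 64) % NUM_SETS;   /* 64-byte lines (assumed) */
    uint32_t tag = (addr / 64) / NUM_SETS;
    int lo = (cls == ACC_INSTR) ? 0 : instr_ways;
    int hi = (cls == ACC_INSTR) ? instr_ways : NUM_WAYS;
    for (int w = lo; w < hi; w++)
        if (cache[set][w].valid && cache[set][w].tag == tag)
            return true;                     /* hit */
    return false;                            /* miss */
}
```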

4.
To improve cache utilization in embedded systems, and observing that different kinds of applications place different real-time demands on instruction- and data-cache capacity, this paper proposes a sliding cache organization that balances instruction and data cache demand and dynamically adjusts the capacity and configuration of the L1 cache. The sliding structure reduces not only the dynamic power and static leakage power of the L1 cache but also the dynamic power of the processor as a whole. Simulation results show that the scheme effectively reduces cache power while improving overall cache performance.

5.
A data grid is a typical distributed system, and accessing the massive, wide-area-distributed data it holds incurs large time overheads. This paper presents a cache model that provides fast, uniform data access in a data grid system. The model adopts a two-level cache mechanism, using two data buffer tables to quickly locate cached data and control access to it; it gives the data replacement algorithm for each cache level and provides flexible configuration options, allowing the cache to be deployed independently of clients and servers and thus making the cache scalable.

6.
郝俊京, 孙义, 肖展业. 《计算机应用》, 2003, 23(11): 146-148
This paper presents a new system for dynamic service deployment. By improving on the traditional diskless workstation, it lets the server dynamically assign different operating systems or services to remote clients at boot time. Moreover, given the environment already installed on one machine, the system can produce a "clone" of that environment within minutes and assign it to a new user or client, eliminating the traditional need to reinstall the operating system and software on every machine.

7.
This paper describes the cache mechanism used in a distributed virtual environment system under a hierarchical, multi-server/client data management model, and evaluates the data query and retrieval capability of that architecture. The test results show that the data management system delivers good data-service performance.

8.
Introducing caches into a tightly coupled multiprocessor system brings two benefits: it shortens the CPU's memory access time, and it lightens the load on the interconnection network and on memory. It also, however, raises the problem of keeping the multiple caches consistent. Many cache coherence schemes have been proposed over the years, but in practice they are often unsatisfactory. This paper proposes a new and efficient cache coherence scheme that fuses the advantages of the broadcast method and the directory method: it has low hardware and software overhead, scales easily, and is simple to implement. The scheme suits tightly coupled, high-performance multiprocessor systems with private caches and an interconnection network. Analysis shows that it yields a considerable improvement in system speed and outperforms both the broadcast and the directory methods in hardware overhead and time overhead.

9.
This paper examines cache coherence, one of the difficult problems in multiprocessor systems, and introduces the MESI protocol, which has already been adopted in commercial systems.
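MESI is a standard protocol, so its core can be sketched from general knowledge rather than from the paper: each cache line is in one of four states, and transitions are driven by the local core's reads and writes and by requests snooped from the bus.

```c
/* Minimal MESI state machine for one cache line (general-knowledge
 * sketch, not code from the cited paper). */
typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_t;
typedef enum { LOCAL_READ, LOCAL_WRITE,      /* from this core   */
               BUS_READ,   BUS_WRITE } ev_t; /* snooped from bus */

mesi_t mesi_next(mesi_t s, ev_t e, int others_have_copy)
{
    switch (e) {
    case LOCAL_READ:   /* miss in INVALID loads Shared or Exclusive;
                        * hits in S/E/M keep their state */
        return (s == INVALID)
             ? (others_have_copy ? SHARED : EXCLUSIVE) : s;
    case LOCAL_WRITE:  /* any local write ends in Modified
                        * (S/I first broadcast an invalidate) */
        return MODIFIED;
    case BUS_READ:     /* another core reads: M writes back; E/M -> S */
        return (s == INVALID) ? INVALID : SHARED;
    case BUS_WRITE:    /* another core writes: our copy is now stale */
        return INVALID;
    }
    return s;
}
```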

10.
谭成辉, 杨磊, 文建国, 李肯立. 《计算机工程》, 2011, 37(5): 270-272, 275
This paper designs and implements HCTS, a transparent computing system based on hierarchical caching that uses two cache levels, on the client side and the server side, to improve I/O performance. For cache management, targeting the transparent-computing environment and taking hit rate as the main goal, it proposes LRU-AFS, an improved LRU replacement algorithm based on an access-frequency-count threshold. Tests show that as the number of client hosts in the network grows, HCTS reduces network traffic while greatly shortening client boot time and improving random read/write throughput, compared with an ordinary transparent computing system (TS).
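The abstract names LRU-AFS but not its exact rules. One plausible reading (the threshold value and decay policy below are assumptions, not the paper's) is an LRU list whose victim search spares blocks whose access-frequency counter has reached the threshold:

```c
#define AFS_THRESHOLD 4   /* hypothetical access-frequency threshold */

typedef struct node {
    int          key;
    unsigned     freq;          /* access-frequency counter */
    struct node *prev, *next;   /* doubly linked LRU list   */
} node_t;

/* head = most recently used, tail = least recently used.
 * On eviction, walk from the LRU end; a block whose counter has reached
 * the threshold gets a second chance (counter halved, block skipped)
 * instead of being evicted immediately. */
node_t *pick_victim(node_t *tail)
{
    for (node_t *n = tail; n; n = n->prev) {
        if (n->freq < AFS_THRESHOLD)
            return n;            /* cold block: evict */
        n->freq /= 2;            /* hot block: decay and spare */
    }
    return tail;                 /* everything is hot: plain LRU */
}
```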

11.
A new cache architecture based on temporal and spatial locality   Cited 5 times (self: 0, others: 5)
A data cache system is designed as a low-power, high-performance cache structure for embedded processors. A direct-mapped cache is a favorite choice for short cycle times but suffers from a high miss rate, so the proposed dual data cache is an approach to improving the miss ratio of a direct-mapped cache without affecting its access time. The proposed cache system exploits temporal and spatial locality effectively by maximizing the effective cache memory space for any given cache size. It consists of two caches: a direct-mapped cache with a small block size and a fully associative spatial buffer with a large block size. Temporal locality is exploited by selectively caching candidate small blocks in the direct-mapped cache, while spatial locality is exploited aggressively by fetching multiple neighboring small blocks whenever a cache miss occurs. Comparison and analysis show that similar performance can be achieved with one quarter of the cache size of a conventional direct-mapped cache, and that the power consumption of the proposed cache is about 4% lower than that of a victim cache configuration.
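A C sketch of the described organization (sizes, promotion rule, and victim policy are assumptions, not the paper's figures): a direct-mapped cache of small blocks backed by a small fully associative spatial buffer of large blocks, with a whole large block, i.e. several neighboring small blocks, fetched on a miss.

```c
#include <stdbool.h>
#include <stdint.h>

#define SMALL_BLK    8         /* bytes, assumed */
#define LARGE_BLK   64         /* one large block = 8 small sub-blocks */
#define DM_SETS    256
#define SB_ENTRIES  16         /* fully associative spatial buffer */

typedef struct { bool valid; uint32_t tag; }               dm_line_t;
typedef struct { bool valid; uint32_t tag; uint8_t refs; } sb_line_t;

static dm_line_t dm[DM_SETS];
static sb_line_t sb[SB_ENTRIES];

bool access_cache(uint32_t addr)
{
    uint32_t s_idx = (addr / SMALL_BLK) % DM_SETS;
    uint32_t s_tag = (addr / SMALL_BLK) / DM_SETS;
    uint32_t l_tag =  addr / LARGE_BLK;

    /* Both structures would be probed in parallel in hardware. */
    if (dm[s_idx].valid && dm[s_idx].tag == s_tag)
        return true;                       /* temporal-locality hit */

    for (int i = 0; i < SB_ENTRIES; i++)
        if (sb[i].valid && sb[i].tag == l_tag) {
            /* Re-referenced sub-blocks become candidates for promotion
             * into the direct-mapped cache (policy details assumed). */
            if (++sb[i].refs > 1) { dm[s_idx].valid = true; dm[s_idx].tag = s_tag; }
            return true;                   /* spatial-locality hit */
        }

    /* Miss: fetch the whole large block (multiple neighboring small
     * blocks) into the spatial buffer; FIFO victim choice assumed. */
    static int next;
    sb[next] = (sb_line_t){ true, l_tag, 0 };
    next = (next + 1) % SB_ENTRIES;
    return false;
}
```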

12.
The advent of chip multiprocessing poses many challenges for processor design and implementation, and the design of the on-chip memory system is among the most important. To mitigate the increasingly severe memory-wall problem, researchers typically place a large last-level cache on chip, and the design and optimization of on-chip last-level caches has become an active research topic. This paper surveys the challenges in designing the last-level cache of a chip multiprocessor (CMP), then describes and compares a range of CMP last-level-cache optimization techniques built on private and shared designs.

13.
The widening gap between processor and memory speeds makes the cache an important issue in computer system design. Compared with the working sets of programs, cache resources are often scarce, so it is very important for a computer system to use its cache efficiently. For DOOC (Data-Object Oriented Cache), a dynamically reconfigurable cache proposed recently, this paper proposes a quantitative framework for analyzing the cache requirements of data objects, including cache capacity, block size, associativity, and coh...

14.
The object of this paper is to describe a system for caching Internet traffic. The task is to create an analytical model of a cache system that links its size to other parameters through boundary conditions. A definition of a dynamic cache model is introduced, the parameters of the cache system are calculated using Zipf's first law and a Zipf-like distribution, and the correspondence between the size of the cache system and the aggregated bandwidth of external links is derived.
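The paper's boundary conditions are not reproduced here, but the standard Zipf-like calculation it builds on can be sketched. With request popularity $p_i \propto i^{-\alpha}$ over $N$ objects, a cache holding the $C$ most popular objects achieves hit ratio

$$H(C) = \sum_{i=1}^{C} p_i = \frac{\sum_{i=1}^{C} i^{-\alpha}}{\sum_{i=1}^{N} i^{-\alpha}} \;\approx\; \frac{C^{\,1-\alpha} - 1}{N^{\,1-\alpha} - 1} \qquad (\alpha < 1),$$

where the approximation replaces the partial sums with integrals. For example, with $\alpha = 0.8$ and $N = 10^6$ objects, caching just $C = 10^4$ of them (1%) gives $H \approx (10^{0.8}-1)/(10^{1.2}-1) \approx 0.36$, which is why modest caches can absorb a disproportionate share of Zipf-distributed traffic.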

15.
The power consumed by memory systems accounts for 45% of the total power consumed by an embedded system, and a memory access consumes 10 times more power than a cache access. Thus, increasing the cache hit rate can effectively reduce the power consumption of the memory system and improve system performance. In this study, we increased the cache hit rate and reduced cache-access power consumption by developing a new cache architecture known as a single linked cache (SLC) that stores frequently executed instructions. By adding a new link field, SLC achieves the low power consumption and low access delay of a direct-mapped cache together with a hit rate close to that of a two-way set-associative cache. In addition, we developed another design, known as multiple linked caches (MLC), to further reduce the power consumed by each cache access and to avoid unnecessary cache accesses when the requested data is absent from the cache. In MLC, the linked cache is split into several small linked caches that store frequently executed instructions, reducing the power consumed per access. To avoid unnecessary cache accesses when a requested instruction is not in the linked caches, the addresses of frequently executed blocks are recorded in the branch target buffer (BTB); by consulting the BTB, the processor can fetch the requested instruction directly from memory when it is not in the cache. In our simulations, the method outperformed selective compression, a traditional cache, and a filter cache in cache hit rate, power consumption, and execution time.

16.
The HTTP cache server is the key to scaling the number of concurrent clients an HTTP Streaming system can serve. However, the behavior of mainstream HTTP cache servers such as Nginx, Squid, and Varnish during cache refresh is deficient: when used in a live-streaming HTTP Streaming system, they periodically forward large numbers of client requests to the origin server, constraining the system's scalability. This paper proposes optimizing the cache server's behavior during a cache update: the cache server forwards only one client request to the origin server and rejects other requests for the same resource while the update is in progress. The optimization was implemented on Nginx, the most widely used of these servers. Experiments show that the optimized system's scalability improves markedly.
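The paper modifies Nginx itself; for comparison, stock Nginx exposes a related (though not identical) coalescing behavior through its proxy_cache_lock directives, sketched below with hypothetical upstream names. Unlike the paper's scheme, which rejects concurrent requests during an update, proxy_cache_lock makes them wait for the single origin fetch to finish.

```nginx
# Illustrative sketch only; paths and the origin host are hypothetical.
proxy_cache_path /var/cache/nginx keys_zone=live:10m max_size=1g;

server {
    listen 80;
    location /live/ {
        proxy_pass http://origin.example.com;   # hypothetical origin
        proxy_cache       live;
        proxy_cache_valid 200 2s;               # short TTL for live segments
        proxy_cache_lock  on;                   # one request per miss goes upstream
        proxy_cache_lock_timeout 3s;
        proxy_cache_use_stale updating;         # serve stale while refreshing
    }
}
```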

17.
Avoiding Cache Penalties with Loop Partitioning and Loop Unrolling   Cited 1 time (self: 0, others: 1)
刘利, 陈彧, 乔林, 汤志忠. 《软件学报》, 2008, 19(9): 2228-2242
The speed gap between the memory system and the processor keeps widening; caches therefore use a multilevel hierarchy, which in turn introduces extra access latency (the cache penalty). This paper proposes PCPLPU (prevent cache penalty by loop partition-unrolling), an algorithm that combines loop partitioning with loop unrolling to avoid the cache penalty. Experimental results show that PCPLPU effectively avoids cache penalties and improves program performance.
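PCPLPU's exact transformation is not given in the abstract; the C sketch below only illustrates the two ingredients it combines, applied to a simple two-pass statistics kernel: the loops are partitioned into cache-sized tiles so the second pass reuses data while it is still hot, and the inner loops are unrolled to expose independent operations.

```c
#include <stddef.h>

#define TILE 1024   /* elements per partition; sized so a tile fits in cache (assumed) */

/* Two full passes: by the time the second pass runs, the early part of
 * a[] has been evicted, so every element is loaded from memory twice. */
void stats_naive(const double *a, size_t n, double *sum, double *sumsq)
{
    double s = 0, q = 0;
    for (size_t i = 0; i < n; i++) s += a[i];
    for (size_t i = 0; i < n; i++) q += a[i] * a[i];
    *sum = s; *sumsq = q;
}

/* Loop partitioning: both passes run tile by tile, so the second pass
 * finds the tile still resident in cache. The inner loops are 2-way
 * unrolled; remainder loops handle odd tile lengths. */
void stats_partitioned(const double *a, size_t n, double *sum, double *sumsq)
{
    double s = 0, q = 0;
    for (size_t t = 0; t < n; t += TILE) {
        size_t end = (t + TILE < n) ? t + TILE : n;
        size_t i;
        for (i = t; i + 2 <= end; i += 2) s += a[i] + a[i + 1];
        for (; i < end; i++)              s += a[i];
        for (i = t; i + 2 <= end; i += 2) q += a[i] * a[i] + a[i + 1] * a[i + 1];
        for (; i < end; i++)              q += a[i] * a[i];
    }
    *sum = s; *sumsq = q;
}
```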

18.
文凯, 谭笑. 《计算机应用》, 2019, 39(7): 2051-2055
Device-to-device (D2D) cache networks hold large volumes of multimedia content, while cache space on mobile terminals is relatively limited. To use that space efficiently, this paper proposes a D2D cache-deployment algorithm based on user preference and a replica threshold. First, a caching-revenue function built on user preference estimates the caching value of each file; then, with the goal of maximizing the system cache hit rate, convex-programming theory is used to derive a replica threshold that governs the number of copies of each file deployed in the system; finally, a heuristic algorithm combining the revenue function and the replica threshold carries out the cache deployment. Compared with existing deployment algorithms, the proposed algorithm significantly improves the cache hit rate and offloading gain and reduces service latency.
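The abstract gives only the shape of the optimization; one standard formulation consistent with it (all symbols are assumptions, not the paper's notation) places $x_f$ replicas of file $f$ subject to total capacity, with hit probability increasing and concave in the replica count:

$$\max_{\{x_f\}} \; \sum_{f} q_f \left(1 - (1-p)^{x_f}\right) \quad \text{s.t.} \quad \sum_{f} x_f \le C, \quad 0 \le x_f \le R,$$

where $q_f$ is the request probability of file $f$ (the user-preference term), $p$ is the probability that a given replica is reachable over a D2D link, $C$ is the total cache capacity, and $R$ is the replica threshold. The concavity of the objective in each $x_f$ is what makes the relaxed problem amenable to convex programming.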

19.
尹洋, 刘振军, 许鲁. 《软件学报》, 2009, 20(10): 2752-2765
As computing scales up, network storage systems are applied ever more widely and the demands on their I/O performance keep growing. Under high storage-system load, it becomes practical to place a slower medium as a data cache on the I/O path between clients and the network storage system. This paper designs and implements D-Cache, a prototype block-level storage cache based on disk media. The disk cache is managed with a two-level structure, and a corresponding two-level, block-level cache management algorithm is proposed. The algorithm effectively resolves the management difficulties caused by the slow response of disk media, and it uses a bitmap to eliminate the copy-on-write overhead of disk-cache write misses. Prototype tests show that under high storage-server load, the cache system effectively improves overall system performance.
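The abstract does not detail the bitmap, but the trick it names can be sketched in C (geometry and names assumed): track per-sector validity inside each cache block, so a write miss stores the new data immediately and marks only those sectors valid, instead of first copying the whole block in from backing storage.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define SECTORS_PER_BLOCK 64   /* assumed geometry, 512-byte sectors */

typedef struct {
    uint64_t valid_map;        /* bit i set => sector i holds valid data */
    uint8_t  data[SECTORS_PER_BLOCK][512];
} cache_block_t;

/* Write miss without copy-on-write: store the new sectors and set their
 * bits; no read of the full block from backing storage is needed. */
void write_sectors(cache_block_t *b, int first, int count,
                   const uint8_t *src)
{
    for (int i = 0; i < count; i++) {
        memcpy(b->data[first + i], src + (size_t)i * 512, 512);
        b->valid_map |= 1ULL << (first + i);
    }
}

/* A later read hits only if every requested sector is marked valid;
 * missing sectors must come from backing storage. */
bool read_hits(const cache_block_t *b, int first, int count)
{
    uint64_t need = ((count < 64) ? ((1ULL << count) - 1) : ~0ULL) << first;
    return (b->valid_map & need) == need;
}
```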

20.
The availability of low-cost, high-performance microprocessors has led to various designs of shared-memory multiprocessor systems, and commercial products based on shared memory have proliferated. Such a multiprocessor system is heavily influenced by the structure of its memory system, and most configurations include local cache memories. The more processors a system carries, the larger the local cache memory needed to keep traffic to and from the shared memory at a reasonable level. Implementing local cache memories, however, is not simple because of environmental limitations; in particular, the general lack of board space presents a formidable problem. A cache memory system needs space mostly for the complex control logic of the cache itself and for network interfaces such as the snooping logic for the shared bus. Although denser packaging can reduce system size, there are still multiple processors per board, which calls for a more area-efficient cache architecture. This paper presents a shared-cache design for the dual-processor boards of bus-based symmetric multiprocessors. The design and implementation issues are described first, and then the evaluation and measurement results are discussed. The proposed shared cache proves quite area-efficient without significant loss of throughput or scalability, and it has been implemented as a plug-in unit for TICOM, a widely deployed commercial multiprocessor system.

