Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Update-Based Cache Access and Replacement in Wireless Data Access
Cache has been applied for wireless data access with different replacement policies in wireless networks. Most current cache replacement schemes are access-based replacement policies, since they are based on object access frequency/recency information. Access-based replacement policies either ignore or do not focus on update information. However, update information is extremely important, since it can make access information almost useless. In this paper, we consider two fundamental and strongly consistent access algorithms: poll-per-read (PER) and call-back (CB). We propose a server-based PER (SB-PER) cache access mechanism, in which the server makes replacement decisions, and a client-based CB cache access mechanism, in which clients make replacement decisions. Both mechanisms have been designed to be suitable for using both update frequency and access frequency. We further propose two update-based replacement policies: least access-to-update ratio (LA2U) and least access-to-update difference (LAUD). We provide a thorough performance analysis via extensive simulations evaluating these algorithms in terms of access rate, update rate, cache size, database size, object size, etc. Our study shows that although effective hit ratio is a better metric than cache hit ratio, it is a worse metric than transmission cost, and a higher effective hit ratio does not always mean a lower cost. In addition, the proposed SB-PER mechanism is better than the original PER algorithm in terms of effective hit ratio and cost, and the update-based policies outperform access-based policies in most cases.
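The LA2U policy named in the abstract evicts the cached object with the smallest access-to-update ratio. A minimal sketch of that idea (class, method, and attribute names are illustrative, not taken from the paper):

```python
# Minimal sketch of an update-aware replacement policy in the spirit of
# LA2U (least access-to-update ratio): evict the cached object whose
# access count divided by update count is smallest. All names here are
# illustrative assumptions, not the paper's implementation.

class UpdateAwareCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}     # object_id -> value
        self.accesses = {}  # object_id -> access count
        self.updates = {}   # object_id -> update count (>= 1, avoids /0)

    def _la2u_victim(self) -> str:
        # Smallest access/update ratio: objects that are frequently
        # updated but rarely read are the least worth keeping.
        return min(self.store, key=lambda k: self.accesses[k] / self.updates[k])

    def access(self, oid: str, fetch):
        if oid in self.store:
            self.accesses[oid] += 1
            return self.store[oid]
        if len(self.store) >= self.capacity:
            victim = self._la2u_victim()
            for d in (self.store, self.accesses, self.updates):
                del d[victim]
        self.store[oid] = fetch(oid)
        self.accesses[oid] = 1
        self.updates[oid] = 1
        return self.store[oid]

    def notify_update(self, oid: str, new_value):
        # Server-driven update, in the style of call-back consistency
        if oid in self.store:
            self.store[oid] = new_value
            self.updates[oid] += 1
```

For example, with capacity 2, an object read twice and never updated survives eviction, while an object read once but updated twice (ratio 1/3) is chosen as the victim.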

2.
黄丹  宋荣方 《电信科学》2018,34(11):59-66
Cache replacement is one of the key research problems in content-centric networking. Given the limited cache space, replacing cached content sensibly becomes a key factor affecting overall network performance. This paper therefore designs a content-value-based cache replacement scheme. The scheme jointly considers a content's dynamic popularity, its caching cost, and the time it was most recently requested to construct a more realistic content-value function, and based on this function designs an effective content storage and replacement scheme. Specifically, when cache space is insufficient, existing cached contents are evicted in order of increasing value. Simulation results show that, compared with the traditional replacement algorithms LRU, LFU, and FIFO, the proposed scheme effectively improves the content cache hit ratio of network nodes and reduces the average hop count for users to obtain content.
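The abstract describes a value function combining dynamic popularity, caching cost, and recency of request. One plausible form is sketched below; the weights, the exponential recency decay, and the function names are assumptions for illustration, not the paper's exact formula:

```python
import math

# Illustrative content-value function over the three factors the abstract
# names: popularity, caching cost, and time of last request. The weighted
# combination and decay are assumptions, not the paper's formula.

def content_value(popularity: float, cache_cost: float,
                  last_request_time: float, now: float,
                  w_pop: float = 1.0, w_cost: float = 1.0,
                  decay: float = 0.01) -> float:
    # Recency factor: 1.0 for a just-requested item, decaying toward 0
    recency = math.exp(-decay * (now - last_request_time))
    return w_pop * popularity * recency + w_cost * cache_cost

def evict_lowest_value(cache: dict, now: float) -> str:
    """cache maps content_id -> (popularity, cache_cost, last_request_time);
    return the id with the smallest value, i.e., the first eviction victim."""
    return min(cache, key=lambda cid: content_value(*cache[cid], now))
```

A popular, recently requested item thus scores far higher than a stale, unpopular one, matching the eviction order (smallest value first) described in the abstract.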

3.
One of the key research fields of content-centric networking (CCN) is developing more efficient cache replacement policies to improve the hit ratio of CCN in-network caching. However, most existing cache strategies, designed mainly on the basis of the time or frequency of content access, cannot properly deal with the dynamicity of content popularity in the network. In this paper, we propose a fast-convergence caching replacement algorithm based on a dynamic classification method for CCN, named FCDC. It develops a dynamic classification method to reduce the time complexity of cache inquiry, achieving a higher cache hit rate than a random classification method under dynamically changing content popularity. Meanwhile, to relieve the influence of dynamic content popularity, it designs a weighting function to speed up cache hit rate convergence in the CCN router. Experimental results show that the proposed scheme outperforms replacement policies based on least recently used (LRU) and recent usage frequency (RUF) in cache hit rate and resiliency when content popularity in the network varies.

4.
Information-centric networking (ICN) has emerged as a promising candidate for designing content-based future Internet paradigms. ICN increases the utilization of a network through location-independent content naming and in-network content caching. In routers, the cache replacement policy determines which content is replaced when cache free space runs short; it thus has a direct influence on user experience, especially content delivery time. Meanwhile, content can be provided from different locations simultaneously because of the multi-source property of content in ICN. To the best of our knowledge, no work has yet studied the impact of cache replacement policy on content delivery time considering multi-source content delivery in ICN, an issue addressed in this paper. As our contribution, we analytically quantify the average content delivery time under different cache replacement policies, namely least recently used (LRU) and random replacement (RR). Notably, we show how the relative superiority of these policies depends on the popularity distribution of the contents. The expected content delivery time in a sample network topology was studied by both theoretical and experimental methods. On the basis of the obtained results, some interesting findings on the performance of the studied cache replacement policies are provided.

5.
N.  D.  Y.   《Ad hoc Networks》2010,8(2):214-240
The production of cheap CMOS cameras, which are able to capture rich multimedia content, combined with the creation of low-power circuits, gave birth to what are called Wireless Multimedia Sensor Networks (WMSNs). WMSNs introduce several new research challenges, mainly related to mechanisms for delivering application-level Quality-of-Service (e.g., latency minimization). Such issues have been almost completely ignored in traditional WSNs, where research focused on minimizing energy consumption. Toward this goal, the technique of cooperatively caching multimedia content in sensor nodes can efficiently address the resource constraints, the variable channel capacity, and the in-network processing challenges associated with WMSNs. Technological advances in gigabyte-storage flash memories make sensor caching the ideal solution for latency minimization. With caching, however, comes the issue of maintaining the freshness of cached contents. This article proposes a new cache consistency and replacement policy, called NICC, to address cache consistency issues in a WMSN. The proposed policies recognize and exploit mediator nodes that lie at the most “central” points in the sensor network, so that they can forward messages with small latency. With the utilization of mediator nodes that lie between the source node and cache nodes, both push-based and pull-based strategies can be applied to minimize query latency and communication overhead. Simulation results attest that NICC outperforms the state-of-the-art cache consistency policy for MANETs.

6.
In local loss recovery schemes, a small number of recovery nodes distributed along the transmission paths save incoming packets temporarily in accordance with a specified cache policy and retransmit these packets if they subsequently receive a request message from a downstream receiver. To reduce the recovery latency, the cache policy should ensure that the recovery nodes are always able to satisfy the retransmission requests of the downstream receivers. However, owing to the limited cache size of the recovery nodes and the behavior of the cache policy, this cannot always be achieved, and thus some of the packets must be retransmitted by the sender. Accordingly, this paper develops a new network-coding-based cache policy, designated network-coding-based FIFO (NCFIFO), which extends the caching time of the packets at the recovery nodes without dropping any of the incoming packets. As a result, the lost packets can always be recovered from the nearest recovery nodes and the recovery latency is significantly reduced. The loss recovery performance of the NCFIFO cache policy is compared with that of existing cache policies through a series of simulation experiments using both a uniform error model and a burst error model. The simulation results show that the NCFIFO cache policy not only achieves better recovery performance than existing cache policies, but also provides a more effective solution for managing a small cache in environments characterized by a high packet arrival rate. Copyright © 2011 John Wiley & Sons, Ltd.

7.
In-network caching is one of the most important issues in content-centric networking (CCN) and strongly influences the performance of the caching system. Although much work has been done on in-network caching scheme design in CCN, most of it does not jointly consider multiple network attribute parameters. To fill this gap, a new in-network caching scheme based on grey relational analysis (GRA) is proposed. The authors first define two new metric parameters: the request influence degree (RID) and the cache replacement rate. The RID indicates the importance of a node along the content delivery path from the viewpoint of arriving interest packets; the cache replacement rate denotes the caching load of the node. Combining these with the hop count a request travels from the users and the node traffic, four network attribute parameters are considered during in-network caching algorithm design. Based on these four parameters, a GRA-based in-network caching algorithm is proposed, which can significantly improve the performance of CCN. Finally, extensive simulations based on ndnSIM demonstrate that the GRA-based caching scheme achieves a lower load on the source server and fewer average hops than the existing betweenness (Betw) and ALWAYS schemes.

8.
Recently, Internet energy efficiency has received more and more attention, and new, more energy-efficient Internet architectures have been proposed to improve the scalability of energy consumption. Content-centric networking (CCN) introduced a content-centric paradigm that was proven to have higher energy efficiency. Based on an energy optimization model of CCN with in-network caching, the authors derive expressions to trade off the caching energy against the transport energy, and then design a new energy-efficient cache scheme based on virtual round-trip time (EV) in CCN. Simulation results show that the EV scheme outperforms the least recently used (LRU) and popularity-based cache policies in average network energy consumption, and its average hop count is also much better than that of the LRU policy.

9.
Wireless Mesh Networks (WMNs) provide a new and promising solution for broadband Internet services. The distinguishing features and the wide range of WMNs’ applications have attracted both academic and industrial communities. Routing protocols play a crucial role in the functionality and the performance of WMNs due to their direct effect on network throughput, connectivity, supported Quality of Service (QoS) levels, etc. In this paper, a cross-layer based routing framework for multi-interface/multi-channel WMNs, called Cross-Layer Enhanced and Adaptive Routing (CLEAR), is proposed. This framework embodies optimal as well as heuristic solutions. The major component of CLEAR is a new bio-inspired routing protocol called Birds’ Migration Routing protocol (BMR). BMR adopts a newly developed routing metric called Multi-Level Routing metric (MLR) to efficiently utilize the advantages of both multi-radio/multi-channel WMNs and cross-layer design. We also provide an exact solution based on dynamic programming to solve the optimal routing problem in WMNs. Simulation results show that our framework outperforms other routing schemes in terms of network throughput, end-to-end delay, and interference reduction, in addition to being the closest one to the optimal solution.

10.
ON-CRP: A Cache Replacement Policy for Opportunistic Networks
叶晖  陈志刚  赵明 《通信学报》2010,31(5):99-107
This paper proposes a new cache replacement policy for opportunistic networks (ON-CRP, opportunistic networking cache replacement policy). Unlike existing policies, ON-CRP selects the cached data to replace based on the correlation degree between a node and a data item; it exploits human mobility patterns to extract the destination-address match probability as the key factor in judging this correlation, and further incorporates the ratio of a data item's access frequency to its update frequency as an important factor in the replacement criterion. Simulation results show that ON-CRP effectively reduces the remote access latency of data; compared with other cache replacement algorithms, it lowers network overhead by about 30% and improves the data cache hit ratio by about 10%-30%.

11.
李芳  程东年  杨仕荣 《通信技术》2007,40(12):155-157
Wireless mobile networks are developing rapidly. TCP is currently the most widely used end-to-end reliable transport protocol on the Internet, but its performance degrades markedly over wireless networks. This paper analyzes and compares in detail the performance of several existing mechanisms during mobile handover and, addressing their inability to effectively sense frequent handovers, proposes the EHN-WS-HP mechanism. EHN-WS-HP improves on the existing EHN-HP protocol by adding an anti-oscillation mechanism (Withstand Swing) to handle frequent handovers.

12.
罗熹  安莹  王建新  刘耀 《电子与信息学报》2015,37(11):2790-2794
Content-centric networking (CCN) is a new network architecture proposed to accommodate the shift in future network communication patterns and to provide native support for scalable and efficient content retrieval; its content caching mechanism is one of the key research problems. In existing mechanisms, cache node selection is often overly concentrated and the cache load is severely unbalanced, which greatly reduces network resource utilization and the system's caching performance. This paper proposes a cooperative caching mechanism based on cache migration. First, node centrality is considered when selecting cache nodes, so that content is cached at more important positions whenever possible. Meanwhile, when cache pressure is too high, a suitable neighbor node is selected for migrating cached content according to available cache space, cache replacement rate, and the stability of network connections, making full use of neighbor resources for load sharing. Simulation results show that the mechanism effectively improves the balance of cache load across nodes, raises the cache hit ratio and cache resource utilization, and reduces the average access cost.

13.
Web cache replacement policies: a pragmatic approach
Research on Web cache replacement policies has been active for at least a decade. In this article we claim that there is already a sufficient number of good policies, and that further proposals would only produce minute improvements. We argue that the focus should be fitness for purpose rather than proposing new policies. Up to now, almost all policies have been purported to perform better than others, creating confusion as to which policy should be used. In reality, a policy only performs well in certain environments. Therefore, the goal of this article is to identify the appropriate policies for proxies with different characteristics, such as proxies with a small cache, limited bandwidth, and limited processing power, as well as to suggest policies for different types of proxies, such as ISP-level and root-level proxies.

14.
Summary cache: a scalable wide-area Web cache sharing protocol
The sharing of caches among Web proxies is an important technique to reduce Web traffic and alleviate network bottlenecks. Nevertheless it is not widely deployed due to the overhead of existing protocols. In this paper we demonstrate the benefits of cache sharing, measure the overhead of the existing protocols, and propose a new protocol called “summary cache”. In this new protocol, each proxy keeps a summary of the cache directory of each participating proxy, and checks these summaries for potential hits before sending any queries. Two factors contribute to our protocol's low overhead: the summaries are updated only periodically, and the directory representations are very economical, as low as 8 bits per entry. Using trace-driven simulations and a prototype implementation, we show that, compared to existing protocols such as the Internet cache protocol (ICP), summary cache reduces the number of intercache protocol messages by a factor of 25 to 60, reduces the bandwidth consumption by over 50%, eliminates 30% to 95% of the protocol CPU overhead, all while maintaining almost the same cache hit ratios as ICP. Hence summary cache scales to a large number of proxies. (This paper is a revision of Fan et al. 1998; we add more data and analysis in this version.)
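The compact directory summaries described in this abstract (a few bits per entry, with occasional false hits but no false negatives) can be realized with a Bloom filter. A minimal sketch, where the hash construction and parameter choices are illustrative assumptions rather than the protocol's exact design:

```python
import hashlib

class BloomSummary:
    """Compact, lossy summary of a proxy's cache directory.

    A membership query may return a false positive (a wasted query to a
    peer proxy) but never a false negative. Parameters are illustrative.
    """

    def __init__(self, num_bits: int, num_hashes: int = 4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # one Python int used as a bit array

    def _positions(self, url: str):
        # Derive k bit positions from slices of a single SHA-256 digest
        # (an illustrative hashing choice, not the paper's)
        digest = hashlib.sha256(url.encode()).digest()
        for i in range(self.num_hashes):
            chunk = int.from_bytes(digest[4 * i:4 * i + 4], "big")
            yield chunk % self.num_bits

    def add(self, url: str) -> None:
        for p in self._positions(url):
            self.bits |= 1 << p

    def might_contain(self, url: str) -> bool:
        # False -> definitely not cached; True -> probably cached
        return all(self.bits >> p & 1 for p in self._positions(url))

summary = BloomSummary(num_bits=1024)
summary.add("http://example.com/a.html")
assert summary.might_contain("http://example.com/a.html")  # no false negatives
```

Sizing the filter at several bits per cached entry keeps the false-positive rate low while staying far smaller than an exact directory, which is the trade-off the abstract's "8 bits per entry" figure reflects.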

15.
In the mobile environment, user movement, disconnected modes, frequent data updates, battery power consumption, limited cache size, and limited bandwidth impose significant challenges on information access. Caching is considered one of the most important concepts for dealing with these challenges. There are two general topics related to the client cache policy: cache invalidation, which keeps data in the cache up to date, and cache replacement, which chooses the cached element(s) to be removed once the cache is full. The aim of this work is to introduce a new technique for cache replacement in a mobile database that takes into consideration the impact of invalidation time, enhancing data availability in the mobile environment by using genetic programming. Each client collects information about every cached item, such as access probability, cached document size, and validation time, and uses these factors in a fitness function to determine which cached items will be removed from the cache. The experiments were carried out with the NS2 simulator to assess the efficiency of the proposed method, and the outcomes were judged against existing cache replacement algorithms. It is concluded that the proposed approach performs significantly better than other approaches.

16.

Although multi-core processors enhance performance, the challenge of estimating the Worst-Case Execution Time (WCET) of a task remains in such systems due to interference in shared resources like Last-Level Caches (LLC). Cache partitioning has been used to reduce the interference problem by isolating the shared cache for each thread to ease WCET estimation. However, it prevents information from being shared among parallel threads running on different cores. In the current work, we propose a sharing- and reuse-aware partitioned cache (SRCP) framework such that replication of shared information, data or instruction, across different partitions is avoided in the LLC. Further, an enhancement to the existing cache replacement policy is proposed that avoids the eviction of cache blocks shared among multiple cores accessing the partitioned last-level cache. Tighter WCET as well as improved resource utilization is thereby ensured with the proposed framework. Experimental results show that SRCP significantly improves the cache hit rate for the PARSEC and SPLASH2 benchmarks compared to the least recently used cache replacement policy, and outperforms EHC and TA-DRRIP, which are state-of-the-art replacement policies.

17.
Load Balancing for Parallel Forwarding
Workload distribution is critical to the performance of network processor based parallel forwarding systems. Scheduling schemes that operate at the packet level, e.g., round-robin, cannot preserve packet-ordering within individual TCP connections. Moreover, these schemes create duplicate information in processor caches and therefore are inefficient in resource utilization. Hashing operates at the flow level and is naturally able to maintain per-connection packet ordering; besides, it does not pollute caches. A pure hash-based system, however, cannot balance processor load in the face of highly skewed flow-size distributions in the Internet; usually, adaptive methods are needed. In this paper, based on measurements of Internet traffic, we examine the sources of load imbalance in hash-based scheduling schemes. We prove that under certain Zipf-like flow-size distributions, hashing alone is not able to balance workload. We introduce a new metric to quantify the effects of adaptive load balancing on overall forwarding performance. To achieve both load balancing and efficient system resource utilization, we propose a scheduling scheme that classifies Internet flows into two categories: the aggressive and the normal, and applies different scheduling policies to the two classes of flows. Compared with most state-of-the-art parallel forwarding schemes, our work exploits flow-level Internet traffic characteristics.
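The two-class idea in this abstract can be sketched as follows: "normal" flows are mapped to processors by hashing the flow ID, which preserves per-connection packet order, while flows classified as "aggressive" (high packet rate) are pinned adaptively to the least-loaded processor. The threshold, names, and classification rule below are illustrative assumptions, not the paper's exact algorithm:

```python
import hashlib

NUM_PROCESSORS = 4
AGGRESSIVE_THRESHOLD = 1000  # packets seen before a flow counts as aggressive

flow_counts: dict = {}             # flow_id -> packets observed
aggressive_assignment: dict = {}   # flow_id -> pinned processor index
processor_load = [0] * NUM_PROCESSORS

def schedule(flow_id: str) -> int:
    """Return the processor index for the next packet of this flow."""
    flow_counts[flow_id] = flow_counts.get(flow_id, 0) + 1
    if flow_id in aggressive_assignment:
        # Already classified aggressive: keep the adaptive pin
        target = aggressive_assignment[flow_id]
    elif flow_counts[flow_id] > AGGRESSIVE_THRESHOLD:
        # Newly aggressive flow: pin it to the least-loaded processor
        target = min(range(NUM_PROCESSORS), key=processor_load.__getitem__)
        aggressive_assignment[flow_id] = target
    else:
        # Normal flow: static hash keeps per-connection packet order
        digest = hashlib.sha1(flow_id.encode()).hexdigest()
        target = int(digest, 16) % NUM_PROCESSORS
    processor_load[target] += 1
    return target
```

Because only the few heaviest flows are handled adaptively, the scheme keeps the ordering and cache-locality benefits of pure hashing for the vast majority of flows while smoothing the skew those heavy flows would otherwise cause.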

18.
The home agent is a key component of MIPv6 functionality, comprising a binding cache that holds mobile nodes' current points of attachment to the Internet. This paper is concerned with binding cache support for home agents within an MIPv6 network. The existing binding cache of the home agent supports weak cache consistency by using a fixed contract length for the Binding Refresh Request, which functions reasonably well in normal situations. However, maintaining strong binding cache consistency in the home agent as a crucial exception-handling mechanism has become more demanding for the following objectives: (i) to adapt to increasingly frequent changes of care-of address due to mobile nodes' movement detection updates; (ii) to provide fine-grained controls to balance the binding cache load distribution for better delivery services; and (iii) to reduce the overhead allowances around the binding cache. In this paper, we first verify the effectiveness of the Binding Refresh Request contract length and, on that basis, suggest two dynamic contract algorithms to reduce the storage and communication overhead in the binding cache. We also compare our technique with the existing fixed Binding Refresh Request contract length, and our simulation results reveal that the proposed approach effectively reduces overhead within the network. Copyright © 2014 John Wiley & Sons, Ltd.

19.
This paper constructs a new streaming-media cache utility function that jointly considers the popularity characteristics of streaming programs and the cost parameters of the delivery network, and designs a network-cost-based cache allocation and replacement algorithm (NCB) for multiple video servers. Simulation results show that NCB effectively improves the cache hit ratio and reduces the overall network cost of delivering streaming media; the algorithm performs particularly well in Internet streaming environments with complex network topologies and huge numbers of programs.

20.
Internet demand has shifted from host-to-host communication toward massive content retrieval. To meet this new demand, content-centric networking (CCN) has become a hot research topic for next-generation Internet architectures. One of CCN's most important features is the use of in-network caching to improve the transmission efficiency of content retrieval and the utilization of network resources. This article presents the basic ideas of CCN and, from the two perspectives of cache replacement policy and cache decision policy, surveys how existing research implements content caching in CCN; it summarizes, analyzes, and evaluates existing caching policies, and outlines the remaining problems and future research directions for CCN caching.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号