Similar Documents
20 similar documents found.
1.
Social networks and other cloud applications must respond quickly to requests issued from data centers. One technology that enables such responsiveness is the in-memory key-value store (IMKVS), a caching mechanism intended to improve the overall user experience. IMKVS systems typically use consistent hashing to decide where to store objects; consistent hashing is simple to apply, but it can lead to unbalanced network load. To improve IMKVS caching performance, this paper proposes a distributed network load-balancing strategy that combines IMKVS with NFV in a software-defined network. The strategy has two stages: the first designs a generic SDN load-balancer module that can run different load-balancing algorithms; the second provides IMKVS-based specialized caching that supports communication management and data replication. Simulation results show that, compared with consistent hashing, the load on the cache servers improves by 24% and the load on the network by 7%; the strategy makes resource utilization more reasonable and delivers a better user experience.
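As background for the load-imbalance issue mentioned above, here is a minimal consistent-hashing sketch (not the paper's implementation; the server names are illustrative): keys and servers are hashed onto the same ring and each key is served by the next server clockwise, so an unlucky placement of servers, especially without virtual nodes, can leave one server responsible for a disproportionate arc of the ring.

```python
import bisect
import hashlib

def ring_hash(value: str) -> int:
    """Map a string onto a 32-bit hash ring."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % (2 ** 32)

class ConsistentHashRing:
    def __init__(self, servers):
        # One point per server; real systems add virtual nodes to smooth the load.
        self.points = sorted((ring_hash(s), s) for s in servers)

    def lookup(self, key: str) -> str:
        """Return the first server clockwise from the key's position."""
        h = ring_hash(key)
        idx = bisect.bisect_right([p for p, _ in self.points], h)
        return self.points[idx % len(self.points)][1]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
counts = {}
for i in range(10000):
    server = ring.lookup(f"user:{i}")
    counts[server] = counts.get(server, 0) + 1
print(counts)  # without virtual nodes the per-server counts can be heavily skewed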

2.

With the rapid development of cloud computing and mobile networks, multimedia content can be accessed conveniently. Recently, several intelligent caching-based approaches have been proposed to improve memory architectures for multimedia applications. These applications often face bottlenecks that lead to performance degradation and service delays. Intelligent multimedia network applications access shared data through a specific network file system, which shifts processing constraints onto hard-drive storage and can itself become a bottleneck. To improve the performance of such multimedia network applications, we present an intelligent distributed memory caching system. We integrate the multimedia application's message passing interface in a multi-threaded environment and propose an algorithm that handles concurrent response behavior for different multimedia applications. Results demonstrate that our proposed scheme outperforms traditional approaches in terms of throughput and file read access.


3.
To improve caching performance in content-centric mobile edge networks, this paper proposes a user mobility-aware and node centrality based caching mechanism (UMANCC). In UMANCC, each edge node computes its centrality, its cache idle ratio, and the dwell time of users within its cell. A mobile-edge-network controller aggregates the information from all edge nodes, computes and ranks the importance of each node, and selects content caching nodes according to the ranking. Simulation results show that, compared with the traditional LCE and Prob caching mechanisms, UMANCC reduces the average hop count for users to obtain content by up to 15.9%, raises the edge-node cache hit ratio by at least 13.7%, and cuts traffic entering the core network by up to 32.1%, effectively improving content distribution in content-centric mobile edge networks.
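A minimal sketch of the node-ranking step described above, assuming (hypothetically) that node importance is a weighted sum of normalized centrality, cache idle ratio, and average user dwell time; the weights, field names, and node names are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    centrality: float   # normalized to [0, 1]
    cache_idle: float   # fraction of cache space currently free
    dwell_time: float   # average user dwell time in the cell, in seconds

def importance(node: EdgeNode, max_dwell: float,
               w_cent=0.4, w_idle=0.3, w_dwell=0.3) -> float:
    """Hypothetical weighted score combining the three metrics."""
    return (w_cent * node.centrality
            + w_idle * node.cache_idle
            + w_dwell * node.dwell_time / max_dwell)

nodes = [EdgeNode("bs-1", 0.8, 0.2, 120),
         EdgeNode("bs-2", 0.5, 0.7, 300),
         EdgeNode("bs-3", 0.3, 0.9, 60)]
max_dwell = max(n.dwell_time for n in nodes)
ranked = sorted(nodes, key=lambda n: importance(n, max_dwell), reverse=True)
print([n.name for n in ranked])  # the controller would cache content at top-ranked nodes
```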

4.
Building on earlier work, this paper proposes a technique that uses a memory grid as a disk cache, organizing and sharing idle memory resources distributed across the network to improve the system performance of nodes with heavy disk I/O. A basic design is presented, and autonomous cooperation mechanisms such as temporal distribution and spatial merging of cache nodes are proposed to further optimize system performance. Finally, the design is evaluated through simulations driven by actually collected data.

5.
Caching Techniques on the WWW
Web caching is regarded as one of the effective techniques for improving Web performance. This paper surveys the problems a caching system must solve, summarizes the caching strategies proposed so far to address them, and concludes with the performance criteria an ideal Web caching system should satisfy.

6.
Dynamic Web applications have gained a great deal of popularity. Improving the performance of these applications has recently attracted the attention of many researchers. One of the most important techniques proposed for this purpose is caching, which can be done at different locations and within different stages of the process of generating a dynamic Web page. Most of the caching schemes proposed in the literature are lenient about the issue of consistency; they assume that users can tolerate receiving stale data. However, an important class of dynamic Web applications are those in which users always expect to get the freshest data available. Any caching scheme has to incur a significant overhead to be able to provide this level of consistency (i.e., strong consistency); the overhead may be so much that it neutralizes the benefits of caching. In this paper, three alternative architectures are investigated for dynamic Web applications that require strong consistency. A proxy caching scheme is designed and implemented, which performs caching at the level of database queries. This caching system is used in one of the alternative architectures. The performance experiments show that, despite the high overhead of providing strong consistency in database caching, this technique can improve the performance of dynamic Web applications, especially when there is a long network latency between clients and the (origin) server.
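As a rough sketch of caching at the level of database queries with strong consistency (one simple way to realize it, not necessarily the paper's architecture; the backend stub and table names are hypothetical): results are cached per query string, a query-to-tables map is kept, and any write to a table invalidates every cached query that reads it, so reads never return stale data.

```python
class QueryCache:
    """Proxy-side cache of database query results with write-driven invalidation."""

    def __init__(self, execute):
        self.execute = execute          # function that actually runs SQL at the origin DB
        self.results = {}               # sql text -> cached result rows
        self.tables_of = {}             # sql text -> set of tables the query reads

    def read(self, sql: str, tables):
        if sql not in self.results:     # cache miss: go to the origin database
            self.results[sql] = self.execute(sql)
            self.tables_of[sql] = set(tables)
        return self.results[sql]

    def write(self, sql: str, tables):
        self.execute(sql)
        # Strong consistency: drop every cached query that reads a modified table.
        stale = [q for q, t in self.tables_of.items() if t & set(tables)]
        for q in stale:
            self.results.pop(q, None)
            self.tables_of.pop(q, None)

# Hypothetical backend stub standing in for the origin database.
cache = QueryCache(execute=lambda sql: f"rows for: {sql}")
cache.read("SELECT * FROM items", ["items"])
cache.write("UPDATE items SET price = 1", ["items"])   # invalidates the cached SELECT
```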

7.
Data caching is a popular technique that improves data accessibility in wired or wireless networks. However, in mobile ad hoc networks, improvement in access latency and cache hit ratio may diminish because of the mobility and limited cache space of mobile hosts (MHs). In this paper, an improved cooperative caching scheme called group-based cooperative caching (GCC) is proposed to generalize and enhance the performance of most group-based caching schemes. GCC allows MHs and their neighbors to form a group, and exchange a bitmap data directory periodically used for proposed algorithms, such as the process of data discovery, and cache placement and replacement. The goal is to reduce the access latency of data requests and efficiently use available caching space among MH groups. Two optimization techniques are also developed for GCC to reduce computation and communication overheads. The first technique compresses the directories using an aggregate bitmap. The second employs multi-point relays to develop a forwarding node selection scheme to reduce the number of broadcast messages inside the group. Our simulation results show that the optimized GCC yields better results than existing cooperative caching schemes in terms of cache hit ratio, access latency, and average hop count.
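As a rough illustration of the bitmap data directory idea above (a sketch under assumed data layouts, not the paper's protocol): each host marks the data items it caches in a fixed-size bitmap, and a group aggregate can be formed by OR-ing member bitmaps so any member can test locally whether someone in the group holds an item.

```python
NUM_ITEMS = 64  # assume data items are indexed 0..63 for illustration

def make_bitmap(cached_items) -> int:
    """Encode a set of cached item indices as an integer bitmap."""
    bm = 0
    for item in cached_items:
        bm |= 1 << item
    return bm

def aggregate(bitmaps) -> int:
    """Aggregate bitmap for the whole group: bitwise OR of member bitmaps."""
    agg = 0
    for bm in bitmaps:
        agg |= bm
    return agg

def group_has(agg: int, item: int) -> bool:
    return bool(agg & (1 << item))

members = {"mh1": make_bitmap([3, 7, 42]), "mh2": make_bitmap([7, 15])}
group_bitmap = aggregate(members.values())
print(group_has(group_bitmap, 15))  # True: some group member caches item 15
print(group_has(group_bitmap, 20))  # False: the request must leave the group
```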

8.
Super-peer P2P combines the advantages of the P2P and client/server architectures and is currently one of the most widely deployed classes of P2P systems. In super-peer P2P networks, file access is the most basic operation, and caching is commonly used to make it more efficient. Most super-peer P2P networks today use a traditional "best-effort" caching mechanism, which does not distinguish between the differing resource needs and interests of different nodes, so occasionally accessed objects may evict frequently accessed ones. To address this shortcoming, this paper proposes SCOCM (Semantic-based Cooperative Cache Management mechanism for super-peer networks), which uses the semantic information of already requested objects to proactively select objects to place in the cache and evicts cached objects according to their interest distance, reducing cache replacements and keeping the objects in each cache as semantically related as possible. Experimental results show that, compared with LRU, the semantic-based cooperative cache management mechanism greatly reduces the cache replacement rate, improves cache access efficiency, and achieves a higher hit ratio.

9.
Integrating Web caching and Web prefetching in client-side proxies
Web caching and Web prefetching are two important techniques used to reduce the noticeable response time perceived by users. Note that by integrating Web caching and Web prefetching, these two techniques can complement each other since the Web caching technique exploits the temporal locality, whereas the Web prefetching technique utilizes the spatial locality of Web objects. However, without circumspect design, the integration of these two techniques might cause significant performance degradation to each other. In view of this, we propose in this paper an innovative cache replacement algorithm, which not only considers the caching effect in the Web environment, but also evaluates the prefetching rules provided by various prefetching schemes. Specifically, we formulate a normalized profit function to evaluate the profit from caching an object (i.e., either a nonimplied object or an implied object according to some prefetching rule). Based on the normalized profit function devised, we devise an innovative Web cache replacement algorithm, referred to as Algorithm IWCP (standing for the Integration of Web Caching and Prefetching). Using an event-driven simulation, we evaluate the performance of Algorithm IWCP under several circumstances. The experimental results show that Algorithm IWCP consistently outperforms the companion schemes in various performance metrics.
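A toy sketch of profit-per-byte replacement in the spirit of the normalized profit function above (the paper's actual function is more elaborate; the formula and the rule_confidence parameter here are illustrative assumptions, with the confidence term standing in for how prefetching-rule-implied objects might be discounted): each object gets a profit score, and the cache evicts the lowest-profit objects first.

```python
def profit(access_prob: float, fetch_cost: float, size: int,
           rule_confidence: float = 1.0) -> float:
    """Illustrative normalized profit: expected saved fetch cost per cached byte.
    rule_confidence discounts objects cached only because a prefetching rule
    implied them (an assumption, not the paper's exact formula)."""
    return access_prob * rule_confidence * fetch_cost / size

class ProfitCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.used = 0
        self.objects = {}  # url -> (size, profit)

    def admit(self, url: str, size: int, p: float):
        # Evict lowest-profit objects until the new object fits.
        while self.used + size > self.capacity and self.objects:
            victim = min(self.objects, key=lambda u: self.objects[u][1])
            if self.objects[victim][1] >= p:
                return  # everything cached is more profitable; reject the newcomer
            self.used -= self.objects.pop(victim)[0]
        if self.used + size <= self.capacity:
            self.objects[url] = (size, p)
            self.used += size

cache = ProfitCache(capacity=100)
cache.admit("/index.html", 30, profit(0.9, 200, 30))
cache.admit("/promo.png", 80, profit(0.1, 50, 80, rule_confidence=0.5))
```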

10.
Current proxy server and client caching techniques do not incorporate the dynamics of document selection and modification. The adaptive model proposed in the article uses document life histories to optimize cache performance. We briefly describe existing “semi-intelligent” caching strategies and then propose a mechanism for adaptive cache management. Our approach attempts to improve cache performance by modeling document life histories to determine usefulness. We use damped exponential smoothing to ensure an accurate yet responsive model of document dynamics.
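The damped exponential smoothing mentioned above is the standard recurrence s_t = α·x_t + (1 − α)·s_{t−1}. A minimal sketch applied to per-interval reference counts of a single document (the decay factor 0.3 and the sample data are arbitrary illustrative choices):

```python
def smooth(observations, alpha: float = 0.3):
    """Damped (simple) exponential smoothing of a document's reference counts.
    Recent intervals dominate, but history is never forgotten entirely."""
    s = observations[0]
    history = [s]
    for x in observations[1:]:
        s = alpha * x + (1 - alpha) * s
        history.append(s)
    return history

# References to one document in successive time intervals.
refs = [10, 12, 9, 0, 0, 1, 0]
print(smooth(refs))  # the smoothed score decays after the document stops being requested
```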

11.
The gateways are the performance bottleneck of wireless mesh access networks and thus alleviating stress on them is essential to making such wireless networks robust and scalable. Using proxy servers or wireless peer-to-peer streaming techniques can help reduce the gateway load. However, these techniques, because they are data caching methods, do not save wireless resources. We instead consider a communication-sharing approach in this paper. Traditional stream sharing solutions depend on cooperation with the video server. However, in the wireless access network it is difficult to cooperate with online video sites. To address this problem in wireless mesh access networks, we propose a distributed video sharing technique called Dynamic Stream Merging (DSM). DSM is able to improve the robustness of the access network without cooperation from the online video site or the users and has the intelligence to handle sudden spikes in demand for certain videos due to specific events, thereby preventing adverse effects to other daily wireless traffic. The technique can also leverage the 80:20 data access pattern, common for many video applications, to substantially increase the service throughput. We explain the DSM technique, present the system prototype, and discuss the experimental results.

12.
Nowadays, the peer-to-peer (P2P) system is one of the largest Internet bandwidth consumers. To relieve the burden on the Internet backbone and improve the query and retrieval performance of P2P file sharing networks, efficient P2P caching algorithms are of great importance. In this paper, we propose a distributed topology-aware unstructured P2P file caching infrastructure and design novel placement and replacement algorithms to achieve optimal performance. In our system, for each file, an adequate number of copies are generated and disseminated at topologically distant locations. Contrary to general belief, our caching decisions are in favor of less popular files. Combined with the underlying topology-aware infrastructure, our strategy retains excellent performance for popular objects while greatly improving the caching performance for less popular files. Overall, our solution can reduce P2P traffic on the Internet backbone and relieve the over-caching problem that has not been properly addressed in unstructured P2P networks. We carry out simulation experiments to compare our approaches with several traditional caching strategies. The results show that our algorithms can achieve better query hit rates, smaller query delay, higher cache hit rates, and lower communication overhead.

13.
蔡凌  王兴伟  汪晋宽  黄敏 《软件学报》2019,30(12):3765-3781
To improve in-network caching performance in information-centric networking, an adaptive caching strategy based on concept drift learning (CDL) is proposed. Considering how mutual awareness of node data and content data affects caching performance, the strategy treats the state data streams of nodes and contents as a network resource, mines the extracted multi-dimensional state-attribute data together with cache-matching data, and uses the learned functional mapping between state attributes and cache matches (the "concept") to predict node-content matches in future periods. To improve the accuracy of the matching algorithm, an information-entropy-based concept drift detection algorithm is introduced into the learning process: when drift is detected from changes in the entropy of the state attributes, a concept-recurrence-based caching algorithm redefines the functional mapping. Simulation results show that, compared with the CEE, LCD, Prob, and OPP strategies, the proposed strategy reduces network operating cost and improves user quality of experience.
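A minimal sketch of entropy-based drift detection in the spirit of the strategy above (the window size, threshold, and sample stream are illustrative assumptions; the paper's detector operates on its own state attributes): Shannon entropy is computed over a sliding window of a categorical state attribute, and a sharp jump between adjacent windows is flagged as drift, after which the concept would be relearned.

```python
import math
from collections import Counter

def shannon_entropy(values) -> float:
    """Shannon entropy (in bits) of a window of categorical observations."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def detect_drift(stream, window: int = 50, threshold: float = 0.5):
    """Flag positions where entropy changes sharply between adjacent windows."""
    drifts = []
    for i in range(window, len(stream) - window, window):
        prev = shannon_entropy(stream[i - window:i])
        curr = shannon_entropy(stream[i:i + window])
        if abs(curr - prev) > threshold:
            drifts.append(i)  # relearn the concept from this point on
    return drifts

# Requests concentrated on item "a", then abruptly spread over many items.
stream = ["a"] * 100 + ["a", "b", "c", "d", "e"] * 40
print(detect_drift(stream))  # drift flagged near the change point
```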

14.
Large web search engines have to answer thousands of queries per second with interactive response times. Due to the sizes of the data sets involved, often in the range of multiple terabytes, a single query may require the processing of hundreds of megabytes or more of index data. To keep up with this immense workload, large search engines employ clusters of hundreds or thousands of machines, and a number of techniques such as caching, index compression, and index and query pruning are used to improve scalability. In particular, two-level caching techniques cache results of repeated identical queries at the frontend, while index data for frequently used query terms are cached in each node at a lower level. We propose and evaluate a three-level caching scheme that adds an intermediate level of caching for additional performance gains. This intermediate level attempts to exploit frequently occurring pairs of terms by caching intersections or projections of the corresponding inverted lists. We propose and study several offline and online algorithms for the resulting weighted caching problem, which turns out to be surprisingly rich in structure. Our experimental evaluation based on a large web crawl and real search engine query log shows significant performance gains for the best schemes, both in isolation and in combination with the other caching levels. We also observe that a careful selection of cache admission and eviction policies is crucial for best overall performance. Work supported by NSF CAREER Award CCR-0093400 and the New York State Center for Advanced Technology in Telecommunications (CATT) at Polytechnic University.
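A small sketch of the intermediate caching level described above, under simplified assumptions (posting lists as sorted Python lists of document IDs, a plain dict keyed by the term pair standing in for the cache, and a tiny made-up index): the intersection of two frequently co-occurring terms' inverted lists is computed once and reused by later queries.

```python
def intersect(postings_a, postings_b):
    """Merge-style intersection of two sorted posting lists."""
    i = j = 0
    result = []
    while i < len(postings_a) and j < len(postings_b):
        if postings_a[i] == postings_b[j]:
            result.append(postings_a[i])
            i += 1
            j += 1
        elif postings_a[i] < postings_b[j]:
            i += 1
        else:
            j += 1
    return result

index = {
    "cache": [1, 4, 7, 9, 12],
    "proxy": [2, 4, 9, 12, 20],
}
pair_cache = {}  # (term, term) -> cached intersection

def postings_for_pair(t1: str, t2: str):
    key = tuple(sorted((t1, t2)))
    if key not in pair_cache:  # only pay the merge cost on a miss
        pair_cache[key] = intersect(index[key[0]], index[key[1]])
    return pair_cache[key]

print(postings_for_pair("proxy", "cache"))  # [4, 9, 12], now cached for later queries
```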

15.
Exploiting client caches to build large Web caches
New demands brought by the continuing growth of the Internet will be met in part by more effective and comprehensive use of caching. This paper proposes to exploit client browser caches in the context of cooperative proxy caching by constructing the client caches within each organization (e.g., corporate networks) as a peer-to-peer (P2P) client cache. Via trace-driven simulations we evaluate the potential performance benefit of cooperative proxy caching with/without exploiting client caches. We show that exploiting client caches in cooperative proxy caching can significantly improve performance, particularly when the size of individual proxy caches is limited compared to the universe of Web objects. We further devise a cooperative hierarchical greedy-dual replacement algorithm (Hier-GD), which not only provides some cache coordination but also utilizes client caches. Through Hier-GD, we explore the design issues of how to exploit client caches in cooperative proxy caching to build large Web caches. We show that Hier-GD is technically practical and can potentially improve the performance of cooperative proxy caching by utilizing client caches.
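As context for the hierarchical greedy-dual (Hier-GD) algorithm, here is a minimal single-node GreedyDual-Size sketch, the classic greedy-dual family that Hier-GD extends hierarchically; this is not the paper's cooperative algorithm, and the example objects and costs are illustrative. Each object carries a credit H = L + cost/size, eviction picks the minimum H, and L is raised to the evicted credit so remaining objects age relative to newcomers.

```python
class GreedyDualSizeCache:
    """Single-node GreedyDual-Size replacement (illustrative, not Hier-GD itself)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.used = 0
        self.L = 0.0            # inflation value: rises as objects are evicted
        self.objects = {}       # url -> [size, cost, credit H]

    def access(self, url: str, size: int, cost: float) -> bool:
        if url in self.objects:
            self.objects[url][2] = self.L + cost / size  # refresh credit on a hit
            return True
        # Miss: evict lowest-credit objects until the new object fits.
        while self.used + size > self.capacity and self.objects:
            victim = min(self.objects, key=lambda u: self.objects[u][2])
            self.L = self.objects[victim][2]             # age the remaining objects
            self.used -= self.objects.pop(victim)[0]
        if self.used + size <= self.capacity:
            self.objects[url] = [size, cost, self.L + cost / size]
            self.used += size
        return False

cache = GreedyDualSizeCache(capacity=100)
cache.access("/a.html", 40, cost=10)
cache.access("/b.jpg", 70, cost=5)   # evicts /a.html; its credit becomes the new L
print(list(cache.objects))
```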

16.
Location Dependent Information Services (LDISs), through which mobile clients can access location sensitive data such as weather information, traffic reports, and local news, are gaining increasing popularity in recent years. Due to limited client power and intermittent connectivity, caching is an important approach to improve the performance of LDISs. In this paper, we propose a cache replacement policy called Location Dependent Cooperative Caching (LDCC). Unlike existing location dependent cache replacement policies, the LDCC strategy applies a prediction model to approximate client movement behavior and a probabilistic transition model to analyze the communication cost. These models are used in the design of a cache replacement policy to improve system overall performance. Simulation results demonstrate that the proposed strategy significantly outperforms existing caching policies in providing LDIS in mobile ad hoc networks.

17.
Most existing content caching algorithms require an accurate estimate of content popularity, which is hard to obtain in a dynamic mobile network environment. This paper proposes a cloud-to-edge hybrid caching strategy for content-heterogeneous 5G wireless networks that optimizes where content is cached: at the origin content server, in cloud units (CUs), or at base stations (BSs). Lyapunov optimization is used to handle the NP-hard cache control problem and the tight coupling between CU-level and BS-level caching decisions, which helps expose the hierarchy of the network architecture and the relationship between CU caches and BS caches; at the same time, the new hierarchical architecture can opportunistically exploit both cloud-centric and edge-centric caching to improve caching performance and sustain high average requested content data rates. With the Lyapunov optimization technique, bounded service delay can be achieved for all arrival rates within a constant fraction of the capacity region, enabling fast reads of cached data. Simulation results show that the proposed caching strategy has clear advantages in average end-to-end service delay and load reduction ratio.
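For readers unfamiliar with the technique, Lyapunov optimization in its standard drift-plus-penalty form (this is the generic framework, not the paper's specific queue or penalty definitions) greedily minimizes, at each slot t, a bound on

```latex
\Delta(\Theta(t)) + V \,\mathbb{E}\!\left[\, p(t) \mid \Theta(t) \,\right],
\qquad
\Delta(\Theta(t)) \triangleq \mathbb{E}\!\left[ L(\Theta(t+1)) - L(\Theta(t)) \mid \Theta(t) \right],
\qquad
L(\Theta(t)) \triangleq \tfrac{1}{2}\sum_{i} Q_i(t)^2 ,
```

where Q_i(t) are queue backlogs, p(t) is the penalty (for example, service delay), and the parameter V trades off penalty minimization against queue stability.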

18.
This paper proposes a new Web caching architecture model: content-based Web caching. The model jointly considers the proxy's operational information and the content characteristics of Web documents, defines virtual user communities and a proxy "personality", and uses ontology techniques to describe that personality. Simulation experiments show that incorporating content attributes further improves Web caching performance.

19.
With the widespread use of mobile devices and emerging mobile applications, the exponential growth of traffic in mobile networks causes congestion, high latency, and poor quality of experience that cannot satisfy mobile users. Edge caching, by reusing popular network content, can greatly relieve the transmission pressure on wireless networks; at the same time, it reduces the network latency of user requests and thus improves the user experience, and it has become one of the key technologies in mobile edge computing (MEC) for 5G/Beyond 5G. Focusing on mobile edge caching, this survey first introduces its application scenarios, main characteristics, execution process, and evaluation metrics. It then analyzes and compares edge caching strategies whose optimization goals are low latency with high energy efficiency, low latency with high hit ratio, or maximum revenue, and summarizes the key research issues of each. Next, it describes the deployment of 5G-capable MEC servers and, on that basis, analyzes green mobility-aware caching strategies in 5G networks and caching strategies in 5G heterogeneous cellular networks. Finally, it discusses research challenges and future directions for edge caching from several perspectives: security, mobility-aware caching, reinforcement-learning-based edge caching, federated-learning-based edge caching, and edge caching for Beyond 5G/6G networks.

20.
The question of whether prefetching blocks on the file into the block cache can effectively reduce overall execution time of a parallel computation, even under favorable assumptions, is considered. Experiments have been conducted with an interleaved file system testbed on the Butterfly Plus multiprocessor. Results of these experiments suggest that (1) the hit ratio, the accepted measure in traditional caching studies, may not be an adequate measure of performance when the workload consists of parallel computations and parallel file access patterns, (2) caching with prefetching can significantly improve the hit ratio and the average time to perform an I/O (input/output) operation, and (3) an improvement in overall execution time has been observed in most cases. In spite of these gains, prefetching sometimes results in increased execution times (a negative result, given the optimistic nature of the study). The authors explore why it is not trivial to translate savings on individual I/O requests into consistently better overall performance and identify the key problems that need to be addressed in order to improve the potential of prefetching techniques in this environment.
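To make the hit-ratio discussion concrete, here is a purely illustrative toy simulation (not the testbed above): an LRU block cache reading a file sequentially, with optional one-block-ahead prefetching. It shows how prefetching can raise the hit ratio even though each prefetch is itself an extra I/O, which is exactly why the hit ratio alone may not predict overall execution time.

```python
from collections import OrderedDict

def run(accesses, cache_size: int, prefetch: bool) -> float:
    """Return the hit ratio of an LRU block cache, optionally prefetching block+1."""
    cache = OrderedDict()
    hits = 0

    def touch(block: int) -> bool:
        if block in cache:
            cache.move_to_end(block)
            return True
        cache[block] = True
        if len(cache) > cache_size:
            cache.popitem(last=False)  # evict the least recently used block
        return False

    for block in accesses:
        if touch(block):
            hits += 1
        if prefetch:
            touch(block + 1)  # bring in the next block ahead of the request
    return hits / len(accesses)

sequential = list(range(1000))  # a process reading a file front to back
print(run(sequential, cache_size=16, prefetch=False))  # ~0.0: every block misses
print(run(sequential, cache_size=16, prefetch=True))   # ~1.0: the next block is already cached
```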
