Similar articles
20 similar articles found (search time: 31 ms)
1.
顾汇贤, 王海江, 魏贵义. Journal of Software (《软件学报》), 2022, 33(11): 4396-4409
With the rapid growth of multimedia traffic, the traditional cloud computing model struggles to meet user demands for low latency and high bandwidth. Although in edge computing the computing capacity of edge devices such as base stations, together with short-distance communication between base stations and users, can give users higher quality of service, how to design edge caching strategies around the relationship between the revenue and cost of edge nodes remains a challenging problem. Leveraging 5G and collaborative edge computing, this paper proposes a collaborative edge caching technique for large-scale short-video scenarios that addresses three problems simultaneously: (1) improving user experience by reducing transmission latency; (2) relieving data transmission pressure on the backbone network through short-range transmission; and (3) reducing the workload of cloud servers through a distributed working mode. First, a collaborative edge caching model is defined, in which edge nodes are equipped with limited storage, mobile users can connect to these edge nodes, and one edge node can serve multiple users. Second, a non-cooperative game model is designed to study the collaborative behavior among edge nodes: each edge node is treated as a player that makes initial caching and cache replacement decisions. Finally, a Nash equilibrium of the game is found, and a distributed algorithm is designed to reach it. Simulation results show that the proposed edge caching strategy reduces user latency by 20% and backbone network traffic by 80%.
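The abstract does not give the distributed algorithm, but the best-response idea behind reaching a Nash equilibrium can be sketched. In this hypothetical Python sketch, each edge node repeatedly re-chooses the files that maximize its own utility given the other nodes' current caches, stopping when no node changes. The popularity values, capacity, and neighbor discount factor are invented for illustration and are not the paper's model.

```python
# Hypothetical best-response dynamics for a non-cooperative edge-caching game:
# nodes take turns re-picking their caches until nobody wants to change
# (a Nash equilibrium). All utilities below are invented for illustration.

def best_response(node, caches, popularity, capacity, neighbor_discount=0.2):
    """Choose the files that maximize this node's utility.

    A file already cached by a neighbor is worth less to cache locally,
    because the neighbor can serve it at a small extra delay.
    """
    others = set().union(*(c for i, c in caches.items() if i != node))
    def gain(f):
        return popularity[f] * (neighbor_discount if f in others else 1.0)
    ranked = sorted(popularity, key=gain, reverse=True)
    return set(ranked[:capacity])

def nash_caching(nodes, popularity, capacity, max_rounds=50):
    caches = {n: set() for n in nodes}
    for _ in range(max_rounds):
        changed = False
        for n in nodes:
            new = best_response(n, caches, popularity, capacity)
            if new != caches[n]:
                caches[n], changed = new, True
        if not changed:          # no node can improve: equilibrium reached
            break
    return caches

popularity = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
eq = nash_caching(["n1", "n2"], popularity, capacity=2)
```

With the discount at 0.2 the two nodes converge in two rounds to caches {a, b} and {a, c}: the most popular file is replicated at both nodes while the remaining space diversifies.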

2.
In a wireless mesh network, each node buffers a different amount of data, so some nodes' buffers may sit underutilized while others, burdened with heavy buffering tasks, perform frequent data replacement; this unbalances storage use across nodes and reduces buffering efficiency. This paper proposes a cooperative buffering algorithm based on hierarchical node management. The algorithm constructs a distributed buffer region in the network for each node, which replaces the node's own buffer; by making sensible use of each node's storage space, it increases the data buffering capacity of individual nodes. Theoretical analysis and experimental results show that the algorithm effectively improves the data access hit rate, reduces buffer replacement operations, and lowers node energy consumption.

3.
Nowadays, the peer-to-peer (P2P) system is one of the largest consumers of Internet bandwidth. To relieve the burden on the Internet backbone and improve the query and retrieval performance of P2P file-sharing networks, efficient P2P caching algorithms are of great importance. In this paper, we propose a distributed topology-aware unstructured P2P file caching infrastructure and design novel placement and replacement algorithms to achieve optimal performance. In our system, an adequate number of copies of each file are generated and disseminated at topologically distant locations. Contrary to common belief, our caching decisions favor less popular files. Combined with the underlying topology-aware infrastructure, our strategy retains excellent performance for popular objects while greatly improving caching performance for less popular files. Overall, our solution can reduce P2P traffic on the Internet backbone and relieve the over-caching problem that has not been properly addressed in unstructured P2P networks. We carry out simulation experiments to compare our approaches with several traditional caching strategies. The results show that our algorithms achieve better query hit rates, smaller query delay, higher cache hit rates, and lower communication overhead.

4.
To improve the performance of P2P spatial vector data index networks, a caching mechanism is introduced on top of an existing hybrid-structure P2P spatial index network, and a new multi-layer-oriented cache update strategy for spatial vector data is proposed. Targeting the multi-layer nature of spatial vector data, the strategy considers both layer priority and query frequency when updating the cache, making sensible use of cache space. Cache updating is abstracted as a 0/1 knapsack problem and solved with a genetic algorithm. Simulation results show that the strategy increases the cache hit rate and improves spatial index efficiency.
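As a concrete illustration of the 0/1 knapsack abstraction, the sketch below selects which map-layer tiles to keep in a capacity-bounded cache, with each tile's value standing in for the paper's combination of layer priority and query frequency. The paper solves the knapsack with a genetic algorithm; plain dynamic programming is used here only because it is compact, and all names and numbers are invented.

```python
# Hypothetical 0/1 knapsack formulation of cache updating: each candidate
# layer tile has a size (weight) and a value (priority x query frequency);
# the cache capacity is the knapsack bound. Solved by DP for illustration,
# in place of the paper's genetic algorithm.

def knapsack_cache(items, capacity):
    """items: list of (name, size, value); returns the set of names to keep."""
    n = len(items)
    dp = [0] * (capacity + 1)                 # dp[c] = best value at capacity c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i, (_, size, value) in enumerate(items):
        for c in range(capacity, size - 1, -1):
            if dp[c - size] + value > dp[c]:
                dp[c] = dp[c - size] + value
                keep[i][c] = True
    chosen, c = set(), capacity               # backtrack to recover the choice
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.add(items[i][0])
            c -= items[i][1]
    return chosen

# value = layer_priority * query_frequency (one plausible weighting)
tiles = [("roads", 4, 8), ("rivers", 3, 6), ("labels", 2, 5), ("parcels", 5, 4)]
chosen = knapsack_cache(tiles, capacity=7)
```

For these invented numbers the optimal choice is to keep the "roads" and "rivers" tiles (total size 7, total value 14).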

5.
To improve the caching performance of content-centric mobile edge networks, this paper proposes UMANCC (user mobility-aware and node centrality based caching). In UMANCC, edge nodes compute node centrality, cache idle rate, and user dwell time within each cell. A mobile edge network controller aggregates the information from the edge nodes, computes and ranks the importance of each edge node, and selects content caching nodes according to the ranking. Simulation results show that, compared with the traditional LCE and Prob caching mechanisms, UMANCC reduces the average hop count for users to obtain content by up to 15.9%, improves the edge node cache hit rate by at least 13.7%, and reduces traffic entering the core network by up to 32.1%, effectively improving content distribution performance in content-centric mobile edge networks.
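The controller's ranking step can be illustrated with a simple weighted score over the three metrics named above. The weights, the normalization of the metrics, and the example nodes below are all invented, since the abstract does not give the exact scoring function.

```python
# Hypothetical sketch of UMANCC's ranking step: score each edge node by a
# weighted sum of centrality, cache idle rate, and mean user dwell time
# (all normalized to [0, 1] here), then pick the top-k as caching nodes.
# Weights and metrics are invented; the paper's scoring may differ.

def rank_cache_nodes(nodes, weights=(0.4, 0.3, 0.3), k=2):
    """nodes: {name: (centrality, idle_rate, dwell_time)}; returns top-k names."""
    w_c, w_i, w_d = weights
    score = lambda m: w_c * m[0] + w_i * m[1] + w_d * m[2]
    ranked = sorted(nodes, key=lambda n: score(nodes[n]), reverse=True)
    return ranked[:k]

edge_nodes = {
    "bs1": (0.9, 0.2, 0.6),   # central, but its cache is nearly full
    "bs2": (0.5, 0.8, 0.7),   # spare capacity, and users linger in its cell
    "bs3": (0.3, 0.9, 0.2),   # idle cache, but users pass through quickly
}
chosen = rank_cache_nodes(edge_nodes, k=2)
```

Here "bs2" outranks the more central "bs1" because its spare cache and longer dwell time outweigh the centrality gap under these weights.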

6.
We describe a data deduplication system for backup storage of PC disk images, named in-RAM metadata utilizing deduplication (IR-MUD). In-RAM hash granularity adaptation and miniLZO-based data compression are first proposed to reduce the in-RAM metadata size and thereby the space overhead of the in-RAM metadata caches. Second, an in-RAM metadata write cache, as opposed to the traditional metadata read cache, is proposed to further reduce metadata-related disk I/O and improve deduplication throughput. During deduplication, the metadata write cache is managed under the LRU caching policy; for each manifest hit in the write cache, an expensive manifest reload from disk is avoided. After deduplication, all manifests in the write cache are cleared and stored on disk. Our experimental results using a 1.5 TB real-world disk image dataset show that: 1) IR-MUD achieved about a 95% size reduction for the deduplication metadata, with only a small time overhead; 2) without the metadata write cache, and with the same RAM budget for the metadata read cache, IR-MUD achieved a 400% higher RAM hit ratio and a 50% higher deduplication throughput than the classic Sparse Indexing deduplication system, which uses no metadata utilization approaches; and 3) with the metadata write cache and sufficient RAM, IR-MUD achieved a 500% higher RAM hit ratio than Sparse Indexing and a 70% higher deduplication throughput than IR-MUD with only a single metadata read cache. The in-RAM metadata harnessing and metadata write caching approaches of IR-MUD can be applied in most parallel deduplication systems to improve metadata caching efficiency.
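The metadata write cache described above is essentially an LRU map from fingerprints to manifests that is flushed wholesale after deduplication. A minimal hypothetical sketch follows; the class name, the disk interface, and the sample data are invented, not IR-MUD's actual API.

```python
# Hypothetical sketch of an LRU-managed metadata write cache: a hit avoids
# an expensive manifest reload from disk; after deduplication, every cached
# manifest is persisted and the cache is cleared in one pass.

from collections import OrderedDict

class MetadataWriteCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()      # fingerprint -> manifest
        self.hits = self.misses = 0

    def lookup(self, fingerprint, load_from_disk):
        if fingerprint in self.cache:   # hit: skip the disk reload
            self.cache.move_to_end(fingerprint)
            self.hits += 1
            return self.cache[fingerprint]
        self.misses += 1                # miss: reload and admit under LRU
        manifest = load_from_disk(fingerprint)
        self.cache[fingerprint] = manifest
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return manifest

    def flush(self, store_to_disk):
        """After deduplication: persist and clear every cached manifest."""
        for fp, manifest in self.cache.items():
            store_to_disk(fp, manifest)
        self.cache.clear()

disk = {f"fp{i}": f"manifest-{i}" for i in range(4)}
cache = MetadataWriteCache(capacity=2)
for fp in ["fp0", "fp1", "fp0", "fp2", "fp0"]:
    cache.lookup(fp, disk.__getitem__)
```

In this access trace, the repeated manifest "fp0" is served from RAM twice, avoiding two disk reloads, while "fp1" is evicted once the two-entry budget is exceeded.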

7.
To ensure load balancing in network storage and avoid unrecoverable loss in the event of node or disk failures, a coded caching scheme for distributed network storage based on a balanced data placement strategy is proposed, with separate solutions for large caches and small caches. First, the Maddah scheme is extended to a multi-server system: combined with the balanced data placement strategy, each file is stored as a single unit across the data servers, solving the large-cache case. Then, the interference elimination scheme is extended to the multi-server system: interference elimination is used to lower the peak cache rate and, combined with balanced data placement, a linear combination of cache segments is proposed to solve the small-cache case. Finally, simulations are carried out with the Linux-based NS2 simulator on systems with one and with two parity servers. The results show that the proposed scheme effectively reduces the peak transmission rate and outperforms two recent caching schemes. In addition, although distributed storage limits the ability to combine content from different servers into a single message, causing some performance loss for coded caching, the scheme can exploit the inherent redundancy of distributed storage systems to improve storage system performance.

8.
In recent years, telecom operators have been moving away from traditional broadcast-driven television, towards IP-based interactive and on-demand multimedia services. Consequently, multicast is no longer sufficient to limit the amount of generated traffic in the network. In order to prevent an explosive growth in traffic, caches can be strategically placed throughout the content delivery infrastructure. As the size of caches is usually limited to only a small fraction of the total size of all content items, it is important to accurately predict future content popularity. Traditional caching strategies only take into account the past when deciding what content to cache. Recently, a trend towards novel strategies that actually try to predict future content popularity has arisen. In this paper, we ascertain the viability of using popularity prediction in realistic multimedia content caching scenarios. The proposed generic popularity prediction algorithm is capable of predicting future content popularity, independent of specific content and service characteristics. Additionally, a novel cache replacement strategy, which employs the popularity prediction algorithm when making its decisions, is introduced. A detailed evaluation, based on simulation results using trace files from an actual deployed Video on Demand service, was performed. The evaluation results are used to determine the merits of popularity-based caching compared to traditional strategies. Additionally, the synergy between several parameters, such as cache size and prediction window, is investigated. Results show that the proposed prediction-based caching strategy has the potential to significantly outperform state-of-the-art traditional strategies. Specifically, the evaluated Video on Demand scenario showed a performance increase of up to 20% in terms of cache hit rate.
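The idea of evicting by predicted rather than past popularity can be sketched with a toy model: extrapolate each item's per-window request counts with a least-squares line and evict the item with the lowest predicted next-window demand. The window-based history and the linear model below are stand-ins for the paper's generic prediction algorithm, and the numbers are invented.

```python
# Hypothetical prediction-based cache replacement: fit a least-squares line
# to each item's request counts over recent windows, extrapolate one window
# ahead, and evict the item with the lowest predicted popularity.

def predict_next(counts):
    """Least-squares linear extrapolation of the next-window request count."""
    n = len(counts)
    if n == 1:
        return counts[0]
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(counts) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, counts)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return mean_y + slope * (n - mean_x)   # line evaluated at x = n

def choose_victim(history):
    """history: {item: [count per window]}; evict lowest predicted demand."""
    return min(history, key=lambda item: predict_next(history[item]))

history = {
    "old_hit": [9, 6, 3],   # popular once, fading fast
    "steady":  [4, 4, 4],   # constant demand
    "rising":  [1, 3, 5],   # newly popular
}
victim = choose_victim(history)
```

Note the contrast with purely past-based eviction: by total past requests the "rising" item would be evicted first, while the prediction picks the fading "old_hit" instead.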

9.
Nowadays, information-centric networking (ICN) is of great importance for accessing Internet-based applications. Growing Internet traffic has encouraged the adoption of content-centric architectures, based on receiver-driven data retrieval, to better serve both content providers and user demand. These architectures have built-in network caches, which improve the speed and efficiency of content delivery and compare favorably with the traditional caching of conventional Internet access. With information-centric architectures, users need not download content from the content server; they can fetch it from nearby caches instead. Through caching, access to the requested content becomes independent of its location and does not rely on the storage and transmission methodology of the original source. Much research has been devoted to caching approaches in content-centric networks, since efficient caching is essential to reduce delay and enhance network performance. This paper therefore presents a survey of caching approaches and related issues such as cached content, availability, and cache router localization. The main goal is to provide a state-of-the-art overview for researchers interested in information-centric networking, outlining the work done and the open issues in this caching field.

10.
Semantic caching and query processing   Cited by 2 (self-citations: 0, others: 2)
Semantic caching is very attractive for use in distributed systems due to the reduced network traffic and the improved response time. It is particularly efficient for a mobile computing environment, where the bandwidth of wireless links is a major performance bottleneck. Previous work either does not provide a formal semantic caching model, or lacks efficient query processing strategies. This paper extends the existing research in three ways: formal definitions associated with semantic caching are presented, query processing strategies are investigated and, finally, the performance of the semantic cache model is examined through a detailed simulation study.
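The core trimming step of semantic caching — splitting an incoming query into a "probe" answerable from cached results and a "remainder" predicate that must still go to the server — can be sketched for one-dimensional range predicates. This is a simplified illustration of the formal model, with integer intervals and invented data.

```python
# Hypothetical sketch of semantic-cache query trimming for range predicates:
# given a cached query region and a new query, compute the part answerable
# locally (probe) and the leftover predicates to send to the server.

def split_query(query, cached):
    """query, cached: (lo, hi) integer intervals. Returns (probe, remainders)."""
    (qlo, qhi), (clo, chi) = query, cached
    if qhi < clo or qlo > chi:                 # disjoint: nothing cached
        return None, [query]
    probe = (max(qlo, clo), min(qhi, chi))     # answerable from the cache
    remainder = []
    if qlo < clo:
        remainder.append((qlo, clo - 1))       # left part not cached
    if qhi > chi:
        remainder.append((chi + 1, qhi))       # right part not cached
    return probe, remainder

# cached: rows with 20 <= age <= 40; new query: 30 <= age <= 50
probe, remainder = split_query((30, 50), (20, 40))
```

Here the cache answers `30 <= age <= 40` locally and only `41 <= age <= 50` travels over the (possibly wireless) link, which is where the traffic and response-time savings come from.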

11.
With the rapid development of mobile Internet technologies and new services such as virtual reality (VR) and augmented reality (AR), users' demands on network quality of service (QoS) keep rising. To address the high load and low-latency requirements of in-network services, this paper proposes a data caching strategy for a multi-access mobile edge computing (MEC) environment. An SDN controller is introduced into the MEC collaborative caching framework, a joint cache optimization mechanism combining data caching and computation migration is constructed, and the problem of growing user-perceived delay in the data caching strategy is solved by a joint optimization algorithm based on an improved heuristic genetic algorithm with simulated annealing. The paper also proposes a multi-base-station collaborative service optimization strategy to coordinate computation and storage resources across multiple mobile terminals and smart base stations. For the problem that application service demand on an MEC server varies with time, space, requests, and other factors, an application service optimization algorithm based on a Markov chain of service popularity is constructed, and a deep deterministic policy gradient (DDPG) method based on deep reinforcement learning is used to minimize the average delay of computation tasks in the cluster while bounding MEC server energy consumption, improving the accuracy of application service cache updates in the system and reducing the complexity of service updates. Experimental results show that the proposed data caching algorithm balances the cache space of user devices and effectively reduces the average transfer latency of acquiring data resources, and that the proposed service optimization algorithm improves the quality of user experience.

12.
The fast-growing traffic of peer-to-peer (P2P) applications, most notably BitTorrent, is putting unprecedented pressure on Internet Service Providers (ISPs). To address this challenge, a number of P2P traffic management schemes have been proposed in recent years, among which caching and redirection are two representatives. Both have shown success in theory and in practice, yet their implementations are largely independent, making the overall effectiveness sub-optimal. In this paper, we for the first time examine the joint implementation of these two promising solutions under a coherent framework, Tod-Cache (Traffic Oriented Distributed Caching). We show that the combination of caching and redirection can dramatically reduce the P2P traffic traversing ISPs. Under this framework, we formulate the optimal caching and redirection problem and show its complexity. We then present a highly adaptive and scalable heuristic algorithm which achieves close-to-optimal performance with much lower computational complexity. We extensively evaluate our framework under diverse network and end-system configurations. Our simulation results show that, under the same configuration, it can achieve at least 85% of the performance of a traditional cache with at most 1/10 of the number of devices.

13.
The growing demand for content retrieval on the Internet has prompted academia to propose a variety of information-centric future network architectures, which shift the communication model from host-centric to content-centric. One of the most important features of information-centric networking (ICN) is the use of in-network caching to reduce the latency of content retrieval, save network bandwidth, and relieve congestion. Compared with traditional content delivery networks (CDN), peer-to-peer networks (P2P), and Web caching systems, ICN caching systems exhibit a series of new characteristics. This paper analyzes the challenges these new characteristics pose to ICN research, surveys ICN cache optimization methods from multiple perspectives with a detailed analysis and comparison of different caching strategies, and points out directions for future research.

14.
A multi-level cache model for run-time optimization of remote visualization   Cited by 1 (self-citations: 0, others: 1)
Remote visualization is an enabling technology aiming to resolve the barrier of physical distance. While many researchers have developed innovative algorithms for remote visualization, previous work has focused little on systematically investigating optimal configurations of remote visualization architectures. In this paper, we study caching and prefetching, an important aspect of such architecture design, in order to optimize the fetch time in a remote visualization system. Unlike a processor cache or web cache, caching for remote visualization is unique and complex. Through actual experimentation and numerical simulation, we have discovered ways to systematically evaluate and search for optimal configurations of remote visualization caches under various scenarios, such as different network speeds, sizes of data for user requests, prefetch schemes, cache depletion schemes, etc. We have also designed a practical infrastructure software to adaptively optimize the caching architecture of general remote visualization systems, when a different application is started or the network condition varies. The lower bound of achievable latency discovered with our approach can aid the design of remote visualization algorithms and the selection of suitable network layouts for a remote visualization system.

15.
Smart IoT for electrical equipment supports high-level development of the electrical equipment industry, but traditional information exchange architectures cannot meet its information exchange needs. To improve information distribution efficiency in an intelligent manufacturing supervision platform for electrical equipment, an edge-caching-based information distribution architecture is designed. Considering network construction cost and the distribution delay requirements of business data, an edge information caching strategy for intelligent manufacturing supervision of electrical equipment is studied, with the goal of maximizing the caching service revenue of the power company. To reduce the complexity of solving the problem, a cache decision algorithm based on improved particle swarm optimization is proposed. Simulation results show that the proposed caching algorithm achieves high caching service revenue for the power company while meeting the caching needs of higher-priority, delay-sensitive data.

16.
Design and implementation of a cooperative caching strategy for nodes in opportunistic networks   Cited by 1 (self-citations: 1, others: 0)
陈果, 叶晖, 赵明. Computer Engineering (《计算机工程》), 2010, 36(18): 85-87
To make effective use of the cooperative relationships between nodes in an opportunistic network and of the nodes' limited cache resources, while avoiding congestion and improving data transmission performance, this paper proposes a cooperative cache optimization strategy for opportunistic networks, HMP-Cache. Based on the characteristics of nodes in different motion states, the strategy selects cooperative caching nodes using a destination-address matching criterion and synchronizes cache data tables to share cache information within a local domain. Simulation results show that the strategy effectively controls the network overhead of data access and reduces access latency for network hotspot data.

17.
Research on cooperative caching mechanisms for clusters   Cited by 1 (self-citations: 0, others: 1)
Memory usage across the nodes of a computer cluster is generally unbalanced: some nodes use too much memory while others have plenty to spare. To improve cluster operating systems by treating cluster node memory as a globally distributed resource, we first propose a memory-interoperable cache scheme: by using cluster-wide memory as a file cache and reading files from other nodes' caches, many slow disk accesses can be avoided, improving the overall throughput of the cluster file system. We then propose a cache page replacement policy, GCAR, to support this scheme. Compared with CAR, GCAR manages frequently used pages in the cache at a finer granularity, making it better suited to cooperative cluster caching. Experimental results show that GCAR achieves a slightly better local cache hit rate than CAR, and a higher hit rate under cooperative cluster caching.

18.
To address the low efficiency of cache management in information-centric networking, this paper proposes a method to improve cache management efficiency, making full use of software-defined networking (SDN) concepts in a wireless mesh network (WMN) environment. The main work lies in SDN content management: cache placement decisions consider topological position, content size, and the location of caching node resources. Cache operations consider the locations of requesting clients and caching nodes, and are divided into off-path caching via branch points and on-path caching via content flows through caching nodes. The controller assigns work using a cached-content table. Experiments were conducted in two environments: a small network with locally aggregated clients and a large network with locally distributed clients. The results show that the proposed scheme reduces the average response delay of a random cache placement scheme by 23.95% using only 5.13 kb/s of control traffic overhead. Compared with other network caching schemes, it maximizes WMN node caching efficiency and clearly improves content cache allocation performance without significant extra overhead.

19.
Caching collaboration and cache allocation in peer-to-peer video systems   Cited by 1 (self-citations: 1, others: 0)
Providing scalable video services in a peer-to-peer (P2P) environment is challenging. Since videos are typically large and require high communication bandwidth for delivery, many peers may be unwilling to cache them in whole to serve others. In this paper, we address two fundamental research problems in providing scalable P2P video services: (1) how a host can find enough video pieces, which may scatter among the whole system, to assemble a complete video; and (2) given a limited buffer size, what part of a video a host should cache and what existing data should be expunged to make necessary space. We address these problems with two new ideas: Cell caching collaboration and Controlled Inverse Proportional (CIP) cache allocation. The Cell concept allows cost-effective caching collaboration in a fully distributed environment and can dramatically reduce video lookup cost. On the other hand, CIP cache allocation challenges the conventional caching wisdom by caching unpopular videos in higher priority. Our approach allows the system to retain many copies of popular videos to avoid creating hot spots and at the same time, prevent unpopular videos from being quickly evicted from the system. We have implemented a Gnutella-like simulation network and use it as a testbed to evaluate the proposed technique. Our extensive study shows convincingly the performance advantage of the new scheme.
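The CIP allocation idea — cache space inversely proportional to popularity, with a control to keep popular videos from disappearing and hot spots from forming — can be sketched as follows. The control exponent and the copy floor are invented knobs, not the paper's exact formula, and the popularity shares are made up.

```python
# Hypothetical sketch of Controlled Inverse Proportional (CIP) allocation:
# a system-wide cache budget is split across videos in inverse proportion
# to popularity, so unpopular videos keep copies in the system, while a
# floor guarantees some replicas of popular videos to avoid hot spots.

def cip_allocate(popularity, budget, alpha=1.0, floor=1):
    """popularity: {video: request share}; returns {video: cached copies}."""
    inv = {v: (1.0 / p) ** alpha for v, p in popularity.items()}
    total = sum(inv.values())
    return {v: max(floor, round(budget * w / total)) for v, w in inv.items()}

popularity = {"blockbuster": 0.6, "average": 0.3, "niche": 0.1}
copies = cip_allocate(popularity, budget=30)
```

The niche video receives the most copies (its few viewers would otherwise evict it system-wide), yet the blockbuster still retains several replicas via the inverse weighting and the floor.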

20.
In information-centric networking, in-network caching has the potential to improve network efficiency and content distribution performance by satisfying user requests with cached content rather than downloading the requested content from remote sources. In this respect, users who request, download, and keep content may be able to contribute to in-network caching by sharing their downloaded content with other users in the same network domain (i.e., user-assisted in-network caching). In this paper, we examine various aspects of user-assisted in-network caching in the hope of efficiently utilizing user resources. Through simulations, we first show that user-assisted in-network caching has attractive features, such as self-scalable caching, a near-optimal cache hit ratio (achievable when the content is fully cached in-network) based on stable caching, and performance improvements over in-network caching. We then examine caching strategies for user-assisted in-network caching: three strategies based on a centralized server that maintains all content availability information and tells each user what to cache, and three strategies based on each user's own content availability information. We show that the caching strategy affects the distribution of upload overhead across users and the number of cache hits in each segment. One interesting observation is that, even with a small storage space (0.1% of the content size per user), the centralized and distributed approaches improve the cache hit ratio by 50% and 45%, respectively. With an overall view of caching information, the centralized approach achieves a higher cache hit ratio than the distributed approach. Based on this observation, we discuss a distributed approach with a larger view of caching information than the basic distributed approach and, through simulations, confirm that a larger view leads to a higher cache hit ratio. Another interesting observation is that the random distributed strategy yields performance comparable to more complex strategies.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号