Similar Documents
20 similar documents found.
1.
This study proposes a novel caching scheme for content delivery network (CDN) services. In general, video users often watch the first part of a clip and then switch to other content. A caching scheme is therefore proposed in which the first part of frequently referenced content is stored on a solid state drive (SSD) while the remaining video content is stored on a hard disk drive (HDD). The proposed hybrid (SSD/HDD) caching scheme offers several benefits, such as an improved average data output capacity due to the high average data rate and hit capacity of the SSD. In other words, the performance of a CDN cache server can be significantly improved at a low extra cost.
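
Below is a minimal Python sketch of the hybrid SSD/HDD placement idea described in this abstract. The prefix length and the popularity threshold are illustrative assumptions; the abstract does not specify them.

PREFIX_SEGMENTS = 3          # number of leading segments treated as the "first part"
POPULARITY_THRESHOLD = 100   # requests/day above which content counts as "frequently referenced"

def place_segment(video_id, segment_index, request_rate, ssd, hdd):
    """Store the prefix of popular videos on the SSD, everything else on the HDD."""
    if request_rate >= POPULARITY_THRESHOLD and segment_index < PREFIX_SEGMENTS:
        ssd[(video_id, segment_index)] = True   # fast tier: absorbs most of the hits
    else:
        hdd[(video_id, segment_index)] = True   # capacity tier: the colder tail of the video

ssd_cache, hdd_cache = {}, {}
for seg in range(10):
    place_segment("clip-42", seg, request_rate=250, ssd=ssd_cache, hdd=hdd_cache)
print(sorted(ssd_cache), len(hdd_cache))        # prefix on the SSD, remaining 7 segments on the HDD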

2.
Because Internet access rates are highly heterogeneous, many video content providers today make available different versions of their videos, with each version encoded at a different rate. Multiple video versions, however, require more server storage and may also dramatically impact cache performance in a traditional cache or in a CDN server. An alternative to versions is layered encoding, which can also provide multiple quality levels. Layered encoding requires less server storage and may be more suitable for caching, but it typically increases transmission bandwidth due to encoding overhead. In this paper we compare video streaming of multiple versions with that of multiple layers in a caching environment. We examine caching and distribution strategies that use both versions and layers, and consider two cases: one in which the request distribution for the videos is known a priori, and adaptive caching, for which the request distribution is unknown. Our analytical and simulation results indicate that mixed distribution/caching strategies provide the best overall performance. A shorter version of this work appeared in Proc. of IEEE International Conference on Multimedia and Expo (ICME), Vol. 2, pages 45–48, Lausanne, Switzerland, August 2002.

3.
Most proxy caches for streaming video do not cache an entire video but only a portion of it. This is partly due to the large size of video objects, and partly because the popularity of different parts of a video can differ; the prefix, for example, is generally more popular. The development of efficient cache mechanisms therefore requires an understanding of the internal popularity characteristics of streaming videos. This paper makes two major contributions. First, we analyze two 6-month-long traces of RTSP video requests recorded at different streaming video servers of an entertainment video-on-demand provider, and show that the traces provide evidence that the internal popularity of the majority of the most popular videos obeys a k-transformed Zipf-like distribution. Second, we propose a caching algorithm that exploits this empirical internal popularity distribution. We find that this algorithm performs similarly to fine-grained caching but requires significantly less state information.
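
Below is a short Python sketch of how such an internal popularity curve can drive a prefix-caching decision. The shifted-rank form p(i) ∝ 1/(i + k)^α and the parameter values are assumptions for illustration, not the distribution fitted in the paper.

def segment_popularity(num_segments, alpha=0.8, k=5.0):
    """Relative access probability of each segment position within one video."""
    raw = [1.0 / (i + k) ** alpha for i in range(1, num_segments + 1)]
    total = sum(raw)
    return [w / total for w in raw]

def choose_cached_prefix(num_segments, cache_fraction, alpha=0.8, k=5.0):
    """Cache the leading segments, which carry most of the internal popularity mass."""
    probs = segment_popularity(num_segments, alpha, k)
    budget = int(num_segments * cache_fraction)
    return budget, sum(probs[:budget])

prefix_len, hit_mass = choose_cached_prefix(num_segments=100, cache_fraction=0.2)
print(f"caching the first {prefix_len} segments covers ~{hit_mass:.0%} of internal requests")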

4.
Router nodes in Named Data Networking (NDN) have caching capability, which greatly improves the efficiency of data delivery and retrieval in the network. However, because router cache capacity is limited, designing effective caching strategies remains an urgent task. To address this problem, a dynamic popularity-based cache decision and replacement policy (DPDR) is proposed. DPDR jointly considers content popularity and cache capacity, using an additive-increase/multiplicative-decrease (AIMD) algorithm to dynamically adjust the popularity threshold, and caches only content whose popularity exceeds the threshold. A cache replacement algorithm is also proposed that jointly considers the popularity of cached content and the time it was last accessed, evicting the content with the smallest replacement value. Extensive simulation results show that, compared with other algorithms, the proposed scheme effectively improves the cache hit ratio, shortens the average hit distance, and improves network throughput.
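
Below is a minimal Python sketch of the DPDR idea under stated assumptions: the AIMD step sizes, the direction of the threshold adjustments, and the replacement-value formula are illustrative, not the paper's exact definitions.

import time

class DPDRCache:
    def __init__(self, capacity, add_step=1.0, mult_factor=0.5):
        self.capacity = capacity
        self.threshold = 1.0            # popularity threshold, adapted with AIMD
        self.add_step = add_step        # additive increase when the cache is under pressure
        self.mult_factor = mult_factor  # multiplicative decrease when admissions are rare
        self.store = {}                 # name -> (popularity, last_access_time)

    def _replacement_value(self, popularity, last_access):
        # Assumed combination: popular and recently accessed content has a high value.
        return popularity / (1.0 + time.time() - last_access)

    def admit(self, name, popularity):
        if popularity < self.threshold:
            self.threshold *= self.mult_factor   # rejected content -> loosen the threshold
            return False
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda n: self._replacement_value(*self.store[n]))
            del self.store[victim]               # evict the smallest replacement value
            self.threshold += self.add_step      # cache full -> tighten the threshold
        self.store[name] = (popularity, time.time())
        return True

cache = DPDRCache(capacity=2)
for name, pop in [("a", 0.5), ("b", 3.0), ("c", 5.0), ("d", 9.0)]:
    print(name, cache.admit(name, pop))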

5.
In information-centric networking, in-network caching has the potential to improve network efficiency and content distribution performance by satisfying user requests with cached content rather than downloading the requested content from remote sources. In this respect, users who request, download, and keep content may be able to contribute to in-network caching by sharing their downloaded content with other users in the same network domain (i.e., user-assisted in-network caching). In this paper, we examine various aspects of user-assisted in-network caching with the aim of efficiently utilizing user resources for in-network caching. Through simulations, we first show that user-assisted in-network caching has attractive features, such as self-scalable caching, a near-optimal cache hit ratio (the ratio achievable when content is fully cached by in-network caching) based on stable caching, and performance improvements over in-network caching. We then examine caching strategies for user-assisted in-network caching: three strategies based on a centralized server that maintains all content availability information and tells each user what to cache, and three strategies based on each user's own content availability information. We show that the caching strategy affects the distribution of upload overhead across users and the number of cache hits in each segment. One interesting observation is that, even with a small storage space (i.e., 0.1% of the content size per user), the centralized and distributed approaches improve the cache hit ratio by 50% and 45%, respectively. With an overall view of caching information, the centralized approach can achieve a higher cache hit ratio than the distributed approach. Based on this observation, we discuss a distributed approach with a larger view of caching information and, through simulations, confirm that a larger view leads to a higher cache hit ratio. Another interesting observation is that the random distributed strategy yields performance comparable to that of more complex strategies.

6.
The explosive growth of smart devices has led to the evolution of multimedia data (mainly video) services in mobile networks, prompting many mobile network operators (MNOs) to deploy novel network architectures and develop effective economic policies. Mobile data offloading through smart devices (SDs) by exploiting device-to-device (D2D) communications can significantly reduce network congestion and enhance quality of service at a lower cost, which is a key requirement of upcoming 5G networks. This low-cost solution can attract mobile users to participate in the offloading process by paying them proper incentives, benefiting MNOs as well as mobile users. Moreover, D2D communication promises to be one of the prominent services of 5G networks. In this paper, we present a combinatorial optimal reverse auction (CORA) mechanism that efficiently selects and utilizes available high-end SDs, on the basis of their available resources, for offloading purposes; it also decides the optimal pricing policy for the selected SDs. The efficiency of CORA is evaluated in terms of bandwidth and storage demand. Subsequently, we implement caching in SDs, the eNodeB (eNB), and the evolved packet core (EPC) with the help of our novel video dissemination cache update algorithm to address the latency issues in the offloading process. Because of its high popularity, we focus specifically on video data. Simulation results show that the proposed SD caching scenario reduces the delay by up to 75% and the combined cache (CC) scenario reduces the delay by 15–57%. We also scrutinize the video downloading time of the various cache scenarios (i.e., the CC, EPC cache, eNB cache, and SD cache scenarios).
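
Below is a toy Python sketch of the reverse-auction selection step, assuming a simple greedy cheapest-per-unit rule with pay-as-bid pricing; the actual CORA mechanism is combinatorial and its pricing policy is not reproduced here.

def select_devices(bids, bandwidth_demand):
    """bids: list of (device_id, offered_bandwidth, asking_price)."""
    ranked = sorted(bids, key=lambda b: b[2] / b[1])   # cheapest price per unit first
    chosen, total_bw, total_cost = [], 0.0, 0.0
    for device_id, bw, price in ranked:
        if total_bw >= bandwidth_demand:
            break
        chosen.append(device_id)
        total_bw += bw
        total_cost += price     # pay-as-bid here; real auctions often pay a critical price instead
    return chosen, total_bw, total_cost

winners, bw, cost = select_devices(
    [("sd1", 5.0, 2.0), ("sd2", 3.0, 0.9), ("sd3", 4.0, 2.4)], bandwidth_demand=7.0)
print(winners, bw, cost)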

7.
A key feature distinguishing information-centric networking from traditional IP networks is that router nodes are equipped with caches, so popular content can be pushed to the network edge closer to users, reducing the delay of retrieving content. To improve the utilization of cache space and avoid redundant placement of content at neighboring nodes, this paper proposes a hierarchical cooperative content placement and request routing algorithm. In a hierarchical access network, a content replica is placed at only one layer of the hierarchy, chosen according to the content's access popularity, which reduces redundant copies. In addition, a binary label tuple records the placement information of each content and is used to dynamically guide request forwarding. Simulation results show that, compared with existing cache placement algorithms, the hierarchical cooperative content caching and request routing mechanism increases the diversity of content stored in the network and reduces the delay of user content access.
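
Below is a rough Python sketch of the single-layer placement and label-guided routing idea. The number of layers, the popularity bands, and the label format are illustrative assumptions.

LAYERS = 3   # layer 0 = edge (closest to users), layer 2 = closest to the core

def choose_layer(popularity_rank, catalogue_size):
    """Place the most popular band at the edge, the next band one layer up, and so on."""
    band = catalogue_size / LAYERS
    return min(int(popularity_rank // band), LAYERS - 1)

placement_labels = {}   # content name -> (cached_flag, layer): the per-content label
for rank, name in enumerate(["news-1", "series-7", "archive-99"]):
    placement_labels[name] = (True, choose_layer(rank, catalogue_size=3))

def route_request(name):
    cached, layer = placement_labels.get(name, (False, None))
    return f"forward toward layer {layer}" if cached else "forward to the origin server"

print(placement_labels)
print(route_request("series-7"))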

8.
IPTV services have been widely deployed by network operators around the world over the last few years. However, real-time streaming of Linear TV and Video-on-Demand (VoD) offerings, especially in High Definition quality, still puts a high burden on the network and content servers in terms of bandwidth, Quality-of-Service, processing performance, and scalability when hundreds of thousands of users have to be supported simultaneously. While multicast delivery can cope with some of these problems for Linear TV services, unicast VoD services cannot take advantage of it, and demand for on-demand content in particular is expected to grow substantially in the future. With the introduction of Content Download Services (CDSs), operators have the option to provide IPTV services in innovative ways: they can provide high-quality video services to users with limited access bandwidth, offload streaming requests for blockbuster movies at peak times from the VoD servers, or deliver personalized advertisements in advance to the user's end device for insertion into a live program event. The Digital Video Broadcast (DVB) Project has recently finalized its CDS specification within its IPTV specification efforts. DVB CDS supports push and pull delivery models with unicast, multicast, and peer-to-peer distribution in order to enable various business models and use cases. In this work we introduce the specified technology and map it to example use cases and business models.

9.
Content-Centric Networking (CCN) is a new network architecture, but its model of caching data packets in routers exposes users to the risk of privacy leakage. A privacy-preserving dynamic regional cooperative caching strategy is proposed. The strategy is designed to improve network performance while protecting user privacy. From the perspective of information entropy, it aims to increase the uncertainty of users' request information: content is stored at cache nodes with a high concealment coefficient, making it harder for an attacker to identify the requesting user, and content is stored through dynamic regional cooperation, increasing the uncertainty about which cache a content object belongs to and making it harder for an attacker to locate data packets. Simulation results show that the strategy reduces content request delay and improves the cache hit ratio.
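
Below is a small Python sketch of the information-entropy intuition. Modelling a node's "concealment coefficient" as the entropy of its per-user hit distribution is an assumption made here purely for illustration.

import math

def attribution_entropy(hit_counts):
    """Shannon entropy (bits) of the per-user hit distribution at one cache node."""
    total = sum(hit_counts)
    probs = [c / total for c in hit_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def pick_cache_node(candidate_nodes):
    """candidate_nodes: {node_id: per-user hit counts}; prefer the most 'anonymous' node."""
    return max(candidate_nodes, key=lambda n: attribution_entropy(candidate_nodes[n]))

nodes = {"edge-A": [90, 5, 5], "edge-B": [30, 35, 35]}
print(pick_cache_node(nodes))   # edge-B: its hits are spread more evenly across users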

10.
With the explosion of multimedia content, Internet bandwidth is wasted by repeated downloads of popular content. Recently, Content-Centric Networking (CCN), also called Information-Centric Networking (ICN), has been proposed for efficient content delivery. In this paper, we investigate the performance of in-network caching for Named Data Networking (NDN), a promising CCN proposal. First, we examine the inefficiency of LRU (Least Recently Used), the basic cache replacement policy in NDN. We then formulate the optimal content assignment for two in-network caching policies. One is Single-Path Caching, which allows a request to be served only by routers along the path between the requester and a content source. The other is Network-Wide Caching, which enables a request to be served by any router in the network holding the requested content. For both policies, we use a Mixed Integer Program to optimize the content assignment models by considering link cost, cache size, and content popularity. We also consider the impact of link capacity and routing on the optimal content assignment. Our evaluation and analysis present the performance bounds of in-network caching in NDN in terms of practical constraints such as link cost, link capacity, and cache size.
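
Below is a toy brute-force stand-in for the content-assignment optimization that the paper formulates as a Mixed Integer Program; the topology, costs, and popularities are made-up numbers for illustration.

from itertools import combinations

contents = {"A": 0.6, "B": 0.3, "C": 0.1}   # content -> request popularity
hop_cost = {"router": 1, "origin": 3}        # cost of serving from each location
CACHE_SIZE = 1                               # the router can hold one content object

def expected_cost(cached_set):
    """Popularity-weighted link cost when 'cached_set' is stored at the router."""
    return sum(pop * (hop_cost["router"] if name in cached_set else hop_cost["origin"])
               for name, pop in contents.items())

best = min((frozenset(c) for c in combinations(contents, CACHE_SIZE)), key=expected_cost)
print(set(best), expected_cost(best))        # caching the most popular object minimizes the cost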

11.
In the Content-Centric Networking (CCN) architecture, popular content can be cached in intermediate network devices while being delivered, and subsequent requests for the cached content can be handled efficiently by the caches. How to design in-network caching is therefore important for reducing both the traffic load and the delivery delay. In this paper, we propose Prefix-based Popularity Prediction (PPP), a caching framework for efficient caching in CCN. PPP assigns a lifetime (in a cache) to the prefix of the name of each cached object based on its access history (or popularity), which is represented as a Prefix-Tree (PT). We demonstrate PPP's ability to predict content popularity in CCN using both traces and simulations. The evaluation results show that PPP achieves more cache hits and less traffic load than traditional caching algorithms (i.e., LRU and LFU). Moreover, its performance gain increases with user mobility.
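
Below is a minimal Python sketch of per-prefix bookkeeping with a popularity-dependent cache lifetime. The prefix tree follows the abstract, while the lifetime formula and constants are assumptions.

class PrefixTree:
    def __init__(self):
        self.children, self.count = {}, 0

    def record(self, name):
        """name like '/video/show1/seg3' -> increment the count of every prefix on the path."""
        node = self
        for component in name.strip("/").split("/"):
            node = node.children.setdefault(component, PrefixTree())
            node.count += 1

    def prefix_count(self, prefix):
        node = self
        for component in prefix.strip("/").split("/"):
            node = node.children.get(component)
            if node is None:
                return 0
        return node.count

BASE_LIFETIME = 10.0   # seconds; assumed scaling, not the paper's exact rule

def cache_lifetime(tree, prefix):
    return BASE_LIFETIME * (1 + tree.prefix_count(prefix))

tree = PrefixTree()
for _ in range(5):
    tree.record("/video/show1/seg3")
tree.record("/video/show2/seg1")
print(cache_lifetime(tree, "/video/show1"))   # the popular prefix earns a longer lifetime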

12.
Most existing content caching algorithms require an accurate estimate of content popularity, which is difficult to obtain in a dynamic mobile network environment. A cloud-to-edge hybrid caching strategy for content-heterogeneous 5G wireless networks is proposed, in which the content caching location is optimized and may be the origin content server, cloud units (CUs), or base stations (BSs). Lyapunov optimization is used to handle the NP-hard cache control problem and the tight coupling between CU and BS caching decisions, which helps capture the hierarchy of the network architecture and the relationship between CU caches and BS caches. The new hierarchical architecture improves content caching performance by opportunistically exploiting both cloud-centric and edge-centric caching, supporting high average requested content data rates. With Lyapunov optimization, bounded service delay can be achieved for all arrival rates within a constant fraction of the capacity region, enabling fast retrieval of cached data. Simulation results show that the proposed caching strategy offers clear advantages in average end-to-end service delay and load reduction ratio.

13.
With the rapid development of WiFi and 3G/4G, people increasingly view videos on mobile devices. These devices are ubiquitous but have little memory for caching videos, so in contrast to traditional computers they aggravate the network pressure on content providers. Previous studies use CDNs to solve this problem, but the static leasing mechanism, in which the rented space cannot be adjusted dynamically, drives up the operational cost and fits poorly with dynamic video delivery. In our study, based on a thorough analysis of user behavior on Tencent Video, a popular Chinese online video sharing platform, we identify two key user behaviors. First, many users in the same region tend to watch the same videos. Second, the popularity distribution of videos follows the Pareto principle: the top 20% most popular videos account for 80% of all video traffic. To exploit these observations, we propose and implement a novel cloud- and peer-assisted video-on-demand system (CPA-VoD). In the system, we group users in the same region into a peer swarm, within which users can provide videos to other users by sharing their cached videos. Besides, we cache the 10% most popular videos in cloud servers to further alleviate the network pressure; cloud servers are chosen because the rented space can be adjusted dynamically. According to an evaluation on a real dataset from Tencent Video, CPA-VoD greatly alleviates the network pressure and the operational cost, with only 20.9% of the traffic served by the content provider.
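
Below is a short Python sketch of the resulting serving order, assuming the 10% cloud-cache cut-off from the abstract and a simple lookup chain (cloud, then peer swarm, then content provider).

CLOUD_FRACTION = 0.10

def build_cloud_cache(videos_by_popularity):
    """videos_by_popularity: list of video ids, most popular first."""
    top_n = max(1, int(len(videos_by_popularity) * CLOUD_FRACTION))
    return set(videos_by_popularity[:top_n])

def serve(video_id, cloud_cache, peer_swarm_index):
    if video_id in cloud_cache:
        return "cloud"
    if video_id in peer_swarm_index:   # some user in the region has it cached
        return "peer swarm"
    return "content provider"          # last resort: origin traffic

catalogue = [f"v{i}" for i in range(100)]   # v0 is the most popular video
cloud = build_cloud_cache(catalogue)
peers = {"v42", "v57"}
print(serve("v3", cloud, peers), serve("v42", cloud, peers), serve("v90", cloud, peers))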

14.
In this paper, we propose a novel proxy caching scheme for video-on-demand (VoD) services. Our approach is based on the observation that streaming video users searching for some specific content or scene pay most attention to the initial delay, while a small shift of the starting point is acceptable. We present results from subjective VoD tests that relate waiting time and starting-point deviation to user satisfaction. Based on this relationship, as well as the dynamically changing popularity of video segments, we propose an efficient segment-based caching algorithm that maximizes user satisfaction by trading off initial delay against starting-point deviation. Our caching scheme supports interactive video cassette recorder (VCR) functionality and enables cache replacement at a much finer granularity than previously proposed segment-based approaches. Our experimental results show significantly improved user satisfaction compared to conventional caching schemes.

15.
Named data networking (NDN) is a new Internet architecture that replaces today's focus on where (addresses and hosts) with what (the content that users and applications care about). One of NDN's prominent advantages is scalable and efficient content distribution, owing to its native support of caching and multicast in the network. However, at the last hop to wireless users, often the WiFi link, the current NDN implementation still treats the communication as multiple unicast sessions, which causes duplicate packets and wastes bandwidth when multiple users request the same popular content. WiFi's built-in broadcast mechanism can alleviate this problem, but it suffers from packet loss because there is no MAC-layer acknowledgement as in unicast. In this paper, we develop a new NDN-based cross-layer approach called NLB for efficient and scalable live video streaming over wireless LANs. The core ideas are: using WiFi's broadcast channel to deliver content from the access point to the users, a leader-based mechanism to suppress duplicate requests from users, and receiver-driven rate control and loss recovery. The design is implemented and evaluated on a physical testbed comprising one software AP and 20 Raspberry Pi-based WiFi clients. While NDN with multiple unicast sessions or plain broadcast can support no more than ten concurrent viewers of a 1 Mbps streaming video, NDN plus NLB supports all 20 viewers and can likely support many more.

16.
In space information networks (SIN), satellite nodes have limited cache capacity, and the high-speed movement of satellites makes inter-satellite links time-varying, which increases the content access delay experienced by ground users. To address this, a cache decision strategy for space information networks based on an artificial bee colony algorithm, called satellite improved artificial bee colony (SIABC), is proposed. First, exploiting the periodic and predictable link handovers of low-Earth-orbit satellite nodes, a network partitioning model is built to partition the satellite nodes of the space information network. On this basis, a regional cooperative caching model is established so that satellite nodes within each network region selectively cache content of different popularity while cooperating with the other nodes in the region, so that highly popular content is cached at the network edge. Simulation results show that, compared with existing caching mechanisms, the strategy significantly improves the average cache hit ratio and noticeably reduces users' content access delay.

17.
Caching can effectively save network bandwidth and reduce users' access delay. In a distributed caching system, a problem worth studying is how to deploy caches dynamically according to user requests so that the system benefit is maximized. This paper describes the cache placement problem and builds an optimization model, and on this basis proposes a new cooperative cache placement algorithm. The algorithm uses object popularity, network distance, the requests received by each node, and the cache distribution information of the system to make placement decisions for the nodes along the request path in turn, and it distributes the computation across the nodes on the request path. Simulation results show that the algorithm achieves a higher cache hit ratio and lower access delay than the LRU and Graph algorithms.
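
Below is a rough Python sketch of the per-node, on-path placement decision. The score combining popularity with the distance to the current copy, and the eviction rule, are illustrative assumptions rather than the paper's exact model.

def placement_score(popularity, dist_to_origin):
    # Caching a popular object far from its origin saves the most traffic.
    return popularity * dist_to_origin

def decide_on_path(obj, path_nodes):
    """path_nodes: ordered edge -> core; each has 'id', 'cache', 'capacity', 'dist_to_origin'."""
    for node in path_nodes:   # the edge node gets the first chance to cache
        score = placement_score(obj["popularity"], node["dist_to_origin"])
        cache = node["cache"]
        if len(cache) < node["capacity"]:
            cache[obj["name"]] = score
            return node["id"]
        weakest = min(cache, key=cache.get)
        if score > cache[weakest]:
            del cache[weakest]
            cache[obj["name"]] = score
            return node["id"]
    return None   # no node on the path was willing to cache the object

path = [{"id": "edge", "cache": {"old": 0.2}, "capacity": 1, "dist_to_origin": 4},
        {"id": "core", "cache": {}, "capacity": 1, "dist_to_origin": 2}]
print(decide_on_path({"name": "hot-video", "popularity": 0.9}, path))   # edge evicts 'old'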

18.
Recently, many Video-on-Demand (VoD) service providers have tried to attract as many users as possible by offering multi-bitrate video streaming services with differentiated quality levels. Much research focuses on layered video coding (e.g., scalable video coding, SVC), but SVC is not widely used in the VoD industry. The alternative, multi-version video, can be realized by online transcoding or by pre-stored multi-version videos. Online transcoding is a CPU-intensive and costly task, so it is not suitable for large-scale VoD applications. In this paper, we study how to improve caching efficiency based on pre-stored multi-version videos. We leverage the sharing probability among different versions of the same video and propose a multi-version shared caching (MSC) method that maximizes the benefit of the caching proxy. If the desired version is not in the cache but a higher neighboring version is, MSC temporarily streams the higher version to the user. In this way, MSC makes full use of the caching resources to improve the cache hit ratio and decrease users' average waiting time. Simulation results show that MSC outperforms the alternatives in both cache hit ratio and average waiting time.
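
Below is a minimal Python sketch of the version-fallback lookup; the bitrate ladder and the cache layout are illustrative assumptions.

BITRATES = [400, 800, 1500, 3000]   # kbps versions assumed to exist for each video

def lookup(cache, video_id, wanted_kbps):
    """cache: set of (video_id, bitrate) pairs. Returns (source, served bitrate)."""
    if (video_id, wanted_kbps) in cache:
        return "cache", wanted_kbps
    for higher in sorted(b for b in BITRATES if b > wanted_kbps):
        if (video_id, higher) in cache:
            return "cache (higher version, served temporarily)", higher
    return "origin server", wanted_kbps

proxy_cache = {("movie-1", 1500), ("movie-1", 3000)}
print(lookup(proxy_cache, "movie-1", 800))   # hit via the 1500 kbps version
print(lookup(proxy_cache, "movie-2", 800))   # true miss -> fetch from the origin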

19.
Exploiting Regularities in Web Traffic Patterns for Cache Replacement
Cohen, Kaplan. Algorithmica, 2002, 33(3): 300–334
Caching web pages at proxies and in web servers' memories can greatly enhance performance. Proxy caching is known to reduce network load, and both proxy and server caching can significantly decrease latency. Web caching problems have different properties than traditional operating-systems caching, and cache replacement can benefit from recognizing and exploiting these differences. We address two aspects of the predictability of traffic patterns: the overall load experienced by large proxy and web servers, and the distinct access patterns of individual pages. We formalize the notion of "cache load" under various replacement policies, including LRU and LFU, and demonstrate that the trace of a large proxy server exhibits regular load. Predictable load allows for improved design, analysis, and experimental evaluation of replacement policies. We provide a simple and (near-)optimal replacement policy for the case where each page request has an associated distribution function on the next request time of the page. Without the predictable-load assumption, no such online policy is possible, and it is known that even obtaining an offline optimum is hard. For experiments, predictable load enables comparing and evaluating cache replacement policies using partial traces, containing requests made to only a subset of the pages. Our results are based on a simpler caching model which we call the interval caching model. We relate traditional and interval caching policies under predictable load, and derive (near-)optimal replacement policies from their optimal interval caching counterparts.

20.
To address the problem of reasonably placing the content of different services in content-centric networking (CCN), a hybrid caching mechanism based on service classification and node partitioning is proposed. Differentiated caching strategies are designed according to the characteristics of each service type. For streaming video-on-demand traffic, popularity-based push-pull caching is used so that the content is stored in order at the network edge; for non-streaming shared content, hash-based explicit caching is used to place a single replica in the core network. Simulation results show that, compared with classical algorithms, the mechanism improves the cache hit ratio and the hop reduction ratio and lowers the average request delay.
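
Below is a small Python sketch of the service-classified placement: popularity-gated edge caching for VoD segments and hash-based single-replica placement in the core for shared content. The node names, the popularity threshold, and the choice of hash function are assumptions.

import hashlib

CORE_NODES = ["core-1", "core-2", "core-3"]
VOD_POPULARITY_THRESHOLD = 0.5

def core_node_for(name):
    """Hash-based explicit placement: the same name always maps to exactly one core node."""
    digest = int(hashlib.sha1(name.encode()).hexdigest(), 16)
    return CORE_NODES[digest % len(CORE_NODES)]

def place(name, service_class, popularity, edge_node):
    if service_class == "vod":
        # Push-pull caching at the edge: only sufficiently popular segments are kept.
        return edge_node if popularity >= VOD_POPULARITY_THRESHOLD else None
    return core_node_for(name)   # shared content: a single replica in the core network

print(place("/vod/show1/seg1", "vod", 0.8, "edge-1"))
print(place("/files/dataset.tar", "shared", 0.2, "edge-1"))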
