Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper investigates the optimal replication of data objects at hierarchical and transparent Web proxies. By transparent, we mean the proxies are capable of intercepting users' requests and forwarding the requests to a higher-level proxy if the requested data are not present in their local cache. Two cases of data replication at proxies are studied: 1) proxies having unlimited storage capacities and 2) proxies having limited storage capacities. For the former case, an efficient algorithm for computing the optimal result is proposed. For the latter case, we prove the problem is NP-hard and propose two heuristic algorithms. Extensive simulations have been conducted; the results demonstrate significant performance gains from the proposed data replication algorithms and show that they outperform the standard Web caching algorithm (the LRU threshold method).

2.
Proxy caching is a key technique for reducing transmission cost in on-demand multimedia streaming. The effectiveness of current caching schemes, however, is limited by insufficient storage space and weak cooperation among proxies and their clients, particularly given the high bandwidth demands of media objects. In this paper, we propose COPACC, a cooperative proxy-and-client caching system that addresses these deficiencies. This approach combines the advantages of both proxy caching and peer-to-peer client communications: it leverages client-side caching to amplify the aggregate cache space and relies on dedicated proxies to coordinate the communications effectively. We propose a comprehensive suite of distributed protocols to facilitate the interactions among the different network entities in COPACC, including a smart and cost-effective cache indexing, searching, and verifying scheme. Furthermore, we develop an efficient cache allocation algorithm for distributing video segments among the proxies and clients. The algorithm not only minimizes the aggregate transmission cost of the whole system but also accommodates heterogeneous computation and storage constraints of proxies and clients. We have extensively evaluated the performance of COPACC under various network and end-system configurations. The results demonstrate that it achieves remarkably lower transmission cost than pure proxy-based caching with limited storage space, while being much more robust than a pure peer-to-peer communication system in the presence of node failures. Meanwhile, its computation and control overheads are both kept at low levels.

3.
Many geographically distributed proxies are increasingly used for collaborative Web caching to improve performance. In hashing-based collaborative Web caching, response times can suffer for URL requests hashed into geographically distant or overloaded proxies. In this paper, we present and evaluate a latency-sensitive hashing scheme for collaborative Web caching that takes into account latency delays due to both geographical distance and dynamic load conditions. Each URL request is first hashed into an anchor hash bucket, with each bucket mapping to one of the proxies. Then, a number of nearby hash buckets are examined to select the proxy with the smallest latency delay to the browser. Trace-driven simulations are conducted to evaluate the performance of this latency-sensitive hashing. The results show that (1) in the presence of load imbalance due to skew in request origination or hot-spot references, latency-sensitive hashing effectively balances the load by hashing into geographically distributed proxies, and (2) when the overall system is lightly loaded, it effectively reduces latency delays by directing requests to geographically closer proxies.
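A minimal Python sketch of the anchor-bucket selection described above; the window size, the per-proxy data layout, and the delay estimate (distance plus a load penalty) are illustrative assumptions rather than the paper's exact formulation.

```python
import hashlib

def bucket_of(url: str, num_buckets: int) -> int:
    """Hash a URL into one of num_buckets anchor buckets."""
    digest = hashlib.md5(url.encode()).hexdigest()
    return int(digest, 16) % num_buckets

def pick_proxy(url, bucket_to_proxy, proxies, window=3):
    """Examine the anchor bucket plus (window - 1) nearby buckets and choose the
    mapped proxy with the smallest estimated delay (assumed delay model)."""
    num_buckets = len(bucket_to_proxy)
    anchor = bucket_of(url, num_buckets)
    candidates = {bucket_to_proxy[(anchor + i) % num_buckets] for i in range(window)}

    def est_delay(p):
        return proxies[p]["distance_ms"] + proxies[p]["load"] * proxies[p]["per_request_ms"]

    return min(candidates, key=est_delay)

# Toy setup: two proxies behind an 8-bucket hash map.
proxies = {
    "p1": {"distance_ms": 10.0, "load": 100, "per_request_ms": 0.5},  # close but busy
    "p2": {"distance_ms": 40.0, "load": 5,   "per_request_ms": 0.5},  # far but idle
}
bucket_to_proxy = ["p1", "p2"] * 4
print(pick_proxy("http://example.com/a.html", bucket_to_proxy, proxies))  # -> "p2"
```

When the system is lightly loaded the load penalty is negligible and the closer proxy wins, matching the second observation in the abstract.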

4.
Data caching is a popular technique that improves data accessibility in wired or wireless networks. In mobile ad hoc networks, however, the gains in access latency and cache hit ratio may diminish because of the mobility and limited cache space of mobile hosts (MHs). In this paper, an improved cooperative caching scheme called group-based cooperative caching (GCC) is proposed to generalize and enhance the performance of most group-based caching schemes. GCC allows MHs and their neighbors to form a group and to periodically exchange a bitmap data directory, which is used by the proposed algorithms for data discovery, cache placement, and cache replacement. The goal is to reduce the access latency of data requests and to use the available caching space among MH groups efficiently. Two optimization techniques are also developed for GCC to reduce computation and communication overheads: the first compresses the directories into an aggregate bitmap, and the second employs multi-point relays in a forwarding-node selection scheme that reduces the number of broadcast messages inside the group. Our simulation results show that the optimized GCC outperforms existing cooperative caching schemes in terms of cache hit ratio, access latency, and average hop count.
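A rough sketch of the bitmap-directory exchange that GCC relies on; the item-ID space, aggregation by bitwise OR, and the lookup order (local cache, then group directory, then remote) are assumptions made for illustration.

```python
def make_bitmap(cached_ids, universe_size):
    """Encode a host's cache contents as a bitmap over the data-item ID space."""
    bits = 0
    for i in cached_ids:
        assert 0 <= i < universe_size
        bits |= 1 << i
    return bits

def aggregate(bitmaps):
    """Compress the group directory into a single aggregate bitmap (assumed: bitwise OR)."""
    agg = 0
    for b in bitmaps:
        agg |= b
    return agg

def discover(item_id, local_bitmap, group_bitmap):
    """Data discovery: local cache first, then the group, otherwise fetch remotely."""
    if local_bitmap >> item_id & 1:
        return "local"
    if group_bitmap >> item_id & 1:
        return "group"
    return "remote"

# Toy group of three mobile hosts caching items out of a universe of 16 IDs.
members = [make_bitmap(ids, 16) for ids in ([1, 3], [3, 7], [8])]
group = aggregate(members)
print(discover(7, members[0], group))   # -> "group"
print(discover(12, members[0], group))  # -> "remote"
```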

5.
Hash routing reduces cache misses for a cluster of Web proxies by eliminating the duplication of cache contents. In this paper, we investigate optimizing hash routing performance by dynamically adapting object and DNS allocations to the traffic pattern. An analytical model is developed for hash routing that takes into consideration the original request distribution, the object allocation strategy, the speeds of the proxies, and the cache hit ratios. Based on this model, the optimal hash routing problem is studied. The analytical results are applied to the design of two adaptive hash routing schemes: ADA-OBJ optimizes object allocation under a static client configuration, and ADA-OBJ/DNS optimizes both object and DNS allocations under a dynamic client configuration. Trace-driven simulation experiments have been conducted to evaluate the performance of the proposed schemes. The results show that they significantly outperform the intuitive static hash routing scheme based only on the speeds of the proxies.

6.
Traditional caching techniques are not well suited to caching video streaming objects in heterogeneous networking environments. The popularity of mobile devices in such environments has made access to Internet content commonplace. To support different mobile devices in heterogeneous networking environments, a transcoding proxy is used to transcode streaming videos into different versions according to clients' requests. In this paper, we propose a weighted cache replacement strategy for video streaming objects in heterogeneous networking environments. A new caching algorithm with a static weighted transcoding graph and a dynamic caching relation tree is introduced. The proposed algorithm is compared with the LRU, LFU, CP, and PF cache algorithms in terms of hit ratio, byte hit ratio, and average transmission delay. Experimental results show that the proposed algorithm outperforms these traditional algorithms.

7.
In this paper, the problem of caching continuous media data in a (main) memory and disk caching system is addressed. Caching schemes can significantly reduce the load on the network as well as on the servers, and retrieving documents from the cache requires a short response time. In interval-level caching algorithms, an interval of data between two adjacent streams is the basic caching entity. We design a novel algorithm, referred to as the variable bit rate caching (VBRC) algorithm, which belongs to the interval-level caching algorithms. The proposed VBRC algorithm can be used for memory caching or disk caching, and it can handle variable as well as constant retrieval bandwidth. In designing the VBRC algorithm, we propose strategies for reducing the number of switching operations, which can cause discontinuity in data retrieval. We also propose a just-in-time scheme for resource allocation in VBRC and show that its caching performance is significantly improved compared with the reservation scheme adopted in the resource-based caching (RBC) algorithm. Our simulation study compares generalized interval caching, RBC, and VBRC on several influencing factors, such as cache space size, cache I/O bandwidth, request arrival rate, and the percentage of requests for large documents, with respect to the byte hit ratio and the number of switching operations. The simulation results confirm our analysis.
Bharadwaj Veeravalli. URL: http://cnds.ece.nus.edu.sg
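As a rough illustration of interval-level caching (closer to the classic generalized interval caching idea than to the full VBRC algorithm), the Python sketch below treats the gap between two adjacent streams of the same video as the caching unit and admits the smallest intervals first; the data model and the constant bit rate are assumptions.

```python
def intervals(start_times):
    """Intervals between adjacent streams of one object, as (gap, leader, follower).
    Start times are in seconds; the gap is assumed proportional to the data that
    must stay cached so the trailing stream can be fed from the leading one."""
    ts = sorted(start_times)
    return [(ts[i + 1] - ts[i], ts[i], ts[i + 1]) for i in range(len(ts) - 1)]

def admit(all_streams, cache_capacity, bit_rate=1.0):
    """Interval-level admission: smallest intervals first, until the cache is full."""
    candidates = []
    for obj, starts in all_streams.items():
        for gap, lead, follow in intervals(starts):
            candidates.append((gap * bit_rate, obj, lead, follow))
    cached, used = [], 0.0
    for size, obj, lead, follow in sorted(candidates):
        if used + size <= cache_capacity:
            cached.append((obj, lead, follow))
            used += size
    return cached

# Two videos with concurrent streams; the cache can hold 90 units of data.
streams = {"movieA": [0, 30, 100], "movieB": [10, 25]}
print(admit(streams, cache_capacity=90))
```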

8.
Nowadays, server-side Web caching has become an important technique for reducing user-perceived latency (UPL). In large-scale multimedia systems, many Web proxies connected to a multimedia server can cache the most popular multimedia objects and respond to requests for them. Multimedia objects have particular characteristics, e.g., strict QoS requirements, so even efficient conventional caching strategies based on cache hit ratio, designed for non-multimedia objects, run into problems when dealing with multimedia objects. If we consider additional proxy resources besides cache space, say bandwidth, we can readily observe that high hit ratios may degrade overall system performance. In this paper, we propose a novel placement model for networked multimedia systems, referred to as the Hk/T model, which considers the combined influence of arrival rate, size, and playback time when selecting the objects to be cached. Based on this model, we propose a Web caching algorithm, named the ART-Greedy algorithm, which balances the load among the proxies and achieves the minimum average response time (ART) for requests. Our experimental results demonstrate that the ART-Greedy algorithm significantly outperforms the popular and commonly used LFU (Least Frequently Used) algorithm, and achieves better performance than the byte-hit algorithm when system utilization is medium or high.

9.
A Cooperative Web Caching System Based on Centralized Management (total citations: 10; self-citations: 2; citations by others: 10)
Sharing Web documents cached at different proxies is an important way to reduce traffic and relieve network bottlenecks. Based on an analysis of the existing Internet Cache Protocol (ICP), a new cooperative Web caching system (CMCS) is proposed, analyzed, and compared. By spreading HTTP requests evenly across the proxies in the system, the heavy inter-proxy communication overhead and the resulting processing burden are eliminated. In a dynamically changing network environment, the proxies are effectively organized to handle documents coming from servers. The scheme also avoids the situation in earlier systems where each proxy holds a large amount of redundant content and the proxies' contents converge toward one another.

10.
刘伟  陈振 《计算机应用研究》2021,38(9):2628-2634
Combining edge caching with streaming media delivery can effectively improve video service quality. To reduce the edge-resource rental cost of video content providers, a joint optimization strategy for video caching, transcoding, and transmission is proposed. First, taking into account the costs of caching, transcoding, edge transmission, and cloud transmission, an integer programming model is formulated with the objective of minimizing the total rental cost, and the problem is proved to be NP-complete. Second, changes in video popularity are estimated from historical request counts, and popular videos are cached. Finally, based on each video's caching state, the lowest-cost way of responding to a user's request is selected. Simulation experiments show that, compared with existing strategies, the proposed strategy improves the request hit ratio and effectively reduces the content provider's resource rental cost.
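A small Python sketch of the final step (choosing the cheapest way to serve a request given the current cache state); the cost figures and the set of response options (edge cache hit, edge transcode from a cached higher version, cloud fetch) are assumptions used only to illustrate the selection rule, not the paper's integer-programming model.

```python
def cheapest_response(request_version, cached_versions, cost):
    """Pick the lowest-cost way to serve a request, given what the edge has cached.
    Options (assumed): serve the exact cached version, transcode a cached higher
    version at the edge, or fall back to the cloud."""
    options = []
    if request_version in cached_versions:
        options.append(("edge-hit", cost["edge_tx"]))
    for v in cached_versions:
        if v > request_version:  # a higher version can be transcoded down (assumed)
            options.append((f"edge-transcode-from-{v}", cost["edge_tx"] + cost["transcode"]))
    options.append(("cloud", cost["cloud_tx"]))
    return min(options, key=lambda o: o[1])

cost = {"edge_tx": 1.0, "transcode": 0.6, "cloud_tx": 3.0}
print(cheapest_response(request_version=2, cached_versions={4}, cost=cost))
# -> ('edge-transcode-from-4', 1.6): cheaper than fetching version 2 from the cloud
```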

11.
For an ISP (Internet Service Provider) that has deployed P2P caches in more than one AS (autonomous system), cooperative caching, which makes the caches cooperate with each other, can save more of the cost of carrying P2P traffic than independent caching. However, existing cooperative caching algorithms use only object popularity to decide which objects should be cached; the cost on intra-ISP links, which has a great impact on the benefits of cooperative caching, is not considered. In this paper, we first model the cooperative caching problem as an NP-complete problem, based on our analysis of the cost of serving requests when both object popularity and the cost on intra-ISP links are taken into account. We then propose a novel cooperative caching algorithm named cLGV (Cooperative, Lowest Global Value). The cLGV algorithm uses a new concept, the global value, to estimate the benefit of caching or replacing an object in the cooperative caching system; the global value of each object is evaluated according to not only the object's popularity in each AS but also the cost on intra-ISP links among ASs. Results of simulations driven by both synthetic and real traces indicate that cLGV saves at least 23% more of the cost of carrying P2P traffic than existing cooperative caching algorithms.
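A hedged Python sketch of the global-value bookkeeping described above; the abstract does not give the exact value function, so the formula below (per-AS request rate weighted by the intra-ISP link cost avoided when serving that AS from the cache, against an assumed external baseline cost) is an assumption chosen only to illustrate the evict-lowest-global-value rule.

```python
def global_value(obj, popularity, link_cost, cache_as):
    """Assumed value model: request rate from each AS, weighted by the cost saved
    when the object is served from the cache located in `cache_as` instead of
    being carried over the (assumed) more expensive external path."""
    value = 0.0
    for as_id, req_rate in popularity[obj].items():
        saved = link_cost["external"] - link_cost["internal"][(as_id, cache_as)]
        value += req_rate * max(saved, 0.0)
    return value

def evict_candidate(cached_objects, popularity, link_cost, cache_as):
    """cLGV-style replacement: the object with the lowest global value goes first."""
    return min(cached_objects, key=lambda o: global_value(o, popularity, link_cost, cache_as))

popularity = {"x": {"AS1": 10, "AS2": 1}, "y": {"AS1": 2, "AS2": 8}}
link_cost = {"external": 5.0,
             "internal": {("AS1", "AS1"): 0.0, ("AS2", "AS1"): 2.0,
                          ("AS1", "AS2"): 2.0, ("AS2", "AS2"): 0.0}}
print(evict_candidate({"x", "y"}, popularity, link_cost, cache_as="AS1"))  # -> "y"
```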

12.
《Computer Networks》2007,51(10):2753-2770
Distributed Denial of Service (DDoS) attacks remain a daunting challenge for Internet service providers. Previous work on countering these attacks has focused primarily on attacks at a single server location and the associated network infrastructure. Increasingly, however, high-volume sites are served via content distribution networks (CDNs). In this paper, we propose two mechanisms to withstand and deter DDoS attacks on CDN-hosted Web sites and on the CDN infrastructure. First, we present a novel CDN request routing algorithm which allows CDN proxies to effectively distinguish attacks from requests issued by actual users. The proposed scheme, based on a keyed hash function, can significantly improve the resilience of CDNs to DDoS attacks; in particular, the resilience of a CDN consisting of n proxies becomes O(n²) with the proposed approach, compared to a site hosted by a single server. We present performance numbers from a controlled test environment to show that the proposed approach is effective. Second, we introduce novel site allocation algorithms based on the well-established theory of binary codes. The proposed allocation algorithm guarantees an upper bound on the level of service outage of a CDN-hosted site even when a DoS attack on another site on the same CDN has been successful. Together, our schemes significantly improve the resilience of Web sites hosted by CDNs and complement other work on countering DoS.
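The keyed-hash routing idea can be sketched roughly as below: each client is pinned to one proxy per site through a keyed hash, so requests arriving at the "wrong" proxy can be flagged and an attacker cannot steer traffic at will. The choice of HMAC-SHA-256, the hash input (client address plus site name), and the mapping to a proxy index are illustrative assumptions, not the paper's exact construction.

```python
import hmac
import hashlib

def assign_proxy(client_ip: str, site: str, key: bytes, num_proxies: int) -> int:
    """Keyed-hash request routing: map (client, site) to a proxy index.
    Without the secret key, an attacker cannot predict or choose which proxy
    will serve a given bot, limiting how much attack traffic any proxy sees."""
    mac = hmac.new(key, f"{client_ip}|{site}".encode(), hashlib.sha256).digest()
    return int.from_bytes(mac[:8], "big") % num_proxies

key = b"cdn-secret"  # hypothetical shared secret among the CDN proxies
print(assign_proxy("203.0.113.7", "www.example.com", key, num_proxies=16))
print(assign_proxy("203.0.113.8", "www.example.com", key, num_proxies=16))
```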

13.
《Computer Networks》2007,51(11):2917-2937
The distributed partitioning of autonomous, self-aware nodes into cooperative groups, within which scarce resources can be effectively shared for the benefit of the group, is increasingly emerging as a hallmark of many newly proposed overlay and peer-to-peer applications. Distributed caching protocols, in which group members cooperate to satisfy local requests for objects, are a canonical example of such applications. In recent work of ours we identified mistreatment as a potentially serious problem for nodes participating in such cooperative caching arrangements. Mistreatment materializes when a node's access cost for fetching objects worsens as a result of cooperation. To that end, we outlined an emulation-based framework for the development of mistreatment-resilient distributed selfish caching schemes. Under this framework, a node opts to participate in the group only if its individual access cost is less than the one achieved in isolation. In this paper, we argue against the use of such static "all or nothing" approaches, which force an individual node either to join or not to join a cooperative group. Instead, we advocate a smoother approach, whereby the level of cooperation is tied to the benefit that a node derives from joining a group. To that end, we propose a distributed and easily deployable feedback-control scheme which mitigates mistreatment. Under our proposed adaptive scheme, a node independently emulates its performance as if it were acting in a greedy local manner and then adapts its caching policy in the direction of reducing its measured access cost below its emulated greedy local cost. Using control-theoretic analysis, we show that our proposed scheme converges to the minimal access cost and indeed outperforms any static scheme. We also show that our scheme results in insignificant degradation of the performance of the caching group under typical operating scenarios.
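A very small Python sketch of the feedback loop described above: the node keeps its emulated "greedy local" access cost as a reference, measures its actual cost while cooperating, and nudges its level of cooperation down when the measured cost exceeds the reference. The cooperation knob and the proportional gain are assumptions; the paper's actual controller and caching-policy parameterization are not reproduced here.

```python
def update_cooperation(coop, measured_cost, emulated_greedy_cost, gain=0.1):
    """One control step: reduce cooperation when the node is being mistreated
    (measured cost above the emulated stand-alone cost), increase it otherwise.
    `coop` in [0, 1] is an assumed knob, e.g. the fraction of cache space the
    node devotes to objects requested by the rest of the group."""
    error = emulated_greedy_cost - measured_cost   # positive => cooperation pays off
    coop = coop + gain * error
    return min(1.0, max(0.0, coop))

# Toy trace: cooperation initially hurts this node, then starts helping.
coop = 0.5
for measured, emulated in [(1.4, 1.0), (1.3, 1.0), (0.9, 1.0), (0.8, 1.0)]:
    coop = update_cooperation(coop, measured, emulated)
    print(round(coop, 3))
```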

14.
To cope with the sharp increase in data throughput in fifth-generation wireless networks, mobile edge caching has become an effective solution: by storing network content on edge devices, it relieves the burden on backhaul links and the core network and shortens service latency. Most edge caching research to date has focused on optimizing cooperative content caching while neglecting the efficiency of content delivery. This work studies cooperative content edge caching and wireless bandwidth allocation in ultra-dense networks. The total similarity between base stations is computed from cosine similarity and Gaussian similarity, and the small base stations in the network are grouped according to this total similarity. The joint caching and wireless bandwidth allocation problem is modeled as a long-term mixed-integer nonlinear program (LT-MINLP) and then transformed into a constrained Markov decision process. Using the deep deterministic policy gradient (DDPG) model, a deep reinforcement learning-based algorithm for cooperative content edge caching and bandwidth allocation, CBDDPG, is proposed. The proposed base station grouping scheme increases the opportunities for file sharing between base stations, and the caching scheme of CBDDPG exploits DDPG's twin-network mechanism to better capture users' request patterns and optimize cache deployment. CBDDPG is compared experimentally with three baseline algorithms (RBDDPG, LCCS, and CB-TS); the results show that the proposed scheme effectively improves the content cache hit ratio, reduces content delivery latency, and improves the user experience.
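The grouping step can be illustrated with a short Python sketch: cosine similarity over the stations' request-count vectors is combined with a Gaussian similarity over their distance, and stations whose total similarity to a group exceeds a threshold are grouped together. The weighting, the Gaussian width, and the greedy grouping procedure are assumptions; the DDPG caching and bandwidth allocation part is omitted.

```python
import math

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def gaussian_sim(pos_a, pos_b, sigma=200.0):
    d2 = (pos_a[0] - pos_b[0]) ** 2 + (pos_a[1] - pos_b[1]) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

def total_sim(sta_a, sta_b, w=0.5):
    """Assumed combination: weighted sum of content similarity and spatial similarity."""
    return (w * cosine_sim(sta_a["requests"], sta_b["requests"])
            + (1 - w) * gaussian_sim(sta_a["pos"], sta_b["pos"]))

def group_stations(stations, threshold=0.7):
    """Greedy grouping (assumed): join the first group whose seed is similar enough."""
    groups = []
    for s in stations:
        for g in groups:
            if total_sim(s, g[0]) >= threshold:
                g.append(s)
                break
        else:
            groups.append([s])
    return groups

stations = [
    {"id": 1, "pos": (0, 0),     "requests": [9, 1, 0, 5]},
    {"id": 2, "pos": (50, 60),   "requests": [8, 2, 1, 4]},
    {"id": 3, "pos": (900, 900), "requests": [0, 7, 6, 0]},
]
print([[s["id"] for s in g] for g in group_stations(stations)])  # -> [[1, 2], [3]]
```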

15.
Joint bandwidth and power allocation for a multi-radio access (MRA) system in a heterogeneous wireless access environment is studied. Since both the number of users being served by the system and the wireless channel state are time-varying, the optimal resource allocation is no longer a static optimum and changes with the varying network state. Moreover, distributed resource allocation algorithms that require iterative updating and signaling interactions cannot converge in negligible time, so it is unrealistic to assume that the number of active users and the wireless channel state remain unchanged during the iterations. In this paper, we propose an adaptive joint bandwidth and power allocation algorithm based on a novel iteration step-size selection method, which can adapt to the varying network state and accelerate the convergence rate. A distributed solution is also designed for implementing the adaptive joint resource allocation. Numerical results show that the proposed algorithm can not only track the varying optimal resource allocation much more quickly than a traditional algorithm with a fixed iteration step size, but can also reduce users' data transmission time and increase system throughput.

16.
With the wide availability of high-speed network access, we are experiencing high-quality streaming media delivery over the Internet. The emergence of ubiquitous computing enables mobile users to access the Internet with their laptops, PDAs, or even cell phones. When nomadic users connect to the network via wireless links or phone lines, high-quality video transfer can be problematic due to long delays or a size mismatch between the application display and the screen. Our proposed solution to this problem is to give network proxies transcoding capability and hence provide different, appropriate video quality for different network environments. The proxies in our transcoding-enabled caching (TeC) system perform transcoding as well as caching for efficient rich-media delivery to heterogeneous network users. This design choice allows us to perform content adaptation at the network edges. We propose three different TeC caching strategies, describe each algorithm, and discuss its merits and shortcomings. We also study how the user access pattern affects the performance of the TeC caching algorithms and compare them with other approaches. We evaluate TeC performance by conducting two types of simulation: the first uses synthesized traces, while the other uses real traces derived from an enterprise media server's logs. The results indicate that, compared with traditional network caches and with only marginal transcoding load, TeC improves cache effectiveness, decreases user-perceived latency, and reduces traffic between the proxy and the origin content server.

17.
In a localized routing algorithm, each node makes forwarding decisions solely based on the positions of itself, its neighbors, and its destination. In the distance-, progress-, and direction-based approaches reported in the literature, when node A wants to send or forward message m to destination node D, it forwards m to its neighbor C that is closest to D (has the best progress toward D, or whose direction is closest to the direction of D, respectively) among all neighbors of A. The same procedure is repeated until D, if possible, is eventually reached. The algorithms are referred to as GEDIR, MFR, and DIR when a common failure criterion is introduced: the algorithm stops if the best choice for the current node is the node from which the message came. We propose 2-hop GEDIR, DIR, and MFR methods, in which node A selects the best candidate node C among its 1-hop and 2-hop neighbors according to the corresponding criterion and forwards m to its best 1-hop neighbor among the joint neighbors of A and C. We then propose flooding GEDIR and MFR and hybrid single-path/flooding GEDIR and MFR methods, which are the first localized algorithms (other than full flooding) to guarantee message delivery (in a collision-free environment). We show that the directional routing methods are not loop-free, while the GEDIR- and MFR-based methods are inherently loop-free. The simulation experiments, with static random graphs, show that GEDIR and MFR have similar success rates, which are low for low-degree graphs and high for high-degree ones. When successful, their hop counts are close to the performance of the shortest-path algorithm. Hybrid single-path/flooding GEDIR and MFR methods have low communication overheads. The results are also confirmed by experiments with moving nodes and a MAC layer.
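A minimal Python sketch of the GEDIR-style greedy step (distance-based forwarding with the common failure criterion described above); a simple 2-D coordinate model is assumed, and the 2-hop and flooding variants are not shown.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def gedir_next_hop(current, previous, neighbors, dest, positions):
    """Forward to the neighbor closest to the destination (GEDIR criterion).
    Failure criterion: stop if the best choice is the node the message came from."""
    if not neighbors:
        return None
    best = min(neighbors, key=lambda n: dist(positions[n], positions[dest]))
    if best == previous:
        return None   # routing failure; the flooding variants would take over here
    return best

positions = {"A": (0, 0), "B": (4, 1), "C": (2, 5), "D": (9, 2)}
print(gedir_next_hop("A", None, ["B", "C"], "D", positions))  # -> "B"
```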

18.
Proxy caching of large multimedia objects at the edge of the Internet has become increasingly important for reducing network latency. For a large media object, such as a two-hour video, treating the whole media file as a single object for caching is not appropriate. In this paper, we study three media segmentation approaches to proxy caching: fixed, pyramid, and skyscraper. Blocks of a media stream are grouped into segments for cache management. The cache admission and replacement policies attach different caching priorities to individual segments, taking into account the access frequency of the media object and the segment's distance from the start of the media. These caching policies give preferential treatment to the beginning segments; as such, most user requests can be quickly played back from the proxy servers without delay. Event-driven simulations are conducted to evaluate the segmentation approaches and compare them with whole-media caching. The results show that: 1) compared with whole-media caching, segmentation-based caching is more effective not only in increasing the byte-hit ratio but also in lowering the fraction of requests that require a delayed start; 2) pyramid segmentation, where segment size increases exponentially, is the best segmentation approach; and 3) segmentation-based caching is especially advantageous when the cache size is limited, when the set of hot media objects changes over time, when the media file size is large, and when there are a large number of distinct media objects.
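A small Python sketch of pyramid segmentation as described above (segment size grows exponentially with segment index); the base block count of 1 and the growth factor of 2 are assumed parameters.

```python
def pyramid_segment(block_index, base_blocks=1, factor=2):
    """Map a media block index to its pyramid segment number.
    Segment i holds base_blocks * factor**i blocks, so early blocks fall into
    small segments that are cheap to keep cached (favouring fast startup)."""
    seg, seg_start, seg_size = 0, 0, base_blocks
    while block_index >= seg_start + seg_size:
        seg_start += seg_size
        seg_size *= factor
        seg += 1
    return seg

print([pyramid_segment(b) for b in range(16)])
# -> [0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 4]
```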

19.
In the Content-Centric Networking (CCN) architecture, popular content can be cached in intermediate network devices while being delivered, and subsequent requests for the cached content can be handled efficiently by those caches. How to design in-network caching is therefore important for reducing both the traffic load and the delivery delay. In this paper, we propose Prefix-based Popularity Prediction (PPP), a caching framework for efficient caching in CCN. PPP assigns a lifetime (in a cache) to the prefix of the name of each cached object based on its access history (or popularity), which is represented as a Prefix-Tree (PT). We demonstrate PPP's ability to predict content popularity in CCN using both traces and simulations. The evaluation results show that PPP achieves more cache hits and less traffic load than traditional caching algorithms (i.e., LRU and LFU), and its performance gain increases with highly mobile users.
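The prefix-tree bookkeeping behind PPP can be sketched roughly in Python as below: access counts are accumulated per name-prefix component, and a cached object's lifetime is set in proportion to how popular its prefix has been. The tree layout, the base lifetime, and the linear popularity-to-lifetime mapping are assumptions, not the paper's exact scheme.

```python
class PrefixTree:
    """Counts accesses per name-prefix component, e.g. /video/movies/clip1."""
    def __init__(self):
        self.children = {}
        self.count = 0

    def record(self, name: str):
        node = self
        for comp in name.strip("/").split("/"):
            node = node.children.setdefault(comp, PrefixTree())
            node.count += 1

    def prefix_count(self, prefix: str) -> int:
        node = self
        for comp in prefix.strip("/").split("/"):
            if comp not in node.children:
                return 0
            node = node.children[comp]
        return node.count

def lifetime(tree, name, base=10.0, per_hit=1.0):
    """Assumed mapping: objects under more popular prefixes stay cached longer."""
    prefix = "/".join(name.strip("/").split("/")[:-1])   # popularity of the parent prefix
    return base + per_hit * tree.prefix_count(prefix)

tree = PrefixTree()
for n in ["/video/movies/a", "/video/movies/b", "/video/news/c"]:
    tree.record(n)
print(lifetime(tree, "/video/movies/d"))  # -> 12.0 (base 10 + 2 hits on /video/movies)
```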

20.
Due to recent advances in network, storage, and data compression technologies, video-on-demand (VOD) service has become economically feasible. It is a challenging task to design a video storage server that can efficiently service a large number of concurrent requests on demand. One approach to accomplishing this task is to reduce the I/O demand on the VOD server through data- and resource-sharing techniques. One form of data sharing is the stream-merging approach proposed in [5]. In this paper, we formalize a static version of the stream-merging problem, derive an upper bound on the I/O demand of static stream merging, and propose efficient heuristic algorithms for both the static and dynamic versions of the stream-merging problem.

