Similar Documents
20 similar documents found (search time: 31 ms)
1.
《Computer Networks》2007,51(13):3673-3692
Network congestion remains one of the main barriers to the continuing success of the Internet. For Web users, congestion manifests itself in unacceptably long response times. One possible remedy to the latency problem is to use caching at the client, at the proxy server, or within the Internet. However, Web documents are becoming increasingly dynamic (i.e., have short lifetimes), which limits the potential benefit of caching. The performance of a Web caching system can be dramatically increased by integrating document prefetching (a.k.a. “proactive caching”) into its design. Although prefetching reduces the response time of a requested document, it also increases the network load, as some documents will be unnecessarily prefetched (due to the imprecision in the prediction algorithm). In this study, we analyze the confluence of the two effects through a tractable mathematical model that enables us to establish the conditions under which prefetching reduces the average response time of a requested document. The model accommodates both passive client and proxy caching along with prefetching. Our analysis is used to dynamically compute the “optimal” number of documents to prefetch in the client’s subsequent idle (think) period. In general, this optimal number is determined through a simple numerical procedure. Closed-form expressions for this optimal number are obtained for special yet important cases. We discuss how our analytical results can be used to optimally adapt the parameters of an actual prefetching system. Simulations are used to validate our analysis and study the interactions among various system parameters.
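To make the prefetch-count decision concrete, here is a minimal Python sketch (not the paper's model; all parameter names and values are hypothetical) that greedily prefetches the most probable documents until the expected marginal saving no longer outweighs the added-load penalty:

```python
def choose_prefetch_count(access_probs, fetch_time, load_penalty):
    """Greedy sketch: prefetch the most likely documents first and stop
    when the expected saving of one more prefetch drops below its cost.

    access_probs -- predicted probability that each candidate document
                    is requested next
    fetch_time   -- response time saved when a prefetched doc is requested
    load_penalty -- response-time cost of the extra network load incurred
                    by prefetching one document
    """
    k = 0
    for p in sorted(access_probs, reverse=True):
        # Expected benefit of prefetching this document vs. its cost.
        if p * fetch_time <= load_penalty:
            break
        k += 1
    return k

# Example: a prediction engine rates five candidates for the next idle period.
print(choose_prefetch_count([0.6, 0.3, 0.15, 0.05, 0.01],
                            fetch_time=2.0, load_penalty=0.2))  # -> 3
```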

2.
Barford, Paul; Bestavros, Azer; Bradley, Adam; Crovella, Mark 《World Wide Web》1999,2(1-2):15-28
Understanding the nature of the workloads and system demands created by users of the World Wide Web is crucial to properly designing and provisioning Web services. Previous measurements of Web client workloads have been shown to exhibit a number of characteristic features; however, it is not clear how those features may be changing with time. In this study we compare two measurements of Web client workloads separated in time by three years, both captured from the same computing facility at Boston University. The older dataset, obtained in 1995, is well known in the research literature and has been the basis for a wide variety of studies. The newer dataset was captured in 1998 and is comparable in size to the older dataset. The new dataset has the drawback that the collection of users measured may no longer be representative of general Web users; however, it has the advantage that many comparisons can be drawn more clearly than would be possible using a new, different source of measurement. Our results fall into two categories. First, we compare the statistical and distributional properties of Web requests across the two datasets. This serves to reinforce and deepen our understanding of the characteristic statistical properties of Web client requests. We find that the kinds of distributions that best describe document sizes have not changed between 1995 and 1998, although specific values of the distributional parameters are different. Second, we explore how the observed differences in the properties of Web client requests, particularly the popularity and temporal locality properties, affect the potential for Web file caching in the network. We find that for the computing facility represented by our traces between 1995 and 1998, (1) the benefits of using size-based caching policies have diminished; and (2) the potential for caching requested files in the network has declined. This revised version was published online in August 2006 with corrections to the Cover Date.
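As a rough illustration of comparing distributional parameters across two traces, the following sketch uses a Hill-style estimate of the Pareto tail index on synthetic document sizes; the numbers are stand-ins, not values from the Boston University traces:

```python
import numpy as np

def pareto_tail_index(samples, xmin):
    """Hill-style maximum-likelihood estimate of the Pareto tail index
    alpha for values above xmin (a smaller alpha means a heavier tail)."""
    tail = np.asarray([s for s in samples if s >= xmin], dtype=float)
    return len(tail) / np.sum(np.log(tail / xmin))

# Synthetic stand-ins: same distribution family, different parameters.
rng = np.random.default_rng(0)
sizes_a = (rng.pareto(1.1, 10_000) + 1) * 1_000   # heavier tail
sizes_b = (rng.pareto(1.4, 10_000) + 1) * 1_000   # lighter tail
print(pareto_tail_index(sizes_a, 1_000))   # ~1.1
print(pareto_tail_index(sizes_b, 1_000))   # ~1.4
```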

3.
A large-scale, distributed video-on-demand (VOD) system allows geographically dispersed residential and business users to access video services, such as movies and other multimedia programs or documents, on demand from video servers on a high-speed network. In this paper, we first demonstrate through analysis and simulation the need for a hierarchical architecture for the VOD distribution network. We then assume a hierarchical architecture, which fits the existing tree topology used in today's cable TV (CATV) hybrid fiber/coaxial (HFC) distribution networks. We develop a model for the video program placement, configuration, and performance evaluation of such systems. Our approach takes into account the user behavior, the fact that user requests are transmitted over a shared channel before reaching the video server containing the requested program, the fact that the input/output (I/O) capacity of the video servers is the costliest resource, and finally the communication cost. In addition, our model employs batching of user requests at the video servers. We study the effect of batching on the performance of the video servers and on the quality of service (QoS) delivered to the user, and we contribute dynamic batching policies that improve server utilization and user QoS while lowering server cost. The evaluation is based on an extensive analytical and simulation study.
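As a toy illustration of request batching at a video server (one plausible dynamic rule, maximum queue length, rather than the specific policies contributed in the paper):

```python
import collections

class BatchingServer:
    """Toy dynamic batching: requests queue per video; whenever a server
    stream frees up, the video with the longest waiting queue is served,
    and its whole batch shares the single stream (multicast)."""

    def __init__(self):
        self.queues = collections.defaultdict(list)  # video -> waiting users

    def request(self, video, user):
        self.queues[video].append(user)

    def next_batch(self):
        if not self.queues:
            return None, []
        # MQL policy: pick the video with the most waiting requests.
        video = max(self.queues, key=lambda v: len(self.queues[v]))
        batch = self.queues.pop(video)
        return video, batch

srv = BatchingServer()
for user, video in [(1, "A"), (2, "B"), (3, "A"), (4, "A")]:
    srv.request(video, user)
print(srv.next_batch())  # ('A', [1, 3, 4]): three users share one stream
```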

4.
In information-centric networking, in-network caching has the potential to improve network efficiency and content distribution performance by satisfying user requests with cached content rather than downloading the requested content from remote sources. In this respect, users who request, download, and keep content may be able to contribute to in-network caching by sharing their downloaded content with other users in the same network domain (i.e., user-assisted in-network caching). In this paper, we examine various aspects of user-assisted in-network caching in the hope of efficiently utilizing user resources to achieve in-network caching. Through simulations, we first show that user-assisted in-network caching has attractive features, such as self-scalable caching, a near-optimal cache hit ratio (close to what full in-network caching of the content would achieve) with stable caching, and performance improvements over in-network caching. We then examine caching strategies for user-assisted in-network caching: three based on a centralized server that maintains all content availability information and tells each user what to cache, and three based on each user's own content availability information. We first show that the caching strategy affects the distribution of upload overhead across users and the number of cache hits in each segment. One interesting observation is that, even with a small storage space (i.e., 0.1% of the content size per user), the centralized and distributed approaches improve the cache hit ratio by 50% and 45%, respectively. With an overall view of caching information, the centralized approach can achieve a higher cache hit ratio than the distributed approach. Based on this observation, we discuss a distributed approach with a broader view of caching information than the basic distributed approach and, through simulations, confirm that a broader view leads to a higher cache hit ratio. Another interesting observation is that the random distributed strategy yields performance comparable to that of more complex strategies.
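A minimal simulation sketch of the random distributed strategy mentioned above, assuming each user caches a few random segments and a request hits if any user in the domain holds the segment (all parameters are illustrative only):

```python
import random

def simulate_random_caching(num_users, num_segments, slots_per_user, requests):
    """Each user caches a few random segments of the content (the 'random
    distributed' strategy); a request hits if any user in the domain holds
    the segment, otherwise it must be fetched from the origin."""
    caches = [set(random.sample(range(num_segments), slots_per_user))
              for _ in range(num_users)]
    available = set().union(*caches)          # segments held somewhere locally
    hits = sum(1 for seg in requests if seg in available)
    return hits / len(requests)

random.seed(0)
reqs = [random.randrange(1000) for _ in range(10_000)]
print(simulate_random_caching(num_users=200, num_segments=1000,
                              slots_per_user=5, requests=reqs))
```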

5.
With the rapid growth of the World Wide Web and the popularization of smartphones, tablets, and personal computers, the number of web service users is increasing quickly. As a result, large web services require ever more disk space, and the requirement grows with the number of users. It is therefore important to design and implement a powerful network file system for large web service providers. In this paper, we present three design issues for scalable network file systems. We use a variable number of objects within a bucket to decrease internal fragmentation for small files. We also propose a free-space and access-load-balancing mechanism to balance the overall load on the bucket servers. Finally, we propose a mechanism for caching frequently accessed data to reduce total disk I/O. These mechanisms can effectively improve scalable network file system performance for large web services.
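A small sketch of one way such a free-space and access-load balancing rule could look; the weighted score and server names are assumptions, not the paper's mechanism:

```python
def pick_bucket_server(servers, w_space=0.5, w_load=0.5):
    """Toy free-space / access-load balancing rule: place a new bucket on
    the server with the best weighted score of free space and idleness.
    `servers` maps name -> (free_space_ratio, access_load_ratio)."""
    def score(name):
        free, load = servers[name]
        return w_space * free + w_load * (1.0 - load)
    return max(servers, key=score)

servers = {"fs1": (0.80, 0.90),   # lots of space, but heavily loaded
           "fs2": (0.40, 0.10),   # moderately full, nearly idle
           "fs3": (0.60, 0.40)}
print(pick_bucket_server(servers))  # fs2
```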

6.
The catch-up TV (CUTV) service allows users to watch video content that was previously broadcast live on TV channels and later placed in an online video store. Upon a request from a user to watch a recently missed episode of his/her favourite TV series, the content is streamed from the video server to the customer’s receiver device. This requires an individual flow to be set up for the duration of the video, and since it is difficult or even impossible to employ multicast streaming for this purpose (users seldom request the same episode at the same time), these flows are unicast. In this paper, we demonstrate that with the growing popularity of the CUTV service, the number of simultaneously running unicast flows on the aggregation parts of the network threatens to lead to an unwieldy increase in required bandwidth. Anticipating this problem and trying to alleviate it, network operators deploy caches in strategic places in the network. We investigate the performance of such a caching strategy and the impact of the cache size and the cache update logic. We first analyse and model the evolution of video popularity over time, based on traces we collected over 10 months. Through simulations we compare the performance of the traditional least-recently-used and least-frequently-used caching algorithms to our own algorithm. We also compare their performance with a “perfect” caching algorithm, which knows and hence does not have to estimate the video request rates. In the experimental data, we see that the video parameters of the popularity evolution law can be clustered. We therefore investigate theoretical models that can capture these clusters, and we study the impact of clustering on caching performance. Finally, some considerations on optimal cache placement are presented.
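For reference, a compact sketch of the two traditional baselines compared in the paper, LRU and LFU, evaluated on a synthetic Zipf-like request trace (the paper's own algorithm and its popularity-evolution model are not reproduced here):

```python
import collections, random

class LRUCache:
    def __init__(self, size):
        self.size, self.items = size, collections.OrderedDict()
    def access(self, key):
        hit = key in self.items
        if hit:
            self.items.move_to_end(key)
        else:
            if len(self.items) >= self.size:
                self.items.popitem(last=False)   # evict least recently used
            self.items[key] = True
        return hit

class LFUCache:
    def __init__(self, size):
        self.size, self.freq, self.items = size, collections.Counter(), set()
    def access(self, key):
        self.freq[key] += 1
        hit = key in self.items
        if not hit:
            if len(self.items) >= self.size:
                # Evict the cached item with the lowest request count.
                self.items.remove(min(self.items, key=self.freq.__getitem__))
            self.items.add(key)
        return hit

def hit_ratio(cache, trace):
    return sum(cache.access(v) for v in trace) / len(trace)

random.seed(1)
# Zipf-like trace: a few episodes are requested far more often than the rest.
trace = [min(int(random.paretovariate(1.0)), 500) for _ in range(50_000)]
print(hit_ratio(LRUCache(50), trace), hit_ratio(LFUCache(50), trace))
```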

7.
The sharing of caches among proxies is an important technique to reduce Web traffic, alleviate network bottlenecks, and improve the response time of document requests. Most existing work on cooperative caching has focused on serving misses collaboratively; very few studies have examined the effect of cooperation on document placement schemes and its potential to improve cache hit ratios and reduce latency. We propose a new document placement scheme that takes into account the contention at individual caches in order to limit the replication of documents within a cache group and increase the document hit ratio. The main idea of this new scheme is to view the aggregate disk space of the cache group as a global resource of the group, and to use the concept of cache expiration age to measure the contention at individual caches. The decision of whether to cache a document at a proxy is made collectively among the caches that already have a copy of this document. We refer to this new document placement scheme as the Expiration Age-based scheme (EA scheme). The EA scheme effectively reduces the replication of documents across the cache group, while ensuring that a copy of the document always resides in a cache where it is likely to stay for the longest time. We report our study on the potential and limits of the EA scheme using both analytic modeling and trace-based simulation. The analytical model compares and contrasts the existing (ad hoc) placement scheme of cooperative proxy caches with our new EA scheme and indicates that the EA scheme improves the effectiveness of aggregate disk usage, thereby increasing the average time duration for which documents stay in the cache. The trace-based simulations show that the EA scheme yields higher hit rates and better response times than the document placement schemes used in most existing caching proxies.
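A loose sketch of the EA placement decision as described above: a new replica is created only where it would outlive the copies the group already holds. The ages here are illustrative; the paper's protocol for estimating expiration age is more involved:

```python
def should_cache_copy(candidate_age, holders_ages):
    """Sketch of the EA idea: a cache's 'expiration age' estimates how long
    a newly admitted document would survive there. A new replica is worth
    creating only if the requesting proxy would keep the document longer
    than every cache in the group that already holds a copy."""
    return all(candidate_age > age for age in holders_ages)

# Proxy with expiration age 120 s vs. two existing holders (60 s, 90 s):
print(should_cache_copy(120, [60, 90]))   # True: cache an extra copy
print(should_cache_copy(45,  [60, 90]))   # False: rely on the group's copies
```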

8.
Geographically distributed cloud platforms enable an attractive approach to large-scale content delivery. Storage at various sites can be dynamically acquired from (and released back to) the cloud provider so as to support content caching, according to the current demands for the content from the different geographic regions. When storage is sufficiently expensive that not all content should be cached at all sites, two issues must be addressed: how requests for content should be routed to the cloud provider sites, and what policy should be used for caching content using the elastic storage resources obtained from the cloud provider. Existing approaches are typically designed for non-elastic storage, and little is known about the optimal policies for minimizing delivery costs with distributed elastic storage. In this paper, we propose an approach in which elastic storage resources are exploited using a simple dynamic caching policy, while request routing is updated periodically according to the solution of an optimization model. Use of pull-based dynamic caching, rather than push-based placement, provides robustness to unpredicted changes in request rates. We show that this robustness is provided at low cost: even with fixed request rates, the dynamic caching policy typically yields a content delivery cost within 10% of that with the optimal static placement. We compare request routing according to our optimization model to simpler baseline routing policies and find that the baseline policies can yield greatly increased delivery cost relative to optimized routing. Finally, we present a lower-cost approximate solution algorithm for our routing optimization problem that yields content delivery cost within 2.5% of the optimal solution.
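A greedy stand-in for the periodic routing step (the paper solves an optimization model; this sketch merely routes each region's demand to the cheapest site with remaining capacity, using hypothetical names and costs):

```python
def route_requests(demand, cost, capacity):
    """Greedy sketch of periodic request routing.

    demand   -- region -> request rate
    cost     -- (region, site) -> delivery cost per request
    capacity -- site -> maximum request rate it can serve
    """
    left = dict(capacity)
    routing = {}
    # Route the largest demands first so they get the widest site choice.
    for region, rate in sorted(demand.items(), key=lambda kv: -kv[1]):
        site = min((s for s in left if left[s] >= rate),
                   key=lambda s: cost[region, s])
        routing[region] = site
        left[site] -= rate
    return routing

demand = {"eu": 30, "us": 50}
cost = {("eu", "s1"): 1, ("eu", "s2"): 3, ("us", "s1"): 2, ("us", "s2"): 1}
capacity = {"s1": 60, "s2": 60}
print(route_requests(demand, cost, capacity))  # {'us': 's2', 'eu': 's1'}
```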

9.
Internet traffic volume is increasing, which causes scalability issues in content delivery. This problem can be addressed with different types of caching solutions, but the incentives of the different stakeholders to pay for these solutions are not well understood. It has been identified, however, that Internet service providers (ISPs) need to be involved in the process of cache deployment because they own the network. This work evaluates a new business model in which ISPs charge content providers (CPs) for a caching service, since CPs benefit from more efficient content distribution. We provide conditions for sustainable paid in-network caching and their numerical evaluation in order to aid strategic decision-making by CPs, ISPs, and cloud storage providers (CSPs). Although ISP caching as a paid service may not be an equilibrium, it turns out to be Pareto optimal at the right pricing, which encourages cooperation between CPs and ISPs. CSPs may choose cache-friendly physical locations for their facilities in order to provide the necessary capacity to the ISPs, but the required amounts are in all likelihood too small to be an incentive for the CSPs. ISP caching as a paid service can be an equilibrium when future benefits are considered and when the ISPs terminate caching-related improvements of service quality for clients who do not pay for caching.

10.
In this paper, a distributed telematics peer-to-peer networking system is proposed to provide an efficient and feasible service discovery mechanism for mobile users. When mobile users travel on the road, they may request services from service providers to meet their demands. Mobile users in vehicles are assumed to pass through many regions, each of which is associated with a region server. Information about all service providers in a region is stored in its region server, and each region server handles the service discovery requests from all vehicles located in its region. This paper focuses on developing a distributed peer-to-peer telematics service discovery networking system over the vehicular network environment for mobile users. To provide the communication mechanism for telematics service discovery among region servers, mobile users, and service providers, the Telematics Service Markup Language is proposed. According to clients' demands, three types of query scenario are proposed: (1) reference-point-based query, (2) continual query, and (3) route-based query. Finally, we present usage examples and a system implementation of the distributed telematics peer-to-peer system. Copyright © 2012 John Wiley & Sons, Ltd.

11.
Intelligent Web acceleration based on network performance: caching and prefetching   (Total citations: 8; self-citations: 0; citations by others: 8)
Web traffic accounts for a large share of overall network traffic. When the network bandwidth cannot be expanded, techniques are needed to use the available bandwidth efficiently and improve network performance. This paper studies intelligent Web acceleration based on network performance metrics such as RTT (round-trip time). Based on an analysis of the traffic at a Web proxy server and on measurements of network RTT, an intelligent prefetch control technique and a new cache replacement policy are proposed. Simulations of the new algorithm show that it improves the cache hit ratio. The study also shows that prefetching speeds up the response of Web services and effectively improves Web access performance without noticeably increasing the network load.
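In the spirit of this abstract, a sketch of a replacement policy that folds fetch cost (RTT) into the eviction score, GreedyDual-Size style; the score formula and all values are assumptions, not the paper's algorithm:

```python
class RTTAwareCache:
    """Sketch of a cost-aware replacement policy: documents that are
    expensive to re-fetch (high RTT) are kept preferentially, using the
    GreedyDual-Size-style score rtt / size (lowest score evicted first)."""

    def __init__(self, capacity):
        self.capacity, self.used = capacity, 0
        self.docs = {}   # url -> (score, size)

    def insert(self, url, size, rtt):
        while self.used + size > self.capacity and self.docs:
            victim = min(self.docs, key=lambda u: self.docs[u][0])
            self.used -= self.docs.pop(victim)[1]
        self.docs[url] = (rtt / size, size)
        self.used += size

cache = RTTAwareCache(capacity=150)
cache.insert("/far",  size=60, rtt=300)   # distant server: costly to re-fetch
cache.insert("/near", size=60, rtt=30)    # nearby server: cheap to re-fetch
cache.insert("/c",    size=60, rtt=120)   # forces an eviction
print(sorted(cache.docs))   # ['/c', '/far']: the cheap-to-refetch doc goes
```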

12.
Building a large and efficient hybrid peer-to-peer Internet caching system   (Total citations: 2; self-citations: 0; citations by others: 2)
Proxy hit ratios tend to decrease as the demand and supply of Web contents become more diverse. Through case studies, we quantitatively confirm this trend and observe significant document duplication among a proxy and its client browsers' caches. One reason behind this trend is that the client/server Web caching model does not support direct resource sharing among clients, leaving the Web contents and the network bandwidth among clients relatively underutilized. To address these limits and improve Web caching performance, we have extensively enhanced and deployed our browsers-aware framework, a peer-to-peer Web caching management scheme. We make the browsers and their proxy share content, exploiting the neglected but rich data locality in browsers, and reduce document duplication between the proxy and browser caches to effectively utilize the Web contents and the network bandwidth among clients. The objective of our scheme is to improve the scalability of proxy-based caching both in the number of connected clients and in the diversity of Web documents. We show that building such a caching system, with consideration of sharing contents among clients, minimizing document duplication, and achieving data integrity and communication anonymity, is not only feasible but also highly effective.
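A minimal sketch of the browsers-aware idea: the proxy indexes which browser caches hold which document and redirects misses to a peer before going to the origin. The real framework also handles data integrity and anonymity, which are omitted here:

```python
class BrowserAwareProxy:
    """Toy proxy that tracks which client browser caches hold which
    document and serves misses browser-to-browser when possible."""

    def __init__(self):
        self.index = {}          # url -> set of client ids holding it

    def register(self, client, url):
        self.index.setdefault(url, set()).add(client)

    def lookup(self, client, url):
        holders = self.index.get(url, set()) - {client}
        if holders:
            return ("peer", next(iter(holders)))   # serve from a browser cache
        self.register(client, url)                 # fetch, then remember holder
        return ("origin", None)

proxy = BrowserAwareProxy()
print(proxy.lookup("alice", "/news"))   # ('origin', None): first fetch
print(proxy.lookup("bob", "/news"))     # ('peer', 'alice'): browser-to-browser
```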

13.
With the development of computer technology and high-speed networks, video-on-demand (VoD) systems have become a reality and face enormous potential demand. Video object profiles can give users a friendly, interactive viewing environment, and a scalable video server cluster can accommodate rapid future growth in user demand. Video object segmentation and prefix caching distribute video files, in segments and according to a caching policy, across a cooperative cluster of cache servers, which helps balance the cluster's load and reduces users' startup latency. The system also adopts IP multicast to reduce network bandwidth overhead. This paper proposes a hybrid scheme that combines cooperative caching with IP multicast to deliver video objects, and describes how it works.
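One plausible arrangement of prefix caching with segmentation across a cooperative cache cluster, as a sketch only (segment sizes and server names are hypothetical):

```python
def place_segments(video_len, prefix_len, seg_len, caches):
    """Sketch of prefix caching with segmentation: the prefix is pinned on
    every cooperating cache server so playback can start immediately,
    while the remaining segments are spread round-robin across servers."""
    placement = {c: ["prefix"] for c in caches}
    n_segs = -(-(video_len - prefix_len) // seg_len)   # ceiling division
    for i in range(n_segs):
        placement[caches[i % len(caches)]].append(f"seg{i}")
    return placement

print(place_segments(video_len=3600, prefix_len=600, seg_len=600,
                     caches=["c1", "c2", "c3"]))
# c1: prefix, seg0, seg3; c2: prefix, seg1, seg4; c3: prefix, seg2
```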

14.
With the wide availability of high-speed network access, we are experiencing high quality streaming media delivery over the Internet. The emergence of ubiquitous computing enables mobile users to access the Internet with their laptops, PDAs, or even cell phones. When nomadic users connect to the network via wireless links or phone lines, high quality video transfer can be problematic due to long delays or a size mismatch between the application display and the screen. Our proposed solution to this problem is to equip network proxies with transcoding capability, and hence provide different, appropriate video quality to different network environments. The proxies in our transcoding-enabled caching (TeC) system perform transcoding as well as caching for efficient rich media delivery to heterogeneous network users. This design choice allows us to perform content adaptation at the network edges. We propose three different TeC caching strategies. We describe each algorithm and discuss its merits and shortcomings. We also study how the user access pattern affects the performance of the TeC caching algorithms and compare them with other approaches. We evaluate TeC performance by conducting two types of simulation: the first uses synthesized traces, while the other uses real traces derived from enterprise media server logs. The results indicate that, compared with traditional network caches, TeC improves cache effectiveness, decreases user-perceived latency, and reduces the traffic between the proxy and the content origin server, all with marginal transcoding load.
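A tiny sketch of a TeC-style lookup: an exact cached version is a hit, a cached higher bitrate can be transcoded down at the proxy, and otherwise the request goes to the origin (bitrate values are hypothetical):

```python
BITRATES = [1200, 800, 400]   # kbps versions, high to low (hypothetical)

def serve(cache, video, wanted):
    """TeC-style lookup sketch: exact version -> hit; cached higher
    bitrate -> transcode down at the proxy; otherwise -> origin server."""
    if (video, wanted) in cache:
        return "hit"
    for rate in BITRATES:
        if rate > wanted and (video, rate) in cache:
            return f"transcode {rate} -> {wanted}"
    return "fetch from origin"

cache = {("news", 1200)}
print(serve(cache, "news", 400))    # transcode 1200 -> 400
print(serve(cache, "news", 1200))   # hit
```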

15.
Recently, many Video-on-Demand (VoD) service providers have tried to attract as many users as possible by offering multi-bitrate video streaming services with differentiated qualities. Much research focuses on layered video coding (e.g., scalable video coding, SVC), but SVC is not widely used in the VoD industry. Another solution, multi-version videos, can be realized either by online transcoding or by pre-storing multiple versions. Online transcoding is a CPU-intensive and costly task, so it is not suitable for large-scale VoD applications. In this paper, we study how to improve caching efficiency based on pre-stored multi-version videos. We leverage the sharing probability among different versions of the same video and propose a multi-version shared caching (MSC) method to maximize the benefit of the caching proxy. If the desired version is not in the cache but its nearest higher version is, MSC temporarily streams the higher version to the user. In this way, MSC makes full use of the caching resources to improve the cache hit ratio and decrease users' average waiting time. Simulation results show that MSC outperforms the alternatives in both cache hit ratio and average waiting time.
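The MSC fallback rule sketched in Python, assuming a fixed ladder of pre-stored versions (the version numbers are illustrative):

```python
def msc_lookup(cached_versions, wanted, versions=(360, 480, 720, 1080)):
    """Sketch of the MSC rule: on a miss for the wanted version, the
    nearest cached *higher* version is streamed to the user temporarily
    instead of going straight to the origin server."""
    if wanted in cached_versions:
        return ("hit", wanted)
    higher = [v for v in versions if v > wanted and v in cached_versions]
    if higher:
        return ("shared-hit", min(higher))   # nearest higher neighbour
    return ("miss", None)

print(msc_lookup({1080, 480}, 480))   # ('hit', 480)
print(msc_lookup({1080, 480}, 720))   # ('shared-hit', 1080)
print(msc_lookup({480}, 1080))        # ('miss', None)
```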

16.
The high transfer rates and rich services of wired networks have been combined with the convenience and mobility of wireless networks. This combined wired/wireless era is driven by new technology that creates new service applications and brings changes to both users and service providers. Network integration is very important for such wired/wireless services; it is achieved through convergence between heterogeneous networks and the integration of transmission technologies across networks. In this situation, existing security and communication technologies are difficult to apply because they must work across heterogeneous networks, and network security exposes many vulnerabilities. User authentication and key management, which exist for homogeneous networks, must be adapted to work between heterogeneous networks, and establishing security technologies for heterogeneous devices is equally crucial. In this paper, we propose secure, efficient key management for a heterogeneous network environment, providing secure communication between heterogeneous network devices.

17.
Document caching and connection caching are extensively studied problems. In document caching, one has to maintain caches containing documents accessible in a network. In connection caching, one has to maintain a set of open network connections that handle data transfer. Previous work investigated these two problems separately, while in practice they occur together: in order to load a document, one has to establish a connection between network nodes if the required connection is not already open. In this paper we present the first study that integrates document and connection caching. We first consider a very basic model in which all documents have the same size and the cost of loading a document or establishing a connection is equal to 1. We present deterministic and randomized online algorithms that achieve nearly optimal competitive ratios unless the size of the connection cache is extremely small. We then consider general settings where documents have varying sizes. We investigate a FAULT model, in which the loading cost of a document is 1, as well as a BIT model, in which the loading cost equals the size of the document.
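A toy cost count for the integrated problem in the FAULT model, managing both caches with plain LRU as a simplification of the paper's algorithms:

```python
import collections

def integrated_cost(trace, doc_cache_size, conn_cache_size):
    """FAULT-model toy: a document miss costs 1, and if the connection to
    the document's server is not open, establishing it costs another 1.
    Both caches use LRU here, which is a simplification."""
    docs, conns, cost = collections.OrderedDict(), collections.OrderedDict(), 0
    for server, doc in trace:
        if (server, doc) in docs:
            docs.move_to_end((server, doc))
            continue                          # document hit: no connection needed
        cost += 1                             # load the document
        if server in conns:
            conns.move_to_end(server)
        else:
            cost += 1                         # open a new connection
            if len(conns) >= conn_cache_size:
                conns.popitem(last=False)
            conns[server] = True
        if len(docs) >= doc_cache_size:
            docs.popitem(last=False)
        docs[(server, doc)] = True
    return cost

trace = [("s1", "a"), ("s1", "b"), ("s2", "a"), ("s1", "a"), ("s1", "b")]
print(integrated_cost(trace, doc_cache_size=2, conn_cache_size=1))  # -> 8
```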

18.
We propose a novel collaborative approach for document classification, combining the knowledge of multiple users for improved organization of data such as individual document repositories or emails. To this end, we distribute locally built classification models in a network of participating users and combine the shared classifiers into more powerful meta models. In order to increase propagation efficiency, we apply a method for selecting the most discriminative model components and transmitting them to other participants. In our experiments on four large standard collections for text classification, we study the resulting tradeoffs between network cost and classification accuracy. The experimental results show that the proposed model propagation has negligible communication costs and substantially outperforms current approaches with respect to efficiency and classification quality.
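A compact sketch of the component-selection and combination steps: each participant shares only its k largest-magnitude weights, and the shared sparse models are averaged into a meta model (the vocabulary and weights are invented for illustration):

```python
import numpy as np

def top_components(weights, vocab, k):
    """Select the k most discriminative components of a locally trained
    linear text classifier (largest |weight|) for cheap propagation."""
    idx = np.argsort(-np.abs(weights))[:k]
    return {vocab[i]: float(weights[i]) for i in idx}

def meta_model(shared_models):
    """Average the weights each participant contributed for a term."""
    acc, cnt = {}, {}
    for model in shared_models:
        for term, w in model.items():
            acc[term] = acc.get(term, 0.0) + w
            cnt[term] = cnt.get(term, 0) + 1
    return {t: acc[t] / cnt[t] for t in acc}

vocab = ["cache", "soccer", "invoice", "the"]
alice = top_components(np.array([2.1, -0.1, 1.7, 0.02]), vocab, k=2)
bob   = top_components(np.array([1.5,  0.2, 2.3, 0.01]), vocab, k=2)
print(meta_model([alice, bob]))   # {'cache': 1.8, 'invoice': 2.0}
```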

19.
Free-viewpoint video is a new type of interactive multimedia service allowing users to control their viewpoint and generate new views of a dynamic scene from any perspective. The uniquely generated and displayed views are composed from two or more high-bitrate camera streams that must be delivered to the users depending on their continuously changing perspective. Due to significant network and computational resource requirements, we proposed scalable viewpoint generation and delivery schemes based on multicast forwarding and a distributed approach. Our aim was to find the optimal deployment locations of the distributed viewpoint synthesis processes in the network topology by allowing network nodes to act as proxy servers with caching and viewpoint synthesis functionalities. Moreover, a predictive multicast group management scheme was introduced in order to provide all camera views that may be requested in the near future and prevent the viewpoint synthesizer algorithm from being starved of camera streams. The obtained results showed that a traffic reduction of as much as 42% can be realized using distributed viewpoint synthesis, and that the probability of viewpoint synthesis starvation can also be significantly reduced in future free-viewpoint video services.

20.
The developments in positioning and mobile communication technology have made location-based service (LBS) applications increasingly popular. For privacy reasons, and due to a lack of trust in LBS providers, k-anonymity and l-diversity techniques have been widely used to preserve the privacy of users in distributed LBS architectures in the Internet of Things (IoT). However, in reality there are scenarios in which users' locations are identical or close to one another; in such scenarios the k locations selected by the k-anonymity technique are essentially the same, and location privacy can easily be compromised or leaked. To address this issue, we introduce location labels to classify mobile users' locations as sensitive or ordinary. We design a location-label based (LLB) algorithm that protects users' location privacy while minimizing the response time of LBS requests. We also evaluate the performance and validate the correctness of the proposed algorithm through extensive simulations.
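A sketch of one way a location-label based cloaking step could work, padding the k-anonymity set with dummy locations only at sensitive spots; the labels and padding rule are our assumptions, not the paper's LLB algorithm:

```python
import random

def llb_cloak(user_loc, nearby_locs, sensitive, k):
    """Build a k-anonymity set from *distinct* nearby locations; pad with
    dummy locations only when the user stands at a sensitive location,
    trading extra work for stronger protection exactly where it matters."""
    candidates = list({loc for loc in nearby_locs if loc != user_loc})
    need = k - 1
    chosen = random.sample(candidates, min(need, len(candidates)))
    if user_loc in sensitive:
        while len(chosen) < need:                 # pad with dummy locations
            chosen.append((user_loc[0] + random.uniform(-0.01, 0.01),
                           user_loc[1] + random.uniform(-0.01, 0.01)))
    return [user_loc] + chosen

random.seed(2)
print(llb_cloak((39.90, 116.40), [(39.90, 116.40), (39.91, 116.41)],
                sensitive={(39.90, 116.40)}, k=4))
```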
