Similar Literature
20 similar documents found; search time: 62 ms
1.
In this paper, we investigate a proxy-based integrated cache consistency and mobility management scheme for supporting client–server applications in Mobile IP systems, with the objective of minimizing the overall network traffic generated. Our cache consistency management scheme is based on a stateful strategy by which cache invalidation messages are asynchronously sent by the server to a mobile host (MH) whenever data objects cached at the MH have been updated. We use a per-user proxy to buffer invalidation messages to allow the MH to disconnect arbitrarily and to reduce the number of uplink requests when the MH is reconnected. Moreover, the user proxy takes the responsibility of mobility management to further reduce the network traffic. We investigate a design by which the MH's proxy serves as a gateway foreign agent (GFA) as in the MIP Regional Registration protocol to keep track of the address of the MH in a region, with the proxy migrating with the MH when the MH crosses a regional area. We identify the optimal regional area size under which the overall network traffic cost, due to cache consistency management, mobility management, and query requests/replies, is minimized. The integrated cache consistency and mobility management scheme is demonstrated to outperform MIPv6, no-proxy and/or no-cache schemes, as well as a decoupled scheme that optimally but separately manages mobility and service activities in Mobile IPv6 environments.
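The trade-off the abstract describes — choosing a regional area size that balances mobility signaling against invalidation delivery and query traffic — can be sketched as a one-dimensional cost minimization. The cost model and every coefficient below are illustrative assumptions, not the paper's actual formulation:

```python
def total_cost(K, mobility_rate=2.0, update_rate=1.0, query_rate=5.0):
    # Illustrative cost model: larger regions K mean fewer inter-region
    # (home) registrations but costlier intra-region routing through the
    # proxy/GFA, plus slightly longer paths for invalidations and queries.
    mobility_cost = mobility_rate * (10.0 / K + 0.5 * K)
    invalidation_cost = update_rate * (1.0 + 0.1 * K)   # proxy -> MH delivery
    query_cost = query_rate * (1.0 + 0.05 * K)          # request/reply routing
    return mobility_cost + invalidation_cost + query_cost

def optimal_region_size(candidates=range(1, 21)):
    # The "optimal regional area size" is just the argmin of the total cost.
    return min(candidates, key=total_cost)
```

With these made-up coefficients the optimum sits at a small interior value of K, mirroring the paper's finding that neither very small nor very large regions minimize total traffic.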

2.
We propose and analyze an adaptive per-user per-object cache consistency management (APPCCM) scheme for mobile data access in wireless mesh networks. APPCCM supports strong data consistency semantics through integrated cache consistency and mobility management. The objective of APPCCM is to minimize the overall network cost incurred due to data query/update processing, cache consistency management, and mobility management. In APPCCM, data objects can be adaptively cached at the mesh clients directly or at mesh routers dynamically selected by APPCCM. APPCCM is adaptive, per-user and per-object as the decision regarding where to cache a data object accessed by a mesh client is made dynamically, depending on the mesh client's mobility and data query/update characteristics, and the network's conditions. We develop analytical models for evaluating the performance of APPCCM and devise a computational procedure for dynamically calculating the overall network cost incurred. We demonstrate via both model-based analysis and simulation validation that APPCCM outperforms non-adaptive cache consistency management schemes that always cache data objects at the mesh client, or at the mesh client's current serving mesh router for mobile data access in wireless mesh networks.
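The per-user, per-object placement decision can be illustrated with a toy cost comparison; the function name, cost weights, and the two-way choice below are hypothetical stand-ins for APPCCM's actual analytical model:

```python
def choose_cache_location(query_rate, update_rate, mobility_rate,
                          client_cost=1.0, router_cost=2.0):
    # Hypothetical comparison: caching at the client serves queries
    # cheaply but pays a consistency/mobility penalty on every update
    # and handoff; caching at a selected mesh router adds a hop per
    # query but is insensitive to client movement.
    at_client = query_rate * client_cost + (update_rate + mobility_rate) * 5.0
    at_router = query_rate * router_cost + update_rate * 1.0
    return "client" if at_client <= at_router else "mesh_router"
```

A read-heavy, static client favors local caching, while a mobile client with frequent updates favors a router, which is the adaptive behavior the abstract describes.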

3.
The continuous partial match query is a partial match query whose result remains consistently in the client's memory. Conventional cache invalidation methods for mobile clients are record ID-based. However, since the partial match query uses content-based retrieval, the conventional ID-based approaches cannot efficiently manage the cache consistency of mobile clients. In this paper, we propose a predicate-based cache invalidation scheme for continuous partial match queries in mobile computing environments. We represent the cache state of a mobile client as a predicate, and also construct a cache invalidation report (CIR), which the server broadcasts to clients for cache management, with predicates. In order to reduce the amount of information that is needed for cache management, we propose a set of methods for CIR construction (in the server) and identification of invalidated data (in the client). Through experiments, we show that the predicate-based approach is very effective for the cache management of mobile clients.
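The core idea — invalidating by predicate rather than by record ID — can be sketched in a few lines; the record layout and predicate here are illustrative, not the paper's CIR encoding:

```python
def invalidate(cache, invalidation_predicate):
    # Sketch: the broadcast CIR carries a predicate describing the data
    # the server updated; the client drops every cached record that
    # satisfies it, with no need to enumerate record IDs.
    return [r for r in cache if not invalidation_predicate(r)]

cache = [{"id": 1, "price": 50}, {"id": 2, "price": 150}, {"id": 3, "price": 250}]
still_valid = invalidate(cache, lambda r: 100 <= r["price"] <= 200)
```

A single predicate can thus invalidate arbitrarily many records, which is why it suits content-based (partial match) retrieval better than ID lists.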

4.
Cache Management and Use in Proxy Server   (Total citations: 1; self-citations: 0; citations by others: 1)
Using a cache can reduce browser waiting time and lower the traffic between the proxy server and the Internet. This paper mainly introduces how a proxy manages and uses its cache.

5.
Web object caching is an important technique for reducing Web server traffic and access latency. Although the introduction of Web caching greatly relieves server load, reduces network congestion, and lowers client access latency, it also brings cache consistency problems: the Web data a client obtains may not be the latest version. By analyzing existing cache consistency policies, this paper proposes a strong cache consistency algorithm suited to the Web.

6.
High-performance Web sites rely on Web server "farms", hundreds of computers serving the same content, for scalability, reliability, and low-latency access to Internet content. Deploying these scalable farms typically requires the power of distributed or clustered file systems. Building Web server farms on file systems complements hierarchical proxy caching. Proxy caching replicates Web content throughout the Internet, thereby reducing latency from network delays and off-loading traffic from the primary servers. Web server farms scale resources at a single site, reducing latency from queuing delays. Both technologies are essential when building a high-performance infrastructure for content delivery. The authors present a cache consistency model and locking protocol customized for file systems that are used as scalable infrastructure for Web server farms. The protocol takes advantage of the Web's relaxed consistency semantics to reduce latencies and network overhead. Our hybrid approach preserves strong consistency for concurrent write sharing with time-based consistency and push caching for readers (Web servers). Using simulation, we compare our approach to the Andrew file system and the sequential consistency file system protocols we propose to replace.

7.
A Centrally Managed Web Caching System and Its Performance Analysis   (Total citations: 5; self-citations: 0; citations by others: 5)
Sharing cached files is an important way to reduce network traffic and server load. After introducing Web caching technology and ICP, the popular Web cache communication protocol, this paper proposes a centrally managed Web caching system. By dispatching each user HTTP request to a suitable cache server in the system according to a given algorithm, the system eliminates the heavy inter-server communication and cache-processing overhead inside the caching system and reduces the redundancy of cached content. Analysis shows that the centrally managed Web caching system offers higher caching efficiency, lower processing overhead, and smaller latency than a simple ICP-based caching system, and that it scales well.

8.
In mobile ad hoc peer to peer (M-P2P) networks, since nodes are highly resource constrained, it is effective to retrieve data items using a top-k query, in which data items are ordered by the score of a particular attribute and the query-issuing node acquires the data items with the k highest scores. However, when network partitioning occurs, the query-issuing node cannot connect to some nodes holding data items included in the top-k query result, and thus the accuracy of the query result decreases. To solve this problem, data replication is a promising approach. However, if each node sends back its own data items (replicas) in response to a query without considering replicas held by others, the same data items are sent back to the query-issuing node more than once through long paths, which results in an increase in traffic. In this paper, we propose a top-k query processing method that takes data replication into account in M-P2P networks. This method suppresses duplicate transmissions of the same data items through long paths. Moreover, an intermediate node stops transmitting a query message on demand.
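The duplicate-suppression step at the query issuer can be sketched as a merge that deduplicates replicas by item identifier before keeping the k best scores; the reply format is an illustrative assumption:

```python
import heapq

def merge_topk(replies, k):
    # Merge per-node replies (lists of (item_id, score) pairs),
    # suppressing duplicate items that arrive via multiple paths,
    # then keep only the k highest-scoring items.
    best = {}
    for reply in replies:
        for item_id, score in reply:
            if item_id not in best or score > best[item_id]:
                best[item_id] = score
    return heapq.nlargest(k, best.items(), key=lambda kv: kv[1])
```

In the paper's method the suppression happens earlier, at intermediate nodes, so duplicates never traverse the long paths at all; this sketch only shows the end result of deduplicated top-k merging.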

9.
The overall architecture of a wireless network system comprises the underlying network, base stations, host-side servers, and data applications. The underlying wireless network collects data; base stations aggregate the data from the underlying network and forward it to the host side; the host-side server receives, buffers, and processes the data. Data buffering is an important server-side task, handled in actual development by a data-caching module. Starting from the program architecture of this module, this paper encapsulates the architecture to achieve reusability and generality, providing convenience for the future development of such systems.

10.
Location-aware queries (LAQ) are a common query type in mobile systems. This paper proposes a cooperative cache management technique for location-aware queries (CoMA-LA), which comprises three parts: (1) merging semantically similar data items within a cache; (2) a cooperative replacement policy among neighboring caches; and (3) data consistency guarantees across caches. Simulation experiments comparing CoMA-LA with the traditional LRU algorithm and several existing cache replacement methods show that CoMA-LA effectively improves cache utilization, thereby reducing average access time and raising the query hit ratio.

11.
A Web proxy server cache can to some extent alleviate user access latency and network congestion. The proxy's cache replacement policy directly affects the cache hit ratio and hence the responsiveness of network requests. This paper extracts multiple features from Web log data through a fixed-size circular sliding window, performs cluster analysis on the log data with a Gaussian mixture model to predict which Web objects are likely to be accessed again within the window, and, in combination with the least recently used (LRU) algorithm, proposes a new Web proxy cache replacement policy based on the Gaussian mixture model. Experimental results show that, compared with the traditional replacement policies LRU, LFU, FIFO, and GDSF, the proposed policy effectively improves the request hit ratio and byte hit ratio of the Web proxy cache.
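The combination of a re-access predictor with LRU eviction can be sketched as follows; the class name is hypothetical, and a caller-supplied function stands in for the paper's trained Gaussian mixture model:

```python
from collections import OrderedDict

class PredictiveLRU:
    # Sketch: an LRU cache where a predictor marks objects likely to be
    # re-accessed within the sliding window; eviction prefers entries
    # the predictor does not protect, falling back to plain LRU.
    def __init__(self, capacity, will_be_reaccessed):
        self.capacity = capacity
        self.will_be_reaccessed = will_be_reaccessed
        self.cache = OrderedDict()  # insertion order == recency order

    def access(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        while len(self.cache) > self.capacity:
            victim = next((k for k in self.cache
                           if not self.will_be_reaccessed(k)),
                          next(iter(self.cache)))  # fallback: evict LRU entry
            del self.cache[victim]
```

Objects the model predicts as "hot" survive eviction rounds that would remove them under pure LRU, which is the mechanism behind the reported hit-ratio gains.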

12.
Design and Implementation of an HTTP Proxy Server   (Total citations: 1; self-citations: 4; citations by others: 1)
This paper implements a proxy server system, designs the proxy model, and gives the design structure diagrams of each proxy module. The system restricts users by IP address and user password, giving it a degree of secure access control; the use of caching and prefetching improves client response speed and reduces network traffic.

13.
The web is the largest distributed database deploying time-to-live-based weak consistency. Each object has a lifetime duration assigned to it by its origin server. A copy of the object fetched from its origin server is received with a maximum time-to-live (TTL) that equals its lifetime duration. In contrast, a copy obtained through a cache has a shorter TTL, since the age (elapsed time since fetched from the origin) is deducted from its lifetime duration. A request served by a cache constitutes a hit if the cache has a fresh copy of the object. Otherwise, the request is considered a miss and is propagated to another server. It is evident that the number of cache misses depends on the age of the copies the cache receives. Thus, a cache that sends requests to another cache would suffer more misses than a cache that sends requests directly to an authoritative server. In this paper, we model and analyze the effect of age on the performance of various cache configurations. We consider a low-level cache that fetches objects either from their origin servers or from other caches and analyze its miss-rate as a function of its fetching policy. We distinguish between three basic fetching policies, namely, fetching always from the origin, fetching always from the same high-level cache, and fetching from a "random" high-level cache. We explore the relationships between these policies in terms of the miss-rate achieved by the low-level cache, both on worst-case sequences and on sequences generated using particular probability distributions. Guided by web caching practice, we consider two variations of the basic policies. In the first variation the high-level cache uses pre-term refreshes to keep a copy with lower age. In the second variation the low-level cache uses an extended lifetime duration. We analyze how these variations affect the miss-rates. Our theoretical results help to understand how age may affect the miss-rate, and imply guidelines for improving the performance of web caches.
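The TTL/age relation the abstract defines can be stated directly: a copy's remaining TTL is its lifetime duration minus its age, and a cached copy is fresh only while that remainder is positive. A minimal reading of those definitions:

```python
def remaining_ttl(lifetime, age):
    # A copy fetched from the origin arrives with age 0, so its TTL is
    # the full lifetime duration; a copy obtained through a cache
    # arrives with positive age and correspondingly less time to live.
    return max(0, lifetime - age)

def is_fresh(lifetime, age):
    # A cache hit requires a fresh copy; otherwise the request is a
    # miss and is propagated to another server.
    return remaining_ttl(lifetime, age) > 0
```

This is why a cache fed by another cache (higher incoming age) suffers more misses than one fetching directly from the authoritative server, and why the paper's "pre-term refresh" and "extended lifetime" variations change the miss-rate.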

14.
The technology advance in network has accelerated the development of multimedia applications over the wired and wireless communication. To alleviate network congestion and to reduce latency and workload on multimedia servers, the concept of multimedia proxy has been proposed to cache popular contents. Caching the data objects can relieve the bandwidth demand on the external network, and reduce the average time to load a remote data object to local side. Since the effectiveness of a proxy server depends largely on cache replacement policy, various approaches are proposed in recent years. In this paper, we discuss the cache replacement policy in a multimedia transcoding proxy. Unlike the cache replacement for conventional web objects, to replace some elements with others in the cache of a transcoding proxy, we should further consider the transcoding relationship among the cached items. To maintain the transcoding relationship and to perform cache replacement, we propose in this paper the RESP framework (standing for REplacement with Shortest Path). The RESP framework contains two primary components, i.e., procedure MASP (standing for Minimum Aggregate Cost with Shortest Path) and algorithm EBR (standing for Exchange-Based Replacement). Procedure MASP maintains the transcoding relationship using a shortest path table, whereas algorithm EBR performs cache replacement according to an exchanging strategy. The experimental results show that the RESP framework can approximate the optimal cache replacement with much lower execution time for processing user queries.
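The "transcoding relationship" can be pictured as a weighted graph over versions of a media object, where serving a requested version costs the shortest transcoding path from any cached version. The version names, graph shape, and costs below are illustrative assumptions, not the paper's MASP procedure:

```python
import heapq

def cheapest_transcode(graph, cached_versions, target):
    # Dijkstra from all cached versions at once: returns the minimum
    # transcoding cost to produce `target` from anything in the cache.
    dist = {v: 0.0 for v in cached_versions}
    heap = [(0.0, v) for v in cached_versions]
    heapq.heapify(heap)
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue  # stale heap entry
        if v == target:
            return d
        for w, cost in graph.get(v, []):
            nd = d + cost
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return float("inf")  # target unreachable from the cache
```

A replacement policy in this setting must weigh not just an item's own value but how cheaply other versions can be derived from it, which is what distinguishes transcoding-proxy replacement from conventional Web-object replacement.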

15.
Research on Mobile Query Cache Processing   (Total citations: 5; self-citations: 0; citations by others: 5)
Client caching provides an effective way to improve the overall performance of client/server database systems and the availability of data on the client side. The scarcity of network resources in mobile environments makes client caching even more important. A semantic cache is a class of cache built on the semantic relationships among client queries. This paper proposes a client caching mechanism based on semantic caching, gives the organization of cache contents, and presents a merging strategy for cache items; it then discusses query processing strategies based on the semantic cache. Finally, simulation results show that this client caching mechanism can improve the performance of client/server database systems in distributed and especially mobile environments.
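The cache-item merging strategy can be sketched for the simple case where each semantic cache item is described by a one-dimensional attribute range; representing items as `(lo, hi)` intervals is an illustrative assumption, not the paper's actual content organization:

```python
def merge_segments(segments):
    # Sketch: coalesce overlapping or adjacent attribute ranges so that
    # semantically related cached query results occupy a single item,
    # reducing bookkeeping and fragmentation in the semantic cache.
    merged = []
    for lo, hi in sorted(segments):
        if merged and lo <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged
```

With merged items, answering a new query reduces to checking whether its range is contained in (or overlaps) a cached range, so only the uncovered remainder needs to be fetched over the scarce mobile link.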

16.
We argue that cache consistency mechanisms designed for stand-alone proxies do not scale to the large number of proxies in a content distribution network and are not flexible enough to allow consistency guarantees to be tailored to object needs. To meet the twin challenges of scalability and flexibility, we introduce the notion of cooperative consistency along with a mechanism, called cooperative leases, to achieve it. By supporting Δ-consistency semantics and by using a single lease for multiple proxies, cooperative leases allow the notion of leases to be applied in a flexible, scalable manner to CDNs. Further, the approach employs application-level multicast to propagate server notifications to proxies in a scalable manner. We implement our approach in the Apache Web server and the Squid proxy cache and demonstrate its efficacy using a detailed experimental evaluation. Our results show a factor of 2.5 reduction in server message overhead and a 20 percent reduction in server state space overhead when compared to original leases, albeit at an increased interproxy communication overhead.
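The source of the server-side savings can be shown with a toy message count; this hypothetical model captures only the "one lease for many proxies" idea, not the full Δ-consistency protocol:

```python
def server_messages(writes, n_proxies, cooperative):
    # With stand-alone leases the server notifies every proxy holding a
    # lease on the object; with a single cooperative lease it notifies
    # only the lease holder, which forwards the notification to the rest
    # of the group via application-level multicast.
    return writes if cooperative else writes * n_proxies
```

The server's notification load becomes independent of group size; the forwarding work does not vanish but moves onto inter-proxy links, which matches the paper's observed trade-off of lower server overhead for higher interproxy communication.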

17.
Many algorithmic efforts have been made to address technical issues in designing a streaming media caching proxy. Typical of those are segment-based caching approaches that efficiently cache large media objects in segments, which reduces the startup latency while ensuring continuous streaming. However, few systems have been practically implemented and deployed. The implementation and deployment efforts are hindered by several factors: 1) streaming of media content in complicated data formats is difficult; 2) typical streaming protocols such as RTP often run on UDP; in practice, UDP traffic is likely to be blocked by firewalls at the client side due to security considerations; and 3) coordination between caching discrete object segments and streaming continuous media data is challenging. To address these problems, we have designed and implemented a segment-based streaming media proxy, called SProxy. This proxy system has the following merits. First, SProxy leverages existing Internet infrastructure to address the flash crowd. The content server is now free of the streaming duty while hosting streaming content through a regular Web server. Thus, UDP-based streaming traffic from SProxy suffers less dropping and no blocking. Second, SProxy streams and caches media objects in small segments determined by the object popularity, causing very low startup latency and significantly reducing network traffic. Finally, prefetching techniques are used to pro-actively preload uncached segments that are likely to be used soon, thus providing continuous streaming. SProxy has been extensively tested and we show that it provides high quality streaming delivery in both local area networks and wide area networks (e.g., between Japan and the U.S.).

18.
Performance Analysis of an MPLS-Based Hierarchical Micro-Mobility Protocol*   (Total citations: 2; self-citations: 0; citations by others: 2)
Permission management in an information system is a necessary condition for data security. Building on a discussion of the role-based access control (RBAC) model, this paper analyzes the problems of user-function-based permission control and proposes an RBAC-based permission control method for information systems with a B/S architecture, achieving secure permission control with good results.

19.
The CACHERP framework leverages the relative object popularity as the sole parameter for dynamic cache size tuning. In the process it consistently maintains the prescribed cache hit ratio on the fly by deriving the popularity ratio from the current statistics. As a result the accuracy of the statistical CACHERP operation is independent of changes in the Internet traffic pattern that may switch suddenly. The contribution by the novel CACHERP framework is that by adaptively maintaining the given hit ratio it effectively reduces the end-to-end information retrieval roundtrip time (RTT) and frees more bandwidth for sharing. This bandwidth would otherwise be consumed in transferring large amounts of data from remote data sources to the proxy server before it is given to the client or requestor.
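The popularity-driven sizing idea can be sketched as follows: given current per-object access statistics, find the smallest cache that still covers the prescribed hit ratio. The function name and the independent-reference assumption are illustrative, not CACHERP's actual procedure:

```python
def min_cache_size_for_hit_ratio(popularity, target):
    # Sketch: `popularity` holds per-object access probabilities derived
    # from current statistics. Caching objects in descending popularity
    # order, return the smallest number of objects whose cumulative
    # popularity meets the prescribed hit ratio.
    ranked = sorted(popularity, reverse=True)
    covered = 0.0
    for n, p in enumerate(ranked, start=1):
        covered += p
        if covered >= target:
            return n
    return len(ranked)  # target unreachable; cache everything
```

Recomputing this from fresh statistics lets the cache size track sudden shifts in the traffic pattern while holding the hit ratio steady, which is the adaptivity the abstract claims.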

20.
This paper proposes a network management approach that uses network billing to balance the network traffic of open computer laboratories and improve campus network performance. Given the high user turnover characteristic of open computer laboratories, a network billing system based on proxy-server data collection is designed; the implementation of the network billing software is described in detail, along with its key technology, the DLPI network programming interface.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)   京ICP备09084417号