Similar Documents
20 similar documents found.
1.
A scalable low-latency cache invalidation strategy for mobile environments (total citations: 3, self: 0, other: 3)
Caching frequently accessed data items on the client side is an effective technique for improving performance in a mobile environment. Classical cache invalidation strategies are not suitable for mobile environments due to frequent disconnections and mobility of the clients. One attractive cache invalidation technique is based on invalidation reports (IRs). However, the IR-based cache invalidation solution has two major drawbacks, which have not been addressed in previous research. First, there is a long query latency associated with this solution since a client cannot answer a query until the next IR interval. Second, when the server updates a hot data item, all clients have to query the server and get the data from the server separately, which wastes a large amount of bandwidth. In this paper, we propose an IR-based cache invalidation algorithm, which can significantly reduce the query latency and efficiently utilize the broadcast bandwidth. Detailed analysis and simulation experiments are carried out to evaluate the proposed methodology. Compared to previous IR-based schemes, our scheme can significantly improve the throughput and reduce the query latency, the number of uplink requests, and the broadcast bandwidth requirements.
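To make the IR mechanism concrete, here is a minimal client-side sketch in Python (illustrative only, not this paper's algorithm; the report format, the window parameter, and the fetch_from_server callback are assumptions):

```python
import time

class ClientCache:
    """Minimal IR-based invalidation sketch: cache entries survive only if the
    periodic invalidation report (IR) shows they were not updated."""

    def __init__(self):
        self.entries = {}          # item_id -> (value, cached_at)
        self.last_ir_time = 0.0    # timestamp of the last IR processed

    def apply_invalidation_report(self, ir_timestamp, updated_ids, window):
        # If the client slept through more than one report window, the cache
        # state cannot be verified, so drop everything (classic IR behaviour).
        if ir_timestamp - self.last_ir_time > window:
            self.entries.clear()
        else:
            for item_id in updated_ids:
                self.entries.pop(item_id, None)
        self.last_ir_time = ir_timestamp

    def query(self, item_id, fetch_from_server):
        # Answer from the cache if possible, otherwise go uplink, which is the
        # costly path this paper tries to reduce.
        if item_id in self.entries:
            return self.entries[item_id][0]
        value = fetch_from_server(item_id)
        self.entries[item_id] = (value, time.time())
        return value
```

The waiting-for-the-next-report and go-uplink paths in this sketch are exactly where the latency and bandwidth problems described in the abstract arise.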

2.
The diversity of services delivered over wireless channels has increased people's desire to ubiquitously access these services from their mobile devices. However, a ubiquitous mobile computing environment faces several challenges such as scarce bandwidth, limited energy resources, and frequent disconnection of the server and mobile devices. Caching frequently accessed data is an effective technique to improve the network performance because it reduces the network congestion, the query delay, and the power consumption. When caching is used, maintaining cache consistency becomes a major challenge since data items that are updated on the server should also be updated in the cache of the mobile devices. In this paper, we propose a new cache invalidation scheme called the Selective Adaptive Sorted (SAS) cache invalidation strategy that overcomes the false invalidation problem that exists in most of the invalidation strategies found in the literature. The performance of the proposed strategy is evaluated and compared with the selective cache invalidation strategy and the updated invalidation report strategy found in the literature. Results show that a significant cost reduction can be obtained with the proposed strategy when measuring performance metrics such as delay, bandwidth, and energy.

3.
In a wireless environment, mobile clients may cache frequently accessed data to reduce the contention on the narrow bandwidth of the wireless channel. However, to minimize energy consumption, mobile clients may also often operate in a disconnected mode. As a result, the clients may miss some cache invalidation reports broadcast by a server. Thus, upon reconnection, a cache invalidation scheme must be employed to ensure the validity of the cached data. Existing techniques either require the clients to discard the cached data entirely, or require the clients to transmit uplink messages to a server. While the former eliminates the benefits of caching, the latter may lead to high energy consumption, poor channel utilization, and high costs. In this paper, we present a new cache invalidation scheme, called Broadcast-Based Group Invalidation (BGI), that retains the benefits of caching while avoiding unnecessary transmissions (which translates to energy savings, better channel utilization, and lower costs). Under BGI, a pair of invalidation reports is broadcast periodically. While the object invalidation report enables the clients to salvage as many recently cached objects as possible, the group invalidation report cuts down on false invalidation. We conduct extensive studies based on a simulation model. The simulation results show that BGI consumes less energy and is superior to existing techniques.
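A rough sketch of how a client might consume such a pair of reports, assuming the object report lists recently updated object IDs with timestamps and the group report lists per-group last-update timestamps (both formats are assumptions for illustration, not the paper's exact protocol):

```python
def process_bgi_reports(cache, report_ts, window, object_updates,
                        group_last_update, last_sync):
    """Illustrative BGI-style client processing (assumed report formats).

    cache:             dict oid -> {"group": gid, "ts": float, "value": ...}
    object_updates:    dict oid -> update timestamp, covering the last `window`
    group_last_update: dict gid -> timestamp of the group's latest update
    last_sync:         when this client last processed a report
    """
    for oid, entry in list(cache.items()):
        if report_ts - last_sync <= window:
            # Short disconnection: the object report covers the whole gap.
            if object_updates.get(oid, 0.0) > last_sync:
                del cache[oid]
        else:
            # Long disconnection: salvage an object only if its entire group
            # has not changed since the last sync, which avoids falsely
            # invalidating the whole cache.
            if group_last_update.get(entry["group"], float("inf")) > last_sync:
                del cache[oid]
    return report_ts  # the client's new last_sync
```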

4.
Caching can reduce the bandwidth requirement in a wireless computing environment as well as minimize the energy consumption of wireless portable computers. To facilitate mobile clients in ascertaining the validity of their cache content, servers periodically broadcast cache invalidation reports that contain information on the data that has been updated. However, as mobile clients may operate in a doze or even totally disconnected mode (to conserve energy), it is possible that some reports may be missed and the clients are forced to discard the entire cache content. In this paper, we reexamine and identify the basic issues in designing cache invalidation strategies. From the solutions to these issues, a large set of cache invalidation schemes can be constructed. We evaluate the performance of four representative algorithms: two of them are known algorithms (Dual-Report Cache Invalidation and Bit-Sequences), while the other two are their counterparts that exploit selective tuning (namely, Selective Dual-Report Cache Invalidation and Bit-Sequences with Bit Count). Our study shows that the two proposed schemes are not only effective in salvaging the cache content but also consume significantly less energy than their counterparts. While the Selective Dual-Report Cache Invalidation scheme performs best in most cases, it is inferior to the Bit-Sequences with Bit Count scheme under high update rates.
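For intuition, a single-level bit-vector report can be sketched as follows; the published Bit-Sequences scheme is hierarchical and timestamped, so treat this only as a simplified illustration with assumed names:

```python
def build_bit_report(item_ids, last_update_ts, prev_report_ts):
    """Simplified single-level bit report (the real Bit-Sequences scheme uses
    a hierarchy of timestamped sequences for different disconnection lengths).

    item_ids:       ordered list of all item IDs (list position = bit index)
    last_update_ts: dict item_id -> timestamp of the item's latest update
    Bit i is set iff item_ids[i] changed after the previous report.
    """
    bits = 0
    for i, item_id in enumerate(item_ids):
        if last_update_ts.get(item_id, 0.0) > prev_report_ts:
            bits |= 1 << i
    return bits

def invalidate_with_bits(cache, item_ids, bits):
    """Client side: drop cached items whose bit is set in the broadcast report."""
    for i, item_id in enumerate(item_ids):
        if (bits >> i) & 1:
            cache.pop(item_id, None)
```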

5.
Scalable cache invalidation algorithms for mobile data access (total citations: 2, self: 0, other: 2)
In this paper, we address the problem of cache invalidation in mobile and wireless client/server environments. We present cache invalidation techniques that can scale not only to a large number of mobile clients, but also to a large number of data items that can be cached in the mobile clients. We propose two scalable algorithms: the Multidimensional Bit-Sequence (MD-BS) algorithm and the Multilevel Bit-Sequence (ML-BS) algorithm. Both algorithms are based on our prior work on the Basic Bit-Sequences (BS) algorithm. Our study shows that the proposed algorithms are effective for a large number of cached data items with low update rates. The study also illustrates that the algorithms can be used with other complementary techniques to address the problem of cache invalidation for data items with varied update and access rates.

6.
The continuous partial match query is a partial match query whose result is continuously maintained in the client's memory. Conventional cache invalidation methods for mobile clients are record ID-based. However, since the partial match query uses content-based retrieval, the conventional ID-based approaches cannot efficiently manage the cache consistency of mobile clients. In this paper, we propose a predicate-based cache invalidation scheme for continuous partial match queries in mobile computing environments. We represent the cache state of a mobile client as a predicate, and also construct a cache invalidation report (CIR), which the server broadcasts to clients for cache management, with predicates. In order to reduce the amount of information needed for cache management, we propose a set of methods for CIR construction (in the server) and identification of invalidated data (in the client). Through experiments, we show that the predicate-based approach is very effective for the cache management of mobile clients.
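As a toy illustration of the predicate idea, suppose the cache state is a conjunction of attribute ranges and the CIR carries the ranges touched by updates; the client invalidates only when the predicates overlap. The interval representation and function names are assumptions, not the paper's construction:

```python
def ranges_overlap(a, b):
    """Closed-interval overlap test for two (low, high) ranges."""
    return a[0] <= b[1] and b[0] <= a[1]

def needs_invalidation(cache_predicate, cir_predicates):
    """cache_predicate: dict attribute -> (low, high) describing cached results.
    cir_predicates: list of dicts in the same form, one per updated record set.
    The cache is affected only if some CIR predicate overlaps it on every
    shared attribute (a content-based check, no record IDs involved)."""
    for update in cir_predicates:
        shared = set(cache_predicate) & set(update)
        if shared and all(ranges_overlap(cache_predicate[a], update[a])
                          for a in shared):
            return True
    return False

# Example: the cache holds results of "price BETWEEN 10 AND 20"
print(needs_invalidation({"price": (10, 20)}, [{"price": (15, 15)}]))  # True
print(needs_invalidation({"price": (10, 20)}, [{"price": (30, 40)}]))  # False
```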

7.
Maintaining the consistency of data cached on clients in a mobile environment is a key technique in mobile databases. Data broadcasting, which exploits the asymmetry of wireless communication networks, is the most practical technique for keeping mobile client data consistent with server data. However, because various parameters change across different environments and time periods, determining the size of the invalidation report time window ω is difficult. Based on the update intervals of data in mobile databases, this paper proposes an invalidation report technique based on multiple time windows.
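One possible reading of the multi-window idea, sketched with assumed structures (the window-assignment rule below is an illustration, not the paper's exact method): items are reported under a window matched to their typical update interval, so hot and cold data are covered at different granularities.

```python
def build_multiwindow_ir(now, last_update_ts, update_interval, windows):
    """Illustrative multi-window invalidation report (assumed structure).

    last_update_ts:  dict item_id -> last update timestamp
    update_interval: dict item_id -> typical time between updates for the item
    windows:         ascending list of window sizes, e.g. [60, 600, 3600] secs
    Each item is reported under the smallest window not shorter than its
    typical update interval, so frequently updated items get fine-grained
    reporting and rarely updated items are covered by a long window."""
    report = {w: [] for w in windows}
    for item_id, ts in last_update_ts.items():
        w = next((x for x in windows if x >= update_interval[item_id]),
                 windows[-1])
        if now - ts <= w:
            report[w].append(item_id)
    return report
```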

8.
Mobile location-dependent information services are gaining increasing interest in both academic and industrial communities. In these services, data values depend on their locations. Caching frequently accessed data on mobile clients can help save wireless bandwidth and improve system performance. However, since client location changes constantly, location-dependent data may become obsolete not only due to updates performed on data items but also because of client movements across the network. To the best of the authors' knowledge, previous work on cache invalidation issues focused on data updates only. This paper considers data inconsistency caused by client movements and proposes three location-dependent cache invalidation schemes. The performance of the proposed schemes is investigated by both analytical study and simulation experiments in a scenario where temporal- and location-dependent updates coexist. Both analytical and experimental results show that, in most cases, the proposed methods substantially outperform the NSI scheme, which drops the entire cache contents when a hand-off is performed.
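One way to picture location-dependent invalidation: each cached item carries a valid scope, modeled here as a set of cell IDs (an assumption for illustration), and a hand-off drops only the entries whose scope excludes the new cell, rather than the whole cache as NSI does.

```python
def on_handoff(cache, new_cell):
    """Illustrative location-dependent invalidation on hand-off.

    cache: dict item_id -> {"value": ..., "scope": set of cell IDs in which
           the cached value is valid}
    Entries valid in the new cell are kept; the rest are dropped. Compare
    with NSI, which would clear the entire cache here."""
    for item_id in list(cache):
        if new_cell not in cache[item_id]["scope"]:
            del cache[item_id]
    return cache

# Example: after moving from cell 3 to cell 4
cache = {"fuel_price": {"value": 1.48, "scope": {3, 4}},
         "bus_times":  {"value": "...", "scope": {3}}}
on_handoff(cache, new_cell=4)   # keeps fuel_price, drops bus_times
```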

9.
Internet-based vehicular ad hoc network (Ivanet) is an emerging technique that combines a wired Internet and a vehicular ad hoc network (Vanet) for developing a ubiquitous communication infrastructure and improving universal information and service accessibility. A key design optimization technique in Ivanets is to cache the frequently accessed data items in the local storage of vehicles. Since vehicles are not critically limited by storage/memory space and power consumption, selecting proper data items for caching is not very critical. Rather, an important design issue is how to keep the cached copies valid when the original data items are updated. This is essential to provide fast access to valid data for fast-moving vehicles. In this paper, we propose a cooperative cache invalidation (CCI) scheme and its enhancement (ECCI) that take advantage of the underlying location management scheme to reduce the number of broadcast operations and the corresponding query delay. We develop an analytical model for the CCI and ECCI techniques for a first-hand estimate of performance trends and critical design parameters. Then, we modify two prior cache invalidation techniques to work in Ivanets: a poll-each-read (PER) scheme and an extended asynchronous (EAS) scheme. We compare the performance of the four cache invalidation schemes as a function of query interval, cache update interval, and data size through extensive simulation. Our simulation results indicate that the proposed schemes can reduce the query delay by up to 69%, increase the cache hit rate by up to 57%, and have the lowest communication overhead compared to the prior PER and EAS schemes.

10.
In the mobile wireless computing environment of the future, a large number of users, equipped with low-powered palmtop machines, will query databases over wireless communication channels. Palmtop-based units will often be disconnected for prolonged periods of time, due to battery power saving measures; palmtops will also frequently relocate between different cells and connect to different data servers at different times. Caching of frequently accessed data items will be an important technique that reduces contention on the narrow-bandwidth wireless channel. However, cache invalidation strategies will be severely affected by the disconnection and mobility of the clients. The server may no longer know which clients are currently residing under its cell, and which of them are currently on. We propose a taxonomy of different cache invalidation strategies and study the impact of clients' disconnection times on their performance. We study ways to further improve the efficiency of the invalidation techniques described. We also describe how our techniques can be implemented over different network environments.

11.
Information Systems, 2004, 29(3): 207-234
Although data broadcast has been shown to be an efficient method for disseminating data items in mobile computing systems, the issue of how to ensure the consistency and currency of data items provided to mobile transactions (MTs), which are generated by mobile clients, has not been examined adequately. While data items are being broadcast, update transactions may install new values for them. If the executions of update transactions and the broadcast of data items are interleaved without any control, mobile transactions may observe inconsistent data values. The problem becomes more complex if the mobile clients maintain some cached data items for their mobile transactions. In this paper, we propose a concurrency control method, called ordered update first with order (OUFO), for mobile computing systems where a mobile transaction consists of a sequence of read operations and each MT is associated with a time constraint on its completion time. Besides ensuring data consistency and maximizing the currency of data provided to mobile transactions, OUFO also aims at reducing the data access delay of mobile transactions by using client caches. A hybrid re-broadcast/invalidation report (IR) mechanism is designed in OUFO for checking the validity of cached data items so as to improve cache consistency and minimize the overhead of transaction restarts due to data conflicts. This is highly important to the performance of mobile computing systems where the mobile transactions are associated with a deadline constraint on their completion times. Extensive simulation experiments have been performed to compare the performance of OUFO with two other efficient schemes: the multi-version broadcast method and the periodic IR method. The performance results show that OUFO offers better performance in most aspects, even when network disconnection is common.

12.
13.
A mobile-agent-based cache invalidation scheme for mobile computing environments (total citations: 2, self: 2, other: 2)
1. Introduction. Caching is an important technique in distributed computing environments because it can improve overall system performance (e.g., query response time and throughput). The network environment of mobile computing is a special kind of distributed environment; compared with traditional distributed systems, it has distinctive characteristics: mobility, frequent disconnection, bandwidth diversity, scalability, weak reliability, asymmetric network communication, limited battery power, and so on. These characteristics make caching particularly important in mobile computing environments, since caching can effectively reduce bandwidth requirements and save the energy of mobile computers.

14.
Caching data in a wireless mobile computer can significantly reduce the bandwidth requirement. However, due to battery power limitation, a wireless mobile computer may often be forced to operate in a doze or even totally disconnected mode. As a result, the mobile computer may miss some cache invalidation reports. In this paper, we present an energy-efficient cache invalidation method for a wireless mobile computer. The new cache invalidation scheme is called grouping with cold update-set retention (GCORE). Upon waking up, a mobile computer checks its cache validity with the server. To reduce the bandwidth requirement for validity checking, data objects are partitioned into groups. However, instead of simply invalidating a group if any of the objects in the group has been updated, GCORE retains the cold update set of objects in a group if possible. We present an efficient implementation of GCORE and conduct simulations to evaluate its caching effectiveness. The results show that GCORE can substantially improve mobile caching by reducing the communication bandwidth (thus energy consumption) for query processing.
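The following sketch captures the flavor of group-based validation that retains a group by dropping only a small update set instead of the whole group; the message formats, the threshold rule, and the names are assumptions, and the real GCORE bookkeeping differs in detail:

```python
def group_validate(groups, server_updates, threshold):
    """Illustrative GCORE-style wake-up validation (assumed message formats).

    groups:         dict gid -> {"last_checked": ts, "objects": set of ids}
    server_updates: callable (gid, since_ts) -> set of object ids updated in
                    that group since since_ts (what the uplink check returns)
    threshold:      max size of an update set that is still worth returning
    Returns the set of object ids the client must drop."""
    to_drop = set()
    for gid, info in groups.items():
        updated = server_updates(gid, info["last_checked"])
        if len(updated) <= threshold:
            to_drop |= updated & info["objects"]   # retain the rest of the group
        else:
            to_drop |= info["objects"]             # heavily updated: drop it all
    return to_drop
```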

15.
There are many methods to maintain consistency in a distributed computing environment. Ideally, efficient schemes for maintaining consistency should take into account the following factors: the lease duration of replicated data, the data access pattern, and system parameters. One method used to supply strong consistency in the web environment is the lease method. During a proxy's lease time from a web server, the web server can notify the proxy of a modification by invalidation or update. In this paper, we analyze lease protocol performance while varying the update/invalidation scheme, the lease duration, and the read rate. Using these analyses, we can choose the adaptive lease time and the proper protocol (invalidation or update of modifications) for each proxy in the web environment. As the number of proxies for web caching increases exponentially, a more efficient method for maintaining consistency needs to be designed. We also present a three-tier hierarchy in which each group and node independently and adaptively chooses the proper lease time and protocol for each proxy cache. These classifications make proxy caching adaptive to client access patterns and system parameters.
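A minimal server-side sketch of the invalidate-versus-update choice during a lease, assuming each proxy's read rate is known; the paper's analysis derives the lease duration and protocol choice from such parameters rather than from a fixed threshold:

```python
import time

def notify_proxies(proxies, obj_id, new_value, read_rate_threshold):
    """On an object update, notify only proxies whose lease has not expired.

    proxies: list of dicts with keys "lease_expires", "read_rate", "send"
             ("send" is a callable taking a message dict).
    High-read-rate proxies get the new value pushed (update protocol);
    low-read-rate ones get only an invalidation and re-fetch on demand."""
    now = time.time()
    for p in proxies:
        if p["lease_expires"] < now:
            continue                       # lease expired: no obligation
        if p["read_rate"] >= read_rate_threshold:
            p["send"]({"type": "update", "id": obj_id, "value": new_value})
        else:
            p["send"]({"type": "invalidate", "id": obj_id})
```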

16.
Proxy servers have been used to cache web objects to alleviate the load of web servers and to reduce network congestion on the Internet. In this paper, a central video server is connected to a proxy server via wide area networks (WANs), and the proxy server can reach many clients via local area networks (LANs). We assume a video can be either entirely or partially cached in the proxy to reduce WAN bandwidth consumption. Since the storage space and the sustained disk I/O bandwidth are limited resources in the proxy, how to efficiently utilize these resources to maximize the WAN bandwidth reduction is an important issue. We design a progressive video caching policy in which each video can be cached at several levels corresponding to cached data sizes and required WAN bandwidths. For a video, the proxy server determines whether to cache a smaller amount of data at a lower level or to gradually accumulate more data to reach a higher level. The proposed progressive caching policy allows the proxy to adjust the caching amount for each video based on its resource condition and the user access pattern. We investigate scenarios in which the access pattern is known a priori or unknown, and evaluate the effectiveness of the caching policy.
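A greedy toy version of level assignment may help fix ideas; the level granularity (quarters of a video) and the popularity-first rule are assumptions for illustration, not the paper's policy:

```python
def assign_cache_levels(videos, storage_budget, level_fraction=0.25):
    """Greedy illustration of progressive proxy caching (assumed policy).

    videos: list of dicts {"name": str, "size": float, "popularity": float}
    Each level caches an additional `level_fraction` of the video, so level 4
    means the whole video is cached. Popular videos are promoted level by
    level until the storage budget is exhausted.
    Returns dict name -> cached level (0..4)."""
    levels = {v["name"]: 0 for v in videos}
    used = 0.0
    for v in sorted(videos, key=lambda x: x["popularity"], reverse=True):
        step = v["size"] * level_fraction
        while levels[v["name"]] < 4 and used + step <= storage_budget:
            levels[v["name"]] += 1
            used += step
    return levels

# Example (sizes in GB): the hot video is cached fully, the cold one partially
print(assign_cache_levels(
    [{"name": "hot", "size": 2.0, "popularity": 0.9},
     {"name": "cold", "size": 2.0, "popularity": 0.1}], storage_budget=3.0))
```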

17.
A data grid is a typical distributed system, and accessing the massive, wide-area-distributed data in it incurs a large time overhead. This paper presents a cache model for fast, unified data access in a data grid system. The model adopts a two-level cache mechanism and uses two data buffer tables to quickly locate cached data and control access to it. Data replacement algorithms for each cache level are given, and a flexible configuration method is provided so that the caches can be deployed independently of clients and servers, making the cache scalable.
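A minimal two-level lookup in Python, as a generic sketch only; the paper's model additionally uses two buffer tables for locating cached data and controlling access, which are not reproduced here:

```python
from collections import OrderedDict

class TwoLevelCache:
    """Minimal two-level cache sketch (LRU at both levels)."""

    def __init__(self, l1_size, l2_size, fetch):
        self.l1 = OrderedDict()
        self.l2 = OrderedDict()
        self.l1_size, self.l2_size = l1_size, l2_size
        self.fetch = fetch                   # callable: key -> value from the grid

    def get(self, key):
        if key in self.l1:
            self.l1.move_to_end(key)
            return self.l1[key]
        if key in self.l2:                   # promote a second-level hit
            value = self.l2.pop(key)
        else:
            value = self.fetch(key)          # miss: go to the (slow) data grid
        self._put(self.l1, key, value, self.l1_size, overflow_to=self.l2)
        return value

    def _put(self, level, key, value, size, overflow_to=None):
        level[key] = value
        level.move_to_end(key)
        if len(level) > size:
            old_key, old_val = level.popitem(last=False)
            if overflow_to is not None:      # demote the L1 victim into L2
                self._put(overflow_to, old_key, old_val, self.l2_size)
```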

18.
One of the most commonly used two-factor user authentication mechanisms nowadays is based on a smart card and a password. A scheme of this type is called a smart-card-based password authentication scheme. The core feature of such a scheme is to enforce two-factor authentication in the sense that the client must have the smart card and know the password in order to gain access to the server. In this paper, we scrutinize the security requirements of this kind of scheme, and propose a new scheme and a generic construction framework for smart-card-based password authentication. We show that a secure password-based key exchange protocol can be efficiently transformed to a smart-card-based password authentication scheme provided that there exist pseudorandom functions and target collision resistant hash functions. Our construction appears to be the first one with provable security. In addition, we show that two recently proposed schemes of this kind are insecure.

19.
Research on the key factors affecting multimedia server performance (total citations: 7, self: 0, other: 7)
When building a large-scale video service system, an architecture based on hierarchical multi-server clusters has outstanding advantages in throughput, scalability, and cost, and is especially suitable for applications on the Internet. However, to fully exploit and improve the performance of a video service system, a series of problems around the main bottlenecks (such as server disk I/O bandwidth and network bandwidth) must still be solved. This paper analyzes the main factors that affect the performance of multimedia video servers, such as the video server architecture, the data delivery mode between server and clients, the distribution and placement of media data in the video server storage subsystem, the scheduling of disk access requests, cache management within a single server and cooperative caching across multiple servers, admission control policies, and stream scheduling policies; these factors have a great impact on the performance and throughput of video servers. The paper also introduces some performance optimization techniques suitable for large-scale video service systems, such as broadcasting and batching stream scheduling policies. Only by considering these factors comprehensively when building a video server system can the throughput of the server, and of the whole video service system, be truly improved and the clients' QoS requirements be well satisfied.

20.
In this paper, we propose an optimal cache replacement policy for data access applications in wireless networks where data updates are injected from all the clients. The goal of the policy is to increase effective hits in the client caches and, in turn, make efficient use of the network bandwidth in a wireless environment. To serve applications with the most up-to-date data, we also propose two enhanced cache access policies that make copies of data objects strongly consistent. We analytically prove that a cache system with a combination of our cache access and replacement policies guarantees the optimal number of effective cache hits and the optimal cost (in terms of network bandwidth) per data object access. Results from both analysis and extensive simulations demonstrate that the proposed policies outperform the popular Least Frequently Used (LFU) scheme in terms of both effective hits and bandwidth consumption. Our flexible system model makes the proposed policies equally applicable to applications on existing 3G, as well as upcoming LTE, LTE Advanced, and WiMAX wireless data access networks.
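Since the proposal is evaluated against Least Frequently Used, a small generic LFU baseline (a textbook version, not the paper's optimal policy) may help fix ideas:

```python
class LFUCache:
    """Generic Least Frequently Used cache, shown only as the baseline the
    paper compares against (evicts the entry with the fewest accesses)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}        # key -> value
        self.freq = {}        # key -> access count

    def get(self, key):
        if key not in self.data:
            return None
        self.freq[key] += 1
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.freq, key=self.freq.get)
            del self.data[victim], self.freq[victim]
        self.data[key] = value
        self.freq[key] = self.freq.get(key, 0) + 1
```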
