Similar Documents
20 similar documents found.
1.
In a wireless environment, mobile clients may cache frequently accessed data to reduce contention for the narrow bandwidth of the wireless channel. However, to minimize energy consumption, mobile clients may also often operate in a disconnected mode. As a result, the clients may miss some cache invalidation reports broadcast by a server. Thus, upon reconnection, a cache invalidation scheme must be employed to ensure the validity of the cached data. Existing techniques either require the clients to discard the cached data entirely, or require the clients to transmit uplink messages to a server. While the former eliminates the benefits of caching, the latter may lead to high energy consumption, poor channel utilization, and higher costs. In this paper, we present a new cache invalidation scheme, called Broadcast-Based Group Invalidation (BGI), that retains the benefits of caching while avoiding unnecessary transmissions (which translates to energy saving, better channel utilization, and lower costs). Under BGI, a pair of invalidation reports is broadcast periodically. While the object invalidation report enables the clients to salvage as many recently cached objects as possible, the group invalidation report cuts down on false invalidation. We conduct extensive studies based on a simulation model. The simulation results show that BGI consumes less energy and is superior to existing techniques.
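As a rough illustration of the two-report idea, the following sketch shows how a reconnecting client might combine a per-object report (recently updated objects) with a coarser per-group report to salvage as much of its cache as possible. The report formats, field names, and retention rule are assumptions for illustration, not the paper's actual protocol.

```python
# Illustrative BGI-style client step; report formats are assumed, not taken
# from the paper.

def apply_invalidation_reports(cache, last_sync, object_report, group_report, group_of):
    """cache: {object_id: value}; last_sync: time of the last report the client heard.
    object_report: {object_id: last_update_time} for recently updated objects.
    group_report: {group_id: last_update_time} covering all objects.
    group_of: maps object_id -> group_id."""
    survivors = {}
    for oid, value in cache.items():
        if oid in object_report:
            if object_report[oid] <= last_sync:
                survivors[oid] = value          # update already seen before disconnection
            # else: object changed while disconnected -> drop it
        elif group_report.get(group_of(oid), 0) <= last_sync:
            survivors[oid] = value              # whole group unchanged since last sync
        # else: group changed and object not listed -> conservatively drop
    return survivors

# Example: client disconnected at t=10 and reconnects later.
cache = {"a": 1, "b": 2, "c": 3}
kept = apply_invalidation_reports(
    cache, last_sync=10,
    object_report={"a": 15},                    # 'a' updated at t=15
    group_report={"g1": 15, "g2": 5},
    group_of=lambda oid: "g1" if oid in ("a", "b") else "g2")
print(kept)   # 'a' and 'b' dropped (group g1 changed), 'c' kept
```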

2.
Effective caching in mobile ad hoc networks increases data availability. However, caching at strategic locations with a reduced (controlled) number of copies is needed for many military applications involving UAVs, in order to address security concerns, lower maintenance overhead, and maintain availability. In general, existing cooperative caching approaches are deficient at finding this reduced set of strategic cache locations. One technique that reduces the number of strategic cache locations without affecting the efficacy of data access for a small UAV network topology is the "memory and location optimized caching scheme (MELOC)". However, having a single broker and broadcasting metadata across the whole network in MELOC severely hinders performance in a large UAV network topology. Moreover, frequent cache replacements caused by changes in network topology do not favor cache hits and bandwidth conservation in large mobile UAV networks. In this paper, we design and evaluate an extended version of MELOC, called MELOC-X, which suits large UAV network topologies by overcoming the above challenges. A comparison with a recent scheme with similar objectives shows a significant performance improvement. Through extensive simulations, we also evaluate the impact of the scheme on different metrics for accessing cached data, including the average number of hops, the average round-trip time (i.e., average query latency), cache hits, and mobility.

3.
《Information Systems》2004,29(3):207-234
Although data broadcast has been shown to be an efficient method for disseminating data items in mobile computing systems, the issue of how to ensure the consistency and currency of data items provided to mobile transactions (MTs), which are generated by mobile clients, has not been examined adequately. While data items are being broadcast, update transactions may install new values for them. If the executions of update transactions and the broadcast of data items are interleaved without any control, mobile transactions may observe inconsistent data values. The problem becomes more complex if the mobile clients maintain cached data items for their mobile transactions. In this paper, we propose a concurrency control method, called ordered update first with order (OUFO), for mobile computing systems in which a mobile transaction consists of a sequence of read operations and each MT is associated with a time constraint on its completion. Besides ensuring data consistency and maximizing the currency of data provided to mobile transactions, OUFO also aims at reducing the data access delay of mobile transactions by using client caches. A hybrid re-broadcast/invalidation report (IR) mechanism is designed in OUFO for checking the validity of cached data items, so as to improve cache consistency and minimize the overhead of transaction restarts due to data conflicts. This is highly important to performance in mobile computing systems where mobile transactions carry deadline constraints on their completion times. Extensive simulation experiments have been performed to compare the performance of OUFO with two other efficient schemes: the multi-version broadcast method and the periodic IR method. The results show that OUFO offers better performance in most aspects, even when network disconnection is common.
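The sketch below is a simplified illustration (not the OUFO algorithm itself) of the general pattern the abstract describes: before each read, a mobile transaction validates its cached copy against the most recent invalidation report, falling back to the broadcast channel when the copy is stale. All structures and names here are assumptions.

```python
# Simplified cached-read validation against an invalidation report (IR).

def read_item(item_id, cache, invalidation_report, fetch_from_broadcast):
    """cache: {item_id: (value, version)}
    invalidation_report: {item_id: latest_committed_version}"""
    latest = invalidation_report.get(item_id)
    if item_id in cache:
        value, version = cache[item_id]
        if latest is None or version >= latest:
            return value                      # cached copy is still current
    # Cache miss or stale copy: obtain the item from the broadcast cycle.
    value, version = fetch_from_broadcast(item_id)
    cache[item_id] = (value, version)
    return value

# Tiny demo with an in-memory "broadcast channel".
broadcast = {"x": (42, 7), "y": (99, 3)}
cache = {"x": (41, 6)}                        # stale: IR says version 7 exists
ir = {"x": 7}
print(read_item("x", cache, ir, lambda i: broadcast[i]))   # refetches -> 42
print(read_item("y", cache, ir, lambda i: broadcast[i]))   # miss -> 99
```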

4.
Research on Mobile Query Cache Processing
Client-side caching is an effective way to improve the overall performance of client/server database systems and the availability of data at the client. The scarcity of network resources in mobile environments makes client caching even more important. Semantic caching is a form of caching built on the semantic relationships among client queries. This paper proposes a semantic-cache-based client caching mechanism, describes the organization of cache contents, and presents a strategy for merging cache items; it then discusses query processing strategies based on the semantic cache. Finally, simulation results show that the proposed client caching mechanism improves the performance of client/server database systems in distributed and, especially, mobile environments.
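Semantic caching conventionally answers part of a query from the cache and ships only the uncovered remainder to the server. The sketch below illustrates that generic probe/remainder decomposition for a one-dimensional range predicate; it is not the paper's specific mechanism, and all names are illustrative.

```python
# Generic semantic-caching decomposition: a new range query is split into a
# "probe" part answerable from a cached query's result and "remainder" parts
# that must be sent to the server.

def split_range_query(query, cached):
    """query, cached: (low, high) ranges over the same attribute.
    Returns (probe, remainders): probe is the overlap served from cache,
    remainders are the uncovered sub-ranges sent to the server."""
    qlo, qhi = query
    clo, chi = cached
    lo, hi = max(qlo, clo), min(qhi, chi)
    if lo > hi:                       # no overlap: everything goes to the server
        return None, [query]
    probe = (lo, hi)
    remainders = []
    if qlo < lo:
        remainders.append((qlo, lo - 1))
    if qhi > hi:
        remainders.append((hi + 1, qhi))
    return probe, remainders

# Example: cache holds salary in [30_000, 60_000]; query asks for [50_000, 80_000].
print(split_range_query((50_000, 80_000), (30_000, 60_000)))
# -> ((50000, 60000), [(60001, 80000)])
```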

5.
Data broadcast is an attractive data dissemination method in mobile environments. To improve energy efficiency, existing air indexing schemes for data broadcast have focused only on reducing tuning time, i.e., the duration for which a mobile client stays active during data access. On the other hand, existing broadcast scheduling schemes have aimed only at reducing access latency through nonflat data broadcast to improve responsiveness. Little work has addressed energy efficiency and responsiveness concurrently. This paper proposes an energy-efficient indexing scheme called MHash that optimizes tuning time and access latency in an integrated fashion. MHash reduces tuning time by means of hash-based indexing and enables nonflat data broadcast to reduce access latency. The design of the hash function and the optimization of bandwidth allocation are investigated in depth to refine MHash. Experimental results show that, under skewed access distributions, MHash outperforms state-of-the-art air indexing schemes and achieves access latency close to that of optimal broadcast scheduling.
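The following is a minimal sketch of the generic hash-based air-indexing idea (not the exact MHash design): the client hashes a key to a bucket position in the broadcast cycle, dozes until that bucket is due, and tunes in only then. The bucket layout and parameters are assumptions.

```python
# Hash-based air indexing: map a key to a broadcast bucket so the client can
# sleep until that bucket is scheduled.

import hashlib

def bucket_of(key, num_buckets):
    """Stable hash so that server and client map keys identically."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_buckets

def doze_time(key, now_slot, num_buckets, slots_per_bucket=1):
    """Slots the client can sleep before its target bucket starts
    (assuming a fixed, flat cycle of one slot group per bucket)."""
    target = bucket_of(key, num_buckets) * slots_per_bucket
    cycle = num_buckets * slots_per_bucket
    return (target - now_slot) % cycle

# Example: 64 buckets; the client wakes up only for the bucket holding "item-17".
print(bucket_of("item-17", 64), doze_time("item-17", now_slot=5, num_buckets=64))
```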

6.
Many network applications require access to the most up-to-date information. An update event makes the corresponding cached data item obsolete, and cache hits on obsolete data items are simply useless to those applications. Frequently accessed but infrequently updated data items should get higher preference when caching, while infrequently accessed but frequently updated items should get lower preference; such items may not be cached at all, or should be evicted from the cache to accommodate items with higher preference. In wireless networks, remote data access is typically more expensive than in wired networks. Hence, an efficient caching scheme that considers both data access and update patterns can better reduce data transmissions in wireless networks. In this paper, we propose a step-wise optimal update-based replacement policy, called the Update-based Step-wise Optimal (USO) policy, for wireless data networks, which optimizes transmission cost by increasing the effective hit ratio. Our cache replacement policy is based on the idea of giving preference to frequently accessed but infrequently updated data, and is supported by an analytical model with quantitative analysis. We also present results from extensive simulations, which demonstrate that (1) the analytical model is validated by the simulation results and (2) the proposed scheme outperforms the Least Frequently Used (LFU) scheme in terms of effective hit ratio and communication cost.
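An illustrative sketch of an update-aware replacement rule in the spirit described here, preferring frequently accessed, infrequently updated items; the scoring formula below is an assumption, not the USO policy's exact math.

```python
# Evict the item with the lowest "effective hit" potential: items whose
# copies go stale quickly relative to how often they are read contribute
# few useful hits, so they are evicted first.

def evict_candidate(stats):
    """stats: {item_id: (access_rate, update_rate)} per unit time."""
    def score(item_id):
        access_rate, update_rate = stats[item_id]
        return access_rate / (1.0 + update_rate)   # effective-hit proxy
    return min(stats, key=score)

stats = {
    "news_ticker": (50.0, 40.0),   # hot but constantly updated
    "city_map":    (10.0, 0.1),    # cooler but almost never changes
}
print(evict_candidate(stats))      # -> "news_ticker"
```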

7.
Data caching at mobile clients is an important technique for improving the performance of wireless data dissemination systems. However, variable data sizes, data updates, limited client resources, and frequent client disconnections make cache management a challenge. We propose a gain-based cache replacement policy, Min-SAUD, for wireless data dissemination when cache consistency must be enforced before a cached item is used. Min-SAUD considers several factors that affect cache performance, namely, access probability, update frequency, data size, retrieval delay, and cache validation cost. The paper employs stretch as the major performance metric since it accounts for the data service time and, thus, is fair when items have different sizes. We prove that Min-SAUD achieves optimal stretch under some standard assumptions. Moreover, a series of simulation experiments have been conducted to thoroughly evaluate the performance of Min-SAUD under various system configurations. The simulation results show that, in most cases, the Min-SAUD replacement policy substantially outperforms two existing policies, namely, LRU and SAIU.
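A simplified, illustrative gain score combining the factors the abstract lists (access probability, update frequency, size, retrieval delay, and validation cost) is sketched below. It is not Min-SAUD's actual gain function; the formula and weights are assumptions.

```python
# Expected per-byte saving from keeping an item cached: each access saves the
# retrieval delay but still pays the validation cost, and the saving is
# discounted the more often the copy is invalidated by updates.

def gain(access_prob, update_rate, size, retrieval_delay, validation_cost):
    useful_fraction = 1.0 / (1.0 + update_rate)    # chance the copy is still valid
    return access_prob * (retrieval_delay * useful_fraction - validation_cost) / size

items = {
    "A": gain(0.4, 0.1, size=2.0, retrieval_delay=10.0, validation_cost=0.5),
    "B": gain(0.4, 5.0, size=2.0, retrieval_delay=10.0, validation_cost=0.5),
}
print(items)   # A scores far higher than B because B is updated too often
```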

8.
Cooperative caching is an efficient way to improve the performance of data access in mobile wireless networks: cache nodes select different data items for their limited storage in order to reduce total access delay. With growing demand for sharing videos and other data, especially for mobile applications in an Internet-based mobile ad hoc network, considering the relations among data items in cooperative caching becomes more important than before. However, most existing works do not consider the inherent relations among data items, such as logical, temporal, or spatial relations. In this paper, we present a novel solution, Gossip-based Cooperative Caching (GosCC), to address the cache placement problem while taking the sequential relation among data items into account. Each mobile node stores the IDs of the data items cached locally and the ID of the data item currently in use in a progress report, and uses these progress reports to determine whether a data item should be cached locally. The progress reports are propagated within the network in a gossip-based manner. To improve the user experience, GosCC aims to provide users with an uninterrupted data access service. Simulation results show that GosCC achieves better performance than Benefit-based Data Caching and HybridCache in terms of average interruption intervals and average interruption times, at the cost of a moderate increase in message overhead.

9.
Data caching is a popular technique for improving data accessibility in wired and wireless networks. However, in mobile ad hoc networks, the gains in access latency and cache hit ratio may diminish because of the mobility and limited cache space of mobile hosts (MHs). In this paper, an improved cooperative caching scheme called group-based cooperative caching (GCC) is proposed to generalize and enhance the performance of most group-based caching schemes. GCC allows MHs and their neighbors to form groups and to periodically exchange a bitmap data directory that is used by the proposed algorithms for data discovery, cache placement, and cache replacement. The goal is to reduce the access latency of data requests and to use the available caching space among MH groups efficiently. Two optimization techniques are also developed for GCC to reduce computation and communication overheads: the first compresses the directories into an aggregate bitmap, and the second employs multi-point relays in a forwarding-node selection scheme that reduces the number of broadcast messages inside a group. Our simulation results show that the optimized GCC outperforms existing cooperative caching schemes in terms of cache hit ratio, access latency, and average hop count.
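A sketch of the bitmap-directory idea described above: each mobile host advertises which of N known data items it caches as a bitmask, and a group can compress its members' directories into one aggregate bitmap with a bitwise OR. The directory layout and item numbering are assumptions for illustration.

```python
# Bitmap data directories and their aggregate for a caching group.

def directory_bitmap(cached_item_ids, num_items):
    bits = 0
    for item_id in cached_item_ids:
        if 0 <= item_id < num_items:
            bits |= 1 << item_id
    return bits

def aggregate(directories):
    """Group-level directory: item i is somewhere in the group iff bit i is set."""
    agg = 0
    for d in directories:
        agg |= d
    return agg

def has_item(bitmap, item_id):
    return bool(bitmap >> item_id & 1)

members = [directory_bitmap([1, 4], 16), directory_bitmap([4, 9], 16)]
group = aggregate(members)
print(has_item(group, 9), has_item(group, 2))   # True False
```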

10.
An Internet-based vehicular ad hoc network (Ivanet) is an emerging technique that combines the wired Internet and a vehicular ad hoc network (Vanet) to develop a ubiquitous communication infrastructure and improve universal information and service accessibility. A key design optimization technique in Ivanets is to cache frequently accessed data items in the local storage of vehicles. Since vehicles are not critically limited by storage/memory space or power consumption, selecting the proper data items for caching is not very critical. Rather, an important design issue is how to keep the cached copies valid when the original data items are updated; this is essential for providing fast access to valid data for fast-moving vehicles. In this paper, we propose a cooperative cache invalidation (CCI) scheme and its enhancement (ECCI) that take advantage of the underlying location management scheme to reduce the number of broadcast operations and the corresponding query delay. We develop an analytical model for the CCI and ECCI techniques for a first-hand estimate of performance trends and critical design parameters. We then modify two prior cache invalidation techniques to work in Ivanets: a poll-each-read (PER) scheme and an extended asynchronous (EAS) scheme. We compare the performance of the four cache invalidation schemes as a function of query interval, cache update interval, and data size through extensive simulation. Our simulation results indicate that the proposed schemes can reduce the query delay by up to 69%, increase the cache hit rate by up to 57%, and have the lowest communication overhead compared to the prior PER and EAS schemes.

11.
刘外喜  余顺争  蔡君  高鹰 《软件学报》2013,24(8):1947-1962
To overcome the well-known shortcomings of the current Internet architecture, future network architectures have become a research hotspot. Among the many proposed architectures, ICN (information-centric networking) is increasingly recognized as the most promising. ICN caches transmitted content at nodes along the delivery path, and an efficient caching mechanism is one of its key research issues. To this end, this paper proposes a mechanism that embeds centralized caching decisions into a distributed caching framework, called content-aware placement, discovery and replacement (APDR). APDR considers content placement, discovery, and replacement jointly, achieving coordinated caching and improving network performance. Its main idea is as follows: in addition to requesting content, an Interest packet also collects information from the nodes along its path, such as their potential demand for the content and their free cache space, so that the aggregation point and the destination node of the Interest can compute a caching plan; this plan is attached to the returning Data packet to instruct selected nodes on the return path to cache the content for a specified period. APDR is evaluated by simulation under a variety of conditions, and the results show that it improves network performance in terms of cache hit ratio, access cost, number of replacements, forwarding efficiency, and cache robustness, while incurring only modest extra overhead.
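A hypothetical sketch of the idea described above: an Interest collects per-hop state (free cache space, local popularity of the content) on its way upstream, and the node answering it picks which return-path hops should cache the content. Field names and the selection rule are illustrative only, not APDR's actual algorithm.

```python
# Interest gathers per-hop observations; the responder computes a cache plan
# that is attached to the Data packet on the way back.

def forward_interest(interest, node):
    """Each hop appends its own observations to the Interest."""
    interest["path"].append({
        "node": node["id"],
        "free_cache": node["free_cache"],
        "local_demand": node["demand"].get(interest["content"], 0),
    })
    return interest

def compute_cache_plan(interest, copies=2, cache_time=60):
    """At the aggregation point / content source: choose the hops with the
    highest local demand (and some free space) to cache the content."""
    eligible = [h for h in interest["path"] if h["free_cache"] > 0]
    chosen = sorted(eligible, key=lambda h: h["local_demand"], reverse=True)[:copies]
    return {h["node"]: cache_time for h in chosen}

interest = {"content": "/videos/v1", "path": []}
for node in ({"id": "r1", "free_cache": 5, "demand": {"/videos/v1": 3}},
             {"id": "r2", "free_cache": 0, "demand": {"/videos/v1": 9}},
             {"id": "r3", "free_cache": 2, "demand": {}}):
    forward_interest(interest, node)
plan = compute_cache_plan(interest)     # attached to the Data packet on the way back
print(plan)                             # {'r1': 60, 'r3': 60} under these assumptions
```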

12.
To improve the efficiency of data caching in mobile computing environments, and based on an analysis of existing work, this paper proposes a cache consistency maintenance strategy based on variable-period data broadcast. The strategy dynamically adjusts the frequency and content of the server's broadcast update reports according to factors such as the number of users accessing each data item and the frequency of updates. When a client receives an update report, it replaces the values of updated items in its cache with the new values instead of immediately evicting those items. A performance analysis shows that the method adapts well to mobile computing environments in which data update frequencies change continually.
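A minimal sketch of the behaviour described above: the client patches updated values into its cache instead of evicting them, and the server adapts its broadcast period to access and update intensity. The report format and the period-adjustment rule are assumptions for illustration.

```python
# Client side: refresh cached values in place; server side: adapt the period.

def apply_update_report(cache, report):
    """cache: {item_id: value}; report: {item_id: new_value} for items that
    changed since the previous report. Only items already cached are patched."""
    for item_id, new_value in report.items():
        if item_id in cache:
            cache[item_id] = new_value        # refresh in place, no eviction
    return cache

def next_broadcast_period(base_period, num_readers, update_rate,
                          min_period=1.0, max_period=60.0):
    """Broadcast more often when many clients read hot, fast-changing data;
    back off when data are cold or stable (illustrative rule)."""
    pressure = (num_readers * update_rate) or 1e-9
    return min(max_period, max(min_period, base_period / pressure))

cache = {"stock:ACME": 10.2, "weather": "rain"}
print(apply_update_report(cache, {"stock:ACME": 10.7, "unseen": 1}))
print(next_broadcast_period(base_period=30.0, num_readers=100, update_rate=0.5))
```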

13.
We propose and analyze an adaptive per-user per-object cache consistency management (APPCCM) scheme for mobile data access in wireless mesh networks. APPCCM supports strong data consistency semantics through integrated cache consistency and mobility management. Its objective is to minimize the overall network cost incurred by data query/update processing, cache consistency management, and mobility management. In APPCCM, data objects can be cached adaptively either at the mesh clients directly or at mesh routers dynamically selected by APPCCM. APPCCM is adaptive, per-user, and per-object: the decision regarding where to cache a data object accessed by a mesh client is made dynamically, depending on the mesh client's mobility and data query/update characteristics and on the network's conditions. We develop analytical models for evaluating the performance of APPCCM and devise a computational procedure for dynamically calculating the overall network cost incurred. We demonstrate via both model-based analysis and simulation validation that APPCCM outperforms non-adaptive cache consistency management schemes that always cache data objects at the mesh client, or at the mesh client's current serving mesh router, for mobile data access in wireless mesh networks.

14.
15.
Towards Intelligent Semantic Caching for Web Sources
An intelligent semantic caching scheme suitable for web sources is presented. Since web sources typically have weaker querying capabilities than conventional databases, existing semantic caching schemes cannot be applied directly. Our proposal accounts for the difference between the query capabilities of an end-user system and those of web sources. In addition, an analysis of the match types between a user's input query and the cached queries is presented. Based on this analysis, we present an algorithm that finds the best-matched query under different circumstances. Furthermore, we present a method that uses semantic knowledge acquired from the data to avoid unnecessary accesses to web sources by transforming cache misses into cache hits. To verify the effectiveness of the proposed semantic caching scheme, we first show how to generate synthetic queries exhibiting different levels of semantic locality. Then, using these test sets, we show that the proposed query matching technique is an efficient and effective way to perform semantic caching for web databases.

16.
A Semantic Cache Consistency Maintenance Technique for Mobile Environments
Building on an in-depth study of cache invalidation broadcasting and semantic caching, this paper proposes a new semantic cache consistency maintenance technique for mobile environments: the asynchronous stateful technique based on semantic caching (BSCAS). BSCAS supports the various disconnection modes of mobile clients, reduces the overhead of wireless communication, and gives mobile clients greater autonomy.

17.
A continuous partial match query is a partial match query whose result is maintained continuously in the client's memory. Conventional cache invalidation methods for mobile clients are record-ID-based. However, since partial match queries use content-based retrieval, these conventional ID-based approaches cannot efficiently manage the cache consistency of mobile clients. In this paper, we propose a predicate-based cache invalidation scheme for continuous partial match queries in mobile computing environments. We represent the cache state of a mobile client as a predicate and also construct the cache invalidation report (CIR), which the server broadcasts to clients for cache management, from predicates. To reduce the amount of information needed for cache management, we propose a set of methods for CIR construction (at the server) and for identification of invalidated data (at the client). Through experiments, we show that the predicate-based approach is very effective for the cache management of mobile clients.
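An illustrative sketch of the predicate-based idea: the client's cache state and the server's invalidation report are both predicates over attribute values, and cached results need to be invalidated only if the two predicates can be satisfied simultaneously. Conjunctions of per-attribute value sets stand in for general predicates here; this representation and the test are assumptions, not the paper's exact construction.

```python
# Predicate overlap test: invalidate a cached partial-match result only when
# the reported update predicate intersects the cached query's predicate.

def intersects(cache_predicate, cir_predicate):
    """Each predicate: {attribute: set_of_allowed_values}; a missing attribute
    means 'any value'. The predicates overlap iff every attribute named by
    both sides still has at least one common value."""
    for attr in cache_predicate.keys() & cir_predicate.keys():
        if not cache_predicate[attr] & cir_predicate[attr]:
            return False
    return True

# Client caches the partial-match query (dept = 'sales', city = *).
cached = {"dept": {"sales"}}
# Server reports an update touching (dept = 'hr', city = 'Seoul').
cir_update = {"dept": {"hr"}, "city": {"Seoul"}}
print(intersects(cached, cir_update))   # False -> cached result stays valid

cir_update2 = {"dept": {"sales"}, "city": {"Seoul"}}
print(intersects(cached, cir_update2))  # True  -> cached result must be invalidated
```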

18.
This paper studies the challenging problem of cache placement in wireless multi-hop ad hoc networks. More specifically, we study how to achieve an optimal tradeoff between total access delay and caching overheads by properly selecting a subset of wireless nodes as cache nodes when the network topology changes. We assume a data source updates a data item that is accessed by other client nodes. Most existing cache placement algorithms use hop counts to measure the total cost of a caching system, but hop delay in wireless networks varies considerably with the contention among nodes and the traffic load on each link. We therefore evaluate the per-hop delay of each link according to the contention a wireless node observes at the MAC layer. We propose two heuristic cache placement algorithms, named Centralized Contention-aware Caching Algorithm (CCCA) and Distributed Contention-aware Caching Algorithm (DCCA), both of which detect variations in contention and changes in traffic flows in order to evaluate the benefit of selecting a node as a cache node. We also apply a TTL-based cache consistency strategy to maintain delta consistency among all cache nodes. Simulation results show that the proposed algorithms outperform alternative approaches in terms of average query delay, caching overheads, and query success ratio.

19.
Scalable cache invalidation algorithms for mobile data access
In this paper, we address the problem of cache invalidation in mobile and wireless client/server environments. We present cache invalidation techniques that scale not only to a large number of mobile clients, but also to a large number of data items that can be cached in the mobile clients. We propose two scalable algorithms: the Multidimensional Bit-Sequence (MD-BS) algorithm and the Multilevel Bit-Sequence (ML-BS) algorithm. Both algorithms are based on our prior work on the Basic Bit-Sequences (BS) algorithm. Our study shows that the proposed algorithms are effective for a large number of cached data items with low update rates. The study also illustrates that the algorithms can be used with other complementary techniques to address cache invalidation for data items with varied update and access rates.
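Below is a deliberately simplified, single-level bit-vector report in the spirit of the bit-sequence family; it is not the BS/MD-BS/ML-BS algorithms themselves, and the report structure is an assumption.

```python
# Single-level bit-vector invalidation report: bit i is set iff item i was
# updated after the report's reference timestamp.

def build_report(update_times, reference_ts, num_items):
    bits = 0
    for item_id, t in update_times.items():
        if t > reference_ts and 0 <= item_id < num_items:
            bits |= 1 << item_id
    return {"reference_ts": reference_ts, "bits": bits}

def apply_report(cache, last_sync_ts, report):
    """If the client disconnected before the report's reference time, the
    report cannot be trusted to cover all missed updates: drop everything.
    Otherwise drop exactly the flagged items."""
    if last_sync_ts < report["reference_ts"]:
        return {}
    return {i: v for i, v in cache.items() if not report["bits"] >> i & 1}

report = build_report({0: 12, 1: 5, 2: 20}, reference_ts=10, num_items=3)
print(apply_report({0: "a", 1: "b", 2: "c"}, last_sync_ts=11, report=report))
# -> {1: 'b'}   (items 0 and 2 were updated after t=10)
```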

20.
As wireless networks become an integral component of the communication infrastructure, energy efficiency is a crucial design consideration because of the limited battery life of mobile terminals. Data broadcast is an effective data dissemination method in mobile environments. Current air indexing schemes for data broadcast have focused only on energy efficiency (reducing tuning time), while current broadcast scheduling schemes reduce access latency through nonflat data broadcast to improve only responsiveness. Few studies have addressed energy efficiency and responsiveness concurrently. This study proposes a fast data access scheme that also supports an energy-saving protocol: broadcast channels are constructed according to the access frequency of each type of message to improve the energy efficiency of mobile devices. The windmill scheduling algorithm presented in this paper organizes all message types in the broadcast channel in the most symmetrical distribution possible, reducing both tuning time and access time. The performance of the proposed mechanism is analyzed, and its efficiency improvement over existing methods is demonstrated numerically. The results indicate that the proposed mechanism improves both tuning time and access time in the presence of skewed access distributions among the disseminated messages.
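The sketch below shows a simplified frequency-proportional broadcast schedule in the spirit of the description above, with each message type repeated in proportion to its access frequency and spread as evenly as possible over the cycle. It is not the windmill algorithm itself; the greedy earliest-due placement rule is an assumption.

```python
# Nonflat broadcast scheduling: hotter message types appear more often and at
# roughly even spacing within one broadcast cycle.

import heapq

def build_schedule(frequencies, cycle_len):
    """frequencies: {msg_type: relative access frequency}. Each type is
    rebroadcast roughly every total/frequency slots (earliest-due-first)."""
    total = sum(frequencies.values())
    heap = [(0.0, total / f, m) for m, f in frequencies.items()]
    heapq.heapify(heap)
    schedule = []
    for _ in range(cycle_len):
        due, period, m = heapq.heappop(heap)
        schedule.append(m)
        heapq.heappush(heap, (due + period, period, m))
    return schedule

print(build_schedule({"hot": 4, "warm": 2, "cold": 1}, cycle_len=14))
# -> ['hot', 'warm', 'cold', 'hot', 'hot', 'warm', 'hot', ...] under this rule
```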
