Similar Documents
18 similar documents found (search time: 171 ms)
1.
Research on Cache Processing of Mobile Queries   Cited by: 5 (self: 0, others: 5)
Client-side caching is an effective way to improve the overall performance of client/server database systems and the availability of data at the client. The scarcity of network resources in mobile environments makes client caching even more important. A semantic cache is a class of client cache built on the semantic relationships among client queries. This paper proposes a semantic-cache-based client caching mechanism, describes how the cache contents are organized, and presents a strategy for merging cache items; it then discusses query processing strategies over the semantic cache. Finally, simulation results show that the proposed mechanism improves the performance of client/server database systems in distributed and, especially, mobile environments.
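A minimal sketch of what such a semantic cache item might look like: a stored query result described by its source relation, projected attributes, and selection predicate, plus a simple merge test. The class name, field names, and the single-attribute range merge rule are illustrative assumptions, not the paper's notation.

```python
from dataclasses import dataclass, field

@dataclass
class CacheItem:
    """One semantic cache entry: a past query's description plus its result."""
    relation: str                              # source relation of the result
    attributes: frozenset                      # projected attributes kept in the item
    predicate: dict                            # selection as {attr: (low, high)} ranges
    rows: list = field(default_factory=list)   # cached result tuples

def can_merge(a: CacheItem, b: CacheItem) -> bool:
    """Toy merge test: same relation and attribute set, and overlapping ranges
    on a single predicate attribute, so one item can describe both results."""
    if a.relation != b.relation or a.attributes != b.attributes:
        return False
    if set(a.predicate) != set(b.predicate) or len(a.predicate) != 1:
        return False
    attr = next(iter(a.predicate))
    (lo1, hi1), (lo2, hi2) = a.predicate[attr], b.predicate[attr]
    return lo2 <= hi1 and lo1 <= hi2
```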

2.
Cache-Based Query Processing under Disconnection   Cited by: 5 (self: 0, others: 5)
吴婷婷  章文嵩  周兴铭 《计算机学报》2003,26(10):1393-1399
In mobile environments, wireless networks have low reliability and high cost, and mobile hosts are constrained by power and other resources; as a result, a mobile host is often disconnected, either voluntarily or involuntarily, i.e., it has no network connection. To improve a mobile client's ability to access data while disconnected and to make effective use of the mobile cache, this paper proposes QPID, a semantic-cache-based query processing algorithm for disconnected operation. The main idea of the algorithm is to first identify the cache items related to the current query and then further process the data of those items to obtain, from the cache, the results that satisfy the query. Experiments show that query processing based on the QPID algorithm better satisfies client queries under disconnection.
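The two-phase idea behind QPID-style processing (first locate cache items related to the query, then post-process their data locally) can be sketched as follows. The dictionary layout, function names, and the range-overlap notion of "related" are assumptions made for illustration; they are not the published algorithm.

```python
def related_items(cache, query_pred):
    """Phase 1: keep cache items whose stored range overlaps the query's range.
    Items and queries describe their selection as {attr: (low, high)}."""
    def overlaps(p, q):
        return all(a in p and p[a][0] <= q[a][1] and q[a][0] <= p[a][1] for a in q)
    return [item for item in cache if overlaps(item["pred"], query_pred)]

def answer_from_cache(cache, query_pred):
    """Phase 2: re-apply the query condition to the rows of the related items,
    so only tuples that actually satisfy the query are returned (duplicates
    across overlapping items are not removed in this sketch)."""
    def satisfies(row, q):
        return all(q[a][0] <= row[a] <= q[a][1] for a in q)
    result = []
    for item in related_items(cache, query_pred):
        result.extend(r for r in item["rows"] if satisfies(r, query_pred))
    return result

# Example: while disconnected, answer "salary BETWEEN 3000 AND 4000" from the cache.
cache = [{"pred": {"salary": (2000, 3500)},
          "rows": [{"id": 1, "salary": 2500}, {"id": 2, "salary": 3200}]}]
print(answer_from_cache(cache, {"salary": (3000, 4000)}))   # -> only the id-2 row
```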

3.
The Least Weight Item (LWI) Replacement Policy for Semantic Caches   Cited by: 5 (self: 0, others: 5)
In client-server database systems, a semantic cache is a client cache built on the semantic relationships among client queries; its contents consist of the results of past queries together with their descriptions. Targeting the characteristics of semantic caches, this paper proposes the least weight item (LWI) replacement policy for semantic caches. The policy determines each cache item's weight from the access frequency of the item's projection attributes and the degree to which the item's predicate matches incoming queries, combined with the temporal locality of data access, and evicts the item with the smallest weight. Performance experiments show that, for semantic caches, a system using the LWI policy outperforms systems using the traditional LRU and LFU policies.
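The abstract names the factors that enter the weight (projection-attribute access frequency, predicate match with queries, temporal locality) but not a formula; the combination below and the field names are therefore assumptions, meant only to show the shape of a least-weight eviction.

```python
import time

def lwi_weight(item, attr_freq, now=None):
    """Illustrative weight: popularity of the item's projected attributes plus the
    number of recent queries its predicate matched, discounted by time since last access."""
    now = time.time() if now is None else now
    popularity = sum(attr_freq.get(a, 0) for a in item["attributes"])
    recency = 1.0 / (1.0 + now - item["last_access"])      # temporal locality
    return (popularity + item["match_count"]) * recency

def evict_least_weight(cache, attr_freq):
    """Remove and return the cache item with the smallest weight."""
    victim = min(cache, key=lambda it: lwi_weight(it, attr_freq))
    cache.remove(victim)
    return victim
```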

4.
Semantic caching has long been one of the hot topics in database research. A semantic cache stores, in a designated cache, the query statements that users submit to the server together with the result data those queries actually return; it therefore contains not only data but also descriptive information about that data, which can improve query efficiency. Before semantic caching, there were other caching techniques such as page caching, tuple caching, and block caching. This paper first introduces the semantic cache model, then describes and analyzes query matching algorithms and cache replacement policies for semantic caches, and finally summarizes semantic caching techniques, providing a theoretical basis for further research on semantic caching.

5.
Semantic caching has long been one of the hot topics in database research. A semantic cache stores, in a designated cache, the query statements that users submit to the server together with the result data those queries actually return; it therefore contains not only data but also descriptive information about that data, which can improve query efficiency. Before semantic caching, there were other caching techniques such as page caching, tuple caching, and block caching. The paper first introduces the semantic cache model, then describes and analyzes query matching algorithms and cache replacement policies for semantic caches, and finally summarizes semantic caching techniques, providing a theoretical basis for further research on semantic caching.

6.
Research on Weak Cache Consistency in Mobile Environments   Cited by: 5 (self: 0, others: 5)
In mobile environments, client-side caching provides an effective way to improve the overall performance of client-server database systems. The strategy for synchronizing the cache with server-side data is an important topic in caching research. In mobile environments, considering network bandwidth, cost, and reliability, a client may allow the cache to maintain only weak consistency, i.e., allow a bounded deviation between the cached data and the server-side data. For semantic caches, this paper presents a method by which the client bounds the deviation and proposes a validity-period-based cache synchronization algorithm.
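One way to read "validity-period-based synchronization" is that each cached item carries a client-chosen validity period bounding how stale it may become; the sketch below illustrates that reading, and every name in it (fields, functions, the fetch callback) is an assumption rather than the paper's algorithm.

```python
import time

def usable_under_weak_consistency(item, now=None):
    """An item may answer queries locally while its validity period has not expired."""
    now = time.time() if now is None else now
    return now - item["fetched_at"] <= item["validity_period"]

def refresh_if_stale(item, fetch_from_server, now=None):
    """Synchronize only stale items, keeping wireless traffic low; fresh items
    are served straight from the cache even though they may lag the server."""
    now = time.time() if now is None else now
    if not usable_under_weak_consistency(item, now):
        item["rows"] = fetch_from_server(item["predicate"])
        item["fetched_at"] = now
    return item["rows"]
```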

7.
Research on Aggregate Query Matching in Semantic Caches   Cited by: 1 (self: 1, others: 0)
To improve query efficiency in very large database systems, this work focuses on aggregate query techniques for massive databases and extends semantic caching, which is usually applied to queries over small databases, to aggregate queries over massive databases. It first studies a formal description of semantic caches oriented toward aggregate queries, then discusses the conditions under which the cache can be used to answer a query and classifies the kinds of query matching, and proposes and implements decision algorithms for containment matching and intersection matching; corresponding experimental results are given. Application in a large real-world project shows that these decision algorithms are effective.
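A small sketch of the distinction between containment matching (the cached result alone answers the query) and intersection matching (only part of the answer is cached): both predicates are conjunctive ranges over the same attributes, which is a simplifying assumption of this illustration, not the paper's formal model.

```python
def classify_match(cache_pred, query_pred):
    """Classify a query predicate against a cached one; both are {attr: (low, high)}
    over the same attributes.  Returns 'containment', 'intersection', or 'disjoint'."""
    if all(cache_pred[a][0] <= lo and hi <= cache_pred[a][1]
           for a, (lo, hi) in query_pred.items()):
        return "containment"     # cached data alone can answer the query
    if all(cache_pred[a][0] <= hi and lo <= cache_pred[a][1]
           for a, (lo, hi) in query_pred.items()):
        return "intersection"    # part of the answer is cached, the rest must be fetched
    return "disjoint"

print(classify_match({"year": (2000, 2010)}, {"year": (2003, 2008)}))  # containment
print(classify_match({"year": (2000, 2010)}, {"year": (2008, 2015)}))  # intersection
```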

8.
With the development of communication and network technology, embedded mobile databases are becoming a new research direction in the database field. Addressing characteristics of embedded mobile databases such as frequent disconnection and low bandwidth, this paper introduces data broadcasting and client caching, two methods used in embedded mobile databases to handle disconnected operation, and proposes a semantics-based query optimization strategy. The study shows that this strategy can improve the efficiency of mobile database queries.

9.
Research on Query Optimization Strategies for Embedded Mobile Databases   Cited by: 1 (self: 0, others: 1)
李荣鑫  洪胜华 《微机发展》2005,15(5):145-147
With the development of communication and network technology, embedded mobile databases are becoming a new research direction in the database field. Addressing characteristics of embedded mobile databases such as frequent disconnection and low bandwidth, this paper introduces data broadcasting and client caching, two methods used in embedded mobile databases to handle disconnected operation, and proposes a semantics-based query optimization strategy. The study shows that this strategy can improve the efficiency of mobile database queries.

10.
A Semantic Cache Consistency Maintenance Technique for Mobile Environments   Cited by: 1 (self: 0, others: 1)
Building on an in-depth study of cache invalidation broadcasting and semantic caching, this paper proposes a new semantic cache consistency maintenance technique for mobile environments: an asynchronous stateful technique based on semantic caching (BSCAS). BSCAS supports the various disconnection modes of mobile clients, reduces the overhead of wireless communication, and gives mobile clients better autonomy.

11.
The continuous partial match query is a partial match query whose result remains consistently in the client’s memory. Conventional cache invalidation methods for mobile clients are record ID-based. However, since the partial match query uses content-based retrieval, the conventional ID-based approaches cannot efficiently manage the cache consistency of mobile clients. In this paper, we propose a predicate-based cache invalidation scheme for continuous partial match queries in mobile computing environments. We represent the cache state of a mobile client as a predicate, and also construct a cache invalidation report (CIR), which the server broadcasts to clients for cache management, with predicates. In order to reduce the amount of information that is needed for cache management, we propose a set of methods for CIR construction (in the server) and identification of invalidated data (in the client). Through experiments, we show that the predicate-based approach is very effective for the cache management of mobile clients.
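To make the predicate-based idea concrete: if each cached entry and each invalidation report entry is described by a conjunctive range predicate, a client can invalidate exactly the entries whose predicate may overlap an updated region. The representation and function names below are illustrative assumptions; the paper's CIR construction and identification methods are more elaborate.

```python
def may_overlap(p, q):
    """Two conjunctive range predicates ({attr: (low, high)}) can select common
    rows only if their ranges overlap on every attribute they share."""
    return all(p[a][0] <= q[a][1] and q[a][0] <= p[a][1] for a in p.keys() & q.keys())

def apply_cir(client_cache, cir_predicates):
    """On receiving a CIR (modeled here as a list of predicates describing updated
    data), drop every cached entry whose predicate may overlap an updated region."""
    return [entry for entry in client_cache
            if not any(may_overlap(entry["pred"], upd) for upd in cir_predicates)]
```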

12.
Cache-Based Optimization of XML Algebra Queries   Cited by: 1 (self: 1, others: 0)
For XML algebra query optimization, caching is used to keep frequently queried pattern trees in a cache. Matching rules determine whether a query's pattern tree matches a pattern tree held in the cache, and for the matched part the corresponding partial query results are fetched directly from the cache, improving query efficiency. The pattern matching rules between queries and the cache are analyzed, and experiments demonstrate the feasibility and effectiveness of the rules.

13.
Due to the proliferation of the Internet and intranets, distributed storage systems have received a lot of attention. These systems span a large number of machines and store huge amounts of data for many users. In these systems, a row can be accessed directly using its row key. We concentrate on the problem of efficiently processing queries whose predicate is on a column other than the row key. In this paper, we present a cache management technique, called DICE, which maintains the results of range queries to support subsequent range queries. To accelerate the search over the cached query results, we use modified Interval Skip Lists. In addition, we devise a novel cache replacement policy, since DICE maintains intervals rather than individual data items. Because our cache replacement policy considers the properties of intervals, the proposed technique is more efficient than traditional buffer replacement algorithms. Our experimental results demonstrate the efficiency of the proposed technique.
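A toy version of keeping range-query results as intervals for later probes is sketched below; it uses a plain sorted list instead of the modified Interval Skip Lists the paper describes, and the class, field, and key names are assumptions.

```python
class RangeResultCache:
    """Each cached entry is (low, high, rows) for a past range query on one column."""
    def __init__(self):
        self.entries = []                       # kept sorted by the interval's low bound

    def insert(self, low, high, rows):
        self.entries.append((low, high, rows))
        self.entries.sort(key=lambda e: e[0])

    def probe(self, low, high):
        """Return matching rows if some stored interval covers [low, high], else None."""
        for lo, hi, rows in self.entries:
            if lo > low:
                break                           # later entries start even further right
            if hi >= high:
                return [r for r in rows if low <= r["key"] <= high]
        return None                             # cache miss: fall back to the storage layer
```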

14.
Traditional cache replacement policies are not tailored to semantic cache replacement and cannot effectively reuse the data in the cache, which hurts the cache hit ratio. To address this, and considering the characteristics of XML algebra queries, this paper proposes a cache replacement policy for XML queries based on semantic contribution values: the cache items produced by the user's historical queries are clustered, their semantic contribution to future queries is predicted, and when cache space runs short the item with the smallest semantic contribution value is evicted, improving the efficiency of user queries. Experimental results show that, compared with the least frequently used (LFU) and least recently used (LRU) policies, this policy effectively shortens query time and improves the cache hit ratio.

15.
Caching frequently accessed data items on the client side is an effective technique to improve system performance in wireless networks. Due to cache size limitations, cache replacement algorithms are used to find a suitable subset of items for eviction from the cache. Many existing cache replacement algorithms employ a value function of different factors such as time since last access, entry time of the item in the cache, transfer time, item expiration time, and so on. However, most existing algorithms are designed for the WWW environment under a weak consistency model; their value functions are chosen based on experience and work only for a specific performance metric. In this paper, we propose a generalized value function for cache replacement algorithms for wireless networks under a strong consistency model. The distinctive feature of our value function is that it is generalized and can be used for various performance metrics by making the necessary changes. Further, we prove that the proposed value function can optimize the access cost in our system model. To demonstrate the practical effectiveness of the generalized value function, we derive two specific functions and evaluate them against two different targets: minimizing the query delay and minimizing the downlink traffic. Compared to previous schemes, our algorithm significantly improves performance in terms of query delay or bandwidth utilization, depending on the specified target.
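The overall shape of a value-function-based replacement scheme can be sketched as follows: each factor gets a weight, items are scored, and the lowest-valued items are evicted first. The specific factors, weights, and field names here are placeholders; the paper derives its two concrete functions analytically for the query-delay and downlink-traffic targets.

```python
def make_value_function(weights):
    """Build a value function from per-factor weights, e.g.
    {"access_rate": 2.0, "remaining_ttl": 0.5, "transfer_time": 1.0};
    each cached item is a dict holding numeric values for those factors."""
    def value(item):
        return sum(w * item[f] for f, w in weights.items())
    return value

def evict_until(cache, value_fn, needed_space):
    """Evict lowest-value items until the requested space has been freed."""
    freed, victims = 0, []
    for item in sorted(cache, key=value_fn):     # sorted() copies, so removal is safe
        if freed >= needed_space:
            break
        cache.remove(item)
        victims.append(item)
        freed += item["size"]
    return victims
```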

16.
Fragmentation has been used to distribute the contents of a database across the sites of a distributed database system. At run time, the system must determine which fragments can be used to answer each query. This process requires solving the predicate implication problem. In order to speed processing, it is desirable to do as much preprocessing as possible on the prestored fragments, without knowledge of the run-time query. In this paper, preprocessing of database fragments to speed later run-time implication checking is investigated. The investigation is based on a new concept, separation among predicates. When two predicates are properly separated, their union cannot be implied by any other conjunctive predicate unless one of them is implied by the conjunctive predicate. A polynomial-time algorithm for checking pair-wise separation among a collection of fragment predicates is introduced and its complexity is theoretically analyzed. The separation checking algorithm is accompanied by a query processing algorithm that uses the separation properties of the fragments to speed real-time query processing. The two algorithms are scalable with respect to the available preprocessing time, in the sense that the preprocessing algorithm can be run for shorter periods to produce partial preprocessing that the query processing algorithm can still use.
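The run-time problem the preprocessing is meant to accelerate, deciding whether a query's predicate implies a fragment's predicate, can be illustrated for conjunctive range predicates as below; this is only the basic implication test, not the separation-based preprocessing itself, and the predicate encoding is an assumption.

```python
def implies(query_pred, fragment_pred):
    """True if every row satisfying the query predicate also satisfies the fragment
    predicate, i.e. the query's ranges lie inside the fragment's ranges.
    Predicates are {attr: (low, high)}; an attribute the query leaves unconstrained
    may take values outside the fragment's range, so the implication fails."""
    for attr, (f_lo, f_hi) in fragment_pred.items():
        if attr not in query_pred:
            return False
        q_lo, q_hi = query_pred[attr]
        if q_lo < f_lo or q_hi > f_hi:
            return False
    return True

# If the query predicate implies a fragment's predicate, that single fragment
# already contains every row the query can return.
print(implies({"age": (30, 40)}, {"age": (18, 65)}))   # True
print(implies({"age": (30, 40)}, {"age": (35, 65)}))   # False
```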

17.
This paper discusses methods for extending SQL with recursive queries and logical predicates. It gives relational algebra expressions for recursive queries, discusses the syntax and algorithms of the SQL extension for both path-aware and path-independent recursive queries, and discusses the theory, syntax, and algorithms for extending logical predicates in SQL.
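As a reminder of what a recursive query computes, the sketch below evaluates the classic reachability query over an edge relation by semi-naive fixpoint iteration; it only illustrates the fixpoint semantics of such queries and is not the SQL extension discussed in the paper.

```python
def transitive_closure(edges):
    """Semi-naive evaluation of the recursive query
       path(x, y) :- edge(x, y)
       path(x, z) :- path(x, y), edge(y, z)."""
    closure = set(edges)
    delta = set(edges)                 # facts derived in the previous round
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - closure          # keep only genuinely new facts
        closure |= delta
    return closure

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```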

18.
A scalable low-latency cache invalidation strategy for mobile environments   Cited by: 3 (self: 0, others: 3)
Caching frequently accessed data items on the client side is an effective technique for improving performance in a mobile environment. Classical cache invalidation strategies are not suitable for mobile environments due to frequent disconnections and the mobility of clients. One attractive cache invalidation technique is based on invalidation reports (IRs). However, the IR-based cache invalidation solution has two major drawbacks, which have not been addressed in previous research. First, there is a long query latency associated with this solution, since a client cannot answer a query until the next IR interval. Second, when the server updates a hot data item, all clients have to query the server and get the data from the server separately, which wastes a large amount of bandwidth. In this paper, we propose an IR-based cache invalidation algorithm that can significantly reduce the query latency and efficiently utilize the broadcast bandwidth. Detailed analysis and simulation experiments are carried out to evaluate the proposed methodology. Compared to previous IR-based schemes, our scheme significantly improves the throughput and reduces the query latency, the number of uplink requests, and the broadcast bandwidth requirements.
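A simplified view of how a client consumes one invalidation report: the report covers a recent window and lists (item id, update timestamp) pairs; clients drop cached copies older than a listed update, and a client disconnected longer than the window must discard its whole cache, which is exactly the latency and bandwidth weakness the paper then attacks. Field and function names are assumptions for illustration.

```python
def process_invalidation_report(cache, report, last_report_heard):
    """cache: {item_id: {"fetched_at": ts, ...}};
    report: {"window_start": ts, "updates": [(item_id, update_ts), ...]}."""
    if last_report_heard < report["window_start"]:
        cache.clear()                  # disconnected past the report's window: trust nothing
        return
    for item_id, update_ts in report["updates"]:
        entry = cache.get(item_id)
        if entry is not None and entry["fetched_at"] < update_ts:
            del cache[item_id]         # cached copy predates the update: invalid
```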
