Similar Documents
20 similar documents found (search time: 156 ms)
1.
A Semi-Supervised Learning Approach to Multi-Modal Web Query Refinement   (Cited: 1; self-citations: 0, others: 1)
Web search systems often refine queries through interaction with users in order to improve search performance. Besides text, web pages contain a large amount of information in other modalities, such as images, audio, and video. Previous work on query refinement has rarely exploited such multi-modal information. This paper proposes M2S2QR, a semi-supervised multi-modal Web query refinement method that casts query refinement as a machine learning problem. First, a learner is trained for each modality from the web pages the user has judged; then web pages not judged by the user are exploited to improve the learners; finally, the learners for the different modalities are combined. Experiments validate the effectiveness of the proposed method.

2.
A Survey of Semantic Query Optimization Techniques   (Cited: 1; self-citations: 0, others: 1)
1. Introduction. Traditional query optimizers rely on syntactic transformations, selecting the execution plan with minimum cost from the generated candidate plans. However, as database and network technology have developed, particularly in heterogeneous database environments and object-oriented databases, the structures of the objects being processed have become more complex, and traditional optimizers fall short. Semantic query optimization uses semantic rules defined over the database to transform a query into a semantically equivalent but more efficient one, compensating for the limitations of traditional query optimization. Although semantic query optimization can produce good optimization results, the following problems must first be solved effectively:

3.
陈井爽, 陈珂, 寿黎但, 江大伟, 陈刚. 《软件学报》, 2022, 33(12): 4688-4703
Learned indexes predict the storage locations of data accurately by learning the data distribution, significantly reducing the index's memory footprint while keeping queries efficient and stable. Existing learned indexes are mainly optimized for read-only queries and support inserts and updates poorly. To address this challenge, ALERT, a workload-adaptive learned index based on a radix tree, is designed. ALERT uses a radix tree to manage variable-length segments; within each segment, a linear interpolation model with a maximum error bound is used for prediction. In addition, ALERT uses an efficient insert buffer to reduce the cost of inserts and updates. Two adaptive reorganization optimizations are proposed for point and range queries, dynamically adjusting the organization of the insert buffer based on workload awareness. Experiments show that, compared with popular learned indexes, ALERT reduces build time by 81% on average and memory footprint by 75% on average, and cuts insert latency by 50% on average while retaining excellent read performance; moreover, with adaptive reorganization, ALERT effectively senses query workload characteristics, reducing query latency by 15% on average compared with the variant without it.
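The core mechanism the abstract describes, a per-segment linear interpolation model with a maximum error bound, can be sketched in a few lines. This is an illustrative sketch of the general technique, not ALERT's actual code; the class and method names are invented for illustration:

```python
import bisect

class LearnedSegment:
    """A sorted-array segment indexed by a linear model with a max error bound.
    Illustrative sketch only; structure and names are assumptions, not ALERT's."""

    def __init__(self, keys):
        self.keys = keys
        n = len(keys)
        # Fit position ~= slope * key + intercept from the segment's endpoints.
        self.slope = (n - 1) / (keys[-1] - keys[0]) if keys[-1] != keys[0] else 0.0
        self.intercept = -self.slope * keys[0]
        # Maximum error bound: worst deviation of predicted from true position.
        self.err = max(abs(self._predict(k) - i) for i, k in enumerate(keys))

    def _predict(self, key):
        return int(round(self.slope * key + self.intercept))

    def lookup(self, key):
        # Predict a position, then search only within the error window.
        pos = self._predict(key)
        lo = max(0, pos - self.err)
        hi = min(len(self.keys), pos + self.err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < len(self.keys) and self.keys[i] == key else None
```

Because the error bound is recorded at build time, a lookup touches at most `2 * err + 1` slots regardless of segment size, which is what lets learned segments replace wide B+-tree nodes cheaply.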

4.
Recently, replacing traditional indexes with learned indexes to reduce index size and improve query efficiency has attracted wide attention. The continuity of trajectory points along the road-network and time dimensions is hard to capture, and skewed data distributions are common, so existing learned indexes cannot support trajectory queries effectively. A regression model tree based on road-network time-window ordering is proposed to support point and range queries, with two phases: data ordering and model training. First, a Hilbert curve is combined with simulated annealing to find a road-segment ordering that preserves road proximity, and a two-level partitioning then yields a one-dimensional ordering of trajectory points, ensuring that spatio-temporally close points remain close after ordering. Second, a regression model tree is introduced to map trajectory points to storage locations, with two training modes: bulk loading and periodic update. Experiments on real and synthetic datasets show that, while keeping query performance comparable to traditional indexes, the approach greatly reduces index size and effectively supports read-mostly queries over historical trajectory data.
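The ordering stage above maps two-dimensional (road, time) coordinates onto a locality-preserving one-dimensional order. The paper combines a Hilbert curve with simulated annealing; as a simpler stand-in, the sketch below uses a Morton (Z-order) key, which likewise keeps nearby points close in the 1-D order. This substitution is deliberate and is not the paper's method:

```python
def z_order(x, y, bits=16):
    """Interleave the bits of (x, y) into one Morton/Z-order key.
    Shown as a simpler locality-preserving stand-in for the Hilbert curve."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # x contributes the even bit positions
        z |= ((y >> i) & 1) << (2 * i + 1)   # y contributes the odd bit positions
    return z

# Sorting points by their interleaved key clusters spatial neighbors together,
# so a 1-D learned model over the sorted order still respects 2-D proximity.
points = [(3, 1), (0, 0), (1, 1), (2, 2)]
points.sort(key=lambda p: z_order(*p))
```

The Hilbert curve preserves locality better than Z-order (no long diagonal jumps), which is why the paper prefers it, but both reduce the multidimensional problem to the 1-D setting that learned indexes require.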

5.
Existing learning-to-rank algorithms ignore the differences between queries, treating all queries and their associated documents in the training set equally when building the ranking model, which hurts the model's performance. This paper characterizes inter-query differences and takes them into account during training, proposing a supervised method for fusing multiple ranking models. The method first trains a sub-ranking model from each query and its associated documents, converts each sub-model's output into features that reflect query differences, and then fuses the sub-models using supervised learning. Further, exploiting the structure of the ranking problem, a fusion function that directly optimizes ranking performance is proposed to combine the sub-models, optimized via gradient ascent on a lower-bound function. It is proven that fusing sub-models with this function outperforms a linear combination of the sub-models. Experimental results on a large-scale real-world application show that the fusion method that directly optimizes the performance metric achieves better ranking performance than traditional learning-to-rank models.
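The idea of learning fusion weights over sub-ranker outputs by gradient ascent on a ranking objective can be illustrated with a toy sketch. The pairwise log-sigmoid surrogate and the synthetic data below are assumptions for illustration and are not the paper's exact fusion function:

```python
import math
import random

random.seed(0)
# Toy setup: K sub-rankers each score a (query, document) pair; we learn fusion
# weights w by gradient ascent on a smooth pairwise surrogate of ranking accuracy.
K, PAIRS = 3, 200
# Each training pair: (sub-ranker scores of a relevant doc, scores of an irrelevant doc).
pairs = [([random.gauss(0.5, 1) for _ in range(K)],
          [random.gauss(-0.5, 1) for _ in range(K)]) for _ in range(PAIRS)]

w = [0.0] * K
lr = 0.1
for _ in range(100):
    grad = [0.0] * K
    for pos, neg in pairs:
        margin = sum(w[k] * (pos[k] - neg[k]) for k in range(K))
        # Gradient of log sigmoid(margin) w.r.t. the margin is 1 - sigmoid(margin).
        coeff = 1.0 - 1.0 / (1.0 + math.exp(-margin))
        for k in range(K):
            grad[k] += coeff * (pos[k] - neg[k])
    w = [w[k] + lr * grad[k] / PAIRS for k in range(K)]  # ascent step

# Fraction of pairs where the fused score ranks the relevant doc first.
acc = sum(sum(w[k] * (p[k] - n[k]) for k in range(K)) > 0 for p, n in pairs) / PAIRS
```

Maximizing the log-sigmoid of the pairwise margin is a standard differentiable lower bound on pairwise ranking accuracy, which is the general shape of the "optimize a lower-bound function by gradient ascent" step the abstract describes.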

6.
An Analysis of the PostgreSQL Query Optimizer   (Cited: 1; self-citations: 0, others: 1)
As a representative open-source database, PostgreSQL is being applied ever more widely. This paper studies how the PostgreSQL query optimizer works: it introduces the optimizer's workflow, analyzes its working principles, and examines in depth the implementation details and the two optimization algorithms it employs. Drawing on minimum-spanning-tree algorithms from graph theory, an improvement strategy is proposed and its feasibility briefly argued. The study finds that the PostgreSQL query optimizer can handle arbitrarily complex queries and quickly produce reasonably good execution paths.

7.
Mainstream RDF storage systems are built on relational databases; their query engines translate SPARQL into SQL and let the database's query engine execute it. However, current database optimizers estimate the selectivity of join queries under the attribute independence assumption, which often leads to estimation errors and the choice of inefficient execution plans, so attribute correlation information is crucial for a SPARQL query optimizer to find efficient plans. To address the inefficiency of unoptimized join operations after SPARQL-to-SQL translation, a method is proposed that uses ontology information to automatically compute attribute correlations, adjusting the join selectivity estimates and the join order so as to improve the efficiency of basic-graph-pattern join queries in SPARQL.

8.
胡潇炜, 陈羽中. 《计算机科学》, 2021, 48(z1): 206-212
Query suggestion aims to discover the query intent of search-engine users and provide related query suggestions. Traditional query suggestion methods mainly rely on hand-crafted features such as query frequency, query time, user click counts, and dwell time, and use statistical learning or ranking algorithms to produce suggestions. In recent years, deep learning has been widely applied to query suggestion. Most existing deep learning methods for query suggestion are based on recurrent neural networks, modeling the semantic features of all queries in the query log to predict the user's next query. However, the suggestions they generate have weak context awareness, struggle to capture user intent accurately, and insufficiently consider the influence of time on query suggestion, lacking timeliness and diversity. To address these problems, this paper proposes VHREDT-RL, a query suggestion model combining an autoencoder with reinforcement learning (Latent Variable Hierarchical Recurrent Encoder-Decoder with Time Information of Query and Reinforcement Learning). VHREDT-RL introduces reinforcement learning to jointly train the generator and discriminator, strengthening the context awareness of the generated suggestions; a latent-variable hierarchical recurrent encoder-decoder that fuses query time information serves as the generator, giving the suggestions better timeliness and diversity. Experimental results on the AOL dataset show that VHREDT-RL outperforms baseline methods in accuracy, robustness, and stability.

9.
In the big-data era, data volumes are huge and data-management scenarios are complex, posing great challenges to traditional databases and data-management techniques. With its powerful learning, reasoning, and planning capabilities, artificial intelligence offers new development opportunities for database systems. AI-powered database systems model and learn characteristics such as data distribution, query workload, and performance behavior to automatically perform query workload forecasting, configuration-parameter tuning, data partitioning, index maintenance, query optimization, and query scheduling, continually improving database performance for specific hardware, data, and workloads. Meanwhile, some machine learning models can replace components of the database system and effectively reduce overhead, e.g., learned index structures. This paper analyzes the research progress of AI-powered data-management techniques, summarizes the problems with existing approaches and their solutions, and outlines future research directions.

10.
Building efficient index structures is one of the key techniques for improving database access performance. In big-data environments characterized by explosive growth, massive aggregation, and high-dimensional complexity, traditional index structures (e.g., the B+-tree) face high space cost, low query efficiency, and heavy access overhead when handling massive data. Learned index techniques model and learn characteristics such as the underlying data distribution and query workload, effectively improving index performance and reducing memory overhead. Starting from the base model of learned indexes, this paper analyzes the implementation principles, construction, and query process of the RMI (recursive model index) base model and summarizes its strengths and open problems. On this basis, learned index techniques are categorized by structural characteristics and systematically surveyed along two dimensions, index construction and update strategy, with a comparative analysis of the strengths and weaknesses of representative techniques. Extensions of learned index research are also summarized. Finally, future research directions for learned indexes are discussed.
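The RMI base model discussed above can be sketched as a two-stage hierarchy: a root model routes each key to a leaf model, and each leaf predicts a position within its own recorded maximum error bound. The sketch below follows that recipe with linear models throughout; the function names and the uniform root router are illustrative assumptions, not a production implementation:

```python
import bisect

def build_rmi(keys, fanout=4):
    """Two-stage RMI sketch over a sorted key array: a root model routes each
    key to one of `fanout` leaf linear models, each with its own error bound."""
    n = len(keys)
    lo_k, hi_k = keys[0], keys[-1]
    # Root "model": a linear partition of the key range into `fanout` buckets.
    root = lambda k: min(fanout - 1,
                         max(0, int(fanout * (k - lo_k) / (hi_k - lo_k + 1e-12))))
    leaves = []
    for m in range(fanout):
        idx = [i for i, k in enumerate(keys) if root(k) == m]
        if not idx:
            leaves.append((0.0, 0.0, 0))  # empty leaf: trivial model
            continue
        ks = [keys[i] for i in idx]
        slope = (idx[-1] - idx[0]) / (ks[-1] - ks[0]) if ks[-1] != ks[0] else 0.0
        icpt = idx[0] - slope * ks[0]
        err = max(abs(int(round(slope * keys[i] + icpt)) - i) for i in idx)
        leaves.append((slope, icpt, err))

    def lookup(key):
        slope, icpt, err = leaves[root(key)]
        pos = int(round(slope * key + icpt))
        lo, hi = max(0, pos - err), min(n, pos + err + 1)
        i = bisect.bisect_left(keys, key, lo, hi)
        return i if i < n and keys[i] == key else None

    return lookup
```

The full RMI of the literature trains the second stage on the *predictions* of the first and may mix model types per node, but the routing-plus-bounded-local-search structure shown here is the part the survey's analysis centers on.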

11.
Recently, learned query optimizers, typically driven by deep learning models, have attracted wide attention because they can offer performance similar to or even better than state-of-the-art commercial optimizers. A successful learned optimizer usually relies on a sufficient number of high-quality workload queries as training data, and poor-quality training data can cause the learned optimizer to fail on queries. In this paper, we propose AlphaQO, a novel training framework for robust learned query optimizers based on reinforcement learning (RL); the robustness of the optimizer is improved by finding bad queries in advance. AlphaQO is a loop system consisting of two main components: a query generator and the learned optimizer. The query generator aims to generate "difficult" queries (i.e., queries for which the learned optimizer produces poor estimates). The learned optimizer is trained on these generated queries and provides feedback (in the form of numerical rewards) to the query generator for its updates: if the generated queries are good, the query generator receives a high reward; otherwise, it receives a low reward. This process is performed iteratively, with the main goal that, within a small budget, the learned optimizer can be trained to generalize well to a wide range of unseen queries. Extensive experiments show that AlphaQO can generate a relatively small number of queries and train a learned optimizer that outperforms commercial optimizers. Moreover, quality training of the learned optimizer requires far fewer queries from AlphaQO than it would from random query generation.

12.
A Parallel Dataflow Approach to Optimizing and Processing Parallel Database Queries   (Cited: 1; self-citations: 0, others: 1)
李建中. 《软件学报》, 1998, 9(3): 174-180
This paper presents a parallel dataflow approach to optimizing and processing parallel database queries, proposes a complete set of related algorithms, and gives a full design of a parallel database query optimizer and processor based on the parallel dataflow approach. These algorithms and the corresponding query processor have been used in a parallel database management system prototype designed by the author. Practice shows that the parallel dataflow approach can not only implement a parallel database management system quickly and effectively, but also optimize parallel database queries effectively.

13.
Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent on the accuracy of the statistical model that represents the data. It is well known that small errors in the model estimates propagate exponentially through joins, and may result in the choice of a highly sub-optimal query execution plan. Most commercial query optimizers make the attribute value independence assumption: all attributes are assumed to be statistically independent. This reduces the statistical model of the data to a collection of one-dimensional synopses (typically in the form of histograms), and it permits the optimizer to estimate the selectivity of a predicate conjunction as the product of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate that estimation errors can be greatly reduced, leading to orders of magnitude more efficient query execution plans in many cases. Optimization time is kept in the range of tens of milliseconds, making this a practical approach for industrial-strength query optimizers.
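The attribute value independence assumption described above, estimating a conjunction's selectivity as the product of per-predicate selectivities, and how badly it can miss on correlated columns, can be demonstrated in a few lines. The toy table and column values are invented for illustration:

```python
import random

random.seed(7)
# Toy table with two perfectly correlated columns: city determines country.
rows = [("Paris", "FR") if random.random() < 0.5 else ("Tokyo", "JP")
        for _ in range(10_000)]

def sel(pred):
    """Selectivity of a predicate: fraction of rows satisfying it."""
    return sum(1 for r in rows if pred(r)) / len(rows)

s_city = sel(lambda r: r[0] == "Paris")      # ~0.5
s_country = sel(lambda r: r[1] == "FR")      # ~0.5
# Independence assumption: selectivity of the conjunction = product of marginals.
est = s_city * s_country                     # ~0.25
# True joint selectivity: city == Paris already implies country == FR.
true_sel = sel(lambda r: r[0] == "Paris" and r[1] == "FR")  # ~0.5
```

Here the product estimate undershoots the true selectivity by roughly a factor of two; through a chain of joins such errors compound multiplicatively, which is exactly the exponential propagation the abstract warns about and the graphical-model factorization is designed to avoid.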

14.
Both the quality and quantity of training data have a significant impact on the accuracy of ranking functions in web search. Given global search needs, a commercial search engine must extend its well-tailored service to small countries as well. Because query intents and search results are intrinsically heterogeneous across domains (i.e., across languages and regions), it is difficult for one generic ranking function to satisfy all types of queries; instead, each domain should use its own well-tailored ranking function. To train a ranking function for each domain with a scalable strategy, it is critical to leverage existing training data to enhance the ranking functions of domains that lack sufficient training data. In this paper, we present a boosting framework for learning to rank in the multi-task learning context to attack this problem. In particular, we propose to learn non-parametric common structures adaptively from multiple tasks in a stage-wise way. An algorithm is developed to iteratively discover super-features that are effective for all the tasks. The regression function for each task is then learned as a linear combination of those super-features. We evaluate the accuracy of multi-task learning methods for web search ranking using data from multiple domains of a commercial search engine. Our results demonstrate that multi-task learning brings significant relevance improvements over the existing baseline method.

15.
Neural ranking models are widely used for ranking tasks in information retrieval. They place very high demands on data quality; however, information retrieval datasets usually contain considerable noise, and the documents truly irrelevant to a query cannot be identified precisely. To train a high-performance neural ranking model, obtaining high-quality negative samples is therefore crucial. Borrowing the idea of the existing doc2query method, this paper proposes AQGM, a deep end-to-end model that learns from mismatched query-document pairs to generate adversarial queries that are irrelevant to the document yet similar to the original query, increasing query diversity and improving the quality of negative samples. A BERT-based deep ranking model is then trained on real samples together with samples generated by AQGM. Experiments show that, compared with the BERT-base baseline, the method improves MRR by 0.3% on MS MARCO and by 3.2% on TrecQA.

16.
The index selection problem (ISP) concerns choosing an appropriate set of indexes to minimize the total cost of a given workload containing read and update queries. Since the ISP has been proven NP-hard, most studies focus on heuristic algorithms that obtain approximate solutions. However, even approximate algorithms still consume a large amount of computing time and disk space, because these systems must record all query statements and frequently request cost estimates from the database optimizer for each query under each candidate index. This study proposes a novel algorithm that avoids repeated optimizer estimations. When a query is delivered to the database system, the optimizer evaluates the costs of the candidate query plans and chooses an access path for the query; the information from this evaluation stage is aggregated and recorded in limited space. The proposed algorithm can then recommend indexes from this readily available information without querying the optimizer again. The algorithm was tested on a PostgreSQL database system using TPC-H data, and experimental results show the effectiveness of the proposed approach.

17.
Cost-based query optimizers need to estimate the selectivity of conjunctive predicates when comparing alternative query execution plans. To this end, advanced optimizers use multivariate statistics to improve information about the joint distribution of attribute values in a table. The joint distribution for all columns is almost always too large to store completely, and the resulting use of partial distribution information raises the possibility that multiple, non-equivalent selectivity estimates may be available for a given predicate. Current optimizers use cumbersome ad hoc methods to ensure that selectivities are estimated in a consistent manner. These methods ignore valuable information and tend to bias the optimizer toward query plans for which the least information is available, often yielding poor results. In this paper we present a novel method for consistent selectivity estimation based on the principle of maximum entropy (ME). Our method exploits all available information and avoids the bias problem. In the absence of detailed knowledge, the ME approach reduces to standard uniformity and independence assumptions. Experiments with our prototype implementation in DB2 UDB show that use of the ME approach can improve the optimizer's cardinality estimates by orders of magnitude, resulting in better plan quality and significantly reduced query execution times. For almost all queries, these improvements are obtained while adding only tens of milliseconds to the overall time required for query optimization.

18.
The domain adaptation problem in machine learning occurs when the distribution generating the test data differs from the one that generates the training data. A common approach to this issue is to train a standard learner for the learning task on the available training sample (generated by a distribution that is different from the test distribution). One can view such learning as learning from a not-perfectly-representative training sample. The question we focus on is: under which circumstances can large sizes of such training samples guarantee that the learned classifier performs just as well as one learned from target-generated samples? In other words, are there circumstances in which quantity can compensate for quality (of the training data)? We give a positive answer, showing that this is possible when using a nearest-neighbor algorithm. We show this under some assumptions about the relationship between the training and target data distributions (covariate shift, as well as a bound on the ratio of certain probability weights between the source (training) and target (test) distributions). We further show that in a slightly different learning model, where one imposes restrictions on the nature of the learned classifier, these assumptions are not always sufficient to allow such a replacement of the training sample: for proper learning, where the output classifier has to come from a predefined class, we prove that any learner needs access to data generated from the target distribution.

19.
In recent years, more and more queries are generated automatically by query managers/builders, with end-users providing only specific parameters through GUIs. Automatically generated queries can be quite different from queries written by humans; in particular, they contain non-declarative features, the most notorious of which is the CASE expression. Current query optimizers are often ill-prepared for these new types of queries, as they do not deal well with procedural 'insertions'. In this paper, we discuss the inefficiencies of CASE expressions and present several new optimization techniques to address them. We also describe an experimental evaluation of a prototype implemented in DB2 UDB V8.2.

20.
Query optimizers are used to execute data-integration queries, but the execution plans produced by traditional query optimizers can yield poor results for several reasons: inaccurate cost estimates, insufficient memory available at runtime, and unpredictable data-transfer rates. All of these problems call for dynamic strategies to revise static query execution plans. This paper introduces a dynamic query-processing framework and the dynamic strategies it employs.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号