Middle Layer Based Scalable Learned Index Scheme
Citation: GAO Yuan-Ning, YE Jin-Biao, YANG Nian-Zu, GAO Xiao-Feng, CHEN Gui-Hai. Middle Layer Based Scalable Learned Index Scheme[J]. Journal of Software, 2020, 31(3): 620-633.
Authors: GAO Yuan-Ning  YE Jin-Biao  YANG Nian-Zu  GAO Xiao-Feng  CHEN Gui-Hai
Affiliation: Shanghai Key Laboratory of Scalable Computing and Systems, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
Funding: National Key Research and Development Program of China (2018YFB1004700); National Natural Science Foundation of China (61872238, 61972254, 61832005); Shanghai Science and Technology Innovation Action Plan (17510740200); CCF-Huawei Database Innovation Research Program (CCF-Huawei DBIR2019002A).
Abstract: In the era of big data and cloud computing, data access speed is a key metric for the performance of large-scale storage systems. Designing a lightweight and efficient index structure that meets the system's demand for high throughput and low memory footprint is therefore one of the research hotspots in the database field. Kraska et al. proposed replacing traditional B-tree indexes with machine learning models and achieved remarkable results on real-world datasets, but their model assumes a static, read-only workload and offers no good solution to the index update problem. This paper proposes Dabble, a middle-layer-based scalable learned index model, to address the model retraining problem triggered by index updates. Dabble first uses the K-Means clustering algorithm to partition the dataset into K regions and trains K neural networks to learn the data distribution of each region. During model training, data access hotspot information is integrated into the neural networks, which improves their prediction accuracy on hot data. For data insertion, Dabble borrows the delayed-update idea of the LSM tree, which greatly improves write speed. In the index update phase, a middle-layer-based mechanism is proposed to decouple the models, thereby mitigating the model update problem caused by data insertion. Dabble is evaluated on a Lognormal dataset and the real-world Weblogs dataset; the experimental results show that, compared with the state-of-the-art methods, Dabble performs very well in both query and index update.
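To make the architecture described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation. All names (DabbleIndex, RegionModel, contains, insert, _merge) are hypothetical, a simple linear fit stands in for the per-region neural network, and the access-hotspot weighting is omitted. The sketch shows the three ideas the abstract names: K-Means partitioning into regions, per-region position prediction with local search, and an LSM-style write buffer whose merge retrains only the affected region models while the middle-layer router stays untouched.

# Illustrative sketch only (assumed structure, not the paper's code).
import bisect
import numpy as np
from sklearn.cluster import KMeans


class RegionModel:
    """Maps a key to its estimated position inside one region.
    A linear fit stands in for the paper's per-region neural network."""

    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys, dtype=float))
        positions = np.arange(len(self.keys))
        # Fit position ~= a * key + b; a neural network would be trained here instead.
        self.a, self.b = np.polyfit(self.keys, positions, 1)

    def contains(self, key):
        guess = int(round(self.a * key + self.b))
        guess = min(max(guess, 0), len(self.keys) - 1)
        err = 32
        # Search a window around the prediction, widening it until the slot is bracketed.
        while True:
            lo, hi = max(0, guess - err), min(len(self.keys), guess + err)
            i = bisect.bisect_left(self.keys, key, lo, hi)
            if (i > lo or lo == 0) and (i < hi or hi == len(self.keys)):
                return i < len(self.keys) and self.keys[i] == key
            err *= 2


class DabbleIndex:
    """Middle layer: routes each key to the region model that owns it."""

    def __init__(self, keys, k=4, buffer_limit=1024):
        keys = np.asarray(keys, dtype=float)
        self.kmeans = KMeans(n_clusters=k, n_init=10).fit(keys.reshape(-1, 1))
        labels = self.kmeans.predict(keys.reshape(-1, 1))
        self.models = [RegionModel(keys[labels == r]) for r in range(k)]
        self.buffer = []                    # delayed inserts, merged later (LSM-style)
        self.buffer_limit = buffer_limit

    def search(self, key):
        if key in self.buffer:              # check the write buffer first
            return True
        region = int(self.kmeans.predict(np.array([[key]]))[0])
        return self.models[region].contains(key)

    def insert(self, key):
        self.buffer.append(float(key))
        if len(self.buffer) >= self.buffer_limit:
            self._merge()

    def _merge(self):
        # Only the affected region models are rebuilt; the middle layer
        # (the K-Means router) is left untouched -- the decoupling idea.
        for key in self.buffer:
            region = int(self.kmeans.predict(np.array([[key]]))[0])
            old = self.models[region]
            self.models[region] = RegionModel(np.append(old.keys, key))
        self.buffer = []


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)
    idx = DabbleIndex(data, k=4)
    print(idx.search(float(data[123])))   # expected: True
    idx.insert(3.14159)
    print(idx.search(3.14159))            # expected: True (served from the write buffer)

In the actual Dabble design the per-region model is a neural network trained with access-hotspot information and merges are batched; the sketch only illustrates how the middle layer isolates per-region retraining from the rest of the index.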

Keywords: learned index  clustering  neural network  dynamic update
Received: 2019-07-20
Revised: 2019-11-25
