Similar Articles
19 similar articles found (search time: 156 ms)
1.
李鸣鹏  高宏  邹兆年 《软件学报》2016,27(9):2265-2277
This paper studies maximum Steiner-connected k-core query processing based on graph compression. It proposes a graph compression algorithm, SC, that supports maximum Steiner-connected k-core queries, and proves that queries over SC-compressed graphs are answered correctly. Since such a query only needs to find a qualifying connected region, a further compression algorithm, TC, is proposed that compresses the compressed graph into a tree; the correctness of queries over the compressed tree is proved, and a linear-time query processing algorithm that needs no decompression is presented. Experiments on real and synthetic data show that the compression algorithms remove 88% of the original graph on average, and do even better on dense original graphs, removing up to 90%; compared with querying the original graph directly, query processing on the compressed graph is one to two orders of magnitude faster on average.
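As a back-of-the-envelope illustration of the primitive underlying such queries, the k-core itself can be computed by iterative peeling. This is a generic sketch on an adjacency-dict graph, not the paper's SC/TC compression algorithms:

```python
from collections import deque

def k_core(adj, k):
    """Vertex set of the k-core: the maximal subgraph in which every
    vertex has degree >= k, found by iteratively peeling low-degree nodes."""
    deg = {v: len(ns) for v, ns in adj.items()}
    queue = deque(v for v, d in deg.items() if d < k)
    removed = set(queue)
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] < k:
                    removed.add(u)
                    queue.append(u)
    return set(adj) - removed

# Example: a triangle plus a pendant vertex; the 2-core drops the pendant.
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(k_core(g, 2))  # {1, 2, 3}
```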

2.
周宇  赵威  刘国华  貟慧  翟红敏  万小妹 《软件学报》2014,25(S2):136-146
High redundancy among query results is a pressing problem in top-k query processing. Existing solutions must traverse every object in the initial result set, so query processing is inefficient. To improve efficiency, the initial result set is mapped into Euclidean space, and following a pull-based strategy, either a score-based or a distance-based method can be used to select the diversity-optimal subspace. In the distance-based method, the Euclidean subspace is partitioned, and the geometric properties of probe locations and Voronoi diagrams are exploited to reduce the number of objects examined in the second-phase query. On this basis, a bounded diversification algorithm for top-k query results is proposed and its correctness proved. Experimental results show that the proposed algorithm improves the efficiency of top-k query processing.

3.
A continuous k-nearest-neighbor (CKNN) query over moving objects is defined as follows: given a set of continuously moving objects and a kNN query q, compute the k nearest neighbors of q in real time and keep the result up to date throughout the query's lifetime. Many location-based services in transportation, social networking, e-commerce, and similar domains rest on this fundamental problem. Most existing work determines a query range containing the k nearest neighbors through multiple iterations, where each iteration counts the moving objects inside the current range based on their positions; this iterative process accounts for a large share of the total query cost. This paper proposes a two-level index structure combining a grid index with a Gaussian-mixture model of object density (grid GMM index, GGI), and designs an incremental CKNN algorithm (incremental search for continuous k nearest neighbors, IS-CKNN). The lower level of GGI maintains the massive set of moving objects in a grid index, while the upper level builds a Gaussian mixture model of the objects' distribution in 2D space. For a given kNN query q, IS-CKNN uses the mixture model to directly determine a region containing the k nearest neighbors of q, eliminating the multiple iterations of existing algorithms; when the moving objects or the query location change, an efficient incremental strategy reuses previous results as far as possible to reduce the cost of the current query. Extensive experiments on a DiDi Chengdu ride-hailing dataset and two synthetic datasets validate the performance of the algorithms.
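The key step is using the fitted density to pick a query region in one shot. A rough sketch of that idea, assuming a Gaussian mixture fitted with scikit-learn; the radius formula is my own simplification (the expected count in a disc of radius r around q is roughly N·f(q)·πr²), not the paper's exact construction:

```python
import math
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_knn_radius(positions, q, k, n_components=8):
    """Radius around q expected to contain ~k objects, from a GMM
    fitted to the 2-D object positions."""
    gmm = GaussianMixture(n_components=n_components).fit(positions)
    # score_samples returns log-density; the density integrates to 1.
    f_q = math.exp(gmm.score_samples(np.array([q]))[0])
    n = len(positions)
    # Solve n * f_q * pi * r^2 ~= k for r.
    return math.sqrt(k / (n * f_q * math.pi))

rng = np.random.default_rng(0)
pts = rng.normal(size=(10_000, 2))
print(estimate_knn_radius(pts, q=(0.0, 0.0), k=50))
```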

4.
雷小锋  陈皎  毛善君  谢昆青 《软件学报》2018,29(12):3764-3785
This paper establishes a general framework for batch edge-deletion clustering on neighborhood graphs and proposes a batch edge-deletion criterion based on a Gaussian smoothing model. It defines the general properties a neighborhood graph should have to suit clustering, and proposes and proves that a randomized kNN graph, built by introducing a random factor on top of the kNN graph, strengthens local connectivity between vertices so that the clustering result no longer depends critically on keeping or deleting any particular edge or edges. The resulting RkNNClus algorithm is simple and efficient, depends on few parameters, and does not require the number of clusters to be specified; experiments on both simulated and real data confirm this.
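A minimal construction sketch of a kNN graph augmented with random extra edges, under one plausible reading of the randomized kNN graph; the paper's exact randomization scheme may differ:

```python
import numpy as np

def random_knn_graph(X, k, r, seed=0):
    """Adjacency sets of a kNN graph plus r random extra neighbors per
    point, to strengthen local connectivity. Assumes distinct points,
    so each point's nearest entry in the distance row is itself."""
    rng = np.random.default_rng(seed)
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        knn = np.argsort(d[i])[1:k + 1]              # skip self at index 0
        extra = rng.choice(n, size=r, replace=False)  # the random factor
        for j in list(knn) + [e for e in extra if e != i]:
            adj[i].add(int(j))
            adj[int(j)].add(i)                        # undirected
    return adj
```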

5.
Real-world networks typically exhibit community structure, and community search is a fundamental task in graph mining. Existing work has proposed various models to identify communities in networks, such as k-core-based and k-truss-based models. However, these models usually only constrain the number of neighbors of each node or edge within the community and ignore how those neighbors relate to each other, i.e., the node's neighborhood structure, which leads to low local density inside the community. To address this, this paper incorporates neighborhood-structure information into the k-core dense subgraph, proposes a new community model based on neighborhood-connected k-cores, and defines the density of a community. Based on the new model, it studies the densest single-community search problem: return the community with the highest density that contains the set of query nodes. In real-world graphs, a set of query nodes may be spread across several disjoint communities, so the paper further studies density-threshold-based multi-community search: return multiple communities containing the query nodes, each with density no lower than a user-specified threshold. For both problems, the paper first defines the notion of edge density and gives baseline algorithms based on it. To improve search efficiency, it designs an index tree and an improved index tree that can return query results in polynomial time. Comparisons with the baseline algorithms on multiple datasets verify the effectiveness of the neighborhood-connected k-core community model and the efficiency of the proposed query algorithms.

6.
DBSCAN cannot effectively cluster datasets whose density varies. VDBSCAN uses the k-dist plot to automatically obtain a neighborhood radius for each density level, solving the clustering problem for datasets with multiple density levels. The k-VDBSCAN algorithm obtains the value of k automatically, reducing the influence of the parameter k on the final clustering result in VDBSCAN. Building on k-VDBSCAN, this paper exploits the characteristics of inter-object distances in the dataset itself and proposes an improved algorithm for automatically determining k. Theoretical analysis and experimental results show that the improved algorithm obtains the parameter k effectively and clearly improves both clustering quality and running time.
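The k-dist curve such algorithms inspect is simply the sorted list of each point's distance to its k-th nearest neighbor: flat stretches correspond to density levels, and jumps suggest per-level radius (Eps) boundaries. A minimal sketch for Euclidean data:

```python
import numpy as np

def k_dist(X, k):
    """Sorted distances from each point to its k-th nearest neighbor
    (the curve VDBSCAN inspects to pick one Eps per density level)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)            # column 0 is the self-distance 0
    return np.sort(d[:, k])   # k-th neighbor = column k after self

# Two density levels: large jumps in the curve hint at the boundary.
X = np.vstack([np.random.randn(100, 2), 5 + 0.1 * np.random.randn(100, 2)])
kd = k_dist(X, k=4)
print(kd[:3], kd[-3:])
```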

7.
李晨  申德荣  朱命冬  寇月  聂铁铮  于戈 《软件学报》2016,27(9):2278-2289
Vast amounts of geo-tagged and time-stamped information are produced on the Internet every day, such as microblogs, news, and group-buying deals, so finding the items that satisfy a user's query in both time and location is important. This paper proposes a k-nearest-neighbor query over geographic location and time (the ST-kNN query) and a method for processing it. First, using a spatio-temporal similarity measure, each object's location and time attributes are mapped into a new three-dimensional space, in which the distance between two points approximates the actual spatio-temporal similarity between the corresponding objects. Then an ST-Rtree (spatial temporal rtree) index is designed for this space; it integrates the spatial and temporal factors and guarantees that each object is visited at most once during a query. Finally, an exact kNN algorithm over this index determines the result range in a single computation and thereby finds the top k results, ensuring efficient queries. Experiments on large datasets demonstrate the efficiency of the method.
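A sketch of the mapping idea: fold time into a third coordinate so that one Euclidean distance blends spatial and temporal closeness. The scale factor alpha that trades time against space is an assumed parameter here; the paper's actual transform, driven by its spatio-temporal similarity measure, is not reproduced:

```python
import numpy as np

def to_st_space(lon, lat, t, alpha):
    """Map (location, time) to a 3-D point so that Euclidean distance
    blends spatial and temporal closeness; alpha scales time vs. space."""
    return np.array([lon, lat, alpha * t])

a = to_st_space(116.40, 39.90, t=1000.0, alpha=0.01)
b = to_st_space(116.41, 39.91, t=1300.0, alpha=0.01)
print(np.linalg.norm(a - b))  # one distance covers both dimensions
```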

8.
尚敬文  王朝坤  辛欣  应翔 《软件学报》2017,28(3):648-662
Community structure is an important feature of complex networks, and community detection has significant application value for the study of network structure. Classical clustering algorithms such as k-means are a basic approach to community detection; however, when applied to a network's high-dimensional matrix, the communities they find are often inaccurate. This paper proposes CoDDA, a community detection algorithm based on deep sparse autoencoders, to improve the accuracy of these classical methods on high-dimensional adjacency matrices. First, a hop-based scheme refines the sparse adjacency matrix into a similarity matrix that reflects similarity not only between adjacent nodes in the topology but also between non-adjacent ones. Next, an unsupervised deep sparse autoencoder extracts features from the similarity matrix, yielding a low-dimensional feature matrix with a stronger ability to represent the network topology than the adjacency matrix. Finally, k-means clusters the low-dimensional feature matrix into communities. Experiments show that CoDDA finds more accurate community structure than six representative community detection algorithms, and parameter experiments show that the communities it finds are more accurate than those found by plain k-means applied directly to the high-dimensional adjacency matrix.
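One plausible minimal version of the hop-based preprocessing: mix 1..h-step reachability with decaying weights so that non-adjacent but nearby nodes also receive nonzero similarity. The exact construction in CoDDA may differ:

```python
import numpy as np

def hop_similarity(A, hops=2, decay=0.5):
    """Similarity matrix mixing 1..hops-step reachability with
    geometrically decaying weights; densifies a sparse adjacency A."""
    S = np.zeros_like(A, dtype=float)
    P = np.eye(len(A))
    for h in range(1, hops + 1):
        P = P @ A                          # paths of length h
        S += (decay ** (h - 1)) * (P > 0)  # reachable within h hops
    np.fill_diagonal(S, 0.0)
    return S / S.max()
```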

9.
贾洪杰  丁世飞  史忠植 《软件学报》2015,26(11):2836-2846
Spectral clustering transforms the clustering problem into a graph partitioning problem and is a clustering method grounded in algebraic graph theory. To optimize the graph-partitioning objective, one typically uses properties of the Rayleigh quotient: the eigenvectors of the Laplacian matrix map the original data points into a low-dimensional feature space, where they are then clustered. In spectral clustering, however, storing the similarity matrix takes O(n²) space and the eigendecomposition of the Laplacian generally takes O(n³) time, which is unacceptable for large-scale data. It has been proved that Normalized Cut graph clustering and weighted kernel k-means are both equivalent to a trace-maximization problem, so the weighted kernel k-means algorithm can be used to optimize the Normalized Cut objective, avoiding the eigendecomposition of the Laplacian. Weighted kernel k-means still needs to compute the kernel matrix, though, whose space complexity remains O(n²). To meet this challenge, this paper proposes approximate weighted kernel k-means, which uses only part of the kernel matrix to solve large-scale spectral clustering problems. Theoretical analysis and experimental comparisons show that approximate weighted kernel k-means clusters comparably to weighted kernel k-means while greatly reducing time and space complexity.
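The practical payoff of the trace-maximization equivalence is that cluster assignments can be updated from the kernel matrix alone, with no eigendecomposition. A minimal weighted kernel k-means sketch using the full kernel matrix (the paper's approximate variant samples only part of K):

```python
import numpy as np

def weighted_kernel_kmeans(K, w, labels, n_iter=20):
    """Optimize the weighted kernel k-means objective (equivalently, by
    the cited result, Normalized Cut) without eigendecomposition.
    K: n x n kernel matrix; w: n vertex weights; labels: initial clusters."""
    clusters = np.unique(labels)
    for _ in range(n_iter):
        D = np.empty((len(K), len(clusters)))
        for j, c in enumerate(clusters):
            m = labels == c
            if not m.any():          # cluster emptied: exclude it
                D[:, j] = np.inf
                continue
            s = w[m].sum()
            # ||phi(x_i) - mean_c||^2, dropping the constant K_ii term:
            D[:, j] = (-2 * (K[:, m] @ w[m]) / s
                       + (w[m] @ K[np.ix_(m, m)] @ w[m]) / s ** 2)
        labels = clusters[D.argmin(axis=1)]
    return labels

X = np.vstack([np.random.randn(50, 2), 4 + np.random.randn(50, 2)])
K = X @ X.T                          # linear kernel for the demo
init = np.random.randint(0, 2, len(X))
print(weighted_kernel_kmeans(K, np.ones(len(X)), init)[:10])
```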

10.
蒋涛  张彬  余法红  柳晴  周傲英 《软件学报》2015,26(9):2297-2310
Unlike traditional k-Skyband queries, this paper proposes the mutual k-Skyband query (MkSB), which performs the Skyline query symmetrically: it finds every object that lies both in the dynamic k-Skyband (DkSB) of a query q and in the reverse k-Skyband (RkSB) of q. Further, to better support decision making and data analysis, a ranking operation is incorporated into MkSB. Because MkSB must execute both the DkSB and the RkSB of q, it traverses the index multiple times, incurring substantial redundant I/O. Using information-reuse techniques and several effective pruning methods, MkSB merges the multiple index searches into a single one, greatly reducing the number of I/O accesses. The paper also proves that the window-query-based algorithm WMkSB has the lowest I/O cost. Experiments on real and synthetic datasets show that the proposed algorithms are effective and clearly outperform BBS-based algorithms; WMkSB in particular has very low I/O cost, typically eliminating more than 95% of the redundant I/O.
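For reference, an object belongs to the k-Skyband iff it is dominated by fewer than k other objects. A quadratic baseline sketch of that membership test, assuming smaller values are better; MkSB's contribution is precisely the pruning that avoids such exhaustive scans:

```python
def dominates(a, b):
    """a dominates b: no worse in every dimension, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def k_skyband(points, k):
    """Objects dominated by fewer than k others (k=1 gives the skyline)."""
    return [p for p in points
            if sum(dominates(q, p) for q in points) < k]

pts = [(1, 4), (2, 2), (3, 3), (4, 1), (5, 5)]
print(k_skyband(pts, k=2))  # (5, 5) is dominated by too many points
```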

11.
The ever-increasing size of graphs makes them difficult to query and store. In this paper, we present Shrink, a compression method that reduces the size of a graph while preserving the distances between nodes. The compression is based on iterative merging of nodes. During each merge, a system of linear equations is solved to define new edge weights such that the new weights have the least effect on distances. Node merging continues until the desired size for the compressed graph is reached. The compressed graph, also known as the coarse graph, can be queried without decompression. As the complexity of distance-based queries such as shortest-path queries depends strongly on the size of the graph, Shrink improves performance in terms of both time and storage. Shrink not only provides the length of the shortest path but also identifies the nodes on the path. The approach has been applied to both weighted and unweighted graphs, including road networks, friendship networks, collaboration networks, web graphs and social networks. In the experiments, a road network with more than 2.5 million nodes is reduced to a fifth of its original size while the average relative error stays below 1%.
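The simplest instance of Shrink's distance-preserving merge is contracting a degree-2 node, where the least-distortion solution is exact: replace the path u-v-x by one edge of weight w(u,v)+w(v,x), keeping any existing shorter u-x edge. A sketch of that special case on a weighted adjacency dict (higher-degree merges need the paper's least-squares reweighting):

```python
def contract_degree2(adj, v):
    """Remove a degree-2 node v from a weighted graph, preserving all
    pairwise distances among the remaining nodes exactly."""
    (u, wu), (x, wx) = adj[v].items()
    for a in (u, x):
        adj[a].pop(v)
    through = wu + wx
    if x in adj[u]:                       # keep a shorter direct edge
        through = min(through, adj[u][x])
    adj[u][x] = adj[x][u] = through
    del adj[v]

g = {'a': {'b': 1.0}, 'b': {'a': 1.0, 'c': 2.0}, 'c': {'b': 2.0}}
contract_degree2(g, 'b')
print(g)  # {'a': {'c': 3.0}, 'c': {'a': 3.0}}
```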

12.
Finding articulation points (cut vertices) is an important operation in graph applications. The DFS-tree algorithm solves the problem, but its drawbacks keep it from working well in practice, owing to two characteristics of today's data. First, data are massive, which poses challenging requirements for many graph operations; second, data change constantly, and large daily updates force traditional algorithms to recompute from scratch after each update, wasting time and space. The DFS-tree algorithm runs in O(|V|+|E|) time, where |V| and |E| are the numbers of vertices and edges of the graph; it copes well with the first characteristic but is powerless against the second. This paper proposes a compression-based articulation-point algorithm to address the problem. It compresses the graph via naive similarity of vertices in O(|E|) time, finds articulation points on the resulting lossless compressed graph, and dynamically maintains vertex and edge updates on the compressed graph, applying updates without decompressing it and then recomputing articulation points on the updated graph, which greatly reduces time and space consumption. The compressed graph produced by this compression algorithm is equally usable for other graph operations.
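The DFS-tree algorithm referred to is the classical low-link method: a non-root vertex v is an articulation point iff some DFS child u has low[u] >= disc[v], and the root is one iff it has more than one DFS child. A standard sketch (recursive for brevity; very deep graphs need an iterative version):

```python
def articulation_points(adj):
    """Cut vertices via DFS low-links in O(|V|+|E|) — the baseline the
    compression-based algorithm improves on for dynamic graphs."""
    disc, low, aps, time = {}, {}, set(), [0]

    def dfs(v, parent):
        disc[v] = low[v] = time[0]
        time[0] += 1
        children = 0
        for u in adj[v]:
            if u not in disc:
                children += 1
                dfs(u, v)
                low[v] = min(low[v], low[u])
                if parent is not None and low[u] >= disc[v]:
                    aps.add(v)
            elif u != parent:
                low[v] = min(low[v], disc[u])
        if parent is None and children > 1:
            aps.add(v)

    for v in adj:
        if v not in disc:
            dfs(v, None)
    return aps

g = {1: [2], 2: [1, 3], 3: [2]}  # path 1-2-3: vertex 2 is a cut vertex
print(articulation_points(g))     # {2}
```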

13.
Querying graph data is a fundamental problem that is attracting increasing interest, especially for massive graph databases, which are a promising alternative to relational databases for big-data modeling. In this paper, we study the subgraph isomorphism search problem, which consists of enumerating the embeddings of a query graph in a data graph. The best-known solutions to this NP-complete problem are backtracking-based and incur a high computational cost on massive graph databases. We address this problem and its challenges via graph compression with modular decomposition. In our approach, subgraph isomorphism search is performed on compressed graphs without decompressing them, yielding a substantial reduction of the search space and consequently significant savings in processing time as well as in storage space for the graphs. We evaluated our algorithms on nine real-world datasets. The experimental results show that our approach is efficient and scalable.

14.
Array partitioning is an important research problem in array management, since the partitioning strategy strongly influences storage, query evaluation, and other components of array management systems. Meanwhile, compression is badly needed for array data because of its growing volume. Observing that array partitioning can affect compression performance significantly, this paper designs efficient partitioning methods for array data that optimize compression performance. To the best of our knowledge, this problem has not been studied before. We first formulate the problem of array partitioning for optimizing the compression performance (PPCP for short), adopting a popular compression technique that allows queries to be processed on the compressed data without decompression. Because the problem is NP-hard, two essential principles for exploring partitioning solutions are then introduced, which capture the core idea of our partitioning algorithms. The first principle shows that compression performance can be improved if an array is partitioned into two parts with different sparsities. The second principle introduces a greedy strategy that heuristically guides the selection of partitioning positions. Supported by the two principles, two greedy partitioning algorithms are designed, for the independent case and the dependent case respectively. Since the algorithm for the dependent case is expensive, a further optimization based on random sampling and dimension grouping is proposed to achieve linear time cost. Finally, experiments on both synthetic and real-life data show that the two proposed partitioning algorithms achieve better performance in both compression and query evaluation.
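A toy 1-D illustration of the two principles, under an assumed cost model where only the sparsity gap between the two parts matters; the paper's algorithms operate on multi-dimensional arrays and also cover the dependent case:

```python
import numpy as np

def best_split(a):
    """Greedy choice of one split position for a 1-D array: pick the cut
    maximizing the sparsity gap between the parts (principle 1), scanning
    candidate positions left to right (principle 2)."""
    nz = np.cumsum(a != 0)
    n, total = len(a), nz[-1]
    best_pos, best_gap = None, 0.0
    for p in range(1, n):
        left = nz[p - 1] / p                   # non-zero density of a[:p]
        right = (total - nz[p - 1]) / (n - p)  # non-zero density of a[p:]
        if abs(left - right) > best_gap:
            best_pos, best_gap = p, abs(left - right)
    return best_pos

a = np.array([0, 0, 0, 0, 7, 8, 9, 6])
print(best_split(a))  # 4: all-zero prefix vs. dense suffix
```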

15.
Semi-supervised graph clustering: a kernel approach
Semi-supervised clustering algorithms aim to improve clustering results using limited supervision. The supervision is generally given as pairwise constraints; such constraints are natural for graphs, yet most semi-supervised clustering algorithms are designed for data represented as vectors. In this paper, we unify vector-based and graph-based approaches. We first show that a recently-proposed objective function for semi-supervised clustering based on Hidden Markov Random Fields, with squared Euclidean distance and a certain class of constraint penalty functions, can be expressed as a special case of the weighted kernel k-means objective (Dhillon et al., in Proceedings of the 10th International Conference on Knowledge Discovery and Data Mining, 2004a). A recent theoretical connection between weighted kernel k-means and several graph clustering objectives enables us to perform semi-supervised clustering of data given either as vectors or as a graph. For graph data, this result leads to algorithms for optimizing several new semi-supervised graph clustering objectives. For vector data, the kernel approach also enables us to find clusters with non-linear boundaries in the input data space. Furthermore, we show that recent work on spectral learning (Kamvar et al., in Proceedings of the 17th International Joint Conference on Artificial Intelligence, 2003) may be viewed as a special case of our formulation. We empirically show that our algorithm is able to outperform current state-of-the-art semi-supervised algorithms on both vector-based and graph-based data sets.
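For reference, the weighted kernel k-means objective the paper reduces to is, in the notation of Dhillon et al. (2004), minimized over clusterings {π_c} of points a_i with weights w_i and feature map φ:

```latex
% Weighted kernel k-means objective (Dhillon et al., 2004):
D(\{\pi_c\}_{c=1}^{k})
  = \sum_{c=1}^{k} \sum_{a_i \in \pi_c} w_i
    \left\lVert \varphi(a_i) - m_c \right\rVert^{2},
\qquad
m_c = \frac{\sum_{a_j \in \pi_c} w_j \, \varphi(a_j)}
           {\sum_{a_j \in \pi_c} w_j}
```

Per the abstract, the HMRF objective with squared Euclidean distance and the stated class of constraint penalties matches this form for a suitable choice of kernel matrix and weights, which is what lets a single algorithm serve both vector and graph inputs.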

16.
Heterogeneous information networks (HINs) are directed graphs containing multiple types of objects (vertices) and links (edges), and can express rich, complex semantic and structural information. The dense-subgraph query problem in HINs, i.e., given a query vertex q, find a dense subgraph of the HIN containing q, has become a focal research problem in the field, with wide applications in event planning, biological analysis, and product recommendation. Existing methods suffer from two main problems: (1) dense subgraphs queried via motif cliques or relational constraints contain multiple vertex types, so these methods cannot handle scenarios that focus on one particular vertex type only; (2) meta-path-based methods can query dense subgraphs of a particular vertex type, but they ignore the meta-path-based connectivity between vertices within the subgraph. To address this, the paper first proposes a connectivity based on edge-disjoint meta-path instances in HINs, called path connectivity; then, based on path connectivity, it proposes the k-path connected component (k-PCC) model, which requires the path connectivity of the subgraph to be at least k; next, building on the k-PCC model, it defines the maximum path-connected Steiner component (SMPCC), the k-PCC containing q with maximum path connectivity; and finally it presents an efficient graph-decomposition-based k-PCC discovery algorithm and, on top of it, an optimized SMPCC query algorithm. Extensive experiments on real and synthetic HIN data verify the effectiveness and efficiency of the proposed models and algorithms.

17.
A survey of knowledge graph partitioning algorithms
Knowledge graphs are an important cornerstone of artificial intelligence and attract wide attention for their rich graph structure and attribute information. A knowledge graph can precisely and semantically describe real-world entities and their relationships, with vertices representing entities and edges representing the relationships between them. Partitioning is the first step in distributed processing of large-scale knowledge graphs and provides the foundation for their distributed storage, querying, reasoning, and mining. As knowledge graph data volumes and the demand for distributed processing keep growing, how to partition knowledge graphs has become a hot research topic. Starting from the definitions of knowledge graphs and graph partitioning, this survey systematically reviews current partitioning algorithms for knowledge graph data, covering basic, multi-level, streaming, distributed, and other types of graph partitioning. First, it introduces four basic partitioning algorithms, spectral partitioning, geometric partitioning, branch-and-bound, and KL with its derivatives, which are usually applied to small graphs or used as components of other partitioning algorithms. It then introduces multi-level partitioning algorithms, which coarsen the graph, partition it, and project the result back onto the original graph; by their coarsening procedure these divide into matching-based and aggregation-based algorithms. Next, it describes three streaming partitioning algorithms, which load vertices or edges as a sequence and partition them on the fly: hashing, greedy assignment, and Fennel, together with their derivatives. It then introduces distributed partitioning algorithms represented by KaPPa, JA-BE-JA, and lightweight repartitioning, and their derivatives. Among other types, it covers two partitioning algorithms that have emerged in recent years: label propagation and query-workload-based algorithms. Through extensive experiments on synthetic and real knowledge graph datasets, the survey compares representative algorithms from the five classes in partitioning quality, query processing, and graph mining, analyzes the results and extends them to reasoning workloads, and draws experimentally grounded conclusions on the performance of knowledge graph partitioning algorithms. Finally, based on the analysis and comparison of existing methods, it summarizes the main challenges currently facing knowledge graph partitioning, raises corresponding research questions, and outlines future research directions.
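As a concrete taste of the streaming family, a minimal greedy partitioner: each arriving vertex goes to the non-full partition holding most of its already-placed neighbors (Fennel refines this rule with a degree-based load penalty). The capacity cap and tie-breaking below are assumptions of this sketch:

```python
def greedy_stream_partition(stream, k, capacity):
    """Assign vertices one at a time to the partition containing most of
    their already-placed neighbors, preferring lighter partitions on ties.
    stream yields (vertex, neighbor_list); assumes k * capacity >= n."""
    part, load = {}, [0] * k
    for v, nbrs in stream:
        score = [0] * k
        for u in nbrs:
            if u in part:
                score[part[u]] += 1
        cands = [p for p in range(k) if load[p] < capacity]
        best = max(cands, key=lambda p: (score[p], -load[p]))
        part[v] = best
        load[best] += 1
    return part

edges = [(1, [2, 3]), (2, [1]), (3, [1]), (4, [5]), (5, [4])]
print(greedy_stream_partition(edges, k=2, capacity=3))
# {1: 0, 2: 0, 3: 0, 4: 1, 5: 1}: the two components end up separated
```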

18.
Coloring a k-colorable graph using k colors (k≥3) is a notoriously hard problem. Considering average-case analysis allows for better results. In this work we consider the uniform distribution over k-colorable graphs with n vertices and exactly cn edges, c greater than some sufficiently large constant. We rigorously show that all proper k-colorings of most such graphs lie in a single "cluster", and agree on all but a small, though constant, portion of the vertices. We also describe a polynomial-time algorithm that whp (with high probability) finds a proper k-coloring of such a random k-colorable graph, thus asserting that most such graphs are easy to color. This should be contrasted with the setting of very sparse random graphs (which are k-colorable whp), where experimental results show some regime of edge density to be difficult for many coloring heuristics.

19.
Bit-vectors are widely used for indexing and summarizing data due to their efficient processing in modern computers. Sparse bit-vectors can be further compressed to reduce their space requirement. Special compression schemes based on run-length encoders have been designed to avoid explicit decompression and minimize the decoding overhead during query execution. Moreover, highly compressed bit-vectors can exhibit faster query times than non-compressed ones. However, for hard-to-compress bit-vectors, compression does not speed up queries and can add considerable overhead; in these cases, bit-vectors are often stored verbatim (non-compressed). On the other hand, queries are answered by executing a cascade of bitwise operations involving indexed bit-vectors and intermediate results. Often, even when the original bit-vectors are hard to compress, the intermediate results become sparse, so it can be worthwhile to compress these bit-vectors as the query is executed. In that scenario, verbatim and compressed bit-vectors must be operated on together. In this paper, we propose a hybrid framework where compressed and verbatim bitmaps can coexist, and design algorithms to execute queries under this hybrid model. Our query optimizer is able to decide at run time when to compress or decompress a bit-vector. Our experiments show that applications using higher-density bitmaps can benefit from this hybrid model, improving both their query time and their memory utilization.
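The core hybrid operation is a bitwise AND between a verbatim bit-vector and a run-length-compressed one, without materializing the decompressed form. A toy sketch with (word_count, bit) runs standing in for a real scheme such as WAH or EWAH:

```python
def and_verbatim_rle(words, runs):
    """AND a verbatim bit-vector (list of machine words) with a
    run-length one (list of (n_words, bit) runs) without expanding
    the runs into an explicit word list."""
    out, i = [], 0
    for n, bit in runs:
        if bit:                          # run of all-ones: copy verbatim words
            out.extend(words[i:i + n])
        else:                            # run of all-zeros: result is zeros
            out.extend([0] * n)
        i += n
    return out

v = [0xFF, 0x0F, 0xAA, 0x00]
r = [(1, 0), (2, 1), (1, 0)]             # 1 zero word, 2 one words, 1 zero word
print([hex(w) for w in and_verbatim_rle(v, r)])  # ['0x0', '0xf', '0xaa', '0x0']
```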
