Similar documents
 10 similar documents found (search time: 187 ms)
1.
In recent years, many information networks have become available for analysis, including social networks, road networks, sensor networks, biological networks, etc. Graph clustering has shown its effectiveness in analyzing and visualizing large networks. The goal of graph clustering is to partition the vertices of a large graph into clusters based on criteria such as vertex connectivity or neighborhood similarity. Many existing graph clustering methods focus mainly on topological structure but largely ignore vertex properties, which are often heterogeneous. Recently, a graph clustering algorithm, SA-Cluster, has been proposed which combines structural and attribute similarities through a unified distance measure. SA-Cluster performs matrix multiplication to calculate the random walk distances between graph vertices. As part of the clustering refinement, the graph edge weights are iteratively adjusted to balance the relative importance between structural and attribute similarities. As a consequence, matrix multiplication is repeated in each iteration of the clustering process to recalculate the random walk distances affected by the edge weight update. To improve the efficiency and scalability of SA-Cluster, in this paper we propose an efficient algorithm, Inc-Cluster, which incrementally updates the random walk distances given the edge weight increments. Complexity analysis is provided to estimate how much runtime cost Inc-Cluster can save. We further design parallel matrix computation techniques on a multicore architecture. Experimental results demonstrate that Inc-Cluster achieves significant speedup over SA-Cluster on large graphs, while achieving exactly the same clustering quality in terms of intra-cluster structural cohesiveness and attribute value homogeneity.
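The truncated random-walk distance that SA-Cluster recomputes with repeated matrix multiplication can be sketched as follows; the restart probability `c` and walk length `L` are illustrative assumptions, and Inc-Cluster's incremental update is not reproduced here:

```python
import numpy as np

def random_walk_distance(P, c=0.15, L=3):
    """Truncated random-walk distance R = sum_{l=1..L} c * (1-c)^l * P^l,
    where P is a row-stochastic transition matrix. One matrix
    multiplication per walk step, which is the cost Inc-Cluster amortises
    when edge weights change between iterations."""
    R = np.zeros_like(P)
    P_l = np.eye(P.shape[0])
    for l in range(1, L + 1):
        P_l = P_l @ P                  # P^l
        R += c * (1 - c) ** l * P_l
    return R
```

When the edge weights are updated, only `P` changes, so the incremental idea is to reuse the previously computed powers of `P` rather than recomputing the whole sum from scratch.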

2.
In this paper, a bottom-up salient object detection method is proposed by modeling the image as a random graph. The proposed method starts by partitioning the input image into superpixels and extracting color and spatial features for each superpixel. Then, a complete graph is constructed using the superpixels as nodes, and a high edge weight is assigned to a pair of superpixels if they are highly similar. Next, a random walk prior on the nodes is assumed to generate the probability distribution on the edges, and a complete directed graph is created in which each edge weight represents the probability of the random walker moving from the current node to the next. By choosing a threshold and eliminating the edges whose probability exceeds it, a random graph is created to model the input image. The inbound degree vector of the random graph is computed to determine the most salient nodes (regions). Finally, a propagation technique is used to form the saliency map. Experimental results on two challenging datasets, MSRA10K and SED2, demonstrate the efficiency of the proposed unsupervised RG method in comparison with state-of-the-art unsupervised methods.
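A minimal sketch of the random-graph construction described above, assuming a Gaussian similarity on superpixel features and the mean transition probability as the threshold (both assumptions, since the abstract does not fix them):

```python
import numpy as np

def random_graph_indegree(features, threshold=None):
    """Build transition probabilities from pairwise feature similarity,
    drop edges whose probability exceeds the threshold, and return the
    inbound degree of each node together with the kept-edge mask.
    features: (n, d) array, one feature vector per superpixel."""
    d = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    W = np.exp(-d)                               # similarity weights
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)         # transition probabilities
    if threshold is None:
        threshold = P[P > 0].mean()              # assumed default
    A = (P > 0) & (P <= threshold)               # eliminate high-probability edges
    return A.sum(axis=0), A                      # inbound degrees, kept edges
```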

3.
In graph embedding based methods, we usually need to manually choose the nearest neighbors and then compute the edge weights from those neighbors via the L2 norm (e.g. in LLE). Manually choosing nearest neighbors in a high-dimensional space is difficult and unstable, so automatically constructing the graph is important. In this paper, we first present an L2-graph analogous to the L1-graph: the L2-graph calculates the edge weights using all samples, avoiding the manual choice of nearest neighbors. Second, an L2-graph based feature extraction method is presented, called collaborative representation based projections (CRP). Like SPP, CRP aims to preserve the collaborative-representation-based reconstruction relationship of the data. CRP uses an L2-norm graph to characterize the local compactness information and seeks the optimal projection matrix by maximizing the ratio between the total separability information and the local compactness information. CRP is much faster than SPP, since CRP evaluates its objective function with the L2 norm while SPP uses the L1 norm. Experimental results on the FERET, AR and Yale face databases and the PolyU finger-knuckle-print database demonstrate that CRP works well in feature extraction and leads to good recognition performance.
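The L2-graph idea of computing edge weights from all samples at once can be sketched as a per-sample ridge regression (collaborative representation); the regularization parameter `lam` is an assumption:

```python
import numpy as np

def l2_graph_weights(X, lam=0.01):
    """Edge weights of an L2-graph: each sample is reconstructed from all
    other samples via l2-regularised least squares, with the closed-form
    solution w = (D^T D + lam I)^{-1} D^T x. No manual neighbour choice.
    X: (d, n) data matrix, one sample per column."""
    d, n = X.shape
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        D = X[:, others]                                   # all other samples
        w = np.linalg.solve(D.T @ D + lam * np.eye(n - 1), D.T @ X[:, i])
        W[i, others] = w
    return W
```

The closed-form L2 solve is what makes this much cheaper than the L1 sparse-coding problem an L1-graph (as in SPP) requires per sample.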

4.
Over the last decades, a host of efficient algorithms have been developed for solving the minimum spanning tree problem in deterministic graphs, where the weights associated with the edges are assumed to be fixed. Although in realistic applications the edge weights vary with time, so that this assumption does not hold, finding the minimum spanning tree of a stochastic graph has not received the attention it merits. This is because the minimum spanning tree problem becomes extremely hard to solve when the edge weights are random variables, and harder still when the probability distribution function of the edge weights is unknown. In this paper, we propose a learning automata-based heuristic algorithm to solve the minimum spanning tree problem in stochastic graphs where the probability distribution function of the edge weights is unknown. The proposed algorithm, taking advantage of learning automata, determines the edges that must be sampled at each stage. As the algorithm proceeds, the sampling process is concentrated on the edges that constitute the spanning tree with the minimum expected weight. The proposed learning automata-based sampling method decreases the number of samples that need to be taken from the graph by reducing the rate of unnecessary samples. Experimental results show the superiority of the proposed algorithm over well-known existing methods in terms of both the number of samples and the running time.
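A simplified sketch of a sampling approach to the stochastic MST: here every edge is sampled uniformly often and Kruskal's algorithm is run on the sample means, whereas the paper's learning automata concentrate samples on promising edges:

```python
import random

def kruskal(n, edges):
    """Minimum spanning tree by Kruskal's algorithm with union-find.
    edges: list of (u, v, weight); returns the chosen (u, v) pairs."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    tree = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

def sampled_mst(n, samplers, k=30, seed=0):
    """Draw k samples of every edge weight and run Kruskal on the sample
    means. The learning-automata guidance of the paper, which would skew
    sampling toward likely MST edges, is deliberately omitted."""
    rng = random.Random(seed)
    edges = [(u, v, sum(f(rng) for _ in range(k)) / k)
             for (u, v), f in samplers.items()]
    return kruskal(n, edges)
```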

5.
宋畅  禹可  吴晓非 《计算机科学》2020,47(2):251-255
Social media systems provide convenient platforms for sharing, communication and collaboration. While people enjoy the openness and convenience of social media, many malicious behaviors can occur, such as bullying, terror-attack planning and the spread of fraudulent information. It is therefore important to detect these anomalous activities as accurately and as early as possible in order to prevent disasters and attacks. In recent years, the success of online social networks (OSNs) such as Twitter, Facebook, Google+ and LinkedIn has made them lucrative targets for attackers, and their openness makes them particularly vulnerable to attacks by anomalous accounts. Most state-of-the-art graph-based classification models first assign weights to the edges of the graph, iteratively propagate reputation scores of the nodes over the weighted graph, and classify nodes by their final posterior scores. Assigning the edge weights is a key step, since this parameter directly affects detection accuracy. For the task of detecting anomalous accounts in social media, this paper analyzes methods based on the global structure of the social graph and improves the edge-weight computation in the pairwise Markov random field model so that the weights are adaptively optimized during the iterations, yielding three more accurate algorithms, GANG+LW, GANG+LOGW and GANG+PLOGW, which differ in how the edge weights are improved. Experiments show that, compared with the basic pairwise Markov random field model, the proposed methods detect anomalous accounts more accurately, with GANG+PLOGW performing best of the three; the improved model thus handles the detection of anomalous accounts in social networks more effectively.
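The shared core of such graph-based detectors, propagating reputation scores over a weighted graph, can be sketched with a linearized update; the adaptive edge-weight schemes of GANG+LW/LOGW/PLOGW are not modeled, and the update rule below is an assumed simplification:

```python
import numpy as np

def propagate_scores(A, prior, iters=10):
    """Linearised score propagation: each node's posterior is its prior
    plus the weighted deviations of its in-neighbours from the neutral
    score 0.5, clipped to [0, 1]. A[i, j] is the weight of edge j -> i;
    prior holds per-node prior scores (0.5 = unknown)."""
    p = prior.copy()
    for _ in range(iters):
        p = np.clip(prior + A @ (p - 0.5), 0.0, 1.0)
    return p
```

The edge weights in `A` are exactly the parameter the paper tunes: too small and labels never spread, too large and one bad prior floods the graph.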

6.
To build a high-performance model while keeping the parameter count and computational cost low, a lightweight multi-information graph convolutional network (LMI-GCN) is proposed. LMI-GCN fuses four kinds of information (joint coordinates, joint velocities, bone edges and bone-edge velocities) by encoding them into a high-dimensional space, and introduces a multi-channel adaptive graph that aggregates important features, together with a split-stream temporal convolution block, to reduce the number of model parameters. A random-pooling data pre-processing method is also proposed. On the NTU-RGB+D120 dataset, LMI-GCN improves on the baseline SGN (semantics-guided network) by 5.4% and 4.7% under the cross-subject and cross-setup evaluation settings, respectively. The experimental results show that LMI-GCN outperforms SGN.
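How the four information streams can be derived from raw joint coordinates is easy to sketch; the exact high-dimensional encoding LMI-GCN applies afterwards is not reproduced:

```python
import numpy as np

def four_streams(joints, bone_pairs):
    """Derive the four inputs LMI-GCN fuses: joint positions, joint
    velocities (temporal differences), bone edges (child minus parent
    joint) and bone-edge velocities.
    joints: (T, V, C) array of frames x joints x coordinates;
    bone_pairs: list of (child, parent) joint indices."""
    vel = np.diff(joints, axis=0, prepend=joints[:1])       # joint velocity
    bones = np.stack([joints[:, c] - joints[:, p] for c, p in bone_pairs],
                     axis=1)                                # bone edges
    bone_vel = np.diff(bones, axis=0, prepend=bones[:1])    # bone velocity
    return joints, vel, bones, bone_vel
```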

7.
To address the discontinuity, incompleteness, skew, jitter and gaps in edges extracted by traditional edge-detection methods, a graph-theoretic edge-extraction method is proposed. The method treats pixels as nodes and connects each pair of horizontally or vertically adjacent nodes with an edge, so that the image is viewed as an undirected graph. It has three stages: in the pixel-similarity stage, each edge of the undirected graph is assigned a weight representing the similarity between its two pixels; in the threshold-determination stage, the mean of all weights (the similarity of the whole image) is taken as the threshold; in the edge-determination stage, only the left node of each horizontal edge and the top node of each vertical edge whose weight is below the threshold are retained, yielding the edges of the image. Experiments show that the method is suitable for edge extraction from images with a clear object and background, overcomes the defects of discontinuity, incompleteness, skew, jitter and gaps, and is robust to speckle and Gaussian noise.
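The three stages map directly onto a few array operations; the similarity measure 1/(1+|difference|) is an assumption, since the abstract does not specify one:

```python
import numpy as np

def graph_edge_map(img):
    """Stage 1: weight each horizontal/vertical pixel pair by similarity
    (assumed 1/(1+|difference|)). Stage 2: take the mean weight over all
    graph edges as the threshold. Stage 3: mark the left node of horizontal
    edges and the top node of vertical edges whose weight is below the
    threshold as edge pixels."""
    img = img.astype(float)
    wh = 1.0 / (1.0 + np.abs(img[:, 1:] - img[:, :-1]))   # horizontal edges
    wv = 1.0 / (1.0 + np.abs(img[1:, :] - img[:-1, :]))   # vertical edges
    t = (wh.sum() + wv.sum()) / (wh.size + wv.size)       # mean weight
    edge = np.zeros(img.shape, dtype=bool)
    edge[:, :-1] |= wh < t                                # keep left node
    edge[:-1, :] |= wv < t                                # keep top node
    return edge
```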

8.
胡正平  孟鹏权 《自动化学报》2011,37(10):1279-1284
Current saliency-detection algorithms rely mainly on contrast between pixels and lack analysis and understanding of the properties of the salient object itself. Based on the idea that a salient object is conspicuous, compact and complete, a random-walk salient-object detection algorithm based on the global isolation and local homogeneity of the object is proposed, formulating visual saliency detection as a Markov random-walk problem. The input image is first divided into blocks, and a graph model is built by setting edge weights according to the similarity of color and orientation features between blocks. A fully connected graph search then extracts global properties, highlighting globally isolated regions, while a k-regular graph search extracts local properties, enhancing locally homogeneous regions. Finally, the global and local properties are combined into a saliency map, from which the region of interest is located. Experimental results show that, compared with two other representative algorithms, the proposed method produces more accurate and reasonable detection results, demonstrating its feasibility.
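The two cues can be sketched as stationary distributions of random walks on a fully connected graph and on a k-nearest-neighbor graph of image blocks; the similarity function and the way the cues are combined are assumptions, so the sketch returns both cues separately:

```python
import numpy as np

def stationary(A, iters=200):
    """Stationary distribution of the random walk on weighted graph A,
    found by power iteration on the row-normalised transition matrix."""
    P = A / A.sum(axis=1, keepdims=True)
    pi = np.full(len(A), 1.0 / len(A))
    for _ in range(iters):
        pi = pi @ P
    return pi

def saliency_cues(F, k=2):
    """Global cue: the walk on the fully connected graph rarely visits
    globally isolated blocks, so 1 - pi highlights them. Local cue: the
    walk on a graph keeping each node's k strongest edges emphasises
    locally homogeneous blocks. F: (n, d) block-feature matrix."""
    d = np.linalg.norm(F[:, None] - F[None, :], axis=-1)
    W = np.exp(-d)
    np.fill_diagonal(W, 0.0)
    idx = np.argsort(-W, axis=1)[:, :k]          # k strongest edges per node
    Wk = np.zeros_like(W)
    rows = np.repeat(np.arange(len(W)), k)
    Wk[rows, idx.ravel()] = W[rows, idx.ravel()]
    Wk = np.maximum(Wk, Wk.T)                    # symmetrise the k-NN graph
    return 1.0 - stationary(W), stationary(Wk)
```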

9.
Graph shift regularization is a new and effective graph-based semi-supervised classification method, but its performance depends closely on the representation graph. Since directed graphs can convey more information about the relationships between vertices than undirected graphs, an intelligent method called graph shift regularization with directed graphs (GSR-D) is presented for the fault diagnosis of rolling bearings. To greatly improve the diagnosis performance of GSR-D, a directed and weighted k-nearest neighbor graph is first constructed by treating each sample (i.e., each vibration signal segment) as a vertex, in which the similarity between samples is measured by cosine distance instead of the commonly used Euclidean distance, and the edge weights are also defined by cosine distance instead of the commonly used heat kernel. Then, the labels of the samples are treated as graph signals indexed by the vertices of the representation graph. Finally, the states of the unlabeled samples are predicted by finding a graph signal that has minimal total variation and satisfies the constraint given by the labeled samples as far as possible. Experimental results indicate that GSR-D is better and more stable than a standard convolutional neural network and a support vector machine in rolling bearing fault diagnosis, and that GSR-D has only two tuning parameters and a degree of robustness.
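The graph-construction step, a directed k-nearest-neighbor graph with cosine-distance neighborhoods, can be sketched as follows; using cosine similarity as the edge weight is an assumption about the exact weighting:

```python
import numpy as np

def cosine_knn_digraph(X, k=2):
    """Directed weighted k-NN graph: each sample (row of X) points to its
    k nearest neighbours under cosine distance 1 - cos(x_i, x_j), with the
    cosine similarity (assumed) as the directed edge weight."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    cos = Xn @ Xn.T                      # cosine similarity
    dist = 1.0 - cos                     # cosine distance
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])
        nbrs = nbrs[nbrs != i][:k]       # k nearest, excluding self
        W[i, nbrs] = cos[i, nbrs]        # directed edges i -> neighbours
    return W
```

The graph is directed because nearest-neighborship is not symmetric: `j` may be among `i`'s k nearest without the converse holding.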

10.
Graph summarization represents a large graph with a concise small graph while preserving the structural and attribute information of the original graph. Existing algorithms do not consider node attribute information and edge weight information at the same time, so the summary graph differs considerably from the original. A graph summarization algorithm that considers both node attributes and edge weights is therefore proposed, so that the summary graph preserves both node-attribute similarity and edge-weight information. The algorithm first defines a closed-neighborhood structural similarity and computes the structural similarity between nodes with a pruning strategy; it then uses MinHash to compute the attribute similarity between nodes and adjusts the relative weight of structural versus attribute similarity; finally, it summarizes the weighted graph according to the two similarities. Experiments show that the algorithm is feasible and effective.
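The MinHash step for attribute similarity can be sketched as follows; the hash family (salted built-in `hash`) and signature length are illustrative assumptions:

```python
import random

def minhash_signature(attrs, num_hashes=64, seed=0):
    """MinHash signature of a node's attribute set: one salted hash
    function per slot, keeping the minimum hash value over the set."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min(hash((salt, a)) for a in attrs) for salt in salts]

def minhash_similarity(sig_a, sig_b):
    """Estimated Jaccard similarity: fraction of matching signature slots."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)
```

Comparing fixed-length signatures instead of the full attribute sets is what makes pairwise attribute similarity affordable on large graphs.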

