Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
This paper exploits the properties of the commute time for the purposes of graph simplification and matching. Our starting point is the lazy random walk on the graph, which is determined by the heat kernel of the graph and can be computed from the spectrum of the graph Laplacian. We characterise the random walk using the commute time between nodes, and show how this quantity may be computed from the Laplacian spectrum using the discrete Green's function. In this paper, we explore two different, but essentially dual, simplified graph representations delivered by the commute time. The first representation decomposes graphs into concentric layers. To do this we augment the graph with an auxiliary node which acts as a heat source. We use the pattern of commute times from this node to decompose the graph into a sequence of layers. Our second representation is based on the minimum spanning tree of the commute time matrix. The spanning trees located using commute time prove to be stable to structural variations. We match the graphs by applying a tree-matching method to the spanning trees. We experiment with the method on synthetic and real-world image data, where it proves to be effective.
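As a concrete illustration of the quantity used above, the sketch below computes the commute-time matrix of a small weighted graph from the Moore-Penrose pseudoinverse of the graph Laplacian (the discrete Green's function up to a projection). It is only a minimal NumPy rendering of the relation C(i,j) = vol(G)(G(i,i) + G(j,j) - 2G(i,j)), not the authors' layer-decomposition or tree-matching pipeline; the toy graph and function name are illustrative.

```python
import numpy as np

def commute_times(W):
    """Commute-time matrix of a weighted, undirected graph.

    W : (n, n) symmetric non-negative weight matrix.
    Returns C with C[i, j] = vol(G) * (G[i, i] + G[j, j] - 2 * G[i, j]),
    where G is the pseudoinverse of the graph Laplacian and vol(G) is the
    total degree (twice the total edge weight).
    """
    W = np.asarray(W, dtype=float)
    d = W.sum(axis=1)                    # node degrees
    L = np.diag(d) - W                   # combinatorial Laplacian
    G = np.linalg.pinv(L)                # discrete Green's function (pseudoinverse)
    vol = d.sum()                        # graph volume
    diag = np.diag(G)
    return vol * (diag[:, None] + diag[None, :] - 2.0 * G)

# Toy example: a path graph 0-1-2; the commute time between the end nodes is largest.
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(commute_times(W))
```

The layer decomposition and the minimum spanning tree described in the abstract would then be built on top of this matrix.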

2.
This work presents a new perspective on characterizing the similarity between elements of a database or, more generally, nodes of a weighted and undirected graph. It is based on a Markov-chain model of random walk through the database. More precisely, we compute quantities (the average commute time, the pseudoinverse of the Laplacian matrix of the graph, etc.) that provide similarities between any pair of nodes, having the nice property of increasing when the number of paths connecting those elements increases and when the "length" of paths decreases. It turns out that the square root of the average commute time is a Euclidean distance and that the pseudoinverse of the Laplacian matrix is a kernel matrix (its elements are inner products closely related to commute times). A principal component analysis (PCA) of the graph is introduced for computing the subspace projection of the node vectors in a manner that preserves as much variance as possible in terms of the Euclidean commute-time distance. This graph PCA provides a nice interpretation to the "Fiedler vector," widely used for graph partitioning. The model is evaluated on a collaborative-recommendation task where suggestions are made about which movies people should watch based upon what they watched in the past. Experimental results on the MovieLens database show that the Laplacian-based similarities perform well in comparison with other methods. The model, which nicely fits into the so-called "statistical relational learning" framework, could also be used to compute document or word similarities, and, more generally, it could be applied to machine-learning and pattern-recognition tasks involving a relational database.
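To make the kernel view concrete, the following sketch treats the pseudoinverse of the Laplacian as a kernel matrix and extracts node coordinates whose Euclidean distances reproduce, up to the graph-volume factor, the commute-time distance; this is the essence of the graph PCA described above. The function name, toy graph, and number of components are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def graph_pca(W, n_components=2):
    """Commute-time kernel PCA ("graph PCA") of a weighted, undirected graph (sketch).

    The pseudoinverse L+ of the Laplacian acts as a kernel matrix; node coordinates
    that preserve the Euclidean commute-time distance are the eigenvectors of L+
    scaled by the square roots of its eigenvalues.
    """
    W = np.asarray(W, dtype=float)
    L = np.diag(W.sum(axis=1)) - W
    Lp = np.linalg.pinv(L)                          # kernel of inner products
    evals, evecs = np.linalg.eigh(Lp)
    order = np.argsort(evals)[::-1][:n_components]  # keep the largest-variance axes
    coords = evecs[:, order] * np.sqrt(np.maximum(evals[order], 0.0))
    # Squared commute-time distance between i and j equals vol(G) * ||coords_i - coords_j||^2.
    return coords

W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(graph_pca(W))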

3.
Existing global manifold learning algorithms are all sensitive to the neighborhood-size parameter, which is difficult to select efficiently, and they all build the neighborhood graph from Euclidean distances, which makes the graph prone to "short-circuit" edges. This paper proposes a global manifold learning algorithm based on a random walk model (Random walk-based isometric mapping, RW-ISOMAP). Compared with the Euclidean distance, the commute-time distance derived from the random walk model combines all paths between two given points, weighted by their probabilities; it is therefore more robust and can, to some extent, measure the similarity between data with nonlinear geometric structure. Consequently, RW-ISOMAP, which builds the neighborhood graph from commute-time distances, is no longer sensitive to the neighborhood-size parameter, making this parameter easier to choose while providing higher robustness. Experimental results confirm the effectiveness of the algorithm.

4.
Building on graph theory, non-negative Laplacian embedding is applied to the segmentation of brain MRI images. The algorithm first pre-segments the image with the mean-shift algorithm, then extracts texture features from the resulting patches with a bank of Gabor filters, maps each patch to a vertex of a graph, and finally applies non-negative Laplacian embedding to the graph to obtain the clustering result for each patch. Experiments show that, compared with the traditional Laplacian embedding method, non-negative Laplacian embedding yields better segmentation results.

5.
Interactive image segmentation uses prior information to guide the extraction of the regions of interest in an image, but existing algorithms cannot balance efficiency and accuracy. To address this problem, a fast interactive segmentation algorithm based on superpixels and random walks (random walk on superpixel, SPRW) is proposed. First, the image is pre-segmented into superpixel regions with local similarity, and each superpixel region is represented by its mean pixel color. Second, an F-B graph structure is built from the manually labeled prior information, the range of the random walk is extended, and the random walk method is used to solve the problem, yielding a hard segmentation result. Finally, to address the non-smooth boundaries of the segmentation result, an improved matting algorithm (fast robust matting, FRB) is proposed for post-processing, yielding a soft segmentation result. Experiments on the BSD500 and MSRC datasets confirm that the proposed hard segmentation method has considerable advantages over other algorithms in terms of running time and mean intersection-over-union; experiments on the Alpha Matting dataset further confirm that the proposed soft algorithm improves accuracy while increasing efficiency; in addition, experiments on replacing the background of everyday photographs demonstrate the practical value of the algorithm.

6.
In this paper, we investigate the use of heat kernels as a means of embedding the individual nodes of a graph in a vector space. The reason for turning to the heat kernel is that it encapsulates information concerning the distribution of path lengths and hence node affinities on the graph. The heat kernel of the graph is found by exponentiating the Laplacian eigensystem over time. In this paper, we explore how graphs can be characterized in a geometric manner using embeddings into a vector space obtained from the heat kernel. We explore two different embedding strategies. The first of these is a direct method in which the matrix of embedding co-ordinates is obtained by performing a Young–Householder decomposition on the heat kernel. The second method is indirect and involves performing a low-distortion embedding by applying multidimensional scaling to the geodesic distances between nodes. We show how the required geodesic distances can be computed using parametrix expansion of the heat kernel. Once the nodes of the graph are embedded using one of the two alternative methods, we can characterize them in a geometric manner using the distribution of the node co-ordinates. We investigate several alternative methods of characterization, including spatial moments for the embedded points, the Laplacian spectrum for the Euclidean distance matrix and scalar curvatures computed from the difference in geodesic and Euclidean distances. We experiment with the resulting algorithms on the COIL database.
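The first (direct) strategy can be written down compactly: the heat kernel is built from the Laplacian eigensystem and factorised, Young–Householder style, into an inner product of coordinate matrices. The sketch below is a hedged NumPy rendering of that single step using the combinatorial Laplacian; it omits the parametrix/MDS route and the geometric characterisations, and the toy graph is illustrative.

```python
import numpy as np

def heat_kernel_embedding(W, t=1.0):
    """Direct heat-kernel embedding of graph nodes (sketch of the first strategy).

    The heat kernel H = Phi * exp(-t * Lambda) * Phi^T is computed from the
    Laplacian eigensystem; a Young-Householder-style factorisation H = Y Y^T
    then gives node coordinates Y = Phi * exp(-t * Lambda / 2).
    """
    W = np.asarray(W, dtype=float)
    L = np.diag(W.sum(axis=1)) - W
    lam, phi = np.linalg.eigh(L)              # Laplacian eigenvalues/eigenvectors
    Y = phi * np.exp(-0.5 * t * lam)          # scale each eigenvector column
    return Y

W = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
Y = heat_kernel_embedding(W, t=0.5)
H = Y @ Y.T                                   # reproduces the heat kernel expm(-t * L)
```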

7.
In this paper, we make use of the relationship between the Laplace-Beltrami operator and the graph Laplacian, for the purposes of embedding a graph onto a Riemannian manifold. To embark on this study, we review some of the basics of Riemannian geometry and explain the relationship between the Laplace-Beltrami operator and the graph Laplacian. Using the properties of Jacobi fields, we show how to compute an edge-weight matrix in which the elements reflect the sectional curvatures associated with the geodesic paths on the manifold between nodes. For the particular case of a constant sectional curvature surface, we use the Kruskal coordinates to compute edge weights that are proportional to the geodesic distance between points. We use the resulting edge-weight matrix to embed the nodes of the graph onto a Riemannian manifold. To do this, we develop a method that can be used to perform double centring on the Laplacian matrix computed from the edge-weights. The embedding coordinates are given by the eigenvectors of the centred Laplacian. With the set of embedding coordinates at hand, a number of graph manipulation tasks can be performed. In this paper, we are primarily interested in graph-matching. We recast the graph-matching problem as that of aligning pairs of manifolds subject to a geometric transformation. We show that this transformation is Procrustean in nature. We illustrate the utility of the method on image matching using the COIL database.

8.
This paper presents a random-walk-based feature extraction method called commute time guided transformation (CTG) in the graph embedding framework. The paper contributes to the corresponding field in two aspects. First, it introduces the usage of a robust probability metric, i.e., the commute time (CT), to extract visual features for face recognition in a manifold-based way. Second, the paper designs the CTG optimization to find linear orthogonal projections that implicitly preserve the commute time of high-dimensional data in a low-dimensional subspace. Compared with previous CT embedding algorithms, the proposed CTG is a graph-independent method. Existing CT embedding methods are graph-dependent and can only embed the data on the training graph into the subspace. In contrast, the CTG paradigm can be used to project out-of-sample data into the same embedding space as the training graph. Moreover, CTG projections are robust to the graph topology, achieving good recognition performance in spite of different initial graph structures. Owing to these positive properties, when applied to face recognition, the proposed CTG method outperforms other state-of-the-art algorithms on benchmark datasets. Specifically, it is highly efficient and effective at recognizing noisy faces.

9.
Compared with conventional graph data analysis methods, the graph embedding algorithm provides a new graph data analysis strategy. It aims to encode graph nodes into vectors so that graph data can be mined or analyzed more effectively with neural-network-related technologies. Some classic tasks, such as node classification, link prediction, and traffic flow prediction, have been improved significantly by graph embedding methods. Although former researchers have made substantial breakthroughs in graph embedding, the node embedding problem over temporal graphs has seldom been studied. In this study, we propose an adaptive temporal graph embedding (ATGED) method, attempting to encode temporal graph nodes into vectors by combining previous research with the information propagation characteristics. First, an adaptive clustering method is proposed to handle the fact that the active frequency of nodes varies across types of graphs. Then, a new node walk strategy is designed to preserve the time sequence between nodes; the walk lists are stored in a bidirectional multi-tree during the walking process so that complete walk lists can be obtained quickly. Finally, based on the basic walk characteristics and the graph topology, an important-node sampling strategy is proposed so that a satisfactory neural network can be trained as quickly as possible. Extensive experiments demonstrate that the proposed method surpasses existing embedding methods in terms of node clustering, reachability prediction, and node classification in temporal graphs.

10.
Objective: Given the ability of random walk processes to describe human visual attention, a visual saliency detection algorithm based on the lazy random walk is proposed. Method: First, the background superpixels are assigned a large laziness factor, i.e., they serve as lazy seed nodes, and a lazy random walk is evolved on an undirected graph built from the image superpixels to obtain an initial saliency map. The initial saliency map is then refined using spatial-position and color-contrast priors. Finally, a foreground-based lazy random walk produces a robust saliency detection result. Results: To verify the effectiveness of the algorithm, simulation experiments were conducted on the MSRA-1000 database, with qualitative and quantitative comparisons against mainstream related algorithms. Both the ROC (receiver operating characteristic) curve and the F-measure of the proposed algorithm are higher than those of the other algorithms. Conclusion: Compared with traditional saliency detection algorithms based on random processes, in which an ordinary random walk is not guaranteed to converge to a stable state, the proposed algorithm overcomes this problem in theory and improves applicability. Furthermore, by using the commute (round-trip) time of visual transitions to characterize differences in saliency, the algorithm models biological vision more faithfully and is more robust than the one-way transition time used by ordinary random walks.
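For readers unfamiliar with the lazy random walk used above, the sketch below builds its transition matrix on a toy superpixel graph, with a per-node laziness factor so that background seed superpixels are more likely to stay put. It is an illustrative construction under that reading of the abstract, not the authors' implementation, and the priors and foreground stage are omitted.

```python
import numpy as np

def lazy_random_walk_matrix(W, laziness):
    """Transition matrix of a lazy random walk on a superpixel graph (sketch).

    W        : (n, n) symmetric affinity matrix between superpixels.
    laziness : (n,) per-node staying probabilities alpha_i in [0, 1); background
               seed superpixels get a large value, as in the abstract above.
    P[i, j] = alpha_i if i == j, else (1 - alpha_i) * W[i, j] / deg(i).
    """
    W = np.asarray(W, dtype=float)
    alpha = np.asarray(laziness, dtype=float)
    d = W.sum(axis=1)
    P = (1.0 - alpha)[:, None] * W / d[:, None]
    P[np.diag_indices_from(P)] = alpha
    return P

# Toy example: 4 superpixels, node 0 treated as a lazy background seed.
W = np.array([[0, 2, 1, 0],
              [2, 0, 1, 1],
              [1, 1, 0, 2],
              [0, 1, 2, 0]], dtype=float)
P = lazy_random_walk_matrix(W, laziness=[0.9, 0.1, 0.1, 0.1])
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a proper probability distribution
```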

11.
In this paper, a bottom-up salient object detection method is proposed by modeling the image as a random graph. The proposed method starts by partitioning the input image into superpixels and extracting color and spatial features for each superpixel. Then, a complete graph is constructed by employing the superpixels as nodes. A high edge weight is assigned to a pair of superpixels if they have high similarity. Next, a random walk prior on the nodes is assumed to generate the probability distribution on the edges, yielding a complete directed graph in which each edge weight represents the probability of the random walker moving from the current node to the next node. By choosing a threshold and eliminating edges with probability higher than the threshold, a random graph is created to model the input image. The inbound degree vector of the random graph is computed to determine the most salient nodes (regions). Finally, a propagation technique is used to form the saliency map. Experimental results on two challenging datasets, MSRA10K and SED2, demonstrate the efficiency of the proposed unsupervised RG method in comparison with state-of-the-art unsupervised methods.
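The random-graph construction reads directly as a few array operations: similarity weights, row-normalised transition probabilities, edge thresholding, and an inbound-degree score. The sketch below follows that description literally; the Gaussian similarity, the threshold value, the min-max normalisation, and the omission of the final propagation step are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def random_graph_saliency(features, threshold=0.05):
    """Inbound-degree saliency scores from a thresholded random graph (sketch).

    features : (n, d) color/spatial feature vectors of superpixels.
    A complete graph is weighted by feature similarity, converted into transition
    probabilities, edges with probability above `threshold` are removed, and the
    inbound degree of each node in the remaining directed graph scores its saliency.
    """
    F = np.asarray(features, dtype=float)
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * d2.mean() + 1e-12))        # similarity weights
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)             # transition probabilities
    kept = P <= threshold                            # random graph: drop high-probability edges
    np.fill_diagonal(kept, False)                    # ignore self-loops
    in_degree = kept.sum(axis=0).astype(float)       # inbound degree per node
    return (in_degree - in_degree.min()) / (np.ptp(in_degree) + 1e-12)

rng = np.random.default_rng(1)
scores = random_graph_saliency(rng.normal(size=(30, 5)))
```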

12.
A sliding-descent-based random walk image segmentation algorithm
To improve the performance of the traditional random walk segmentation algorithm, a random walk image segmentation algorithm based on the sliding-descent algorithm is proposed. The local gray-level information of the image is used to perform sliding-descent segmentation, dividing the image into many small regions. Each small region is treated as a node, the weights between nodes are defined using the law of universal gravitation, and the random walk algorithm then produces the final segmentation result. Experimental results show that the algorithm effectively combines the advantages of the sliding-descent and random walk algorithms, improving both the speed and the accuracy of image segmentation.

13.
This paper shows how to construct a generative model for graph structure through the embedding of the nodes of the graph in a vector space. We commence from a sample of graphs where the correspondences between nodes are unknown ab initio. We also work with graphs where there may be structural differences present, i.e. variations in the number of nodes in each graph and their edge structure. We characterise the graphs using the heat-kernel, and this is obtained by exponentiating the Laplacian eigensystem with time. The idea underpinning the method is to embed the nodes of the graphs into a vector space by performing a Young-Householder decomposition of the heat-kernel into an inner product of node co-ordinate matrices. The co-ordinates of the nodes are determined by the eigenvalues and eigenvectors of the Laplacian matrix, together with a time-parameter which can be used to scale the embedding. Node correspondences are located by applying the Scott and Longuet-Higgins algorithm to the embedded nodes. We capture variations in graph structure using the covariance matrix for corresponding embedded point positions. We construct a point-distribution model for the embedded node positions using the eigenvalues and eigenvectors of the covariance matrix. We show how to use this model both to project individual graphs into the eigenspace of the point position covariance matrix and to fit the model to potentially noisy graphs to reconstruct the Laplacian matrix. We illustrate the utility of the resulting method for shape analysis using data from the Caltech–Oxford and COIL databases.

14.
Semantic segmentation methods based on deep learning considerably improve the segmentation performance of remote sensing images. However, with the extensive application of high-resolution remote sensing images, additional details introduce considerable interference into the learning process for classification, thereby diminishing the accuracy of segmentation and resulting in blurry object boundaries. To address this problem, this study designed Random-Walk-SegNet (RWSNet), a semantic segmentation network based on SegNet combined with random walk. First, SegNet is used as the basic architecture, with a sliding window strategy that optimizes the network output to improve the continuity and smoothness of the segmentation. Second, the seed regions of the random walk are selected in accordance with the classification output of SegNet. Third, the weights of the undirected graph edges are determined by fusing the gradient of the original image and the probability map of SegNet. Finally, the random walk is implemented on the entire image, thus reducing edge blur and realizing high-performance semantic segmentation of remote sensing images. In comparison with mainstream and other improved methods, the proposed network has lower complexity but better performance, and the algorithm is state-of-the-art and robust.
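The general pattern of refining a network's probability map with a random walk can be sketched with off-the-shelf tools. The example below uses scikit-image's random_walker as a stand-in: confident pixels of the probability map become seeds and the walker labels the rest. Note that RWSNet fuses image gradients with the SegNet probability map when setting the edge weights, which this simplified sketch does not reproduce; the helper name, thresholds, and toy data are illustrative.

```python
import numpy as np
from skimage.segmentation import random_walker

def refine_with_random_walk(image_gray, prob_map, hi=0.9, lo=0.1, beta=130):
    """Refine a coarse CNN probability map with a random walk over the image (sketch).

    image_gray : (H, W) grayscale image.
    prob_map   : (H, W) foreground probability predicted by the segmentation network.
    Pixels the network is confident about become seeds (1 = foreground, 2 = background);
    the random walker assigns labels to the remaining, uncertain pixels.
    """
    labels = np.zeros(prob_map.shape, dtype=np.int32)
    labels[prob_map >= hi] = 1           # confident foreground seeds
    labels[prob_map <= lo] = 2           # confident background seeds
    refined = random_walker(image_gray, labels, beta=beta)
    return refined == 1                  # boolean foreground mask

# Toy data: a bright square on a dark background with noise standing in for a CNN output.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img += rng.normal(scale=0.1, size=img.shape)
prob = np.clip(img, 0.0, 1.0)
mask = refine_with_random_walk(img, prob)
```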

15.
Object Recognition as Many-to-Many Feature Matching
Object recognition can be formulated as matching image features to model features. When recognition is exemplar-based, feature correspondence is one-to-one. However, segmentation errors, articulation, scale difference, and within-class deformation can yield image and model features which do not match one-to-one but rather many-to-many. Adopting a graph-based representation of a set of features, we present a matching algorithm that establishes many-to-many correspondences between the nodes of two noisy, vertex-labeled weighted graphs. Our approach reduces the problem of many-to-many matching of weighted graphs to that of many-to-many matching of weighted point sets in a normed vector space. This is accomplished by embedding the initial weighted graphs into a normed vector space with low distortion using a novel embedding technique based on a spherical encoding of graph structure. Many-to-many vector correspondences established by the Earth Mover’s Distance framework are mapped back into many-to-many correspondences between graph nodes. Empirical evaluation of the algorithm on an extensive set of recognition trials, including a comparison with two competing graph matching approaches, demonstrates both the robustness and efficacy of the overall approach.

16.
刘庆烽, 刘哲, 宋余庆, 朱彦. 《计算机科学》, 2018, 45(7): 243-247, 258
Accurate segmentation of lung tumor regions is essential for radiotherapy and surgical planning. To address the low accuracy of current lung tumor segmentation based on single-modality images, a new multimodal lung tumor image segmentation method is proposed that combines the complementary strengths of PET and CT images. First, the PET image is pre-segmented using region growing and mathematical morphology to obtain an initial contour; the initial contour provides the seed points required by the random walks on both the PET and CT images and is also added as a constraint to the random walk on the CT image. Then, exploiting the strong anatomical features of CT, the CT anatomical features are used to improve the random walk weights on the PET image. Finally, the similarity matrices obtained from the random walks on the PET and CT images are weighted so that a single, common segmentation contour is obtained on both images. Experiments show that, compared with other traditional segmentation algorithms, the proposed method achieves higher accuracy and better stability in lung tumor segmentation.

17.
To overcome the limitation that traditional object recognition algorithms rely solely on visual features, an object recognition algorithm that incorporates prior relations is proposed. In the training stage, the prior relations are represented in a structured way by a graph model: an image-image subgraph, a semantic-semantic subgraph, and the connections between the two subgraphs are constructed, and a random walk model is built on this graph. In the recognition stage, relations are established between the image to be recognized and the image and semantic nodes of the random walk model, a random walk is performed on this probabilistic model, and its result is taken as the object recognition result. Experimental results demonstrate the effectiveness of the object recognition algorithm with prior relations; the proposed algorithm achieves strong recognition performance.

18.
We study a semi-supervised learning method based on the similarity graph and the regularized Laplacian. We give a convenient optimization formulation of the regularized Laplacian method and establish its various properties. In particular, we show that the kernel of the method can be interpreted in terms of discrete and continuous-time random walks and possesses several important properties of proximity measures. Both optimization and linear algebra methods can be used for efficient computation of the classification functions. We demonstrate on numerical examples that the regularized Laplacian method is robust with respect to the choice of the regularization parameter and outperforms the Laplacian-based heat kernel methods.
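A minimal version of the classification step can be written directly from the standard definition of the regularized Laplacian kernel, K = (I + βL)^{-1}: labelled nodes are encoded as one-hot indicators and the kernel spreads them over the graph. The sketch below assumes this standard formulation; the toy graph, the helper name, and the choice β = 2 are illustrative, not the authors' experimental setup.

```python
import numpy as np

def regularized_laplacian_classify(W, labels, beta=1.0):
    """Semi-supervised classification with the regularized Laplacian kernel (sketch).

    W      : (n, n) symmetric similarity (adjacency) matrix.
    labels : (n,) class indices for labelled nodes, -1 for unlabelled nodes.
    The kernel K = (I + beta * L)^{-1} spreads the labels over the graph; every
    node takes the class with the largest classification-function value.
    """
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    K = np.linalg.inv(np.eye(n) + beta * L)
    classes = np.unique(labels[labels >= 0])
    Y = np.zeros((n, classes.size))
    for c, cls in enumerate(classes):
        Y[labels == cls, c] = 1.0                 # one-hot indicators for labelled nodes
    F = K @ Y                                     # classification functions
    return classes[np.argmax(F, axis=1)]

# Toy graph: two triangles joined by a single edge, one labelled node in each.
W = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
labels = np.array([0, -1, -1, -1, -1, 1])
print(regularized_laplacian_classify(W, labels, beta=2.0))
```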

19.
To address the small receptive neighborhood in the node embedding process of traditional graph convolutional networks, this paper proposes a hyperspectral image classification network based on an improved GraphSAGE algorithm. First, a superpixel segmentation algorithm is used to preprocess the original image, reducing the number of graph nodes; this preserves as much of the local topological structure of the original image as possible while lowering the complexity of the algorithm and shortening the computation time. Second, the improved GraphSAGE algorithm performs average sampling of the target node's neighbors and uses a mean aggregation function to aggregate the neighbor nodes, reducing the space complexity. Comparisons with related models on the public hyperspectral datasets Pavia University and Kennedy Space Center show that the hyperspectral image classification network based on the improved GraphSAGE algorithm achieves good classification results.
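To show what the mean-sampling and mean-aggregation step amounts to, the sketch below implements a single GraphSAGE-style layer in NumPy: each superpixel node uniformly samples a few neighbours, averages their features, and mixes the result with its own representation. The weight names, the separate self/neighbour matrices, and the toy superpixel graph are illustrative assumptions, not the paper's exact network.

```python
import numpy as np

def sage_mean_layer(H, adj_list, W_self, W_neigh, num_samples=5, rng=None):
    """One GraphSAGE layer with mean aggregation and uniform neighbour sampling (sketch).

    H        : (n, d_in) current node features (e.g. mean spectra of superpixels).
    adj_list : list of neighbour index lists, one per node.
    W_self, W_neigh : (d_in, d_out) weight matrices (illustrative names).
    Each node samples up to `num_samples` neighbours, averages their features, and
    combines the result with its own representation through a ReLU.
    """
    rng = rng or np.random.default_rng()
    n, d_out = H.shape[0], W_self.shape[1]
    out = np.zeros((n, d_out))
    for v in range(n):
        nbrs = adj_list[v]
        if len(nbrs) > num_samples:
            nbrs = rng.choice(nbrs, size=num_samples, replace=False)
        h_neigh = H[nbrs].mean(axis=0) if len(nbrs) else np.zeros(H.shape[1])
        out[v] = np.maximum(H[v] @ W_self + h_neigh @ W_neigh, 0.0)   # ReLU
    # Normalise embeddings to unit length, as in the original GraphSAGE formulation.
    return out / (np.linalg.norm(out, axis=1, keepdims=True) + 1e-12)

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))                       # 6 superpixel nodes, 8-dim features
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
W_s, W_n = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
Z = sage_mean_layer(H, adj, W_s, W_n, num_samples=3, rng=rng)
```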

20.
Guo Kun, Wang Qinze, Lin Jiaqi, Wu Ling, Guo Wenzhong, Chao Kuo-Ming. Applied Intelligence, 2022, 52(9): 9919-9937
Network representation learning methods based on random walks aim to learn a low-dimensional embedding vector for each node in a network by randomly traversing the network to capture the features of nodes and edges, which benefits many downstream machine learning tasks such as community detection. Most existing random-walk-based network representation learning algorithms emphasize the neighborhood of nodes but ignore the communities they may form, and they apply the same random walk strategy to all nodes without distinguishing the characteristics of different nodes. In addition, it is time-consuming to determine the most suitable random walk parameters for a given network. In this paper, we propose a novel overlapping community detection algorithm based on network representation learning which integrates community information into the embedding vectors to improve the cohesion of similar nodes in the embedding space. First, a node-centrality-based walk strategy is designed to determine the parameters of the random walk automatically, avoiding time-consuming manual selection. Second, two community-aware random walk strategies for high- and low-degree nodes are developed to capture the characteristics of community centers and boundaries. Experimental results on synthesized and real-world datasets demonstrate the effectiveness and efficiency of our algorithm for overlapping community detection compared with state-of-the-art algorithms.

