Similar Documents
20 similar documents found.
1.
A new approach to the problem of graph and subgraph isomorphism detection from an input graph to a database of model graphs is proposed in this paper. It is based on a preprocessing step in which the model graphs are used to create a decision tree. At run time, subgraph isomorphisms are detected by means of decision tree traversal. If we neglect the time needed for preprocessing, the computational complexity of the new graph algorithm is only polynomial in the number of input graph vertices. In particular, it is independent of the number of model graphs and the number of edges in any of the graphs. However, the decision tree is of exponential size. Several pruning techniques which aim at reducing the size of the decision tree are presented. A computational complexity analysis of the new method is given and its behavior is studied in a number of practical experiments with randomly generated graphs.

2.
A graph modification problem asks for a set of vertex deletions, edge deletions, or edge additions that turns an input graph into a graph with some prescribed property. Graph modification problems have been studied extensively, in particular for chordal graphs, interval graphs, and unit interval graphs. Chordal graphs are one of the most important subclasses of perfect graphs and a superclass of (unit) interval graphs, and many classical NP-hard problems become polynomial-time solvable on chordal graphs. Interval graphs and unit interval graphs have wide applications in computational biology. Research on modification problems for these graph classes therefore contributes substantially to both the theory and the practice of computing. This paper first surveys the important algorithms and techniques for graph modification problems on chordal, interval, and unit interval graphs, then analyzes the current state of research on these problems, and finally points out questions worth attention in future work.
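To make the problem statement concrete, here is a minimal brute-force sketch of one such modification problem: finding the fewest edge additions that make a small graph chordal (minimum fill-in). It only illustrates the definition; the algorithms surveyed in the paper are far more sophisticated, and the use of networkx is an assumption of this sketch.

```python
# Minimal sketch: brute-force minimum fill-in (edge additions to reach chordality).
# Illustrative only; exponential in the number of non-edges.
from itertools import combinations

import networkx as nx


def minimum_fill_in(G: nx.Graph) -> set:
    """Return a smallest set of edges whose addition makes G chordal."""
    non_edges = [e for e in combinations(G.nodes, 2) if not G.has_edge(*e)]
    for k in range(len(non_edges) + 1):       # try 0, 1, 2, ... additions
        for added in combinations(non_edges, k):
            H = G.copy()
            H.add_edges_from(added)
            if nx.is_chordal(H):
                return set(added)
    return set()                              # unreachable: the complete graph is chordal


if __name__ == "__main__":
    C4 = nx.cycle_graph(4)                    # a 4-cycle is not chordal
    print(minimum_fill_in(C4))                # one chord suffices, e.g. {(0, 2)}
```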

3.
This paper contributes a method for combining sparse parallel graph algorithms with dense parallel linear algebra algorithms in order to understand dynamic graphs including the temporal behavior of vertices. Our method is the first to cluster vertices in a dynamic graph based on arbitrary temporal behaviors. In order to successfully implement this method, we develop a feature based pipeline for dynamic graphs and apply Nonnegative Matrix Factorization (NMF) to these features. We demonstrate these steps with a sample of the Twitter mentions graph as well as a CAIDA network traffic graph. We contribute and analyze a parallel NMF algorithm presenting both theoretical and empirical studies of performance. This work can be leveraged by graph/network analysts to understand the temporal behavior cluster structure and segmentation structure of dynamic graphs.
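A minimal sketch of the feature-based pipeline described above: build a nonnegative "vertex by temporal-feature" matrix (here hypothetical per-vertex activity counts over time windows) and factor it with NMF; the dominant component of each row gives a behavioral cluster. The parallel NMF of the paper is replaced by scikit-learn's sequential solver, and the data is synthetic.

```python
# Sketch: cluster vertices by temporal behavior via NMF on a vertex-by-time-window matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# hypothetical data: 100 vertices, activity counts over 24 time windows
features = rng.poisson(lam=3.0, size=(100, 24)).astype(float)

model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(features)   # vertex-by-component loadings
H = model.components_               # component-by-time-window temporal patterns

clusters = W.argmax(axis=1)         # assign each vertex to its dominant temporal behavior
print(np.bincount(clusters))
```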

4.
The firefighter problem is a deterministic discrete-time model for the spread and containment of fire on a graph. Once the fire breaks out at a set of vertices, the goal addressed in this work is to save as many vertices as possible from burning. Although the problem finds applications in various real-world settings, such as the spread of diseases or the containment of hoaxes in communication networks, it has not been addressed from a practical point of view so far, in the sense of finding a good strategy for the general case. In this work, we develop and compare several integer linear programming techniques and heuristic methods. Random graphs are used for the purpose of comparison. The obtained results shed some light on the challenges that graph topology, graph size, and the number of firefighters per iteration pose for computational tools when looking for the best strategy on an a priori unknown graph.
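The sketch below simulates the discrete-time model with one simple greedy baseline: at each step defend the highest-degree threatened vertex, then let the fire spread to all undefended neighbors. It is not necessarily one of the heuristics compared in the paper, just an illustration of the model on an assumed grid graph.

```python
# Sketch: firefighter model with a greedy "defend the highest-degree threatened vertex" rule.
import networkx as nx


def greedy_firefighter(G: nx.Graph, sources, budget=1):
    burning, defended = set(sources), set()
    while True:
        threatened = {v for b in burning for v in G[b]} - burning - defended
        if not threatened:
            break
        # defend the `budget` highest-degree threatened vertices this time step
        for v in sorted(threatened, key=G.degree, reverse=True)[:budget]:
            defended.add(v)
        # fire spreads to every threatened vertex that was not defended
        burning |= {v for b in burning for v in G[b]} - defended
    saved = set(G) - burning
    return saved, defended


if __name__ == "__main__":
    G = nx.grid_2d_graph(5, 5)
    saved, defended = greedy_firefighter(G, sources=[(2, 2)], budget=1)
    print(len(saved), "vertices saved")
```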

5.
We consider how continuous-time quantum walks can be used for graph matching. We focus in detail on both exact and inexact graph matching, and consider in depth the problem of measuring graph similarity. We commence by constructing an auxiliary graph, in which the two graphs to be matched are co-joined by a layer of indicator vertices (one for each potential correspondence between a pair of vertices). We simulate a continuous-time quantum walk in parallel on the two graphs. The layer of connecting indicator vertices in the auxiliary graph allows quantum interference to take place between the two walks. The interference amplitudes on the indicator vertices are determined by differences in the two walks, and can be used to calculate probabilities for matches between pairs of vertices from the graphs. By applying the Hungarian (Kuhn-Munkres) algorithm to these probabilities, we recover a correspondence mapping between the graphs. To calculate graph similarity, we combine these probabilities with edge-consistency information to give a consistency measure. Based on the consistency measure, we define two graph similarity measures, one of which requires correspondence matches while the second does not. We analyse our approach experimentally using synthetic and real-world graphs. This reveals that our method gives results that are intermediate between the most sophisticated iterative techniques available and simpler, less complex ones.
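A minimal sketch of only the final assignment step: given a matrix of match probabilities between the vertices of two graphs (random numbers stand in here for the quantum-walk interference amplitudes), recover a one-to-one correspondence with the Hungarian (Kuhn-Munkres) algorithm.

```python
# Sketch: Hungarian assignment on a vertex-match probability matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
prob = rng.random((6, 6))            # prob[i, j] ~ P(vertex i of G1 matches vertex j of G2)

rows, cols = linear_sum_assignment(prob, maximize=True)
correspondence = dict(zip(rows.tolist(), cols.tolist()))
print(correspondence)                # one-to-one vertex correspondence
print("total match probability:", prob[rows, cols].sum())
```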

6.
In this paper we present a depth-first search algorithm to generate the family of maximal independent sets of an undirected graph lexicographically. Extensive computational experience on more than 1000 randomly generated graphs ranging from 20 to 220 vertices and from 20% to 90% density has shown that the proposed algorithm is (a) two to fifteen times faster than the Bron and Kerbosch algorithm and (b) at least three times faster than the algorithm of Tsukiyama et al., and becomes increasingly more efficient as both the density and size of the graph increase. A further characteristic of the proposed algorithm is that it employs only four one-dimensional working arrays, each of length equal to the number of vertices of the graph.
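For reference, here is a minimal sketch of enumerating all maximal independent sets; it is not the paper's depth-first algorithm but relies on the fact that the maximal independent sets of G are exactly the maximal cliques of the complement of G, reusing networkx's clique enumerator.

```python
# Sketch: maximal independent sets of G = maximal cliques of the complement of G.
import networkx as nx


def maximal_independent_sets(G: nx.Graph):
    for clique in nx.find_cliques(nx.complement(G)):
        yield sorted(clique)


if __name__ == "__main__":
    G = nx.cycle_graph(5)
    for s in sorted(maximal_independent_sets(G)):
        print(s)    # the 5-cycle has five maximal independent sets of size 2
```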

7.
Large-scale graph computation is often required in a variety of emerging applications such as social network computation and Web services. Such graphs are typically large and frequently updated with minor changes. However, re-computing an entire graph when a few vertices or edges are updated is often prohibitively expensive. To reduce the cost of such updates, this study proposes an incremental graph computation model called IncPregel, which leverages the non-after-effect property of the first-order Markov chain and provides incremental programming abstractions to avoid redundant computation and message communication. This is accomplished by employing an efficient and fine-grained reuse mechanism. We implemented this model on Hama, a popular open source framework based on Pregel, to construct an incremental graph processing system called IncHama. IncHama automatically detects changes in input in order to recognize “changed vertices” and to exchange reusable data by means of shuffling. The evaluation results on large-scale graphs show that, compared with Hama, IncHama is 1.1–2.7 times faster and can reduce communication messages by more than 50% when the incremental edges increase in number from 0.1 to 100k.

8.
These days, large-scale graph processing becomes more and more important. Pregel, inspired by the Bulk Synchronous Parallel model, is one of the most widely used systems for processing large-scale graph problems. In Pregel, each vertex executes a function and waits for a superstep to communicate its data to other vertices. The superstep is a very time-consuming operation used by Pregel to synchronize distributed computations in a cluster of computers, and it may become a bottleneck when the number of communications increases in a graph with millions of vertices. The superstep also works like a barrier in Pregel, which amplifies the side effects of the skew problem in a distributed computing environment. ExPregel is a Pregel-like model that is designed to reduce the number of communication messages between two vertices residing on two different computational nodes. We have proven that ExPregel reduces the number of exchanged messages as well as the number of supersteps for all graph topologies. Enhanced parallelism in our new computational model is another important feature that greatly increases the speed of graph analysis programs. More interestingly, ExPregel uses the same programming model as Pregel. Our experiments on large-scale real-world graphs show that ExPregel can reduce network traffic as well as the number of supersteps by 45% to 96%. Runtime speedup in the proposed model varies from 1.2× to 30×. Copyright © 2015 John Wiley & Sons, Ltd.
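A minimal, single-machine sketch of the vertex-centric superstep model that Pregel-style systems share: in every superstep each vertex processes incoming messages, possibly updates its value, and sends messages to its neighbors; the computation halts when no messages remain. The example program propagates minimum labels (i.e., it computes connected components); it is an illustration of the model, not of ExPregel's optimizations.

```python
# Sketch: vertex-centric BSP loop with supersteps; min-label propagation finds components.
from collections import defaultdict


def pregel_min_label(adjacency):
    value = {v: v for v in adjacency}                  # initial vertex values
    inbox = {v: [v] for v in adjacency}                # superstep-0 messages
    superstep = 0
    while any(inbox.values()):                         # halt when no messages are in flight
        outbox = defaultdict(list)
        for v, messages in inbox.items():
            if not messages:
                continue                               # inactive vertex this superstep
            new_value = min(messages + [value[v]])
            if new_value < value[v] or superstep == 0:
                value[v] = new_value
                for u in adjacency[v]:                 # send the improved label to neighbors
                    outbox[u].append(new_value)
        inbox = {v: outbox.get(v, []) for v in adjacency}
        superstep += 1
    return value, superstep


if __name__ == "__main__":
    adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}  # two components
    labels, steps = pregel_min_label(adj)
    print(labels, "in", steps, "supersteps")
```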

9.
Coloring large graphs based on independent set extraction
This paper presents an effective approach (EXTRACOL) to coloring large graphs. The proposed approach uses a preprocessing method to extract large independent sets from the graph and a memetic algorithm to color the residual graph. Each preprocessing application identifies, with a dedicated tabu search algorithm, a number of pairwise disjoint independent sets of a given size in order to maximize the vertices removed from the graph. We evaluate EXTRACOL on the 11 largest graphs (with 1000 to 4000 vertices) of the DIMACS challenge benchmarks and show improved results for four very difficult graphs (DSJC1000.9, C2000.5, C2000.9, C4000.5). The behavior of the proposed algorithm is also analyzed.
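A minimal sketch of the two-phase idea: repeatedly pull an independent set out of the graph (each extracted set becomes one color class), then color the residual graph. The paper's tabu search and memetic algorithm are replaced here by networkx's randomized maximal-independent-set routine and greedy coloring, so the numbers it produces are only illustrative.

```python
# Sketch: extract independent sets as color classes, then color the residual graph.
import networkx as nx


def extract_then_color(G: nx.Graph, extractions=3, seed=0):
    residual = G.copy()
    coloring, next_color = {}, 0
    for i in range(extractions):
        if residual.number_of_nodes() == 0:
            break
        ind_set = nx.maximal_independent_set(residual, seed=seed + i)
        for v in ind_set:                      # the extracted set becomes one color class
            coloring[v] = next_color
        residual.remove_nodes_from(ind_set)
        next_color += 1
    # color what is left greedily, offsetting by the colors already used
    for v, c in nx.greedy_color(residual, strategy="largest_first").items():
        coloring[v] = next_color + c
    return coloring


if __name__ == "__main__":
    G = nx.erdos_renyi_graph(60, 0.3, seed=42)
    print("colors used:", len(set(extract_then_color(G).values())))
```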

10.
In this paper, a graph problem on connected, weighted, undirected graphs, called the searchlight guarding problem, is considered. Assume that there is a fugitive who moves along the edges of the graph at a random speed. The task involves placing a set of searchlights at vertices to search the edges of the graph and to spot the fugitive. Suppose that placing a searchlight at some vertex incurs some building cost. The searchlight guarding problem is to allocate a set S of searchlights at the vertices such that the total cost of the vertices in S is minimized. If there is more than one set of searchlights, each with minimum building cost, then the set with the minimum search time is identified, that is, the one for which the number of time slots needed to spot the fugitive is minimized. As is well established, the problem is NP-hard on weighted bipartite graphs but is linear-time solvable on weighted trees. In this paper, the design of a linear-time optimal algorithm for the searchlight guarding problem on weighted interval graphs is presented. It entails two phases. In the first phase, a set of searchlights with minimum guarding cost is identified and the search directions of all edges are assigned. To achieve this task, a new problem, called the edge-direction assignment problem, is first defined and solved on weighted complete-split graphs by a greedy strategy. Based on this computational result, the problem of finding the set of searchlights with minimum guarding cost and assigning the search directions of all edges is solved by a dynamic programming strategy. Then, in the second phase, the search time slots of each edge are determined on the basis of the results of the first phase and some properties of interval graphs.

11.
This paper studies continuous pattern detection over large evolving graphs, which plays an important role in monitoring-related applications. The problem is challenging due to the large size and dynamic updates of graphs, the massive search space of pattern detection, and inconsistent query results on dynamic graphs. This paper first introduces a snapshot isolation requirement, which ensures that the query results come from a consistent graph snapshot instead of a mixture of partial evolving graphs. Second, we propose an SSD (single-sink directed acyclic graph) plan friendly to vertex-centric distributed graph processing frameworks. The SSD plan can guide the message transformation and transfer among graph vertices, and determine the satisfaction of the pattern on graph vertices for the sink vertex. Third, we devise strategies for the major steps of SSD evaluation, including the location of valid messages to achieve snapshot isolation, an AO-List to determine the satisfaction of transition rules over the dynamic graph, and a message-on-change policy to reduce outgoing messages. Experiments on billion-edge graphs using Giraph, an open source implementation of Pregel, illustrate the efficiency and effectiveness of our method.

12.
For a family F of graphs, a graph U is said to be F-induced-universal if every graph of F is an induced subgraph of U. We give a construction for an induced-universal graph for the family of graphs on n vertices with degree at most k. For k even, our induced-universal graph has O(n^{k/2}) vertices, and for k odd it has O(n^{⌈k/2⌉−1/k} log^{2+2/k} n) vertices. This construction improves a result of Butler by a multiplicative constant factor in the even case and by almost a multiplicative n^{1/k} factor in the odd case. We also construct induced-universal graphs for the class of oriented graphs with bounded incoming and outgoing degree, slightly improving another result of Butler.

13.
The Graph Theorist, GT, is a system that performs mathematical research in graph theory. From the definitions in its input knowledge base, GT constructs examples of mathematical concepts, conjectures and proves mathematical theorems about concepts, and discovers new concepts. Discovery is driven both by examples and by definitional form. The discovery processes construct a semantic net that links all of GT's concepts together.
Each definition is an algebraic expression whose semantic interpretation is a stylized algorithm to generate a class of graphs correctly and completely. From a knowledge base of these concept definitions, GT is able to conjecture and prove such theorems as "The set of acyclic, connected graphs is precisely the set of trees" and "There is no odd-regular graph on an odd number of vertices." GT explores new concepts either to develop an area of knowledge or to link a newly acquired concept into a pre-existing knowledge base. New concepts arise from the specialization of an existing concept, the generalization of an existing concept, and the merger of two or more existing concepts. From an initial knowledge base containing only the definition of "graph," GT discovers such concepts as acyclic graphs, connected graphs, and bipartite graphs.

14.
We propose a novel solution schema called the Hierarchical Labeling Schema (HLS) to answer reachability queries in directed graphs. Different from many existing approaches that focus on static directed acyclic graphs (DAGs), our schema focuses on directed cyclic graphs (DCGs) where vertices or arcs could be added to a graph incrementally. Unlike many of the traditional approaches, HLS does not require the graph to be acyclic in constructing its index. Therefore, it can in fact be applied to both DAGs and DCGs. When vertices or arcs are added to a graph, HLS is capable of updating the index incrementally instead of re-computing the index from scratch each time, making it more efficient than many other approaches in practice. The basic idea of HLS is to create a tree for each vertex in a graph and link the trees together so that whenever two vertices are given, we can immediately know whether there is a path between them by referring to the appropriate trees. We conducted extensive experiments on both real-world datasets and synthesized datasets. We compared the performance of HLS, in terms of index construction time, query processing time, and space consumption, with two state-of-the-art methodologies, the path-tree method and the 3-hop method. We also conducted simulations to model the situation when a graph is updated incrementally. The performance comparison of different algorithms against HLS on static graphs has also been studied. Our results show that HLS is highly competitive in practice and is particularly useful in cases where the graphs are updated frequently.
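The sketch below conveys only the motivation of incremental index maintenance, not HLS itself: it keeps, for every vertex, the set of vertices it can reach and patches those sets on each arc insertion, so a query never triggers a recomputation from scratch. The class and its update rule are assumptions of this illustration; HLS uses a far more compact tree-based index.

```python
# Sketch: naively maintained transitive closure that is patched on every arc insertion.
from collections import defaultdict


class IncrementalReachability:
    def __init__(self):
        self.reach = defaultdict(set)             # reach[x] = vertices reachable from x

    def add_arc(self, u, v):
        new = {v} | self.reach[v]                 # everything newly reachable through u -> v
        for x in list(self.reach) + [u]:
            if x == u or u in self.reach[x]:      # x already reached u (or is u itself)
                self.reach[x] |= new

    def can_reach(self, u, v):
        return v in self.reach[u]


if __name__ == "__main__":
    idx = IncrementalReachability()
    for arc in [(1, 2), (2, 3), (4, 2), (3, 1)]:  # contains the cycle 1 -> 2 -> 3 -> 1
        idx.add_arc(*arc)
    print(idx.can_reach(4, 1), idx.can_reach(1, 4))   # True False
```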

15.
Multiscale segmentation of SAR images based on spectral clustering and mixture models
When spectral clustering is applied to synthetic aperture radar (SAR) image segmentation, the eigenvalues and eigenvectors of the Laplacian matrix are hard to compute. To address this problem, a multiscale SAR image segmentation method is presented that exploits the statistics of the SAR image at multiple scales and consists of three steps: vertex coarsening, initial segmentation, and segmentation refinement. First, the graph derived from the SAR image is approximated by a sequence of coarsened graphs with decreasing numbers of vertices. Then spectral clustering is applied to the coarsest graph to obtain an initial segmentation. Finally, according to the statistical properties of the SAR image, the initial segmentation is refined scale by scale using classification posterior probabilities estimated with a mixture model, yielding the final segmentation of the SAR image. Experimental results demonstrate the effectiveness of the method.
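A minimal sketch of only the initial-segmentation step: spectral clustering on a small (already coarsened) affinity graph. The multiscale coarsening and the mixture-model refinement are omitted, and the affinity matrix is built from hypothetical per-vertex intensity statistics rather than a real SAR image.

```python
# Sketch: spectral clustering of a coarsened graph from a precomputed affinity matrix.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# hypothetical coarsened graph: 40 vertices, one intensity statistic each
stats = np.concatenate([rng.normal(0.2, 0.05, 20), rng.normal(0.8, 0.05, 20)])

# Gaussian affinity between the vertex statistics
affinity = np.exp(-((stats[:, None] - stats[None, :]) ** 2) / (2 * 0.1 ** 2))

labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)
print(labels)   # the two intensity populations separate into two classes
```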

16.
Most decision-making problems in which preferences are expressed linguistically assume that the linguistic terms are uniformly and symmetrically distributed, yet some problems require unbalanced linguistic term sets. To address this, a linguistic computation model based on a symbolic method is proposed. First, a weighted graph is constructed over a basic linguistic term set, and the vertices of the graph represent the unbalanced linguistic terms. Then a Manhattan distance formula between any two vertices of the graph is defined and used to compute distances between unbalanced linguistic terms. Finally, the model is applied to the technique for order preference by similarity to an ideal solution (TOPSIS), and a numerical example is given. The proposed method not only visualizes unbalanced linguistic terms but also outperforms the Euclidean distance measure when solving TOPSIS problems.
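A minimal sketch of the distance step under assumed coordinates: each unbalanced term is placed as a point (position over the basic term set, weight), and dissimilarity is measured with the Manhattan rather than the Euclidean distance; a toy TOPSIS-style closeness to an ideal term follows. The term set, coordinates, and closeness formula are illustrative assumptions, not the paper's data.

```python
# Sketch: Manhattan distance between unbalanced linguistic terms placed on a weighted graph.

# hypothetical unbalanced term set embedded over a 7-term basic set: (position, weight)
terms = {
    "none":      (0.0, 0.0),
    "low":       (1.0, 0.4),
    "medium":    (3.0, 1.0),
    "high":      (5.0, 0.7),
    "very high": (5.5, 0.3),   # unbalanced: terms crowd near the upper end
}


def manhattan(a: str, b: str) -> float:
    (x1, y1), (x2, y2) = terms[a], terms[b]
    return abs(x1 - x2) + abs(y1 - y2)


# toy TOPSIS-style use: closeness of each term to an ideal and an anti-ideal term
ideal, anti_ideal = "very high", "none"
for t in terms:
    d_plus, d_minus = manhattan(t, ideal), manhattan(t, anti_ideal)
    closeness = d_minus / (d_plus + d_minus) if d_plus + d_minus else 1.0
    print(f"{t:10s} closeness to ideal = {closeness:.2f}")
```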

17.
Graph shift regularization is a new and effective graph-based semi-supervised classification method, but its performance is closely related to the representation graphs. Since directed graphs can convey more information about the relationships between vertices than undirected graphs, an intelligent method called graph shift regularization with directed graphs (GSR-D) is presented for fault diagnosis of rolling bearings. To greatly improve the diagnosis performance of GSR-D, a directed and weighted k-nearest neighbor graph is first constructed by treating each sample (i.e., each vibration signal segment) as a vertex, in which the similarity between samples is measured by cosine distance instead of the commonly used Euclidean distance, and the edge weights are also defined by cosine distance instead of the commonly used heat kernel. Then, the labels of samples are considered as the graph signals indexed by the vertices of the representation graph. Finally, the states of unlabeled samples are predicted by finding a graph signal that has minimal total variation and satisfies the constraints given by the labeled samples as much as possible. Experimental results indicate that GSR-D is better and more stable than the standard convolutional neural network and support vector machine in rolling bearing fault diagnosis, and GSR-D has only two tuning parameters, which show a certain robustness.
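A minimal sketch of the graph-construction step described above: each signal segment is a vertex, similarity is measured with the cosine distance, and every vertex points to its k nearest neighbors, giving a directed, weighted k-NN graph. The random feature matrix stands in for real vibration-signal features, and the label-propagation step of GSR-D is not included.

```python
# Sketch: directed, weighted k-NN graph with cosine distance as similarity and edge weight.
import numpy as np
from sklearn.metrics.pairwise import cosine_distances


def directed_knn_graph(features: np.ndarray, k: int = 5):
    """Return {i: [(j, weight), ...]} with an arc from i to each of its k nearest neighbors."""
    dist = cosine_distances(features)
    np.fill_diagonal(dist, np.inf)               # a vertex is not its own neighbor
    graph = {}
    for i in range(len(features)):
        neighbors = np.argsort(dist[i])[:k]
        graph[i] = [(int(j), float(dist[i, j])) for j in neighbors]  # weights = cosine distance
    return graph


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    segments = rng.random((50, 128))             # 50 hypothetical signal segments, 128 features each
    g = directed_knn_graph(segments, k=5)
    print(g[0])
```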

18.
A linear time recognition algorithm for proper interval graphs
We propose a linear time recognition algorithm for proper interval graphs. The algorithm is based on a certain ordering of the vertices, called a bicompatible elimination ordering (BCO). Given a BCO of a biconnected proper interval graph G, we also propose a linear time algorithm to construct a Hamiltonian cycle of G.

19.
Structural and behavioral parameters of many real networks, such as social networks, are unpredictable, uncertain, and time-varying, and for these reasons deterministic graphs are too restrictive to model and solve most real-network problems. It seems that stochastic graphs, in which the weights associated with the vertices are random variables, might be better graph models for real-world networks. Once we use a stochastic graph as the model for a network, every feature of the graph such as path, spanning tree, clique, dominating set, and cover set should be treated as a stochastic feature. For example, choosing a stochastic graph as the model of an online social network and defining community structure in terms of cliques, the concept of a stochastic clique may be used to study the properties of community structures; defining the spread of influence according to the coverage of influential users, the concept of stochastic vertex covering may be used to study the spread of influence. In this article, minimum vertex covering in stochastic graphs is first defined, and then four learning-automata-based algorithms are proposed for solving the minimum vertex-covering problem in stochastic graphs where the probability distribution functions of the weights associated with the vertices are unknown. It is shown that through a proper choice of the parameters of the proposed algorithms, one can make the probability of finding a minimum vertex cover in a stochastic graph as close to unity as possible. Experimental results on synthetic stochastic graphs reveal that, at a certain confidence level, the proposed algorithms significantly outperform the standard sampling method in terms of the number of samples that need to be taken from the vertices of the stochastic graph.

20.
Narayan Vikas, Algorithmica, 2013, 67(2): 180–206
The compaction problem is to partition the vertices of an input graph G onto the vertices of a fixed target graph H, such that adjacent vertices of G remain adjacent in H, and every vertex and non-loop edge of H is covered by some vertex and edge of G respectively, i.e., the partition is a homomorphism of G onto H (except for the loop edges). Various computational complexity results, including both NP-completeness and polynomial-time solvability, have been presented earlier for this problem for various classes of target graphs H. In this paper, we pay attention to the input graphs G and present polynomial-time algorithms for the problem for certain classes of input graphs, keeping the target graph H general, as any reflexive or irreflexive graph. Our algorithms also give insight into which instances of the input graphs could make the problem NP-complete for certain target graphs. With the help of our results, we are able to further refine the structure of the input graph that would be necessary for the problem to possibly be NP-complete when the target graph is a cycle. Thus, when the target graph is a cycle, we enhance the class of input graphs for which the problem is polynomial-time solvable. We also present analogous results for a variation of the compaction problem, which we call the vertex-compaction problem. Using our results, we also provide important relationships between compaction, retraction, and vertex-compaction to cycles.
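A minimal brute-force sketch of the definition given above: a compaction of G onto H is a map from V(G) to V(H) that sends every edge of G to an edge (or loop) of H and covers every vertex and every non-loop edge of H. It is feasible only for very small graphs and is meant to illustrate the problem, not the paper's polynomial-time algorithms.

```python
# Sketch: brute-force test for the existence of a compaction of G onto H.
from itertools import product


def has_compaction(G_vertices, G_edges, H_vertices, H_edges):
    H_edge_set = {frozenset(e) for e in H_edges}
    H_nonloops = {e for e in H_edge_set if len(e) == 2}
    for phi in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, phi))
        images = {frozenset((f[u], f[v])) for u, v in G_edges}
        if not images <= H_edge_set:
            continue                      # some edge of G is not mapped to an edge/loop of H
        if set(phi) == set(H_vertices) and H_nonloops <= images:
            return True                   # onto the vertices and the non-loop edges of H
    return False


if __name__ == "__main__":
    # Can the 6-cycle be compacted onto the 3-cycle (an irreflexive triangle)?
    C6 = list(range(6)), [(i, (i + 1) % 6) for i in range(6)]
    K3 = list(range(3)), [(i, (i + 1) % 3) for i in range(3)]
    print(has_compaction(*C6, *K3))       # True: wrap the 6-cycle twice around the triangle
```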

