Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Many hard algorithmic problems dealing with graphs, circuits, formulas and constraints admit polynomial-time upper bounds if the underlying graph has small treewidth. The same problems often call for reducing the maximum degree of vertices, either to simplify theoretical arguments or to address practical concerns. Such degree reduction can be performed through a sequence of splittings of vertices, resulting in an expansion of the original graph. We observe that the treewidth of a graph may increase dramatically if the splittings are not performed carefully. In this context we address the following natural question: is it possible to reduce the maximum degree to a constant without substantially increasing the treewidth? We answer this question affirmatively. We prove that any simple undirected graph G=(V,E) admits an expansion G′=(V′,E′) with maximum degree at most 3 and tw(G′) ≤ tw(G)+1, where tw(·) denotes the treewidth of a graph. Furthermore, such an expansion has no more than 2|E|+|V| vertices and 3|E| edges, and it can be computed efficiently from a tree decomposition of G. We also construct a family of examples for which the increase by 1 in treewidth cannot be avoided.
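As a rough illustration of the splitting operation the abstract refers to, the following Python sketch replaces every vertex of degree greater than 3 by a path of copies, yielding an expansion with maximum degree 3. This is only the generic splitting step, not the paper's treewidth-preserving construction, and the function name is ours.

from collections import defaultdict

def split_high_degree_vertices(edges):
    """Replace each vertex of degree > 3 by a path of copies so every vertex
    of the expanded graph has degree <= 3.  This is the standard splitting
    operation; by itself it does not guarantee tw(G') <= tw(G) + 1."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    new_edges = []
    port = {}  # (vertex, neighbour) -> the copy of `vertex` serving that neighbour
    for v, nbrs in adj.items():
        if len(nbrs) <= 3:
            for u in nbrs:
                port[(v, u)] = v
        else:
            copies = [(v, i) for i in range(len(nbrs))]
            new_edges.extend(zip(copies, copies[1:]))   # chain the copies into a path
            for copy, u in zip(copies, nbrs):
                port[(v, u)] = copy
    for u, v in edges:                                  # reconnect the original edges
        new_edges.append((port[(u, v)], port[(v, u)]))
    return new_edges

if __name__ == "__main__":
    star = [(0, i) for i in range(1, 6)]                # centre has degree 5
    print(split_high_degree_vertices(star))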

2.
Efficiently answering reachability queries, which check whether one vertex can reach another in a directed graph, has been studied extensively in recent years. However, the graphs that people face and generate nowadays are growing so rapidly in size that simple algorithms, such as BFS and DFS, are no longer feasible. Although Refined Online Search algorithms can scale to large graphs, they all suffer from the false-positive problem. In this paper, we analyze the cause of false positives and propose an efficient High Dimensional coordinate generating method based on Graph Dominance Drawing (HD-GDD) to answer reachability queries in linear or even constant time. We conduct experiments on different graph structures and different graph sizes to fully evaluate the performance and behavior of our proposal. Empirical results demonstrate that our method outperforms state-of-the-art algorithms and can handle very large graphs.
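For contrast with the index-based approach described above, the baseline BFS reachability check, which costs O(V+E) per query and is exactly what does not scale to huge graphs, can be sketched as follows; the adjacency-list representation is our assumption.

from collections import deque

def reachable(adj, source, target):
    """Return True iff `target` is reachable from `source` in the directed
    graph `adj` (dict: vertex -> iterable of out-neighbours)."""
    if source == target:
        return True
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v == target:
                return True
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

if __name__ == "__main__":
    g = {"a": ["b"], "b": ["c"], "c": [], "d": ["a"]}
    print(reachable(g, "a", "c"), reachable(g, "c", "a"))   # True False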

3.
In the List H-Homomorphism Problem, for a graph H that is a parameter of the problem, an instance consists of an undirected graph G with a list constraint \(L(v) \subseteq V(H)\) for each vertex \(v \in V(G)\), and the objective is to determine whether there is a list H-homomorphism \(f:V(G) \to V(H)\), that is, \(f(v) \in L(v)\) for every \(v \in V(G)\) and \((f(u),f(v)) \in E(H)\) whenever \((u,v) \in E(G)\). We consider the problem of testing list H-homomorphisms in the following weighted setting: an instance consists of an undirected graph G, list constraints L, weights imposed on the vertices of G, and a map \(f:V(G) \to V(H)\) given via oracle access. The objective is to determine whether f is a list H-homomorphism or far from any list H-homomorphism. Farness is measured by the total weight of the vertices \(v \in V(G)\) at which f(v) must be changed so as to make f a list H-homomorphism. In this paper, we classify graphs H with respect to the number of queries to f required to test list H-homomorphisms. Specifically, we show that (i) list H-homomorphisms are testable with a constant number of queries if and only if H is a reflexive complete graph or an irreflexive complete bipartite graph, and (ii) list H-homomorphisms are testable with a sublinear number of queries if and only if H is a bi-arc graph.
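As a concrete reading of the definition (not the property-testing algorithm itself, which only queries f at a few vertices), a brute-force verifier for list H-homomorphisms might look like this sketch; the input encoding is our assumption.

def is_list_homomorphism(G_edges, H_edges, lists, f):
    """Check exhaustively that f : V(G) -> V(H) is a list H-homomorphism:
    f(v) must lie in L(v) for every v, and every edge of G must map to an
    edge of H (H is treated as undirected)."""
    H_adj = set()
    for a, b in H_edges:
        H_adj.add((a, b))
        H_adj.add((b, a))
    if any(f[v] not in allowed for v, allowed in lists.items()):
        return False
    return all((f[u], f[v]) in H_adj for u, v in G_edges)

if __name__ == "__main__":
    G = [(1, 2), (2, 3)]
    H = [("x", "y")]                                   # a single irreflexive edge
    L = {1: {"x"}, 2: {"y"}, 3: {"x", "y"}}
    print(is_list_homomorphism(G, H, L, {1: "x", 2: "y", 3: "x"}))  # True
    print(is_list_homomorphism(G, H, L, {1: "x", 2: "x", 3: "y"}))  # False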

4.
Why-not and why questions can be posed by database users to seek clarifications on unexpected query results. Specifically, why-not questions aim to explain why certain expected tuples are absent from the query results, while why questions try to clarify why certain unexpected tuples are present in the query results. This paper systematically explores why-not and why questions on reverse top-k queries, owing to their importance in multi-criteria decision making. We first formalize why-not questions on reverse top-k queries, which try to include the missing objects in the reverse top-k query results, and then propose a unified framework called WQRTQ to answer them. Our framework offers three solutions to cater for different application scenarios. Furthermore, we study why questions on reverse top-k queries, which aim to exclude undesirable objects from the reverse top-k query results, and extend WQRTQ to answer these questions efficiently, which demonstrates the flexibility of our proposed algorithms. Extensive experimental evaluation with both real and synthetic data sets verifies the effectiveness and efficiency of the presented algorithms under various experimental settings.
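For orientation, here is a minimal sketch of the plain reverse top-k query that the why-not and why questions are posed against: a user (weight vector) belongs to the answer when the query object ranks among that user's top k. Linear scoring with lower-is-better scores is an assumption of the sketch, and the WQRTQ framework itself is not implemented here.

def reverse_topk(products, weight_vectors, q, k):
    """Return the indices of the weight vectors (users) for which the query
    object q appears in the user's top-k ranking over `products` plus q.
    Scores are weighted sums of attributes; lower is assumed to be better."""
    def score(p, w):
        return sum(pi * wi for pi, wi in zip(p, w))

    answer = []
    for i, w in enumerate(weight_vectors):
        better = sum(1 for p in products if score(p, w) < score(q, w))
        if better < k:                  # q is within this user's top k
            answer.append(i)
    return answer

if __name__ == "__main__":
    products = [(1.0, 5.0), (4.0, 1.0), (3.0, 3.0)]
    users = [(0.9, 0.1), (0.1, 0.9), (0.5, 0.5)]
    print(reverse_topk(products, users, q=(2.0, 2.0), k=1))   # [2]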

5.
The distance graph G(n, 2, 1) is the graph whose vertices are identified with the two-element subsets of {1, 2, ..., n}, with two vertices connected by an edge whenever the corresponding subsets have exactly one common element. A random subgraph G_p(n, 2, 1) in the Erdős–Rényi model is obtained by selecting each edge of G(n, 2, 1) with probability p independently of the other edges. We find a lower bound on the independence number of the random subgraph G_{1/2}(n, 2, 1).
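The definitions above translate directly into code; a small sketch that builds G(n, 2, 1) and samples G_p(n, 2, 1) is given below (the independence-number bound itself is not computed).

import random
from itertools import combinations

def distance_graph_edges(n):
    """Edges of G(n, 2, 1): vertices are the 2-element subsets of {1, ..., n},
    adjacent whenever the subsets share exactly one element."""
    vertices = list(combinations(range(1, n + 1), 2))
    return [(a, b) for a, b in combinations(vertices, 2)
            if len(set(a) & set(b)) == 1]

def random_subgraph(edges, p, seed=None):
    """Keep each edge independently with probability p (Erdos-Renyi style)."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() < p]

if __name__ == "__main__":
    edges = distance_graph_edges(5)
    print(len(edges), "edges in G(5, 2, 1)")                     # 30
    print(len(random_subgraph(edges, 0.5, seed=1)), "edges kept in G_1/2(5, 2, 1)")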

6.
A grid graph \(G_{\mathrm{g}}\) is a finite vertex-induced subgraph of the two-dimensional integer grid \(G^\infty\). A rectangular grid graph R(m, n) is a grid graph with horizontal size m and vertical size n. A rectangular grid graph with a rectangular hole is a rectangular grid graph R(m, n) from which a rectangular grid subgraph R(k, l) has been removed. The Hamiltonian path problem for general grid graphs is NP-complete. In this paper, we give necessary conditions for the existence of a Hamiltonian path between two given vertices in an odd-sized rectangular grid graph with a rectangular hole. In addition, we show how such paths can be computed in linear time.

7.
This paper proposes a new spatial query called the reverse direction-based surrounder (RDBS) query, which retrieves the users who see a given point of interest (POI) as one of their direction-based surrounders (DBSs). From a user's viewpoint, one POI is dominated by a second POI if the two POIs are directionally close and the first POI is farther from the user than the second. Two POIs are directionally close if their included angle with respect to the user is smaller than a given angular threshold. If a POI is not dominated by any other POI, it is a DBS of the user. We also propose an extended query called the competitor RDBS query: POIs that share the same RDBSs with another POI are defined as competitors of that POI. We design algorithms to answer both the RDBS queries and the competitor queries. The experimental results show that the proposed algorithms can answer the queries efficiently.
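A direct, quadratic-time reading of the DBS definition (not the paper's query-processing algorithms) is sketched below; the coordinates and threshold value are illustrative assumptions.

import math

def direction_based_surrounders(user, pois, theta):
    """Return the POIs that are direction-based surrounders (DBSs) of `user`:
    a POI is dominated if some other POI is directionally close to it (angle
    at the user below `theta`, in radians) and is nearer to the user."""
    def dist(p):
        return math.hypot(p[0] - user[0], p[1] - user[1])

    def angle_between(p, q):
        a1 = math.atan2(p[1] - user[1], p[0] - user[0])
        a2 = math.atan2(q[1] - user[1], q[0] - user[0])
        diff = abs(a1 - a2) % (2 * math.pi)
        return min(diff, 2 * math.pi - diff)

    return [p for p in pois
            if not any(q is not p
                       and angle_between(p, q) < theta
                       and dist(q) < dist(p)
                       for q in pois)]

if __name__ == "__main__":
    user = (0.0, 0.0)
    pois = [(1.0, 0.1), (3.0, 0.0), (0.0, 2.0)]
    print(direction_based_surrounders(user, pois, theta=math.radians(30)))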

8.
Images take up a lot of storage space; in many practical situations, we cannot store all original images and have to use compression. Moreover, in many such situations, the compression ratio provided by even the best lossless compression is not sufficient, so we have to use lossy compression. In lossy compression, the reconstructed image \(\tilde{I}\) is, in general, different from the original image I. There exist many different lossy compression methods, and most of these methods have several tunable parameters. In different situations, different methods lead to reconstructions of different quality, so it is important to select, in each situation, the best compression method. A natural idea is to select the compression method for which the average value of some metric \(d(I,\tilde{I})\) is the smallest possible. The question is then: which quality metric should we choose? In this paper, we show that under certain reasonable symmetry conditions, the L_p metrics \(d(I,\tilde{I}) = \int |I(x) - \tilde{I}(x)|^p\,dx\) are the best, and that the optimal value of p can be selected depending on the expected relative size r of the informative part of the image.
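In discrete form the chosen family of metrics is straightforward to evaluate; the sketch below follows the formula as stated in the abstract (the usual L_p norm would additionally take the p-th root of the sum).

def lp_metric(image, reconstruction, p):
    """Discrete analogue of d(I, I~) = integral |I(x) - I~(x)|^p dx between an
    original image and its lossy reconstruction, both given as equally sized
    2-D lists of pixel intensities."""
    return sum(abs(a - b) ** p
               for row_i, row_r in zip(image, reconstruction)
               for a, b in zip(row_i, row_r))

if __name__ == "__main__":
    original      = [[10, 20], [30, 40]]
    reconstructed = [[12, 18], [29, 43]]
    for p in (1, 2):
        print(f"p = {p}:", lp_metric(original, reconstructed, p))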

9.
Resource-conscious technologies for cutting sheet material include the ICP and ECP technologies, which allow for aligning fragments of the contours of cutouts. In this work, we present a mathematical model for the problem of cutting out parts with these technologies and algorithms for finding cutting-tool routes that satisfy the technological constraints. We give a solution to the problem of representing a cutting plan as a plane graph G = (V, F, E) that is a homeomorphic image of the cutting plan. This allows us to formalize the technological constraints on the trajectory of cutting the parts according to the cutting plan and to propose a series of algorithms for constructing a route in the graph G = (V, F, E) that is an image of an admissible trajectory. Using the known coordinates of the preimages of the vertices of G = (V, F, E) and the locations of the fragments of the cutting plan that are preimages of its edges, the resulting route can be interpreted as the cutting tool's trajectory. The proposed algorithms for finding routes in a connected graph G have polynomial computational complexity. To find the optimal route in a disconnected graph G, we need to solve, for every dividing face f of G, a travelling salesman problem on the set of faces incident to f.

10.
Organizing efficient self-diagnosis of multicomponent computer and communication systems of diverse structures has always attracted the attention of researchers and engineers. A method for solving such problems is presented in this paper, using the example of a system whose structure is modeled by a uniform ordinary bipartite graph of diameter d = 3, arbitrary degree s > 1, and any number n of vertices, where n = s(s − 1) + 1. The method requires checking (s − 1)³ graph loops of length eight each, which is smaller than the number s²(s − 1) + s of checks of single graph edges.

11.
Recently, Reverse k Nearest Neighbor (RkNN) queries, which return every object for which the query is one of its k nearest neighbors, have been studied extensively in the database research community. However, the RkNN query cannot retrieve spatio-textual objects, which are described by their spatial location and a set of keywords. Researchers therefore proposed the RSTkNN query to find such objects, taking both spatial and textual similarity into consideration. However, the RSTkNN query can neither control the size of the answer set nor rank the answers by their degree of influence on the query. In this paper, we propose a new problem, the Ranked Reverse Boolean Spatial Keyword Nearest Neighbors (Ranked-RBSKNN) query, which considers both spatial similarity and textual relevance and returns the t answers with the greatest degree of influence. We propose a separate index and a hybrid index to process such queries efficiently. Experimental results on different real-world and synthetic datasets show that our approaches achieve better performance.

12.
k-nearest neighbor (k-NN) queries are well known and widely used in a plethora of applications. However, the original definition of k-NN queries has no concern for the diversity of the answer set with respect to the user's interests. For instance, travelers may be looking for touristic sites that are close to where they are, but that would also lead them to see different parts of the city. Likewise, if one is looking for restaurants close by, it may be more interesting to learn about restaurants of different categories or ethnicities that are nonetheless relatively close. The interesting novel aspect of this type of query is that there are two competing criteria to be optimized: closeness and diversity. We propose two approaches that leverage the notion of linear skyline queries in order to find the k diverse nearest neighbors within a radius r from a given query point, or (k, r)-DNNs for short. Our proposed approaches return a relatively small set containing all optimal solutions for any linear combination of the weights a user could give to the two competing criteria, and we consider three different notions of diversity: spatial, categorical and angular. Our experiments, varying a number of parameters and exploring synthetic and real datasets in both Euclidean space and road networks, show that our approaches are several orders of magnitude faster than a straightforward approach.
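As a simple point of reference (not the linear-skyline method of the paper, which returns all optima for every weighting at once), a greedy trade-off for a single fixed weight alpha between closeness and spatial diversity could look like this sketch.

import math

def diverse_knn(query, points, k, r, alpha=0.5):
    """Greedy sketch of a (k, r)-DNN for one fixed weight: among the points
    within radius r of `query`, repeatedly pick the point with the best
    combination of closeness to the query and distance to the points
    already chosen."""
    candidates = [p for p in points if math.dist(p, query) <= r]
    chosen = []
    while candidates and len(chosen) < k:
        def score(p):
            diversity = min((math.dist(p, c) for c in chosen), default=0.0)
            return -alpha * math.dist(p, query) + (1 - alpha) * diversity
        best = max(candidates, key=score)
        chosen.append(best)
        candidates.remove(best)
    return chosen

if __name__ == "__main__":
    pts = [(1, 0), (1.1, 0.1), (0, 1), (-1, 0), (5, 5)]
    print(diverse_knn((0, 0), pts, k=3, r=2.0))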

13.
We propose techniques for processing SPARQL queries over a large RDF graph in a distributed environment, adopting a “partial evaluation and assembly” framework. Answering a SPARQL query Q is equivalent to finding subgraph matches of the query graph Q over the RDF graph G. Based on properties of subgraph matching over a distributed graph, we introduce local partial matches as partial answers in each fragment of the RDF graph G. For assembly, we propose two methods: centralized and distributed assembly. We analyze our algorithms both theoretically and experimentally. Extensive experiments over both real and benchmark RDF repositories of billions of triples confirm that our method is superior to the state-of-the-art methods in both performance and scalability.

14.
Processor (vertex) faults and link (edge) faults may occur when a network is in use, so it is meaningful to consider networks (graphs) with faulty processors and/or links. A k-regular Hamiltonian and Hamiltonian-connected graph G is optimal fault-tolerant Hamiltonian and Hamiltonian-connected if G remains Hamiltonian after removing at most k − 2 vertices and/or edges and remains Hamiltonian-connected after removing at most k − 3 vertices and/or edges. In this paper, we investigate the construction of optimal fault-tolerant Hamiltonian and optimal fault-tolerant Hamiltonian-connected graphs. As a result, some of the generalized hypercubes, twisted cubes, crossed cubes, and Möbius cubes are shown to be optimal fault-tolerant Hamiltonian and optimal fault-tolerant Hamiltonian-connected.

15.
The Doob graph D(m, n), where m > 0, is the Cartesian product of m copies of the Shrikhande graph and n copies of the complete graph K_4 on four vertices. The Doob graph D(m, n) is a distance-regular graph with the same parameters as the Hamming graph H(2m + n, 4). We give a characterization of MDS codes in Doob graphs D(m, n) with code distance at least 3. Up to equivalence, there are \(m^3/36 + 7m^2/24 + 11m/12 + 1 - (m \bmod 2)/8 - (m \bmod 3)/9\) MDS codes with code distance 2m + n in D(m, n), two codes with distance 3 in each of D(2, 0) and D(2, 1) and with distance 4 in D(2, 1), and one code with distance 3 in each of D(1, 2) and D(1, 3) and with distance 4 in each of D(1, 3) and D(2, 2).
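The counting formula quoted above always evaluates to an integer, which is easy to check with exact rational arithmetic; the short sketch below does exactly that for small m.

from fractions import Fraction as F

def mds_code_count(m):
    """Number of MDS codes with code distance 2m + n in D(m, n), as stated in
    the abstract: m^3/36 + 7m^2/24 + 11m/12 + 1 - (m mod 2)/8 - (m mod 3)/9."""
    return (F(m**3, 36) + F(7 * m**2, 24) + F(11 * m, 12) + 1
            - F(m % 2, 8) - F(m % 3, 9))

if __name__ == "__main__":
    for m in range(1, 6):
        print(m, mds_code_count(m))   # 2, 4, 7, 11, 16 -- all integers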

16.
In recent years, many layered indexing techniques over distributed hash table (DHT)-based peer-to-peer (P2P) systems have been proposed to realize distributed range search. In this paper, we present a fault-tolerant, constant-degree, dynamic Distributed Spatial Data Structure called DSDS that supports orthogonal range search on a set of N d-dimensional points published on n nodes. We describe a total-order binary relation algorithm to publish points among supernodes and determine supernode keys. A non-redundant rainbow skip graph is used to coordinate message passing among nodes. The worst-case orthogonal range search cost in a d-dimensional DSDS with n nodes is \(O(\log n + m + K/B)\) messages, where m is the number of nodes intersecting the query, K is the number of points reported in range, and B is the number of points that fit in one message. A complete backup copy of the data points stored in other nodes provides redundancy for our DSDS. This redundancy permits answering a range search query in the case of failure of a single node, and after a single node failure the DSDS routing system can be recovered to a fully functional state at a cost of O(log n) messages. Backup sets in DSDS nodes are used first to process a query in the most efficient dimension, and then to process a query involving the data in a failed node in d-dimensional space. The DSDS search algorithm can thus process queries in d-dimensional space while tolerating the failure of one node. The worst-case search cost with a failed node increases to \(O(d\log n + dm + K/B)\) messages for d dimensions.

17.
A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices of the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time, so it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex-state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs at varying scales. For a graph of 16 million vertices, we observe relative throughput gains reaching a million times over the non-incremental algorithms.
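The static decomposition that these incremental algorithms maintain can be computed with the classic peeling procedure; the sketch below returns each vertex's core number and recomputes everything from scratch, which is precisely the cost the incremental approach avoids.

import heapq

def core_numbers(adj):
    """Peeling algorithm: repeatedly remove a vertex of minimum remaining
    degree; the running maximum of those degrees is its core number.
    `adj` is an undirected graph given as dict: vertex -> set of neighbours."""
    degree = {v: len(ns) for v, ns in adj.items()}
    heap = [(d, v) for v, d in degree.items()]
    heapq.heapify(heap)
    removed, core, current = set(), {}, 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != degree[v]:
            continue                      # stale heap entry
        current = max(current, d)
        core[v] = current
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                degree[u] -= 1
                heapq.heappush(heap, (degree[u], u))
    return core

if __name__ == "__main__":
    g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
    print(core_numbers(g))   # triangle 1-2-3 forms a 2-core; vertex 4 has core 1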

18.
The starting point of our research is the following problem: given a doubling metric M = (V, d), can one (efficiently) find an unweighted graph G′ = (V′, E′) with V ⊆ V′ whose shortest-path metric d′ is still doubling and agrees with d on V × V? It is simple to show that the answer to this question is negative if distances must be preserved exactly. However, allowing a (1+ε) distortion between d and d′ lets us bypass this hurdle and obtain an unweighted graph G′ whose doubling dimension is at most a factor O(log ε^{-1}) larger than the doubling dimension of M. More generally, this paper gives algorithms that construct graphs G′ whose convex (or geodesic) closure has doubling dimension close to that of M, and whose shortest-path distances closely approximate those of M when restricted to V × V. Similar results are shown when the metric M is an additive (tree) metric and the graph G′ is restricted to be a tree.

19.
A graph is König-Egerváry if the size of a minimum vertex cover equals that of a maximum matching in the graph. These graphs have been studied extensively from a graph-theoretic point of view. In this paper, we introduce and study the algorithmic complexity of finding König-Egerváry subgraphs of a given graph. In particular, given a graph G and a nonnegative integer k, we are interested in the following questions:
  1. Does there exist a set of k vertices (edges) whose deletion makes the graph König-Egerváry?
  2. Does there exist a set of k vertices (edges) that induce a König-Egerváry subgraph?
We show that these problems are NP-complete and study their complexity from the points of view of approximation and parameterized complexity. Towards this end, we first study the algorithmic complexity of Above Guarantee Vertex Cover, where one is interested in minimizing the additional number of vertices needed beyond the maximum matching size for the vertex cover. Further, while studying the parameterized complexity of the problem of deleting k vertices to obtain a König-Egerváry graph, we show a number of interesting structural results on matchings and vertex covers which could be useful in other contexts.
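For very small graphs, the defining König-Egerváry equality can be verified by brute force, as in the sketch below; polynomial-time recognition is possible but not attempted here.

from itertools import combinations

def max_matching_size(edges):
    """Brute-force maximum matching size (suitable only for tiny graphs)."""
    for r in range(len(edges), 0, -1):
        for subset in combinations(edges, r):
            used = [v for e in subset for v in e]
            if len(used) == len(set(used)):      # the edges are pairwise disjoint
                return r
    return 0

def min_vertex_cover_size(vertices, edges):
    """Brute-force minimum vertex cover size (suitable only for tiny graphs)."""
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                return r
    return len(vertices)

def is_konig_egervary(vertices, edges):
    """A graph is Konig-Egervary iff min vertex cover size = max matching size."""
    return min_vertex_cover_size(vertices, edges) == max_matching_size(edges)

if __name__ == "__main__":
    path = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)])     # bipartite, hence KE
    triangle = ([1, 2, 3], [(1, 2), (2, 3), (1, 3)])    # matching 1, cover 2
    print(is_konig_egervary(*path), is_konig_egervary(*triangle))   # True False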

20.
A degree-constrained graph orientation of an undirected graph G is an assignment of a direction to each edge in G such that the outdegree of every vertex in the resulting directed graph satisfies a specified lower and/or upper bound. Such graph orientations have been studied for a long time, and various characterizations of their existence are known. In this paper, we consider four related optimization problems introduced in Asahiro et al. (LNCS 7422, 332–343, 2012): for any fixed non-negative integer W, the problems MAX W-LIGHT, MIN W-LIGHT, MAX W-HEAVY, and MIN W-HEAVY take as input an undirected graph G and ask for an orientation of G that maximizes or minimizes the number of vertices with outdegree at most W or at least W, respectively. As shown in Asahiro et al. (LNCS 7422, 332–343, 2012) …
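As a small illustration of the objective these four problems optimize, the sketch below evaluates the MAX W-LIGHT objective for a given orientation; the edge-list encoding of an orientation is our assumption.

def w_light_count(n, oriented_edges, W):
    """MAX W-LIGHT objective: given an orientation of an undirected graph on
    vertices 0..n-1, with each edge listed once as (tail, head), count the
    vertices whose outdegree is at most W."""
    outdeg = [0] * n
    for tail, _head in oriented_edges:
        outdeg[tail] += 1
    return sum(1 for d in outdeg if d <= W)

if __name__ == "__main__":
    # orienting a triangle cyclically gives every vertex outdegree 1
    orientation = [(0, 1), (1, 2), (2, 0)]
    print(w_light_count(3, orientation, W=1))   # 3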
