Similar Literature
20 similar articles found (search time: 140 ms).
1.
2.
Given a simple undirected graph G = (V, E) and an integer k < |V|, the Sparsest k-Subgraph problem asks for a set of k vertices that induces the minimum number of edges. As a generalization of the classical independent set problem, Sparsest k-Subgraph is NP-hard and not even approximable in general graphs unless P = NP. Thus, we investigate Sparsest k-Subgraph in graph classes where independent set is polynomial-time solvable, such as subclasses of perfect graphs. Our two main results are the NP-hardness of Sparsest k-Subgraph on chordal graphs, and a greedy 2-approximation algorithm. Finally, we also show how to derive a PTAS for Sparsest k-Subgraph on proper interval graphs.
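For intuition, here is a minimal Python sketch of a greedy rule in this spirit: repeatedly add the vertex that contributes the fewest new induced edges. It illustrates the greedy idea only and is not the specific 2-approximation analyzed in the paper; the adjacency-set dictionary representation is an assumption of the sketch.

```python
def greedy_sparse_subset(adj, k):
    """Greedy heuristic for Sparsest k-Subgraph (illustrative sketch only).

    adj: dict mapping each vertex to a set of its neighbours.
    k:   number of vertices to select.
    Returns (selected vertices, number of induced edges).
    """
    chosen = set()
    for _ in range(k):
        # Pick the vertex that adds the fewest edges to the current selection.
        v = min((u for u in adj if u not in chosen),
                key=lambda u: len(adj[u] & chosen))
        chosen.add(v)
    induced = sum(len(adj[v] & chosen) for v in chosen) // 2
    return chosen, induced

# Example: the path 1-2-3-4-5; selecting {1, 3, 5} induces 0 edges.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(greedy_sparse_subset(adj, 3))
```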

3.
Two mobile agents, starting from different nodes of a network at possibly different times, have to meet at the same node. This problem is known as rendezvous. Agents move in synchronous rounds. Each agent has a distinct integer label from the set \(\{1,\ldots ,L\}\). Two main efficiency measures of rendezvous are its time (the number of rounds until the meeting) and its cost (the total number of edge traversals). We investigate tradeoffs between these two measures. A natural benchmark for both time and cost of rendezvous in a network is the number of edge traversals needed for visiting all nodes of the network, called the exploration time. Hence we express the time and cost of rendezvous as functions of an upper bound E on the time of exploration (where E and a corresponding exploration procedure are known to both agents) and of the size L of the label space. We present two natural rendezvous algorithms. Algorithm Cheap has cost O(E) (and, in fact, a version of this algorithm for the model where the agents start simultaneously has cost exactly E) and time O(EL). Algorithm Fast has both time and cost \(O(E\log L)\). Our main contributions are lower bounds showing that, perhaps surprisingly, these two algorithms capture the tradeoffs between time and cost of rendezvous almost tightly. We show that any deterministic rendezvous algorithm of cost asymptotically E (i.e., of cost \(E+o(E)\)) must have time \(\varOmega (EL)\). On the other hand, we show that any deterministic rendezvous algorithm with time complexity \(O(E\log L)\) must have cost \(\varOmega (E\log L)\).

4.
What do the k-core structures of real-world graphs look like? What are the common patterns and the anomalies? How can we exploit them for applications? A k-core is the maximal subgraph in which all vertices have degree at least k. This concept has been applied to such diverse areas as hierarchical structure analysis, graph visualization, and graph clustering. Here, we explore pervasive patterns related to k-cores and emerging in graphs from diverse domains. Our discoveries are: (1) Mirror Pattern: coreness (i.e., the maximum k such that a vertex belongs to the k-core) is strongly correlated with degree. (2) Core-Triangle Pattern: degeneracy (i.e., the maximum k such that the k-core exists) obeys a 3-to-1 power law with respect to the count of triangles. (3) Structured Core Pattern: degeneracy-cores are not cliques but have non-trivial structures such as core-periphery and communities. Our algorithmic contributions show the usefulness of these patterns: (1) Core-A, which measures the deviation from Mirror Pattern, successfully spots anomalies in real-world graphs; (2) Core-D, a single-pass streaming algorithm based on Core-Triangle Pattern, accurately estimates degeneracy up to 12\(\times \) faster than its competitor; (3) Core-S, inspired by Structured Core Pattern, identifies influential spreaders up to 17\(\times \) faster than its competitors with comparable accuracy.
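For reference, the k-core decomposition underlying these patterns can be computed by the standard peeling procedure: repeatedly remove a vertex of minimum remaining degree and record the degree threshold at which it is removed. A small Python sketch (assuming an adjacency-set dictionary; this is not the streaming algorithm Core-D described above):

```python
def core_numbers(adj):
    """Compute the coreness of every vertex by iterative peeling.

    adj: dict mapping each vertex to a set of its neighbours.
    Returns a dict vertex -> coreness (the largest k such that the vertex
    belongs to the k-core). The degeneracy of the graph is the maximum coreness.
    """
    degree = {v: len(adj[v]) for v in adj}
    remaining = set(adj)
    coreness = {}
    current_k = 0
    while remaining:
        # Peel a vertex of minimum remaining degree.
        v = min(remaining, key=degree.get)
        current_k = max(current_k, degree[v])
        coreness[v] = current_k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                degree[u] -= 1
    return coreness

# Example: a triangle with a pendant vertex attached to vertex 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(core_numbers(adj))  # vertex 4 has coreness 1; triangle vertices 1, 2, 3 have coreness 2
```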

5.
In the framework of parameterized complexity, exploring how one parameter affects the complexity of a different parameterized (or unparameterized) problem is of general interest. A well-developed example is the investigation of how the parameter treewidth influences the complexity of (other) graph problems. The reason why such investigations are of general interest is that real-world input distributions for computational problems often inherit structure from the natural computational processes that produce the problem instances (not necessarily in obvious or well-understood ways). The max leaf number ml(G) of a connected graph G is the maximum number of leaves in a spanning tree for G. Exploring questions analogous to the well-studied case of treewidth, we can ask: how hard is it to solve 3-Coloring, Hamilton Path, Minimum Dominating Set, Minimum Bandwidth, or many other problems, for graphs of bounded max leaf number? What optimization problems are W[1]-hard under this parameterization? We do two things:
  (1) We describe much improved FPT algorithms for a large number of graph problems, for input graphs G for which ml(G) ≤ k, based on the polynomial-time extremal structure theory canonically associated to this parameter. We consider improved algorithms both from the point of view of kernelization bounds and in terms of improved fixed-parameter tractable (FPT) runtimes O*(f(k)).
  (2) The way that we obtain these concrete algorithmic results is general and systematic. We describe the approach and raise programmatic questions.

6.
Most entropy notions \({H(.)}\), like Shannon or min-entropy, satisfy a chain rule stating that for random variables \({X, Z,}\) and \({A}\) we have \({H(X|Z,A)\ge H(X|Z)-|A|}\). That is, by conditioning on \({A}\) the entropy of \({X}\) can decrease by at most the bitlength \({|A|}\) of \({A}\). Such chain rules are known to hold for some computational entropy notions like Yao's and unpredictability entropy. For HILL entropy, the computational analogue of min-entropy, the chain rule is of special interest and has found many applications, including leakage-resilient cryptography, deterministic encryption, and memory delegation. These applications rely on restricted special cases of the chain rule. Whether the chain rule for conditional HILL entropy holds in general was an open problem, for which we give a strong negative answer: we construct joint distributions \({(X,Z,A)}\), where \({A}\) is a distribution over a single bit, such that the HILL entropy \({H^{\mathrm{HILL}}(X|Z)}\) is large but \({H^{\mathrm{HILL}}(X|Z,A)}\) is basically zero. Our counterexample makes only the minimal assumption that \({{\mathbf{NP}} \nsubseteq{\mathbf{P/poly}}}\). Under the stronger assumption that injective one-way functions exist, we can make all the distributions efficiently samplable. Finally, we show that some more sophisticated cryptographic objects, such as lossy functions, can be used to sample a distribution constituting a counterexample to the chain rule while making only a single invocation of the underlying object.

7.
Despite many algorithms for embedding graphs on unbounded grids, only a few results on embedding graphs on restricted grids have been published. In this paper, we study the problem of embedding paths and cycles on solid grid graphs. We show that a cycle of length k is unit-length embeddable on a solid grid graph G if k is an even integer between four and the length of the longest cycle of G. In addition, our result shows that a path of length k is unit-length embeddable on G, between its two given vertices s and t, if \(k\le L\) and \(k\equiv L (\mathrm{mod}\ 2)\), where L is the length of the longest path between s and t. Our two algorithms show that such embeddings can be found in linear time for cycles and quadratic time for paths, with respect to the size of the graph G. In the case of rectangular grid graphs, the running times of the algorithms can be improved to \(O(k)\) and \(O(k^2)\), respectively. In addition, we extend our results to \(m\times n\times o\) 3D grids. An application of our result is interconnection network mapping in parallel processing.

8.
In the List H-Homomorphism Problem, for a graph H that is a parameter of the problem, an instance consists of an undirected graph G with a list constraint \({L(v) \subseteq V(H)}\) for each vertex \({v \in V(G)}\), and the objective is to determine whether there is a list H-homomorphism \({f:V(G) \to V(H)}\), that is, \({f(v) \in L(v)}\) for every \({v \in V(G)}\) and \({(f(u),f(v)) \in E(H)}\) whenever \({(u,v) \in E(G)}\). We consider the problem of testing list H-homomorphisms in the following weighted setting: an instance consists of an undirected graph G, list constraints L, weights imposed on the vertices of G, and a map \({f:V(G) \to V(H)}\) given as oracle access. The objective is to determine whether f is a list H-homomorphism or far from any list H-homomorphism, where farness is measured by the total weight of the vertices \({v \in V(G)}\) at which f(v) must be changed so as to make f a list H-homomorphism. In this paper, we classify graphs H with respect to the number of queries to f required to test list H-homomorphisms. Specifically, we show that (i) list H-homomorphisms are testable with a constant number of queries if and only if H is a reflexive complete graph or an irreflexive complete bipartite graph, and (ii) list H-homomorphisms are testable with a sublinear number of queries if and only if H is a bi-arc graph.
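The definition translates directly into a verifier. The Python sketch below checks whether a given map f is a list H-homomorphism by reading f everywhere; it is not the property-testing algorithm of the paper, which queries f at only a few vertices, and the graph encodings are assumptions of the sketch.

```python
def is_list_homomorphism(G_edges, H_edges, lists, f):
    """Check whether f is a list H-homomorphism of G.

    G_edges, H_edges: iterables of undirected edges (u, v).
    lists: dict mapping each vertex v of G to its allowed set L(v) in V(H).
    f: dict mapping V(G) -> V(H).
    """
    H_adj = set()
    for (a, b) in H_edges:
        H_adj.add((a, b))
        H_adj.add((b, a))
    # List constraints: f(v) must lie in L(v).
    if any(f[v] not in lists[v] for v in lists):
        return False
    # Edge constraints: every edge of G must map onto an edge of H.
    return all((f[u], f[v]) in H_adj for (u, v) in G_edges)

# Example: map the path a-b-c into the single edge {0, 1},
# with lists restricting a and c to 0 and b to 1.
G_edges = [("a", "b"), ("b", "c")]
H_edges = [(0, 1)]
lists = {"a": {0}, "b": {1}, "c": {0}}
print(is_list_homomorphism(G_edges, H_edges, lists, {"a": 0, "b": 1, "c": 0}))  # True
```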

9.
Human experts as well as autonomous agents in a referral network must decide whether to accept a task or refer it to a more appropriate expert, and if so, to whom. In order for the referral network to improve over time, the experts must learn to estimate the topical expertise of other experts. This article extends concepts from multi-agent reinforcement learning and active learning to distributed learning in referral networks. Among a wide array of algorithms evaluated, Distributed Interval Estimation Learning (DIEL), based on Interval Estimation Learning, was found to be superior for learning appropriate referral choices, compared to ε-Greedy, Q-learning, Thompson Sampling and Upper Confidence Bound (UCB) methods. In addition to a synthetic data set, we compare the performance of the stronger learning-to-refer algorithms on a referral network of high-performance Stochastic Local Search (SLS) SAT solvers where expertise does not obey any known parameterized distribution. An evaluation of overall network performance and a robustness analysis are conducted across the learning algorithms, with an emphasis on capacity constraints and evolving networks, in which experts with known expertise drop off and new experts of unknown performance enter; such situations arise in real-world scenarios but were heretofore largely ignored.
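As a rough illustration of the interval-estimation idea behind DIEL (not the article's algorithm itself), an agent can score each candidate referral target by the upper end of a confidence interval around its observed mean reward and refer to the highest-scoring target, so that uncertainty drives exploration. A hedged Python sketch, with the expert interface and the interval width z as assumptions:

```python
import math

class IntervalEstimationChooser:
    """Interval-estimation action selection (illustrative sketch of the
    general IEL idea, not the DIEL algorithm from the article)."""

    def __init__(self, experts, z=1.96):
        self.z = z                                        # confidence-interval width
        self.stats = {e: [0, 0.0, 0.0] for e in experts}  # count, sum, sum of squares

    def choose(self):
        def upper_bound(e):
            n, s, sq = self.stats[e]
            if n < 2:
                return float("inf")                       # try unexplored experts first
            mean = s / n
            var = max(sq / n - mean * mean, 0.0)
            return mean + self.z * math.sqrt(var / n)
        return max(self.stats, key=upper_bound)

    def update(self, expert, reward):
        st = self.stats[expert]
        st[0] += 1
        st[1] += reward
        st[2] += reward * reward

# Toy usage: refer tasks among three experts with unknown skill.
chooser = IntervalEstimationChooser(["e1", "e2", "e3"])
for _ in range(100):
    e = chooser.choose()
    # ... observe whether the referred expert solved the task ...
    chooser.update(e, reward=1.0 if e == "e2" else 0.0)   # toy feedback
print(chooser.choose())  # converges towards "e2" in this toy setting
```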

10.
Local search algorithms based on the Configuration Checking (CC) strategy have been shown to be efficient in solving satisfiable random k-SAT instances. The purpose of the CC strategy is to avoid the cycling problem, i.e., revisiting already flipped variables too soon; this is done by considering the neighborhood of the formula's variables. In this paper, we propose to improve the CC strategy on the basis of an empirical study of a powerful algorithm using this strategy. The first improvement introduces a new and simple criterion, which refines the selection of the variables to flip for 3-SAT instances. The second improvement is achieved by using the powerful local search algorithm Novelty with the adaptive noise setting. This algorithm enhances the efficiency of the intensification and diversification phases when solving k-SAT instances with k ≥ 4. We name the resulting local search algorithm Ncca+ and show its effectiveness on satisfiable random k-SAT instances from the SAT Challenge 2012. Ncca+ won the bronze medal of the random SAT track of the SAT Competition 2013.
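For context, one common realization of the CC strategy in SAT local search (shown here only as a sketch of the general idea, not of Ncca+) keeps a per-variable "configuration changed" flag: a variable becomes eligible to flip again only after one of its clause neighbours has been flipped since its own last flip. A small Python sketch of that bookkeeping:

```python
def cc_flip_candidates(num_vars, clauses, flip_history):
    """Illustrative Configuration Checking bookkeeping (not Ncca+ itself).

    num_vars:     number of Boolean variables, numbered 1..num_vars.
    clauses:      list of clauses, each a list of signed literals (e.g. [1, -3, 4]).
    flip_history: sequence of variables flipped so far, in order.
    Returns the set of variables whose configuration has changed since
    their last flip (the CC-eligible flip candidates).
    """
    # Neighbours = variables sharing at least one clause.
    neighbours = {v: set() for v in range(1, num_vars + 1)}
    for clause in clauses:
        vars_in_clause = {abs(lit) for lit in clause}
        for v in vars_in_clause:
            neighbours[v] |= vars_in_clause - {v}

    conf_changed = {v: True for v in range(1, num_vars + 1)}  # all eligible initially
    for v in flip_history:
        conf_changed[v] = False            # v was just flipped: freeze it...
        for u in neighbours[v]:
            conf_changed[u] = True         # ...but wake up its clause neighbours
    return {v for v, ok in conf_changed.items() if ok}

# Example: after flipping variable 1, variable 1 itself is no longer eligible.
clauses = [[1, -2, 3], [-1, 4], [2, -5]]
print(cc_flip_candidates(5, clauses, flip_history=[1]))  # {2, 3, 4, 5}
```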

11.
Dataflow programs are widely used. Each program is a directed graph where nodes are computations and edges indicate the flow of data. In prior work, we reverse-engineered legacy dataflow programs by deriving their optimized implementations from a simple specification graph using graph transformations called refinements and optimizations. In MDE speak, our derivations were PIM-to-PSM mappings. In this paper, we show how extensions complement refinements, optimizations, and PIM-to-PSM derivations to make the process of reverse engineering complex legacy dataflow programs tractable. We explain how optional functionality in transformations can be encoded, thereby enabling us to encode product lines of transformations as well as product lines of dataflow programs. We describe the implementation of extensions in the \(\mathtt{ReFlO}\) tool and present two non-trivial case studies as evidence of our work's generality.

12.
Ensembles of classifiers are among the best performing classifiers available in many data mining applications, including the mining of data streams. Rather than training one classifier, multiple classifiers are trained, and their predictions are combined according to a given voting schedule. An important prerequisite for ensembles to be successful is that the individual models are diverse. One way to vastly increase the diversity among the models is to build a heterogeneous ensemble, composed of fundamentally different model types. However, most ensembles developed specifically for the dynamic data stream setting rely on only one type of base-level classifier, most often Hoeffding Trees. We study the use of heterogeneous ensembles for data streams. We introduce the Online Performance Estimation framework, which dynamically weights the votes of individual classifiers in an ensemble: using an internal evaluation on recent training data, it measures how well each ensemble member has performed recently and updates its weight accordingly. Experiments over a wide range of data streams show performance that is competitive with state-of-the-art ensemble techniques, including Online Bagging and Leveraging Bagging, while being significantly faster. All experimental results from this work are easily reproducible and publicly available online.
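The following is a minimal sketch of the general principle of performance-weighted online voting, under the assumption that each ensemble member exposes predict and learn methods; it illustrates the idea of weighting votes by recent accuracy and is not the Online Performance Estimation framework itself.

```python
from collections import defaultdict, deque

class WindowedWeightedEnsemble:
    """Weighted-vote ensemble whose weights are accuracies over a sliding
    window of recent instances. Illustrative sketch only; not the Online
    Performance Estimation framework from the article."""

    def __init__(self, members, window=200):
        # members: heterogeneous base learners with .predict(x) and .learn(x, y)
        self.members = members
        self.recent = {m: deque(maxlen=window) for m in members}

    def predict(self, x):
        votes = defaultdict(float)
        for m in self.members:
            hits = self.recent[m]
            weight = (sum(hits) / len(hits)) if hits else 1.0  # recent accuracy
            votes[m.predict(x)] += weight
        return max(votes, key=votes.get)

    def learn(self, x, y):
        for m in self.members:
            # Record whether the member would have been correct, then train it.
            self.recent[m].append(1.0 if m.predict(x) == y else 0.0)
            m.learn(x, y)
```

In a stream setting the two methods are interleaved: predict on each arriving instance, then learn once its label becomes available, so the weights track each member's recent performance.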

13.
In this paper we investigate the problem of partitioning an input string T in such a way that compressing each part individually via a base compressor C yields a compressed output shorter than applying C over the entire T at once. This problem was introduced in Buchsbaum et al. (Proc. 11th ACM-SIAM Symposium on Discrete Algorithms, pp. 175–184, 2000; J. ACM 50(6):825–851, 2003) in the context of table compression, and then further elaborated and extended to strings and trees by Ferragina et al. (J. ACM 52:688–713, 2005; Proc. 46th IEEE Symposium on Foundations of Computer Science, pp. 184–193, 2005) and Mäkinen and Navarro (Proc. 14th Symposium on String Processing and Information Retrieval, pp. 229–241, 2007). Unfortunately, the literature offers only partial solutions: a cubic-time algorithm that computes the optimal partition via dynamic programming (Buchsbaum et al. in J. ACM 50(6):825–851, 2003; Giancarlo and Sciortino in Proc. 14th Symposium on Combinatorial Pattern Matching, pp. 129–143, 2003); a few heuristics that do not guarantee any bound on the quality of their computed partition (Buchsbaum et al. in Proc. 11th ACM-SIAM Symposium on Discrete Algorithms, pp. 175–184, 2000; J. ACM 50(6):825–851, 2003); and algorithms that are efficient but work only in specific scenarios (such as the Burrows-Wheeler Transform; see e.g. Ferragina et al. in J. ACM 52:688–713, 2005; Mäkinen and Navarro in Proc. 14th Symposium on String Processing and Information Retrieval, pp. 229–241, 2007) and may achieve compression performance worse than the optimal partitioning by an Ω(log n/log log n) factor. Therefore, computing the optimal solution efficiently is still open (Buchsbaum and Giancarlo in Encyclopedia of Algorithms, pp. 939–942, 2008). In this paper we provide the first algorithm which computes, in O(n log^{1+ε} n) time and O(n) space, a partition of T whose compressed output is guaranteed to be at most (1+ε) times worse than the optimal one, where ε may be any positive constant fixed in advance. This result holds for any base compressor C whose compression performance can be bounded in terms of the zeroth- or k-th order empirical entropy of the text T. We will also discuss extensions of our results to BWT-based compressors and to the compression booster of Ferragina et al. (J. ACM 52:688–713, 2005).
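For concreteness, the cubic-time dynamic-programming baseline mentioned above can be phrased as a prefix recursion, cost(i) = min over j < i of cost(j) + |C(T[j..i])|. The Python sketch below uses zlib purely as a stand-in base compressor; it shows this baseline, not the paper's (1+ε)-approximation.

```python
import zlib

def optimal_partition(T, compress=zlib.compress):
    """Exact dynamic program over all partition points (the cubic-time
    baseline; illustrative only, not the approximation algorithm above).

    T: bytes to partition. Returns (best total compressed size, cut points).
    """
    n = len(T)
    best = [0] + [float("inf")] * n   # best[i] = optimal cost of compressing T[:i]
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            cost = best[j] + len(compress(T[j:i]))
            if cost < best[i]:
                best[i], back[i] = cost, j
    # Recover the cut points of the optimal partition.
    cuts, i = [], n
    while i > 0:
        cuts.append((back[i], i))
        i = back[i]
    return best[n], list(reversed(cuts))

data = b"aaaaaaaaaaaaaaaabbbbbbbbbbbbbbbb" * 8
print(optimal_partition(data)[0])
```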

14.
A grid graph \(G_{\mathrm{g}}\) is a finite vertex-induced subgraph of the two-dimensional integer grid \(G^\infty \). A rectangular grid graph R(m, n) is a grid graph with horizontal size m and vertical size n. A rectangular grid graph with a rectangular hole is a rectangular grid graph R(m, n) from which a rectangular grid subgraph R(k, l) has been removed. The Hamiltonian path problem for general grid graphs is NP-complete. In this paper, we give necessary conditions for the existence of a Hamiltonian path between two given vertices in an odd-sized rectangular grid graph with a rectangular hole. In addition, we show how such paths can be computed in linear time.

15.
An outer-connected dominating set in a graph G = (V, E) is a set of vertices D ⊆ V such that every vertex v ∉ D is adjacent to some vertex in D and the subgraph induced by V \ D is connected. The outer-connected dominating set problem is to find an outer-connected dominating set with the minimum number of vertices, which is denoted by \(\tilde {\gamma }_{c}(G)\). In this paper, we determine \(\tilde {\gamma }_{c}(S(n,k))\), \(\tilde {\gamma }_{c}(S^{+}(n,k))\), \(\tilde {\gamma }_{c}(S^{++}(n,k))\), and \(\tilde {\gamma }_{c}(S_{n})\), where S(n, k), S^+(n, k), S^{++}(n, k), and S_n are Sierpiński-like graphs.
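The two defining conditions translate directly into a checker. The Python sketch below verifies a candidate set D (it does not compute the minimum, which is the hard part); the adjacency-set dictionary representation is an assumption.

```python
def is_outer_connected_dominating_set(adj, D):
    """Check the two defining conditions of an outer-connected dominating set.

    adj: dict mapping each vertex to a set of its neighbours.
    D:   candidate set of vertices.
    """
    V = set(adj)
    D = set(D)
    # Domination: every vertex outside D has a neighbour in D.
    if any(not (adj[v] & D) for v in V - D):
        return False
    # Outer connectivity: the subgraph induced by V \ D is connected.
    outside = V - D
    if not outside:
        return True
    start = next(iter(outside))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for u in adj[v] & outside:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen == outside

# Example: a 5-cycle. {1, 2, 3} works: {4, 5} is dominated and induces an edge.
adj = {1: {2, 5}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4, 1}}
print(is_outer_connected_dominating_set(adj, {1, 2, 3}))  # True
print(is_outer_connected_dominating_set(adj, {1, 3}))     # False: vertex 2 is isolated outside D
```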

16.
The Planar Feedback Vertex Set problem asks whether an n-vertex planar graph contains at most k vertices meeting all its cycles. The Face Cover problem asks whether all vertices of a plane graph G lie on the boundary of at most k faces of G. Standard techniques from parameterized algorithm design indicate that both problems can be solved by sub-exponential parameterized algorithms (where k is the parameter). In this paper we improve the algorithmic analysis of both problems by proving a series of combinatorial results relating the branchwidth of planar graphs with their face cover. Combining these results with duality properties of branchwidth allows us to derive analogous results on feedback vertex set. As a consequence, it follows that Planar Feedback Vertex Set and Face Cover can be solved in \(O(2^{15.11\cdot\sqrt{k}}+n^{2})\) and \(O(2^{10.1\cdot\sqrt {k}}+n^{2})\) steps, respectively.

17.
The Surjective Homomorphism problem is to test whether a given graph G called the guest graph allows a vertex-surjective homomorphism to some other given graph H called the host graph. The bijective and injective homomorphism problems can be formulated in terms of spanning subgraphs and subgraphs, and as such their computational complexity has been extensively studied. What about the surjective variant? Because this problem is NP-complete in general, we restrict the guest and the host graph to belong to graph classes \({{\mathcal G}}\) and \({{\mathcal H}}\), respectively. We determine to what extent a certain choice of \({{\mathcal G}}\) and \({{\mathcal H}}\) influences its computational complexity. We observe that the problem is polynomial-time solvable if \({{\mathcal H}}\) is the class of paths, whereas it is NP-complete if \({{\mathcal G}}\) is the class of paths. Moreover, we show that the problem is even NP-complete on many other elementary graph classes, namely linear forests, unions of complete graphs, cographs, proper interval graphs, split graphs and trees of pathwidth at most 2. In contrast, we prove that the problem is fixed-parameter tractable in k if \({{\mathcal G}}\) is the class of trees and \({{\mathcal H}}\) is the class of trees with at most k leaves, or if \({{\mathcal G}}\) and \({{\mathcal H}}\) are equal to the class of graphs with vertex cover number at most k.

18.
How can we discover interesting patterns from time-evolving high-speed data streams? How can we analyze the data streams quickly and accurately, with little space overhead? How can we guarantee that the found patterns are self-consistent? High-speed data streams have been receiving increasing attention due to wide applications such as sensors, network traffic, and social networks. The most fundamental task on a data stream is frequent pattern mining; in particular, focusing on recent data is important in real applications. In this paper, we develop two algorithms for discovering recently frequent patterns in data streams. First, we propose TwMinSwap to find the top-k recently frequent items in data streams; it is a deterministic version of our motivating algorithm TwSample, which provides theoretical guarantees based on item sampling. TwMinSwap improves TwSample in terms of speed, accuracy, and memory usage. Both require only O(k) memory and do not require any prior knowledge of the stream, such as its length or the number of distinct items. Second, we propose TwMinSwap-Is to find the top-k recently frequent itemsets in data streams. We especially focus on keeping self-consistency of the discovered itemsets, which is the most important property for reliable results, while using O(k) memory space under the assumption of a constant itemset size. Through extensive experiments, we demonstrate that TwMinSwap outperforms all competitors in terms of accuracy and memory usage, with fast running time. We also show that TwMinSwap-Is is more accurate than the competitor and discovers recently frequent itemsets of reasonably large size (at most 5–7) depending on the dataset. Thanks to TwMinSwap and TwMinSwap-Is, we report interesting discoveries in real-world data streams, including the difference in trends between the winning and losing U.S. presidential candidates, and temporal human contact patterns.
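As a point of reference for the O(k)-memory regime, and explicitly not TwMinSwap itself, the classic Space-Saving summary keeps k counters and replaces the minimum-count item when an unseen item arrives; recency-aware algorithms such as TwMinSwap additionally emphasize recent items. A short Python sketch of this baseline:

```python
def space_saving(stream, k):
    """Classic Space-Saving summary for approximate top-k frequent items.
    Shown only as a baseline for the O(k)-memory setting; it does not
    implement the time-weighting of TwMinSwap.
    """
    counters = {}                          # item -> estimated count
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k:
            counters[x] = 1
        else:
            # Replace the item with the minimum count and inherit its count.
            victim = min(counters, key=counters.get)
            counters[x] = counters.pop(victim) + 1
    return sorted(counters.items(), key=lambda kv: -kv[1])

stream = list("abracadabra_alakazam")
print(space_saving(stream, k=3))   # 'a' dominates this toy stream
```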

19.
Aggregate similarity search, also known as aggregate nearest-neighbor (Ann) query, finds many useful applications in spatial and multimedia databases. Given a group Q of M query objects, it retrieves from a database the objects most similar to Q, where the similarity is an aggregation (e.g., \({{\mathrm{sum}}}\), \(\max \)) of the distances between each retrieved object p and all the objects in Q. In this paper, we propose an added flexibility to the query definition, where the similarity is an aggregation over the distances between p and any subset of \(\phi M\) objects in Q for some support \(0< \phi \le 1\). We call this new definition flexible aggregate similarity search and accordingly refer to a query as a flexible aggregate nearest-neighbor (Fann) query. We present algorithms for answering Fann queries exactly and approximately. Our approximation algorithms are especially appealing: they are simple, highly efficient, and work well in both low and high dimensions. They also return near-optimal answers with guaranteed constant-factor approximations in any dimension. Extensive experiments on large real and synthetic datasets from 2 to 74 dimensions have demonstrated their superior efficiency and high quality.
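The flexible aggregate distance itself is simple to state in code: for each data object p, take the ⌈φM⌉ smallest distances from p to the query group Q and aggregate them. The brute-force Python sketch below is only a linear-scan baseline (the paper's contribution is answering such queries efficiently), and rounding φM up to an integer is an assumption of the sketch.

```python
import math

def fann(points, Q, phi, agg=sum):
    """Brute-force flexible aggregate nearest-neighbour query.

    points: list of database points (tuples of coordinates).
    Q:      list of query points.
    phi:    support in (0, 1]; only the phi*|Q| closest query points count.
    agg:    aggregation function over distances (sum, max, ...).
    Returns (best point, its flexible aggregate distance).
    """
    m = max(1, math.ceil(phi * len(Q)))

    def flex_dist(p):
        d = sorted(math.dist(p, q) for q in Q)
        return agg(d[:m])                 # aggregate over the m nearest query points

    return min(((p, flex_dist(p)) for p in points), key=lambda t: t[1])

Q = [(0, 0), (10, 0), (0, 10)]
points = [(1, 1), (5, 5), (8, 3)]
print(fann(points, Q, phi=2 / 3, agg=sum))   # aggregates over the 2 closest query points
```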

20.
We define and prove a formal semantics divided into two complementary interacting components: the strictly linguistic (i.e., linguistically marked) semantics, which we call the linguistic agent (LA), and the strictly logical and referential semantics, which we call the rational agent (RA). This Linguistic \(\leftrightarrow \) Rational Agents' Semantics (LRA semantics) applies to Deep Dependency trees (DD-trees) or, more generally, to discourses, i.e., sequences of DD-trees, and interprets them by functional structures we call Meaning Representation Structures (MRS), similar to those of DRT but interpreted very differently. LRA semantics incrementally interprets discourses by minimal finite models, called proto-models, in a monotonic logic of the LA and checks the proto-models with respect to the classical models of the RA. The proto-model is considered to be the linguistic sense of the discourse. We define the LA in full detail, which, we believe, must be universal. On the other hand, we do not propose a particular RA; we only define the scheme of interaction between the two agents and the stimuli of the RA used by the LA. In the end, in LRA semantics every discourse has a single meaning and a single sense for every Rational Agent used to interact with the Linguistic Agent.
