Similar Documents
20 similar documents found (search time: 140 ms)
1.
This paper presents some tentative experiments in using a special case of rewriting rules in Mizar (Mizar homepage: http://www.mizar.org/): rewriting a term as its subterm. A similar technique, but based on another Mizar mechanism called functor identification (Korniłowicz 2009), was used by Caminati in his papers on basic first-order model theory in Mizar (Caminati, J Form Reason 3(1):49–77, 2010; Form Math 19(3):157–169, 2011). However, for this purpose he was obliged to introduce some artificial functors. The mechanism presented in the present paper looks promising and fits the Mizar paradigm.

2.
We propose an effective procedure, the first one to our knowledge, for translating a proof term of the Calculus of Inductive Constructions (CIC) into a tactical expression of the high-level specification language of a CIC-based proof assistant such as coq (Coq development team 2008) or matita (Asperti et al., J Autom Reason 39:109–139, 2007). This procedure, which should not be considered definitive at its present stage, is intended for translating the logical representation of a proof coming from any source, i.e. from a digital library or from another proof development system, into an equivalent proof presented in the proof assistant’s editable high-level format. To testify to the effectiveness of our procedure, we report on its implementation in matita and on the translation of a significant set of proofs (Guidi, ACM Trans Comput Log 2009) from their logical representation as coq 7.3.1 (Coq development team 2002) CIC proof terms to their high-level representation as tactical expressions of matita’s user interface language.

3.
In a previous paper, we laid out the vision of a novel graph query processing paradigm: instead of processing a visual query graph after its construction, it interleaves visual query formulation and processing by exploiting the latency offered by the gui to filter irrelevant matches and prefetch partial query results [8]. Our recent attempts at implementing this vision [8, 9] show significant improvement in system response time (srt) for subgraph queries. However, these efforts are designed specifically for graph databases containing a large collection of small or medium-sized graphs. In this paper, we propose a novel algorithm called quble (QUery Blender for Large nEtworks) to realize this visual subgraph querying paradigm on very large networks (e.g., protein interaction networks, social networks). First, it decomposes a large network into a set of graphlets and supergraphlets using a minimum cut-based graph partitioning technique. Next, it mines approximate frequent and small infrequent fragments (sifs) from them and identifies their occurrences in these graphlets and supergraphlets. Then, the indexing framework of [9] is enhanced so that the mined fragments can be exploited to index graphlets for efficient blending of visual subgraph query formulation and query processing. Extensive experiments on large networks demonstrate the effectiveness of quble.

4.
A circle graph is the intersection graph of a set of chords in a circle. Keil [Discrete Appl. Math., 42(1):51–63, 1993] proved that Dominating Set, Connected Dominating Set, and Total Dominating Set are NP-complete in circle graphs. To the best of our knowledge, nothing was known about the parameterized complexity of these problems in circle graphs. In this paper we prove the following results, which contribute in this direction:
  • Dominating Set, Independent Dominating Set, Connected Dominating Set, Total Dominating Set, and Acyclic Dominating Set are W[1]-hard in circle graphs, parameterized by the size of the solution.
  • Whereas both Connected Dominating Set and Acyclic Dominating Set are W[1]-hard in circle graphs, it turns out that Connected Acyclic Dominating Set is polynomial-time solvable in circle graphs.
  • If T is a given tree, deciding whether a circle graph G has a dominating set inducing a graph isomorphic to T is NP-complete when T is in the input, and FPT when parameterized by t=|V(T)|. We prove that the FPT algorithm runs in subexponential time, namely $2^{\mathcal{O}(t \cdot\frac{\log\log t}{\log t})} \cdot n^{\mathcal{O}(1)}$ , where n=|V(G)|.
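As a small aside, the definition of a circle graph used in entry 4 — the intersection graph of a set of chords in a circle — is easy to make concrete: two chords are adjacent exactly when their endpoints interleave around the circle. The Python sketch below builds such an intersection graph; the function name and input representation are ours, and it has no connection to the hardness results of the entry.

```python
# Illustrative sketch for entry 4: a circle graph as the intersection graph of chords.
# Chord endpoints are distinct positions around the circle; two chords cross exactly
# when their endpoints interleave.
from itertools import combinations

def circle_graph(chords):
    """chords: list of (a, b) endpoint positions; returns adjacency as a dict."""
    graph = {i: set() for i in range(len(chords))}
    for i, j in combinations(range(len(chords)), 2):
        a1, a2 = sorted(chords[i])
        b1, b2 = chords[j]
        # The chords interleave iff exactly one endpoint of chord j lies strictly
        # between the two endpoints of chord i.
        if (a1 < b1 < a2) != (a1 < b2 < a2):
            graph[i].add(j)
            graph[j].add(i)
    return graph

# Chords (0,2) and (1,3) cross; chord (4,5) is isolated.
print(circle_graph([(0, 2), (1, 3), (4, 5)]))  # {0: {1}, 1: {0}, 2: set()}
```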

5.
We study the Cutwidth problem, where the input is a graph G and the objective is to find a linear layout of the vertices that minimizes the maximum number of edges intersected by any vertical line inserted between two consecutive vertices. We give an algorithm for Cutwidth with running time O(2^k n^{O(1)}), where k is the size of a minimum vertex cover of the input graph G and n is the number of vertices in G. As a corollary, our algorithm yields an O(2^{n/2} n^{O(1)}) time algorithm for Cutwidth on bipartite graphs. This is the first non-trivial exact exponential-time algorithm for Cutwidth on a graph class where the problem remains NP-complete. Additionally, we show that Cutwidth parameterized by the size of the minimum vertex cover of the input graph does not admit a polynomial kernel unless NP ⊆ coNP/poly. Our kernelization lower bound contrasts with the recent results of Bodlaender et al. (ICALP, Springer, Berlin, 2011; SWAT, Springer, Berlin, 2012) that both Treewidth and Pathwidth parameterized by vertex cover do admit polynomial kernels.
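The notion of cutwidth in entry 5 can be checked directly from the definition: given a linear layout, its width is the maximum number of edges crossing a gap between two consecutive vertices, and the cutwidth of the graph is the minimum over all layouts. The sketch below evaluates a given layout only; it is not the O(2^k n^{O(1)}) algorithm of the paper.

```python
# Minimal sketch for entry 5: cutwidth of a *given* linear layout, straight from the
# definition (maximum number of edges crossing any gap between consecutive vertices).
def cutwidth_of_layout(edges, layout):
    """edges: iterable of (u, v); layout: list of all vertices in linear order."""
    pos = {v: i for i, v in enumerate(layout)}
    width = 0
    for gap in range(len(layout) - 1):
        # Count edges with one endpoint at position <= gap and the other beyond it.
        crossing = sum(1 for u, v in edges
                       if min(pos[u], pos[v]) <= gap < max(pos[u], pos[v]))
        width = max(width, crossing)
    return width

# A 4-cycle laid out as 0,1,2,3 has width 2 (which is also its cutwidth).
print(cutwidth_of_layout([(0, 1), (1, 2), (2, 3), (3, 0)], [0, 1, 2, 3]))  # 2
```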

6.
The NP-complete geometric covering problem Rectangle Stabbing is defined as follows: given a set R of axis-parallel rectangles in the plane, a set L of horizontal and vertical lines in the plane, and a positive integer k, select at most k of the lines such that every rectangle is intersected by at least one of the selected lines. While it is known that the problem can be approximated in polynomial time within a factor of two, its parameterized complexity with respect to the parameter k was open so far. Giving two fixed-parameter reductions, one from the W[1]-complete problem Multicolored Clique and one to the W[1]-complete problem Short Turing Machine Acceptance, we prove that Rectangle Stabbing is W[1]-complete with respect to the parameter k, which in particular means that there is no hope for an algorithm running in f(k)·|R ∪ L|^{O(1)} time. Our reductions also show the W[1]-completeness of the more general problem Set Cover on instances that “almost have the consecutive-ones property”, that is, on instances whose matrix representation has at most two blocks of 1s per row. We also show that the special case of Rectangle Stabbing where all rectangles are squares of the same size is W[1]-hard. The case where the input consists of non-overlapping rectangles was open for some time and has recently been shown to be fixed-parameter tractable (Heggernes et al., Fixed-parameter algorithms for cochromatic number and disjoint rectangle stabbing, 2009). By giving an algorithm running in (2k)^k·|R ∪ L|^{O(1)} time, we show that Rectangle Stabbing is fixed-parameter tractable in the still NP-hard case where both these restrictions apply, that is, in the case of disjoint squares of the same size. This algorithm is faster than the one in Heggernes et al. (2009) for the disjoint rectangles case. Moreover, we show fixed-parameter tractability for the restrictions where the rectangles have bounded width or height or where each horizontal line intersects only a bounded number of rectangles.
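To make the Rectangle Stabbing definition in entry 6 concrete, here is a naive brute force that tries every subset of at most k lines; it is exponential in |L| and is not one of the paper's algorithms. The data layout (coordinate tuples for rectangles, ('h', y) / ('v', x) for lines) is an assumption made only for this example.

```python
# Brute-force illustration of the Rectangle Stabbing definition in entry 6.
from itertools import combinations

def stabs(line, rect):
    kind, coord = line
    x1, y1, x2, y2 = rect
    return (y1 <= coord <= y2) if kind == 'h' else (x1 <= coord <= x2)

def rectangle_stabbing(rects, lines, k):
    """Return a set of at most k lines stabbing every rectangle, or None."""
    for size in range(k + 1):
        for chosen in combinations(lines, size):
            if all(any(stabs(l, r) for l in chosen) for r in rects):
                return list(chosen)
    return None

rects = [(0, 0, 2, 2), (3, 3, 5, 5), (0, 4, 1, 6)]
lines = [('v', 1), ('h', 4), ('v', 4)]
print(rectangle_stabbing(rects, lines, 2))  # [('v', 1), ('h', 4)]
```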

7.
In this paper we study the prevalent problem of graph partitioning by analyzing the diffusion-based partitioning heuristic Bubble-FOS/C, a key component of a practical and successful graph partitioner (Meyerhenke et al. in J. Parallel Distrib. Comput. 69(9):750–761, 2009). We begin by studying the disturbed diffusion scheme FOS/C, which computes the similarity measure used in Bubble-FOS/C and is therefore its most crucial component. By relating FOS/C to random walks, we obtain precise characterizations of the behavior of FOS/C on tori and hypercubes. Besides leading to new knowledge on FOS/C (and therefore also on Bubble-FOS/C), these characterizations have recently been used for the analysis of load balancing algorithms (Berenbrink et al. in Proceedings of the 22nd Annual Symposium on Discrete Algorithms, pp. 429–439, 2011). We then turn to Bubble-FOS/C, which has been shown in previous experiments to produce solutions with good partition shapes and other favorable properties. In this paper we prove that it computes a relaxed solution to an edge-cut minimizing binary quadratic program (BQP). This result provides the first substantial theoretical insight into why Bubble-FOS/C yields good experimental results in terms of graph partitioning metrics. Moreover, we show that in bisections computed by Bubble-FOS/C, at least one of the two parts is connected. Using the aforementioned relation between FOS/C and random walks, we prove that in vertex-transitive graphs both parts must be connected.

8.
The AtMostSeqCard constraint is the conjunction of a cardinality constraint on a sequence of n variables and of n − q + 1 constraints AtMost_u on each subsequence of size q. This constraint is useful in car-sequencing and crew-rostering problems. In van Hoeve et al. (Constraints 14(2):273–292, 2009), two algorithms designed for the AmongSeq constraint were adapted to this constraint with O(2^q n) and O(n^3) worst-case time complexity, respectively. In Maher et al. (2008), another algorithm similarly adaptable to filter the AtMostSeqCard constraint with a time complexity of O(n^2) was proposed. In this paper, we introduce an algorithm for achieving arc consistency on the AtMostSeqCard constraint with an O(n) (hence optimal) worst-case time complexity. Next, we show that this algorithm can be easily modified to achieve arc consistency on some extensions of this constraint. In particular, the conjunction of a set of m AtMostSeqCard constraints sharing the same scope can be filtered in O(nm). We then empirically study the efficiency of our propagator on instances of the car-sequencing and crew-rostering problems.
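A direct checker for the AtMostSeqCard constraint of entry 8 — the conjunction of one cardinality constraint and the n − q + 1 sliding AtMost_u constraints — takes only a few lines. It is meant solely to clarify the constraint, not the O(n) filtering algorithm of the paper; the parameter names d, u and q, and the exact form of the cardinality constraint (an equality), are assumptions of this sketch.

```python
# Checker for the AtMostSeqCard definition in entry 8 (a sketch, not the propagator).
def atmostseqcard(x, d, u, q):
    """x: 0/1 sequence. Holds iff sum(x) == d and every window of length q
    contains at most u ones (the n - q + 1 AtMost_u constraints)."""
    if sum(x) != d:
        return False
    return all(sum(x[i:i + q]) <= u for i in range(len(x) - q + 1))

print(atmostseqcard([1, 0, 1, 0, 0, 1], d=3, u=2, q=3))  # True
print(atmostseqcard([1, 1, 1, 0, 0, 0], d=3, u=2, q=3))  # False (window 1,1,1)
```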

9.
The Pathwidth One Vertex Deletion (POVD) problem asks whether, given an undirected graph G and an integer k, one can delete at most k vertices from G so that the remaining graph has pathwidth at most 1. The question can be considered as a natural variation of the extensively studied Feedback Vertex Set (FVS) problem, where the deletion of at most k vertices has to result in the remaining graph having treewidth at most 1 (i.e., being a forest). Recently Philip et al. (WG, Lecture Notes in Computer Science, vol. 6410, pp. 196–207, 2010) initiated the study of the parameterized complexity of POVD, showing a quartic kernel and an algorithm which runs in time 7^k n^{O(1)}. In this article we improve these results by showing a quadratic kernel and an algorithm with time complexity 4.65^k n^{O(1)}, thus obtaining almost tight kernelization bounds when compared to the general result of Dell and van Melkebeek (STOC, pp. 251–260, ACM, New York, 2010). Techniques used in the kernelization are based on the quadratic kernel for FVS due to Thomassé (ACM Trans. Algorithms 6(2), 2010).
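For entry 9, the target class of POVD can be recognized directly: by a standard (folklore) characterization, a graph has pathwidth at most 1 exactly when it is a caterpillar forest, i.e., it is acyclic and in every component each vertex has at most two neighbours of degree at least 2. The sketch below checks this characterization; it is a recognition routine only, not the deletion algorithm or the kernelization of the paper.

```python
# Recognition sketch for entry 9's target class: pathwidth <= 1 (caterpillar forests).
def has_pathwidth_at_most_one(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Acyclicity via union-find: an edge whose endpoints already lie in the same
    # component closes a cycle.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    # Caterpillar condition: no vertex has three or more neighbours of degree >= 2.
    return all(sum(1 for w in adj[v] if len(adj[w]) >= 2) <= 2 for v in vertices)

# A caterpillar (path 0-1-2-3 with a pendant leaf at 1) vs. a subdivided star.
print(has_pathwidth_at_most_one(range(5), [(0, 1), (1, 2), (2, 3), (1, 4)]))        # True
print(has_pathwidth_at_most_one(range(7),
                                [(0, 1), (1, 2), (0, 3), (3, 4), (0, 5), (5, 6)]))  # False
```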

10.
The problems studied in this article originate from the Graph Motif problem introduced by Lacroix et al. (IEEE/ACM Trans. Comput. Biol. Bioinform. 3(4):360–368, 2006) in the context of biological networks. The problem is to decide whether a vertex-colored graph has a connected subgraph whose colors equal a given multiset of colors M. It is a variant of graph pattern matching in which the structure of the occurrence of the pattern is not of interest; the only requirement is connectedness. Using an algebraic framework recently introduced by Koutis (Proceedings of the 35th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 5125, pp. 575–586, 2008) and Koutis and Williams (Proceedings of the 36th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 5555, pp. 653–664, 2009), we obtain new FPT algorithms for Graph Motif and variants, with improved running times. We also obtain results on the counting versions of this problem, proving that the counting problem is FPT if M is a set, but becomes #W[1]-hard if M is a multiset with two colors. Finally, we present an experimental evaluation of this approach on real datasets, showing that its performance compares favorably with existing software.
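The Graph Motif question in entry 10 is easy to state operationally: does some set of |M| vertices induce a connected subgraph whose colour multiset equals M? The brute force below checks exactly that; it is exponential in |M| and serves only to illustrate the problem, in contrast to the FPT algebraic algorithms discussed in the entry.

```python
# Brute-force illustration of the Graph Motif definition in entry 10.
from collections import Counter, deque
from itertools import combinations

def has_motif(adj, colour, motif):
    """adj: dict vertex -> iterable of neighbours; colour: dict vertex -> colour;
    motif: iterable of colours (a multiset)."""
    target = Counter(motif)
    k = sum(target.values())
    for subset in combinations(adj, k):
        if Counter(colour[v] for v in subset) != target:
            continue
        # BFS restricted to the chosen vertices to test connectedness.
        chosen, seen = set(subset), {subset[0]}
        queue = deque([subset[0]])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w in chosen and w not in seen:
                    seen.add(w)
                    queue.append(w)
        if len(seen) == k:
            return True
    return False

adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
colour = {1: 'r', 2: 'g', 3: 'r', 4: 'b'}
print(has_motif(adj, colour, ['r', 'g', 'r']))  # True  (vertices 1, 2, 3)
print(has_motif(adj, colour, ['r', 'r', 'b']))  # False (no connected occurrence)
```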

11.
We address the verification problem for concurrent programs modeled as multi-pushdown systems (MPDS). In general, MPDS are Turing powerful, and hence all basic decision problems for them are undecidable. Because of this, several subclasses of MPDS have been proposed and studied in the literature (Atig et al. in LNCS, Springer, Berlin, 2005; La Torre et al. in LICS, IEEE, 2007; Lange and Lei in Inf Didact 8, 2009; Qadeer and Rehof in TACAS, LNCS, Springer, Berlin, 2005). In this paper, we propose the class of bounded-budget MPDS, which are restricted in the sense that each stack can perform an unbounded number of context switches only if its depth is below a given bound, and a bounded number of context switches otherwise. We show that the reachability problem for this subclass is Pspace-complete and that LTL model checking is Exptime-complete. Furthermore, we propose a code-to-code translation that inputs a concurrent program \(P\) and produces a sequential program \(P'\) such that running \(P\) under the budget-bounded restriction yields the same set of reachable states as running \(P'\). Moreover, detecting (fair) non-terminating executions in \(P\) can be reduced to LTL model checking of \(P'\). By leveraging standard sequential analysis tools, we have implemented a prototype tool and applied it to a set of benchmarks, showing the feasibility of our translation.

12.
In this article, we formulate and study quantum analogues of randomized search heuristics, which make use of Grover search (in Proceedings of the 28th Annual ACM Symposium on Theory of Computing, pp. 212–219, ACM, New York, 1996) to accelerate the search for improved offspring. We then specialize the above formulation to two specific search heuristics: Random Local Search and the (1+1) Evolutionary Algorithm. We call the resulting quantum versions of these search heuristics Quantum Local Search and the (1+1) Quantum Evolutionary Algorithm. We conduct a rigorous runtime analysis of these quantum search heuristics in the computation model of quantum algorithms, which, besides classical computation steps, also permits those unique to quantum computing devices. To this end, we study the six elementary pseudo-Boolean optimization problems OneMax, LeadingOnes, Discrepancy, Needle, Jump, and TinyTrap. It turns out that the advantage of the respective quantum search heuristic over its classical counterpart varies with the problem structure and ranges from no speedup at all for the problem Discrepancy to exponential speedup for the problem TinyTrap. We show that these runtime behaviors are closely linked to the probabilities of performing successful mutations in the classical algorithms.
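For reference, the quantum heuristics of entry 12 are built on top of the classical (1+1) Evolutionary Algorithm. The sketch below is that classical baseline on OneMax — the standard textbook algorithm, not the quantum variant analysed in the paper; the iteration cap is an arbitrary choice for the example.

```python
# Classical (1+1) EA on OneMax (the baseline behind entry 12, not the quantum version).
import random

def one_max(x):
    return sum(x)

def one_plus_one_ea(n, fitness, max_iters=100_000):
    """Standard bit-flip mutation with rate 1/n; accept the offspring if not worse."""
    x = [random.randint(0, 1) for _ in range(n)]
    for iteration in range(max_iters):
        y = [bit ^ (random.random() < 1 / n) for bit in x]   # flip each bit w.p. 1/n
        if fitness(y) >= fitness(x):
            x = y
        if fitness(x) == n:          # optimum of OneMax reached
            return iteration + 1
    return None

random.seed(0)
print(one_plus_one_ea(20, one_max))  # iterations until the all-ones string is found
```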

13.
Stefan Kratsch, Algorithmica 63(1–2):532–550, 2012
It has been observed in many places that constant-factor approximable problems often admit polynomial or even linear problem kernels for their decision versions, e.g., Vertex Cover, Feedback Vertex Set, and Triangle Packing. While there exist examples like Bin Packing, which does not admit any kernel unless P = NP, there apparently is a strong relation between these two polynomial-time techniques. We add to this picture by showing that the natural decision versions of all problems in two prominent classes of constant-factor approximable problems, namely MIN F+Π1 and MAX NP, admit polynomial problem kernels. Problems in MAX SNP, a subclass of MAX NP, are shown to admit kernels with a linear base set, e.g., the set of vertices of a graph. This extends results of Cai and Chen (J. Comput. Syst. Sci. 54(3): 465–474, 1997), stating that the standard parameterizations of problems in MAX SNP and MIN F+Π1 are fixed-parameter tractable, and complements recent research on problems that do not admit polynomial kernelizations (Bodlaender et al. in J. Comput. Syst. Sci. 75(8): 423–434, 2009).

14.
The Parameterized Complexity of Unique Coverage and Its Variants
In this paper we study the parameterized complexity of the Unique Coverage problem, a variant of the classic Set Cover problem. This problem admits several parameterizations, and we show that all of them, except the standard parameterization and a generalization of it, are unlikely to be fixed-parameter tractable. We use results from extremal combinatorics to obtain the best-known kernel for Unique Coverage, and the well-known color-coding technique of Alon et al. (J. ACM 42(4), 844–856, 1995) to show that a weighted version of this problem is fixed-parameter tractable. Our application of color-coding uses an interesting variation of s-perfect hash families called (k,s)-hash families, which were studied by Alon et al. (J. Comb. Theory Ser. A 104(1), 207–215, 2003) in the context of a class of codes called parent identifying codes (Barg et al. in SIAM J. Discrete Math. 14(3), 423–431, 2001). To the best of our knowledge, this is the first application of (k,s)-hash families outside the domain of coding theory. We prove the existence of such families of size smaller than the best-known s-perfect hash families using the probabilistic method (Alon and Spencer in The Probabilistic Method, Wiley, New York, 2000). The explicit construction of such families of the size promised by the probabilistic method remains open.
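Entry 14 does not restate the Unique Coverage objective, so the sketch below assumes the commonly used formulation — choose a subfamily of the given sets so as to maximise the number of elements covered by exactly one chosen set — and solves it by brute force purely to make the problem concrete; treat that formulation as an assumption of this example rather than a statement about the paper.

```python
# Brute-force sketch of (an assumed formulation of) Unique Coverage, entry 14.
from collections import Counter
from itertools import chain, combinations

def max_unique_coverage(sets):
    best, best_choice = 0, ()
    subfamilies = chain.from_iterable(
        combinations(sets, r) for r in range(1, len(sets) + 1))
    for chosen in subfamilies:
        counts = Counter(e for s in chosen for e in s)
        unique = sum(1 for c in counts.values() if c == 1)
        if unique > best:
            best, best_choice = unique, chosen
    return best, best_choice

# Taking all three sets covers 4 elements uniquely; dropping the middle one gives 6.
sets = [frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({4, 5, 6})]
print(max_unique_coverage(sets))
# (6, (frozenset({1, 2, 3}), frozenset({4, 5, 6})))
```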

15.
In this paper, we present ParaMiner, a generic and parallel algorithm for closed pattern mining. ParaMiner is built on the principles of pattern enumeration in strongly accessible set systems. Its efficiency is due to a novel dataset reduction technique (which we call EL-reduction), combined with a novel technique for performing dataset reduction in a parallel execution on a multi-core architecture. We illustrate ParaMiner’s genericity by using this algorithm to solve three different pattern mining problems: the frequent itemset mining problem, the problem of mining frequent connected relational graphs, and the problem of mining gradual itemsets. In this paper, we prove the soundness and the completeness of ParaMiner. Furthermore, our experiments show that despite being a generic algorithm, ParaMiner can compete with specialized state-of-the-art algorithms designed for the pattern mining problems mentioned above. Moreover, for the particular problem of gradual itemset mining, ParaMiner outperforms the state-of-the-art algorithm by two orders of magnitude.
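One of the three instantiations mentioned in entry 15 is closed frequent itemset mining. The naive enumeration below only pins down the notion of a closed pattern — a frequent itemset no proper superset of which has the same support — and has none of ParaMiner's EL-reduction or parallelism.

```python
# Naive closed frequent itemset enumeration (illustrates the pattern class of entry 15).
from itertools import chain, combinations

def closed_frequent_itemsets(transactions, min_support):
    items = sorted(set(chain.from_iterable(transactions)))
    candidates = list(chain.from_iterable(
        combinations(items, r) for r in range(1, len(items) + 1)))
    support = {c: sum(1 for t in transactions if set(c) <= t) for c in candidates}
    frequent = {c: s for c, s in support.items() if s >= min_support}
    # An itemset is closed if no proper superset has the same support.
    return [c for c, s in frequent.items()
            if not any(set(c) < set(d) and support[d] == s for d in support)]

data = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}]
print(closed_frequent_itemsets(data, min_support=2))
# [('a',), ('a', 'b'), ('a', 'c')]
```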

16.
We consider a CNF formula F as a multiset of clauses: F={c_1,…,c_m}. The set of variables of F will be denoted by V(F). Let B_F denote the bipartite graph with partite sets V(F) and F and with an edge between v ∈ V(F) and c ∈ F if v ∈ c or $\bar{v} \in c$. The matching number ν(F) of F is the size of a maximum matching in B_F. In our main result, we prove that the following parameterization of MaxSat (denoted by (ν(F)+k)-SAT) is fixed-parameter tractable: given a formula F, decide whether we can satisfy at least ν(F)+k clauses in F, where k is the parameter. A formula F is called variable-matched if ν(F)=|V(F)|. Let δ(F)=|F|−|V(F)| and δ*(F)=max_{F′⊆F} δ(F′). Our main result implies fixed-parameter tractability of MaxSat parameterized by δ(F) for variable-matched formulas F; this complements related results of Kullmann (IEEE Conference on Computational Complexity, pp. 116–124, 2000) and Szeider (J. Comput. Syst. Sci. 69(4):656–674, 2004) for MaxSat parameterized by δ*(F). To obtain our main result, we reduce (ν(F)+k)-SAT to the following parameterization of the Hitting Set problem (denoted by (m−k)-Hitting Set): given a collection $\mathcal{C}$ of m subsets of a ground set U of n elements, decide whether there is X ⊆ U such that C ∩ X ≠ ∅ for each $C\in \mathcal{C}$ and |X| ≤ m−k, where k is the parameter. Gutin, Jones and Yeo (Theor. Comput. Sci. 412(41):5744–5751, 2011) proved that (m−k)-Hitting Set is fixed-parameter tractable by obtaining an exponential kernel for the problem. We obtain two algorithms for (m−k)-Hitting Set: a deterministic algorithm of runtime $O((2e)^{2k+O(\log^{2} k)} (m+n)^{O(1)})$ and a randomized algorithm of expected runtime $O(8^{k+O(\sqrt{k})} (m+n)^{O(1)})$. Our deterministic algorithm improves an algorithm that follows from the kernelization result of Gutin, Jones and Yeo (2011).
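The parameter ν(F) in entry 16 can be computed exactly as defined: build the bipartite variable–clause graph B_F and compute a maximum matching, here with a simple augmenting-path (Kuhn) routine. This only illustrates the parameter; it is not the fixed-parameter algorithm of the paper.

```python
# Computing nu(F) from the definition in entry 16 via a maximum matching in B_F.
def matching_number(clauses):
    """clauses: list of clauses, each a set of integer literals (negative = negated)."""
    variables = {abs(l) for c in clauses for l in c}
    occurs_in = {v: [i for i, c in enumerate(clauses) if v in {abs(l) for l in c}]
                 for v in variables}
    match = {}                      # clause index -> matched variable

    def augment(v, visited):
        for c in occurs_in[v]:
            if c in visited:
                continue
            visited.add(c)
            if c not in match or augment(match[c], visited):
                match[c] = v
                return True
        return False

    return sum(1 for v in variables if augment(v, set()))

# F = (x1 or x2) and (not x1 or x3) and (x2): nu(F) = 3 = |V(F)|, so F is variable-matched.
print(matching_number([{1, 2}, {-1, 3}, {2}]))  # 3
```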

17.
The adoption of Artificial Neural Networks (ANNs) in safety-related applications is often avoided because it is difficult to rule out possible misbehaviors with traditional analytical or probabilistic techniques. In this paper we present NeVer, our tool for checking safety of ANNs. NeVer encodes the problem of verifying safety of ANNs into the problem of satisfying corresponding Boolean combinations of linear arithmetic constraints. We describe the main verification algorithm and the structure of NeVer. We also present empirical results confirming the effectiveness of NeVer on realistic case studies.

18.
19.
We investigate a class of scheduling problems that arise in the optimization of SQL queries for parallel machines (Chekuri et al. in PODS '95, pp. 255–265, 1995). In these problems, an undirected graph is used to represent communication and inter-operator parallelism. The goal is to minimize the global response time of the system. We provide a polynomial-time approximation scheme for the special case where the operator graph is a tree, thereby improving on a polynomial-time 2.87-approximation algorithm by Chekuri et al. The approximation scheme is generalized to the case where the operator graph has treewidth bounded by a constant. We analyze instances with small response times for general operator graphs: deciding whether a response time of four time units can be reached is easy, but deciding whether a response time of six time units can be reached is NP-hard. Finally, we prove that for general operator graphs the existence of a polynomial-time approximation algorithm with worst-case performance guarantee better than 4/3 would imply P=NP.

20.
The NP-complete problem Proper Interval Vertex Deletion is to decide whether an input graph on n vertices and m edges can be turned into a proper interval graph by deleting at most k vertices. Van Bevern et al. (in: Proceedings WG 2010, Lecture Notes in Computer Science, vol. 6410, pp. 232–243, 2010) showed that this problem can be solved in $\mathcal {O}((14k +14)^{k+1} kn^{6})$ time. We improve this result by presenting an $\mathcal {O}(6^{k} kn^{6})$ time algorithm for Proper Interval Vertex Deletion. Our fixed-parameter algorithm is based on a new structural result stating that every connected component of a {claw, net, tent, C_4, C_5, C_6}-free graph is a proper circular arc graph, combined with a simple greedy algorithm that solves Proper Interval Vertex Deletion on {claw, net, tent, C_4, C_5, C_6}-free graphs in $\mathcal {O}(n+m)$ time. Our approach also yields a polynomial-time 6-approximation algorithm for the optimization variant of Proper Interval Vertex Deletion.
