Similar Documents
20 similar documents found (search time: 31 ms)
1.
An ongoing line of research has shown super-polynomial lower bounds for uniform and slightly-non-uniform small-depth threshold and arithmetic circuits (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999; Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009; Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011). We give a unified framework that captures and improves each of the previous results. Our main results are that Permanent does not have threshold circuits of the following kinds.
  1. Depth $O(1)$, $n^{o(1)}$ bits of non-uniformity, and size $n^{O(1)}$.
  2. Depth $O(1)$, polylog(n) bits of non-uniformity, and size $s(n)$ such that for all constants $c$ the $c$-fold composition of $s$, $s^{(c)}(n)$, is less than $2^n$.
  3. Depth $o(\log\log n)$, polylog(n) bits of non-uniformity, and size $n^{O(1)}$.
(1) strengthens a result of Jansen and Santhanam (Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011), who obtained similar parameters but for arithmetic circuits of constant depth rather than Boolean threshold circuits. (2) and (3) strengthen results of Allender (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999) and Koiran and Perifel (Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009), respectively, who obtained results with similar parameters but for completely uniform circuits. Our main technical contribution is to simplify and unify earlier proofs in this area, and adapt the proofs to handle some amount of non-uniformity. We also develop a notion of circuits with a small amount of non-uniformity that naturally interpolates between fully uniform and fully non-uniform circuits. We use this notion, which we term weak uniformity, rather than the earlier and essentially equivalent notion of succinctness used by Jansen and Santhanam because the notion of weak uniformity more fully and easily interpolates between full uniformity and non-uniformity of circuits.
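As a worked example (not taken from the paper) of a size bound meeting the composition condition in (2): quasi-polynomial size qualifies, since for $s(n)=2^{(\log n)^{2}}$,

$$s^{(c)}(n) \;=\; 2^{(\log n)^{2^{c}}} \;<\; 2^{n}$$

for every fixed constant $c$ and all sufficiently large $n$, because $(\log n)^{2^{c}} = o(n)$.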

2.
Danvy's functional unparsing problem (Danvy in J. Funct. Program. 8(6), 621–625, 1998) is to implement a type-safe "printf" function, which converts a sequence of heterogeneous arguments to a string according to a given format. The dual problem is to implement a type-safe "scanf" function, which extracts a sequence of heterogeneous arguments from a string by interpreting (Friedman and Wand in LFP, pp. 348–355, 1984 and in Essentials of Programming Languages, MIT Press, 2008) the same format as an equally heterogeneous sequence of patterns that binds zero or more variables. We derive multiple solutions to both problems (Wand in J. ACM 27(1), 164–180, 1980) from their formal specifications (Wand in Theor. Comput. Sci. 20(1), 3–32, 1982). On one hand, our solutions show how the Hindley-Milner type system, unextended, permits accessing heterogeneous sequences with the static assurance of type safety. On the other hand, our solutions demonstrate the use of control operators (Felleisen et al. in Proceedings of the 1988 ACM Conference on Lisp and Functional Programming, pp. 52–62, ACM Press, New York, 1988; Wand in POPL 85: Conference Record of the Annual ACM Symposium on Principles of Programming Languages, vol. 16, ACM Press, New York, 1985; Meyer and Wand in Logics of Programs, Lecture Notes in Computer Science, vol. 193, pp. 219–224, Springer, Berlin, 1985) to communicate with formats as coroutines (Wand in Proceedings of the 1980 ACM Conference on Lisp and Functional Programming, vol. 12, pp. 285–299, ACM Press, New York, 1980 and Haynes et al. in LFP, pp. 293–298, 1984).
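For readers unfamiliar with the continuation-passing formulation behind functional unparsing, the following Python sketch mirrors only the combinator structure (format directives as composable functions, with the accumulated string threaded through a continuation). The names `lit`, `num`, `txt`, `compose` and `sprintf` are assumptions for illustration; the static type-safety guarantee discussed in the abstract relies on the Hindley-Milner type system of ML-family languages and is not captured by Python.

```python
# Structural sketch of continuation-passing "printf" combinators (hypothetical names).
# Each directive takes a continuation k and the string built so far, and either
# appends literal text or waits for the next argument before continuing.

def lit(text):                      # literal text directive
    return lambda k: lambda acc: k(acc + text)

def num(k):                         # directive consuming one integer argument
    return lambda acc: lambda n: k(acc + str(n))

def txt(k):                         # directive consuming one string argument
    return lambda acc: lambda s: k(acc + s)

def compose(*directives):           # chain directives left to right
    def fmt(k):
        for d in reversed(directives):
            k = d(k)
        return k
    return fmt

def sprintf(fmt):                   # start with the identity continuation and ""
    return fmt(lambda acc: acc)("")

# Usage: remaining arguments are supplied one by one, as the format demands them.
print(sprintf(compose(lit("x = "), num, lit(", name = "), txt))(3)("Ada"))
# -> "x = 3, name = Ada"
```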

3.
We propose a uniform method to encode various types of trees succinctly. These families include ordered (ordinal), k-ary (cardinal), and unordered (free) trees. We show that the approach is intrinsically suitable for obtaining entropy-based encodings of trees (such as the degree-distribution entropy). Previously existing succinct encodings of trees use ad hoc techniques to encode each particular family of trees. Additionally, the succinct encodings obtained using the uniform approach improve upon the existing succinct encodings of each family of trees; in the case of ordered trees, the new encoding is simpler while supporting the full set of navigational operations, and it also simplifies the implementation of many supported operations. The approach applied to k-ary trees yields a succinct encoding that supports both cardinal-type operations (e.g. determining the child labeled i) as well as the full set of ordinal-type operations (e.g. reporting the number of siblings to the left of a node). Previous work on succinct encodings of k-ary trees does not support both types of operations simultaneously (Benoit et al. in Algorithmica 43(4):275–292, 2005; Raman et al. in ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 233–242, 2002). For unordered trees, the approach achieves the first succinct encoding. The approach is based on two recursive decompositions of trees into subtrees. Recursive decomposition of a structure into substructures is a common technique in succinct encodings and has even been used to encode (ordered) trees (Geary et al. in ACM Trans. Algorithms 2(4):510–534, 2006; He et al. in ICALP, pp. 509–520, 2007) and dynamic binary trees (Munro et al. in ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 529–536, 2001; Storm in Representing dynamic binary trees succinctly, Master's thesis, 2000). The main distinction of the approach in this paper is that a tree is decomposed into subtrees in a manner that keeps the subtrees maximally isolated from each other. This intermediate decomposition result is interesting in its own right and has proved useful in other applications (Farzan et al. in ICALP (1), pp. 451–462, 2009; Farzan and Munro in ICALP (1), pp. 439–450, 2009; Farzan and Kamali in ICALP, 2011).
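To make "succinct encoding" concrete, here is the classic balanced-parentheses representation of an ordered tree (2 bits per node), given only as a standard textbook example; it is not the decomposition-based scheme of the paper.

```python
# Classic balanced-parentheses encoding of an ordered tree (2 bits per node).
# Standard illustration of a succinct ordered-tree representation, not the
# decomposition-based encoding described in the abstract.

def encode_bp(tree):
    """tree is (label, [children]); returns a string of '(' and ')' from a preorder walk."""
    _, children = tree
    return "(" + "".join(encode_bp(c) for c in children) + ")"

# A 5-node ordered tree: root with two children, the first of which has two children.
t = ("r", [("a", [("c", []), ("d", [])]), ("b", [])])
print(encode_bp(t))   # -> "((()())())", 10 parentheses = 2 bits per node
```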

4.
5.
This paper presents some tentative experiments in using a special case of rewriting rules in Mizar (Mizar homepage: http://www.mizar.org/): rewriting a term as its subterm. A similar technique, but based on another Mizar mechanism called functor identification (Korniłowicz 2009), was used by Caminati in his paper on basic first-order model theory in Mizar (Caminati, J Form Reason 3(1):49–77, 2010, Form Math 19(3):157–169, 2011). However, for that purpose he was obliged to introduce some artificial functors. The mechanism presented in the present paper looks promising and fits the Mizar paradigm.

6.
We first consider the problem of finding a maximum size stable matching if incomplete lists and ties are both allowed, but ties are on one side only. For this problem we give a simple, linear time 3/2-approximation algorithm, improving on the best known approximation factor of 5/3 due to Irving and Manlove (J. Comb. Optim., doi:10.1007/s10878-007-9133-x, 2007). Next, we show how this extends to the Hospitals/Residents problem with the same ratio if the residents have strict orders. We also give a simple linear time algorithm for the general problem with approximation factor 5/3, improving on the best known 15/8-approximation algorithm of Iwama, Miyazaki and Yamauchi (SODA '07: Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 288–297, 2007). For the cases considered in this paper it is NP-hard to approximate within a factor of 21/19, by a result of Halldórsson et al. (ACM Transactions on Algorithms 3(3):30, 2007). Our algorithms not only give better approximation ratios than the cited ones, but are much simpler and run significantly faster. We may also drop a restriction used in (J. Comb. Optim., doi:10.1007/s10878-007-9133-x, 2007), and our analysis is substantially simpler. Preliminary versions of this paper appeared in (Király, Egres Technical Report TR-2008-04, www.cs.elte.hu/egres/, 2008; Király in Proceedings of MATCH-UP 2008: Matching Under Preferences – Algorithms and Complexity, Satellite Workshop of ICALP, July 6, 2008, Reykjavík, Iceland, pp. 36–45, 2008; Király in ESA 2008, Lecture Notes in Computer Science, vol. 5193, pp. 623–634, 2008). For related results obtained since then, see Sect. 5.
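As background only, the classical Gale-Shapley deferred-acceptance procedure on which stable-matching algorithms build can be sketched as follows; it assumes complete, strict preference lists, so it does not handle the ties and incomplete lists addressed by the 3/2-approximation above.

```python
# Classical Gale-Shapley deferred acceptance for complete, strict preference lists.
# Background sketch only: the paper's 3/2-approximation handles ties and incomplete
# lists, which this procedure does not.

def gale_shapley(men_prefs, women_prefs):
    """men_prefs / women_prefs: dict name -> list of names in decreasing preference."""
    rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    free = list(men_prefs)              # men not yet matched
    next_choice = {m: 0 for m in men_prefs}
    engaged_to = {}                     # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:   # w prefers m to her current partner
            free.append(engaged_to[w])
            engaged_to[w] = m
        else:
            free.append(m)
    return {m: w for w, m in engaged_to.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))   # -> {'m2': 'w1', 'm1': 'w2'}
```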

7.
We study the Cutwidth problem, where the input is a graph G, and the objective is to find a linear layout of the vertices that minimizes the maximum number of edges intersected by any vertical line inserted between two consecutive vertices. We give an algorithm for Cutwidth with running time $O(2^k n^{O(1)})$. Here k is the size of a minimum vertex cover of the input graph G, and n is the number of vertices in G. As a corollary, our algorithm gives an $O(2^{n/2} n^{O(1)})$ time algorithm for Cutwidth on bipartite graphs. This is the first non-trivial exact exponential time algorithm for Cutwidth on a graph class where the problem remains NP-complete. Additionally, we show that Cutwidth parameterized by the size of the minimum vertex cover of the input graph does not admit a polynomial kernel unless NP ⊆ coNP/poly. Our kernelization lower bound contrasts with the recent results of Bodlaender et al. (ICALP, Springer, Berlin, 2011; SWAT, Springer, Berlin, 2012) that both Treewidth and Pathwidth parameterized by vertex cover do admit polynomial kernels.
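To make the objective concrete, here is a naive brute-force computation of cutwidth that simply tries every vertex ordering; it illustrates the definition only and is not the $O(2^k n^{O(1)})$ algorithm of the paper.

```python
# Naive cutwidth: minimum over all vertex orderings of the maximum number of edges
# crossing any gap between consecutive positions. Definition-level illustration only;
# it enumerates all n! layouts and is not the parameterized algorithm of the paper.

from itertools import permutations

def cutwidth(vertices, edges):
    best = float("inf")
    for layout in permutations(vertices):
        pos = {v: i for i, v in enumerate(layout)}
        width = max(
            sum(1 for u, v in edges if min(pos[u], pos[v]) <= gap < max(pos[u], pos[v]))
            for gap in range(len(layout) - 1)
        )
        best = min(best, width)
    return best

# A 4-cycle has cutwidth 2: any gap in any layout is crossed by at least two cycle edges.
print(cutwidth([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]))   # -> 2
```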

8.
Teachers and students face many challenges in shifting from traditional classroom cultures to enacting the Knowledge-Building Communities model (KBC model) supported by the CSCL environment, Knowledge Forum (Bereiter, 2002; Bereiter & Scardamalia, 1993; Scardamalia, 2002; Scardamalia & Bereiter, 2006). Enacting the model involves socializing students into knowledge work, similar to disciplinary communities. A useful construct in the field of the Learning Sciences for understanding knowledge work is "epistemic games" (Collins & Ferguson, 1993; Morrison & Collins, 1995; Perkins, 1997). We propose that a powerful means for supporting classroom enactments of the KBC model entails conceptualizing Knowledge Forum as a collective space for playing multi-player epistemic games. Participation in knowledge-building communities is then scaffolded through learning the moves of such games. We have designed scaffolding tools that highlight particular knowledge-building moves for practice and reflection as a means of supporting students and teachers in coming to understand how to collectively work together toward the progressive improvement of ideas. In order to examine our design theories in practice, we present research on Ideas First, a design-based research program involving enactments of the KBC model in Singaporean primary science classrooms (Bielaczyc & Ow, 2007, 2010; Ow & Bielaczyc, 2007, 2008).

9.
This paper investigates the problem of the pth moment exponential stability for a class of stochastic recurrent neural networks with Markovian jump parameters. With the help of a Lyapunov function, stochastic analysis techniques, a generalized Halanay inequality and the Hardy inequality, some novel sufficient conditions on the pth moment exponential stability of the considered system are derived. The results obtained in this paper are completely new and complement and improve some of the previously known results (Liao and Mao, Stoch Anal Appl, 14:165–185, 1996; Wan and Sun, Phys Lett A, 343:306–318, 2005; Hu et al., Chaos Solitons Fractals, 27:1006–1010, 2006; Sun and Cao, Nonlinear Anal Real, 8:1171–1185, 2007; Huang et al., Inf Sci, 178:2194–2203, 2008; Wang et al., Phys Lett A, 356:346–352, 2006; Peng and Liu, Neural Comput Appl, 20:543–547, 2011). Moreover, a numerical example is also provided to demonstrate the effectiveness and applicability of the theoretical results.
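For orientation, the standard notion of pth moment exponential stability (stated here in its usual textbook form; the paper's precise formulation for delayed networks with Markovian jumps is more involved) requires constants $C,\lambda>0$ such that

$$\mathbb{E}\,\|x(t;\xi)\|^{p} \;\le\; C\,\mathbb{E}\,\|\xi\|^{p}\,e^{-\lambda t}, \qquad t\ge 0,$$

equivalently $\limsup_{t\to\infty}\tfrac{1}{t}\log\mathbb{E}\,\|x(t;\xi)\|^{p}<0$, for every admissible initial datum $\xi$.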

10.
The class ${\mathcal{SLUR}}$ (Single Lookahead Unit Resolution) was introduced in Schlipf et al. (Inf Process Lett 54:133–137, 1995) as an umbrella class for efficient (poly-time) SAT solving, with linear-time SAT decision, while the recognition problem was not considered. Čepek et al. (2012) and Balyo et al. (2012) extended this class in various ways to hierarchies covering all of CNF (all clause-sets). We introduce a hierarchy ${\mathcal{SLUR}}_k$, which we argue is the natural "limit" of such approaches. The second source for our investigations is the class ${\mathcal{UC}}$ of unit-refutation complete clause-sets, introduced in del Val (1994) as a target class for knowledge compilation. Via the theory of "hardness" of clause-sets as developed in Kullmann (1999), Kullmann (Ann Math Artif Intell 40(3–4):303–352, 2004) and Ansótegui et al. (2008), we obtain a natural generalisation ${\mathcal{UC}}_k$, containing those clause-sets which are "unit-refutation complete of level k", which is the same as having hardness at most k. Utilising the strong connections to (tree-)resolution complexity and (nested) input resolution, we develop basic methods for the determination of hardness (the level k in ${\mathcal{UC}}_k$). A fundamental insight now is that ${\mathcal{SLUR}}_k = {\mathcal{UC}}_k$ holds for all k. We can thus exploit both streams of intuitions and methods in the investigation of these hierarchies. As an application we can easily show that the hierarchies from Čepek et al. (2012) and Balyo et al. (2012) are strongly subsumed by ${\mathcal{SLUR}}_k$. Finally we consider the problem of "irredundant" clause-sets in ${\mathcal{UC}}_k$. For 2-CNF we show that strong minimisations are possible in polynomial time, while already for (very special) Horn clause-sets minimisation is NP-complete. We conclude with an extensive discussion of open problems and future directions. We envisage the concepts investigated here to be the starting point for a theory of good SAT translations, which brings together the good SAT-solving aspects of ${\mathcal{SLUR}}$ with the knowledge-representation aspects of ${\mathcal{UC}}$, and expands this combination via notions of "hardness".
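Unit propagation (iterated unit resolution), the basic engine behind both SLUR and unit-refutation completeness, can be sketched as follows; this is the textbook procedure, not the hierarchy-specific machinery of the paper.

```python
# Textbook unit propagation on a CNF clause-set (clauses are lists of nonzero ints,
# negative = negated literal). Returns the set of implied literals, or None if a
# conflict (empty clause) is derived. Background for SLUR/UC, not the paper's method.

def unit_propagate(clauses):
    assigned = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assigned for lit in clause):
                continue                                  # clause already satisfied
            unassigned = [lit for lit in clause if -lit not in assigned]
            if not unassigned:
                return None                               # empty clause: conflict
            if len(unassigned) == 1:                      # unit clause: force its literal
                assigned.add(unassigned[0])
                changed = True
    return assigned

# (x1) and (-x1 or x2) propagate to {x1, x2}; adding (-x2) yields a conflict.
print(unit_propagate([[1], [-1, 2]]))        # -> {1, 2}
print(unit_propagate([[1], [-1, 2], [-2]]))  # -> None
```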

11.
A pair of unit clauses is called conflicting if it is of the form (x), $(\bar{x})$. A CNF formula is unit-conflict free (UCF) if it contains no pair of conflicting unit clauses. Lieberherr and Specker (J. ACM 28:411–421, 1981) showed that for each UCF CNF formula with m clauses we can simultaneously satisfy at least $\hat{\varphi} m$ clauses, where $\hat{\varphi}=(\sqrt{5}-1)/2$. We improve the Lieberherr-Specker bound by showing that for each UCF CNF formula F with m clauses we can find, in polynomial time, a subformula F′ with m′ clauses such that we can simultaneously satisfy at least $\hat{\varphi} m+(1-\hat{\varphi})m'+(2-3\hat{\varphi})n''/2$ clauses (in F), where n″ is the number of variables in F which are not in F′. We consider two parameterized versions of MAX-SAT, where the parameter is the number of satisfied clauses above the bounds m/2 and $m(\sqrt{5}-1)/2$. The former bound is tight for general formulas, and the latter is tight for UCF formulas. Mahajan and Raman (J. Algorithms 31:335–354, 1999) showed that every instance of the first parameterized problem can be transformed, in polynomial time, into an equivalent one with at most 6k+3 variables and 10k clauses. We improve this to 4k variables and $(2\sqrt{5}+4)k$ clauses. Mahajan and Raman conjectured that the second parameterized problem is fixed-parameter tractable (FPT). We show that the problem is indeed FPT by describing a polynomial-time algorithm that transforms any problem instance into an equivalent one with at most $(7+3\sqrt{5})k$ variables. Our results are obtained using our improvement of the Lieberherr-Specker bound above.
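Numerically (a quick sanity check, not taken from the paper), the coefficients in the improved bound are

$$\hat{\varphi}=\frac{\sqrt{5}-1}{2}\approx 0.618,\qquad 1-\hat{\varphi}\approx 0.382,\qquad \frac{2-3\hat{\varphi}}{2}\approx 0.073,$$

so the guarantee strictly exceeds the Lieberherr-Specker bound $\hat{\varphi}m$ whenever $m'>0$ or $n''>0$.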

12.
Matthias Möller, Computing 95(5):425–448, 2013
This paper is concerned with the extension of the algebraic flux-correction (AFC) approach (Kuzmin in Computational fluid and solid mechanics, Elsevier, Amsterdam, pp 887–888, 2001; J Comput Phys 219:513–531, 2006; Comput Appl Math 218:79–87, 2008; J Comput Phys 228:2517–2534, 2009; Flux-corrected transport: principles, algorithms, and applications, 2nd edn. Springer, Berlin, pp 145–192, 2012; J Comput Appl Math 236:2317–2337, 2012; Kuzmin et al. in Comput Methods Appl Mech Eng 193:4915–4946, 2004; Int J Numer Methods Fluids 42:265–295, 2003; Kuzmin and Möller in Flux-corrected transport: principles, algorithms, and applications. Springer, Berlin, 2005; Kuzmin and Turek in J Comput Phys 175:525–558, 2002; J Comput Phys 198:131–158, 2004) to nonconforming finite element methods for the linear transport equation. Accurate nonoscillatory approximations to convection-dominated flows are obtained by stabilizing the continuous Galerkin method with solution-dependent artificial diffusion, whose magnitude is controlled by a flux limiter. This concept dates back to flux-corrected transport schemes. The unique feature of AFC is that all information is extracted from the system matrices, which are manipulated to satisfy certain mathematical constraints. AFC schemes have been devised with conforming $P_1$ and $Q_1$ finite elements in mind, but this is not a prerequisite. Here, we consider their extension to the nonconforming Crouzeix–Raviart element (Crouzeix and Raviart in RAIRO R3 7:33–76, 1973) on triangular meshes and its quadrilateral counterpart, the class of rotated bilinear Rannacher–Turek elements (Rannacher and Turek in Numer Methods PDEs 8:97–111, 1992). The underlying design principles of AFC schemes are shown to hold for (some variant of) both elements. However, numerical tests for a purely convective flow and a convection–diffusion problem demonstrate that flux-corrected solutions are overdiffusive for the Crouzeix–Raviart element. Good resolution of smooth and discontinuous profiles is observed for $Q_1^\mathrm{nc}$ approximations on quadrilateral meshes. A synthetic benchmark is used to quantify the artificial diffusion present in conforming and nonconforming high-resolution schemes of AFC type. Finally, the implementation of efficient sparse matrix–vector multiplications is addressed.

13.
We present several variants of the sunflower conjecture of Erdős & Rado (J Lond Math Soc 35:85–90, 1960) and discuss the relations among them. We then show that two of these conjectures (if true) imply negative answers to the questions of Coppersmith & Winograd (J Symb Comput 9:251–280, 1990) and Cohn et al. (2005) regarding possible approaches for obtaining fast matrix-multiplication algorithms. Specifically, we show that the Erdős–Rado sunflower conjecture (if true) implies a negative answer to the "no three disjoint equivoluminous subsets" question of Coppersmith & Winograd (J Symb Comput 9:251–280, 1990); we also formulate a "multicolored" sunflower conjecture in ${\mathbb{Z}_3^n}$ and show that (if true) it implies a negative answer to the "strong USP" conjecture of Cohn et al. (2005) (although it does not seem to impact a second conjecture in Cohn et al. (2005) or the viability of the general group-theoretic approach). A surprising consequence of our results is that the Coppersmith–Winograd conjecture actually implies the Cohn et al. conjecture. The multicolored sunflower conjecture in ${\mathbb{Z}_3^n}$ is a strengthening of the well-known (ordinary) sunflower conjecture in ${\mathbb{Z}_3^n}$, and we show via our connection that a construction from Cohn et al. (2005) yields a lower bound of $(2.51\ldots)^n$ on the size of the largest multicolored 3-sunflower-free set, which beats the current best-known lower bound of $(2.21\ldots)^n$ (Edel 2004) on the size of the largest 3-sunflower-free set in ${\mathbb{Z}_3^n}$.
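For orientation (the standard definitions, not restated from the paper): a $k$-sunflower is a family of $k$ sets $S_1,\ldots,S_k$ whose pairwise intersections all equal a common core $Y$,

$$S_i \cap S_j = Y \quad \text{for all } i \ne j,$$

and the Erdős–Rado sunflower conjecture asserts that for every $k$ there is a constant $c(k)$ such that any family of more than $c(k)^{s}$ sets, each of size $s$, contains a $k$-sunflower.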

14.
In this document, we present an alternative to the method introduced by Ebner (Pattern Recognit 60–67, 2003; J Parallel Distrib Comput 64(1):79–88, 2004; Color constancy using local color shifts, pp 276–287, 2004; Color Constancy, 2007; Mach Vis Appl 20(5):283–301, 2009) for computing the local space average color. We show that when the problem is framed as a linear system and the resulting series is solved, there is a solution based on LU decomposition that reduces the computing time by at least an order of magnitude.
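As a generic illustration of the "frame it as a linear system, then solve with an LU factorization" idea (the actual system of the paper encodes local color averaging and is not reconstructed here; the matrix and right-hand sides below are stand-ins), the factor-once / solve-many pattern in SciPy looks like this.

```python
# Generic factor-once / solve-many pattern with an LU decomposition (SciPy).
# Computational pattern only; the paper's linear system for local space average
# color is not reproduced here, so A and b are stand-in data.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 200
A = np.eye(n) * 4 + rng.normal(scale=0.1, size=(n, n))   # well-conditioned stand-in matrix

lu, piv = lu_factor(A)            # O(n^3) factorization, done once
for channel in range(3):          # reuse the factorization, e.g. once per color channel
    b = rng.normal(size=n)        # stand-in right-hand side
    x = lu_solve((lu, piv), b)    # O(n^2) per solve
    assert np.allclose(A @ x, b)
```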

15.
Numerous sophisticated local algorithms have been suggested in the literature for various fundamental problems. Notable examples are the MIS and $(\Delta +1)$-coloring algorithms by Barenboim and Elkin (Distrib Comput 22(5–6):363–379, 2010), by Kuhn (2009), and by Panconesi and Srinivasan (J Algorithms 20(2):356–374, 1996), as well as the $O(\Delta^2)$-coloring algorithm by Linial (J Comput 21:193, 1992). Unfortunately, most known local algorithms (including, in particular, the aforementioned algorithms) are non-uniform; that is, local algorithms generally use good estimations of one or more global parameters of the network, e.g., the maximum degree $\Delta$ or the number of nodes $n$. This paper provides a method for transforming a non-uniform local algorithm into a uniform one. Furthermore, the resulting algorithm enjoys the same asymptotic running time as the original non-uniform algorithm. Our method applies to a wide family of both deterministic and randomized algorithms. Specifically, it applies to almost all state-of-the-art non-uniform algorithms for MIS and Maximal Matching, as well as to many results concerning the coloring problem (in particular, it applies to all the aforementioned algorithms). To obtain our transformations we introduce a new distributed tool called pruning algorithms, which we believe may be of independent interest.

16.
In this paper, inspired by some types of $BL$-algebra filters (deductive systems) introduced in Haveshki et al. (Soft Comput 10:657–664, 2006), Kondo and Dudek (Soft Comput 12:419–423, 2008) and Turunen (Arch Math Log 40:467–473, 2001), we define residuated-lattice versions of them and study them in connection with Van Gasse et al. (Inf Sci 180(16):3006–3020, 2010), Lianzhen and Kaitai (Inf Sci 177:5725–5738, 2007), and Zhu and Xu (Inf Sci 180:3614–3632, 2010). We also consider some relations between these filters and the quotient residuated lattices that are constructed via these filters.

17.
In a very recent paper, Peng and Liu (Neural Comput Appl 20:543–547, 2011) investigated the pth moment stability of stochastic Grossberg–Hopfield neural networks with Markov volatilities by means of Theorem 4.1 of Mao et al. (Bernoulli 6:73–90, 2000). We should point out that this theorem of Mao et al. establishes pth moment exponential stability for a class of stochastic dynamical systems with constant delay, and therefore cannot be applied to the case of variable time delays. It is also worth emphasizing that Peng and Liu (Neural Comput Appl 20:543–547, 2011) nevertheless used Theorem 4.1 of Mao et al. (Bernoulli 6:73–90, 2000) to derive pth moment exponential stability for Grossberg–Hopfield neural networks with variable delays; there is therefore a gap between Theorem 1 of Peng and Liu (Neural Comput Appl 20:543–547, 2011) and Theorem 4.1 of Mao et al. (Bernoulli 6:73–90, 2000). In this paper, we fill this gap. Moreover, a numerical example is also provided to demonstrate the effectiveness and applicability of the theoretical results.

18.
The Parameterized Complexity of Unique Coverage and Its Variants
In this paper we study the parameterized complexity of the Unique Coverage problem, a variant of the classic Set Cover problem. This problem admits several parameterizations, and we show that all of them, except the standard parameterization and a generalization of it, are unlikely to be fixed-parameter tractable. We use results from extremal combinatorics to obtain the best-known kernel for Unique Coverage, and the well-known color-coding technique of Alon et al. (J. ACM 42(4), 844–856, 1995) to show that a weighted version of this problem is fixed-parameter tractable. Our application of color-coding uses an interesting variation of s-perfect hash families called (k,s)-hash families, which were studied by Alon et al. (J. Comb. Theory Ser. A 104(1), 207–215, 2003) in the context of a class of codes called parent identifying codes (Barg et al. in SIAM J. Discrete Math. 14(3), 423–431, 2001). To the best of our knowledge, this is the first application of (k,s)-hash families outside the domain of coding theory. We prove the existence of such families of size smaller than the best-known s-perfect hash families using the probabilistic method (Alon and Spencer in The Probabilistic Method, Wiley, New York, 2000). Explicit constructions of such families of the size promised by the probabilistic method remain open.
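For orientation, the color-coding technique referenced above, in its simplest random-coloring form, is sketched below on the classic k-path application (not on weighted Unique Coverage, and not using the (k,s)-hash families of the paper): randomly k-color the vertices and search for a "colorful" solution by dynamic programming over color subsets.

```python
# Color-coding in its simplest form (random k-coloring + DP over color subsets),
# shown for the classic k-path application. Illustrates the referenced technique
# only; it is not the weighted Unique Coverage algorithm of the paper.

import random

def colorful_path_exists(adj, k, colors):
    """adj: dict vertex -> iterable of neighbours; colors: dict vertex -> 0..k-1."""
    # reachable[v] = set of color-sets of colorful paths ending at v
    reachable = {v: {frozenset([colors[v]])} for v in adj}
    for _ in range(k - 1):
        new = {v: set() for v in adj}
        for u in adj:
            for s in reachable[u]:
                for v in adj[u]:
                    if colors[v] not in s:
                        new[v].add(s | {colors[v]})
        reachable = new
    return any(reachable[v] for v in adj)

def has_k_path(adj, k, trials=100):
    vertices = list(adj)
    for _ in range(trials):                       # each trial succeeds with prob >= k!/k^k
        colors = {v: random.randrange(k) for v in vertices}
        if colorful_path_exists(adj, k, colors):
            return True
    return False                                  # one-sided error: may miss with tiny prob.

# A path on 4 vertices contains a path on 3 vertices.
path4 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(has_k_path(path4, 3))   # -> True (with high probability)
```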

19.
Given a graph G=(V,E) and a positive integer k, an edge modification problem for a graph property Π consists in deciding whether there exists a set F of pairs of V of size at most k such that the graph $H=(V,E\vartriangle F)$ satisfies the property Π. In the Π edge-completion problem, the set F is constrained to be disjoint from E; in the Π edge-deletion problem, F is a subset of E; no constraint is imposed on F in the Π edge-editing problem. A number of optimization problems can be expressed in terms of graph modification problems, which have been extensively studied in the context of parameterized complexity (Cai in Inf. Process. Lett. 58:171–176, 1996; Fellows et al. in FCT, pp. 312–321, 2007; Heggernes et al. in STOC, pp. 374–381, 2007). When parameterized by the size k of the set F, it has been proved that if Π is a hereditary property characterized by a finite set of forbidden induced subgraphs, then the three Π edge-modification problems are FPT (Cai in Inf. Process. Lett. 58:171–176, 1996). It was then natural to ask (Bodlaender et al. in IWPEC, 2006) whether these problems also admit a polynomial kernel (that is, whether every instance can be reduced in polynomial time to an equivalent instance (G′,k′) with size bounded by a polynomial in k). Using recent lower bound techniques, Kratsch and Wahlström answered this question negatively (Kratsch and Wahlström in IWPEC, pp. 264–275, 2009). However, the problem remains open on many natural graph classes characterized by forbidden induced subgraphs, and it is an open question to characterize for which types of graph properties the parameterized edge-modification problems have polynomial kernels. Kratsch and Wahlström asked whether the result holds when the forbidden subgraphs are paths or cycles, and pointed out that the problem is already open in the case of $P_4$-free graphs (i.e. cographs). This paper provides positive and negative results in that line of research. We prove that parameterized cograph edge-modification problems have cubic vertex kernels, whereas polynomial kernels are unlikely to exist for the $P_l$-free edge-deletion and the $C_l$-free edge-deletion problems for l ≥ 7 and l ≥ 4, respectively; indeed, if they exist, then NP ⊆ coNP/poly.
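A minimal toy illustration (made-up graph and set F, not tied to any particular property Π) of the three variants defined above: completion may only add non-edges, deletion may only remove edges, and editing applies an arbitrary symmetric difference.

```python
# Edge modification as a symmetric difference E △ F (toy example).
# Completion requires F ∩ E = ∅; deletion requires F ⊆ E; editing is unconstrained.

V = {1, 2, 3, 4}                                          # vertex set (unchanged by F)
E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]}      # a path P4: 1-2-3-4
F = {frozenset((1, 4))}                                   # one candidate pair

H_edit = E ^ F                    # editing: symmetric difference, here adds edge {1,4}
is_completion = F.isdisjoint(E)   # True: F contains only non-edges of G
is_deletion = F <= E              # False: {1,4} is not an edge of G
print(sorted(tuple(sorted(e)) for e in H_edit), is_completion, is_deletion)
# -> [(1, 2), (1, 4), (2, 3), (3, 4)] True False
```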

20.
We propose an effective procedure, the first one to our knowledge, for translating a proof term of the Calculus of Inductive Constructions (CIC) into a tactical expression of the high-level specification language of a CIC-based proof assistant like coq (Coq development team 2008) or matita (Asperti et al., J Autom Reason 39:109–139, 2007). This procedure, which should not be considered definitive at its present stage, is intended for translating the logical representation of a proof coming from any source, i.e. from a digital library or from another proof development system, into an equivalent proof presented in the proof assistant's editable high-level format. To testify to the effectiveness of our procedure, we report on its implementation in matita and on the translation of a significant set of proofs (Guidi, ACM Trans Comput Log 2009) from their logical representation as coq 7.3.1 (Coq development team 2002) CIC proof terms to their high-level representation as tactical expressions of matita's user interface language.
