Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
Reachability and shortest path problems are NL-complete for general graphs. They are known to be in L for graphs of tree-width 2 (Jakoby and Tantau in Proceedings of FSTTCS’07: The 27th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 216–227, 2007). In this paper, we improve these bounds for k-trees, where k is a constant. In particular, the main results of our paper are log-space algorithms for reachability in directed k-trees, and for computation of shortest and longest paths in directed acyclic k-trees. Besides the path problems mentioned above, we also consider the problem of deciding whether a k-tree has a perfect matching (decision version), and if so, finding a perfect matching (search version), and prove that these two problems are L-complete. These problems are known to be in P and in RNC for general graphs, and in SPL for planar bipartite graphs, as shown in Datta et al. (Theory Comput. Syst. 47:737–757, 2010). Our results settle the complexity of these problems for the class of k-trees. The results are also applicable for bounded tree-width graphs, when a tree-decomposition is given as input. The technique central to our algorithms is a careful implementation of the divide-and-conquer approach in log-space, along with some ideas from Jakoby and Tantau (Proceedings of FSTTCS’07: The 27th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 216–227, 2007) and Limaye et al. (Theory Comput. Syst. 46(3):499–522, 2010).
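For orientation, here is a minimal Python sketch of the divide-and-conquer recursion that small-space reachability arguments rest on (a Savitch-style midpoint search); it is a generic illustration, not the paper's k-tree-specific log-space algorithm, and the adjacency dictionary is a hypothetical input format:

```python
# reach(adj, u, v, d) asks whether v is reachable from u in at most 2**d
# steps, recursing through a midpoint. The recursion depth is O(log n),
# which is the source of the small space bound. Generic sketch only; the
# paper's contribution is a k-tree-specific log-space implementation.
def reach(adj, u, v, d):
    if u == v:
        return True
    if d == 0:
        return v in adj[u]
    # Try every vertex as a midpoint of a path of length <= 2**d.
    return any(reach(adj, u, w, d - 1) and reach(adj, w, v, d - 1)
               for w in adj)

adj = {1: {2}, 2: {3}, 3: set(), 4: {1}}
print(reach(adj, 1, 3, 2))  # True: 1 -> 2 -> 3
```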

2.
This work contrasts Giovanni Sartor’s view of inferential semantics of legal concepts (Sartor in Artif Intell Law 17:217–251, 2009) with a probabilistic model of theory formation (Kemp et al. in Cognition 114:165–196, 2010). The work further explores possibilities of implementing Kemp’s probabilistic model of theory formation in the context of mapping legal concepts between two individual legal systems. For implementing the legal concept mapping, we propose a cross-categorization approach that combines three mathematical models: the Bayesian Model of Generalization (BMG; Tenenbaum and Griffiths in Behav Brain Sci 4:629–640, 2001), the probabilistic model of theory formation, i.e., the Infinite Relational Model (IRM) first introduced by Kemp et al. (The twenty-first national conference on artificial intelligence, 2006; Cognition 114:165–196, 2010), and its extended model, the normal-IRM (n-IRM) proposed by Herlau et al. (IEEE International Workshop on Machine Learning for Signal Processing, 2012). We apply our cross-categorization approach to datasets in which legal concepts related to educational systems are defined by the Japanese and the Danish authorities, respectively, according to the International Standard Classification of Education. The main contribution of this work is a conceptual framework for the cross-categorization approach that, inspired by Sartor (Artif Intell Law 17:217–251, 2009), attempts to explain a reasoner’s inferential mechanisms.
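As a hedged illustration of one ingredient of the cross-categorization approach, the following Python toy implements the Bayesian Model of Generalization with the size-principle likelihood; the hypothesis space, uniform prior, and education-level items are invented for the example and are not the paper's data:

```python
# Toy BMG sketch: hypotheses are candidate concepts (sets of items); the
# "size principle" likelihood 1/|h| per observation favours smaller
# hypotheses. Assumes a uniform prior over the listed hypotheses.
def generalize(hypotheses, observed, query):
    posts = []
    for h in hypotheses:
        if all(x in h for x in observed):
            posts.append((h, (1.0 / len(h)) ** len(observed)))
        else:
            posts.append((h, 0.0))          # hypothesis ruled out
    z = sum(p for _, p in posts)
    # p(query in concept | observed): posterior mass of hypotheses that
    # also contain the query item.
    return sum(p for h, p in posts if query in h) / z

hyps = [{"primary", "lower_secondary"},
        {"primary", "lower_secondary", "upper_secondary"},
        {"primary", "lower_secondary", "upper_secondary", "tertiary"}]
print(generalize(hyps, ["primary"], "upper_secondary"))  # ~0.538
```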

3.
X-ray computed tomography (CT) plays an important role in cancer diagnosis and radiotherapy. However, the high imaging dose delivered to healthy organs during CT scans is a serious clinical concern. Imaging dose in CT scans can be reduced by reducing the number of X-ray projections. In this paper, we consider 2D CT reconstruction using a very small number of projections. Some regularization-based reconstruction methods have already been proposed in the literature for this task, such as total variation (TV) based reconstruction (Sidky and Pan in Phys. Med. Biol. 53:4777, 2008; Sidky et al. in J. X-Ray Sci. Technol. 14(2):119–139, 2006; Jia et al. in Med. Phys. 37:1757, 2010; Choi et al. in Med. Phys. 37:5113, 2010) and the balanced approach with wavelet frame based regularization (Jia et al. in Phys. Med. Biol. 56:3787–3807, 2011). For most of the existing methods, at least 40 projections are usually needed to get a satisfactory reconstruction. In order to keep the radiation dose as low as possible while increasing the quality of the reconstructed images, one needs to enhance the resolution of the projected image in the Radon domain without increasing the total number of projections. The goal of this paper is to propose a CT reconstruction model with wavelet frame based regularization and Radon domain inpainting. The proposed model simultaneously reconstructs a high-quality image and its corresponding high-resolution measurements in the Radon domain. In addition, we discovered that using the isotropic wavelet frame regularization proposed in Cai et al. (Image restorations: total variation, wavelet frames and beyond, 2011, preprint) is superior to using its anisotropic counterpart. Our proposed model, as well as the other models presented in this paper, is solved rather efficiently by the split Bregman algorithm (Goldstein and Osher in SIAM J. Imaging Sci. 2(2):323–343, 2009; Cai et al. in Multiscale Model. Simul. 8(2):337–369, 2009). Numerical simulations and comparisons are presented at the end.
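For readers unfamiliar with the solver mentioned above, here is a minimal split Bregman sketch for 1D TV denoising in Python; it illustrates only the alternating structure (quadratic solve, shrinkage, Bregman update) and not the paper's model, which couples wavelet frame regularization with Radon domain inpainting:

```python
import numpy as np

# Split Bregman for  min_u (mu/2)||u - f||^2 + ||D u||_1  in 1D, where D
# is the forward-difference operator. Parameters mu, lam are illustrative.
def tv_denoise(f, mu=10.0, lam=1.0, iters=100):
    n = len(f)
    D = np.diff(np.eye(n), axis=0)           # (n-1) x n difference matrix
    A = mu * np.eye(n) + lam * D.T @ D       # normal equations for u-step
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))  # quadratic solve
        d = shrink(D @ u + b, 1.0 / lam)     # soft-thresholding step
        b = b + D @ u - d                    # Bregman update
    return u

f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * np.random.randn(100)
print(np.round(tv_denoise(f)[45:55], 2))     # roughly a clean 0 -> 1 step
```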

4.
5.
Combining the block transmission in Long and Liu (Phys Rev A 65:032302, 2002) and the double operations in Lin et al. (Opt Commun 282:4455, 2009), we propose a secure multiparty quantum secret sharing protocol with a collective eavesdropping-check character. In this protocol, only the boss needs to prepare Bell states and perform Bell state measurements, while all agents only perform local operations, which makes the protocol more feasible with current techniques. Incidentally, we show that the other half of the secret messages in the protocol of Lin et al. (Opt Commun 282:4455, 2009) may also be eavesdropped.

6.
This paper analyzes the role of common data problems when identifying structural breaks in small samples. Most notably, we survey the small-sample properties of the most commonly applied endogenous break tests, developed by Brown et al. (J R Stat Soc B 37:149–163, 1975) and Zeileis (Stat Pap 45(1):123–131, 2004), Nyblom (J Am Stat Assoc 84(405):223–230, 1989) and Hansen (J Policy Model 14(4):517–533, 1992), and Andrews et al. (J Econ 70(1):9–38, 1996). Power and size properties are derived using Monte Carlo simulations. We find that the Nyblom test is on par with the commonly used F-type tests in small samples in terms of power. While the Nyblom test’s power decreases if the structural break occurs close to the margin of the sample, it proves far more robust to non-normal distributions of the error term, which are found to matter strongly in small samples although they are asymptotically irrelevant for all tests analyzed in this paper.
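The flavor of such a power/size study can be sketched in a few lines of Python; the example below simulates a Chow-type F test for a mean shift at a known midpoint (a simplifying assumption: the surveyed tests treat the break date as endogenous) and reports empirical rejection rates:

```python
import numpy as np
from scipy import stats

# Monte Carlo sketch of a small-sample size/power study. Sample size,
# shift magnitude, and replication count are illustrative assumptions.
def chow_reject_rate(n=30, shift=0.0, reps=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    k = n // 2
    crit = stats.f.ppf(1 - alpha, 1, n - 2)
    rejections = 0
    for _ in range(reps):
        y = rng.standard_normal(n)
        y[k:] += shift                           # structural break in the mean
        rss_full = np.sum((y - y.mean()) ** 2)   # restricted: one common mean
        rss_split = (np.sum((y[:k] - y[:k].mean()) ** 2)
                     + np.sum((y[k:] - y[k:].mean()) ** 2))
        f_stat = (rss_full - rss_split) / (rss_split / (n - 2))
        rejections += f_stat > crit
    return rejections / reps

print(chow_reject_rate(shift=0.0))   # empirical size, ~0.05
print(chow_reject_rate(shift=1.0))   # power against a one-sigma mean shift
```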

7.
The class ${\mathcal{SLUR}}$ (Single Lookahead Unit Resolution) was introduced in Schlipf et al. (Inf Process Lett 54:133–137, 1995) as an umbrella class for efficient (poly-time) SAT solving, with linear-time SAT decision, while the recognition problem was not considered. Čepek et al. (2012) and Balyo et al. (2012) extended this class in various ways to hierarchies covering all of CNF (all clause-sets). We introduce a hierarchy ${\mathcal{SLUR}}_k$ which we argue is the natural “limit” of such approaches. The second source for our investigations is the class ${\mathcal{UC}}$ of unit-refutation complete clause-sets, introduced in del Val (1994) as a target class for knowledge compilation. Via the theory of “hardness” of clause-sets as developed in Kullmann (1999), Kullmann (Ann Math Artif Intell 40(3–4):303–352, 2004) and Ansótegui et al. (2008) we obtain a natural generalisation ${\mathcal{UC}}_k$, containing those clause-sets which are “unit-refutation complete of level k”, which is the same as having hardness at most k. Utilising the strong connections to (tree-)resolution complexity and (nested) input resolution, we develop basic methods for the determination of hardness (the level k in ${\mathcal{UC}}_k$). A fundamental insight now is that ${\mathcal{SLUR}}_k = {\mathcal{UC}}_k$ holds for all k. We can thus exploit both streams of intuitions and methods for the investigations of these hierarchies. As an application we can easily show that the hierarchies from Čepek et al. (2012) and Balyo et al. (2012) are strongly subsumed by ${\mathcal{SLUR}}_k$. Finally we consider the problem of “irredundant” clause-sets in ${\mathcal{UC}}_k$. For 2-CNF we show that strong minimisations are possible in polynomial time, while already for (very special) Horn clause-sets minimisation is NP-complete. We conclude with an extensive discussion of open problems and future directions. We envisage the concepts investigated here to be the starting point for a theory of good SAT translations, which brings together the good SAT-solving aspects from ${\mathcal{SLUR}}$ with the knowledge-representation aspects from ${\mathcal{UC}}$, and expands this combination via notions of “hardness”.
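For orientation, the following Python sketch implements plain unit propagation, the primitive underlying both SLUR and unit-refutation completeness; the clause encoding (frozensets of signed integers, negative meaning negated) is an illustrative choice:

```python
# Repeatedly pick a unit clause, assign its literal, and simplify: remove
# satisfied clauses, shorten clauses containing the negated literal. The
# paper's hierarchies SLUR_k / UC_k layer k levels of lookahead on top of
# exactly this primitive.
def unit_propagate(clauses):
    clauses = set(clauses)
    assignment = set()
    while True:
        units = [c for c in clauses if len(c) == 1]
        if not units:
            return clauses, assignment          # fixed point reached
        lit = next(iter(units[0]))
        assignment.add(lit)
        new = set()
        for c in clauses:
            if lit in c:
                continue                        # clause satisfied
            if -lit in c:
                c = c - {-lit}                  # literal falsified
                if not c:
                    return None, assignment     # empty clause: refuted
            new.add(c)
        clauses = new

cnf = [frozenset({1}), frozenset({-1, 2}), frozenset({-2, 3})]
print(unit_propagate(cnf))  # propagates to the assignment {1, 2, 3}
```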

8.
9.
Danvy’s functional unparsing problem (Danvy in J. Funct. Program. 8(6), 621–625, 1998) is to implement a type-safe “printf” function, which converts a sequence of heterogeneous arguments to a string according to a given format. The dual problem is to implement a type-safe “scanf” function, which extracts a sequence of heterogeneous arguments from a string by interpreting (Friedman and Wand in LFP, pp. 348–355, 1984 and in Essentials of Programming Languages, MIT Press, 2008) the same format as an equally heterogeneous sequence of patterns that binds zero or more variables. We derive multiple solutions to both problems (Wand in J. ACM 27(1), 164–180, 1980) from their formal specifications (Wand in Theor. Comput. Sci. 20(1), 3–32, 1982). On one hand, our solutions show how the Hindley-Milner type system, unextended, permits accessing heterogeneous sequences with the static assurance of type safety. On the other hand, our solutions demonstrate the use of control operators (Felleisen et al. in Proceedings of the 1988 ACM Conference on Lisp and Functional Programming, pp. 52–62, ACM Press, New York, 1988; Wand in POPL 85: Conference Record of the Annual ACM Symposium on Principles of Programming Languages, vol. 16, ACM Press, New York, 1985; Meyer and Wand in Logics of Programs, Lecture Notes in Computer Science, vol. 193, pp. 219–224, Springer, Berlin, 1985) to communicate with formats as coroutines (Wand in Proceedings of the 1980 ACM Conference on Lisp and Functional Programming, vol. 12, pp. 285–299, ACM Press, New York, 1980 and Haynes et al. in LFP, pp. 293–298, 1984).
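A dynamically typed Python sketch of the continuation-passing formulation can convey the mechanics, though not the point of the paper, which is the static Hindley-Milner typing guarantee; the directive names below are invented for the example:

```python
# CPS "printf": a format is a function over its string continuation, and
# each directive consumes one more argument. Python shows only the CPS
# plumbing, not the static type safety.
def sprintf(fmt):
    return fmt(lambda acc: acc)("")

def lit(s):                      # literal-text directive
    return lambda k: lambda acc: k(acc + s)

def num(k):                      # %d-style directive: consumes an int
    return lambda acc: lambda n: k(acc + str(n))

def string(k):                   # %s-style directive: consumes a str
    return lambda acc: lambda s: k(acc + s)

def compose(f, g):               # chain two directives into one format
    return lambda k: f(g(k))

fmt = compose(lit("x = "), compose(num, lit("!")))
print(sprintf(fmt)(42))                        # -> "x = 42!"
print(sprintf(compose(string, num))("n=")(7))  # -> "n=7"
```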

10.
We address the problem of generating compact code from software pipelined loops. Although software pipelining is a powerful technique to extract fine-grain parallelism, it generates lifetime intervals spanning multiple loop iterations. These intervals require periodic register allocation (also called variable expansion), which in turn yields a code generation challenge. We are looking for the minimal unrolling factor enabling the periodic register allocation of software pipelined kernels. This challenge is generally addressed through one of: (1) hardware support in the form of rotating register files, which solve the unrolling problem but are expensive in hardware; (2) register renaming by inserting register moves, which increase the number of operations in the loop, and may damage the schedule of the software pipeline and reduce throughput; (3) post-pass loop unrolling that does not compromise throughput but often leads to impractical code growth. The latter approach relies on the proof that MAXLIVE registers (the maximal number of values simultaneously alive) are sufficient for periodic register allocation (Eisenbeis et al. in PACT ’95: Proceedings of the IFIP WG10.3 working conference on Parallel Architectures and Compilation Techniques, pages 264–267, Manchester, UK, 1995; Hendren et al. in CC ’92: Proceedings of the 4th International Conference on Compiler Construction, pages 176–191, London, UK, 1992). However, the best existing heuristic for controlling this code growth—modulo variable expansion (Lam in SIGPLAN Not 23(7):318–328, 1988)—may not apply the correct amount of loop unrolling to guarantee that MAXLIVE registers are enough, which may result in register spills (Eisenbeis et al. in PACT ’95: Proceedings of the IFIP WG10.3 working conference on Parallel Architectures and Compilation Techniques, pages 264–267, Manchester, UK, 1995). This paper presents our research results on the open problem of minimal loop unrolling, allowing a software-only code generation that does not trade the optimality of the initiation interval (II) for the compactness of the generated code. Our novel idea is to use the remaining free registers after periodic register allocation to relax the constraints on register reuse. The problem of minimal loop unrolling arises either before or after software pipelining, either with a single or with multiple register types (classes). We provide a formal problem definition for each scenario, and we propose and study a dedicated algorithm for each problem. Our solutions are implemented within an industrial-strength compiler for a VLIW embedded processor from STMicroelectronics, and validated on multiple benchmark suites.
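As a hedged illustration of the baseline the paper improves on, here is one textbook formulation of the modulo-variable-expansion unroll factor in Python (the lcm-based combination of per-variable copy counts is an assumption of this sketch; formulations vary, and the paper's point is precisely that such heuristics may under-unroll relative to the MAXLIVE bound):

```python
from math import ceil, lcm

# Each value alive for `lifetime` cycles in a kernel with initiation
# interval II needs ceil(lifetime / II) rotating copies; the kernel is
# unrolled so that every variable's copies cycle evenly. Sketch of one
# textbook variant, not the paper's algorithm.
def mve_unroll_factor(lifetimes, ii):
    copies = [max(1, ceil(lt / ii)) for lt in lifetimes]
    return lcm(*copies)

# Three values spanning 5, 7, and 2 cycles in a kernel with II = 3:
print(mve_unroll_factor([5, 7, 2], ii=3))   # lcm(2, 3, 1) = 6
```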

11.
In this paper, decision variables for the key-frame detection problem in a video are evaluated using statistical tools derived from the theory of design of experiments. The pixel-by-pixel intensity difference of consecutive video frames is used as the factor, or decision variable, for designing an experiment for key-frame detection. The determination of a key-frame is correlated with the different values of this factor. A novel concept of the meaningfulness of a video key-frame is also introduced to select the representative key-frame from a set of possible key-frames. The use of design-of-experiments concepts and the meaningfulness property to summarize a video is tested on a number of videos taken from the MUSCLE-VCD-2007 dataset. The performance of the proposed approach in detecting key-frames is found to be superior to competing approaches such as the PME-based method (Liu et al., IEEE Trans Circuits Syst Video Technol 13(10):1006–1013, 2003; Mukherjee et al., IEEE Trans Circuits Syst Video Technol 17(5):612–620, 2007; Panagiotakis et al., IEEE Trans Circuits Syst Video Technol 19(3):447–451, 2009).
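A bare-bones Python sketch of the underlying factor may help: mean absolute inter-frame intensity difference with a fixed threshold. The threshold rule stands in for the paper's design-of-experiments analysis and meaningfulness criterion, and is not the proposed method:

```python
import numpy as np

# Flag frames whose pixel-by-pixel difference from the previous frame
# exceeds a threshold as key-frame candidates. The threshold is an
# illustrative assumption.
def keyframe_candidates(frames, thresh=20.0):
    candidates = []
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float)
                              - frames[i - 1].astype(float)))
        if diff > thresh:        # large inter-frame change: candidate cut
            candidates.append(i)
    return candidates

video = [np.full((48, 64), 10, dtype=np.uint8)] * 5 \
      + [np.full((48, 64), 200, dtype=np.uint8)] * 5   # abrupt cut
print(keyframe_candidates(video))   # [5]: the frame after the cut
```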

12.
Wavelet frame based models for image restoration have been extensively studied for the past decade (Chan et al. in SIAM J. Sci. Comput. 24(4):1408–1432, 2003; Cai et al. in Multiscale Model. Simul. 8(2):337–369, 2009; Elad et al. in Appl. Comput. Harmon. Anal. 19(3):340–358, 2005; Starck et al. in IEEE Trans. Image Process. 14(10):1570–1582, 2005; Shen in Proceedings of the international congress of mathematicians, vol. 4, pp. 2834–2863, 2010; Dong and Shen in IAS lecture notes series, Summer program on “The mathematics of image processing”, Park City Mathematics Institute, 2010). The success of wavelet frames in image restoration is mainly due to their capability of sparsely approximating piecewise smooth functions like images. Most of the wavelet frame based models designed in the past are based on the penalization of the $\ell_1$ norm of wavelet frame coefficients, which, under certain conditions, is the right choice, as supported by theories of compressed sensing (Candes et al. in Appl. Comput. Harmon. Anal., 2010; Candes et al. in IEEE Trans. Inf. Theory 52(2):489–509, 2006; Donoho in IEEE Trans. Inf. Theory 52:1289–1306, 2006). However, the assumptions of compressed sensing may not be satisfied in practice (e.g. for image deblurring and CT image reconstruction). Recently in Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), the authors propose to penalize the $\ell_0$ “norm” of the wavelet frame coefficients instead, and they have demonstrated significant improvements of their method over some commonly used $\ell_1$ minimization models in terms of quality of the recovered images. In this paper, we propose a new algorithm, called the mean doubly augmented Lagrangian (MDAL) method, for $\ell_0$ minimizations based on the classical doubly augmented Lagrangian (DAL) method (Rockafellar in Math. Oper. Res. 97–116, 1976). Our numerical experiments show that the proposed MDAL method is not only more efficient than the method proposed by Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), but can also generate recovered images with even higher quality. This study reassures the feasibility of using the $\ell_0$ “norm” for image restoration problems.
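The elementary operation distinguishing the two penalties can be stated in a few lines: the proximal map of the $\ell_0$ “norm” is hard thresholding (keep or kill), versus soft thresholding (shrink) for $\ell_1$. The Python sketch below shows only this standard identity, not the MDAL iteration that wraps it:

```python
import numpy as np

# prox of lam*||z||_0 under 0.5*(z - x)^2: keep x iff x^2/2 > lam,
# i.e. |x| > sqrt(2*lam). Contrast with the l1 soft-threshold shrink.
def hard_threshold(x, lam):
    return np.where(np.abs(x) > np.sqrt(2 * lam), x, 0.0)

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([-2.0, -0.8, 0.1, 0.9, 3.0])
print(hard_threshold(x, 0.5))   # [-2.  0.   0.   0.   3. ]  (cutoff = 1.0)
print(soft_threshold(x, 0.5))   # [-1.5 -0.3  0.   0.4  2.5]
```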

13.
We extend the result of Zhang et al. (J Fuzzy Math 14:53, 2006), who discussed finite fuzzy relation equations with max–min and max–prod composition. In this article, the $\text{max-}*$ composition is used for a wide family of operations $*$. In particular, the families of solutions of two relation equations are compared.
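For the max–min special case, the classical greatest-solution construction (via the Gödel implication, i.e. the residuum of min) can be sketched as follows; the numeric matrices are illustrative, and other max-$*$ compositions would replace the implication by the residuum of $*$:

```python
import numpy as np

# For A o X = B with (A o X)(i,k) = max_j min(A[i,j], X[j,k]), the
# classical greatest candidate solution uses the Goedel implication
# alpha(a, b) = 1 if a <= b else b; the equation is solvable iff the
# candidate actually satisfies it.
def goedel_alpha(a, b):
    return 1.0 if a <= b else b

def greatest_solution(A, B):
    m, n = A.shape
    _, p = B.shape
    X = np.array([[min(goedel_alpha(A[i, j], B[i, k]) for i in range(m))
                   for k in range(p)] for j in range(n)])
    composed = np.array([[max(min(A[i, j], X[j, k]) for j in range(n))
                          for k in range(p)] for i in range(m)])
    return X if np.allclose(composed, B) else None   # None: unsolvable

A = np.array([[0.8, 0.3], [0.4, 0.9]])
B = np.array([[0.6, 0.3], [0.4, 0.4]])
print(greatest_solution(A, B))   # [[0.6 0.3] [0.4 0.4]]
```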

14.
We study two-stage robust variants of combinatorial optimization problems on undirected graphs, like Steiner tree, Steiner forest, and uncapacitated facility location. Robust optimization problems, previously studied by Dhamdhere et al. (Proc. of 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS’05), pp. 367–378, 2005), Golovin et al. (Proc. of the 23rd Annual Symposium on Theoretical Aspects of Computer Science (STACS), 2006), and Feige et al. (Proc. of the 12th International Integer Programming and Combinatorial Optimization Conference, pp. 439–453, 2007), are two-stage planning problems in which the requirements are revealed after some decisions are taken in Stage 1. One then has to complete the solution, at a higher cost, to meet the given requirements. In the robust k-Steiner tree problem, for example, one buys some edges in Stage 1. Then k terminals are revealed in Stage 2 and one has to buy more edges, at a higher cost, to complete the Stage 1 solution to build a Steiner tree on these terminals. The objective is to minimize the total cost under the worst-case scenario. In this paper, we focus on the case of exponentially many scenarios given implicitly. A scenario consists of any subset of k terminals (for k-Steiner tree), or any subset of k terminal-pairs (for k-Steiner forest), or any subset of k clients (for facility location). Feige et al. (Proc. of the 12th International Integer Programming and Combinatorial Optimization Conference, pp. 439–453, 2007) give an LP-based general framework for approximation algorithms for a class of two-stage robust problems. Their framework cannot be used for network design problems like k-Steiner tree (see the later elaboration). It can be used for the robust facility location problem, but gives only a logarithmic approximation. We present the first constant-factor approximation algorithms for the robust k-Steiner tree (with an exponential number of scenarios) and robust uncapacitated facility location problems. Our algorithms are combinatorial and are based on guessing the optimum cost and clustering to aggregate nearby vertices. For the robust k-Steiner forest problem on trees and with a uniform multiplicative increase factor for Stage 2 (also known as inflation), we present a constant approximation. We show APX-hardness of the robust min-cut problem (even with singleton-set scenarios), resolving an open question of Dhamdhere et al. (Proc. of 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS’05), pp. 367–378, 2005) and Golovin et al. (Proc. of the 23rd Annual Symposium on Theoretical Aspects of Computer Science (STACS), 2006).

15.
This paper concerns project scheduling under uncertainty. The project is modeled as an activity-on-node network, each activity having an uncertain duration represented by an interval. The problem of computing the minimum float of each activity over all duration scenarios is addressed. For solving this NP-hard problem, Dubois et al. (in J. Intell. Manuf. 16, 407–422, 2005) and Fortin et al. (in J. Sched. doi:10.1007/s10951-010-0163-3, 2010) have proposed an algorithm based on path enumeration. In this paper, new structural properties of optimal solutions are established and used for deriving a lower bound and designing an efficient branch-and-bound procedure. Two mixed-integer programming formulations are also proposed. Computational experiments have been conducted on a large variety of randomly generated problem instances. The results show that the proposed branch-and-bound procedure is very fast and consistently outperforms the MIP formulations and the path enumeration algorithm.
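As background for the interval problem, the following Python sketch computes classical CPM floats on an activity-on-node network with fixed durations; the paper's NP-hard question is the minimum of this quantity over all interval scenarios, which this deterministic baseline does not address:

```python
# Classical CPM: forward pass for earliest starts, backward pass for
# latest starts; float = latest start - earliest start (0 = critical).
# Activities are assumed listed in topological order.
def floats(durations, preds):
    order = list(durations)
    early = {a: 0 for a in order}
    for a in order:                              # forward pass
        for p in preds[a]:
            early[a] = max(early[a], early[p] + durations[p])
    makespan = max(early[a] + durations[a] for a in order)
    late = {a: makespan - durations[a] for a in order}
    for a in reversed(order):                    # backward pass
        for b in order:
            if a in preds[b]:
                late[a] = min(late[a], late[b] - durations[a])
    return {a: late[a] - early[a] for a in order}

durations = {"A": 3, "B": 5, "C": 2}
preds = {"A": [], "B": [], "C": ["A", "B"]}
print(floats(durations, preds))   # A has float 2; B and C are critical
```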

16.
An ongoing line of research has shown super-polynomial lower bounds for uniform and slightly-non-uniform small-depth threshold and arithmetic circuits (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999; Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009; Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011). We give a unified framework that captures and improves each of the previous results. Our main results are that Permanent does not have threshold circuits of the following kinds.
  1. Depth O(1), $n^{o(1)}$ bits of non-uniformity, and size $n^{O(1)}$.
  2. Depth O(1), polylog(n) bits of non-uniformity, and size s(n) such that for all constants c the c-fold composition of s, $s^{(c)}(n)$, is less than $2^{n}$.
  3. Depth $o(\log \log n)$, polylog(n) bits of non-uniformity, and size $n^{O(1)}$.
(1) strengthens a result of Jansen and Santhanam (Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011), who obtained similar parameters but for arithmetic circuits of constant depth rather than Boolean threshold circuits. (2) and (3) strengthen results of Allender (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999) and Koiran and Perifel (Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009), respectively, who obtained results with similar parameters but for completely uniform circuits. Our main technical contribution is to simplify and unify earlier proofs in this area, and adapt the proofs to handle some amount of non-uniformity. We also develop a notion of circuits with a small amount of non-uniformity that naturally interpolates between fully uniform and fully non-uniform circuits. We use this notion, which we term weak uniformity, rather than the earlier and essentially equivalent notion of succinctness used by Jansen and Santhanam, because the notion of weak uniformity more fully and easily interpolates between full uniformity and non-uniformity of circuits.

17.
Baker and Cirinei (Lecture Notes in Computer Science, vol. 4878, Springer, pp. 62–75, 2007) have introduced an exact but naive algorithm, which consists in solving a state reachability problem in a finite automaton, to check whether a set of sporadic hard real-time tasks is schedulable on an identical multiprocessor platform. However, this algorithm suffers from poor performance due to the exponential size of the automaton relative to the size of the task set. In this paper, we build on the work of Baker and Cirinei and rely on their formalism to characterise the complexity of this problem. We prove that it is PSpace-complete. In order to obtain an algorithm that is applicable in practice to systems of realistic sizes, we successfully apply techniques developed by the formal verification community, specifically antichain techniques (Doyen and Raskin in Lecture Notes in Computer Science, vol. 6015, Springer, pp. 2–22, 2010), to this scheduling problem. For that purpose, we define a simulation relation on Baker and Cirinei’s automaton and prove its correctness. We show that our improved algorithm yields dramatically improved performance for the schedulability test and opens the door to many further improvements. This work is an extended and revised version of a previous conference paper by the same authors (Lindström et al., Proceedings of the 19th International Conference on Real-Time and Network Systems (RTNS 2011), pp. 25–34, 2011).

18.
In this paper we provide improved approximation algorithms for the Min-Max Tree Cover and Bounded Tree Cover problems. Given a graph G=(V,E) with weights $w: E \rightarrow \mathbb{R}^{+}$, a set $T_1, T_2, \ldots, T_k$ of subtrees of G is called a tree cover of G if $V=\bigcup_{i=1}^{k} V(T_{i})$. In the Min-Max k-tree Cover problem we are given a graph G and a positive integer k, and the goal is to find a tree cover with k trees such that the weight of the largest tree in the cover is minimized. We present a 3-approximation algorithm for this problem, improving the two different ratio-4 approximation algorithms presented in Arkin et al. (J. Algorithms 59:1–18, 2006) and Even et al. (Oper. Res. Lett. 32(4):309–315, 2004). The problem is known to have an APX-hardness lower bound of $\frac{3}{2}$ (Xu and Wen in Oper. Res. Lett. 38:169–173, 2010). In the Bounded Tree Cover problem we are given a graph G and a bound λ, and the goal is to find a tree cover with the minimum number of trees such that each tree has weight at most λ. We present a 2.5-approximation algorithm for this problem, improving the 3-approximation bound in Arkin et al. (J. Algorithms 59:1–18, 2006).

19.
We propose an effective procedure, the first one to our knowledge, for translating a proof term of the Calculus of Inductive Constructions (CIC) into a tactical expression of the high-level specification language of a CIC-based proof assistant like coq (Coq development team 2008) or matita (Asperti et al., J Autom Reason 39:109–139, 2007). This procedure, which should not be considered definitive at its present stage, is intended for translating the logical representation of a proof coming from any source, i.e. from a digital library or from another proof development system, into an equivalent proof presented in the proof assistant’s editable high-level format. To testify to the effectiveness of our procedure, we report on its implementation in matita and on the translation of a significant set of proofs (Guidi, ACM Trans Comput Log 2009) from their logical representation as coq 7.3.1 (Coq development team 2002) CIC proof terms to their high-level representation as tactical expressions of matita’s user interface language.

20.
The problems studied in this article originate from the Graph Motif problem introduced by Lacroix et al. (IEEE/ACM Trans. Comput. Biol. Bioinform. 3(4):360–368, 2006) in the context of biological networks. The problem is to decide whether a vertex-colored graph has a connected subgraph whose colors equal a given multiset of colors M. It is a variant of graph pattern matching in which the structure of the occurrence of the pattern is of no interest: the only requirement is connectedness. Using an algebraic framework recently introduced by Koutis (Proceedings of the 35th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 5125, pp. 575–586, 2008) and Koutis and Williams (Proceedings of the 36th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 5555, pp. 653–664, 2009), we obtain new FPT algorithms for Graph Motif and variants, with improved running times. We also obtain results on the counting versions of this problem, proving that the counting problem is FPT if M is a set, but becomes #W[1]-hard if M is a multiset with two colors. Finally, we present an experimental evaluation of this approach on real datasets, showing that its performance compares favorably with existing software.
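For orientation, a brute-force baseline for Graph Motif is easy to state (and exponential, which is exactly what the algebraic FPT framework avoids); the Python sketch below enumerates vertex subsets of size |M| and checks colors and connectedness:

```python
from itertools import combinations
from collections import Counter

# Try every vertex subset of size |M|; keep one that is connected and
# whose colors match the multiset M. Exponential baseline, for contrast
# with the Koutis-Williams algebraic approach used in the paper.
def graph_motif(adj, color, motif):
    m = Counter(motif)
    for subset in combinations(adj, sum(m.values())):
        if Counter(color[v] for v in subset) != m:
            continue
        seen, stack = {subset[0]}, [subset[0]]    # DFS connectivity check
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w in subset and w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(seen) == len(subset):
            return set(subset)
    return None

adj = {1: {2}, 2: {1, 3}, 3: {2}, 4: set()}
color = {1: "r", 2: "g", 3: "r", 4: "g"}
print(graph_motif(adj, color, ["r", "g"]))   # {1, 2}: connected, colors r+g
```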
