Similar Documents (20 results)
1.
The R-automaton is the weakest form of the nondeterministic version of the restarting automaton that was introduced by Jančar et al. to model the so-called analysis by reduction. Here it is shown that the class ℒ(R) of languages that are accepted by R-automata is incomparable under set inclusion to the class of Church-Rosser languages and to the class of growing context-sensitive languages. In fact, this already holds for the class of languages that are accepted by 2-monotone R-automata. In addition, we prove that already the latter class contains NP-complete languages, showing that already the 2-monotone R-automaton has a surprisingly large expressive power. The results of this paper were announced at DLT 2004 in Auckland, New Zealand. This work was mainly carried out while T. Jurdziński was visiting the University of Kassel, supported by a grant from the Deutsche Forschungsgemeinschaft (DFG). F. Mráz and M. Plátek were partially supported by the Grant Agency of the Czech Republic under Grant No. 201/04/2102 and by the program 'Information Society' under project 1ET100300517. F. Mráz was also supported by the Grant Agency of Charles University in Prague under Grant No. 358/2006/A-INF/MFF.

2.
This paper presents a semantics for the Logic of Proofs LP in which all the operations on proofs are realized by feasibly computable functions. More precisely, we show that the completeness of LP for the semantics of proofs of Peano Arithmetic extends to the semantics of proofs in Buss' bounded arithmetic S^1_2. In view of applications in the epistemology of LP in particular and justification logics in general, this result shows that explicit knowledge in the propositional framework can be made computationally feasible. This research was supported by CUNY Community College Collaborative Incentive Research Grant 91639-0001 "Mathematical Foundations of Knowledge Representation".

3.
It is proved that "FIFO" worksharing protocols provide asymptotically optimal solutions to two problems related to sharing large collections of independent tasks in a heterogeneous network of workstations (HNOW). In the first problem, one seeks to accomplish as much work as possible on the HNOW during a prespecified fixed period of L time units. In the second, one seeks to complete W units of work by "renting" the HNOW for as short a time as necessary. The worksharing protocols we study are crafted within an architectural model that characterizes an HNOW via parameters that measure its workstations' computational and communicational powers. All valid protocols are self-scheduling, in the sense that they determine completely both an amount of work to allocate to each of the HNOW's workstations and a schedule for all related interworkstation communications. The schedules provide either a value for W given L, or a value for L given W, hence solve both of the motivating problems. A protocol observes a FIFO regimen if it has the workstations finish their assigned work, and return their results, in the same order in which they are supplied with their workloads. The proven optimality of FIFO protocols resides in the fact that they accomplish at least as much work as any other protocol during all sufficiently long worksharing episodes, and that they complete sufficiently large given collections of tasks at least as fast as any other protocol. Simulation experiments illustrate that the superiority of FIFO protocols is often observed during worksharing episodes of only a few minutes' duration. A portion of this research was presented at the 15th ACM Symp. on Parallelism in Algorithms and Architectures (2003).
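To make the setting concrete, here is a toy FIFO-episode simulator. The linear cost model, the parameter names, and the example allocation are our own illustrative assumptions, not the paper's architectural model; the sketch only shows what "supply workloads sequentially, results return in FIFO order" means.

```python
# Illustrative only: a toy simulator for a FIFO worksharing episode under a
# simple linear cost model (NOT the paper's architectural model). We assume
# the master sends workloads sequentially, worker i needs send_cost[i] time
# per unit of work to receive it and compute_cost[i] per unit to process it,
# and results are returned over a shared channel in the same (FIFO) order
# in which workloads were supplied.

def fifo_finish_time(work, send_cost, compute_cost, return_cost):
    """Completion time of a FIFO episode for a given work allocation."""
    t = 0.0                    # master's clock while sending
    ready = []                 # time each worker finishes computing
    for w, s, c in zip(work, send_cost, compute_cost):
        t += w * s             # sequential transmission of the workload
        ready.append(t + w * c)
    # Results come back in FIFO (supply) order; each return occupies the
    # channel for work * return_cost time units.
    t_channel = 0.0
    for w, r, done in zip(work, return_cost, ready):
        t_channel = max(t_channel, done) + w * r
    return t_channel

# Example: 3 heterogeneous workers, 10 units of work split unevenly.
print(fifo_finish_time([5, 3, 2], [0.1, 0.2, 0.1], [1.0, 1.5, 2.0], [0.05] * 3))
```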

4.
Disjoint NP-pairs are a well-studied complexity-theoretic concept with important applications in cryptography and propositional proof complexity. In this paper we introduce a natural generalization of the notion of disjoint NP-pairs to disjoint k-tuples of NP-sets for k≥2. We define subclasses of the class of all disjoint k-tuples of NP-sets. These subclasses are associated with a propositional proof system and possess complete tuples which are defined from the proof system. In our main result we show that complete disjoint NP-pairs exist if and only if complete disjoint k-tuples of NP-sets exist for all k≥2. Further, this is equivalent to the existence of a propositional proof system in which the disjointness of all k-tuples is shortly provable. We also show that a strengthening of this condition characterizes the existence of optimal proof systems. An extended abstract of this paper appeared in the proceedings of the conference CSR 2006 (Lecture Notes in Computer Science 3967, 80–91, 2006). Supported by DFG grant KO 1053/5-1.

5.
The authors' programming formalism ATR is a version of call-by-value PCF under a complexity-theoretically motivated type system. ATR programs run in type-2 polynomial time, and all standard type-2 basic feasible functionals are ATR-definable (ATR types are confined to levels 0, 1, and 2). A limitation of the original version of ATR is that the only directly expressible recursions are tail-recursions. Here we extend ATR so that a broad range of affine recursions are directly expressible. In particular, the revised ATR can fairly naturally express the classic insertion- and selection-sort algorithms, thus overcoming a sticking point of most prior implicit-complexity-based formalisms. The paper's main work is in refining the original time-complexity semantics for ATR to show that these new recursion schemes do not lead out of the realm of feasibility.

6.
We introduce a new modelling assumption for wireless sensor networks, that of node redeployment (addition of sensor devices during protocol evolution), and we extend the modelling assumption of heterogeneity (having sensor devices of various types). These two features further increase the highly dynamic nature of such networks, and adaptation becomes a powerful technique for protocol design. Under these modelling assumptions, we design, implement and evaluate a new power conservation scheme for efficient data propagation. Our scheme is adaptive: it locally monitors the network conditions (density, energy) and accordingly adjusts the sleep-awake schedules of the nodes towards improved operation choices. The scheme is simple, distributed and does not require exchange of control messages between nodes. Implementing our protocol in software, we combine it with two well-known data propagation protocols and evaluate the achieved performance through a detailed simulation study using our extended version of the network simulator ns-2. We focus on highly dynamic scenarios with respect to network density, traffic conditions and sensor node resources. We propose a new general and parameterized metric capturing the trade-offs between delivery rate, energy efficiency and latency. The simulation findings demonstrate significant gains (such as more than doubling the success rate of a well-known propagation protocol) and good trade-offs achieved. Furthermore, the redeployment of additional sensors during network evolution and/or the heterogeneous deployment of sensors drastically improve the protocol performance when compared to "equal total power" simultaneous deployment of identical sensors at the start (i.e. the success rate increases up to four times while reducing energy dissipation and, interestingly, keeping latency low). This work has been partially supported by the IST Programme of the European Union under contract number IST-2005-15964, a programme under the European Social Fund (ESF) and Operational Program for Educational and Vocational Training II (EPEAEK II), and a programme of GSRT under contract number 03ED568. A preliminary version of this work has appeared in [13].
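The abstract describes the adaptation rule only at a high level. The sketch below is a guess at the shape of such a rule (monitor local density and remaining energy, adjust the sleep probability); every name, threshold, and the linear combination are our own assumptions, not the paper's protocol.

```python
# Illustrative only: a minimal local duty-cycle rule in the spirit of the
# scheme described (monitor local density and remaining energy, adjust the
# node's sleep probability). The thresholds and the linear form below are
# assumptions for illustration, not the paper's protocol.

def sleep_probability(neighbors, energy_ratio,
                      target_awake=3, p_min=0.0, p_max=0.9):
    """Return the probability of sleeping in the next period.

    neighbors     -- number of currently awake neighbors (local density)
    energy_ratio  -- remaining energy / initial energy, in [0, 1]
    target_awake  -- desired number of awake neighbors for connectivity
    """
    # Dense neighborhood -> safe to sleep more; low energy -> sleep more.
    density_pressure = max(0.0, (neighbors - target_awake) / (neighbors + 1))
    energy_pressure = 1.0 - energy_ratio
    p = 0.5 * density_pressure + 0.5 * energy_pressure
    return min(p_max, max(p_min, p))

print(sleep_probability(neighbors=8, energy_ratio=0.4))  # ~0.58
```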

7.
8.
There has been much recent interest in the use of the earliest-deadline-first (EDF) algorithm for scheduling soft real-time sporadic task systems on identical multiprocessors. In hard real-time systems, a significant disparity exists between EDF-based schemes and Pfair scheduling: on M processors, the worst-case schedulable utilization for all known EDF variants is approximately M/2, whereas it is M for optimal Pfair algorithms. This is unfortunate because EDF-based algorithms entail lower scheduling and task-migration overheads. However, such a disparity in schedulability can be alleviated by easing the requirement that all deadlines be met, which may be sufficient for soft real-time systems. In particular, in recent work, we have shown that if task migrations are not restricted, then global EDF can ensure bounded tardiness for a sporadic task system with no restrictions on total utilization. Unrestricted task migrations in global EDF may be unappealing for some systems, but if migrations are forbidden entirely, then bounded tardiness cannot be guaranteed. In this paper, we address the issue of striking a balance between task migrations and system utilization by proposing an algorithm called EDF-fm, which is based upon EDF and treads a middle path by restricting, but not eliminating, task migrations. Specifically, under EDF-fm, the ability to migrate is required for at most M−1 tasks, and it is sufficient that every such task migrate between two processors and at job boundaries only. EDF-fm, like global EDF, can ensure bounded tardiness to a sporadic task system as long as the available processing capacity is not exceeded, but, unlike global EDF, may require that per-task utilizations be capped. The required cap is quite liberal; hence, EDF-fm should enable a wide range of soft real-time applications to be scheduled with no constraints on total utilization.
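A minimal sketch of the kind of restricted-migration assignment the abstract describes: tasks are packed onto processors next-fit, and a task that straddles a processor boundary is shared by exactly two processors, so at most M−1 tasks ever migrate. This is our own simplification (the per-task utilization caps and the run-time EDF scheduling are omitted), not the algorithm's actual specification.

```python
# Illustrative only: next-fit fractional packing in the spirit of the
# restricted-migration scheme described above. Each processor has capacity
# 1.0; a task that straddles a processor boundary becomes a "migrating"
# task shared by exactly two processors, so at most M-1 tasks migrate.
# The per-task utilization caps mentioned in the abstract are not modeled.

def assign(utilizations, m):
    shares = [[] for _ in range(m)]   # shares[p] = list of (task, fraction)
    p, free = 0, 1.0
    for task, u in enumerate(utilizations):
        assert 0 < u <= 1.0, "per-task utilization must be in (0, 1]"
        while u > 1e-12:
            if p >= m:
                raise ValueError("total utilization exceeds M")
            take = min(u, free)
            shares[p].append((task, take))
            u -= take
            free -= take
            if free <= 1e-12:         # processor full: move to the next one
                p, free = p + 1, 1.0
    return shares

# Five tasks, total utilization 2.9, on M = 3 processors: two tasks end up
# split across a processor boundary, i.e. at most M-1 = 2 migrating tasks.
print(assign([0.6, 0.5, 0.4, 0.7, 0.7], 3))
```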

9.
Kierstead et al. (SIAM J. Discrete Math. 8:485–498, 1995) have shown that the competitive function of on-line coloring for P5-free graphs (i.e., graphs without an induced path on 5 vertices) is bounded from above by an exponential function. No nontrivial lower bound was known. In this paper we show a quadratic lower bound. More precisely, we prove that a quadratic function is the exact competitive function for a subclass of P5-free graphs. In this paper we also determine the competitive function of the best clique-covering on-line algorithm for this subclass.
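For readers unfamiliar with the model, the canonical on-line coloring algorithm is First-Fit, sketched below; this is background illustration only, not one of the algorithms analyzed in the paper.

```python
# Illustrative only: First-Fit, the canonical on-line coloring algorithm.
# Vertices arrive one at a time together with their edges to previously
# presented vertices; each must be colored immediately and irrevocably.

def first_fit_online(adjacency_stream):
    """adjacency_stream yields, per vertex, the set of earlier neighbors."""
    colors = []
    for earlier_neighbors in adjacency_stream:
        used = {colors[v] for v in earlier_neighbors}
        c = 0
        while c in used:              # smallest color unused by neighbors
            c += 1
        colors.append(c)
    return colors

# A path on 5 vertices (a P5) presented from the middle outwards.
print(first_fit_online([set(), {0}, {0}, {1}, {2}]))  # [0, 1, 1, 0, 0]
```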

10.
Hardness of a separation of nondeterminism, randomization and determinism for polynomial-time computations has motivated the analysis of this issue for restricted models of computation. Following this line of research, we consider randomized length-reducing two-pushdown automata (lrTPDAs), a natural extension of pushdown automata (PDAs). Our main results are as follows. We show that deterministic lrTPDAs are weaker than Las Vegas lrTPDAs, which in turn are weaker than Monte Carlo lrTPDAs. Moreover, bounded two-sided-error lrTPDAs are stronger than Monte Carlo lrTPDAs, and they are able to recognize some languages which cannot be recognized nondeterministically. Finally, we prove that amplification is impossible for Las Vegas and Monte Carlo automata. Partially supported by MNiSW grant number N206 024 31/3826, 2006–2008. An extended abstract of this paper appeared in the MFCS 2006 proceedings (Lecture Notes in Computer Science, vol. 4162, pp. 561–572, 2006).

11.
We study the complexity of restricted versions of s-t-connectivity, which is the standard complete problem for NL. In particular, we focus on different classes of planar graphs, of which grid graphs are an important special case. Our main results are:
•  Reachability in graphs of genus one is logspace-equivalent to reachability in grid graphs (and in particular it is logspace-equivalent to both reachability and non-reachability in planar graphs).
•  Many of the natural restrictions on grid-graph reachability (GGR) are equivalent under AC0 reductions (for instance, undirected GGR, outdegree-one GGR, and indegree-one-outdegree-one GGR are all equivalent; a BFS sketch of the basic problem follows this list). These problems are all equivalent to the problem of determining whether a completed game position in HEX is a winning position, as well as to the problem of reachability in mazes studied by Blum and Kozen (IEEE Symposium on Foundations of Computer Science (FOCS), pp. 132–142, [1978]). These problems provide natural examples of problems that are hard for NC1 under AC0 reductions but are not known to be hard for L; they thus give insight into the structure of L.
•  Reachability in layered planar graphs is logspace-equivalent to layered grid graph reachability (LGGR). We show that LGGR lies in UL (a subclass of NL).
•  Series-parallel digraphs (on which reachability was shown to be decidable in logspace by Jakoby et al.) are a special case of single-source-single-sink planar directed acyclic graphs (DAGs); reachability for such graphs logspace-reduces to reachability in single-source-single-sink acyclic grid graphs. We show that reachability on such grid graphs reduces to undirected GGR.
•  We build on this to show that reachability for single-source multiple-sink planar DAGs is solvable in L.
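As background for the list above (referenced in the second item): grid-graph reachability itself is trivially solvable in linear time and space via BFS; the paper's contribution concerns its much more delicate space complexity. The sketch and the edge representation below are our own illustration.

```python
# Illustrative only: grid-graph reachability (GGR) decided by BFS. The
# paper's interest is the *space* complexity of this problem; BFS uses
# linear space and time and merely fixes the problem definition. A grid
# graph's vertices are lattice points; edges join points at distance 1.

from collections import deque

def grid_reachable(edges, s, t):
    """edges: set of directed pairs ((x1,y1),(x2,y2)) of adjacent points."""
    adj = {}
    for u, v in edges:
        assert abs(u[0] - v[0]) + abs(u[1] - v[1]) == 1, "not a grid edge"
        adj.setdefault(u, []).append(v)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# (0,0) -> (1,0) -> (1,1), plus an unrelated edge (0,1) -> (0,0).
edges = {((0, 0), (1, 0)), ((1, 0), (1, 1)), ((0, 1), (0, 0))}
print(grid_reachable(edges, (0, 0), (1, 1)))  # True
```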
E. Allender supported in part by NSF Grant CCF-0514155. D.A. Mix Barrington supported in part by NSF Grant CCR-9988260. S. Roy supported in part by NSF Grant CCF-0514155.

12.
The interest is in characterizing insightfully the power of program self-reference in effective programming systems (EPSes), the computability-theoretic analogs of programming languages (for the partial computable functions). In an EPS in which the constructive form of Kleene's Recursion Theorem (KRT) holds, it is possible to construct, algorithmically, from an arbitrary algorithmic task, a self-referential program that, in a sense, creates a self-copy and then performs that task on the self-copy. In an EPS in which the not-necessarily-constructive form of Kleene's Recursion Theorem (krt) holds, such self-referential programs exist, but cannot, in general, be found algorithmically. In an earlier effort, Royer proved that there is no collection of recursive denotational control structures whose implementability characterizes the EPSes in which KRT holds. One main result herein, proven by a finite injury priority argument, is that the EPSes in which krt holds are, similarly, not characterized by the implementability of some collection of recursive denotational control structures. On the positive side, however, a characterization of such EPSes of a rather different sort is shown herein. Though perhaps not the insightful characterization sought after, this surprising result reveals that a hidden and inherent constructivity is always present in krt. This paper is an expanded version of [6]. This paper received support from NSF Grant CCR-0208616.

13.
Krivine presents the K machine, which produces weak head normal form results. Sestoft introduces several call-by-need variants of the K machine that implement result sharing by pushing update markers on the stack, in a way similar to the TIM and the STG machine. When a sequence of consecutive markers appears on the stack, all but the first cause redundant updates. Improvements related to these sequences have dealt with either the consumption of the markers or their removal once they appear. Here we present an improvement that eliminates the production of marker sequences of length greater than one, yielding a more space- and time-efficient variant of the lazy Krivine machine. We then apply the classic optimization of short-circuiting operand variable dereferences to create a call-by-need variant. Finally, we combine the two improvements in a single machine. On our benchmarks this machine uses half the stack space, performs one quarter as many updates, and executes between 27% faster and 17% slower than our ℒ variant of Sestoft's lazy Krivine machine. More interesting is that on one benchmark ℒ and the single-improvement machines consume unbounded space, but the combined machine consumes constant space. Our comparisons to Sestoft's Mark 2 machine are not exact, however, since we restrict ourselves to unpreprocessed closed lambda terms. Our variant of his machine does no environment trimming or conversion to de Bruijn-style variable access, and does not provide basic constants, data type constructors, or the recursive let. (The Y combinator is used instead.)
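For orientation, here is a minimal call-by-name Krivine machine (the "K machine" of the first sentence), using de Bruijn indices; it reduces a closed term to weak head normal form. Sestoft's update markers and the paper's improvements are deliberately omitted, so this is background illustration, not the machines under discussion.

```python
# Illustrative only: a minimal call-by-name Krivine machine with de Bruijn
# indices, reducing a closed term to weak head normal form. Call-by-need
# variants additionally push update markers to share results; that
# machinery (the subject of the paper) is omitted here.

# Terms: ("var", n) | ("lam", body) | ("app", fun, arg)
def krivine(term):
    env, stack = (), []            # env: tuple of closures (term, env)
    while True:
        tag = term[0]
        if tag == "app":           # push the argument as a closure
            stack.append((term[2], env))
            term = term[1]
        elif tag == "lam" and stack:
            term, env = term[1], (stack.pop(),) + env
        elif tag == "var":         # enter the closure bound to the index
            term, env = env[term[1]]
        else:                      # lambda with an empty stack: WHNF reached
            return term, env

# (\x. x) (\y. y)  evaluates to  \y. y
identity = ("lam", ("var", 0))
print(krivine(("app", identity, identity)))  # (('lam', ('var', 0)), ())
```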

14.
In statistical analysis of measurement results it is often necessary to compute the range [V̲, V̄] of the population variance V when we only know intervals [x̲_i, x̄_i] of possible values of the x_i. While V̲ can be computed efficiently, the problem of computing V̄ is, in general, NP-hard. In our previous paper "Population Variance under Interval Uncertainty: A New Algorithm" (Reliable Computing 12 (4) (2006), pp. 273–280) we showed that in a practically important case we can use constraint techniques to compute V̄ in time O(n · log(n)). In this paper we provide new algorithms that compute V̲ (in all cases) and V̄ (for the above case) in linear time O(n). Similar linear-time algorithms are described for computing the range of the entropy when we only know the intervals of possible values of the probabilities p_i. In general, a statistical characteristic ƒ can be more complex, so that even computing ƒ can take much longer than linear time. For such ƒ, the question is how to compute the range in as few calls to ƒ as possible. We show that for convex symmetric functions ƒ, we can compute this range in n calls to ƒ.
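As an illustration of what the range computation involves, the sketch below computes the lower endpoint V̲. It relies on the identity Var(x) = min over t of (1/n)·Σ(x_i − t)², so the minimum over all boxes equals the minimum over t of the summed squared distances from t to the intervals, a convex piecewise-quadratic function that can be minimized zone by zone between sorted endpoints. The O(n²)-style sketch is our own illustration; the paper's algorithms achieve O(n log n) and O(n).

```python
# Illustrative only: the lower endpoint of the variance range over interval
# data. Since Var(x) = min_t (1/n) * sum (x_i - t)^2, minimizing over the
# boxes x_i in [lo_i, hi_i] gives
#   V_low = min_t (1/n) * sum dist(t, [lo_i, hi_i])^2,
# a convex piecewise-quadratic function of t, minimized per "zone" between
# consecutive sorted endpoints. O(n^2) for clarity, unlike the paper's
# O(n log n) and O(n) algorithms.

def variance_lower_bound(lo, hi):
    n = len(lo)
    points = sorted(set(lo) | set(hi))
    best = float("inf")
    zones = list(zip(points, points[1:])) + [(p, p) for p in points]
    for a, b in zones:
        below = [l for l in lo if l >= b]   # intervals entirely above zone
        above = [h for h in hi if h <= a]   # intervals entirely below zone
        k = len(below) + len(above)
        if k == 0:
            return 0.0                      # some t lies inside every interval
        t = (sum(below) + sum(above)) / k   # unconstrained quadratic minimum
        t = min(max(t, a), b)               # clamp into the zone
        g = sum((l - t) ** 2 for l in below) + sum((t - h) ** 2 for h in above)
        best = min(best, g / n)
    return best

print(variance_lower_bound([0.0, 2.0], [1.0, 3.0]))  # 0.25, at x = (1, 2)
```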

15.
We analyze approximation algorithms for several variants of the traveling salesman problem with multiple objective functions. First, we consider the symmetric TSP (STSP) with γ-triangle inequality. For this problem, we present a deterministic polynomial-time approximation algorithm and a randomized approximation algorithm whose ratios depend on γ. In particular, we obtain a 2+ε approximation for multi-criteria metric STSP. Then we show that multi-criteria cycle cover problems admit fully polynomial-time randomized approximation schemes. Based on these schemes, we present randomized approximation algorithms for STSP with γ-triangle inequality, asymmetric TSP (ATSP) with γ-triangle inequality, STSP with weights one and two (ratio 4/3), and ATSP with weights one and two (ratio 3/2). A preliminary version of this work was presented at the 4th Workshop on Approximation and Online Algorithms (WAOA 2006) (Lecture Notes in Computer Science, vol. 4368, pp. 302–315, 2007). B. Manthey is supported by the Postdoc-Program of the German Academic Exchange Service (DAAD). He is on leave from Saarland University and has done part of the work at the Institute for Theoretical Computer Science of the University of Lübeck, supported by DFG research grant RE 672/3, and at the Department of Computer Science at Saarland University.

16.
We consider the problem of approximately integrating a Lipschitz function f (with a known Lipschitz constant) over an interval. The goal is to achieve an additive error of at most ε using as few samples of f as possible. We use the adaptive framework: on all problem instances an adaptive algorithm should perform almost as well as the best possible algorithm tuned for the particular problem instance. We distinguish between the performances of the best possible deterministic and randomized algorithms, respectively. We give a deterministic algorithm that uses a number of samples close to the deterministic optimum and show that an asymptotically better algorithm is impossible; however, on some problem instances any deterministic algorithm requires substantially more samples than the randomized optimum. By combining a deterministic adaptive algorithm and Monte Carlo sampling with variance reduction, we give a randomized algorithm whose sample count comes close to the randomized optimum. We also show a matching lower bound holding in expectation on some problem instance (f,ε), which proves that our algorithm is optimal.
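The following sketch shows the simplest form of deterministic adaptivity for this problem: on a piece of width w the midpoint rule errs by at most L·w²/4 (since |f(x)−f(m)| ≤ L|x−m|), and the algorithm keeps splitting the piece with the worst bound until the total bound falls below ε. This is our own baseline illustration, not the paper's optimal algorithm.

```python
# Illustrative only: a minimal deterministic adaptive integrator for a
# Lipschitz function, NOT the paper's (optimal) algorithm. On a piece of
# width w, the midpoint rule f(m)*w has error at most L*w^2/4, because
# |f(x) - f(m)| <= L|x - m|. We greedily split the piece with the worst
# bound until the total bound drops below eps.

import heapq

def integrate_lipschitz(f, a, b, lipschitz, eps):
    def piece(lo, hi):
        w = hi - lo
        return (-lipschitz * w * w / 4, lo, hi, f((lo + hi) / 2) * w)

    heap = [piece(a, b)]
    total_bound = lipschitz * (b - a) ** 2 / 4
    while total_bound > eps:
        bound, lo, hi, _ = heapq.heappop(heap)
        total_bound += bound            # bound is negative: remove old piece
        mid = (lo + hi) / 2
        for p in (piece(lo, mid), piece(mid, hi)):
            heapq.heappush(heap, p)
            total_bound -= p[0]         # add the two refined bounds
    return sum(est for _, _, _, est in heap)

# Integral of |x| over [-1, 1] (Lipschitz constant 1) is 1.
print(integrate_lipschitz(abs, -1.0, 1.0, lipschitz=1.0, eps=1e-3))
```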

17.
On the complexity of graph self-assembly in accretive systems
We study the complexity of the Accretive Graph Assembly Problem (AGAP). An instance of AGAP consists of an edge-weighted graph G, a seed vertex in G, and a temperature τ. The goal is to determine if the graph G can be assembled by a sequence of vertex additions starting from the seed vertex. The edge weights model the forces of attraction and repulsion, and determine which vertices can be added to a partially assembled graph at the given temperature. A vertex can be added when the total weight to its already built neighbors in the graph is at least τ. The assembly process is sequential, meaning that only one vertex can be added at a time. Our first result is that AGAP is NP-complete even on planar graphs with maximum degree 3 when edges have only two different types of weights. This resolves the complexity of AGAP in the sense that the problem is poly-time solvable when either the maximum degree is at most 2 or the number of distinct edge weights is one, and is NP-complete otherwise. Our second result is a dichotomy theorem that completely characterizes the complexity of AGAP on graphs with maximum degree 3 and two distinct weights: w_p and w_n. We give a simple system of linear constraints on w_p, w_n, and τ that determines whether the problem is NP-complete or is poly-time solvable. In the process of establishing this dichotomy, we give a poly-time algorithm to solve a non-trivial class of AGAP instances. Finally, we consider the optimization version of AGAP, where the goal is to assemble a largest-possible induced subgraph of the given input graph. We show that even on graphs that can be assembled and have maximum degree 3, it is NP-hard to assemble a (1/n^{1−ε})-fraction of the input graph for any ε > 0; here n denotes the number of vertices in G.
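A small sketch of the assembly process the abstract defines. With nonnegative edge weights, a vertex's support never decreases as the assembly grows, so the greedy check below decides assemblability; with negative (repulsive) weights, order matters and the problem is NP-complete as stated above, so greedy becomes a mere heuristic. The representation and names are our own.

```python
# Illustrative only: a greedy assembly check for AGAP. With nonnegative
# edge weights, support never decreases as vertices are added, so greedily
# adding any addable vertex decides assemblability. With negative
# (repulsive) weights the order matters and the decision problem is
# NP-complete, as the abstract explains -- greedy is then only a heuristic.

def greedy_assemble(n, weighted_edges, seed, tau):
    """weighted_edges: dict mapping frozenset({u, v}) -> weight."""
    built = {seed}
    changed = True
    while changed:
        changed = False
        for v in range(n):
            if v in built:
                continue
            support = sum(w for e, w in weighted_edges.items()
                          if v in e and (e - {v}) <= built)
            if support >= tau:        # attraction to built neighbors suffices
                built.add(v)
                changed = True
    return built

# A triangle where each edge has weight 1 and tau = 1: assembles fully.
edges = {frozenset({0, 1}): 1, frozenset({1, 2}): 1, frozenset({0, 2}): 1}
print(greedy_assemble(3, edges, seed=0, tau=1))  # {0, 1, 2}
```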

18.
19.
This paper introduces the concepts of R0 valuation, R0 semantics, the countable R0 category, the R0 fuzzy topological category, etc. A fuzzy topology δ and its cut topology are established in a natural way on the set Ω_M consisting of all R0 valuations of an R0 algebra M, and some properties of the fuzzy topology δ and its cut topology are investigated carefully. Moreover, a representation theorem for R0 algebras by means of fuzzy topology is given; that is to say, the countable R0 category is equivalent to the R0 fuzzy topological category. By studying the relation between valuations and filters, the Loomis–Sikorski theorem for R0 algebras is obtained. As an application, K-compactness of the R0 logic is discussed.

20.
Propositional dynamic logic (PDL) is complete but not compact. As a consequence, strong completeness (the property that Γ ⊨ φ implies Γ ⊢ φ) requires an infinitary proof system. In this paper, we present a short proof of strong completeness of PDL relative to an infinitary proof system containing the rule: from [α; βⁿ]φ for all n ∈ ℕ, conclude [α; β*]φ. The proof uses a universal canonical model, and it is generalized to other modal logics with infinitary proof rules, such as epistemic logic with common knowledge. Also, we show that the universal canonical model of PDL lacks the property of modal harmony, the analogue of the Truth Lemma for modal operators.
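Displayed as an inference rule, the infinitary rule reconstructed from the abstract reads:

```latex
% The infinitary rule: from the premises [alpha ; beta^n]phi for every
% natural number n, conclude [alpha ; beta^*]phi.
\[
  \frac{\{\, [\alpha;\beta^{n}]\varphi \;:\; n \in \mathbb{N} \,\}}
       {[\alpha;\beta^{*}]\varphi}
\]
```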
