Similar Documents
Found 20 similar documents (search time: 31 ms).
1.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation.

This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.
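For readers unfamiliar with the example the abstract refers to, the following sketch builds the standard textbook form of the Klee-Minty cube (an assumption; the paper's exact variant may differ) and solves it with SciPy. Dantzig's pivot rule visits all 2^n vertices of this polytope, which is the classical "exponential" behavior the abstract reexamines.

    import numpy as np
    from scipy.optimize import linprog

    def klee_minty(n):
        """Constraint matrix A, rhs b, and objective c (to maximize) for the
        n-dimensional Klee-Minty cube in its common textbook form."""
        A = np.zeros((n, n))
        for i in range(n):
            for j in range(i):
                A[i, j] = 2.0 ** (i - j + 1)
            A[i, i] = 1.0
        b = np.array([5.0 ** (i + 1) for i in range(n)])
        c = np.array([2.0 ** (n - 1 - j) for j in range(n)])
        return A, b, c

    n = 6
    A, b, c = klee_minty(n)
    # linprog minimizes, so negate c; the optimum is x = (0, ..., 0, 5^n).
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * n, method="highs")
    print(res.x)       # ~[0, 0, ..., 0, 5**n]
    print(-res.fun)    # ~5**n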

2.
Suppose a directed graph has its arcs stored in secondary memory, and we wish to compute its transitive closure, also storing the result in secondary memory. We assume that an amount of main memory capable of holdings values is available, and thats lies betweenn, the number of nodes of the graph, ande, the number of arcs. The cost measure we use for algorithms is theI/O complexity of Kung and Hong, where we count 1 every time a value is moved into main memory from secondary memory, or vice versa.In the dense case, wheree is close ton 2, we show that I/O equal toO(n 3/s) is sufficient to compute the transitive closure of ann-node graph, using main memory of sizes. Moreover, it is necessary for any algorithm that is standard, in a sense to be defined precisely in the paper. Roughly, standard means that paths are constructed only by concatenating arcs and previously discovered paths. For the sparse case, we show that I/O equal toO(n 2e/s) is sufficient, although the algorithm we propose meets our definition of standard only if the underlying graph is acyclic. We also show that(n 2e/s) is necessary for any standard algorithm in the sparse case. That settles the I/O complexity of the sparse/acyclic case, for standard algorithms. It is unknown whether this complexity can be achieved in the sparse, cyclic case, by a standard algorithm, and it is unknown whether the bound can be beaten by nonstandard algorithms.We then consider a special kind of standard algorithm, in which paths are constructed only by concatenating arcs and old paths, never by concatenating two old paths. This restriction seems essential if we are to take advantage of sparseness. Unfortunately, we show that almost another factor ofn I/O is necessary. That is, there is an algorithm in this class using I/OO(n 3e/s) for arbitrary sparse graphs, including cyclic ones. Moreover, every algorithm in the restricted class must use(n 3e/s/log3 n) I/O, on some cyclic graphs.The work of this author was partially supported by NSF grant IRI-87-22886, IBM contract 476816, Air Force grant AFOSR-88-0266 and a Guggenheim fellowship.  相似文献   
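A minimal in-memory sketch (mine, not the paper's) of what the restricted class of "standard" algorithms does: every new path is obtained by extending a previously discovered path with a single arc. The paper's contribution is the analysis of how such computations must move data between main and secondary memory, which this sketch ignores entirely.

    from collections import defaultdict

    def transitive_closure(arcs):
        """Closure in the restricted 'standard' style: each path is built by
        concatenating a previously discovered path with one arc."""
        succ = defaultdict(set)
        for u, v in arcs:
            succ[u].add(v)
        closed = {u: set(vs) for u, vs in succ.items()}
        frontier = {(u, v) for u, vs in succ.items() for v in vs}
        while frontier:
            new = set()
            for u, v in frontier:
                for w in succ[v]:            # extend path (u, v) with arc (v, w)
                    if w not in closed.setdefault(u, set()):
                        closed[u].add(w)
                        new.add((u, w))
            frontier = new
        return closed

    print(transitive_closure([(1, 2), (2, 3), (3, 1)]))
    # {1: {1, 2, 3}, 2: {1, 2, 3}, 3: {1, 2, 3}}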

3.
The notion of obvious inference in predicate logic is discussed from the viewpoint of proof-checker applications in logic and mathematics education. A class of inferences in predicate logic is defined and it is proposed to identify it with the class of obvious logical inferences. The definition is compared with other approaches. The algorithm for implementing the obviousness decision procedure follows directly from the definition.

4.
Mutual convertibility of bound entangled states under local quantum operations and classical communication (LOCC) is studied. We focus on states associated with unextendible product bases (UPB) in a system of three qubits. A complete classification of such UPBs is suggested. We prove that for any pair of UPBs S and T the associated bound entangled states ρ_S and ρ_T cannot be converted to each other by LOCC, unless S and T coincide up to local unitaries. More specifically, there exists a finite precision ε(S,T) > 0 such that for any LOCC protocol mapping ρ_S into a probabilistic ensemble (p_α, ρ_α), the fidelity between ρ_T and any possible final state satisfies F(ρ_T, ρ_α) ≤ 1 − ε(S,T).

PACS: 03.65.Bz; 03.67.-a; 89.70.+c
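As a concrete illustration (not from this paper), the well-known "Shifts" UPB on three qubits due to Bennett et al.; the sketch below builds its four product states with numpy and verifies the pairwise orthogonality that any UPB classification starts from.

    import numpy as np

    zero = np.array([1.0, 0.0])
    one  = np.array([0.0, 1.0])
    plus  = (zero + one) / np.sqrt(2)
    minus = (zero - one) / np.sqrt(2)

    def product(a, b, c):
        """|a> (x) |b> (x) |c> as a vector in C^8."""
        return np.kron(np.kron(a, b), c)

    # The 'Shifts' unextendible product basis on three qubits.
    shifts = [
        product(zero, one, plus),
        product(one, plus, zero),
        product(plus, zero, one),
        product(minus, minus, minus),
    ]

    gram = np.array([[u @ v for v in shifts] for u in shifts])
    print(np.allclose(gram, np.eye(4)))   # True: orthonormal product states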

5.
Schedulers for larger classes of pinwheel instances
The pinwheel is a hard-real-time scheduling problem for scheduling satellite ground stations to service a number of satellites without data loss. Given a multiset of positive integers (an instance) A = {a₁, …, aₙ}, the problem is to find an infinite sequence (schedule) of symbols from {1, 2, …, n} such that there is at least one symbol i within any interval of aᵢ symbols (slots). Not all instances A can be scheduled; for example, no successful schedule exists for instances whose density, ρ(A) = Σᵢ (1/aᵢ), is larger than 1. It has been shown that all instances whose densities are less than a 0.5 density threshold can always be scheduled. If a schedule exists, another concern is the design of a fast on-line scheduler (FOLS) which can generate each symbol of the schedule in constant time. Based on the idea of integer reduction, two new FOLSs which can schedule different classes of pinwheel instances are proposed in this paper. One uses single-integer reduction and the other uses double-integer reduction. They both improve the previous 0.5 result and have density thresholds of 13/20 and 2/3, respectively. In particular, if the elements in A are large, the density thresholds will asymptotically approach ln 2 and 1/√2, respectively.

This research was supported in part by ONR Grant N00014-87-K-0833, and was done while Francis Chin was visiting the Computer Science Program, The University of Texas at Dallas, Richardson, TX 75083, USA.
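A small sketch (mine, not the paper's FOLS) of the two objects the abstract manipulates: the density of an instance, and a checker that a proposed cyclic schedule meets the pinwheel constraint that symbol i occurs in every window of aᵢ slots.

    from fractions import Fraction

    def density(instance):
        """rho(A) = sum_i 1/a_i; rho(A) > 1 rules out any schedule."""
        return sum(Fraction(1, a) for a in instance)

    def is_valid_schedule(instance, cycle):
        """Check a cyclic schedule: symbol i must appear in every window of
        instance[i] consecutive slots of the infinitely repeated cycle."""
        n = len(cycle)
        for i, a in enumerate(instance):
            for start in range(n):
                window = [cycle[(start + k) % n] for k in range(a)]
                if i not in window:
                    return False
        return True

    A = [2, 4, 4]
    print(density(A))                          # 1 (= 1/2 + 1/4 + 1/4)
    print(is_valid_schedule(A, [0, 1, 0, 2]))  # True: 0 every 2 slots, 1 and 2 every 4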

6.
It is shown that the translation of an open default into a modal formula ∀x(Lα(x) ∧ LMβ₁(x) ∧ … ∧ LMβₘ(x) → w(x)) gives rise to an embedding of open default systems into non-monotonic logics.
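A worked instance (my example, using the translation as reconstructed above): the open default "birds normally fly", with prerequisite α = bird, single justification β₁ = flies, and consequent w = flies, is translated as

    bird(x) : flies(x) / flies(x)
        ↦   ∀x( L bird(x) ∧ LM flies(x) → flies(x) )

read as: if it is known that x is a bird, and it is consistent to assume that x flies, then x flies.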

7.
I discuss the attitude of Jewish law sources from the 2nd–5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as impossible reduction. This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A problem that is often mentioned is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what the pertinent legal rule is. Presumptions of simultaneous occurrences or equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions were only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of two measures." I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region A there may be a non-empty set Ā − A̲ (upper approximation minus lower approximation) where the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably capped by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^{h(A)} log[1 + μ(A_α)] dα, where h(A) is the largest membership value obtained by any element of A and μ(A_α) is the measure of the α-cut of A defined by the Lebesgue integral of its characteristic function.
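A quick numeric sketch (my own, assuming a base-2 logarithm in the U-uncertainty formula above) for a triangular fuzzy number, whose α-cut is an interval whose length shrinks linearly in α:

    import numpy as np

    def u_uncertainty(alpha_cut_measure, h=1.0, steps=100_000):
        """U(A) = (1/h(A)) * integral_0^h(A) log2(1 + mu(A_alpha)) d(alpha),
        approximated by averaging over a uniform grid of alpha values."""
        alphas = np.linspace(0.0, h, steps)
        return float(np.log2(1.0 + alpha_cut_measure(alphas)).mean())

    # Triangular fuzzy number with support [a, c] (peak membership 1, so
    # h(A) = 1): the alpha-cut has measure (c - a) * (1 - alpha).
    a, c = 1.0, 4.0
    print(u_uncertainty(lambda al: (c - a) * (1.0 - al)))   # ~1.22 bits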

8.
A Maple procedure is described by means of which an algebraic function given by an equation f(x, y) = 0 can be expanded into a fractional power series (Puiseux series) y(x) = Σ_{n≥0} cₙ x^{n/q} of special ("nice") type. It may be a series with polynomial, rational, or hypergeometric coefficients, or an m-sparse or m-sparse m-hypergeometric series. First, a linear ordinary differential equation with polynomial coefficients Ly(x) = 0 is constructed which is satisfied by the given algebraic function. The exponents and a required number of initial coefficients c₀, …, c_k are computed by using the Maple algcurves package. By means of the Maple Slode package, a solution to the equation Ly(x) = 0 is constructed in the form of a series with nice coefficients whose initial coefficients correspond to the calculated c₀, …, c_k. The procedure suggested can construct an expansion at a user-given point x₀, as well as determine points where an expansion of such a special type is possible.
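The procedure itself is Maple-specific (algcurves and Slode); as a rough illustration of the underlying idea in Python, this sketch (my example, not the paper's) expands a branch of an algebraic function f(x, y) = 0 into a series with rational coefficients using sympy:

    import sympy as sp

    x, y = sp.symbols('x y')
    f = y**2 - x**2 * (1 + x)     # algebraic function: y^2 = x^2 (1 + x)

    branches = sp.solve(f, y)     # the two branches +-x*sqrt(1 + x)
    for branch in branches:
        print(sp.series(branch, x, 0, 5))
    # e.g. x + x**2/2 - x**3/8 + x**4/16 + O(x**5) -- rational coefficients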

9.
In this paper, we define what we call a unitary immersion of a nonlinear system. We observe that, for classical Hamiltonian systems, this notion contains, in some sense, the concept of quantization. We restrict our attention to degree-zero unitary immersions, where all observation functions must be represented by operators of the type "multiplication by a function". We show that the problem of classifying such degree-zero unitary immersions of a given nonlinear system is not obvious. In some cases, we solve this problem.

Chargé de Recherche au CNRS. Maître de Conférences.

10.
The time complexity of the normalization problem of context-sensitive grammars

Summary. Let G be a context-sensitive grammar and let G_cf denote the context-free core of G. In this paper the time complexity of the following problem is discussed. The Normalization Problem: Let T be a derivation tree with respect to G_cf; is T also a derivation tree with respect to G? It is shown that in general the normalization problem is NP-complete. On the other hand, for every context-sensitive language L there is a corresponding context-sensitive grammar G for which the normalization problem is solvable in polynomial time.

11.
In a model for a measure of computational complexity, Φ, for a partial recursive function t, let R_t denote all partial recursive functions having the same domain as t and computable within time t. Let ℛ = {R_t | t is recursive} and let ℛ* = {R_{Φᵢ} | Φᵢ is actually the running time function of a computation}. ℛ and ℛ* are partially ordered under set-theoretic inclusion. These partial orderings have been extensively investigated by Borodin, Constable, and Hopcroft in [3]. In this paper we present a simple uniform proof of some of their results. For example, we give a procedure for easily calculating a model of computational complexity for which ℛ is not dense while ℛ* is dense. In our opinion, our technique is so transparent that it indicates that certain questions of density are not intrinsically interesting for general abstract measures of computational complexity, Φ. (This is not to say that similar questions are necessarily uninteresting for specific models.)

Supported by NSF Research Grants GP6120 and GJ27127.

12.
Examples in the history of Automated Theorem Proving are given, in order to show that even a seemingly mechanical activity, such as deductive inference drawing, involves special cultural features and tacit knowledge. Mechanisation of reasoning is thus regarded as a complex undertaking in cultural pruning of human-oriented reasoning. Sociological counterparts of this passage from human- to machine-oriented reasoning are discussed, by focusing on problems of man-machine interaction in the area of computer-assisted proof processing.

13.
Modeling and programming tools for neighborhood search often support invariants, i.e., data structures specified declaratively and automatically maintained incrementally under changes. This paper considers invariants for longest paths in directed acyclic graphs, a fundamental abstraction for many applications. It presents bounded incremental algorithms for arc insertion and deletion which run in O(‖δ‖ + |δ| log |δ|) time and O(‖δ‖) time, respectively, where |δ| and ‖δ‖ are measures of the change in the input and output. The paper also shows how to generalize the algorithm to various classes of multiple insertions/deletions encountered in scheduling applications. Preliminary experimental results show that the algorithms behave well in practice.
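A minimal sketch (mine, far simpler than the paper's bounded algorithms) of the invariant itself: lp[v] holds the length of the longest path ending at v in a DAG, and an arc insertion triggers a worklist propagation that only revisits nodes whose value actually changes.

    from collections import defaultdict

    class LongestPathInvariant:
        """Maintain lp[v] = length of the longest path ending at v in a DAG.
        Arc insertion propagates only where the invariant is violated
        (a simplified, unbounded version of incremental maintenance)."""

        def __init__(self):
            self.succ = defaultdict(list)   # v -> [(w, arc_length), ...]
            self.lp = defaultdict(int)

        def insert_arc(self, u, v, length):
            self.succ[u].append((v, length))
            work = [(u, v, length)]
            while work:
                a, b, l = work.pop()
                if self.lp[a] + l > self.lp[b]:   # b's value must grow
                    self.lp[b] = self.lp[a] + l
                    for c, lc in self.succ[b]:    # re-check b's successors
                        work.append((b, c, lc))

    dag = LongestPathInvariant()
    for arc in [("s", "a", 3), ("a", "b", 2), ("s", "b", 1)]:
        dag.insert_arc(*arc)
    print(dag.lp["b"])   # 5: the path s -> a -> b dominates s -> b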

14.
Summary. Geffert has shown that each recursively enumerable language L over Σ can be expressed in the form L = {h(x)⁻¹ g(x) | x ∈ Δ⁺} ∩ Σ*, where Δ is an alphabet and g, h is a pair of morphisms. Our purpose is to give a simple proof of Geffert's result and then sharpen it into the form where both of the morphisms are nonerasing. In our method we modify constructions used in a representation of recursively enumerable languages in terms of equality sets and in a characterization of simple transducers in terms of morphisms. As direct consequences, we get the undecidability of the Post correspondence problem and various representations of L. For instance, L = ρ(L₀) ∩ Σ*, where L₀ is a minimal linear language and ρ is the Dyck reduction aā → λ, AĀ → λ.
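A small illustration (my own, with toy morphisms rather than Geffert's construction) of the operation the representation is built on: h(x)⁻¹ g(x) is defined exactly when h(x) is a prefix of g(x), and then yields the remaining suffix.

    def morphism(images):
        """Extend a letter-to-word map to a morphism on words."""
        return lambda w: "".join(images[ch] for ch in w)

    g = morphism({"a": "01", "b": "1"})
    h = morphism({"a": "0",  "b": "11"})

    def left_quotient(x):
        """h(x)^{-1} g(x): defined iff h(x) is a prefix of g(x)."""
        gx, hx = g(x), h(x)
        return gx[len(hx):] if gx.startswith(hx) else None

    print(left_quotient("a"))    # "1"   (g(a) = 01, h(a) = 0)
    print(left_quotient("ab"))   # ""    (g(ab) = h(ab) = 011: empty word)
    print(left_quotient("b"))    # None  (h(b) = 11 is not a prefix of g(b) = 1)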

15.
In this paper, an objective conception of contexts based loosely upon situation theory is developed and formalized. Unlike subjective conceptions, which take contexts to be something like sets of beliefs, contexts on the objective conception are taken to be complex, structured pieces of the world that (in general) contain individuals, other contexts, and propositions about them. An extended first-order language for this account is developed. The language contains complex terms for propositions, and the standard predicate ist that expresses the relation that holds between a context and a proposition just in case the latter is true in the former. The logic for the objective conception features a global classical predicate calculus, a local logic for reasoning within contexts, and axioms for propositions. The specter of paradox is banished from the logic by allowing ist to be nonbivalent in problematic cases: it is not in general the case, for any context c and proposition p, that either ist(c,p) or ist(c, ¬p). An important representational capability of the logic is illustrated by proving an appropriately modified version of an illustrative theorem from McCarthy's classic Blocks World example.

16.
A problem of estimating a functional parameter θ(x) and functionals Φ(θ) based on observation of a solution u_ε(t, x) of a stochastic partial differential equation is considered. The asymptotic problem setting, as the noise intensity ε → 0, is investigated.

17.
When verifying concurrent systems described by transition systems, state explosion is one of the most serious problems. If quantitative temporal information (expressed by clock ticks) is considered, state explosion is even more serious. We present a notion of abstraction of transition systems, where the abstraction is driven by the formulae of a quantitative temporal logic, called qu-mu-calculus, defined in the paper. The abstraction is based on a notion of bisimulation equivalence, called ⟨Λ, n⟩-equivalence, where Λ is a set of actions and n is a natural number. It is proved that two transition systems are ⟨Λ, n⟩-equivalent iff they give the same truth value to all qu-mu-calculus formulae such that the actions occurring in the modal operators are contained in Λ, and with time constraints whose values are less than or equal to n. We present a non-standard (abstract) semantics for a timed process algebra able to produce reduced transition systems for checking formulae. The abstract semantics, parametric with respect to a set of actions Λ and a natural number n, produces a reduced transition system ⟨Λ, n⟩-equivalent to the standard one. A transformational method is also defined, by means of which it is possible to syntactically transform a program into a smaller one, still preserving ⟨Λ, n⟩-equivalence.

18.
In this paper, we consider the linear interval tolerance problem, which consists of finding the largest interval vector included in the tolerance solution set Σ([A], [b]) = {x ∈ ℝⁿ | ∀A ∈ [A], ∃b ∈ [b] : Ax = b}. We describe two different polyhedrons that represent subsets of all possible interval vectors in Σ([A], [b]), and we provide a new definition of the optimality of an interval vector included in Σ([A], [b]). Finally, we show how the Simplex algorithm can be applied to find an optimal interval vector in Σ([A], [b]).
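Not the paper's Simplex construction, but a quick numeric membership test (assuming the standard characterization that x is a tolerance solution iff [A]x ⊆ [b], i.e., |A_c·x − b_c| + A_Δ·|x| ≤ b_Δ componentwise, where the subscripts c and Δ denote midpoint and radius):

    import numpy as np

    def in_tolerance_set(x, A_c, A_d, b_c, b_d):
        """x is a tolerance solution iff [A]x is contained in [b], i.e.
        |A_c x - b_c| + A_d |x| <= b_d componentwise."""
        return bool(np.all(np.abs(A_c @ x - b_c) + A_d @ np.abs(x) <= b_d))

    A_c = np.array([[2.0, 1.0], [1.0, 3.0]])
    A_d = np.array([[0.1, 0.1], [0.1, 0.1]])   # elementwise radii of [A]
    b_c = np.array([3.0, 4.0])
    b_d = np.array([1.0, 1.0])                 # radii of [b]

    print(in_tolerance_set(np.array([1.0, 1.0]), A_c, A_d, b_c, b_d))  # True
    print(in_tolerance_set(np.array([3.0, 0.0]), A_c, A_d, b_c, b_d))  # False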

19.
20.