Similar Documents
20 similar documents found.
1.
We introduce a methodology whereby an arbitrary logic system L can be enriched with temporal features to create a new system T(L). The new system is constructed by combining L with a pure propositional temporal logic T (such as linear temporal logic with Since and Until) in a special way. We refer to this method as adding a temporal dimension to L, or simply temporalising L. We show that the system T(L) preserves several properties of the original temporal logic, such as soundness, completeness, decidability, conservativeness and separation over linear flows of time. We then focus on the temporalisation of first-order logic and compare it with other first-order approaches to the handling of time.
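The construction can be pictured as a two-layer syntax in which entire L-formulas serve as the atoms of the temporal layer. Below is a minimal sketch of such a representation; the class names and the example formula are our own illustration, not notation from the paper.

```python
# A minimal sketch (not the paper's formal construction): formulas of the
# temporalised system T(L) treat arbitrary formulas of the base logic L as
# opaque atoms and close them under boolean and Since/Until connectives.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class LAtom:            # an arbitrary formula of the base logic L, kept opaque
    formula: str

@dataclass(frozen=True)
class Not:
    sub: "TLFormula"

@dataclass(frozen=True)
class And:
    left: "TLFormula"
    right: "TLFormula"

@dataclass(frozen=True)
class Since:            # "left has held since right was true"
    left: "TLFormula"
    right: "TLFormula"

@dataclass(frozen=True)
class Until:            # "left holds until right becomes true"
    left: "TLFormula"
    right: "TLFormula"

TLFormula = Union[LAtom, Not, And, Since, Until]

# Example: first-order L-formulas used as temporal atoms.
phi = Until(LAtom("forall x. request(x) -> pending(x)"),
            LAtom("exists x. served(x)"))
print(phi)
```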

2.
A first-order system F has the Kreisel length-of-proof property if the following statement is true for all formulas φ(x): if there is a k ≥ 1 such that for all n ≥ 0 there is a proof of φ(n̄) in F with at most k lines (n̄ the numeral for n), then there is a proof of ∀x φ(x) in F. We consider this property for Parikh systems, which are first-order axiomatic systems that contain a finite number of axiom schemata (including individual axioms) and a finite number of rules of inference. We prove that any usual Parikh system formulation of Peano arithmetic has the Kreisel length-of-proof property if the underlying logic of the system is formulated without a schema for universal instantiation, in either one of two ways. (In one way, the formula to be instantiated is built up in steps, and in the other way, the term to be substituted is built up in steps.) Our method of proof uses techniques and ideas from unification theory.
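Restated as a display (a standard rendering of the condition; the symbol φ for the formula schema is our choice):

```latex
\[
\bigl(\exists k \ge 1\;\; \forall n \ge 0:\;
      F \vdash \varphi(\bar n) \text{ with at most } k \text{ lines}\bigr)
\;\Longrightarrow\;
F \vdash \forall x\,\varphi(x)
\]
```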

3.
We consider the parallel time complexity of logic programs without function symbols, called logical query programs or Datalog programs. We give a PRAM algorithm for computing the minimum model of a logical query program, and show that for programs with the polynomial fringe property this algorithm runs in time that is logarithmic in the input size, assuming that concurrent writes are allowed if they are consistent. As a result, the linear and piecewise linear classes of logic programs are in NC. Then we examine several nonlinear classes in which the program has a single recursive rule that is an elementary chain. We show that certain nonlinear programs are related to GSM mappings of a balanced parentheses language, and that this relationship implies the polynomial fringe property; hence such programs are in NC. Finally, we describe an approach for demonstrating that certain logical query programs are log space complete for P, and apply it to both elementary single rule programs and nonelementary programs. Supported by NSF Grant IST-84-12791, a grant of IBM Corporation, and ONR contract N00014-85-C-0731.
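For a concrete instance of a minimum-model computation, the sketch below evaluates the classic linear Datalog program for transitive closure bottom-up; it is the ordinary sequential semi-naive fixpoint, not the paper's PRAM algorithm.

```python
# Bottom-up (semi-naive) evaluation of a linear Datalog program:
#   path(X,Y) :- edge(X,Y).
#   path(X,Y) :- path(X,Z), edge(Z,Y).
def transitive_closure(edges):
    path = set(edges)          # first rule: every edge is a path
    delta = set(edges)
    while delta:               # iterate until no new facts are derived
        new = {(x, w) for (x, y) in delta for (z, w) in edges if y == z}
        delta = new - path
        path |= delta
    return path

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```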

4.
5.
Given a finite set E ⊂ R^n, the problem is to find clusters (subsets of similar points in E) and at the same time to find the most typical elements of this set. An original mathematical formulation of the problem is given. The proposed algorithm operates on groups of points, called samplings (samplings may also be called multiple centers or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.
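The following sketch illustrates the general flavour of clustering with multi-point centers (cores); it is a toy construction of our own, not the algorithm analysed in the paper.

```python
# Toy illustration (not the paper's method): each cluster is represented by a
# small group of points rather than a single centroid, and the cores are
# re-selected as the most central members of their cluster.
import numpy as np

def core_clustering(X, k, core_size=3, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    cores = [X[rng.choice(len(X), core_size, replace=False)] for _ in range(k)]
    for _ in range(iters):
        # assign each point to the core with the smallest mean distance
        d = np.stack([np.linalg.norm(X[:, None] - c[None], axis=2).mean(1)
                      for c in cores], axis=1)
        labels = d.argmin(axis=1)
        # re-select each core as the core_size most central points of its cluster
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:
                continue
            centrality = np.linalg.norm(members - members.mean(0), axis=1)
            cores[j] = members[np.argsort(centrality)[:core_size]]
    return labels, cores

X = np.vstack([np.random.default_rng(1).normal(m, 0.3, (20, 2)) for m in (0.0, 3.0)])
labels, cores = core_clustering(X, k=2)
print(np.bincount(labels))
```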

6.
We introduce here the study of general nonmonotonic rule systems. These deal with situations where a conclusion is drawn from a system of beliefs S (and seen to be in S), based both on some premises being in S and on some restraints not being in S. In the monotone systems of traditional logic there are no restraints; conclusions are drawn solely on the basis of premises being in S. Nonmonotonic rule systems capture the essential syntactic, semantic, and algorithmic features of many nonmonotone systems such as default logic, negation as failure, truth maintenance, and autoepistemic logic, as well as important combinatorial questions from mathematics such as the marriage problem. This yields semantics, syntax, proof procedures, and algorithms for computing belief sets, entirely uniformly, in many cases where none were previously available. In particular, we introduce and study deductively closed sets, extensions and weak extensions. Semantics of nonmonotonic rule systems is studied in part II of this paper, and extensions to classical predicate, intuitionistic, and modal logics are left to a later paper. Work partially supported by NSF grant RII-8610671, the Kentucky EPSCoR program and ARO contract DAAL03-89-K-0124. Work partially supported by NSF grant DMS-8902797 and ARO contract DAAG629-85-C-0018. Work partially supported by NSF grant DMS-8702473.
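A rule of such a system fires when its premises are in the belief set S and none of its restraints are; S is an extension when it reproduces exactly itself under this derivation. A toy check (the representation is ours; the paper treats this abstractly):

```python
# Each rule is (premises, restraints, conclusion): it fires when all premises
# are already derived and no restraint belongs to the candidate set S.
def closure(rules, context):
    """Facts derivable when restraints are tested against `context`."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for premises, restraints, conclusion in rules:
            if (set(premises) <= derived
                    and not (set(restraints) & context)
                    and conclusion not in derived):
                derived.add(conclusion)
                changed = True
    return derived

def is_extension(rules, S):
    return closure(rules, S) == S

# "penguin" would block the default "birds fly":
rules = [((), (), "bird"),
         (("bird",), ("penguin",), "fly")]
print(is_extension(rules, {"bird", "fly"}))   # True
print(is_extension(rules, {"bird"}))          # False: 'fly' is still derivable
```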

7.
We present an algorithm for computing L1 shortest paths among polygonal obstacles in the plane. Our algorithm employs the continuous Dijkstra technique of propagating a wavefront and runs in time O(E log n) and space O(E), where n is the number of vertices of the obstacles and E is the number of events. By using bounds on the density of certain sparse binary matrices, we show that E = O(n log n), implying that our algorithm is nearly optimal. We conjecture that E = O(n), which would imply our algorithm to be optimal. Previous bounds for our problem were quadratic in time and space. Our algorithm generalizes to the case of fixed orientation metrics, yielding an O(nε^(-1/2) log^2 n) time and O(nε^(-1/2)) space approximation algorithm for finding Euclidean shortest paths among obstacles. The algorithm further generalizes to the case of many sources, allowing us to compute an L1 Voronoi diagram for source points that lie among a collection of polygonal obstacles. Partially supported by a grant from Hughes Research Laboratories, Malibu, California and by NSF Grant ECSE-8857642. Much of this work was done while the author was a Ph.D. student at Stanford University, under the support of a Howard Hughes Doctoral Fellowship, and an employee of Hughes Research Laboratories.
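For contrast with the wavefront method, the sketch below is the naive baseline: ordinary Dijkstra over a precomputed visibility graph with L1 edge weights. The visibility edges are assumed to be given; computing them is the expensive step the continuous-Dijkstra technique avoids.

```python
# Baseline L1 shortest paths: Dijkstra on a given "visibility" graph whose
# edge weights are Manhattan distances between obstacle vertices.
import heapq

def l1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def dijkstra_l1(points, visible, source):
    """points: dict name -> (x, y); visible: dict name -> iterable of names."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v in visible[u]:
            nd = d + l1(points[u], points[v])
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

pts = {"s": (0, 0), "a": (2, 1), "t": (4, 0)}
vis = {"s": ["a"], "a": ["s", "t"], "t": ["a"]}
print(dijkstra_l1(pts, vis, "s"))   # {'s': 0.0, 'a': 3.0, 't': 6.0}
```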

8.
The cross ratio of four collinear points is of fundamental importance in model based vision, because it is the simplest numerical property of an object that is invariant under projection to an image. It provides a basis for algorithms to recognise objects from images without first estimating the position and orientation of the camera. A quantitative analysis of the effectiveness of the cross ratio in model based vision is made. A given image I of four collinear points is classified by comparing the measured cross ratio of the four image points with the cross ratios stored in the model database. The image I is accepted as a projection of an object O with cross ratio τ if |τ̂ − τ| ≤ n t u, where τ̂ is the measured cross ratio, n is the standard deviation of the image noise, t is a threshold and u a normalising factor. The performance of the cross ratio is described quantitatively by the probability of rejection R, the probability of false alarm F and the probability of misclassification p(τ1, τ2), defined for two model cross ratios τ1, τ2. The trade off between these different probabilities is determined by t. It is assumed that in the absence of an object the image points have identical Gaussian distributions, and that in the presence of an object the image points have the appropriate conditional densities. The measurements of the image points are subject to small random Gaussian perturbations. Under these assumptions the trade offs between R, F and p(τ1, τ2) are given to a good approximation by R = 2(1 − Φ(t)) and F = r_F ε t, where ε is the relative noise level, Φ is the cumulative distribution function of the normal distribution, r_F is a constant, and the corresponding approximation for p(τ1, τ2) involves |τ1 − τ2| and a function e of τ only. The trade off between R and F is obtained in Maybank (1994). In this paper the trade off between R and p(τ1, τ2) is obtained. It is conjectured that the general form of the above trade offs between R, F and p(τ1, τ2) is the same for a range of invariants useful in model based vision. The conjecture prompts the following definition: an invariant which has trade offs between R, F, p(τ1, τ2) of the above form is said to be non-degenerate for model based vision. The consequences of the trade off between R and p(τ1, τ2) are examined. In particular, it is shown that for a fixed overall probability of misclassification there is a maximum possible model cross ratio τ_m, and there is a maximum possible number N of models. Approximate expressions for τ_m and N are obtained. They indicate that in practice a model database containing only cross ratio values can have a size of order at most ten, for a physically plausible level of image noise, and for a probability of misclassification of the order 0.1.
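The invariance underlying the method is easy to check numerically. The sketch below computes the cross ratio of four collinear points (one common ordering convention) and verifies that a 1-D projective map leaves it unchanged.

```python
# Cross ratio of four collinear points, parameterised by positions along the
# line, and a numerical check of invariance under a 1-D projective (Moebius)
# map -- the effect of perspective projection on a line.
def cross_ratio(a, b, c, d):
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projective(x, p, q, r, s):
    """x -> (p*x + q) / (r*x + s), with p*s - q*r != 0."""
    return (p * x + q) / (r * x + s)

pts = [0.0, 1.0, 3.0, 7.0]
mapped = [projective(x, 2.0, 1.0, 0.5, 3.0) for x in pts]

print(cross_ratio(*pts))      # original points
print(cross_ratio(*mapped))   # same value after the projective map
```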

9.
The condensed detachment rule D is a combination of modus ponens with a minimal amount of substitution. Earlier, D has been shown to be complete for intuitionistic and classical implicational logic but incomplete for BCK and BCI logic. We show that D is complete for the relevance logic. One of the main steps is the proof of the formula ((a → a) → a) → a, found in interaction with our resolution theorem prover. Various strategies of generating consequences of the axioms and choosing the best ones for the next iteration were tried until the proof was found.
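A compact illustration of the rule D itself (the formula representation and variable-renaming scheme are ours): unify the antecedent of the major premise with the minor premise and return the corresponding instance of the consequent.

```python
# Condensed detachment over implicational formulas.  A formula is either a
# variable (a string) or an implication ('->', antecedent, consequent).
def substitute(t, s):
    if isinstance(t, str):
        return substitute(s[t], s) if t in s else t
    return ('->', substitute(t[1], s), substitute(t[2], s))

def occurs(v, t, s):
    t = substitute(t, s)
    if isinstance(t, str):
        return t == v
    return occurs(v, t[1], s) or occurs(v, t[2], s)

def unify(x, y, s=None):
    s = {} if s is None else s
    x, y = substitute(x, s), substitute(y, s)
    if x == y:
        return s
    if isinstance(x, str):
        return None if occurs(x, y, s) else {**s, x: y}
    if isinstance(y, str):
        return unify(y, x, s)
    s = unify(x[1], y[1], s)
    return None if s is None else unify(x[2], y[2], s)

def rename(t, suffix):
    if isinstance(t, str):
        return t + suffix
    return ('->', rename(t[1], suffix), rename(t[2], suffix))

def condensed_detachment(major, minor):
    major, minor = rename(major, "_1"), rename(minor, "_2")
    mgu = unify(major[1], minor)          # most general unifier of X and Z
    return None if mgu is None else substitute(major[2], mgu)

# D applied to the axiom a -> (b -> a) with itself yields an instance of b -> a.
K = ('->', 'a', ('->', 'b', 'a'))
print(condensed_detachment(K, K))
```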

10.
This paper has two purposes. The first is to present a new way to find a Steiner minimum tree (SMT) connecting N sites in d-space, d ≥ 2. We present (in Appendix 1) a computer code for this purpose. This is the only procedure known to the author for finding Steiner minimal trees in d-space for d > 2, and also the first one which fits naturally into the framework of backtracking and branch-and-bound. Finding SMTs of up to N = 12 general sites in d-space (for any d) now appears feasible. We tabulate Steiner minimal trees for many point sets, including the vertices of most of the regular and Archimedean d-polytopes with ≤ 16 vertices. As a consequence of these tables, the Gilbert-Pollak conjecture is shown to be false in dimensions 3–9. (The conjecture remains open in other dimensions; it is probably false in all dimensions d with d ≥ 3, but it is probably true when d = 2.) The second purpose is to present some new theoretical results regarding the asymptotic computational complexity of finding SMTs to precision ε. We show that in two dimensions Steiner minimum trees may be found exactly in exponential time O(C^N) on a real RAM. (All previous provable time bounds were superexponential.) If the tree is only wanted to precision ε, then there is an (N/ε)^O(N)-time algorithm, which is subexponential if 1/ε grows only polynomially with N. Also, the rectilinear Steiner minimal tree of N points in the plane may be found in N^O(N) time. J. S. Provan devised an O(N^6/ε^4)-time algorithm for finding the SMT of a convex N-point set in the plane. (Also the rectilinear SMT of such a set may be found in O(N^6) time.) One therefore suspects that this problem may be solved exactly in polynomial time. We show that this suspicion is in fact true, provided a certain conjecture about the size of Steiner sensitivity diagrams is correct. All of these algorithms are for a real RAM model of computation allowing infinite precision arithmetic. They make no probabilistic or other assumptions about the input; the time bounds are valid in the worst case; and all our algorithms may be implemented with a polynomial amount of space. Only algorithms yielding the exact optimum SMT, or trees with lengths (1 + ε) × optimum, where ε is arbitrarily small, are considered here.
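The continuous part of the problem, placing a Steiner point optimally for fixed terminals, is a geometric-median computation; the sketch below uses Weiszfeld's iteration for it. It illustrates only this subproblem, not the paper's branch-and-bound over Steiner topologies.

```python
# Weiszfeld iteration for the geometric median (the optimal position of a
# single Steiner point joining the given terminals).
import numpy as np

def geometric_median(points, iters=200, eps=1e-12):
    pts = np.asarray(points, dtype=float)
    x = pts.mean(axis=0)                      # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - x, axis=1)
        d = np.maximum(d, eps)                # avoid division by zero
        w = 1.0 / d
        x = (pts * w[:, None]).sum(axis=0) / w.sum()
    return x

terminals = [(0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)]   # equilateral triangle
print(geometric_median(terminals))   # ~ the centre (0.5, 0.288...)
```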

11.
It is shown that the translation of an open default into a modal formula ∀x(Lα(x) ∧ LMβ1(x) ∧ … ∧ LMβm(x) → ω(x)) gives rise to an embedding of open default systems into non-monotonic logics.

12.
We develop a theory of communication within branching programs that provides exponential lower bounds on the size of branching programs that are bounded alternating. Our theory is based on the algebraic concept of Ω-branching programs, with Ω a semiring homomorphism, which generalizes ordinary branching programs, ⊕-branching programs [M2] and MOD_p-branching programs [DKMW]. Due to certain exponential lower and polynomial upper bounds on the size of bounded alternating Ω-branching programs, we are able to separate the corresponding complexity classes N_ba, co-N_ba, ⊕_ba, and MOD_p-ba, p prime, from each other, and from the classes corresponding to the oblivious linear length-bounded branching programs investigated in the past.

13.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.
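For reference, the sketch below builds one common formulation of the Klee-Minty cube, the example whose behaviour under exact adjacent-extreme-point algorithms is discussed above; only the objective and constraint data are generated.

```python
# One common formulation of the Klee-Minty cube in dimension n:
#   maximise  sum_j 2^(n-j) x_j
#   subject to  2 * sum_{j<i} 2^(i-j) x_j + x_i <= 5^i   (i = 1..n),  x >= 0.
def klee_minty(n):
    c = [2 ** (n - j) for j in range(1, n + 1)]           # objective coefficients
    A, b = [], []
    for i in range(1, n + 1):
        row = [2 * 2 ** (i - j) for j in range(1, i)] + [1] + [0] * (n - i)
        A.append(row)
        b.append(5 ** i)
    return c, A, b

c, A, b = klee_minty(3)
print(c)   # [4, 2, 1]
print(A)   # [[1, 0, 0], [4, 1, 0], [8, 4, 1]]
print(b)   # [5, 25, 125]
```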

14.
The relation between an operational interleaving semantics for TCSP, based on a transition system, and a compositional true concurrency semantics, based on event structures, is studied. In particular we extend the consistency result of Goltz and Loogen [15] for TCSP processes without recursion to the general case. Thus we obtain for every TCSP process P that its operational meaning O(P) and the interleaving behaviour O(M[[P]]), derived from the event structure M[[P]] associated with P, are bisimilar.

15.
Let B be a Banach space of R^n-valued continuous functions on [0, ∞) with f ∈ B. Consider the nonlinear Volterra integral equation (*) x(t) = f(t) + ∫_0^t K(t, s, x(s)) ds. We use the implicit function theorem to give sufficient conditions on B and K(t, s, x) for the existence of a unique solution x ∈ B to (*) for each f ∈ B with ‖f‖_B sufficiently small. Moreover, there is a constant M > 0 independent of f with ‖x‖_B ≤ M‖f‖_B. Part of this work was done while the author was visiting at Wright State University.
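Numerically, equations of the form (*) can be attacked by successive approximation on a grid; the sketch below does so with the trapezoidal rule. The kernel and forcing term are illustrative choices, not taken from the paper.

```python
# Picard iteration for x(t) = f(t) + integral_0^t K(t, s, x(s)) ds on a
# uniform grid, using the composite trapezoidal rule for the integral.
import numpy as np

def solve_volterra(f, K, T=1.0, n=200, iters=50):
    t = np.linspace(0.0, T, n + 1)
    h = t[1] - t[0]
    x = f(t).copy()                                   # initial guess x_0 = f
    for _ in range(iters):
        x_new = np.empty_like(x)
        for i, ti in enumerate(t):
            vals = K(ti, t[: i + 1], x[: i + 1])      # integrand values on [0, t_i]
            trap = h * (vals.sum() - 0.5 * (vals[0] + vals[-1])) if i else 0.0
            x_new[i] = f(ti) + trap
        x = x_new
    return t, x

# Example: x(t) = 1 + integral_0^t (-x(s)) ds has exact solution x(t) = exp(-t).
f = lambda t: np.ones_like(t)
t, x = solve_volterra(f, lambda t, s, xs: -xs)
print(abs(x[-1] - np.exp(-1.0)))                      # small discretisation error
```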

16.
Workflow Management Systems (WFMSs) are often used in the context of B2B integration as a base technology to implement business-to-business (B2B) integration processes across enterprises. In this context the notion of distributed inter-organizational workflows is introduced to indicate the collaboration of enterprises on a process level. This notion requires the thorough examination presented in this article, since WFMSs were not designed with inter-enterprise distribution as one of their design goals. At a closer look, the proposed use of WFMSs in the context of B2B integration is often very naïve and inappropriate, and consequently it does not address the real requirements found in enterprises. Enterprises do not share common workflow definitions, let alone common workflow instance execution state, and have no intent to do so due to competitive knowledge protection. Furthermore, trading-partner-specific business rules within enterprises are not accounted for, leading to an unwanted explosion of workflow definitions. This article clarifies the notion of distributed inter-organizational workflows as well as private and public processes. Based on this definition, the appropriate use of WFMSs is shown in the context of an overall B2B integration solution that allows enterprises to protect their competitive knowledge while participating in B2B integration.

17.
In the framework of stochastic mechanics, the following problem is considered: in a set of admissible feedback controls v with range in E^n, find one minimizing the expectation E_{sx}{∫_s^T L(t, ξ(t), v(t, ξ(t))) dt + W_T(ξ(T))} for all (s, x) ∈ [0, T) × E^n, where L(t, x, v) = (1/2)m v^2 − U(t, x) is the classical action integrand and ξ is an n-dimensional diffusion process in the weak sense (see Bensoussan, 1982), with drift v and constant diffusion coefficient D > 0. W_T and U are given real functions. Sufficiency conditions for the existence of such an optimal feedback control are given. Dedicated to George Leitmann. Recommended by G.J. Olsder. Presented at the Third Workshop on Control Mechanics in honor of George Leitmann, January 22–24, 1990, University of Southern California, Los Angeles, California (USA).

18.
Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of crypt-equivalence is introduced and studied w.r.t. two loose approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate logic formula which implicitly defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting unnecessary functions) is homologous to a model of the other specification, and if vice versa there exists another predicate logic formula with the same properties for the other specification. We speak of first-order crypt-equivalence if this holds for all first-order models, and of inductive crypt-equivalence if this holds for all term-generated models. Characterizations and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent to the existence of explicit definitions, and that in the case of positive definability two first-order crypt-equivalent specifications admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence is strictly coarser than abstract semantic equivalence and that inductive crypt-equivalence is strictly finer than inductive simulation equivalence and implementation equivalence.

19.
Symbolic trajectory evaluation provides a means to formally verify properties of a sequential system by a modified form of symbolic simulation. The desired system properties are expressed in a notation combining Boolean expressions and the temporal logic next-time operator. In its simplest form, each property is expressed as an assertion [A ⇒ C], where the antecedent A expresses some assumed conditions on the system state over a bounded time period, and the consequent C expresses conditions that should result. A generalization allows simple invariants to be established and proven automatically. The verifier operates on system models in which the state space is ordered by information content. By suitable restrictions to the specification notation, we guarantee that for every trajectory formula there is a unique weakest state trajectory that satisfies it. Therefore, we can verify an assertion [A ⇒ C] by simulating the system over the weakest trajectory for A and testing adherence to C. Also, establishing invariants corresponds to simple fixed point calculations. This paper presents the general theory underlying symbolic trajectory evaluation. It also illustrates the application of the theory to the task of verifying switch-level circuits as well as more abstract implementations. This research was supported by the Defense Advanced Research Projects Agency, ARPA Order Number 4976, by the National Science Foundation, under grant number MIP-8913667, by operating grant OGPO 109688 from the Natural Sciences and Engineering Research Council of Canada, and by a fellowship from the British Columbia Advanced Systems Institute.
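The ordering by information content can be illustrated with a ternary value domain in which the unknown value X lies below both 0 and 1; gates evaluated on partially unknown inputs return the weakest consistent output. This fragment is illustrative only, not the paper's circuit model.

```python
# Ternary value lattice: X (unknown) below 0 and 1, T = overconstrained.
X, ZERO, ONE, TOP = "X", "0", "1", "T"

def leq(a, b):
    """Information ordering: a is no more defined than b."""
    return a == b or a == X or b == TOP

def join(a, b):
    """Least upper bound: combine two constraints on the same node."""
    if leq(a, b):
        return b
    if leq(b, a):
        return a
    return TOP                            # 0 joined with 1 is overconstrained

def ternary_and(a, b):
    if a == ZERO or b == ZERO:
        return ZERO                       # a single 0 forces the output
    if a == ONE and b == ONE:
        return ONE
    return X                              # otherwise the output is unknown

print(ternary_and(X, ZERO))   # 0 : known despite an unknown input
print(ternary_and(X, ONE))    # X
print(join(X, ONE))           # 1
print(join(ZERO, ONE))        # T
```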

20.
Property preserving abstractions for the verification of concurrent systems
We study property preserving transformations for reactive systems. The main idea is the use of simulations parameterized by Galois connections (α, γ) relating the lattices of properties of two systems. We propose and study a notion of preservation of properties expressed by formulas of a logic, by a function mapping sets of states of a system S into sets of states of a system S'. We give results on the preservation of properties expressed in sublanguages of the branching-time μ-calculus when two systems S and S' are related via (α, γ)-simulations. They can be used to verify a property for a system by verifying the same property on a simpler system which is an abstraction of it. We also show under which conditions abstractions of concurrent systems can be computed from the abstractions of their components. This allows a compositional application of the proposed verification method. This is a revised version of the papers [2] and [16]; the results are fully developed in [28]. This work was partially supported by ESPRIT Basic Research Action REACT. Verimag is a joint laboratory of CNRS, Institut National Polytechnique de Grenoble, Université J. Fourier and Verilog SA, associated with IMAG.
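A concrete Galois connection (α, γ) of the kind the simulations are parameterised by: sets of integers abstracted to intervals. This is only an illustration; the paper works with arbitrary lattices of properties.

```python
# alpha maps a set of concrete states to its best interval abstraction;
# gamma maps an interval back to the concrete states it represents.
def alpha(states):
    return (min(states), max(states)) if states else None      # None = bottom

def gamma(interval):
    if interval is None:
        return set()
    lo, hi = interval
    return set(range(lo, hi + 1))

# Galois-connection property: alpha(S) below I  iff  S is a subset of gamma(I).
S = {2, 3, 7}
I = (0, 10)
print(alpha(S))                  # (2, 7)
print(S <= gamma(I))             # True
print(gamma(alpha(S)) >= S)      # extensive: S is contained in gamma(alpha(S))
```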
