Similar Documents
20 similar documents found (search time: 515 ms)
1.
The common metric temporal logics for continuous time were shown to be insufficient when it was proved that they cannot express a modality suggested by Pnueli. Moreover, no finite temporal logic can express all the natural generalizations of this modality. It followed that if we look for an optimal decidable metric logic, we must either accept infinitely many modalities or adopt a different formalism. Here we identify a fragment of the second-order monadic logic of order with the "+1" function that expresses all the Pnueli modalities and much more. Its main advantage over the temporal logics is that it enables us to say not just that within a prescribed time there is a point where some punctual event will occur, but also that within a prescribed time some process that starts now (or that started before, or that will start soon) will terminate. We prove that this logic is decidable with respect to satisfiability and validity over continuous time. The proof depends heavily on the theory of compositionality. In particular, every temporal logic that has truth tables in this logic is automatically decidable. We extend this result by proving that any temporal logic whose modalities are defined, by means more general than truth tables, in a logic stronger than the one just described has a decidable satisfiability problem. We suggest that this monadic logic can be the framework in which temporal logics can be safely defined, with the guarantee that their satisfiability problem is decidable.

2.
This paper proposes an algorithm for the model-based design of a distributed protocol for fault detection and diagnosis in very large systems. The overall process is modeled as different Time Petri Net (TPN) models (each one modeling a local process) that interact with each other via guarded transitions, which become enabled only when certain conditions (expressed as predicates over the marking of some places) are satisfied (the guard is true). In order to use this broad class of timed DES models for fault detection and diagnosis, we derive in this paper the timing analysis of TPN models with guarded transitions. We also extend the modeling of faults by calling a transition faulty when the operation it represents takes more or less time than the prescribed time interval corresponding to its normal execution. We consider here that different local agents receive local observations as well as messages from neighboring agents. Each agent estimates the state of the part of the overall process for which it has a model, and from which it observes events, by reconciling observations with model-based predictions. We design algorithms that use limited information exchange between agents and that can quickly decide questions such as "whether and where a fault occurred" and "whether or not some components of the local processes have operated correctly". The algorithms we derive allow each local agent to generate a preliminary diagnosis prior to any communication, and we show that after communicating, the agents we design recover the global diagnosis that a centralized agent would have derived. The algorithms are component-oriented, leading to efficiency in computation.
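To make the guarded-transition idea concrete, here is a minimal sketch (not the paper's algorithm; all names are ours): a transition fires only when its input places hold enough tokens and its guard predicate over the marking, possibly referencing places of a neighboring local process, evaluates to true.

```python
# Illustrative sketch of guarded-transition enabling in a Petri net.
# A transition is enabled when every input place has enough tokens AND
# its guard predicate over the marking is true.

def enabled(marking, transition):
    """marking: dict place -> token count; transition has 'pre'
    (dict place -> required tokens) and 'guard' (predicate on marking)."""
    has_tokens = all(marking.get(p, 0) >= n for p, n in transition["pre"].items())
    return has_tokens and transition["guard"](marking)

def fire(marking, transition):
    """Return the new marking after firing an enabled transition."""
    assert enabled(marking, transition)
    m = dict(marking)
    for p, n in transition["pre"].items():
        m[p] -= n
    for p, n in transition["post"].items():
        m[p] = m.get(p, 0) + n
    return m

# A guard that enables t1 only when a place owned by a neighboring
# local process ("remote_ok", a hypothetical name) is marked:
t1 = {
    "pre": {"p1": 1},
    "post": {"p2": 1},
    "guard": lambda m: m.get("remote_ok", 0) >= 1,
}

m0 = {"p1": 1, "remote_ok": 0}
print(enabled(m0, t1))   # False: tokens present, but the guard blocks firing
m1 = {"p1": 1, "remote_ok": 1}
print(fire(m1, t1))      # {'p1': 0, 'remote_ok': 1, 'p2': 1}
```

The timing analysis the paper develops adds firing-interval constraints on top of this enabling condition; the sketch shows only the untimed guard mechanism.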

3.
The purpose of this paper is to give an exposition of material dealing with constructive logics, typed λ-calculi, and linear logic. The emergence in the past ten years of a coherent field of research often named "logic and computation" has had two major (and related) effects: firstly, it has vigorously shaken the world of mathematical logic; secondly, it has created a new computer science discipline, which spans a range of subjects from what is traditionally called the theory of computation to programming language design. Remarkably, this new body of work relies heavily on some "old" concepts found in mathematical logic, like natural deduction, sequent calculus, and λ-calculus (but often viewed in a different light), and also on some newer concepts. Thus, it may be quite a challenge to become initiated into this new body of work (but the situation is improving, and there are now some excellent texts on this subject matter). This paper attempts to provide a coherent and hopefully "gentle" initiation to this new body of work. We have attempted to cover the basic material on natural deduction, sequent calculus, and typed λ-calculus, but also to provide an introduction to Girard's linear logic, one of the most exciting developments in logic these past six years. The first part of these notes gives an exposition of the background material (with some exceptions, such as "contraction-free" systems for intuitionistic propositional logic and the Girard translation of classical logic into intuitionistic logic, which is new). The second part is devoted to more current topics such as linear logic, proof nets, the geometry of interaction, and unified systems of logic (LU).

4.
In this paper, we address a fundamental problem related to the induction of Boolean logic: given a set of data, represented as a set of binary "true n-vectors" (or "positive examples") and a set of "false n-vectors" (or "negative examples"), we establish a Boolean function (or an extension) f, so that f is true (resp., false) in every given true (resp., false) vector. We shall further require that such an extension belong to a certain specified class of functions, e.g., the class of positive functions, the class of Horn functions, and so on. The class of functions represents our a priori knowledge or hypothesis about the extension f, which may be obtained from experience or from the analysis of mechanisms that may or may not cause the phenomena under consideration. The real-world data may contain errors, e.g., measurement and classification errors might come in when obtaining data, or there may be some other influential factors not represented as variables in the vectors. In such situations, we have to give up the goal of establishing an extension that is perfectly consistent with the given data, and we are satisfied with an extension f having the minimum number of misclassifications. Both problems, i.e., the problem of finding an extension within a specified class of Boolean functions and the problem of finding a minimum-error extension in that class, will be extensively studied in this paper. For certain classes we shall provide polynomial algorithms, and for other cases we prove their NP-hardness.
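One of the polynomially solvable cases mentioned in the abstract can be sketched concretely (this is an illustrative sketch, not the paper's algorithm): for the class of positive (monotone) functions, an extension exists iff no negative example dominates a positive example componentwise, and a witness extension is f(x) = 1 iff x dominates some positive example.

```python
# Hedged sketch: deciding whether given positive/negative example vectors
# admit an extension in the class of positive (monotone) Boolean functions.

def dominates(x, y):
    """Componentwise x >= y."""
    return all(xi >= yi for xi, yi in zip(x, y))

def has_positive_extension(pos, neg):
    """A monotone extension exists iff no negative example dominates
    a positive example."""
    return not any(dominates(b, a) for a in pos for b in neg)

def positive_extension(pos):
    """The minimal monotone function consistent with the positives."""
    return lambda x: any(dominates(x, a) for a in pos)

pos = [(1, 0, 1), (0, 1, 1)]
neg = [(1, 0, 0), (0, 0, 1)]
print(has_positive_extension(pos, neg))   # True

f = positive_extension(pos)
print(all(f(a) for a in pos) and not any(f(b) for b in neg))   # True
```

The minimum-error version of the problem (also studied in the paper) is harder; the check above only covers the error-free case for this one class.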

5.
We consider a language for reasoning about probability which allows us to make statements such as "the probability of E1 is less than " and "the probability of E1 is at least twice the probability of E2," where E1 and E2 are arbitrary events. We consider the case where all events are measurable (i.e., represent measurable sets) and the more general case, which is also of interest in practice, where they may not be measurable. The measurable case is essentially a formalization of (the propositional fragment of) Nilsson's probabilistic logic. As we show elsewhere, the general (nonmeasurable) case corresponds precisely to replacing probability measures by Dempster-Shafer belief functions. In both cases, we provide a complete axiomatization and show that the problem of deciding satisfiability is NP-complete, no worse than that of propositional logic. As a tool for proving our complete axiomatizations, we give a complete axiomatization for reasoning about Boolean combinations of linear inequalities, which is of independent interest. This proof and others make crucial use of results from the theory of linear programming. We then extend the language to allow reasoning about conditional probability and show that the resulting logic is decidable and completely axiomatizable, by making use of the theory of real closed fields.
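In the measurable case, the language's basic assertions are linear inequalities over event probabilities, so evaluating a formula against a given measure is just arithmetic; it is satisfiability over all measures that the paper reduces to linear programming. A minimal sketch of the evaluation side (event and atom names are ours):

```python
from fractions import Fraction

# Events as sets of atoms of a finite sample space; a probability measure
# assigns a weight to each atom. Assertions like "P(E1) >= 2 * P(E2)" are
# then linear inequalities over these sums.

def prob(measure, event):
    return sum(measure[w] for w in event)

measure = {"w1": Fraction(1, 2), "w2": Fraction(1, 4),
           "w3": Fraction(1, 8), "w4": Fraction(1, 8)}
E1 = {"w1", "w2"}          # P(E1) = 3/4
E2 = {"w3", "w4"}          # P(E2) = 1/4

# "the probability of E1 is at least twice the probability of E2"
print(prob(measure, E1) >= 2 * prob(measure, E2))   # True
```

Deciding whether some measure satisfies a Boolean combination of such inequalities is the NP-complete problem the abstract refers to; this sketch only checks one fixed measure.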

6.
In the 1970s Codd introduced the relational algebra, with operators selection, projection, union, difference, and product, and showed that it is equivalent to first-order logic. In this paper, we show that if we replace the product operator in Codd's relational algebra by the "semijoin" operator, then the resulting "semijoin algebra" is equivalent to the guarded fragment of first-order logic. We also define a fixed-point extension of the semijoin algebra that corresponds to μGF. (This author has been partially supported by the European Community Research Training Network "Games and Automata for Synthesis and Validation" (GAMES), contract HPRN-CT-2002-00283.)
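The semijoin R ⋉ S keeps the tuples of R that join with at least one tuple of S on their shared attributes; unlike the product, it never widens the schema, which is what limits the algebra to the guarded fragment. A minimal sketch (relation and attribute names are ours):

```python
# Semijoin over relations represented as lists of dicts.

def semijoin(R, S, shared):
    """Keep tuples of R that agree with some tuple of S on `shared`."""
    keys = {tuple(s[a] for a in shared) for s in S}
    return [r for r in R if tuple(r[a] for a in shared) in keys]

employees = [{"name": "ann", "dept": 1}, {"name": "bob", "dept": 2}]
closed    = [{"dept": 2}]
print(semijoin(employees, closed, ["dept"]))
# [{'name': 'bob', 'dept': 2}]
```

Note the output schema is that of R alone; no attribute of S survives, so no formula can "remember" unguarded combinations of values.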

7.
Punctual timing constraints are important in the formal modelling of safety-critical real-time systems, but they are very expensive to express in dense time: in most cases, punctuality combined with dense time leads to undecidability. Some efforts to regain decidability have succeeded, but the results are either non-primitive recursive or nonelementary. In this paper we propose a duration logic which can express quantitative temporal constraints and punctual timing constraints over continuous intervals and has a reasonable complexity. Our logic allows most specifications that are interesting in practice, and retains punctuality. It can capture the semantics of both events and states, and incorporates the notions of duration and accumulation. We call this logic ESDL (the acronym stands for Event- and State-based Duration Logic). We show that the satisfiability problem is decidable, and that its complexity is NEXPTIME. ESDL is one of the few decidable interval temporal logics with metric operators. Through some case studies, we also show that ESDL can specify many safety-critical real-time system properties which were previously specified by undecidable interval logics or their decidable reductions based on some abstractions.

8.
One of the key features of logic programming is the notion of goal-directed provability. In intuitionistic logic, the notion of uniform proof has been used as a proof-theoretic characterization of this property. Whilst the connections between intuitionistic logic and computation are well known, there is no reason per se why a similar notion cannot be given in classical logic. In this paper we show that there are two notions of goal-directed proof in classical logic, both of which are suitably weaker than that for intuitionistic logic. We show the completeness of this class of proofs for certain fragments, which thus form logic programming languages. As there are more possible variations on the notion of goal-directed provability in classical logic, there is a greater diversity of classical logic programming languages than intuitionistic ones. In particular, we show how logic programs may contain disjunctions in this setting. This provides a proof-theoretic basis for disjunctive logic programs, as well as characterising the "disjunctive" nature of answer substitutions for such programs in terms of the provability properties of the classical connectives ∧ and ∨.
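To make "goal-directed provability" concrete, here is a minimal SLD-style backward chainer for propositional Horn clauses, the setting where goal-direction is classical (this is an illustrative sketch, not the paper's calculus; the paper's contribution is defining analogous, suitably weaker notions for full classical logic):

```python
# Goal-directed (backward-chaining) proof search for propositional
# Horn programs: to prove a goal, pick a clause whose head matches it
# and recursively prove the body.

def prove(goal, program, depth=20):
    """program: list of (head, [body atoms]); facts have empty bodies.
    The depth bound guards against cyclic programs."""
    if depth == 0:
        return False
    return any(head == goal and all(prove(g, program, depth - 1) for g in body)
               for head, body in program)

program = [
    ("wet", ["rain"]),
    ("rain", []),
    ("slippery", ["wet"]),
]
print(prove("slippery", program))   # True
print(prove("dry", program))        # False
```

The search is driven entirely by the goal: clauses are consulted only when their head matches the current subgoal, which is exactly the property that uniform proofs formalize.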

9.
Existing search engines, with Google at the top, have many remarkable capabilities; but what is not among them is deduction capability: the capability to synthesize an answer to a query from bodies of information which reside in various parts of the knowledge base.

In recent years, impressive progress has been made in enhancing performance of search engines through the use of methods based on bivalent logic and bivalent-logic-based probability theory. But can such methods be used to add nontrivial deduction capability to search engines, that is, to upgrade search engines to question-answering systems? A view which is articulated in this note is that the answer is “No.” The problem is rooted in the nature of world knowledge, the kind of knowledge that humans acquire through experience and education.

It is widely recognized that world knowledge plays an essential role in assessment of relevance, summarization, search and deduction. But a basic issue which is not addressed is that much of world knowledge is perception-based, e.g., “it is hard to find parking in Paris,” “most professors are not rich,” and “it is unlikely to rain in midsummer in San Francisco.” The problem is that (a) perception-based information is intrinsically fuzzy; and (b) bivalent logic is intrinsically unsuited to deal with fuzziness and partial truth.

To come to grips with the fuzziness of world knowledge, new tools are needed. The principal new tool, briefly described in this note, is Precisiated Natural Language (PNL). PNL is based on fuzzy logic and has the capability to deal with partiality of certainty, partiality of possibility, and partiality of truth. These are the capabilities that are needed to be able to draw on world knowledge for assessment of relevance, and for summarization, search, and deduction.


10.
In many applications, the use of Bayesian probability theory is problematical: the information needed for feasible calculation is unavailable. There are different methodologies for dealing with this problem, e.g., maximum entropy and Dempster-Shafer theory. If one can make independence assumptions, many of the problems disappear, and in fact this is often the method of choice even when it is obviously incorrect. The notion of independence is a 0–1 concept, which implies that human guesses about its validity will not lead to robust systems. In this paper, we propose a fuzzy formulation of this concept. It should lend itself to probabilistic updating formulas by allowing heuristic estimation of the "degree of independence." We show how this can be applied to compute a new notion of conditional probability (we call this "extended conditional probability"). Given information, one typically has the choice of full conditioning (standard dependence) or ignoring the information (standard independence). We list some desiderata for extending this to allow a degree of conditioning. We then show how our formulation of degree of independence leads to a formula fulfilling these desiderata. After describing this formula, we show how it compares with other possible formulations of parameterized independence. In particular, we compare it to a linear interpolant, a higher power of a linear interpolant, and a notion originally presented by Hummel and Manevitz [Tenth Int. Joint Conf. on Artificial Intelligence, 1987]. Interestingly, it turns out that a transformation of the Hummel-Manevitz method and our "fuzzy" method are close approximations of each other. Two examples illustrate how fuzzy independence and extended conditional probability might be applied. The first shows how linguistic probabilities result from treating fuzzy independence as a linguistic variable. The second is an industrial example of troubleshooting on the shop floor.
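The simplest of the parameterized formulations mentioned above, the linear interpolant (one of the baselines the paper compares against, not the authors' own formula), can be sketched directly: with degree of independence d in [0, 1], interpolate between full conditioning (d = 0) and ignoring the evidence (d = 1).

```python
from fractions import Fraction

# Linear-interpolant sketch of an "extended conditional probability":
# d = 0 recovers standard dependence P(A|B); d = 1 recovers standard
# independence P(A); intermediate d gives partial conditioning.

def extended_cond_prob(p_a_given_b, p_a, d):
    assert 0 <= d <= 1
    return (1 - d) * p_a_given_b + d * p_a

p_ab, p_a = Fraction(9, 10), Fraction(1, 2)
print(extended_cond_prob(p_ab, p_a, Fraction(0)))      # 9/10 (full conditioning)
print(extended_cond_prob(p_ab, p_a, Fraction(1)))      # 1/2  (ignore evidence)
print(extended_cond_prob(p_ab, p_a, Fraction(1, 2)))   # 7/10 (partial)
```

The paper's own formula, and the Hummel-Manevitz notion it is close to, differ from this straight line; the sketch only fixes the two endpoints every such formulation must share.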

11.
MOSAIC: A fast multi-feature image retrieval system

12.
The LA-logics ("logics with Local Agreement") are polymodal logics defined semantically such that at any world of a model, the sets of successors for the different accessibility relations can be linearly ordered, and the accessibility relations are equivalence relations. In previous work, we have shown that every LA-logic defined with a finite set of modal indices has an NP-complete satisfiability problem. In this paper, we introduce a class of LA-logics with a countably infinite set of modal indices and we show that the satisfiability problem is PSPACE-complete for every logic in this class. The upper bound is shown by exhibiting a tree structure of the models. This allows us to establish a surprising correspondence between the modal depth of formulae and the number of occurrences of distinct modal connectives. More importantly, as a consequence, we can show the PSPACE-completeness of Gargov's logic DALLA and of Nakamura's logic LGM restricted to modal indices that are rational numbers, for which the computational complexity characterization had been open until now. These logics are known to belong to the classes of information logics and fuzzy modal logics, respectively.

13.
Multi-Dimensional Modal Logic as a Framework for Spatio-Temporal Reasoning
In this paper we advocate the use of multi-dimensional modal logics as a framework for knowledge representation and, in particular, for representing spatio-temporal information. We construct a two-dimensional logic capable of describing topological relationships that change over time. This logic, called PSTL (Propositional Spatio-Temporal Logic), is the Cartesian product of the well-known temporal logic PTL and the modal logic S4u, which is the Lewis system S4 augmented with the universal modality. Although it is an open problem whether full PSTL is decidable, we show that it contains decidable fragments into which various temporal extensions (both point-based and interval-based) of the spatial logic RCC-8 can be embedded. We consider known decidability and complexity results that are relevant to computation with multi-dimensional formalisms and discuss possible directions for further research.

14.
An infinitary proof theory is developed for modal logics whose models are coalgebras of polynomial functors on the category of sets. The canonical model method from modal logic is adapted to construct a final coalgebra for any polynomial functor. The states of this final coalgebra are certain “maximal” sets of formulas that have natural syntactic closure properties.

The syntax of these logics extends that of previously developed modal languages for polynomial coalgebras by adding formulas that express the “termination” of certain functions induced by transition paths. A completeness theorem is proven for the logic of functors which have the Lindenbaum property that every consistent set of formulas has a maximal extension. This property is shown to hold if the deducibility relation is generated by countably many inference rules.

A counter-example to completeness is also given. This is a polynomial functor that is not Lindenbaum: it has an uncountable set of formulas that is deductively consistent but has no maximal extension and is unsatisfiable, even though all of its countable subsets are satisfiable.


15.
Gesture-based programming (GBP) is a paradigm for the evolutionary programming of dextrous robotic systems by human demonstration. We call the paradigm "gesture-based" because we try to capture, in real time, the intention behind the demonstrator's fleeting, context-dependent hand motions, contact conditions, finger poses, and even cryptic utterances, rather than just recording and replaying movement. The paradigm depends on a pre-existing knowledge base of capabilities, collectively called "encapsulated expertise", that comprise the real-time sensorimotor primitives from which the run-time executable is constructed, and that provide the basis for interpreting the teacher's actions during programming. In this paper we first describe the GBP environment, which is not yet fully implemented. We then present a technique, based on principal components analysis and augmentable with model-based information, for learning and recognizing sensorimotor primitives. We describe simple applications of the technique to a small mobile robot and a PUMA manipulator. The mobile robot learned to escape from jams, while the manipulator learned guarded moves and rotational accommodation that are composable to allow flat-plate mating operations. While these initial applications are simple, they demonstrate the ability to extract primitives from demonstration, recognize the learned primitives in subsequent demonstrations, and combine and transform primitives to create different capabilities, all of which are critical to the GBP paradigm.
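The principal-components idea behind the recognition step can be sketched in a toy form (this is our illustration of PCA on a sensor trace, not the paper's implementation): extract the dominant direction of variation of a demonstrated trace via power iteration on its covariance matrix.

```python
# Toy PCA sketch: first principal component of a multi-sensor trace,
# computed with pure-Python power iteration (no external libraries).

def principal_component(data, iters=100):
    n = len(data)
    means = [sum(col) / n for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, means)] for row in data]
    d = len(means)
    # covariance matrix (d x d)
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# A hypothetical two-sensor trace that varies mostly along sensor 0,
# with small alternating noise on sensor 1:
trace = [[t * 1.0, 0.1 * ((-1) ** t)] for t in range(20)]
pc = principal_component(trace)
print(abs(pc[0]) > 0.99)   # True: dominant variation lies along sensor 0
```

In the GBP setting one would store such directions per demonstrated primitive and recognize a new trace by which stored direction it projects onto most strongly; that matching step is omitted here.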

16.
Florin, Performance Evaluation, 2003, 51(2-4): 171-190
Large deviations papers like that of Ignatyuk et al. [Russ. Math. Surv. 49 (1994) 41–99] have shown that, asymptotically, the stationary distribution of homogeneous regulated networks is of exponential form, with the coefficient in the exponent being different in various "boundary influence domains" and, in some of these domains, also depending on n. In this paper, we focus on the case of constant exponents and on a subclass of networks we call "strongly skip-free" (which includes all Jackson and all two-dimensional skip-free networks). We conjecture that an asymptotic exponent is constant iff it corresponds to a large deviations escape path which progresses gradually (from the origin to the interior) through boundary facets whose dimension always increases by one. Solving the corresponding large deviations problem for our subclass of networks leads to a family of "local large deviation systems" (LLDSs) (for the constant exponents), which are expressed entirely in terms of the cumulant generating function of the network. In this paper, we show that at least for "strongly skip-free" Markovian networks with independent transition processes, the LLDS is closely related to some "local boundary equilibrium systems" (LESs) obtained by retaining from the equilibrium equations only those valid in neighborhoods of the boundary.

Since asymptotic results require typically only that the cumulant generating function is well-defined over an appropriate domain, it is natural to conjecture that these LLDSs will provide the asymptotic constant exponents regardless of any distributional assumptions made on the network.

Finally, we outline a practical recipe for combining the local approximations to produce a global large deviations approximation, with the coefficients Kj determined numerically.


17.
Uncertainty in deductive databases and logic programming has been modeled using a variety of (numeric and non-numeric) formalisms in the past, including probabilistic, possibilistic, and fuzzy set-theoretic approaches, and many-valued logic programming. In this paper, we consider a hybrid approach to the modeling of uncertainty in deductive databases. Our model, called deductive IST (DIST), is based on an extension of the Information Source Tracking (IST) model, recently proposed for relational databases. The DIST model permits uncertainty to be modeled and manipulated in essentially qualitative terms, with an option to convert qualitative expressions of uncertainty into numeric form (e.g., probabilities). An uncertain deductive database is modeled as a Horn clause program in the DIST framework, where each fact and rule is annotated with an expression indicating the "sources" contributing to this information and the nature of their contribution. (1) We show that positive DIST programs enjoy the least model/least fixpoint semantics analogous to classical logic programs. (2) We show that top-down (e.g., SLD-resolution) and bottom-up (e.g., magic sets rewriting followed by semi-naive evaluation) query processing strategies developed for datalog can be easily extended to DIST programs. (3) Results and techniques for handling negation as failure in classical logic programming can be easily extended to DIST. As an illustration of this, we show how stratified negation can be so extended. We next study the problem of query optimization in such databases and establish the following results. (4) We formulate query containment in qualitative as well as quantitative terms. Intuitively, our qualitative sense of containment says that a query Q1 is contained in a query Q2 provided that, for every input database D and for every tuple t, t ∈ Q2(D) holds in every "situation" in which t ∈ Q1(D) is true. The quantitative notion of containment says that Q1 is contained in Q2 provided that, on every input, the certainty associated with any tuple computed by Q1 is no more than the certainty associated with the same tuple by Q2 on the given input. We also prove that the qualitative and quantitative notions of containment (both absolute and uniform versions) coincide. (5) We establish necessary and sufficient conditions for the qualitative containment of conjunctive queries. (6) We extend the well-known chase technique to develop a test for uniform containment and equivalence of positive DIST programs. (7) Finally, we prove that the complexity of testing containment of conjunctive DIST queries remains the same as in the classical case when the number of information sources is regarded as a constant (so it is NP-complete in the size of the queries). We also show that testing containment of conjunctive queries is co-NP-complete in the number of information sources.
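The flavor of source annotations can be sketched in a toy bottom-up evaluation (this is inspired by, but much simpler than, the DIST annotation algebra; all names are ours): each fact carries the set of sources it depends on, and a derived atom inherits the union of the source sets of the rule body that produced it.

```python
# Toy source-tracking evaluation for a propositional Horn program:
# derived facts accumulate the information sources they depend on.

def derive(facts, rules):
    """facts: dict atom -> frozenset of sources; rules: list of
    (head, [body atoms]). Naive bottom-up fixpoint."""
    known = dict(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known[head] = frozenset().union(*(known[b] for b in body))
                changed = True
    return known

facts = {"a": frozenset({"s1"}), "b": frozenset({"s2"})}
rules = [("c", ["a", "b"]), ("d", ["c"])]
print(sorted(derive(facts, rules)["c"]))   # ['s1', 's2']
```

The real DIST model tracks not just which sources contribute but the nature of their contribution (e.g., alternative derivations), which this union-only sketch flattens away.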

18.
Software that can produce independently checkable evidence for the correctness of its output has received recent attention for use in certifying compilers and proof-carrying code. CVC (Cooperating Validity Checker) is a proof-producing validity checker for a decidable fragment of first-order logic enriched with background theories. This paper describes how proofs of valid formulas are produced from the decision procedure for linear real arithmetic implemented in CVC. It is shown how extensions to LF which support proof rules schematic in an arity ("elliptical" rules) are very convenient for this purpose.
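The kind of independently checkable evidence at stake can be illustrated for linear real arithmetic (an illustrative sketch of a Farkas-style certificate, not CVC's LF proof format): a nonnegative combination of the hypotheses that sums to an absurdity like 0 ≤ -1 refutes them, and a tiny checker can verify it without redoing the search.

```python
from fractions import Fraction

# Checker for a Farkas-style unsatisfiability certificate over
# inequalities of the form  coeffs . x <= bound.

def check_certificate(ineqs, multipliers):
    """Valid refutation: all multipliers >= 0, the combined left-hand
    side cancels to the zero vector, and the combined bound is < 0
    (i.e., the combination reads 0 <= negative)."""
    assert all(m >= 0 for m in multipliers)
    n = len(ineqs[0][0])
    combined = [sum(m * c[i] for m, (c, _) in zip(multipliers, ineqs))
                for i in range(n)]
    bound = sum(m * b for m, (_, b) in zip(multipliers, ineqs))
    return all(ci == 0 for ci in combined) and bound < 0

# x <= 1  together with  -x <= -2  (i.e., x >= 2) is unsatisfiable;
# multipliers (1, 1) combine them into 0 <= -1.
ineqs = [([Fraction(1)], Fraction(1)), ([Fraction(-1)], Fraction(-2))]
print(check_certificate(ineqs, [Fraction(1), Fraction(1)]))   # True
```

The point of proof-producing decision procedures is exactly this asymmetry: finding the multipliers takes a solver, but checking them is a few lines of arithmetic.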

19.
We refer to an arbitrary family of events (hypotheses) {Hi}, i.e., one that neither has any particular algebraic structure nor is a partition of the certain event Ω. We detect logical relations among the given events (the latter could represent some possible diseases), and some further information is carried by probability assessments relative to an event E (e.g., a symptom) conditional on some of the Hi's (a "partial likelihood"). If we assess (prior) probabilities for the events Hi, then the ensuing problems are: (i) is this assessment coherent? (ii) is the partial likelihood coherent per se? (iii) is the global assignment (the initial one together with the likelihood) coherent? If the relevant answers are all yes, then we may try to "update" (coherently) the priors P(Hi) into the posteriors P(Hi|E). This is an instance of a more general issue, the problem of coherent extensions: a very particular case is Bayes' updating for exhaustive and mutually exclusive hypotheses, in which this extension is unique. In the general case the lack of uniqueness gives rise to upper and lower updated probabilities, and we could then update the latter again, given a new event F and a corresponding (possibly partial) likelihood. In this paper, many relevant features of this problem are discussed, keeping an eye on the distinction between semantic and syntactic aspects.

20.
Optimizing agent-based meeting scheduling through preference estimation
Meeting scheduling is a routine task that needs to be performed quite regularly and frequently within any organization. Unfortunately, this task can be quite tedious and time-consuming, potentially requiring several rounds of negotiation among many people on the meeting date, time, and place before a meeting can finally be confirmed. The objective of our research is to create an agent-based environment within which meeting scheduling can be performed and optimized. For meeting scheduling, we define optimality as the solution that has the highest average preference level among all the possible choices. Our model tries to mimic real life in that an individual's preferences are not made public. Without complete information, traditional optimal algorithms, such as A*, will not work. In this paper, we present a novel "preference estimation" technique that allows us to find optimal solutions to negotiation problems without needing to know the exact preference models of all the meeting participants beforehand. Instead, their preferences are "estimated" and built on the fly based on observations of their responses during negotiation. Another unique contribution is the use of "preference rules" that allow preferences to change dynamically as scheduling decisions are made. This mimics changing preferences as the schedule gets filled. This paper uses two negotiation algorithms to compare the effect of "preference estimation": one based on negotiation through relaxation, and the other extending this with preference estimation. Simulations were then performed to compare these algorithms.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)