Similar documents
20 similar documents found (search time: 31 ms)
1.
In the tree-like representation of Time, two histories are undivided at a moment t whenever they share a common moment in the future of t. In the present paper, it will first be proved that Ockhamist and Peircean branching-time logics are unable to express some important sentences in which the notion of undividedness is involved. Then, a new semantics for branching-time logic will be presented. The new semantics is based on trees endowed with an indistinguishability function, a generalization of the notion of undividedness. It will be shown that Ockhamist and Peircean semantics can be viewed as limit cases of the semantics developed in this paper.
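In tree terms the definition is compact (the notation here is ours, not the authors'): for histories h1, h2 (maximal chains of moments) both passing through t,

    h1 ≈t h2  iff  there is a moment m with t < m and m ∈ h1 ∩ h2,

i.e. the two histories still run through a common moment strictly after t. So defined, ≈t is an equivalence relation on the histories through t, and the indistinguishability functions of the paper generalize exactly this relation.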

2.
Certain tasks, such as formal program development and theorem proving, fundamentally rely upon the manipulation of higher-order objects such as functions and predicates. Computing tools intended to assist in performing these tasks are at present inadequate in both the amount of knowledge they contain (i.e., the level of support they provide) and in their ability to learn (i.e., their capacity to enhance that support over time). The application of a relevant machine learning technique—explanation-based generalization (EBG)—has thus far been limited to first-order problem representations. We extend EBG to generalize higher-order values, thereby enabling its application to higher-order problem encodings.

Logic programming provides a uniform framework in which all aspects of explanation-based generalization and learning may be defined and carried out. First-order Horn logics (e.g., Prolog) are not, however, well suited to higher-order applications. Instead, we employ λProlog, a higher-order logic programming language, as our basic framework for realizing higher-order EBG. In order to capture the distinction between domain theory and training instance upon which EBG relies, we extend λProlog with the necessity operator of modal logic. We develop a meta-interpreter realizing EBG for the extended language, and provide examples of higher-order EBG.

3.
A well-known problem in default logic is the ability of naive reasoners to explain both g and ¬g from a set of observations. This problem is treated in at least two different ways within that camp.

One approach is examination of the various explanations and choosing among them on the basis of various explanation comparators. A typical comparator is choosing the explanation that depends on the most specific observation, similar to the notion of narrowest reference class.

Others examine default extensions of the observations and choose whatever is true in any extension, or what is true in all extensions, or what is true in preferred extensions. Default extensions are sometimes thought of as acceptable models of the world that are discarded as more knowledge becomes available.

We argue that the notions of specificity and extension lack clear semantics. Furthermore, we show that the problems these ideas were supposed to solve can be handled easily within a probabilistic framework.
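A toy illustration of the probabilistic handling (the names and numbers are ours, purely illustrative): conditioning on the most specific reference class the observations establish does the work of the specificity comparator, with ordinary probability supplying the semantics.

    # Toy reference-class example (illustrative figures, not from the paper).
    p_flies = {"bird": 0.9, "penguin": 0.01}   # P(g | reference class)
    specificity = ["penguin", "bird"]          # most specific class first
    observed = {"bird", "penguin"}             # Tweety is a penguin, hence a bird

    # Condition on the most specific class the observations establish.
    cls = next(c for c in specificity if c in observed)
    print(cls, p_flies[cls])                   # penguin 0.01 -> ~g is favored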

4.
Summary Reasoning about programs involves some logical framework which seems to go beyond classical predicate logic. LAR is an extension of predicate logic by additional concepts intended to formalize our natural reasoning about algorithms. Semantically, this extension introduces an underlying time scale on which formulas are considered, together with time-shifting connectives. Besides a full model-theoretic treatment, a consistent and complete formal system for LAR is given. The pure logical system can serve as a basis for various theories. As an example, a theory of while program schemes is developed which contains Hoare's correctness proof system.

5.
In many applications one has a set of discrete points at which some variable such as pressure or velocity is measured. In order to graphically represent and display such data (say, as contours of constant pressure), the discrete data must be represented by a smooth function. This continuous surface can then be evaluated at any point for graphical display. Sometimes data are arbitrarily located except that they occur along non-intersecting lines, an example occurring in wind tunnel tests where data are recorded at plug taps on an aircraft body. An algorithm is developed for this type of structured data problem and illustrated by means of color computer graphics.
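The paper's algorithm is not reproduced in the abstract; as a generic stand-in, scattered measurements of this kind are often fit with a smooth radial-basis surface, e.g. with SciPy (synthetic data along three non-intersecting lines):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Synthetic "pressure" samples along three non-intersecting lines.
    t = np.linspace(0.0, 1.0, 25)
    pts = np.vstack([np.column_stack([t, np.full_like(t, y0)])
                     for y0 in (0.2, 0.5, 0.8)])
    vals = np.sin(2 * np.pi * pts[:, 0]) + pts[:, 1]

    # Smooth surface through the samples; evaluate on a grid for contouring.
    surf = RBFInterpolator(pts, vals, kernel="thin_plate_spline", smoothing=1e-6)
    gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
    grid_vals = surf(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)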

6.
We introduce a methodology whereby an arbitrary logic system L can be enriched with temporal features to create a new system T(L). The new system is constructed by combining L with a pure propositional temporal logic T (such as linear temporal logic with Since and Until) in a special way. We refer to this method as adding a temporal dimension to L, or just temporalising L. We show that the logic system T(L) preserves several properties of the original temporal logic, such as soundness, completeness, decidability, conservativeness and separation over linear flows of time. We then focus on the temporalisation of first-order logic, and a comparison is made with other first-order approaches to the handling of time.
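Schematically (our rendering of the construction): the formulas of T(L) take L-formulas as unanalyzed atoms and close them under the boolean and temporal connectives,

    α ::= β | ¬α | α ∧ α | S(α, α) | U(α, α)      (β any formula of L)

so the temporal layer never looks inside an L-formula; this separation is what allows properties such as soundness, completeness and decidability to transfer from T and L to the combined system T(L).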

7.
A linear evolution equation for a thermodynamic variable F, odd under time-reversal, is obtained from the exact equation derived by Robertson from the Liouville equation for the information-theoretic phase-space distribution. One obtains an exact expression for τ(t), the relaxation time for F. For very short correlation time τc, τ(t) is time-independent for t > τc if C(t) ≡ ⟨F exp(−iLt)F⟩0, the equilibrium time correlation, decays exponentially for t > τc; here L is the Liouville operator. So long as C(t) is such that τ(t) decays rapidly to a steady-state value, the t → ∞ limit of τ(t) agrees with the fluctuation-dissipation theorem in applications to fluid transport.

8.
Summary For a family of languages ℒ, CAL(ℒ) is defined as the family of images of ℒ under nondeterministic two-way finite state transducers, while FINITE·VISIT(ℒ) is the closure of ℒ under deterministic two-way finite state transducers; CAL0(ℒ) = ℒ and, for n ≥ 0, CALn+1(ℒ) = CALn(CAL(ℒ)). For any semiAFL ℒ, if FINITE·VISIT(ℒ) ⊊ CAL(ℒ), then the CALn(ℒ) form a proper hierarchy and, for every n ≥ 0, FINITE·VISIT(CALn(ℒ)) ⊊ CALn+1(ℒ) ⊊ FINITE·VISIT(CALn+1(ℒ)). If ℒ is a SLIP semiAFL or a weakly k-iterative full semiAFL or a semiAFL contained in any full bounded AFL, then FINITE·VISIT(ℒ) ⊊ CAL(ℒ), and in the last two cases ℒ ⊊ FINITE·VISIT(ℒ). If ℒ is a substitution-closed full principal semiAFL and ℒ ≠ FINITE·VISIT(ℒ), then FINITE·VISIT(ℒ) ⊊ CAL(ℒ). If ℒ is a substitution-closed full principal semiAFL generated by a language without an infinite regular set and ℒ1 is a full semiAFL, then ℒ is contained in CALm(ℒ1) if and only if it is contained in ℒ1. Among the applications of these results are the following. For the following families ℒ, CALn(ℒ) forms a proper hierarchy: ℒ = INDEXED, ℒ = ETOL, and ℒ any semiAFL contained in CF. The family CF is incomparable with CALm(NESA), where NESA is the family of one-way nonerasing stack languages, and INDEXED is incomparable with CALm(STACK), where STACK is the family of one-way stack languages. This work was supported in part by the National Science Foundation under Grants No. DCR74-15091 and MCS-78-04725.

9.
Entemann (2002) defends fuzzy logic by pointing to what he calls misconceptions concerning fuzzy logic. However, some of these 'misconceptions' are in fact truths, and it is Entemann who has the misconceptions. The present article points to mistakes made by Entemann in three different areas. It closes with a discussion of what sort of general considerations it would take to motivate fuzzy logic.

10.
Since the earliest formalisation of default logic by Reiter, many contributions to this appealing approach to nonmonotonic reasoning have been given. The different formalisations are here presented in a general framework that gathers the basic notions, concepts and constructions underlying default logic. Our view is to interpret defaults as special rules that impose a restriction on the juxtaposition of monotonic Hilbert-style proofs of a given logic L. We propose to describe default logic as a logic where the juxtaposition of default proofs is subordinate to a restriction condition R. Hence a default logic is a pair (L, R), where properties of the logic L, like compactness, can be interpreted through the restriction condition R. Different default systems are then given a common characterization through a specific condition on the logic L. We also prove cumulativity for any default logic (L, R) by slightly modifying the notion of default proof: we extend, in fact, the language of L in a way close to that followed by Brewka in the formulation of his cumulative default system. Finally we show the existence of infinitely many intermediary default logics, depending on R and called linear logics, which lie between Reiter's and Łukaszewicz' versions of default logic. Work carried out in the framework of the agreement between the Italian PT Administration and FUB. Laforia, Université Paris VI Pierre et Marie Curie, 4 Place Jussieu, Tour 46, 75252 Paris, France.
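For orientation, the best-known instance of such a restriction condition is Reiter's own: E is an extension of a default theory (W, D) iff E = ⋃i Ei for the quasi-inductive sequence

    E0   = W
    Ei+1 = Th(Ei) ∪ { γ | (α : β / γ) ∈ D, α ∈ Ei, ¬β ∉ E }

(sketched here for single-justification defaults). Note that the consistency check ¬β ∉ E refers to the final set E rather than to Ei; it is exactly this kind of condition that the pair (L, R) abstracts.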

11.
Linear logic, introduced by J.-Y. Girard, is a refinement of classical logic providing means for controlling the allocation of resources. It has aroused considerable interest from both proof theorists and computer scientists. In this paper we investigate methods for automated theorem proving in propositional linear logic. Both the bottom-up (tableaux) and top-down (resolution) proof strategies are analyzed. Various modifications of sequent rules and efficient search strategies are presented along with the experiments performed with the implemented theorem provers.
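The crux for both directions of search is resource distribution. In the standard sequent presentation (not specific to this paper) the tensor-right rule splits the context:

    Γ ⊢ A    Δ ⊢ B
    ────────────────  (⊗R)
    Γ, Δ ⊢ A ⊗ B

Read bottom-up, a prover must guess how the multiset of resources Γ, Δ divides between the two premises; much of the rule modification and search-strategy work in such provers goes into taming this choice.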

12.
The main stream of legal theory tends to incorporate unwritten principles into the law. Weighing of principles plays a great role in legal argumentation, inter alia in statutory interpretation. A weighing and balancing of principles and other prima facie reasons is a jump: the inference is not conclusive.

To deal with defeasibility and weighing, a jurist needs both belief-revision logic and nonmonotonic logic. The systems of nonmonotonic logic included in the present volume provide logical tools enabling one to speak precisely about various kinds of rules about rules, dealing with such things as applicability of rules, what is assumed by rules, priority between rules and the burden of proof. Nonmonotonic logic is an example of an extension of the domain of logic. But the more far-reaching the extension is, the greater the problems it meets. It seems impossible to make a logical reconstruction of the totality of legal argumentation.

The lawyers' search for reasons has no obvious end point. Ideally, the search for reasons may end when one arrives at a coherent totality of knowledge; in other words, coherence is the termination condition of reasoning. Both scientific knowledge and knowledge of legal and moral norms progress by trial and error, and one must resort to a certain convention to define what error means. The main difference is, however, that the conventions of science are much more precise than those of legal scholarship.

Consequently, determination of error in legal science is often holistic and circular: the reasons determining that a legal theory is erroneous are not more certain than the contested theory itself. A strict and formal logical analysis cannot give us a full grasp of legal rationality. A weaker logical theory, allowing for nonmonotonic steps, comes closer, at the expense of an inevitable loss of computational efficiency. Coherentist epistemology grasps even more of this rationality, at the expense of a loss of preciseness.

13.
    prove((E,F),A,B,C,D) :- !, prove(E,[F|A],B,C,D).
    prove((E;F),A,B,C,D) :- !, prove(E,A,B,C,D), prove(F,A,B,C,D).
    prove(all(H,I),A,B,C,D) :- !, \+ length(C,D), copy_term((H,I,C),(G,F,C)),
        append(A,[all(H,I)],E), prove(F,E,B,[G|C],D).
    prove(A,_,[C|D],_,_) :- ((A = -(B); -(A) = B)) -> (unify(B,C); prove(A,[],D,_,_)).
    prove(A,[E|F],B,C,D) :- prove(E,F,[A|B],C,D).

implements a first-order theorem prover based on free-variable semantic tableaux. It is complete, sound, and efficient.
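Assuming the standard leanTAP reading of the five arguments (the formula under expansion, in negation normal form with all(X,F) for universal quantification; formulas still to be expanded on the branch; literals already on the branch; free variables introduced so far; a bound on their number that forces termination) and a sound unify/2, a query such as

    ?- prove((-(p), p), [], [], [], 1).

succeeds by closing the tableau for the unsatisfiable formula ¬p ∧ p; validity of a formula G is established by running the prover on the negation normal form of ¬G.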

14.
15.
In this paper, we propose a two-layer sensor fusion scheme for multiple-hypothesis multisensor systems. To reflect reality in decision making, uncertain decision regions are introduced in the hypothesis testing process. The entire decision space is partitioned into distinct regions of correct, uncertain and incorrect decisions. The first layer of decisions is made by each sensor independently, based on a set of optimal decision rules. The fusion process is performed by treating the fusion center as an additional virtual sensor in the system. This virtual sensor makes its decision based on the decisions reached by the set of sensors in the system. The optimal decision rules are derived by minimizing the Bayes risk function. As a consequence, the performance of the system as well as of the individual sensors can be quantified by the probabilities of correct, incorrect and uncertain decisions. Numerical examples of three-hypothesis, two- and four-sensor systems are presented to illustrate the proposed scheme.
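A minimal numerical sketch of the two layers (our simplification: a posterior threshold stands in for the paper's Bayes-risk-optimal partition into correct/uncertain/incorrect regions, and majority voting for the virtual sensor's rule):

    import numpy as np

    def decide(posterior, threshold=0.7):
        # First-layer rule: commit to a hypothesis only inside its
        # "correct" region; otherwise report uncertainty (None).
        k = int(np.argmax(posterior))
        return k if posterior[k] >= threshold else None

    # Three hypotheses, two sensors; rows = hypothesis, cols = observation.
    likelihood = [np.array([[.8, .1, .1], [.1, .8, .1], [.1, .1, .8]]),
                  np.array([[.7, .2, .1], [.2, .7, .1], [.1, .2, .7]])]
    prior = np.array([1/3, 1/3, 1/3])

    obs = [0, 0]                        # what each sensor actually measured
    local_decisions = []
    for L, z in zip(likelihood, obs):
        post = prior * L[:, z]
        local_decisions.append(decide(post / post.sum()))

    # Fusion layer: the centre acts as a virtual sensor whose observation is
    # the tuple of local decisions; here, majority vote with abstention.
    committed = [d for d in local_decisions if d is not None]
    fused = max(set(committed), key=committed.count) if committed else None
    print(local_decisions, "->", fused)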

16.
Summary This paper is devoted to developing and studying a precise notion of the encoding of a logical data structure in a physical storage structure, motivated by considerations of computational efficiency. The development builds upon the notion of an encoding of one graph in another. The cost of such an encoding is then defined so as to reflect the structural compatibility of the two graphs, the (externally specified) costs of implementing the host graph, and the (externally specified) set of intended usage patterns of the guest graph. The stability of the constructed framework is demonstrated in terms of a number of results; the faithfulness of the formalism is argued in terms of a number of examples from the literature; and the tractability of the model is hinted at by several results and by further references to the literature.

17.
We introduce here the study of general nonmonotonic rule systems. These deal with situations where a conclusion is drawn from a system of beliefs S (and seen to be in S), based both on some premises being in S and on some restraints not being in S. In the monotone systems of traditional logic there are no restraints; conclusions are drawn solely on the basis of premises being in S. Nonmonotonic rule systems capture the essential syntactic, semantic, and algorithmic features of many nonmonotone systems such as default logic, negation as failure, truth maintenance, autoepistemic logic, and also important combinatorial questions from mathematics such as the marriage problem. This reveals semantics, syntax, proof procedures and algorithms for computing belief sets, entirely uniformly and in many cases where none were previously available. In particular, we introduce and study deductively closed sets, extensions and weak extensions. Semantics of nonmonotonic rule systems is studied in part II of this paper, and extensions to predicate classical, intuitionistic, and modal logics are left to a later paper. Work partially supported by NSF grant RII-8610671 and Kentucky EPSCoR program and ARO contract DAAL03-89-K-0124. Work partially supported by NSF grant DMS-8902797 and ARO contract DAAG629-85-C-0018. Work partially supported by NSF grant DMS-8702473.
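A small sketch of the basic objects (our encoding, not the authors'): a rule with premises P and restraints R concludes c in a context S when P ⊆ S and R ∩ S = ∅; S is an extension exactly when it reproduces itself as the closure of the rules whose restraints are checked against S:

    def closure(rules, context):
        # Least set closed under rules whose restraints miss `context`.
        # rules: list of (premises, restraints, conclusion).
        s, changed = set(), True
        while changed:
            changed = False
            for prem, restr, concl in rules:
                if set(prem) <= s and not (set(restr) & set(context)) and concl not in s:
                    s.add(concl)
                    changed = True
        return s

    def is_extension(rules, candidate):
        return closure(rules, candidate) == set(candidate)

    # a :- (not b);  b :- (not a)  -- two extensions, {a} and {b}.
    rules = [([], ["b"], "a"), ([], ["a"], "b")]
    print(is_extension(rules, {"a"}),        # True
          is_extension(rules, {"b"}),        # True
          is_extension(rules, {"a", "b"}))   # False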

18.
Mutual convertibility of bound entangled states under local quantum operations and classical communication (LOCC) is studied. We focus on states associated with unextendible product bases (UPB) in a system of three qubits. A complete classification of such UPBs is suggested. We prove that for any pair of UPBs S and T the associated bound entangled states ρS and ρT cannot be converted to each other by LOCC, unless S and T coincide up to local unitaries. More specifically, there exists a finite precision ε(S,T) > 0 such that for any LOCC protocol mapping ρS into a probabilistic ensemble (pα, ρα), the fidelity between ρT and any possible final state satisfies F(ρT, ρα) ≤ 1 − ε(S,T). PACS: 03.65.Bz; 03.67.-a; 89.70.+c.
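For orientation (quoted from the wider UPB literature, not from this abstract): the best-known three-qubit UPB is the 'Shifts' basis

    |0,1,+⟩, |1,+,0⟩, |+,0,1⟩, |−,−,−⟩,   with |±⟩ = (|0⟩ ± |1⟩)/√2;

no product state is orthogonal to all four, so the normalized projector onto the orthogonal complement of their span is bound entangled, and states of exactly this kind are the objects classified and compared here.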

19.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term.

The classical approach to regularization is: select optimal weights (also called hyperparameters) to be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion or the minimal square error criterion. For that, we have to consider all possible weights.

The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f″(u)² du) (for the case of a function of one variable). For each different λ, there is a different solution to the restoration problem; denote it by fλ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different fλ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown.

Three basic problems are addressed in this work:
- Computing the MAP estimate, that is, the function f maximizing Pr(f | D) when the data D is given. This problem is reduced to a one-dimensional optimization problem.
- Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f | D) df. This problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate.
- Computing the pointwise uncertainty associated with the MSE solution. This problem is reduced to computing three one-dimensional integrals.
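A toy numerical rendering of the two estimates (our discretization and simplifications: second differences stand in for f″, the noise variance σ² is taken as known, and the weights over λ use the joint log-posterior at each conditional mode as a crude stand-in for the marginal likelihood):

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma = 100, 0.3
    x = np.linspace(0.0, 1.0, n)
    data = np.sin(2 * np.pi * x) + sigma * rng.standard_normal(n)

    D2 = np.diff(np.eye(n), 2, axis=0)   # discrete second derivative
    S = D2.T @ D2

    def restore(lam):
        # f_lam = argmin ||data - f||^2/(2 sigma^2) + (lam/2) f'Sf
        return np.linalg.solve(np.eye(n) / sigma**2 + lam * S, data / sigma**2)

    def log_post(lam):
        # log Pr(f_lam, lam | data) up to an additive constant; the
        # (n-2)/2 * log(lam) term is the lam-dependent prior normalizer.
        f = restore(lam)
        return (-np.sum((data - f) ** 2) / (2 * sigma**2)
                - 0.5 * lam * (f @ S @ f)
                + 0.5 * (n - 2) * np.log(lam))

    lams = np.logspace(-2, 4, 60)
    scores = np.array([log_post(l) for l in lams])
    f_map = restore(lams[np.argmax(scores)])       # 1-D search -> MAP
    w = np.exp(scores - scores.max()); w /= w.sum()
    f_mse = sum(wi * restore(l) for wi, l in zip(w, lams))  # weighted average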

20.
Ontological aspects of information modeling   (cited 4 times: 1 self-citation, 3 by others)
Information modeling (also known as conceptual modeling or semantic data modeling) may be characterized as the formulation of a model in which information aspects of objective and subjective reality are presented (the application), independent of datasets and processes by which they may be realized (the system).

A methodology for information modeling should incorporate a number of concepts which have appeared in the literature, but should also be formulated in terms of constructs which are understandable to and expressible by the system user as well as the system developer. This is particularly desirable in connection with certain intimate relationships, such as 'being the same as' or 'being a part of'.

The conceptual basis for such a methodology, as conventionally approached, seems flavored to an inappropriate degree with notions arising in the systems arena. To counter this tendency it is useful to turn to a discipline not hitherto much involved in technology, namely analytic philosophy.
