Similar Documents
20 similar documents found (search time: 31 ms)
1.
Abstract

The needs of a real-time reasoner situated in an environment may make it appropriate to view error-correction and non-monotonicity as much the same thing. This has led us to formulate situated (or step) logic, an approach to reasoning in which the formalism has a kind of real-time self-reference that affects the course of deduction itself. Here we seek to motivate this as a useful vehicle for exploring certain issues in commonsense reasoning. In particular, a chief drawback of more traditional logics is avoided: from a contradiction we do not have all wffs swamping the (growing) conclusion set. Rather, we seek potentially inconsistent, but nevertheless useful, logics where the real-time self-referential feature allows a direct contradiction to be spotted and corrective action taken, as part of the same system of reasoning. Some specific inference mechanisms for real-time default reasoning are suggested, notably a form of introspection relevant to default reasoning. The special treatments of ‘now’ and of contradictions are the main technical devices here. We illustrate this with a computer-implemented real-time solution to R. Moore's Brother Problem.

2.
Introspective reasoning can enable a reasoner to learn by refining its own reasoning processes. In order to perform this learning, the system must monitor the course of its reasoning to detect learning opportunities and then apply appropriate learning strategies. This article describes lessons learned from research on a computer model of how introspective reasoning can guide failure-driven learning. The computer model monitors its own reasoning by comparing it to a model of the desired behaviour of its reasoning, and learns in response to deviations from the ideal defined by the model. The approach is applied to the problem of determining indices for selecting cases from a case-based planner's memory. Experiments show that learning driven by this introspective reasoning both decreases retrieval effort and improves the quality of plans retrieved, increasing the overall performance of the planning system compared to case learning alone.

3.
Reasoning can lead not only to the adoption of beliefs, but also to the retraction of beliefs. In philosophy, this is described by saying that reasoning is defeasible. My ultimate objective is the construction of a general theory of reasoning and its implementation in an automated reasoner capable of both deductive and defeasible reasoning. The resulting system is named “OSCAR.” This article addresses some of the theoretical underpinnings of OSCAR and extends my earlier theory in two directions. First, it addresses the question of what the criteria of adequacy should be for a defeasible reasoner. Second, it extends the theory to accommodate reasons of varying strengths.

4.
5.
Abstract

One method to overcome the notorious efficiency problems of logical reasoning algorithms in AI has been to combine a general-purpose reasoner with several special-purpose reasoners for commonly used subtasks. In this paper we use Schubert's (Schubert et al. 1983, 1987) method of implementing a special-purpose class reasoner. We show that it is possible to replace Schubert's preorder number class tree by a preorder number list without loss of functionality. This form of the algorithm lends itself well to a parallel implementation, and we describe the design, coding and testing of such an implementation. Our algorithm is practically independent of the size of the class list, and even with several thousand nodes learning times are under a second and retrieval times are under 500 ms.
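For readers unfamiliar with this style of special-purpose class reasoner, the following is a minimal sketch of preorder interval labelling, the idea underlying Schubert's technique: each class receives a preorder number together with the largest preorder number among its descendants, so a subclass test reduces to a constant-time interval check. The taxonomy and class names are illustrative only, and the sketch labels a tree rather than reproducing the paper's list-based variant.

    # Preorder interval labelling for a class taxonomy (illustrative names).
    # Each class gets (pre, max_pre); b subsumes a iff pre[b] <= pre[a] <= max_pre[b].
    def label(tree, root):
        """Assign preorder numbers; returns {node: (pre, max_descendant_pre)}."""
        numbers, counter = {}, [0]

        def visit(node):
            pre = counter[0]
            counter[0] += 1
            for child in tree.get(node, []):
                visit(child)
            numbers[node] = (pre, counter[0] - 1)

        visit(root)
        return numbers

    def is_subclass(numbers, a, b):
        """True iff class a lies in the subtree rooted at class b (constant-time test)."""
        pre_a, _ = numbers[a]
        pre_b, max_b = numbers[b]
        return pre_b <= pre_a <= max_b

    taxonomy = {"thing": ["animal", "artifact"], "animal": ["bird", "mammal"], "bird": ["canary"]}
    nums = label(taxonomy, "thing")
    print(is_subclass(nums, "canary", "animal"))    # True
    print(is_subclass(nums, "canary", "artifact"))  # False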

6.
Some general principles are formulated about geometric reasoning in the context of model-based computer vision. Such reasoning tries to draw inferences about the spatial relationships between objects in a scene based on the fragmentary and uncertain geometric evidence provided by an image. The paper discusses the tasks the reasoner is to perform for the vision program, the basic competences it requires and the various methods of implementation. In the section on basic competences, some specifications of the data types and operations needed in any geometric reasoner are given.

7.
In this paper, constructions leading to the formation of belief sets by agents are studied. The focus is on the situation when possible belief sets are built incrementally in stages. An infinite sequence of theories that represents such a process is called a reasoning trace. A set of reasoning traces describing all possible reasoning scenarios for the agent is called a reasoning frame. Reiter's default logic is not powerful enough to represent reasoning frames. In the paper, a generalization of Reiter's default logic is introduced by allowing infinite sets of justifications. This formalism is called infinitary default logic. In the main result of the paper it is shown that every reasoning frame can be represented by an infinitary default theory. A similar representability result for antichains of theories (belief frames) is also presented.

8.
Abstract

We present an evolution of SNePS, the SNePSR (SNePS with resources) knowledge representation/reasoning system. SNePSR is an intelligent resource-bounded reasoner that allows several resource-spending strategies. Since no a priori commitments are made about the way resources are spent, the process of consuming resources can be used to model non-omniscient, non-exhaustive reasoners. SNePSR combines the introduction of resources with the capability to produce conditional answers, which explicitly reveal the impediments responsible for the absence of a definite answer. SNePSR avoids two problems from which most programs that try to behave intelligently suffer: (1) they never take into account the fact that reasoning resources are limited; (2) they remain silent whenever a definite answer cannot be produced. After briefly presenting how the characteristics that distinguish SNePSR have been incorporated into SNePS, we present some case studies of interactions with SNePSR demonstrating some of the system's features.

9.
Complete propositional reasoning is impractical as a tool in artificial intelligence, because it is computationally intractable. Most current approaches to limited propositional reasoning cannot easily be adjusted to use more (or less) time to prove more (or fewer) theorems when the task requires it. This difficulty can be solved by parameterizing the reasoner: designing in a ‘power dial’ giving the user fine control over cost and performance. System designers face the significant problem of choosing the best parameter scheme to use. This paper proposes an empirical methodology for comparing parameter schemes and illustrates its use in comparing eight such schemes for a given complete, resolution-based propositional reasoner. From the data, a clear choice emerges as the most preferable of the eight.
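As an illustration of what a ‘power dial’ can look like, the sketch below caps the number of resolution steps a naive propositional refutation prover may perform, so a larger budget proves more theorems and a smaller one fewer. This is a hypothetical parameter scheme written for illustration; it is not one of the eight schemes compared in the paper.

    # A hypothetical 'power dial': cap the number of resolution steps a naive
    # propositional refutation prover may perform. Clauses are frozensets of
    # integer literals (negative = negated atom).
    def resolve(c1, c2):
        """All resolvents of two clauses."""
        return [frozenset((c1 - {lit}) | (c2 - {-lit})) for lit in c1 if -lit in c2]

    def refute(clauses, max_steps):
        """Return True if the empty clause is derived within max_steps resolution
        steps (unsatisfiability proved), False if the budget runs out first."""
        known = [frozenset(c) for c in clauses]
        seen, steps, i = set(known), 0, 0
        while i < len(known):
            for j in range(i):
                for r in resolve(known[i], known[j]):
                    steps += 1
                    if not r:
                        return True          # empty clause: refutation found
                    if steps >= max_steps:
                        return False         # budget exhausted without a proof
                    if r not in seen:
                        seen.add(r)
                        known.append(r)
            i += 1
        return False

    # {p}, {~p or q}, {~q} is unsatisfiable; a bigger budget finds the proof.
    cnf = [{1}, {-1, 2}, {-2}]
    print(refute(cnf, max_steps=2), refute(cnf, max_steps=50))  # False True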

10.
In nonmonotonic reasoning, a default conditional α ⇒ β has most often been informally interpreted as a defeasible version of a classical conditional, usually the material conditional. There is, however, an alternative interpretation, in which a default is regarded essentially as a rule leading from premises to conclusion. In this paper, we present a family of logics based on this alternative interpretation. A general semantic framework under this rule-based interpretation is developed, and associated proof theories for a family of weak conditional logics are specified. Nonmonotonic inference is easily defined in these logics. Interestingly, the logics presented here are weaker than the commonly accepted base conditional approach for defeasible reasoning. However, this approach resolves problems that have been associated with previous approaches.

11.
In this article, we propose an Allen-like approach to deal with different types of temporal constraints about periodic events. We consider the different components of such constraints (thus, unlike Allen, we also take into account quantitative constraints), including frame times, user-defined periods, qualitative temporal constraints, and numeric quantifiers, as well as the interactions between such components. We propose a specialized high-level formalism to represent temporal constraints about periodic events; temporal reasoning on the formalism is performed by a path-consistency algorithm repeatedly applying our operations of inversion, intersection, and composition, and by a specialized reasoner about periods and numeric quantification. The high-level formalism has been designed in such a way that different types of temporal constraints about periodic events can be represented in a compact and (hopefully) user-friendly way, and path-consistency-based temporal reasoning on the formalism can be performed in polynomial time. We also prove that our definitions of inversion, intersection, and composition, and thus our path-consistency algorithm, are correct. This article also sketches the general architecture of the temporal manager for periodic events (TeMP+) that has been designed on the basis of our approach. As a working example, we show an application of our approach to scheduling in a school.
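The following sketch shows the generic path-consistency loop that such an algorithm repeatedly applies, using inversion, intersection, and composition. The point algebra {<, =, >} used here is only a stand-in for the paper's richer constraints on periodic events; the composition and converse tables are the standard ones for that toy algebra and are not taken from TeMP+.

    # Generic path-consistency loop (inversion, intersection, composition) over
    # the point algebra {<, =, >}, a stand-in for the paper's constraints.
    ALL = set("<=>")
    CONV = {"<": ">", "=": "=", ">": "<"}
    COMP = {
        ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): set("<=>"),
        ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
        (">", "<"): set("<=>"), (">", "="): {">"}, (">", ">"): {">"},
    }

    def compose(r1, r2):
        out = set()
        for a in r1:
            for b in r2:
                out |= COMP[(a, b)]
        return out

    def path_consistency(n, constraints):
        """constraints[(i, j)] is a set of allowed basic relations between
        variables i and j; unconstrained pairs default to ALL. Returns the
        refined network, or None if some constraint becomes empty."""
        net = {(i, j): set(ALL) for i in range(n) for j in range(n) if i != j}
        for (i, j), rel in constraints.items():       # inversion fills both directions
            net[(i, j)] &= set(rel)
            net[(j, i)] &= {CONV[r] for r in rel}
        changed = True
        while changed:
            changed = False
            for i in range(n):
                for j in range(n):
                    for k in range(n):
                        if len({i, j, k}) < 3:
                            continue
                        refined = net[(i, j)] & compose(net[(i, k)], net[(k, j)])
                        if not refined:
                            return None               # inconsistent network
                        if refined != net[(i, j)]:
                            net[(i, j)] = refined
                            changed = True
        return net

    # x < y and y < z force x < z.
    net = path_consistency(3, {(0, 1): {"<"}, (1, 2): {"<"}})
    print(net[(0, 2)])  # {'<'}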

12.
This paper deals with the automation of reasoning from incomplete information by means of default logics. We provide proof procedures for default logics' major reasoning modes, namely credulous and skeptical reasoning. We start by reformulating the task of credulous reasoning in default logics as deductive planning problems. This interpretation supplies us with several interesting and valuable insights into the proof theory of default logics. Foremost, it allows us to take advantage of the large number of available methods, algorithms, and implementations for solving deductive planning problems. As an example, we demonstrate how credulous reasoning in certain variants of default logic is implementable by means of a planning method based on equational logic programming. In addition, our interpretation allows us to transfer theoretical results, such as complexity results, from the field of planning to that of default logics. In this way, we have isolated two previously unknown classes of default theories for which deciding credulous entailment is polynomial. Our approach to skeptical reasoning relies on an arbitrary method for credulous reasoning. It requires neither the inspection of all extensions nor the computation of entire extensions to decide whether a formula is skeptically entailed. Notably, our approach abstracts from the underlying credulous reasoner. In this way, it can be used to extend existing formalisms for credulous reasoning to skeptical reasoning.

13.
Abstract

In the past we developed a semantics for a restricted annotated logic language for inheritance reasoning. Here we generalize it to annotated Horn logic programs. We first provide a formal account of the language, describe its semantics, and provide an interpreter written in Prolog for it. We then investigate its relationship to Belnap's 4-valued logic, Gelfond and Lifschitz's semantics for logic programs with negation, Brewka's prioritized default logics and other annotated logics due to Kifer et al.

14.
This paper is a discussion of two continuous learning approaches for improving classification accuracy for an intuitive reasoner algorithm. The reasoner predicted the value of a given target variable by multiple iterations of forward-chained, rule-based inference. Each rule in the reasoner's rule set had associated with it a weight, referred to here as “Strength of Belief” (SB). The SB value of a rule indicated the certainty level of that rule. In each iteration of reasoning, any instances of similar values for a given variable were replaced by a single consolidated datum, and the SB associated with the consolidated datum was increased. At the end of the reasoning process, the class (value) of the target variable with the highest SB was reported as the conclusion. The rule set for the reasoner was generated from a training data set that contained 80% of the data in a weather database comprising 50 years' worth of hourly measurements for 54 weather variables. Each rule was induced based on only a small subset of the weather data. The intuitive reasoner was tested by using the induced rules to predict a number of pre-selected target variables using 275 test cases created from the test data. The first continuous learning approach was to identify relevant input variables for the reasoner, and the second was to rebalance the rule set used by the reasoner by adjusting the SB associated with each of the rules. Because of the way the rules were induced, the resulting rules did not contain any information about the relevance of the 53 possible input variables to the task of predicting a given target variable for previously unseen cases. A method was developed to identify which input variables were most relevant to the task based on the induced rule set. For four of six target variables, this method gave the intuitive reasoner higher prediction accuracy than a set of randomly chosen input variables. The second continuous learning approach was intended to address the class imbalance problem in the rule set. The intuitive reasoner appeared to over-fit classes (values) that had frequent representation in the rule set. To address this problem, a heuristic was developed that generated adjustment factors for the SB values of the rules. The use of this heuristic improved the classification accuracy of the intuitive reasoner for four of the six target variables.
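The sketch below illustrates, under stated assumptions, two of the mechanisms described in the abstract: consolidating repeated conclusions for a variable into a single datum with an increased SB, and rebalancing rule SBs with an inverse-class-frequency factor. Both the consolidation rule (summing SBs) and the adjustment heuristic are illustrative stand-ins; the paper's actual formulas are not reproduced here.

    # Illustrative only: consolidating repeated conclusions (raising SB) and an
    # inverse-frequency rebalancing of rule SBs. The paper's actual heuristics
    # are not reproduced here.
    from collections import Counter

    def consolidate(conclusions):
        """conclusions: list of (value, sb) pairs for one target variable.
        Repeated values collapse into a single datum whose SB grows."""
        merged = {}
        for value, sb in conclusions:
            merged[value] = merged.get(value, 0.0) + sb   # repetition raises certainty
        return merged

    def rebalance(rule_sb, rule_class):
        """Scale each rule's SB by the inverse frequency of the class it predicts,
        a simple stand-in for a class-imbalance adjustment factor."""
        counts = Counter(rule_class.values())
        mean = len(rule_class) / len(counts)
        return {r: sb * mean / counts[rule_class[r]] for r, sb in rule_sb.items()}

    merged = consolidate([("rain", 0.4), ("rain", 0.3), ("clear", 0.6)])
    print(max(merged, key=merged.get))   # 'rain' (SB 0.7) wins over 'clear' (0.6)

    sbs = rebalance({"r1": 1.0, "r2": 1.0, "r3": 1.0},
                    {"r1": "rain", "r2": "rain", "r3": "clear"})
    print(sbs)   # the under-represented 'clear' rule gets a boost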

15.
A recurrent problem in the development of reasoning agents is how to assign degrees of belief to uncertain events in a complex environment. The standard knowledge representation framework imposes a sharp separation between learning and reasoning; the agent starts by acquiring a “model” of its environment, represented in an expressive language, and then uses this model to quantify the likelihood of various queries. Yet, even for simple queries, the problem of evaluating probabilities from a general-purpose representation is computationally prohibitive. In contrast, this study adopts the learning to reason (L2R) framework, which aims at eliciting degrees of belief in an inductive manner. The agent is viewed as an anytime reasoner that iteratively improves its performance in light of the knowledge induced from its mistakes. Indeed, by coupling exponentiated gradient strategies in learning with weighted model counting techniques in reasoning, the L2R framework is shown to provide efficient solutions to relational probabilistic reasoning problems that are provably intractable in the classical paradigm.
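A minimal sketch of the exponentiated-gradient update mentioned above: weights are multiplied by exp(-eta * gradient) and renormalised onto the probability simplex. The coupling with weighted model counting is not shown, and the gradient values are toy inputs.

    # Minimal exponentiated-gradient (EG) step: multiplicative update followed by
    # renormalisation onto the probability simplex. Gradients here are toy values.
    import math

    def eg_update(weights, grads, eta=0.5):
        scaled = [w * math.exp(-eta * g) for w, g in zip(weights, grads)]
        z = sum(scaled)
        return [s / z for s in scaled]

    w = [0.25, 0.25, 0.25, 0.25]
    # Components contributing more to the mistake (larger gradient) lose weight.
    print(eg_update(w, grads=[1.0, 0.2, 0.0, 0.5]))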

16.
We propose a general way of combining background reasoners in theory reasoning. Using a restricted version of the Craig interpolation lemma, we show that background reasoner cooperation can be achieved as a form of constraint propagation, much in the spirit of existing combination methods for decision procedures. In this case, constraint information is propagated across reasoners by exchanging residues that are, in essence, disjunctions of ground literals over a common signature. As an application of our approach, we describe a multitheory version of the semantic tableau calculus, and we prove it sound and complete.
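A toy sketch of the residue-exchange loop: each background reasoner is modelled as a function that, given the facts propagated so far over the shared signature, returns any new ground clauses it can derive; propagation runs to a fixpoint or stops when an inconsistency is reported. The two placeholder reasoners are invented for illustration and do not reflect the paper's tableau calculus.

    # Toy residue-exchange loop: each background reasoner maps the shared facts
    # propagated so far to any new ground clauses it can derive over the common
    # signature. The placeholder reasoners below are invented for illustration.
    def propagate(reasoners, shared_facts):
        changed = True
        while changed:
            changed = False
            for derive in reasoners:
                for clause in derive(shared_facts):
                    if clause == "False":
                        return None                     # joint inconsistency detected
                    if clause not in shared_facts:
                        shared_facts.add(clause)
                        changed = True
        return shared_facts

    def reasoner_a(facts):                              # e.g. knows that p implies q
        return {"q"} if "p" in facts else set()

    def reasoner_b(facts):                              # flags q together with not-q
        return {"False"} if {"q", "not_q"} <= facts else set()

    print(propagate([reasoner_a, reasoner_b], {"p", "not_q"}))   # None: inconsistent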

17.
We present a general approach for representing and reasoning with sets of defaults in default logic, focusing on reasoning about preferences among sets of defaults. First, we consider how to control the application of a set of defaults so that either all apply (if possible) or none do (if not). From this, an approach to dealing with preferences among sets of default rules is developed. We begin with an ordered default theory, consisting of a standard default theory, but with possible preferences on sets of rules. This theory is transformed into a second, standard default theory wherein the preferences are respected. The approach differs from other work, in that we obtain standard default theories and do not rely on prioritized versions of default logic. In practical terms this means we can immediately use existing default logic theorem provers for an implementation. Also, we directly generate just those extensions containing the most preferred applied rules; in contrast, most previous approaches generate all extensions, then select the most preferred. In a major application of the approach, we show how semimonotonic default theories can be encoded so that reasoning can be carried out at the object level. With this, we can reason about default extensions from within the framework of a standard default logic. Hence one can encode notions such as skeptical and credulous conclusions, and can reason about such conclusions within a single extension.

18.
Abstract

The concept of extension plays an important role in default logic. The notion of an ordered seminormal default theory has been introduced (Etherington 1987) to characterize a class of seminormal default theories which have extensions. However, the original definition has a drawback because of its dependence on specific representations of the default theory. We introduce the ‘canonical representation’ of a default theory and redefine the orderedness of a default theory based on its canonical representation. We show that under the new definition, the orderedness of a default theory Δ = (W,D) is intrinsic to the theory itself, independent of the specific representations of W and D. We present a modification of the algorithm in Etherington (1987) for computing extensions of a default theory. More importantly, we prove the conjecture (Etherington 1987) that a modified version of the algorithm in Etherington (1987) converges for general ordered, finite seminormal default theories, while the original algorithm was proven (Etherington 1987) to converge for ordered, finite network default theories, which form a proper subset of the theories considered in this paper.

19.
This article describes a framework for practical social reasoning designed to be used for analysis, specification, and implementation of the social layer of agent reasoning in multiagent systems. Our framework, called the expectation strategy behavior (ESB) framework, is based on (i) using sets of update rules for social beliefs tied to observations (so-called expectations), (ii) bounding the amount of reasoning to be performed over these rules by defining a reasoning strategy, and (iii) influencing the agent's decision-making logic by means of behaviors conditioned on the truth status of current and future social beliefs. We introduce the foundations of ESB conceptually and present a formal framework and an actual implementation of a reasoning engine, which is specifically combined with a general (belief–desire–intention-based) practical reasoning programming system. We illustrate the generality of ESB through select case studies, which show that it is able to represent and implement different typical styles of social reasoning. The broad coverage of existing social reasoning methods, the modularity that derives from its declarative nature, and its focus on practical implementation make ESB a useful tool for building advanced socially reasoning agents.

20.
Many formalisms for reasoning about knowing commit an agent to being logically omniscient. Logical omniscience is an unrealistic principle for us to use to build a real-world agent, since it commits the agent to knowing infinitely many things. A number of formalizations of knowledge have been developed that do not ascribe logical omniscience to agents. With few exceptions, these approaches are modifications of the possible-worlds semantics. In this paper we use a combination of several general techniques for building non-omniscient reasoners. First, we provide for the explicit representation of notions such as problems, solutions, and problem-solving activities, notions which are usually left implicit in discussions of autonomous agents. A second technique is to take explicitly into account the notion of resource when we formalize reasoning principles. We use the notion of resource to describe interesting principles of reasoning that are used for ascribing knowledge to agents. For us, resources are abstract objects. We make extensive use of ordering and inaccessibility relations on resources, but we do not find it necessary to define a metric. Using principles about resources without using a metric is one of the strengths of our approach. We describe the architecture of a reasoner, built from a finite number of components, that solves a puzzle involving reasoning about knowing by explicitly using the notion of resource. Our approach allows the use of axioms about belief ordinarily used in problem solving – such as axiom K of modal logic – without being forced to attribute logical omniscience to any agent. In particular we address the issue of how we can use resource-unbounded (e.g., logically omniscient) reasoning to attribute knowledge to others without introducing contradictions. We do this by showing how omniscient reasoning can be introduced as a conservative extension over resource-bounded reasoning.
