Similar Documents
20 similar documents found (search time: 46 ms)
1.
Research on Other Agents in Multi-Agent Systems   Cited by: 6 (self-citations: 0, citations by others: 6)
Multi-agent systems are a current focus of artificial intelligence research, and reasoning about knowledge and actions is an important topic within them. This paper presents a knowledge representation framework, called RAO logic, for representing concepts and rules when reasoning about other agents. From everyday reasoning we abstract the rule of the principle of exchanging position (PEP). PEP is an axiom schema of RAO logic and a basic rule by which an agent reasons about other agents; it is similar in form and role to the rule of detachment and axiom (K) of epistemic logic.
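For reference, the two pieces of epistemic logic the abstract compares PEP with have the following standard forms (textbook notation only; the precise formulation of PEP within RAO logic is not reproduced from the paper):

```latex
% Rule of detachment (modus ponens):
\frac{\varphi \qquad \varphi \rightarrow \psi}{\psi}
% Axiom (K), reading K_i\varphi as "agent i knows \varphi":
K_i(\varphi \rightarrow \psi) \rightarrow (K_i\varphi \rightarrow K_i\psi)
```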

2.
张宏  何华灿 《计算机科学》2006,33(8):184-186
Inference rules based on the principle of exchanging position (PEP) make reasoning in a multi-agent system about the states and behavior of other agents concise and clear. This paper examines the relationship between the validity of several characteristic formulas of normal modal logic and the properties of frames, finds that some modal formulas that seem intuitively valid in fact hold only conditionally, and gives a semantic proof of the validity of the PEP rule of [1-3] from the perspective of modal logic and Kripke possible-worlds semantics.

3.
How do I choose whom to delegate a task to? This is an important question for an autonomous agent collaborating with others to solve a problem. Were similar proposals accepted from similar agents in similar circumstances? What arguments were most convincing? What are the costs incurred in putting certain arguments forward? Can I exploit domain knowledge to improve the outcome of delegation decisions? In this paper, we present an agent decision-making mechanism where models of other agents are refined through evidence from past dialogues and domain knowledge, and where these models are used to guide future delegation decisions. Our approach combines ontological reasoning, argumentation and machine learning in a novel way, which exploits decision theory for guiding argumentation strategies. Using our approach, intelligent agents can autonomously reason about the restrictions (e.g., policies/norms) that others are operating with, and make informed decisions about whom to delegate a task to. In a set of experiments, we demonstrate the utility of this novel combination of techniques. Our empirical evaluation shows that decision theory, machine learning and ontological reasoning techniques can significantly improve dialogical outcomes.

4.
Many formalisms for reasoning about knowing commit an agent to be logically omniscient. Logical omniscience is an unrealistic principle for us to use to build a real-world agent, since it commits the agent to knowing infinitely many things. A number of formalizations of knowledge have been developed that do not ascribe logical omniscience to agents. With few exceptions, these approaches are modifications of the possible-worlds semantics. In this paper we use a combination of several general techniques for building non-omniscient reasoners. First we provide for the explicit representation of notions such as problems, solutions, and problem solving activities, notions which are usually left implicit in the discussions of autonomous agents. A second technique is to take explicitly into account the notion of resource when we formalize reasoning principles. We use the notion of resource to describe interesting principles of reasoning that are used for ascribing knowledge to agents. For us, resources are abstract objects. We make extensive use of ordering and inaccessibility relations on resources, but we do not find it necessary to define a metric. Using principles about resources without using a metric is one of the strengths of our approach. We describe the architecture of a reasoner, built from a finite number of components, that solves a puzzle involving reasoning about knowing by explicitly using the notion of resource. Our approach allows the use of axioms about belief ordinarily used in problem solving – such as axiom K of modal logic – without being forced to attribute logical omniscience to any agent. In particular we address the issue of how we can use resource-unbounded (e.g., logically omniscient) reasoning to attribute knowledge to others without introducing contradictions. We do this by showing how omniscient reasoning can be introduced as a conservative extension over resource-bounded reasoning.
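The contrast the abstract draws between resource-bounded and omniscient reasoning can be sketched minimally, under simplifying assumptions of my own (atomic facts, Horn-style rules, one abstract resource unit per inference pass); this is illustrative only and not the paper's formalism:

```python
# Illustrative sketch (not the paper's formalism): resource-bounded belief
# closure under modus ponens. Facts are strings; rules are (premise,
# conclusion) pairs. `budget` plays the role of an abstract resource bound:
# each pass over the rule set consumes one unit.

def bounded_closure(facts, rules, budget):
    """Apply modus ponens over `rules` for at most `budget` passes."""
    known = set(facts)
    for _ in range(budget):
        new = {c for (p, c) in rules if p in known and c not in known}
        if not new:          # fixpoint reached before resources ran out
            break
        known |= new
    return known

facts = {"a"}
rules = [("a", "b"), ("b", "c"), ("c", "d")]

# A resource-bounded agent with budget 2 derives only part of the closure...
assert bounded_closure(facts, rules, 2) == {"a", "b", "c"}
# ...while an agent with ample resources reaches the full (omniscient) closure.
assert bounded_closure(facts, rules, 10) == {"a", "b", "c", "d"}
```

In this toy model the omniscient reader is simply the limit of the bounded one as the budget grows, which mirrors the paper's idea of omniscient reasoning as a conservative extension over resource-bounded reasoning.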

5.
Any agent interacting with the real world must be able to reason about uncertainty in the world, about the actions that may occur in the world (whether initiated by the agent itself or by other agents), about the (probabilistic) beliefs of other agents, and about how these (probabilistic) beliefs change over time. In this article, we develop a family of logics that a reasoning agent may use to perform successively more sophisticated types of reasoning in such environments. We also characterize different types of agents. Furthermore, we provide a logic that enables a systems designer (who may have populated an environment with a collection of such autonomous agents) to reason about the system of agents as a whole. © 1995 John Wiley & Sons, Inc.

6.
In this paper we describe a language for reasoning about actions that can be used for modelling and for programming rational agents. We propose a modal approach for reasoning about dynamic domains in a logic programming setting. Agent behavior is specified by means of complex actions which are defined using modal inclusion axioms. The language is able to handle knowledge-producing actions as well as actions which remove information. The problem of reasoning about complex actions with incomplete knowledge is tackled, and the temporal projection and planning problems are addressed; more specifically, a goal-directed proof procedure is defined, which allows agents to reason about complex actions and to generate conditional plans. We give a non-monotonic solution for the frame problem by making use of persistency assumptions in the context of an abductive characterization. The language has been used for implementing an adaptive web-based system.

7.
A Knowledge Acquisition Method Based on Causality Diagrams   Cited by: 4 (self-citations: 0, citations by others: 4)
王洪春 《计算机仿真》2006,23(3):126-128
Production rules and causality diagrams are two methods of knowledge representation. Given the deficiencies of production rules in expressing knowledge and in reasoning, it is worthwhile to seek a method that expresses knowledge and supports reasoning better; causality diagrams express knowledge intuitively and support flexible, convenient reasoning. Based on the correspondence between fuzzy production rules and causality diagrams, and between composite fuzzy production rules and causality diagrams containing AND and OR gates, this paper gives a method and procedure for converting knowledge expressed as a set of fuzzy production rules into the more compact and intuitive causality-diagram representation. This correspondingly yields a knowledge acquisition method for causality diagrams, and an example of the conversion is given.

8.
In many settings, fully automated reasoning about tasks and resources is crucial. This is particularly important in multi-agent systems where tasks are monitored, managed and performed by intelligent agents. For these agents, it is critical to autonomously reason about the types of resources a task may require. However, determining appropriate resource types requires extensive expertise and domain knowledge. In this paper, we propose a means to automate the selection of resource types that are required to fulfil tasks. Our approach combines ontological reasoning and Logic Programming in a novel way for flexible matchmaking of resources to tasks. Using the proposed approach, intelligent agents can autonomously reason about the resources and tasks in various real-life settings, and we demonstrate this here through case studies. Our evaluation shows that the proposed approach equips intelligent agents with flexible reasoning support for task resourcing.

9.
In this paper we discuss reasoning about reasoning in a multiple agent scenario. We consider agents that are perfect reasoners, loyal, and that can take advantage of both the knowledge and ignorance of other agents. The knowledge representation formalism we use is (full) first order predicate calculus, where different agents are represented by different theories, and reasoning about reasoning is realized via a meta-level representation of knowledge and reasoning. The framework we provide is quite general: we illustrate it by showing a machine-checked solution to the three wise men puzzle. The agents' knowledge is organized into units: the agent's own knowledge about the world and its knowledge about other agents are units containing object-level knowledge; a unit containing meta-level knowledge embodies the reasoning about reasoning and realizes the link among units. In the paper we illustrate the meta-level architecture we propose for problem solving in a multi-agent scenario; we discuss our approach in relation to the modal one and we compare it with other meta-level architectures based on logic. Finally, we look at a class of applications that can be effectively modeled by exploiting the meta-level approach to reasoning about knowledge and reasoning.

10.
Planning for ad hoc teamwork is challenging because it involves agents collaborating without any prior coordination or communication. The focus is on principled methods for a single agent to cooperate with others. This motivates investigating the ad hoc teamwork problem in the context of self-interested decision-making frameworks. Agents engaged in individual decision making in multiagent settings face the task of having to reason about other agents' actions, which may in turn involve reasoning about others. An established approximation that operationalizes this approach is to bound the infinite nesting from below by introducing level 0 models. For the purposes of this study, individual, self-interested decision making in multiagent settings is modeled using interactive dynamic influence diagrams (I-DID). These are graphical models with the benefit that they naturally offer a factored representation of the problem, allowing agents to ascribe dynamic models to others and reason about them. We demonstrate that an implication of bounded, finitely-nested reasoning by a self-interested agent is that, when it is part of a team, we may not obtain optimal team solutions in cooperative settings. We address this limitation by including models at level 0 whose solutions involve reinforcement learning. We show how the learning is integrated into planning in the context of I-DIDs. This facilitates optimal teammate behavior, and we demonstrate its applicability to ad hoc teamwork on several problem domains and configurations.

11.
We propose an epistemic, nonmonotonic approach to the formalization of knowledge in a multi-agent setting. From the technical viewpoint, a family of nonmonotonic logics, based on Lifschitz's modal logic of minimal belief and negation as failure, is proposed, which allows for formalizing an agent that is able to reason about both its own knowledge and other agents' knowledge and ignorance. We define a reasoning method for such a logic and characterize the computational complexity of the major reasoning tasks in this formalism. From the practical perspective, we argue that our logical framework is well-suited for representing situations in which an agent cooperates in a team, and each agent is able to communicate his knowledge to other agents in the team. In such a case, in many situations the agent needs nonmonotonic abilities in order to reason about the situation based on his own knowledge and the other agents' knowledge and ignorance. Finally, we show the effectiveness of our framework in the robotic soccer application domain.

12.
An intelligent agent model with awareness of workflow progress   Cited by: 4 (self-citations: 4, citations by others: 0)
To support human functioning, ambient intelligent agents require knowledge about the tasks executed by the human. This knowledge includes design-time information such as (i) the goal of a task and (ii) the alternative ways for a human to achieve that goal, as well as run-time information such as the choices made by a human during task execution. In order to provide effective support, the agent must know exactly what steps the human is following. However, if not all steps along the path can be observed, it is possible that the agent cannot uniquely derive which path the human is following. Furthermore, in order to provide timely support, the agent must observe, reason, conclude and support within a limited period of time. To deal with these problems, this paper presents a generic focused reasoning mechanism that enables a guided selection of the path most likely followed by the human. This mechanism is based upon knowledge about the human and the workflow of the task. To arrive at such an approach, a reasoning mechanism is adopted in combination with a new workflow representation, which is used to focus the reasoning process in an appropriate manner. The approach is evaluated by means of an extensive case study.

13.
Theory of mind refers to the ability to reason explicitly about the unobservable mental content of others, such as beliefs, goals, and intentions. People often use this ability to understand the behavior of others as well as to predict future behavior. People even take this ability a step further and use higher-order theory of mind, reasoning about the way others make use of theory of mind and in turn attribute mental states to different agents. One of the possible explanations for the emergence of the cognitively demanding ability of higher-order theory of mind suggests that it is needed to deal with mixed-motive situations. Such mixed-motive situations involve partially overlapping goals, so that both cooperation and competition play a role. In this paper, we consider a particular mixed-motive situation known as Colored Trails, in which computational agents negotiate using alternating offers with incomplete information about the preferences of their trading partner. In this setting, we determine to what extent higher-order theory of mind is beneficial to computational agents. Our results show limited effectiveness of first-order theory of mind, while second-order theory of mind turns out to benefit agents greatly by allowing them to reason about the way they can communicate their interests. Additionally, we let human participants negotiate with computational agents of different orders of theory of mind. These experiments show that people spontaneously make use of second-order theory of mind in negotiations when their trading partner is capable of second-order theory of mind as well.

14.
Predicate logic based reasoning approaches provide a means of formally specifying domain knowledge and manipulating symbolic information to explicitly reason about different concepts of interest. Extension of traditional binary predicate logics with the bilattice formalism permits the handling of uncertainty in reasoning, thereby facilitating their application to computer vision problems. In this paper, we propose using first order predicate logics, extended with a bilattice based uncertainty handling formalism, as a means of formally encoding pattern grammars, to parse a set of image features, and detect the presence of different patterns of interest. Detections from low level feature detectors are treated as logical facts and, in conjunction with logical rules, used to drive the reasoning. Positive and negative information from different sources, as well as uncertainties from detections, are integrated within the bilattice framework. We show that this approach can also generate proofs or justifications (in the form of parse trees) for each hypothesis it proposes thus permitting direct analysis of the final solution in linguistic form. Automated logical rule weight learning is an important aspect of the application of such systems in the computer vision domain. We propose a rule weight optimization method which casts the instantiated inference tree as a knowledge-based neural network, interprets rule uncertainties as link weights in the network, and applies a constrained, back-propagation algorithm to converge upon a set of rule weights that give optimal performance within the bilattice framework. Finally, we evaluate the proposed predicate logic based pattern grammar formulation via application to the problems of (a) detecting the presence of humans under partial occlusions and (b) detecting large complex man made structures as viewed in satellite imagery. We also evaluate the optimization approach on real as well as simulated data and show favorable results.  
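To make the bilattice idea in the abstract above concrete, here is a small sketch assuming the common square bilattice over [0,1] x [0,1], where a truth value is a (belief, disbelief) pair; the paper's exact instantiation and its rule-weight learning are not reproduced:

```python
# Illustrative sketch (assumed square bilattice over [0,1] x [0,1], not
# necessarily the paper's exact instantiation): a truth value is a pair
# (belief, disbelief), so positive and negative evidence are kept separate.

def neg(v):
    b, d = v
    return (d, b)                      # negation swaps evidence for/against

def meet_t(u, v):                      # conjunction along the truth ordering
    return (min(u[0], v[0]), max(u[1], v[1]))

def join_t(u, v):                      # disjunction along the truth ordering
    return (max(u[0], v[0]), min(u[1], v[1]))

def join_k(u, v):                      # accumulate evidence from two sources
    return (max(u[0], v[0]), max(u[1], v[1]))

human_detector = (0.8, 0.1)            # strong positive evidence for "human"
occlusion_cue = (0.2, 0.6)             # mostly negative evidence elsewhere

# Combining along the knowledge ordering keeps both kinds of evidence,
# which is how conflicting sources can be integrated without discarding one.
assert join_k(human_detector, occlusion_cue) == (0.8, 0.6)
assert neg(human_detector) == (0.1, 0.8)
```

The point of the two orderings is that conjunction/disjunction act on degree of truth, while the knowledge-ordering join accumulates evidence, so agreement and conflict both remain visible in the final pair.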

15.
In this paper, a generic rule-base inference methodology using the evidential reasoning (RIMER) approach is proposed. Existing knowledge-base structures are first examined, and knowledge representation schemes under uncertainty are then briefly analyzed. Based on this analysis, a new knowledge representation scheme in a rule base is proposed using a belief structure. In this scheme, a rule base is designed with belief degrees embedded in all possible consequents of a rule. Such a rule base is capable of capturing vagueness, incompleteness, and nonlinear causal relationships, while traditional if-then rules can be represented as a special case. Other knowledge representation parameters such as the weights of both attributes and rules are also investigated in the scheme. In an established rule base, an input to an antecedent attribute is transformed into a belief distribution. Subsequently, inference in such a rule base is implemented using the evidential reasoning (ER) approach. The scheme is further extended to inference in hierarchical rule bases. A numerical study is provided to illustrate the potential applications of the proposed methodology.
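One concrete step of the scheme, transforming a numerical input into a belief distribution over an attribute's referential values, can be sketched as follows. The referential values and linear interpolation rule here are illustrative assumptions, and the ER combination step itself is omitted:

```python
# Illustrative sketch (hypothetical referential values; the ER combination
# step of RIMER is not reproduced): transform a numerical input into a
# belief distribution over referential values by linear interpolation
# between the two adjacent values.

def to_belief_distribution(x, ref_values):
    """ref_values: list of (label, numeric value) pairs, sorted by value."""
    dist = {lab: 0.0 for lab, _ in ref_values}
    for (l1, v1), (l2, v2) in zip(ref_values, ref_values[1:]):
        if v1 <= x <= v2:
            share = (v2 - x) / (v2 - v1)   # closeness to the lower value
            dist[l1] = share
            dist[l2] = 1.0 - share
            return dist
    raise ValueError("input outside the referential range")

refs = [("low", 0.0), ("medium", 5.0), ("high", 10.0)]
dist = to_belief_distribution(7.0, refs)
assert abs(dist["medium"] - 0.6) < 1e-9    # 7 is 60% of the way from high
assert abs(dist["high"] - 0.4) < 1e-9
assert dist["low"] == 0.0
```

An exact match to a referential value yields full belief in that value, which is how traditional crisp if-then inputs fall out as a special case.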

16.
Fuzzy Petri nets (FPN) are a model of computer systems well suited to describing asynchronous, concurrent events, and can be used effectively for formal verification and decision analysis of parallel and concurrent systems. To address the uncertainty and fuzziness of the knowledge in a system for comprehensive adjustment of polymer flooding, a weighted FPN decision model based on weighted fuzzy production rules is given. On top of this model, a formal inference algorithm for the decision process is presented. The algorithm takes the many constraints of the inference process into account and implements the complex inference procedure with matrix operations, making full use of the parallel processing capability of FPN and making decision inference simpler and faster. Taking the adjustment of fracturing schemes as an example, it is shown that the model is intuitive, highly expressive and easy to reason with, and has strong practical value.
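A hedged sketch of one matrix-style inference step in a weighted fuzzy Petri net follows; the weighted-sum firing rule and max update used here are one common choice in the FPN literature, not necessarily the paper's exact scheme:

```python
# Illustrative sketch (one common weighted-FPN update, not necessarily the
# paper's): W[i][j] is the weight of place i as an input to transition j,
# mu[j] is transition j's certainty factor, and O[j][i] marks place i as an
# output of transition j. theta holds the truth degree of each place.

def fpn_step(theta, W, mu, O):
    n_places, n_trans = len(W), len(mu)
    # firing degree of each transition: weighted sum of input truth degrees
    fire = [mu[j] * sum(W[i][j] * theta[i] for i in range(n_places))
            for j in range(n_trans)]
    # new truth degree of a place: max of its old value and incoming firings
    return [max(theta[i], max((fire[j] for j in range(n_trans) if O[j][i]),
                              default=0.0))
            for i in range(n_places)]

# Two input places feed one transition whose single output is place 2.
theta = [0.8, 0.6, 0.0]
W = [[0.5], [0.5], [0.0]]     # both inputs weighted 0.5
mu = [0.9]                    # rule certainty factor
O = [[False, False, True]]

theta1 = fpn_step(theta, W, mu, O)
assert abs(theta1[2] - 0.9 * (0.5 * 0.8 + 0.5 * 0.6)) < 1e-9  # = 0.63
```

Iterating `fpn_step` until the marking stops changing performs the whole inference, and because each step is a pair of matrix-style sweeps it parallelizes naturally, which is the property the abstract emphasizes.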

17.
Deductive databases that interact with, and are accessed by, reasoning agents in the real world (such as logic controllers in automated manufacturing, weapons guidance systems, aircraft landing systems, land-vehicle maneuvering systems, and air-traffic control systems) must have the ability to deal with multiple modes of reasoning. Specifically, the types of reasoning we are concerned with include, among others, reasoning about time, reasoning about quantitative relationships that may be expressed in the form of differential equations or optimization problems, and reasoning about numeric modes of uncertainty about the domain which the database seeks to describe. Such databases may need to handle diverse forms of data structures, and frequently they may require use of the assumption-based nonmonotonic representation of knowledge. A hybrid knowledge base is a theoretical framework capturing all the above modes of reasoning. The theory tightly unifies the constraint logic programming scheme of Jaffar and Lassez (1987), the generalized annotated logic programming theory of Kifer and Subrahmanian (1989), and the stable model semantics of Gelfond and Lifschitz (1988). New techniques are introduced which extend both the work on annotated logic programming and the stable model semantics.

18.
蒋宗华  徐勇 《计算机工程》2012,38(13):289-292
To address shortcomings of existing modular ontology reasoning methods, such as low generality and complex control, a service-based distributed Tableau algorithm is proposed. When a module performs consistency reasoning, assertions about external concepts are handled by calling the services of the corresponding modules, so that contradictions arising within a single inference are caught in the module that defines the concept in question; optimization techniques are applied to improve the algorithm's time performance. Experimental results show that the algorithm lets modules flexibly reference external concepts when expressing knowledge, supports complex reasoning tasks, and has good scalability.

19.
The dynamics of default reasoning   Cited by: 1 (self-citations: 0, citations by others: 1)
In this paper we study default reasoning from a dynamic, agent-oriented, semantics-based point of view. In a formal framework used to specify and to reason about rational agents, we introduce actions that model the (attempted) jumping to conclusions that is a fundamental part of reasoning by default. Application of such an action consists of three parts. First it is checked whether the formula that the agent tries to jump to is a default; thereafter it is checked whether the default formula can consistently be incorporated by the agent; and if this is the case the formula is included in the agent's beliefs. As for all actions in our framework, we define the ability and opportunity of agents to apply these actions, and the states of affairs following application. To formalise formulae being defaults, we introduce the modality of common possibility. This modality is related to, but not reducible to, the notions of common knowledge and 'everybody knows' knowledge. To model the qualitative difference that exists between hard, factual knowledge and beliefs derived by default, we employ different modalities to represent these concepts, thus combining knowledge, beliefs, and defaults in one framework. Based on the concepts used to model the default reasoning of agents, we look into the dynamics of the supernormal fragment of default logic. We show in particular that by sequences of jumps to conclusions agents can end up with extensions, in the sense of default logic, of their beliefs.
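The three-part application of a "jump" action described above can be sketched for supernormal defaults over propositional literals. This is a deliberately tiny model; the paper's modal treatment of ability, opportunity, and common possibility is not captured:

```python
# Illustrative sketch (propositional literals only; not the paper's modal
# framework): a "jump to conclusion" action for supernormal defaults.
# A literal is a string, with "~" as negation.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def try_jump(beliefs, defaults, candidate):
    """Attempt to jump to `candidate` by default; return the new belief set."""
    if candidate not in defaults:        # 1. is it a default at all?
        return beliefs
    if negate(candidate) in beliefs:     # 2. consistent with current beliefs?
        return beliefs
    return beliefs | {candidate}         # 3. incorporate it

defaults = {"flies"}
assert try_jump({"bird"}, defaults, "flies") == {"bird", "flies"}
# An agent that already believes ~flies cannot make the jump, so the
# attempted action leaves its beliefs unchanged.
assert try_jump({"bird", "~flies"}, defaults, "flies") == {"bird", "~flies"}
```

Repeatedly applying `try_jump` over all defaults until nothing changes yields a maximal consistent set of default conclusions, which corresponds to reaching an extension in the supernormal fragment.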

20.
The performance of an expert system depends on the quality and validity of the domain-specific knowledge built into the system. In most cases, however, domain knowledge (e.g. stock market behavior knowledge) is unstructured and differs from one domain expert to another. So, in order to acquire domain knowledge, expert system developers often take an induction approach in which a set of general rules is constructed from past examples. Expert systems based upon the induced rules were reported to perform quite well in the hold-out sample test.

However, these systems hardly provide users with an explanation that would clarify the results of a reasoning process. For this reason, users may remain unsure about whether to accept the system's conclusion. This paper presents an approach in which explanations about the induced rules are constructed. Our approach applies the structural equation model to the quantitative data whose qualitative form was originally used in rule induction. This approach was implemented with Korean stock market data to show that a plausible explanation about the induced rules can be constructed.


