Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
We present a formalism for representing the formation of intentions by agents engaged in cooperative activity. We use a syntactic approach, presenting a formal logical calculus that can be regarded as a meta-logic describing the reasoning and activities of the agents. Our central focus is on the evolving intentions of agents over time, and the conditions under which an agent can adopt and maintain an intention. In particular, the reasoning time and the time taken to subcontract are modeled explicitly in the logic. We axiomatize the concept of agent interactions in the meta-language, show that the meta-theory is consistent, and describe the unique intended model of the meta-theory. In this context we deal both with subcontracting between agents and the presence of multiple recipes, that is, multiple ways of accomplishing tasks. We show that under various initial conditions and known facts about agent beliefs and abilities, the meta-theory representation yields good results.

2.
Agent-based virtual simulations of social systems susceptible to corruption (e.g., police agencies) require agents capable of exhibiting corruptible behaviors to achieve realistic simulations and enable the analysis of corruption as a social problem. This paper proposes a formal belief-desire-intention framework supported by the functional event calculus and fuzzy logic for modeling corruption based on the integrity level of social agents and the influence of corrupters on them. Corruptible social agents are endowed with beliefs, desires, intentions, and corrupt-prone plans to achieve their desires. This paper also proposes a fuzzy logic system to define the level of impact of corruption-related events on the degree of belief in the truth of anti-corruption factors (e.g., the integrity of the leader of an organization). Moreover, an agent-based model of corruption supported by the proposed belief-desire-intention framework and the fuzzy logic system was devised and implemented. Results obtained from agent-based simulations are consistent with actual macro-level patterns of corruption reported in the literature. The simulation results show that (i) the bribery rate increases as more external entities attempt to bribe agents and (ii) the more anti-corruption factors agents believe to be true, the less prone they are to perpetrate acts of corruption. Copyright © 2014 John Wiley & Sons, Ltd.
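To make the fuzzy-logic idea above concrete, here is a minimal Python sketch, assuming triangular membership functions and a simple Sugeno-style weighted average; the rule values and variable names (event severity, leader integrity) are illustrative and not taken from the paper.

```python
# Minimal fuzzy-inference sketch (not the paper's actual rule base):
# map the severity of a corruption-related event to a decrease in the
# degree of belief in an anti-corruption factor (e.g., leader integrity).

def tri(x, a, b, c):
    """Triangular membership function with peak at b on support [a, c]."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def belief_adjustment(event_severity):
    """Fuzzy rules: low severity -> small decrease, high severity -> large decrease."""
    low  = tri(event_severity, 0.0, 0.0, 0.5)   # membership of "low severity"
    med  = tri(event_severity, 0.2, 0.5, 0.8)   # membership of "medium severity"
    high = tri(event_severity, 0.5, 1.0, 1.0)   # membership of "high severity"
    # Each rule's consequent is a crisp decrease in belief; aggregate with a
    # weighted average (a simple Sugeno-style defuzzification).
    weights, outputs = [low, med, high], [0.05, 0.15, 0.30]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

belief = 0.9                                     # prior belief in leader integrity
belief = max(0.0, belief - belief_adjustment(0.7))
print(round(belief, 3))                          # approx. 0.668 for this toy rule base
```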

3.
An Intention Model for Agents
Hu Shanli, Shi Chunyi. Journal of Software, 2000, 11(7): 965-970
Intention is an indispensable mental attitude of an agent and plays an important role in determining the behavior of a rational agent. Several intention models based on normal modal logic have been proposed, but they suffer from the serious problem of logical omniscience. This paper argues that intention is not a normal modal operator and proposes an alternative intention model that is free of the logical-omniscience problem and other related problems (e.g., the side-effect problem). Compared with the intention model of Konolige and Pollack, this model is simpler and more natural, satisfies the K axiom and the principle of joint consistency, and in effect provides a new way to give non-normal modal operators a semantics based on standard possible worlds.

4.
Sentential theories of belief hold that propositions (the things that agents believe and know) are sentences of a representation language. To analyze quantification into the scope of attitudes, these theories require a naming map: a function that maps objects to their names in the representation language. Epistemic logics based on sentential theories usually assume a single naming map, which is built into the logic. I argue that to describe everyday knowledge, the user of the logic must be able to define new naming maps for particular problems. Since the range of a naming map is usually an infinite set of names, defining a map requires quantification over names. This paper describes an epistemic logic with quantification over names, presents a theorem-proving algorithm based on translation to first-order logic, and proves soundness and completeness. The first version of the logic suffers from the problem of logical omniscience; a second version avoids this problem, and soundness and completeness are proved for this version also.
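As a small, hedged illustration of the naming-map concept (the paper itself works in an epistemic logic, not in code), the sketch below defines two different naming maps over the same domain objects, the kind of problem-specific choice the abstract argues users of the logic must be able to make; all identifiers are hypothetical.

```python
# Illustrative only: two naming maps for the same domain objects.
# A naming map sends each object to its name (a term) in the agent's
# representation language; different problems may call for different maps.

people = ["alice", "bob"]

def numeral_name(obj):
    """Name each object by its position: person_0, person_1, ..."""
    return f"person_{people.index(obj)}"

def descriptive_name(obj):
    """Name each object by a problem-specific description."""
    descriptions = {"alice": "the_pilot", "bob": "the_copilot"}
    return descriptions[obj]

# The same belief attribution reads differently under each naming map.
for naming_map in (numeral_name, descriptive_name):
    print(f"Knows(agent, At({naming_map('alice')}, cockpit))")
```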

5.
6.
7.
When combining logic-level theorem proving with computational methods, it is important to identify both functions that can be efficiently computed and the objects they can be applied to. This is generally achieved by mappings of logic-level terms and functions to their computational counterparts. However, these mappings are often quite ad hoc and fragile, depending very much on the particular logical representations of terms. We present a method of annotating terms in logic proofs with their computational properties. This enables the compact representation of computational objects in deduction systems as well as their connection to functions that can be easily computed for them. This eases the identification of deduction problems that can be treated efficiently by computational methods and also abstracts from trivial properties that are artefacts of a particular representation. We ensure logical correctness of our concepts by providing the possibility to replace terms by their logical representation and by expanding computational procedures into tactic applications that can be rigorously checked.

8.
Intrusion Intention Recognition Based on Probabilistic Reasoning
An attacker's intrusion actions usually reflect the attacker's underlying goals and intentions. Based on this observation, a hierarchical model of intrusion intention recognition is proposed. To handle the uncertain information found in network environments, an intrusion intention recognition algorithm based on probabilistic reasoning is presented; on this basis the attacker's subsequent attack plans and goals are predicted, providing early warning. A Bayesian network built from the causal relationships among network security events, goals, and intentions can describe and handle the problem of recognizing concurrent intentions. Experiments demonstrate the feasibility and effectiveness of the method.
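As a toy illustration of the probabilistic-inference step (not the paper's Bayesian network), the following sketch updates the probability of a hypothesized intrusion intention from observed security events under a naive conditional-independence assumption; the event names, priors, and likelihoods are invented for illustration.

```python
# Toy illustration: update the probability of each hypothesized intrusion
# intention from observed security events, assuming the events are
# conditionally independent given the intention (a naive-Bayes simplification).

priors = {"steal_data": 0.2, "deny_service": 0.1, "benign": 0.7}

# Hypothetical likelihoods P(event | intention)
likelihood = {
    "port_scan":     {"steal_data": 0.7, "deny_service": 0.6, "benign": 0.05},
    "sql_injection": {"steal_data": 0.5, "deny_service": 0.1, "benign": 0.01},
}

def posterior(observed_events):
    scores = {}
    for intention, prior in priors.items():
        p = prior
        for ev in observed_events:
            p *= likelihood[ev][intention]
        scores[intention] = p
    z = sum(scores.values())                    # normalize over all hypotheses
    return {i: p / z for i, p in scores.items()}

post = posterior(["port_scan", "sql_injection"])
print(max(post, key=post.get), post)            # most probable intention first
```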

9.
One of the typical causes of errors in team cooperative activities, such as in central control rooms of power plants and aircraft cockpits, is conflict among team members' intentions. If mutual awareness and communication were perfectly established and maintained, conflicts could be detected and resolved by the team members themselves; however, this does not happen in practice. In this paper, we provide a framework for detecting conflicts among team members' intentions based on team intention inference, aiming to make machines function as coordinators for cooperative activities. In previous work, we developed a method for team intention inference based on a definition of 'we-intention'. A we-intention is an other-regarding intention relating to situations in which several agents act together, and is represented as a set of individual intentions and mutual beliefs. In this framework, a conflict can be defined as a set of individual intentions and false beliefs (undesired procedures), and detected by searching for such combinations. We applied the proposed method to the operation of a plant simulator operated by a two-person team, and an experiment confirmed that the method could list candidate conflicts by type and rank the actual conflict high in priority in the tested context.
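A hedged sketch of the detection idea, not the authors' implementation: represent each member's inferred intention and belief about the next step of the shared procedure, and flag a conflict whenever a belief deviates from the agreed procedure. All names and values are hypothetical.

```python
# Hypothetical sketch of conflict detection over a two-person team:
# a we-intention is modeled as individual intentions plus each member's
# belief about the next step of the shared procedure. A conflict is a
# combination in which some member's belief deviates from the agreed step.

agreed_procedure = ["open_valve_A", "start_pump", "close_valve_B"]

team_state = {
    "operator_1": {"intention": "start_pump", "believed_next_step": "start_pump"},
    "operator_2": {"intention": "start_pump", "believed_next_step": "close_valve_B"},
}

def detect_conflicts(step_index, team):
    expected = agreed_procedure[step_index]
    conflicts = []
    for member, state in team.items():
        if state["believed_next_step"] != expected:
            conflicts.append((member, state["believed_next_step"], expected))
    return conflicts

for member, believed, expected in detect_conflicts(1, team_state):
    print(f"conflict: {member} believes next step is {believed}, expected {expected}")
```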

10.
A Two-Dimensional Intention Structure Based on Relations
From the perspective of constructing agents, a framework based on relational structures is proposed that covers cognitive states such as agent intentions, beliefs, and goals. In this framework, the intentions for achieving goals form a two-dimensional ordered structure, in which one dimension represents the temporal relations among intentions and the other represents the coherence relations among them; on this basis, the interrelations among beliefs, intentions, and goals are studied. Because the traditional approach of characterizing agent intentions with modal operators is abandoned, an agent's intentions can be represented directly by an intention base together with the temporal and coherence relations among intentions when the agent is constructed. This narrows the gap between theoretical agent models and practical agent architectures and provides a necessary theoretical basis for building agent architectures.
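The abstract describes the representation only at the conceptual level; the following minimal Python sketch illustrates the idea of an intention base equipped with two relations, one temporal and one for coherence, instead of modal-operator formulas. All identifiers are illustrative.

```python
# Illustrative sketch: an intention base with two relations over intentions,
# a temporal order ("i must be pursued before j") and a coherence relation
# ("i and j support the same goal"), in place of modal-operator formulas.

class IntentionBase:
    def __init__(self):
        self.intentions = set()
        self.before = set()      # temporal relation: (i, j) means i precedes j
        self.coherent = set()    # coherence relation: unordered pairs {i, j}

    def add(self, intention):
        self.intentions.add(intention)

    def add_before(self, i, j):
        self.before.add((i, j))

    def add_coherent(self, i, j):
        self.coherent.add(frozenset((i, j)))

    def next_candidates(self):
        """Intentions with no temporal predecessor still pending."""
        blocked = {j for (_, j) in self.before}
        return self.intentions - blocked

base = IntentionBase()
for i in ("book_venue", "invite_speakers", "open_registration"):
    base.add(i)
base.add_before("book_venue", "open_registration")
base.add_coherent("book_venue", "invite_speakers")
print(base.next_candidates())   # {'book_venue', 'invite_speakers'} in some order
```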

11.
A Theory of Intentions for Agent Computation in Multi-Agent Systems
A theory of intentions for agent computation in multi-agent systems is proposed to support theoretical research on agent computation. Two kinds of intentions are distinguished: achievement intentions and maintenance intentions. Based on a logical framework of multi-agent system computation, new semantic definitions of the two kinds of intentions are given, and some of their important logical properties are captured and characterized.

12.
Hu Shanli, Shi Chunyi. Journal of Software, 2002, 13(11): 2112-2115
Formal frameworks for specifying rational agents are usually based on belief, desire, and intention (BDI) logics. To overcome the problems in existing BDI logics, a suitable semantic representation for non-normal modal operators is needed. This paper discusses the requirements that abstract specifications of rational agent behavior place on the semantic representation, as well as the problems in existing BDI logics. It introduces the true/false subset semantics developed by the authors and its application to agent formalization. Their framework invalidates the problematic properties of intention, and it is proved that many desirable properties can be obtained by imposing certain constraints on the algebraic structure of the models. Finally, the true/false subset semantics is analyzed. All of this shows that the true/false subset semantics provides a suitable semantic representation for non-normal modal operators, constitutes an important development of the classical possible-worlds semantics for normal modal operators, is a powerful tool for the logical specification of rational agent behavior, and can be applied to construct new, suitable agent logic systems.

13.
Research on resource-bounded agents has established that rational agents need to be able to revise their commitments in light of new opportunities. In the context of collaborative activities, rational agents must be able to reconcile their intentions to do team-related actions with other, conflicting intentions. The SPIRE experimental system allows the process of intention reconciliation in team contexts to be simulated and studied. Initial work with SPIRE examined the impact of environmental factors and agent utility functions on individual and group outcomes in the context of one set of social norms governing collaboration. This paper extends those results by further studying the effect of environmental factors and the agents' level of social consciousness and by comparing the impact of two different types of social norms on agent behavior and outcomes. The results show that the choice of social norms influences the accuracy of the agents' responses to varying environmental factors, as well as the effectiveness of social consciousness and other aspects of agents' utility functions. In experiments using heterogeneous groups of agents, both sets of norms were susceptible to the free-rider effect. However, the gains of the less responsible agents were minimal, suggesting that agent designers would have little incentive to design agents that deviate from the standard level of responsibility to the group.
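As a toy illustration of the kind of intention reconciliation SPIRE simulates (not the system's actual utility function), the sketch below has an agent weigh an outside offer against its team commitment, with a hypothetical social-consciousness weight on the harm its defection would cause the group.

```python
# Toy intention-reconciliation rule (not SPIRE's actual utility function):
# an agent keeps its team commitment unless the outside offer's value,
# discounted by a social-consciousness weight on the harm to the group,
# exceeds the value of honoring the commitment.

def reconcile(team_task_value, outside_offer_value, group_harm, social_consciousness):
    keep_utility   = team_task_value
    defect_utility = outside_offer_value - social_consciousness * group_harm
    return "keep commitment" if keep_utility >= defect_utility else "defect"

# A responsible agent (high social consciousness) resists the same offer
# that tempts a less responsible one, echoing the free-rider discussion.
print(reconcile(10, 14, 8, social_consciousness=1.0))   # keep commitment
print(reconcile(10, 14, 8, social_consciousness=0.2))   # defect
```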

14.
Recent advances in man–machine interaction include attempts to infer operator intentions from operator actions, to better anticipate and support system performance. This capability has been investigated in contexts such as intelligent interface designs and operation support systems. While some progress has been demonstrated, efforts to date have focused on a single operator. In large and complex artefacts such as power plants or aircraft, however, a team generally operates the system, and team intention is not reducible to a mere summation of individual intentions. It is therefore necessary to develop a team intention inference method for sophisticated team–machine communication. In this paper a method is proposed for team intention inference in process domains. The method uses expectations of the other members as clues to infer a team intention and describes it as a set of individual intentions and beliefs of the other team members. We applied it to the operation of a plant simulator operated by a two-person team, and it was shown that, at least in this context, the method is effective for team intention inference.

15.
This paper presents a hybrid agent architecture that integrates the behaviours of BDI agents, specifically desire and intention, with a neural-network-based reinforcement learner known as Temporal Difference-Fusion Architecture for Learning and COgNition (TD-FALCON). With the explicit maintenance of goals, the agent performs reinforcement learning with awareness of its objectives instead of relying on external reinforcement signals. More importantly, the intention module equips the hybrid architecture with deliberative planning capabilities, enabling the agent to purposefully maintain an agenda of actions to perform and reducing the need to constantly sense the environment. Through reinforcement learning, plans can also be learned and evaluated without the rigidity of user-defined plans as used in traditional BDI systems. For intention and reinforcement learning to work cooperatively, two strategies are presented for combining the intention module and the reactive learning module for decision making in a real-time environment. Our case study based on a minefield navigation domain investigates how the desire and intention modules may cooperatively enhance the capability of a pure reinforcement learner. The empirical results show that the hybrid architecture is able to learn plans efficiently and to tap both intentional and reactive action execution to yield robust performance.
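The sketch below illustrates one combination strategy in spirit only: follow the intention module's agenda while it has pending steps, and otherwise fall back to a reactive, value-based choice. The Q-values and action names are placeholders; this is not TD-FALCON.

```python
import random

# Schematic hybrid decision step (not TD-FALCON): prefer the next action on
# the current intention's agenda; if the agenda is empty, choose reactively
# from learned action values (epsilon-greedy).

q_values = {"move_forward": 0.6, "turn_left": 0.2, "turn_right": 0.3}  # placeholder values
agenda = ["turn_left", "move_forward"]                                  # planned steps

def choose_action(agenda, q_values, epsilon=0.1):
    if agenda:                               # deliberative path: follow the intention
        return agenda.pop(0)
    if random.random() < epsilon:            # reactive path: explore
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)   # reactive path: exploit

for _ in range(4):
    print(choose_action(agenda, q_values))   # two planned steps, then reactive choices
```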

16.
17.
Planning and the stability of intention
I sketch my general model of the roles of intentions in the planning of agents like us: agents with substantial resource limitations and with important needs for coordination. I then focus on the stability of prior intentions: their rational resistance to reconsideration. I emphasize the importance of cases in which one's nonreconsideration of a prior intention is nondeliberative and is grounded in relevant habits of reconsideration. Concerning such cases I argue for a limited form of two-tier consequentialism, one that is restricted in ways that aim at blocking an analogue of Smart's concerns about rule-worship. I contrast this with the unrestricted two-tier consequentialism suggested by McClennen. I argue that my restricted approach is superior for a theory of the practical rationality of reflective, planning agents like us. But I also conjecture that an unrestricted two-tier consequentialism may be more appropriate for the AI project of specifying a high-level architecture for a resource-bounded planner.

18.
Financial robo-advisors have been widely used to assist individuals in their investment decisions, making it important to reduce uncertainties in the assistance process. Existing empirical studies rarely explore uncertainty reduction strategies and their implications for users' investment intentions in the context of financial robo-advisors; our study attempts to address this gap. We construct a model to explain how uncertainty reduction strategies affect users' investment intention in using financial robo-advisors. By collecting and analyzing a sample of 307 financial robo-advisor users, we find that algorithmic interpretability, structural assurance, and interactivity, as uncertainty reduction strategies, are positively related to users' investment intention through the value-based adoption mechanism. Our research extends the value-based adoption model and uncertainty reduction theory in the financial robo-advisor context. We provide insights to financial robo-advisor service providers about focusing on improving algorithmic transparency, third-party assurance, and interactivity of financial robo-advisors to enhance perceived value and investment intention.

19.
Computational trust and reputation models have been recognized as one of the key technologies required to design and implement agent systems. These models manage and aggregate the information needed by agents to efficiently perform partner selection in uncertain situations. For simple applications, a game-theoretical approach similar to that used in most models can suffice. However, if we want to tackle the problems found in socially complex virtual societies, we need more sophisticated trust and reputation systems. In this context, the reputation-based decisions that agents make take on special relevance and can be as important as the reputation model itself. In this paper, we propose a possible integration of a cognitive reputation model, Repage, into a cognitive BDI agent. First, we specify a belief logic capable of capturing the semantics of Repage information, which encodes probabilities. This logic is defined by means of a hierarchy of two first-order languages, allowing the specification of axioms as first-order theories. The belief logic integrates the information coming from Repage in terms of image and reputation, and combines them, defining a typology of agents depending on this combination. We use this logic to build a complete graded BDI model specified as a multi-context system where beliefs, desires, intentions, and plans interact with each other to perform BDI reasoning. We conclude the paper with an example and a related-work section that compares our approach with current state-of-the-art models.
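Purely as a hedged illustration of the kind of combination the abstract mentions (not Repage's or the paper's actual semantics), the sketch below merges an image score and a reputation score into one graded belief, with the weighting determined by a simple, hypothetical agent typology, and uses it for partner selection.

```python
# Illustrative combination of image (direct experience) and reputation
# (third-party reports) into one graded belief about a partner.
# The typologies and weights are hypothetical, not the paper's definitions.

TYPOLOGY_WEIGHTS = {
    "only_image":      (1.0, 0.0),   # trusts only direct experience
    "only_reputation": (0.0, 1.0),   # trusts only reported reputation
    "balanced":        (0.5, 0.5),
}

def graded_belief(image, reputation, typology="balanced"):
    """image and reputation in [0, 1]; returns a combined degree of belief."""
    w_img, w_rep = TYPOLOGY_WEIGHTS[typology]
    return w_img * image + w_rep * reputation

candidates = {"seller_A": (0.9, 0.4), "seller_B": (0.6, 0.8)}
best = max(candidates, key=lambda c: graded_belief(*candidates[c], "balanced"))
print(best)   # partner selection by combined belief -> seller_B
```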

20.
It is well-known that adding reflective reasoning can tremendously increase the power of a proof assistant. In order for this theoretical increase of power to become accessible to users in practice, the proof assistant needs to provide a great deal of infrastructure to support reflective reasoning. In this paper we explore the problem of creating a practical implementation of such a support layer. Our implementation takes a specification of a logical theory (which is identical to how it would be specified if we were simply going to reason within this logical theory, instead of reflecting it) and automatically generates the necessary definitions, lemmas, and proofs that are needed to enable the reflected meta-reasoning in the provided theory. One of the key features of our approach is that the structure of a logic is preserved when it is reflected. In particular, all variables, including meta-variables, are preserved in the reflected representation. This also allows the preservation of proof automation: there is a structure-preserving one-to-one map from proof steps in the original logic to proof steps in the reflected logic. To enable reasoning about terms with sequent context variables, we develop a principle for context induction, called teleportation. This work is fully implemented in the MetaPRL theorem prover.

