Similar Literature
20 similar documents found.
1.
Although Belief-Desire-Intention (BDI) agents have been deeply investigated from both a theoretical and a pragmatic perspective, less attention has been paid to the inherent recursive structure of mental states, which plays an essential role when modelling high-level interaction between intelligent agents. This paper tries to capture this property by introducing a multi-context approach to the representation of mental states. A semantics for multi-context formalisms is provided, based on the definition of a “mental structure”: a hierarchical lattice of triangular modules <x,B,D>, where the component x represents agent x’s mental state as a whole, while B and D represent x’s beliefs and x’s desires respectively. If other mental attitudes, such as intention and commitment, are to be considered as primitives, they can be embodied in the basic module; otherwise they can be represented in terms of beliefs and desires. The classical notion of clause is rediscovered in order to facilitate the heavy automated theorem proving needed to exploit the potential of the formalism for intelligent interaction with the external environment. The main advantages of this approach are the support for “unconsciousness” and the fact that inferences themselves can be modelled as mental attitudes. Some advanced dynamics of mental states, such as the abductive revision of mental states after the reception of a communication, can easily be applied over this formalism.
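As a purely illustrative, hypothetical sketch of the triangular module <x,B,D> described above (not the authors' formalisation; all class and attribute names are invented for exposition), one could model a module and its recursive nesting like this:

```python
from dataclasses import dataclass, field

@dataclass
class MentalModule:
    """Triangular module <x, B, D>: an agent's whole mental state plus its parts."""
    agent: str                                   # x: the agent this module models
    beliefs: set = field(default_factory=set)    # B: formulas the agent believes
    desires: set = field(default_factory=set)    # D: formulas the agent desires
    # Nested modules let agent x model the mental states of other agents,
    # giving the recursive (hierarchical) structure mentioned in the abstract.
    submodules: dict = field(default_factory=dict)  # e.g. {"y": MentalModule(...)}

# Hypothetical usage: agent "a" believes p, desires q, and models agent "b".
a = MentalModule(agent="a", beliefs={"p"}, desires={"q"})
a.submodules["b"] = MentalModule(agent="b", beliefs={"r"})
print(a)
```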

2.
In this research note, we introduce a graded BDI agent development framework, g-BDI for short, which allows agents to be built as multi-context systems that reason about three fundamental, graded mental attitudes (beliefs, desires and intentions). We propose a sound and complete logical framework for them, together with some logical extensions that accommodate slightly different views on desires.

3.
Law-abiding and integrity on the Internet: A case for agents
Software agents extend the current, information-based Internet to include autonomous mobile processing. In most countries, however, such processes, i.e. software agents, have no explicit legal status. Many of the legal implications of their actions (e.g., gathering information, negotiating terms, performing transactions) are not well understood. One important characteristic of mobile software agents is that they roam the Internet: they often run on agent platforms of others. There is often no pre-existing relation between the “owner” of a running agent’s process and the owner of the agent platform on which the agent process runs. When conflicts arise, the position of the agent platform administrator is not clear: is he or she allowed to slow down the process or possibly remove it from the system? Can the interests of the user of the agent be protected? This article explores legal and technical perspectives on protecting the integrity and availability of software agents and agent platforms.

4.
Open multi-agent systems (MAS) are decentralised and distributed systems that consist of a large number of loosely coupled autonomous agents. In the absence of centralised control they tend to be difficult to manage, especially in an open environment, which is dynamic, complex, distributed and unpredictable. This dynamism and uncertainty gives rise to unexpected plan failures. In this paper we present an abstract, knowledge-based approach for the diagnosis and recovery of plan action failures. Our approach associates a sentinel agent with each problem-solving agent in order to monitor the problem-solving agent’s interactions, and it requires the problem-solving agents to be able to report on the status of a plan’s actions. Once an exception is detected, the sentinel agents start an investigation of the suspected agents: they collect information about the status of failed abstract plan actions and knowledge about the agents’ mental attitudes regarding any failed plan, and then use this abstract knowledge, together with the agents’ mental attitudes, to diagnose the underlying cause of the plan failure. The sentinel agent may then ask the problem-solving agent to retry its failed plan based on the diagnostic result.

5.
Embedding planning systems in real-world domains has led to the necessity of Distributed Continual Planning (DCP) systems, where planning activities are distributed across multiple agents and plan generation may occur concurrently with plan execution. A key challenge in DCP systems is how to coordinate activities for a group of planning agents. This problem is compounded when these agents are situated in a real-world dynamic domain where the agents often encounter differing, incomplete, and possibly inconsistent views of their environment. To date, DCP systems have only focused on cases where agents’ behavior is designed to optimize a global plan. In contrast, this paper presents a temporal reasoning mechanism for self-interested planning agents. To do so, we model agents’ behavior based on the Belief-Desire-Intention (BDI) theoretical model of cooperation, while modeling dynamic joint plans with group time constraints by creating hierarchical abstraction plans integrated with a temporal constraint network. The contribution of this paper is threefold: (i) the BDI model specifies a behavior for self-interested agents working in a group, permitting an individual agent to schedule its activities autonomously while taking into consideration the temporal constraints of its group members; (ii) abstract plans allow the group to plan a joint action without explicitly describing all possible states in advance, making it possible to reduce the number of states that need to be considered in a BDI-based approach; and (iii) a temporal constraint network enables each agent to reason by itself about the best time for scheduling activities, making it possible to reduce coordination messages among a group. The mechanism ensures temporal consistency of a cooperative plan and enables the interleaving of planning and execution at both individual and group levels. We report on how the mechanism was implemented within a commercial training and simulation application, and present empirical evidence of its effectiveness in real-life scenarios and in reducing communication to coordinate group members’ activities.
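To illustrate the kind of temporal-constraint reasoning mentioned above, here is a minimal, generic sketch of a simple temporal network consistency check (shortest paths with negative-cycle detection). It is not the paper's mechanism, and the function names and example constraints are illustrative assumptions.

```python
import math

def stn_consistent(n, constraints):
    """Check consistency of a simple temporal network.

    n           -- number of time points (0 .. n-1)
    constraints -- list of (i, j, w) meaning t_j - t_i <= w
    Returns True iff the distance graph has no negative cycle.
    """
    # Initialise the all-pairs distance matrix of the distance graph.
    d = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for i, j, w in constraints:
        d[i][j] = min(d[i][j], w)
    # Floyd-Warshall relaxation.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # A negative diagonal entry signals an inconsistent (negative) cycle.
    return all(d[i][i] >= 0 for i in range(n))

# Hypothetical group plan: an action between time points 0 and 1 takes 5-10
# time units and must finish no later than 8 units after the group start.
print(stn_consistent(2, [(0, 1, 10), (1, 0, -5), (0, 1, 8)]))  # True
```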

6.
In complex multiagent systems, the agents may be heterogeneous and possibly designed by different programmers. Thus, the importance of defining a standard framework for agent communication languages (ACL) with a clear semantics has been widely recognized. The semantics should be verifiable, clear, and practical. Most classical proposals (for instance, mentalistic semantics) fail to meet these objectives. This paper proposes a logic-based semantics, which is social in nature. The basic idea is to associate with each speech act a clear meaning in terms of a commitment induced by that speech act, and a penalty to be paid in case that commitment is violated. A violation criterion based on the existence of arguments is then defined per speech act. We show that the proposed semantics satisfies some key properties that ensure that the approach is well founded. The logical setting makes the semantics verifiable. Moreover, it is shown that the new semantics is practical because it captures the dynamics of dialogues and shows clearly how isolated speech acts can be connected for building dialogues. © 2008 Wiley Periodicals, Inc.
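As a purely illustrative, hypothetical sketch of the commitment-plus-penalty idea described above (not the paper's logical semantics), a speech act could be recorded as a social commitment carrying a sanction that applies when the commitment is judged violated; the names and the stubbed violation criterion are assumptions:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Commitment:
    """Social commitment induced by a speech act, with a penalty on violation."""
    debtor: str                      # agent bound by the commitment
    creditor: str                    # agent the commitment is owed to
    content: str                     # proposition the debtor is committed to
    penalty: float                   # sanction paid if the commitment is violated
    violated: Callable[[str], bool]  # violation criterion (e.g. argument-based check)

def sanction(c: Commitment) -> float:
    """Return the penalty owed, zero if the commitment is not violated."""
    return c.penalty if c.violated(c.content) else 0.0

# Hypothetical example: an 'inform' act commits the speaker to the truth of p;
# here the violation criterion is stubbed by a simple membership test.
known_false = {"p"}
c = Commitment("speaker", "hearer", "p", penalty=1.0,
               violated=lambda content: content in known_false)
print(sanction(c))  # 1.0 -> the commitment was violated, so the penalty applies
```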

7.
Advancements in technology are bringing robotics into interpersonal communication contexts, including the college classroom. This study was one of the first to examine college students’ communication-related perceptions of robots used in an instructional capacity. Student participants rated both a human instructor using a telepresence robot and an autonomous social robot delivering the same lesson as credible. However, students gave higher credibility ratings to the teacher-as-robot, which led to differences in learning outcomes between the two instructional agents. Students reported more affective learning from the teacher-as-robot than from the robot-as-teacher, despite controlled instructional performances. Instructional agent type had both direct and indirect effects on behavioral learning. The direct effect suggests a potential machine heuristic in which students are more likely to follow behavioral suggestions offered by an autonomous social robot. The findings generally support the MAIN model and the Computers Are Social Actors paradigm, but suggest that further work needs to be done in this area.

8.
In this paper, some generalizations of the problem of formation of a group of autonomous mobile agents under nonlinear cyclic pursuit are studied. Cyclic pursuit is a simple distributed control law in which agent i pursues agent i+1 (modulo n). Each agent is subject to a nonholonomic constraint. A necessary condition is obtained for an equilibrium formation to occur among a group of agents with different speeds and different controller gains. These results generalize the equal-speed, equal-controller-gain results available in the literature.
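To make the cyclic pursuit law concrete, below is a minimal, generic simulation sketch of unicycle (nonholonomic) agents in which agent i steers toward agent i+1 mod n. The gains, speeds, initial poses, and update rule are illustrative assumptions, not the paper's analysis.

```python
import math

def simulate_cyclic_pursuit(n=5, steps=2000, dt=0.01):
    """Simulate n unicycle agents, each pursuing the next agent (mod n)."""
    # Heterogeneous speeds and controller gains, as in the generalized setting.
    v = [1.0 + 0.1 * i for i in range(n)]      # linear speeds
    k = [2.0 + 0.2 * i for i in range(n)]      # heading-controller gains
    # Initial poses on a unit circle with arbitrary headings.
    x = [math.cos(2 * math.pi * i / n) for i in range(n)]
    y = [math.sin(2 * math.pi * i / n) for i in range(n)]
    th = [2 * math.pi * i / n for i in range(n)]

    for _ in range(steps):
        nx, ny, nth = x[:], y[:], th[:]
        for i in range(n):
            j = (i + 1) % n                    # agent i pursues agent i+1 (mod n)
            bearing = math.atan2(y[j] - y[i], x[j] - x[i])
            err = math.atan2(math.sin(bearing - th[i]),
                             math.cos(bearing - th[i]))   # wrapped heading error
            w = k[i] * err                     # steering input (nonholonomic agent)
            nx[i] = x[i] + v[i] * math.cos(th[i]) * dt
            ny[i] = y[i] + v[i] * math.sin(th[i]) * dt
            nth[i] = th[i] + w * dt
        x, y, th = nx, ny, nth
    return list(zip(x, y))

print(simulate_cyclic_pursuit())  # final agent positions after the run
```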

9.
This paper presents a generic formal framework to specify and test autonomous e-commerce agents. First, the formalism used to represent the behaviour of agents is introduced, followed by the corresponding machinery defining how implementations can be tested. Two testing approaches are considered. The first, which can be called active, is based on stimulating the implementation under test (IUT) with a test; the peculiarity is that tests are defined as a special case of autonomous e-commerce agent. The second approach, which can be called passive, consists of observing the behaviour of the tested agent in an environment containing other agents. As a case study, the framework is applied to the e-commerce system Kasbah. Copyright © 2005 John Wiley & Sons, Ltd.

10.
《Knowledge》 2005, 18(6): 245-255
In this paper, we propose a teamwork model based on the concept of a mental attribute called attitude. Our model presents the team as a collective abstract attitude, in which a novel way of solving problems and conflicts in our domain is embedded. We argue that this collective attitude can be further decomposed into the individual attitudes of the agents towards various team attributes. We then evaluate the team problem-solving behaviours of the agents in a simulated fire world, using teams with and without different types of attitudes. The application and implementation of this model in a virtual fire world has revealed a promising prospect for developing team agents.

11.
Ge Hongwei, Ge Zhixin, Sun Liang, Wang Yuxin. 《Applied Intelligence》 2022, 52(9): 9701-9716

Multi-agent reinforcement learning is effective for tasks that require cooperation among different individuals, and communication plays an important role in enhancing the cooperation of agents in scalable and unstable environments. However, many challenges remain, because some communicated information may fail to facilitate cooperation or may even have a negative effect. Thus, how to identify information useful for the cooperation of agents is a critical issue to be solved. In this paper, we propose a multi-agent reinforcement learning algorithm with cognition differences and consistent representation (CDCR). Criteria for cognition differences are formulated to explore the information possessed by different agents, helping each agent have a better understanding of the others. We further train a cognition encoding network to obtain a globally consistent cognition representation for each agent; this representation is then used to realize the agent's cognitive consistency with respect to the environment. To validate the effectiveness of CDCR, we carry out experiments in the Predator-Prey and StarCraft II environments. The results in Predator-Prey demonstrate that the proposed cognition differences can achieve effective communication among agents; the results in StarCraft II demonstrate that considering both cognition differences and consistent representation can increase the test win rate of the baseline algorithm by 29% in the best case, and the ablation studies further demonstrate the positive roles played by the proposed strategies.


12.
We examined how human mental workload and the corresponding eye-movement behaviors are affected by the stages and levels of autonomy in routine and autonomy-failure conditions in human-autonomy teams (HAT). Thirty participants performed monitoring and diagnosing tasks with an autonomous agent in a three-factor experiment. The factors were information processing stage, level of autonomy, and agent operation condition. The results indicated that the later the agent-supported information processing stage or the higher the autonomy level, the higher the participants’ mental workload following autonomous agent failure. Compared to the continuous manual operation condition, HAT performance did not decline following autonomous agent failure, but at the cost of increased mental workload. The eye-movement results indicated a top-down compensatory control mechanism of attention, pointing to the risk of team performance decline following autonomous agent failure. These findings can be applied in designing autonomous agents and in setting human mental workload levels in a HAT.

13.
This paper addresses the problem of virtual pedestrian autonomous navigation for crowd simulation. It describes a method for solving interactions between pedestrians and avoiding inter-collisions. Our approach is agent-based and predictive: each agent perceives surrounding agents and extrapolates their trajectories in order to react to potential collisions. We aim at obtaining realistic results; thus the proposed model is calibrated from experimental motion-capture data. Our method is shown to be valid and to overcome major drawbacks of previous approaches, such as oscillations due to a lack of anticipation. We first describe the mathematical representation used in our model, then detail its implementation, and finally its calibration and validation from real data.
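The following is a minimal, generic sketch of the predictive idea described above (linear extrapolation of a neighbour's trajectory and a closest-approach test); it is not the calibrated model from the paper, and the radius and horizon thresholds are illustrative assumptions.

```python
def predicted_collision(p_a, v_a, p_b, v_b, radius=0.5, horizon=5.0):
    """Predict whether agents a and b will come within `radius` of each other.

    p_* -- current (x, y) positions; v_* -- constant (vx, vy) velocities
    assumed for the extrapolation. Returns (collides, time_of_closest_approach).
    """
    rx, ry = p_b[0] - p_a[0], p_b[1] - p_a[1]   # relative position
    vx, vy = v_b[0] - v_a[0], v_b[1] - v_a[1]   # relative velocity
    vv = vx * vx + vy * vy
    # Time at which the distance between the extrapolated trajectories is minimal,
    # clamped to the prediction horizon.
    t_cpa = 0.0 if vv == 0 else max(0.0, min(horizon, -(rx * vx + ry * vy) / vv))
    dx, dy = rx + vx * t_cpa, ry + vy * t_cpa
    return dx * dx + dy * dy < radius * radius, t_cpa

# Hypothetical example: two pedestrians walking toward each other.
print(predicted_collision((0.0, 0.0), (1.0, 0.0), (4.0, 0.2), (-1.0, 0.0)))
# -> (True, 2.0): a collision is anticipated, so the agent can adapt early
#    instead of reacting at the last moment (avoiding oscillations).
```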

14.
15.
For a software information agent, operating on behalf of a human owner and belonging to a community of agents, the choice of whether or not to communicate with another agent becomes a decision to take, since communication generally implies a cost. Since these agents often operate as recommender systems, on the basis of dynamic recognition of their human owners’ behaviour and generally using hybrid machine learning techniques, three main necessities arise in their design, namely (i) providing the agent with an internal representation of both the interests and the behaviour of its owner, usually called an ontology; (ii) detecting inter-ontology properties that can help an agent choose the most promising agents to contact for knowledge-sharing purposes; and (iii) semi-automatically constructing the agent ontology by simply observing the behaviour of the user supported by the agent, leaving to the user only the task of defining concepts and categories of interest. We present a complete MAS architecture, called connectionist learning and inter-ontology similarities (CILIOS), for supporting agent mutual monitoring, which tries to cover all the issues above. CILIOS exploits an ontology model able to represent concepts, concept collections, functions and causal implications among events in a multi-agent environment; moreover, it uses a mechanism capable of inducing logical rules representing agent behaviour in the ontology by means of a connectionist ontology representation based on neural-symbolic networks, i.e., networks whose input and output nodes are associated with logic variables.

16.
In multi-agent systems, the study of language and communication is an active field of research. In this paper we present the application of evolutionary strategies to the self-emergence of a common lexicon in a population of agents. By modeling the vocabulary or lexicon of each agent as an association matrix or look-up table that maps meanings (i.e. the objects encountered by the agents or the states of the environment itself) into symbols or signals, we check whether it is possible for the population to converge in an autonomous, decentralized way to a common lexicon, so that the communication efficiency of the entire population is optimal. We have conducted several experiments aimed at testing whether it is possible to converge with evolutionary strategies to an optimal Saussurean communication system. We have organized our experiments along two main lines: first, we have investigated the effect of the population size on the convergence results; second, and foremost, we have investigated the effect of the lexicon size on the convergence results. To analyze the convergence of the population of agents, we define population consensus as the state in which all the agents (i.e. 100% of the population) share the same association matrix or lexicon. As a general conclusion, we show that evolutionary strategies are powerful enough optimizers to guarantee convergence to lexicon consensus in a population of autonomous agents.
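As an illustration of the lexicon representation discussed above, here is a minimal, hypothetical sketch of an association-table lexicon, a pairwise communicative-success measure, and the consensus test; the function names and scoring rule are illustrative assumptions, not the paper's experimental setup.

```python
import random

def random_lexicon(n_meanings, n_symbols, rng):
    """A lexicon as a look-up table mapping each meaning index to one symbol."""
    return [rng.randrange(n_symbols) for _ in range(n_meanings)]

def communicative_success(speaker, hearer):
    """Fraction of meanings for which speaker and hearer use the same symbol."""
    return sum(s == h for s, h in zip(speaker, hearer)) / len(speaker)

def consensus(population):
    """True when 100% of the agents share the same association table."""
    return all(lex == population[0] for lex in population)

# Hypothetical initial population of agents with random lexicons; an evolutionary
# strategy would then mutate and select lexicons until consensus is reached.
rng = random.Random(0)
population = [random_lexicon(n_meanings=4, n_symbols=3, rng=rng) for _ in range(5)]
print(communicative_success(population[0], population[1]))
print(consensus(population))   # almost certainly False before any evolution
```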

17.
Topology-based multi-agent systems (TMAS), wherein agents interact with one another according to their spatial relationship in a network, are well suited for problems with topological constraints. In a TMAS, however, each agent may have a different state space, which can be rather large. Consequently, traditional approaches to multi-agent cooperative learning may not be able to scale up with the complexity of the network topology. In this paper, we propose a cooperative learning strategy under which autonomous agents are assembled in a binary tree formation (BTF). By constraining the interaction between agents, we effectively unify the state space of individual agents and enable policy sharing across agents. Our complexity analysis indicates that multi-agent systems with the BTF have a much smaller state space and a higher level of flexibility, compared with the general form of n-ary (n > 2) tree formation. We have applied the proposed cooperative learning strategy to a class of reinforcement learning agents known as temporal difference-fusion architecture for learning and cognition (TD-FALCON). Comparative experiments based on a generic network routing problem, which is a typical TMAS domain, show that the TD-FALCON BTF teams outperform alternative methods, including TD-FALCON teams in single-agent and n-ary tree formation, a Q-learning method based on a table lookup mechanism, and a classical linear programming algorithm. Our study further shows that TD-FALCON BTF can adapt and function well under various scales of network complexity and traffic volume in TMAS domains.
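The sketch below is a minimal, hypothetical illustration of the binary tree formation (BTF) idea: agents arranged in a complete binary tree, each interacting only with its parent and at most two children. It is not the TD-FALCON implementation; the array-based heap layout is an assumption made for brevity.

```python
def btf_neighbours(i, n):
    """Neighbours of agent i in a binary tree formation of n agents.

    Agents are stored implicitly in an array (heap layout): the parent of i is
    (i - 1) // 2 and its children are 2*i + 1 and 2*i + 2. Interaction is
    restricted to these neighbours, which bounds each agent's local state space.
    """
    neighbours = []
    if i > 0:
        neighbours.append((i - 1) // 2)                              # parent
    neighbours.extend(c for c in (2 * i + 1, 2 * i + 2) if c < n)    # children
    return neighbours

# Hypothetical team of 7 agents: the root interacts with 2 agents and every
# internal agent with at most 3 (one parent, two children), regardless of n.
for i in range(7):
    print(i, btf_neighbours(i, 7))
```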

18.
With the development of large-scale multiagent systems, agents are often organized in network structures where each agent interacts only with its immediate neighbors in the network. Coordination among networked agents is a critical issue that mainly includes two aspects: task allocation and load balancing. In the traditional approach, the resources of agents are crucial to their ability to obtain tasks, which is called talent-based allocation. However, in networked multiagent systems, tasks may incur large communication costs among agents, and these costs are sensitive to the agents' localities; this paper therefore presents a novel idea for task allocation and load balancing in networked multiagent systems, which takes into account both the talents and the centralities of agents. The paper first investigates the comparison between talent-based and centrality-based task allocation; it then explores the load balancing of the two approaches. The experimental results show that the centrality-based method can reduce the communication costs of a single task more effectively than the talent-based one, but the talent-based method generally obtains better load-balancing performance for parallel tasks than the centrality-based one.
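To illustrate the contrast drawn above, the following is a minimal, hypothetical sketch of the two allocation criteria (a resource, or "talent", score versus degree centrality in the agent network); the scoring functions and data are invented for exposition and are not the paper's exact formulas.

```python
def allocate_talent_based(agents, task_demand):
    """Pick the agent whose resources best cover the task's demand."""
    return max(agents, key=lambda a: agents[a]["resources"] - task_demand)

def allocate_centrality_based(agents, network):
    """Pick the most central agent (here: highest degree) to cut communication costs."""
    return max(agents, key=lambda a: len(network.get(a, [])))

# Hypothetical networked MAS: resources per agent and an undirected neighbour list.
agents = {"a1": {"resources": 9}, "a2": {"resources": 5}, "a3": {"resources": 4}}
network = {"a1": ["a2"], "a2": ["a1", "a3"], "a3": ["a2"]}

print(allocate_talent_based(agents, task_demand=3))   # a1: most resources
print(allocate_centrality_based(agents, network))     # a2: most central node
```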

19.

This article presents the STROBE model: both an agent representation and an agent communication model based on a social, i.e. interaction-centred, approach. This model represents how agents may realize the interactive, dynamic generation of services on the Grid. Dynamically generated services embody a new concept of service implying a collaborative creation of knowledge, i.e., learning; services are constructed interactively between agents depending on a conversation. The approach consists of integrating selected features from multi-agent systems and agent communication, language interpretation in applicative/functional programming, and e-learning/human learning into a unique, original, and simple view that privileges interactions, including control. The main characteristic of STROBE agents is that they develop a language (environment + interpreter) for each of their interlocutors. The model is inscribed within a global approach defending a shift from the classical algorithmic (control-based) view of problem solving in computing to an interaction-based view of social informatics, where artificial as well as human agents operate by communicating as well as by computing. The paper shows how the model may not only account for the classical communicating-agent approaches, but also represent a fundamental advance in modeling societies of agents, in particular in dynamic service generation scenarios such as those necessary today on the Web and proposed tomorrow for the Grid. Preliminary concrete experiments illustrate the potential of the model; they are significant examples for a very wide class of computational and learning situations.

20.
Dealing with changing situations is a major issue in building agent systems. When time is limited, knowledge is unreliable, and resources are scarce, the issue becomes more challenging. The BDI (Belief-Desire-Intention) agent architecture provides a model for building agents that addresses this issue. The model can be used to build intentional agents that are able to reason based on explicit mental attitudes, while behaving reactively in changing circumstances. However, despite its reactive and deliberative features, a classical BDI agent is not capable of learning: plans, as recipes that guide the activities of the agent, are assumed to be static. In this paper, an architecture for an intentional learning agent is presented. The architecture is an extension of the BDI architecture in which the learning process is explicitly described as plans. Learning plans are meta-level plans that allow the agent to introspectively monitor its mental states and update other plans at run time. In order to acquire the intricate structure of a plan, a process pattern called manipulative abduction is encoded as a learning plan. This work advances the state of the art by combining the strengths of learning and BDI agent frameworks in a rich language for describing deliberation processes and reactive execution. It enables domain experts to specify learning processes and strategies explicitly, while allowing the agent to benefit from procedural domain knowledge expressed in plans.
