Similar Documents
20 similar documents found (search time: 15 ms)
1.
Commitments among agents are widely recognized as an important basis for organizing interactions in multiagent systems. We develop an approach for formally representing and reasoning about commitments in the event calculus. We apply and evaluate this approach in the context of protocols, which represent the interactions allowed among communicating agents. Protocols are essential in applications such as electronic commerce where it is necessary to constrain the behaviors of autonomous agents. Traditional approaches, which model protocols merely in terms of action sequences, limit the flexibility of the agents in executing the protocols. By contrast, by formally representing commitments, we can specify the content of the protocols through the agents' commitments to one another. In representing commitments in the event calculus, we formalize commitment operations and domain-independent reasoning rules as axioms to capture the evolution of commitments. We also provide a means to specify protocol-specific axioms through the agents' actions. These axioms enable agents to reason about their actions explicitly to flexibly accommodate the exceptions and opportunities that may arise at run time. This reasoning is implemented using an event calculus planner that helps determine flexible execution paths that respect the given protocol specifications.
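As a hedged illustration of the commitment operations and lifecycle this abstract describes, here is a minimal Python sketch: a commitment store with domain-independent operations (create, discharge, cancel, release) and one protocol-specific rule stating that an action that brings about a committed condition discharges it. The operation names and the purchase events are illustrative assumptions, not the paper's event calculus axioms.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Status(Enum):
    ACTIVE = auto()
    FULFILLED = auto()
    CANCELLED = auto()
    RELEASED = auto()

@dataclass
class Commitment:
    # C(debtor, creditor, condition): debtor is committed to creditor to bring about condition
    debtor: str
    creditor: str
    condition: str
    status: Status = Status.ACTIVE

@dataclass
class CommitmentStore:
    commitments: list = field(default_factory=list)

    # Domain-independent operations, loosely mirroring event-calculus-style axioms.
    def create(self, debtor, creditor, condition):
        c = Commitment(debtor, creditor, condition)
        self.commitments.append(c)
        return c

    def on_event(self, actor, brings_about):
        """Protocol-specific rule: an action by the debtor that brings about a
        committed condition discharges the corresponding active commitments."""
        for c in self.commitments:
            if c.status is Status.ACTIVE and c.debtor == actor and c.condition == brings_about:
                c.status = Status.FULFILLED

    def cancel(self, c):           # debtor withdraws (compensation would be handled elsewhere)
        c.status = Status.CANCELLED

    def release(self, c):          # creditor frees the debtor
        c.status = Status.RELEASED

# Illustrative purchase interaction (hypothetical protocol events).
store = CommitmentStore()
c = store.create("merchant", "customer", "goods_delivered")   # created when the customer pays
store.on_event("merchant", "goods_delivered")                  # delivery discharges the commitment
print(c.status)   # Status.FULFILLED
```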

2.
Commitments are being used to specify interactions among autonomous agents in multiagent systems. Various formalizations of commitments have shown their strength in representing and reasoning about multiagent interactions. These formalizations mostly study commitment lifecycles, emphasizing fulfillment of a single commitment. However, when multiple commitments coexist, fulfillment of one commitment may affect the lifecycle of other commitments. Since agents generally participate in more than one commitment at a time, it is important for an agent to determine whether it can honor its commitments. These commitments may be the existing commitments of the agent as well as any prospective commitments that the agent plans to participate in. To address this, we develop the concept of commitment feasibility, i.e., whether it is possible for an agent to fulfill a set of commitments all together. To achieve this we generalize the fulfillment of a single commitment to the feasibility of a set of commitments. We then develop a rigorous method to determine commitment feasibility. Our method transforms the feasibility question into a constraint satisfaction problem and uses constraint satisfaction techniques to reach a conclusion. We show soundness and completeness of our method and illustrate its applicability over realistic cases.
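To make the feasibility-as-constraint-satisfaction idea concrete, the sketch below checks, by brute force, whether one agent can schedule a set of commitments with durations and deadlines so that all of them are fulfilled in time. The temporal model (sequential work, one duration and one deadline per commitment) and the example numbers are assumptions for illustration, not the authors' encoding.

```python
from itertools import permutations

# Each commitment: (name, duration, deadline) -- a simplified, hypothetical temporal model.
commitments = [
    ("deliver_report", 3, 6),
    ("review_contract", 2, 4),
    ("ship_goods", 2, 9),
]

def feasible(commitments):
    """Return a schedule (name -> start time) if the agent can fulfil every
    commitment sequentially before its deadline, else None.
    A brute-force stand-in for a real CSP solver."""
    for order in permutations(commitments):
        t, schedule = 0, {}
        ok = True
        for name, duration, deadline in order:
            if t + duration > deadline:      # constraint: finish before the deadline
                ok = False
                break
            schedule[name] = t
            t += duration
        if ok:
            return schedule
    return None

print(feasible(commitments))   # {'review_contract': 0, 'deliver_report': 2, 'ship_goods': 5}
```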

3.
In cooperative multiagent systems an alternative that maximizes the social welfare (the sum of utilities) can only be selected if each agent reports its full utility function. This may be infeasible in environments where communication is restricted. Employing a voting rule to choose an alternative greatly reduces the communication burden, but leads to a possible gap between the social welfare of the optimal alternative and the social welfare of the one that is ultimately elected. Procaccia and Rosenschein (2006) [13] have introduced the concept of distortion to quantify this gap. In this paper, we present the notion of embeddings into voting rules: functions that receive an agent's utility function and return the agent's vote. We establish that very low distortion can be obtained using randomized embeddings, especially when the number of agents is large compared to the number of alternatives. We investigate our ideas in the context of three prominent voting rules with low communication costs: Plurality, Approval, and Veto. Our results arguably provide a compelling reason for employing voting in cooperative multiagent systems.
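As a small worked example of distortion (not the paper's analysis), the sketch below compares the social welfare of the optimal alternative with the welfare of the alternative elected by Plurality, where each agent simply names its favourite. The utility profile is invented; it only illustrates how a gap arises.

```python
# Hypothetical utility profile: utilities[i][a] is agent i's utility for alternative a.
utilities = [
    [0.51, 0.49, 0.00],
    [0.51, 0.49, 0.00],
    [0.00, 1.00, 0.00],
]

def social_welfare(a):
    return sum(u[a] for u in utilities)

def plurality_winner():
    votes = [max(range(3), key=lambda a: u[a]) for u in utilities]   # each agent votes for its top alternative
    return max(range(3), key=votes.count)

opt = max(range(3), key=social_welfare)
win = plurality_winner()
distortion = social_welfare(opt) / social_welfare(win)
print(opt, win, round(distortion, 3))   # optimal is 1, Plurality elects 0, distortion is about 1.941
```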

4.
To Commit or Not to Commit: Modeling Agent Conversations for Action
Conversations are sequences of messages exchanged among interacting agents. For conversations to be meaningful, agents ought to follow commonly known specifications limiting the types of messages that can be exchanged at any point in the conversation. These specifications are usually implemented using conversation policies (which are rules of inference) or conversation protocols (which are predefined conversation templates). In this article we present a semantic model for specifying conversations using conversation policies. This model is based on the principle that the negotiation and uptake of shared social commitments entail the adoption of obligations to action, which indicate the actions that agents have agreed to perform. In the same way, obligations are retracted when their corresponding shared social commitments are negotiated to be discharged. Based on these principles, conversations are specified as interaction specifications that model the ideal sequencing of the agents' contributions as they negotiate the execution of actions in a joint activity. These specifications not only specify the adoption and discharge of shared commitments and obligations during an activity, but also indicate the commitments and obligations that are required (as preconditions) or that outlive a joint activity (as postconditions). We model the Contract Net Protocol as an example of the specification of conversations in a joint activity.
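To illustrate the general idea of a conversation producing a shared commitment and an obligation to act, here is a heavily simplified, hypothetical Contract Net exchange: a call for proposals, bids, and an acceptance that records the winner's obligation to perform the task. The message handling and the obligation bookkeeping are assumptions for illustration, not the article's semantic model.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    obligations: list = field(default_factory=list)   # actions the agent has agreed to perform

def contract_net(manager, bids, task):
    """Simplified call-for-proposals -> propose -> accept cycle.
    Accepting a proposal negotiates a shared commitment, recorded here
    as the winner's obligation to perform the task."""
    winner = min(bids, key=bids.get)                        # manager picks the cheapest proposal
    commitment = (winner, manager.name, f"perform({task})")  # shared social commitment
    return winner, commitment

manager = Agent("manager")
workers = {w.name: w for w in (Agent("w1"), Agent("w2"))}
bids = {"w1": 7, "w2": 4}                                   # illustrative proposals

winner, commitment = contract_net(manager, bids, "paint_house")
workers[winner].obligations.append(commitment[2])           # uptake: the obligation is adopted
print(winner, workers[winner].obligations)                  # w2 ['perform(paint_house)']
```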

5.
Statistical relational learning of trust
The learning of trust and distrust is a crucial aspect of social interaction among autonomous, mentally-opaque agents. In this work, we address the learning of trust based on past observations and context information. We argue that from the truster's point of view trust is best expressed as one of several relations that exist between the agent to be trusted (trustee) and the state of the environment. Besides attributes expressing trustworthiness, additional relations might describe commitments made by the trustee with regard to the current situation, for example: a seller offers a certain price for a specific product. We show how to implement and learn context-sensitive trust using statistical relational learning in the form of a Dirichlet process mixture model called the Infinite Hidden Relational Trust Model (IHRTM). The practicability and effectiveness of our approach are evaluated empirically on user ratings gathered from eBay. Our results suggest that (i) the inherent clustering achieved by the algorithm allows the truster to characterize the structure of a trust situation and provides meaningful trust assessments; (ii) utilizing the collaborative filtering effect associated with relational data improves trust assessment performance; and (iii) by learning faster and transferring knowledge more effectively we improve cold-start performance and can cope better with dynamic behavior in open multiagent systems. The latter is demonstrated with interactions recorded from a strategic two-player negotiation scenario.
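The IHRTM itself is a Dirichlet process mixture over relational data and is not reproduced here. As a loose, drastically simplified stand-in that only conveys the idea of context-sensitive trust from past observations, the sketch below keeps a per-(trustee, context) Beta estimate of reliability; the contexts and counts are invented.

```python
from collections import defaultdict

class ContextTrust:
    """Per-(trustee, context) Beta-Bernoulli trust estimate.
    A drastic simplification of relational trust learning: each context
    (e.g. a product category) gets its own reliability estimate."""
    def __init__(self, prior_pos=1.0, prior_neg=1.0):
        self.counts = defaultdict(lambda: [prior_pos, prior_neg])

    def observe(self, trustee, context, successful):
        pos, neg = self.counts[(trustee, context)]
        self.counts[(trustee, context)] = [pos + successful, neg + (1 - successful)]

    def trust(self, trustee, context):
        pos, neg = self.counts[(trustee, context)]
        return pos / (pos + neg)            # posterior mean reliability

t = ContextTrust()
for outcome in (1, 1, 1, 0):                # invented eBay-style ratings for one seller
    t.observe("seller42", "electronics", outcome)
t.observe("seller42", "books", 0)
print(round(t.trust("seller42", "electronics"), 2),   # 0.67
      round(t.trust("seller42", "books"), 2))         # 0.33
```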

6.
Environment as a first class abstraction in multiagent systems
The current practice in multiagent systems typically associates the environment with resources that are external to agents and their communication infrastructure. Advanced uses of the environment include infrastructures for indirect coordination, such as digital pheromones, or support for governed interaction in electronic institutions. Yet, in general, the notion of environment is not well defined. Functionalities of the environment are often dealt with implicitly or in an ad hoc manner. This is not only poor engineering practice; it also hinders engineers from exploiting the full potential of the environment in multiagent systems. In this paper, we put forward the environment as an explicit part of multiagent systems. We give a definition stating that the environment in a multiagent system is a first-class abstraction with dual roles: (1) the environment provides the surrounding conditions for agents to exist, which implies that the environment is an essential part of every multiagent system, and (2) the environment provides an exploitable design abstraction for building multiagent system applications. We discuss the responsibilities of such an environment in multiagent systems and we present a reference model for the environment that can serve as a basis for environment engineering. To illustrate the power of the environment as a design abstraction, we show how the environment is successfully exploited in a real-world application. Considering the environment as a first-class abstraction in multiagent systems opens up new horizons for research and development in multiagent systems.
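To make the dual role concrete, here is a hypothetical minimal Python interface for an environment abstraction: it hosts agents (their surrounding conditions) and, at the same time, exposes design hooks that application designers can override, for example to implement pheromone-style indirect coordination. The class and method names are illustrative assumptions, not the paper's reference model.

```python
class Environment:
    """Minimal first-class environment: hosts agents and mediates their
    perception and action instead of leaving this implicit in agent code."""
    def __init__(self):
        self.agents = {}
        self.state = {}

    def register(self, agent_id):
        self.agents[agent_id] = True          # surrounding conditions for the agent to exist

    def perceive(self, agent_id):
        # Design hook: what an agent may observe can be filtered or enriched here.
        return dict(self.state)

    def act(self, agent_id, key, value):
        # Design hook: a subclass can veto, transform, or log actions (governed interaction).
        self.state[key] = value

class PheromoneEnvironment(Environment):
    """Exploiting the environment abstraction: digital pheromones that evaporate."""
    def evaporate(self, rate=0.5):
        self.state = {k: v * rate for k, v in self.state.items() if v * rate > 0.01}

env = PheromoneEnvironment()
env.register("ant1")
env.act("ant1", "trail_A", 1.0)
env.evaporate()
print(env.perceive("ant1"))    # {'trail_A': 0.5}
```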

7.
A single global authority is not sufficient to regulate heterogeneous agents in multiagent systems based on distributed architectures, due to idiosyncratic local situations and to the need to regulate new issues as soon as they arise. On the one hand, institutions should be structured as normative systems with a hierarchy of authorities able to cope with the dynamics of local situations; on the other hand, higher authorities should be able to delimit the autonomy of lower authorities to issue valid norms. In this paper, we study the interplay of obligations and strong permissions in the context of hierarchies of authorities using input/output logic, because its explicit norm base facilitates reasoning about norm base maintenance, and it covers a variety of conditional obligations and permissions. We combine the logic with constraints, priorities and hierarchies of authorities. In this setting, we observe that Makinson and van der Torre's notion of prohibition immunity for permissions is no longer sufficient, and we introduce a new notion of permission as exception and a new distinction between static and dynamic norms. We show how strong permissions can dynamically change an institution by adding exceptions to obligations, provide an explicit representation of what is permitted to the subjects of the normative system, and allow higher level authorities to limit the power of lower level authorities to change the normative system.
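A toy rendering of the "permission as exception" idea, not the paper's input/output logic: in this sketch a prohibition on an action is defeated when a permission for the same action was issued by an authority ranked at least as high as the prohibiting one. The authority ranks and the shop-opening example are invented.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    kind: str        # "obligation", "prohibition" or "permission"
    action: str
    authority: str

RANK = {"national": 2, "regional": 1, "local": 0}    # hypothetical authority hierarchy

def forbidden(action, norms):
    """Toy reading of 'permission as exception': a prohibition is defeated by a
    permission for the same action issued by an authority ranked at least as
    high as the prohibiting authority."""
    for n in norms:
        if n.kind == "prohibition" and n.action == action:
            exceptions = [p for p in norms
                          if p.kind == "permission" and p.action == action
                          and RANK[p.authority] >= RANK[n.authority]]
            if not exceptions:
                return True
    return False

norms = [
    Norm("prohibition", "open_shop_on_sunday", "regional"),
    Norm("permission",  "open_shop_on_sunday", "national"),   # higher authority carves an exception
]
print(forbidden("open_shop_on_sunday", norms))   # False: the permission acts as an exception
```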

8.
9.
In this paper, we propose a novel metric called MetrIntPair (Metric for Pairwise Intelligence Comparison of Agent-Based Systems) for comparing the problem-solving intelligence of two cooperative multiagent systems. MetrIntPair is able to make an accurate comparison by taking into consideration the variability of intelligence in problem-solving. The metric can handle outlier intelligence indicators, i.e., intelligence measures that are statistically different from the others. To evaluate the proposed metric, we conducted a case study on two cooperative multiagent systems applied to solving a class of NP-hard problems. The results of the case study showed that the small difference in the measured intelligence of the multiagent systems is a consequence of variability: there is no statistical difference between the intelligence quotients/levels of the multiagent systems, and both multiagent systems should be classified in the same intelligence class.
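The sketch below is not the MetrIntPair metric itself, only a generic stand-in for the kind of comparison the abstract describes: trim outlier intelligence indicators with an interquartile-range rule, then run Welch's t-test on the remaining samples to ask whether a small mean difference is statistically meaningful. The numbers are invented and the outlier rule is an assumption.

```python
import numpy as np
from scipy import stats

# Invented intelligence indicators measured over repeated problem-solving runs.
sys_a = np.array([71, 73, 72, 74, 70, 95, 72])   # 95 is an outlier run
sys_b = np.array([72, 74, 73, 75, 71, 73, 74])

def trim_outliers(x):
    """Drop indicators outside 1.5 * IQR -- a generic outlier rule, not the paper's."""
    q1, q3 = np.percentile(x, [25, 75])
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    return x[(x >= lo) & (x <= hi)]

a, b = trim_outliers(sys_a), trim_outliers(sys_b)
t, p = stats.ttest_ind(a, b, equal_var=False)     # Welch's t-test on the trimmed samples
print(round(p, 3), "same intelligence class" if p > 0.05 else "different")
```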

10.
Software agents' ability to interact within different open systems, designed by different groups, presupposes an agreement on an unambiguous definition of a set of concepts used to describe the context of the interaction and the communication language the agents can use. Agents' interactions ought to allow for reliable expectations about the possible evolution of the system; however, in open systems interacting agents may not conform to predefined specifications. A possible solution is to define interaction environments that include a normative component, with suitable rules to regulate the behaviour of agents. To tackle this problem we propose an application-independent metamodel of artificial institutions that can be used to define open multiagent systems. In our view an artificial institution is made up of an ontology that models the social context of the interaction, a set of authorizations to act on the institutional context, a set of linguistic conventions for the performance of institutional actions, and a system of norms that are necessary to constrain the agents' actions.
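A bare-bones, hypothetical rendering of the four components the abstract lists (ontology, authorizations, linguistic conventions, norms) as a Python data structure; the field names and the auction example are assumptions, not the authors' metamodel.

```python
from dataclasses import dataclass, field

@dataclass
class ArtificialInstitution:
    """The components named in the abstract: an ontology of the social context,
    authorizations to act on it, conventions mapping messages to institutional
    actions, and norms constraining agent behaviour."""
    ontology: set = field(default_factory=set)          # concepts of the social context
    authorizations: dict = field(default_factory=dict)  # role -> institutional actions it may perform
    conventions: dict = field(default_factory=dict)     # message form -> institutional action
    norms: list = field(default_factory=list)           # obligations/prohibitions on roles

auction = ArtificialInstitution(
    ontology={"auctioneer", "bidder", "lot", "current_price"},
    authorizations={"auctioneer": {"open_auction", "declare_winner"},
                    "bidder": {"place_bid"}},
    conventions={"inform(bid, x)": "place_bid(x)"},
    norms=["a bidder is obliged to pay if declared winner"],
)
print("place_bid" in auction.authorizations["bidder"])   # True
```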

11.
The notion of environment is receiving increasing attention in the development of multiagent applications. This is witnessed by the emergence of a number of infrastructures providing agent designers with useful means to develop the agent environment, and thus to structure an effective multiagent application. In this paper we analyse the role and features of such infrastructures, and survey some relevant examples. We endorse a general viewpoint where the environment of a multiagent system is seen as a set of basic building blocks we call environment abstractions, which (i) provide agents with services useful for achieving individual and social goals, and (ii) are supported by some underlying software infrastructure managing their creation and exploitation. Accordingly, we focus the survey on the opportunities that environment infrastructures offer to system designers when developing multiagent applications.

12.
Norms (permissions, obligations and prohibitions) offer a useful and powerful abstraction with which to capture social constraints in multi-agent systems. Norms should exclude disruptive or antisocial behaviour without prescribing the design of individual agents or restricting their autonomy. An important challenge, however, in the design and management of systems governed by norms is that norms may, at times, conflict with one another; e.g., an action may be simultaneously prohibited and obliged for a particular agent. In such circumstances, agents no longer have the option of complying with these norms; whatever they do or refrain from doing will lead to a social constraint being broken. In this paper, we present mechanisms for the detection and resolution of normative conflicts. These mechanisms, based on first-order unification and constraint-solving techniques, are the building blocks of more sophisticated algorithms we present for the management of normative positions, that is, the adoption and removal of permissions, obligations and prohibitions in societies of agents. We capture both direct and indirect conflicts between norms, formalise a practical concept of authority, and model conflicts that may arise as a result of delegation. We are also able to formally define classic ways of resolving conflicts such as lex superior and lex posterior.
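A toy illustration, not the paper's algorithms: detect a direct conflict between an obligation and a prohibition whose action atoms unify, and resolve it by lex superior (the norm from the higher-ranked authority prevails). The unification below handles only flat terms with uppercase variable names, and the ranks and norms are invented.

```python
def unify(a, b, subst=None):
    """First-order unification restricted to flat terms like ('deliver', 'X', 'depot_1')."""
    subst = dict(subst or {})
    if a[0] != b[0] or len(a) != len(b):
        return None
    for x, y in zip(a[1:], b[1:]):
        x, y = subst.get(x, x), subst.get(y, y)
        if x == y:
            continue
        if x.isupper():            # convention: uppercase names are variables
            subst[x] = y
        elif y.isupper():
            subst[y] = x
        else:
            return None
    return subst

RANK = {"site_manager": 1, "head_office": 2}          # hypothetical authority ranks

obligation  = (("deliver", "X", "depot_1"), "site_manager")
prohibition = (("deliver", "truck_7", "Y"), "head_office")

theta = unify(obligation[0], prohibition[0])
if theta is not None:                                  # direct conflict detected
    winner = max((obligation, prohibition), key=lambda n: RANK[n[1]])
    print("conflict under", theta, "-> lex superior keeps the", winner[1], "norm")
```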

13.
We propose a diagnosis procedure that agents can use to explain exceptions to contract executions. Contracts are expressed as social commitments associated with temporal constraints. The procedure reasons about the relations among such commitments, and returns one of several possible mismatches that may have caused an exception. In particular, we consider two possibilities: misalignment, when two agents have different views of the same commitment, and misbehavior, when there is no misalignment but the debtor agent simply fails to comply. We also provide a realignment policy that can be applied in case of misalignment. Our formalization uses a reactive form of the Event Calculus. We illustrate the workings of our approach by discussing a delivery process from e-commerce as a case study.
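A hedged, toy reconstruction of the two diagnoses named above: compare the debtor's and the creditor's local views of the same commitment; if the views disagree, report a misalignment, otherwise, if the commitment is violated, report a misbehavior. The commitment representation and the delivery example are invented, not the paper's Event Calculus formalization.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Commitment:
    debtor: str
    creditor: str
    condition: str
    deadline: int          # simplified temporal constraint (a time step)

def diagnose(creditor_view, debtor_view, fulfilled, now):
    """Explain an exception: misalignment if the two views differ,
    misbehavior if the views agree but the debtor let the deadline pass."""
    if creditor_view != debtor_view:
        return "misalignment"
    if not fulfilled and now > creditor_view.deadline:
        return "misbehavior"
    return "no exception"

# E-commerce delivery example: the buyer believes delivery was promised by day 5,
# the seller believes day 10 -- two views of the same commitment are out of sync.
buyer_view  = Commitment("seller", "buyer", "deliver(book)", deadline=5)
seller_view = Commitment("seller", "buyer", "deliver(book)", deadline=10)
print(diagnose(buyer_view, seller_view, fulfilled=False, now=7))   # misalignment
```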

14.
We are developing an agent and server library referred to as X-Economy, with which we can execute multiagent simulations and network games for financial and economic systems. To this end, we analyzed the characteristics of network games in a financial context and compared them with traditional ones. X-Economy has also provided a new research direction in market microstructure analysis. We executed several kinds of multiagent simulations of technical traders (indices) and obtained non-trivial suggestions regarding the relationship between market randomness and the effectiveness of technical indices. For instance, the performance of complex technical indices seemed to depend strongly on the characteristics and nature of a market once the market became complex, i.e., once it moved far from a Wiener process.
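To make the randomness/effectiveness relationship concrete in a hedged way: on a pure random-walk (discretized Wiener-like) price series, a moving-average crossover rule should show no systematic edge. The sketch below simulates such a series and measures the rule's mean per-step return; the parameters are arbitrary and this is not X-Economy code.

```python
import random

random.seed(0)
prices = [100.0]
for _ in range(2000):                       # discretized Wiener-like random walk
    prices.append(prices[-1] + random.gauss(0, 1))

def sma(xs, n, t):
    return sum(xs[t - n:t]) / n             # simple moving average over the last n steps

# Moving-average crossover: long when the short average is above the long one, else short.
pnl = 0.0
for t in range(50, len(prices) - 1):
    position = 1 if sma(prices, 5, t) > sma(prices, 50, t) else -1
    pnl += position * (prices[t + 1] - prices[t])

print(round(pnl / (len(prices) - 51), 4))   # mean per-step return, expected to hover near zero
```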

15.
16.
Agents' flexibility and autonomy, as well as their capacity to coordinate and cooperate, are some of the features that make multiagent systems useful for working in dynamic and distributed environments. These key features are directly related to the way in which agents communicate with and perceive each other, as well as their environment and surrounding conditions. Traditionally, this has been accomplished by means of message exchange or by using blackboard systems. These traditional methods have the advantages of being easy to implement and well supported by multiagent platforms; however, their main disadvantage is that the amount of social knowledge in the system depends directly on every agent actively reporting what it is doing, thinking, perceiving, and so on. There are domains, for example those where social knowledge depends on highly distributed pieces of data provided by many different agents, in which such traditional methods can produce a great deal of overhead, hence reducing the scalability, efficiency and flexibility of the multiagent system. This work proposes the use of event tracing in multiagent systems as an indirect interaction and coordination mechanism to improve the amount and quality of the information that agents can perceive from both their physical and social environment, in order to fulfill their goals more efficiently. To this end, this work presents an abstract model of a tracing system and an architectural design of this model, which can be incorporated into a typical multiagent platform.
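To make the tracing idea concrete: below is a hypothetical, minimal trace manager to which agents publish trace events and with which other agents register filtered subscriptions, so that social knowledge is gathered indirectly rather than through explicit messaging. The API shown is an illustrative assumption, not the paper's architecture.

```python
class TraceManager:
    """Minimal indirect-coordination service: agents emit trace events, and
    interested agents subscribe with a filter and receive matching events."""
    def __init__(self):
        self.subscriptions = []            # (subscriber, predicate, callback)

    def subscribe(self, subscriber, predicate, callback):
        self.subscriptions.append((subscriber, predicate, callback))

    def emit(self, origin, event_type, data):
        event = {"origin": origin, "type": event_type, "data": data}
        for subscriber, predicate, callback in self.subscriptions:
            if subscriber != origin and predicate(event):
                callback(event)

tm = TraceManager()
tm.subscribe("monitor",
             lambda e: e["type"] == "task_done",
             lambda e: print("monitor observed:", e["origin"], e["data"]))
tm.emit("worker_1", "task_done", {"task": "t17"})    # monitor observed: worker_1 {'task': 't17'}
tm.emit("worker_1", "heartbeat", {})                 # filtered out, no subscriber notified
```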

17.
This paper describes some of the basic cooperative mechanisms of dialogue. Ideal cooperation is seen as consisting of four features (cognitive consideration, joint purpose, ethical consideration and trust), which can also to some extent be seen as requirements building on each other. Weaker concepts such as “coordination” and “collaboration” have only some of these features or have them to lesser degrees. We point out the central role of ethics and trust in cooperation, and contrast the result with popular AI accounts of collaboration. Dialogue is also seen as associated with social activities, in which certain obligations and rights are connected with particular roles. Dialogue is seen to progress through the written, vocal or gestural contributions made by participants. Each of the contributions has associated with it both expressive and evocative functions, as well as specific obligations for participants. These functions are dependent on the surface form of a contribution, the activity and the local context, for their interpretation. We illustrate the perspective by analysing dialogue extracts from three different activity types (a travel dialogue, a quarrel and a dialogue with a computer system). Finally, we consider what kind of information is shared in dialogue, and the ways in which dialogue participants manifest this sharing to each other through linguistic and other communicative behaviour. The paper concludes with a comparison to other accounts of dialogue and prospects for integration of these ideas within dialogue systems.

18.

Carrying out distributed business processes over networks is rapidly shifting the nature of application architectures from the simple command-and-control client-server model to complex peer-to-peer models supporting dynamic patterns of social interaction and behavior among autonomous, proactive, goal-oriented agents. Trusting agents to autonomously make decisions and execute actions on behalf of humans, as part of global business processes, requires both an understanding and modeling of the social laws that govern collective behavior and a practically useful operationalization of the models into agent programming tools. In this article we present a solution to these problems based on a representation of obliged and forbidden behavior in an organizational framework, together with an inference method that also decides which obligations to break in conflicting situations. These are integrated into an operational, practically useful agent development language that covers the spectrum from the definition of organizations, roles, agents, obligations, goals, and conversations to inferring and executing coordinated agent behaviors in multiagent applications. The major strength of the approach is the way it supports coordination by exchanging constraints about obliged and forbidden behavior among agents. We illustrate this, and the entire system, with example solutions to the feature interaction problem in the telecommunications industry and to integrated supply chain management.

19.

Teamwork is becoming increasingly critical in multiagent environments ranging from virtual environments for training and education, to information integration on the Internet, to potential multirobotic space missions. Teamwork in such complex, dynamic environments is more than a simple union of simultaneous individual activity, even if supplemented with preplanned coordination. Indeed, in these dynamic environments, unanticipated events can easily cause a breakdown in such preplanned coordination. The central hypothesis in this article is that for effective teamwork, agents should be provided with an explicit representation of team goals and plans, as well as an explicit representation of a model of teamwork to support the execution of team plans. In our work, this model of teamwork takes the form of a set of domain-independent rules that clearly outline an agent's commitments and responsibilities as a participant in team activities, and thus guide the agent's social activities while executing team plans. This article describes two implementations of agent teams based on the above principles, one for a real-world helicopter combat simulation, and one for the RoboCup soccer simulation. The article also provides a preliminary comparison of the two agent teams to illustrate some of the strengths and weaknesses of RoboCup as a common testbed for multiagent systems.
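One domain-independent teamwork rule of the kind described above, close to the classic joint-intentions idea, is: an agent that privately discovers a team goal to be achieved (or unachievable) must inform its teammates before dropping its own commitment. A toy, hypothetical rendering in Python; the rule wording and agent names are assumptions.

```python
class TeamMember:
    def __init__(self, name, team):
        self.name, self.team = name, team
        self.committed = True
        self.inbox = []

    def private_discovery(self, finding):
        """Domain-independent rule: do not silently drop a team commitment --
        first make the discovery mutually known by informing all teammates."""
        for mate in self.team:
            if mate is not self:
                mate.inbox.append((self.name, finding))
        self.committed = False

team = []
a, b = TeamMember("pilot_1", team), TeamMember("pilot_2", team)
team.extend([a, b])

a.private_discovery("target_destroyed")        # goal achieved: commitment can now be dropped
print(b.inbox, a.committed, b.committed)       # [('pilot_1', 'target_destroyed')] False True
```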

20.
Interaction protocols are specific, often standard, constraints on the behaviors of autonomous agents in a multiagent system. Protocols are essential to the functioning of open systems, such as those that arise in most interesting web applications. A variety of common protocols in negotiation and electronic commerce are best treated as commitment protocols, which are defined, or at least analyzed, in terms of the creation, satisfaction, or manipulation of the commitments among the participating agents. When protocols are employed in open environments, such as the Internet, they must be executed by agents that behave more or less autonomously and whose internal designs are not known. In such settings, therefore, there is a risk that the participating agents may fail to comply with the given protocol. Without a rigorous means to verify compliance, the very idea of protocols for interoperation is subverted. We develop an approach for testing whether the behavior of an agent complies with a commitment protocol. Our approach requires the specification of commitment protocols in temporal logic, and involves a novel way of synthesizing and applying ideas from distributed computing and logics of programs.
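A toy compliance check in the spirit described above, not the paper's temporal-logic machinery: replay an observed run of protocol events, track which commitments are created and discharged, and flag any commitment still pending at the end as a violation. The protocol events and participants are invented.

```python
def check_compliance(run):
    """Replay a run of (event, debtor, creditor, condition) tuples and report
    commitments that were created but never discharged."""
    pending = set()
    for event, debtor, creditor, condition in run:
        if event == "create":
            pending.add((debtor, creditor, condition))
        elif event == "discharge":
            pending.discard((debtor, creditor, condition))
    return pending                     # non-empty means the run violates the protocol

run = [
    ("create",    "merchant", "customer", "deliver(goods)"),   # merchant commits upon payment
    ("create",    "customer", "merchant", "pay(price)"),
    ("discharge", "customer", "merchant", "pay(price)"),       # the customer paid
]
violations = check_compliance(run)
print(violations)    # {('merchant', 'customer', 'deliver(goods)')}: the merchant did not comply
```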
