20 similar documents found; search time: 15 ms
1.
An agent is a software program capable of taking independent action on behalf of its user or owner: an entity with goals, actions, and domain knowledge, situated in an environment. A multiagent system comprises multiple autonomous, interacting programs, or agents. Such systems can successfully emulate the entities active in a distributed environment. This paper analyzes multiagent behavior in a board game problem similar to the famous game of Go. A framework is developed to define the states of the multiagent entities and to measure convergence metrics for this problem, and the changes of state leading to the goal state are analyzed. We support our study of multiagent behavior with simulations based on a CORBA framework in order to substantiate our findings.
2.
A multiagent framework for coordinated parallel problem solving  Cited by: 1 (self-citations: 1, others: 0)
Today’s organizations, under increasing pressure to be effective and facing complex tasks beyond any single individual’s capabilities, need technological support for managing tasks that involve highly distributed, heterogeneous information sources and several actors. This paper describes CoPSF, a multiagent middleware that simplifies the development of coordinated problem-solving applications while ensuring standards compliance through a set of system services and agents. CoPSF hosts and serves multiple concurrent problem-solving teams, contributing both to limiting communication overheads and to reducing redundant work across teams and organizations. The framework employs (i) an interleaved task decomposition and allocation approach, (ii) a mechanism for coordinating agents’ work, and (iii) a mechanism that enables synergy between parallel teams.
3.
Multiagent systems constitute an independent topic at the intersection of distributed computing and artificial intelligence. As the algorithmic techniques and applications for multiagent systems have matured continuously over the last two decades, many methodological problems have been addressed. In this paper, we aim to contribute to this methodological assessment of multiagent systems by considering the problem of choosing, or recruiting, a subset of agents from a set of available agents to satisfy a given request. This problem, which we call the recruitment problem, is encountered, for example, in matchmaking and in task allocation. We present and study a novel formal approach to the recruitment problem based on the algebraic formalism of lattices. The resulting formal framework can support the development of algorithms for automatic recruitment.
4.
Jaume Jordán Stella Heras Soledad Valero Vicente Julián 《Computational Intelligence》2015,31(3):418-441
Multiagent systems provide a suitable framework for agents to perform collaborative processes in a social context, and argumentation is a natural way of reaching agreements between several parties. However, few argumentation infrastructures offer support for agent societies and their social context, even though such support allows the representation of more realistic environments for argumentation dialogues. We propose an infrastructure for developing and executing argumentative agents in an open multiagent system. It offers tools for building agents with argumentation capabilities, supports agent societies and their social context, and is publicly available. The infrastructure has been applied in a scenario where argumentative agents try to reach an agreement about the best solution to a problem reported to the system.
5.
Taha Khedro 《Advances in Engineering Software》1996,25(2-3):243-252
A framework for collaborative facility engineering is presented. The framework is based on a distributed problem-solving approach to collaborative facility engineering and employs an integration approach called Agent-Based Software Engineering as its implementation vehicle. The focal entity of this framework is the Multiagent Design Team (MDT), which comprises a collection of software agents (e.g., design software applications with a standard communication interface) and a design specialist, which together perform specific design tasks. Multiagent design teams are autonomous and form an organizational structure based on a federation architecture. Every multiagent design team surrenders its autonomy to a system program called a facilitator, which coordinates the interaction among software agents in the federation architecture. Facilitators can be viewed as representatives of one or more teams that facilitate the exchange of design information and knowledge in support of the design tasks they perform. In the federation architecture, design specialists collaborate by exchanging design information with others via their software agents, and by identifying and resolving design conflicts through negotiation. In addition to a discussion of the framework’s primary components, its realization in an integrated distributed environment for collaborative building engineering is described.
6.
An ontology for commitments in multiagent systems  Cited by: 2 (self-citations: 0, others: 2)
Munindar P. Singh 《Artificial Intelligence and Law》1999,7(1):97-113
Social commitments have long been recognized as an important concept for multiagent systems. We propose a rich formulation of social commitments that motivates an architecture for multiagent systems, which we dub spheres of commitment. We identify the key operations on commitments and multiagent systems. We distinguish between explicit and implicit commitments. Multiagent systems, viewed as spheres of commitment (SoComs), provide the context for the different operations on commitments. Armed with the above ideas, we can capture normative concepts such as obligations, taboos, conventions, and pledges as different kinds of commitments. In this manner, we synthesize ideas from multiagent systems, particularly the idea of social context, with ideas from ethics and legal reasoning, specifically that of directed obligations in the Hohfeldian tradition.
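The abstract mentions “key operations on commitments” without enumerating them. Singh’s formulation is usually associated with six operations (create, discharge, cancel, release, delegate, assign); a minimal illustrative sketch in Python, with hypothetical agent names and conditions, might look like:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Commitment:
    """A directed social commitment: the debtor owes the creditor the condition."""
    debtor: str
    creditor: str
    condition: str
    active: bool = True

def discharge(c):              # debtor fulfils the condition
    return replace(c, active=False)

def cancel(c):                 # debtor revokes (may violate the sphere's norms)
    return replace(c, active=False)

def release(c):                # creditor frees the debtor
    return replace(c, active=False)

def delegate(c, new_debtor):   # debtor passes the commitment on
    return replace(c, debtor=new_debtor)

def assign(c, new_creditor):   # creditor transfers the benefit
    return replace(c, creditor=new_creditor)

c = Commitment("agent_a", "agent_b", "deliver report")  # create
d = delegate(c, "agent_c")
```

This is only a data-level sketch; the paper's spheres of commitment additionally scope these operations to a social context.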
7.
Karen Yorav 《International Journal on Software Tools for Technology Transfer (STTT)》2009,11(4):269-272
This special section contains a selection of contributions originally presented at the Third Haifa Verification Conference
(HVC’07). The scope of this conference covers all types of verification of both hardware and software systems. While there
is widespread agreement on the importance of verification, it is clear that different systems require different approaches.
Several distinct fields of research have developed, devoted to either software or hardware, or to a particular verification
approach such as formal or testing/simulation. Each of these paradigms has an extensive publication history and its own dedicated
conference. Yet there is much to be gained from sharing knowledge and experience. HVC’s goal is to serve as a venue for researchers
from all fields of verification, enabling them to exchange ideas and learn from one another. It is our hope that by gathering
these experts together in one conference, we are fostering the emergence of new trends that combine ideas and insights from
different domains.
8.
Stock trading is a key activity in an economy, and estimating its behavior and making the best decisions in it are among the most challenging problems. Solutions based on intelligent agents have been proposed to cope with these challenges. Agents in a multiagent system (MAS) can share a common goal or pursue their own interests, a nature that exactly fits the requirements of a free-market economy. Although existing studies include noteworthy proposals on agent-based market simulation, and researchers discuss theoretical design issues of agent-based stock exchange systems, very few studies consider the actual development and implementation of multiagent stock trading systems from a software engineering perspective, or guide software engineers in constructing such systems from scratch. To fill this gap, this paper discusses the development of a multiagent-based stock trading system, designed according to a well-defined agent-oriented software engineering methodology and implemented with a widely used MAS software development framework. Each participant in the system is first designed as a belief–desire–intention (BDI) agent with its facts, goals, and plans, and then the BDI reasoning and behavioral structure of the designed agents are implemented. Lessons learned during design and development, and an evaluation of the implemented multiagent stock exchange system, are also reported. Copyright © 2011 John Wiley & Sons, Ltd.
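The abstract does not show what a belief–desire–intention design looks like in code. As a hedged illustration only (the stock-domain beliefs, goals, and plans below are invented for the example, not taken from the paper), a minimal BDI deliberation step can be sketched as:

```python
class BDIAgent:
    """Minimal BDI sketch: adopt as intention the first applicable plan
    for the highest-priority goal, given current beliefs."""

    def __init__(self, beliefs, goals, plans):
        self.beliefs = beliefs  # set of facts the agent currently holds
        self.goals = goals      # desires, ordered highest priority first
        self.plans = plans      # goal -> list of (precondition set, action)

    def deliberate(self):
        for goal in self.goals:
            for precondition, action in self.plans.get(goal, []):
                if precondition <= self.beliefs:  # plan is applicable
                    return action                 # adopted intention
        return None

agent = BDIAgent(
    beliefs={"price_falling", "owns_stock"},
    goals=["avoid_loss", "grow_portfolio"],
    plans={
        "avoid_loss": [({"price_falling", "owns_stock"}, "sell")],
        "grow_portfolio": [({"price_rising"}, "buy")],
    },
)
```

A full BDI implementation would additionally revise beliefs from percepts and reconsider intentions over time; this sketch shows only one deliberation cycle.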
9.
Logan Yliniemi Kagan Tumer 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2016,20(10):3869-3887
Multiagent systems have had a powerful impact on the real world. Many of the systems the field studies (air traffic, satellite coordination, rover exploration) are inherently multi-objective, but are often treated as single-objective problems in the research. A key concept in multiagent systems is credit assignment: quantifying an individual agent’s impact on overall system performance. In this work, we extend the concept of credit assignment to multi-objective problems. We apply credit assignment through difference evaluations to two different policy selection paradigms to demonstrate their broad applicability. We first examine reinforcement learning, in which using difference evaluations improves performance by (i) increasing learning speed by up to 10×, (ii) producing solutions that dominate all solutions discovered by a traditional team-based credit assignment schema, and (iii) losing only 0.61% of dominated hypervolume in a scenario where 20% of agents act in their own interests instead of the system’s interests (compared with a 43% loss when using a traditional global reward in the same scenario). We then derive multiple methods for incorporating difference evaluations into a state-of-the-art multi-objective evolutionary algorithm, NSGA-II. The median performance of NSGA-II with credit assignment dominates the best-case performance of NSGA-II without it in a multiagent multi-objective problem. Our results strongly suggest that in a multiagent multi-objective problem, proper credit assignment is at least as important to performance as the choice of multi-objective algorithm.
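Difference evaluations are only named in the abstract; in the credit-assignment literature they are commonly defined as D_i(z) = G(z) − G(z with agent i’s action replaced by a counterfactual c_i), i.e., an agent’s marginal contribution to the global evaluation G. A small sketch with an invented global objective (not the paper’s domains):

```python
def global_eval(joint):
    """Hypothetical system objective G(z) over a joint action."""
    total = sum(joint.values())
    return total - 0.1 * sum(v * v for v in joint.values())

def difference_eval(joint, agent, counterfactual=0.0):
    """D_i(z) = G(z) - G(z_{-i} + c_i): agent i's contribution relative to
    a fixed counterfactual action c_i (here, 'do nothing' = 0.0)."""
    without = dict(joint)
    without[agent] = counterfactual
    return global_eval(joint) - global_eval(without)

joint = {"a1": 2.0, "a2": 0.0, "a3": 1.0}
# An agent whose action equals the counterfactual contributes nothing,
# so its difference evaluation is zero; useful contributions score positive.
```

The appeal is that D_i is far more sensitive to agent i’s own action than the global reward G is, which is what drives the learning-speed gains the abstract reports.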
10.
Multiagent cooperative negotiation is a promising technique for modeling and controlling complex systems. Effective and flexible cooperative negotiations are especially useful for open complex systems characterized by high decentralization (which implies a low amount of exchanged information) and by the dynamic connection and disconnection of agents. Applications include ad hoc network management, vehicle formation, and physiological model combination. To obtain an effective control action, the stability of the negotiation, namely the guarantee that an agreement will eventually be reached, is of paramount importance. However, the techniques usually employed for assessing the stability of a negotiation can hardly be applied in open scenarios. In this paper, whose nature is mainly theoretical, we make a first attempt towards engineering stable cooperative negotiations by proposing a framework for their analysis and design. Specifically, we present a formal protocol for cooperative negotiations between a number of agents, and we propose a criterion for negotiation stability based on the concept of connective stability. This is a form of stability that accounts for the effects of structural changes on the composition of a system and appears very suitable for multiagent cooperative negotiations. To show its possible uses, we apply our framework for connective stability to negotiations taken from the literature.
11.
Jamie Cullen 《Minds and Machines》2009,19(2):237-254
Turing’s Imitation Game is often viewed as a test for theorised machines that could ‘think’ and/or demonstrate ‘intelligence’.
However, contrary to Turing’s apparent intent, it can be shown that Turing’s Test is essentially a test for humans only. Such
a test does not provide for theorised artificial intellects with human-like, but not human-exact, intellectual capabilities.
As an attempt to bypass this limitation, I explore the notion of shifting the goal posts of the Turing Test, and related tests
such as the Total Turing Test, away from the exact imitation of human capabilities, and towards communication with humans instead. While the continued philosophical relevance of such tests is open to debate, the outcome is a different
class of tests which are, unlike the Turing Test, immune to failure by means of sub-cognitive questioning techniques. I suggest
that attempting to instantiate such tests could potentially be more scientifically and pragmatically relevant to some Artificial Intelligence researchers than instantiating a Turing Test, due to the focus on producing a variety of goal-directed outcomes
through communicative methods, as opposed to the Turing Test’s emphasis on ‘fooling’ an Examiner.
12.
Coalition formation is a central problem in multiagent systems research, but most models assume common knowledge of agent
types. In practice, however, agents are often unsure of the types or capabilities of their potential partners, but gain information about these capabilities through repeated interaction. In this paper, we
propose a novel Bayesian, model-based reinforcement learning framework for this problem, assuming that coalitions are formed
(and tasks undertaken) repeatedly. Our model allows agents to refine their beliefs about the types of others as they interact
within a coalition. The model also allows agents to make explicit tradeoffs between exploration (forming “new” coalitions
to learn more about the types of new potential partners) and exploitation (relying on partners about which more is known),
using value of information to define optimal exploration policies. Our framework effectively integrates decision making during
repeated coalition formation under type uncertainty with Bayesian reinforcement learning techniques. Specifically, we present
several learning algorithms to approximate the optimal Bayesian solution to the repeated coalition formation and type-learning
problem, providing tractable means to ensure good sequential performance. We evaluate our algorithms in a variety of settings,
showing that one method in particular exhibits consistently good performance in practice. We also demonstrate the ability
of our model to facilitate knowledge transfer across different dynamic tasks.
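The abstract does not spell out the belief-refinement step. A generic Bayesian update over a partner’s discrete types, with hypothetical type labels and outcome likelihoods (not the paper’s model), could look like:

```python
def update_type_belief(prior, likelihoods, observation):
    """Bayes update of a belief over a partner's discrete types after
    observing a coalition task outcome:
        P(type | obs) ∝ P(obs | type) * P(type)."""
    unnormalized = {t: p * likelihoods[t][observation] for t, p in prior.items()}
    z = sum(unnormalized.values())
    return {t: p / z for t, p in unnormalized.items()}

# Hypothetical two-type model: a "capable" partner succeeds more often.
prior = {"capable": 0.5, "incapable": 0.5}
likelihoods = {
    "capable":   {"success": 0.9, "failure": 0.1},
    "incapable": {"success": 0.2, "failure": 0.8},
}
belief = update_type_belief(prior, likelihoods, "success")
```

The exploration/exploitation tradeoff in the paper then amounts to weighing the value of sharpening such beliefs (forming new coalitions) against exploiting partners whose posterior is already concentrated.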
13.
Jose Manuel Lopez‐Guede Borja Fernandez‐Gauna Manuel Graña Ekaitz Zulueta 《Computational Intelligence》2015,31(3):498-512
Multiagent systems are increasingly present in computational environments. However, the problem of agent design or control is an open research field. Reinforcement learning approaches offer solutions that allow autonomous learning with minimal supervision. The Q-learning algorithm is a model-free reinforcement learning solution that has proven its usefulness in single-agent domains; however, it suffers from the curse of dimensionality when applied to multiagent systems. In this article, we discuss two approaches, namely TRQ-learning and distributed Q-learning, that overcome the limitations of Q-learning and offer feasible solutions. We test these approaches in two separate domains: the control of a hose by a team of robots, and the trash disposal problem. Computational results show the effectiveness of Q-learning solutions for multiagent system control.
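For readers unfamiliar with the algorithm the article builds on, the standard single-agent tabular Q-learning update (not the TRQ or distributed variants discussed in the article) is:

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One model-free Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)          # unseen (state, action) pairs default to 0
actions = ["left", "right"]
q_update(Q, "s0", "right", 1.0, "s1", actions)
```

The curse of dimensionality the abstract mentions arises because in a multiagent setting the joint state-action table grows exponentially with the number of agents, which is what the two discussed approaches are designed to avoid.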
14.
Multiagent Systems: A Survey from a Machine Learning Perspective  Cited by: 27 (self-citations: 0, others: 27)
Distributed Artificial Intelligence (DAI) has existed as a subfield of AI for less than two decades. DAI is concerned with systems that consist of multiple independent entities that interact in a domain. Traditionally, DAI has been divided into two sub-disciplines: Distributed Problem Solving (DPS) focuses on the information management aspects of systems with several components working together towards a common goal; Multiagent Systems (MAS) deals with behavior management in collections of several independent entities, or agents. This survey of MAS is intended to serve as an introduction to the field and as an organizational framework. A series of general multiagent scenarios are presented. For each scenario, the issues that arise are described along with a sampling of the techniques that exist to deal with them. The presented techniques are not exhaustive, but they highlight how multiagent systems can be and have been used to build complex systems. When options exist, the techniques presented are biased towards machine learning approaches. Additional opportunities for applying machine learning to MAS are highlighted and robotic soccer is presented as an appropriate test bed for MAS. This survey does not focus exclusively on robotic systems. However, we believe that much of the prior research in non-robotic MAS is relevant to robotic MAS, and we explicitly discuss several robotic MAS, including all of those presented in this issue.
15.
Anastasia Pagnoni 《Natural computing》2011,10(2):711-725
The paper introduces error-correcting Petri nets, an algebraic methodology for designing synthetic biological systems with monitoring capabilities. Linear error-correcting codes are used to extend the net’s structure in a way that allows for the algebraic detection and correction of non-reachable net markings. The presented methodology is based on modulo-p Hamming codes, which are optimal for the modulo-p correction of single errors, but also works with any other linear error-correcting code.
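As a simplified illustration of the underlying coding idea (binary rather than the modulo-p codes the paper uses, and applied to a plain bit vector rather than a net marking), syndrome decoding with the Hamming(7,4) code detects and corrects any single flipped bit:

```python
import numpy as np

# Parity-check matrix of the binary Hamming(7,4) code: column j is the
# binary representation of j+1, so a nonzero syndrome names the error position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def correct_single_error(word):
    """Return a copy of `word` with any single bit-flip corrected."""
    syndrome = H.dot(word) % 2
    pos = int("".join(str(b) for b in syndrome), 2)  # 0 means "no error detected"
    word = word.copy()
    if pos:
        word[pos - 1] ^= 1
    return word

codeword = np.array([1, 1, 0, 1, 0, 0, 1])  # satisfies H @ codeword % 2 == 0
corrupted = codeword.copy()
corrupted[4] ^= 1                           # corrupt one position
recovered = correct_single_error(corrupted)
```

In the paper’s setting the analogous computation is performed on the extended net’s marking vector, flagging markings that are not reachable in the original net.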
16.
《Engineering Applications of Artificial Intelligence》2005,18(2):191-204
Nowadays, with the expansion of the Internet, there is a need for methodologies and software tools to ease the development of applications in which distributed homogeneous entities can participate. Multiagent systems, and electronic institutions in particular, can play a main role in the development of this type of system. Electronic institutions define the rules of the game in agent societies, fixing what agents are permitted and forbidden to do and under what circumstances. The goal of this paper is to present EIDE, an integrated development environment supporting the engineering of multiagent systems as electronic institutions.
17.
Gianluigi Greco 《Annals of Mathematics and Artificial Intelligence》2007,50(1-2):143-194
An extension of abduction is investigated where explanations are jointly computed by sets of interacting agents. On the one
hand, agents are allowed to partially contribute to the reasoning task, so that joint explanations can be singled out even if each agent does not have enough knowledge for carrying out abduction on its own. On the other
hand, agents maintain their autonomy in choosing explanations, each one being equipped with a weighting function reflecting
its perception about the reliability of sets of hypotheses. Given that different agents may have different and possibly contrasting
preferences on the hypotheses to be chosen, some reasonable notions of agents’ agreement are introduced, and their computational
properties are thoroughly studied. As an example application of the framework discussed in the paper, it is shown how to handle
data management issues in Peer-to-Peer systems and, specifically, how to provide a repair-based semantics to inconsistent
ones.
18.
For agents to collaborate in open multi-agent systems, each agent must trust in the other agents’ ability to complete tasks
and willingness to cooperate. Agents need to decide between cooperative and opportunistic behavior based on their assessment
of another agent’s trustworthiness. In particular, an agent can have two beliefs about a potential partner that tend to indicate
trustworthiness: that the partner is competent and that the partner expects to engage in future interactions. This paper explores an approach that models competence as an agent’s probability of successfully performing an action, and
models belief in future interactions as a discount factor. We evaluate the underlying decision framework’s performance given
accurate knowledge of the model’s parameters in an evolutionary game setting. We then introduce a game-theoretic framework
in which an agent can learn a model of another agent online, using the Harsanyi transformation. The learning agents evaluate
a set of competing hypotheses about another agent during the simulated play of an indefinitely repeated game. The Harsanyi
strategy is shown to demonstrate robust and successful online play against a variety of static, classic, and learning strategies
in a variable-payoff Iterated Prisoner’s Dilemma setting.
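The abstract’s modeling of belief in future interactions as a discount factor echoes the standard repeated-game condition under which cooperation is rational. As a generic sketch (the classic fixed-payoff condition, not the paper’s variable-payoff framework), cooperation against a grim-trigger partner in the Iterated Prisoner’s Dilemma is sustainable exactly when the discount factor is high enough:

```python
def cooperation_sustainable(T, R, P, delta):
    """In an indefinitely repeated Prisoner's Dilemma with discount factor
    (continuation probability) delta, cooperating against a grim-trigger
    partner beats defecting once iff
        R / (1 - delta) >= T + delta * P / (1 - delta),
    which rearranges to delta >= (T - R) / (T - P)."""
    return delta >= (T - R) / (T - P)

# Classic payoffs: T=5 (temptation), R=3 (mutual cooperation),
# P=1 (mutual defection); the threshold here is (5-3)/(5-1) = 0.5.
```

This is why an estimated discount factor works as a proxy for a partner’s expectation of future interactions: below the threshold, opportunistic behavior dominates.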
19.
We describe a relational learning by observation framework that automatically creates cognitive agent programs that model expert task performance in complex dynamic domains.
Our framework uses observed behavior and goal annotations of an expert as the primary input, interprets them in the context
of background knowledge, and returns an agent program that behaves similarly to the expert. We map the problem of creating an agent program onto multiple learning problems that can be represented in a “supervised concept learning” setting. The acquired
procedural knowledge is partitioned into a hierarchy of goals and represented with first order rules. Using an inductive logic
programming (ILP) learning component allows our framework to naturally combine structured behavior observations, parametric
and hierarchical goal annotations, and complex background knowledge. To deal with the large domains we consider, we have developed
an efficient mechanism for storing and retrieving structured behavior data. We have tested our approach using artificially
created examples and behavior observation traces generated by AI agents. We evaluate the learned rules by comparing them to
hand-coded rules.
Editor: Rui Camacho
20.
Discovering colored Petri nets from event logs  Cited by: 1 (self-citations: 1, others: 1)
A. Rozinat R. S. Mans M. Song W. M. P. van der Aalst 《International Journal on Software Tools for Technology Transfer (STTT)》2008,10(1):57-74
Process-aware information systems typically log events (e.g., in transaction logs or audit trails) related to the actual execution
of business processes. Analysis of these execution logs may reveal important knowledge that can help organizations to improve
the quality of their services. Starting from a process model, which can be discovered by conventional process mining algorithms,
we analyze how data attributes influence the choices made in the process based on past process executions using decision mining,
also referred to as decision point analysis. In this paper we describe how the resulting model (including the discovered data
dependencies) can be represented as a Colored Petri Net (CPN), and how further perspectives, such as the performance and organizational perspectives, can be incorporated. We also present a CPN Tools Export plug-in implemented within the ProM framework. Using this plug-in, simulation models in ProM obtained via a combination of
various process mining techniques can be exported to CPN Tools. We believe that the combination of automatic discovery of process models using ProM and the simulation capabilities of CPN Tools offers an
innovative way to improve business processes. The discovered process model describes reality better than most hand-crafted simulation models. Moreover, the simulation
models are constructed in such a way that it is easy to explore various redesigns.
A. Rozinat’s research was supported by the IOP program of the Dutch Ministry of Economic Affairs.
M. Song’s research was supported by the Technology Foundation STW.