Similar Literature
10 similar documents retrieved
1.
Handling the Dynamics of Trust in Multi-Agent Systems (cited 4 times: 0 self-citations, 4 by others)
王平, 张自力. 《计算机科学》 (Computer Science), 2005, 32(3): 182-185
Trust is an important element of decision-making and interaction in multi-agent systems. Collecting the information needed to establish trust relationships, managing and maintaining those relationships dynamically, and monitoring and re-evaluating existing ones are the key problems of trust management in multi-agent systems. Although researchers have proposed a series of solutions to these problems, several issues remain open. Targeting the problem of handling the dynamics of trust, this paper analyzes existing representative trust models and proposes a dynamic Confidence-Reputation trust model. The model considers not only an agent's direct interaction history (confidence) and its reputation, but also the ontological nature of trust.
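The abstract does not give the model's formulas. As a rough illustration of the kind of combination a confidence-plus-reputation model performs, the following minimal Python sketch blends a recency-weighted direct-interaction score with third-party ratings; all names and weighting constants are assumptions, not the authors' notation.

```python
from dataclasses import dataclass, field

@dataclass
class TrustRecord:
    """One interaction partner: direct outcomes (confidence) and third-party ratings (reputation)."""
    direct: list[float] = field(default_factory=list)   # own interaction outcomes in [0, 1], oldest first
    reports: list[float] = field(default_factory=list)  # ratings received from other agents, in [0, 1]

def trust(rec: TrustRecord, alpha: float = 0.7) -> float:
    """Blend confidence and reputation; alpha weights direct experience.

    Recent direct interactions count more (geometric recency weighting),
    a simple way to make the trust value track the dynamics of behaviour.
    """
    if rec.direct:
        n = len(rec.direct)
        weights = [0.9 ** (n - 1 - i) for i in range(n)]  # newest outcome gets weight 1.0
        confidence = sum(w * v for w, v in zip(weights, rec.direct)) / sum(weights)
    else:
        confidence = 0.5  # no history: fall back to a neutral prior
    reputation = sum(rec.reports) / len(rec.reports) if rec.reports else 0.5
    return alpha * confidence + (1 - alpha) * reputation

rec = TrustRecord(direct=[1.0, 0.8, 0.2], reports=[0.9, 0.7])
print(f"trust = {trust(rec):.3f}")
```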

2.
The openness and loose coupling of mobile P2P networks make malicious attacks by nodes commonplace, yet most existing reputation-based trust models rest on the assumption that "the higher a node's reputation, the more credible its recommendations", and therefore cannot recognize the dynamic, strategic attacks of malicious nodes. To address this problem, this paper introduces social-network theory into the trust system and proposes a trust model based on social distance (SD2Trust). The model distinguishes service trustworthiness from recommendation trustworthiness, characterizes a node's network position and behavior with a multi-dimensional structural-homophily description vector, selects the recommender set and computes recommendation-reputation weights from social distance, and takes the risk of slander into account in the overall trust. Theoretical analysis and experimental results show that the model effectively resists dynamic strategic attacks by malicious nodes.
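The abstract describes the mechanism only qualitatively. The sketch below illustrates one plausible reading of "weighting recommendations by social distance", with plain feature vectors standing in for the paper's structural-homophily description vectors; the function names and the inverse-distance weighting rule are assumptions, not SD2Trust's actual formulas.

```python
import math

def social_distance(u: list[float], v: list[float]) -> float:
    """Euclidean distance between two nodes' structural feature vectors."""
    return math.dist(u, v)

def recommended_reputation(ratings: dict[str, float],
                           features: dict[str, list[float]],
                           me: str) -> float:
    """Aggregate recommenders' ratings of a target, weighting each rating
    by the inverse of the recommender's social distance to the asking node.

    Closer (structurally more similar) nodes get more say, which also
    dampens the influence of distant, possibly slanderous recommenders.
    """
    num = den = 0.0
    for node, rating in ratings.items():
        w = 1.0 / (1.0 + social_distance(features[me], features[node]))
        num += w * rating
        den += w
    return num / den if den else 0.5  # neutral value when no recommenders

features = {"me": [0.9, 0.1], "a": [0.8, 0.2], "b": [0.1, 0.9]}
print(recommended_reputation({"a": 0.9, "b": 0.1}, features, "me"))
```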

3.
An integrated trust and reputation model for open multi-agent systems (cited 12 times: 1 self-citation, 12 by others)
Trust and reputation are central to effective interactions in open multi-agent systems (MAS), in which agents owned by a variety of stakeholders continuously enter and leave the system. This openness means existing trust and reputation models cannot readily be used, since their performance suffers when there are various (unforeseen) changes in the environment. To this end, this paper presents FIRE, a trust and reputation model that integrates a number of information sources to produce a comprehensive assessment of an agent's likely performance in open systems. Specifically, FIRE incorporates interaction trust, role-based trust, witness reputation, and certified reputation to provide trust metrics in most circumstances. FIRE is empirically evaluated and is shown to help agents gain better utility (by effectively selecting appropriate interaction partners) than our benchmarks in a variety of agent populations. It is also shown that FIRE is able to respond effectively to changes that occur in an agent's environment.
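At its core, FIRE's composite value is a weighted combination of its four trust components; the minimal sketch below shows that combination in Python. The specific weights and the [-1, 1] normalisation are placeholders, not the coefficients from the paper's evaluation, where weights reflect each component's reliability.

```python
def fire_trust(interaction: float, role: float, witness: float,
               certified: float, weights=(2.0, 2.0, 1.0, 0.5)) -> float:
    """Composite FIRE-style trust: the weighted mean of the four sources,
    each assumed to be pre-normalised to [-1, 1]. The weights here are
    illustrative only."""
    components = (interaction, role, witness, certified)
    return sum(w * c for w, c in zip(weights, components)) / sum(weights)

# Strong direct experience, weaker second-hand evidence.
print(fire_trust(interaction=0.8, role=0.5, witness=0.3, certified=0.6))
```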

4.
All trust management systems must take into account the possibility of error: of misplaced trust. Therefore, regardless of whether it uses reputation or not, or is centralized or distributed, a trust management system must be evaluated with consideration for the consequences of misplaced or abused trust. Thus, the issue of fairness has always been implicitly considered in the design and evaluation of trust management systems. This paper attempts to show that an implicit consideration, using the utilitarian paradigm of maximizing the sum of agents' utilities, is insufficient. Two case studies presented in the paper concern the design of a new reputation system that uses implicit and emphasized negative feedback, and the evaluation of reputation systems' robustness to discrimination. The case studies demonstrate that considering fairness explicitly leads to different trust management system design and evaluation. Trust management systems can realize a goal of system fairness, identified with distributional fairness of agents' utilities. This goal can be achieved in a laboratory setting where all other factors that affect utilities can be excluded and the system can be tested against modeled adversaries. Taking the fairness of agent behavior explicitly into account when building trust or distrust can help to realize the goal of fairness of trust management systems.

5.
We rely on computers to control our power plants and water supplies, our automobiles and transportation systems, and soon our economic and political systems. Increasingly, software agents are enmeshed in these systems, serving as the glue that connects distributed components. Clearly, we need mechanisms to determine whether these agents are trustworthy. What do we need to establish trust? Agents are often characterized by features such as autonomy, sociability, proactiveness, and persistent identity. This latter feature is key in determining trust. When agents operate over an extended period, they can earn a reputation for competence, timeliness, ease of use, and trustworthiness, which is something ephemeral agents cannot do. Along with persistence, we need a reliable way to identify an agent and ensure that its true identity is not concealed. How can we assess an agent's trustworthiness? As with other aspects of agents and multiagent systems, we can take our cue from the human domain. Our reputations for trustworthiness are determined and maintained by the people we deal with. Analogously, a software agent's reputation will reside within the other agents with whom it interacts. For some agent interactions, such as those involving commerce, agents will simply inherit the reputation of their human owner, sharing, for example, their owner's credit rating and financial capability. For other types of interactions, such as those involving information gathering, an agent will determine its own reputation through its efforts at gathering and distilling information. An agent with a reputation for conducting thorough searches will be trusted by other agents wishing to use its Web search results.

6.
Electronic transactions are becoming more important every day. Several tasks, like buying goods, booking flights or hotel rooms, or paying to stream a movie, can be carried out through the Internet. Nevertheless, there are still some drawbacks due to security threats that arise while performing such operations. Trust and reputation management arises as a novel way of solving some of those problems. In this paper we present our work TRIMS (a privacy-aware trust and reputation model for identity management systems), which applies a trust and reputation model to guarantee an acceptable level of security when deciding whether a different domain can be considered reliable enough to receive certain sensitive user attributes. Specifically, we address the problem that surfaces when a domain needs to decide whether to exchange information with another, possibly unknown, domain in order to provide a service to one of its users. This decision is determined by the trust placed in the target domain. As far as we know, our proposal is one of the first approaches to deal with trust and reputation management in a multi-domain scenario. Finally, the experiments performed demonstrate the robustness and accuracy of our model in a wide variety of scenarios.
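The abstract frames the core decision as "share sensitive attributes with another domain only if it is trusted enough". A minimal sketch of such a gate follows; the linear sensitivity-dependent threshold is an assumption for illustration, not TRIMS's actual rule.

```python
def share_attributes(trust_score: float, sensitivity: float,
                     base_threshold: float = 0.5) -> bool:
    """Release user attributes to a target domain only if its trust score
    clears a threshold that grows with the sensitivity of the attributes.

    trust_score and sensitivity are in [0, 1]; the linear rule is illustrative.
    """
    return trust_score >= base_threshold + (1 - base_threshold) * sensitivity

# A moderately trusted domain (0.7) may receive low-sensitivity attributes
# (threshold 0.6) but not highly sensitive ones (threshold 0.95).
print(share_attributes(0.7, 0.2), share_attributes(0.7, 0.9))
```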

7.
Artificial societies, that is, distributed systems of autonomous agents, are becoming increasingly important in open distributed environments, especially in e-commerce. Agents require trust and reputation concepts to identify communities of agents with which to interact reliably. We have noted in real environments that adversaries tend to focus on exploiting the trust and reputation model. These vulnerabilities reinforce the need for a new evaluation criterion for trust and reputation models, called exploitation resistance, which reflects the ability of a trust model to remain unaffected by agents who try to manipulate it. To examine whether a given trust and reputation model is exploitation-resistant, researchers require a flexible, easy-to-use, and general framework that provides the facility to specify heterogeneous agents with different trust models and behaviors. This paper introduces such a framework: Distributed Analysis of Reputation and Trust (DART). The environment of DART is decentralized and game-theoretic. The proposed environment model is not only compatible with the characteristics of open distributed systems but also allows agents to have different types of interactions within it. Besides direct, witness, and introduction interactions, agents in the environment model can engage in reporting interactions, which represent a decentralized reporting mechanism in distributed environments. The environment model provides various metrics, at both micro and macro levels, for analyzing the implemented trust and reputation models. Using DART, researchers have empirically demonstrated the vulnerability of well-known trust models to both individual and group attacks.

8.

Trust is one of the most important concepts guiding decision-making and contracting in human societies. In artificial societies, this concept has been neglected until recently. The inherent benevolence assumption implemented in many multiagent systems can have hazardous consequences when dealing with deceit in open systems. The aim of this paper is to establish a mechanism that helps agents cope with environments inhabited by both selfish and cooperative entities. This is achieved by enabling agents to evaluate trust in others. A formalization and an algorithm for trust are presented so that agents can autonomously deal with deception and identify trustworthy parties in open systems. The approach is twofold: agents can observe the behavior of others and thus collect information for establishing an initial trust model; and, in order to adapt quickly to a new or rapidly changing environment, agents can also make use of observations made by other agents. The practical relevance of these ideas is demonstrated by means of a direct mapping from a scenario to electronic commerce.
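The two information channels the paper names (an agent's own observations plus observations passed on by other agents) suggest a pooled estimate of the following shape. The 0.5 discount on witness reports is an arbitrary illustration of distrust in second-hand evidence, not the paper's formalization.

```python
def trust_estimate(own_obs: list[bool], witness_obs: list[list[bool]]) -> float:
    """Estimate the probability that a partner behaves honestly.

    Own observations are pooled with witness observations so that an agent
    entering a new or rapidly changing environment can bootstrap from
    others' experience. Witness reports are discounted because witnesses
    may themselves be deceptive.
    """
    honest = float(sum(own_obs))
    total = float(len(own_obs))
    for obs in witness_obs:
        honest += 0.5 * sum(obs)   # discounted second-hand evidence
        total += 0.5 * len(obs)
    return honest / total if total else 0.5

# Two own observations plus one witness's three observations.
print(trust_estimate([True, False], [[True, True, False]]))
```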

9.
Research Progress on Trust and Reputation Systems in MAS (cited 2 times: 2 self-citations, 0 by others)
Trust plays a very important role in cooperation in human society and has drawn wide attention in many fields. In research on open multi-agent systems (MAS), trust methods have been introduced to solve the problem of selecting interaction partners. Reputation is closely related to trust: reputation can be regarded as one information source for trust, and a reputation system is a mechanism for carrying out trust evaluation. Trust research in MAS should take on the task of discovering the general laws of trust between computational entities. This paper discusses the content, requirements, and applications of trust and reputation model research. At the technical level, trust is represented from either a cognitive or a numerical viewpoint; centralized, distributed, and hybrid architectures have emerged; and trust aggregation methods include statistics, probability, belief theory, and fuzzy reasoning. Problems such as group reputation, inaccurate information, information scarcity, and interoperation of heterogeneous models await further in-depth study.
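Of the aggregation families the survey lists, the probabilistic one has a particularly compact canonical instance: the beta-reputation estimate (in the style of Jøsang's beta reputation system), which scores an agent from counts of positive and negative interaction outcomes.

```python
def beta_reputation(positive: int, negative: int) -> float:
    """Expected honesty under a Beta(positive + 1, negative + 1) posterior:
    the standard beta-reputation estimate from counts of positive and
    negative interaction outcomes."""
    return (positive + 1) / (positive + negative + 2)

print(beta_reputation(8, 2))  # 10 interactions, 8 good -> 0.75
```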

10.
Several models have been proposed in the past for representing both reliability and reputation. However, a crucial point in the practical use of these two measures is the possibility of suitably combining them to support the agent's decision. In the past, we proposed a reliability-reputation model, called RRAF, that allows the user to choose how much importance to give to reliability with respect to reputation. However, RRAF shows some limitations, namely: (i) the weight assigned to reliability versus reputation is set arbitrarily by the user, without considering the system's evolution; (ii) the trust measure that an agent a perceives about an agent b is completely independent of the trust measure perceived by any other agent c, while in reality trust measures are mutually dependent. In this paper, we propose an extension of RRAF aimed at overcoming the limitations above. In particular, we introduce a new trust-reputation model, called TRR, that considers, from a mathematical viewpoint, the interdependence among all the trust measures computed in the system. Moreover, this model dynamically computes a parameter measuring the importance of reliability with respect to reputation. Experiments performed on the well-known ART (Agent Reputation and Trust) platform show the significant advantages, in terms of effectiveness, of TRR over RRAF. © 2011 Wiley Periodicals, Inc.
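The abstract describes TRR's key move (replacing RRAF's user-chosen reliability-versus-reputation weight with one computed from the system) only at a high level. The sketch below shows the general shape of such a rule; the saturating weight is an assumption for illustration, not the authors' formula.

```python
def trr_trust(reliability: float, reputation: float,
              n_direct: int, k: float = 5.0) -> float:
    """Combine reliability (direct experience) and reputation with a weight
    computed from the system state rather than fixed by the user.

    Here the weight grows with the number of direct interactions n_direct
    (a simple saturating rule): with little direct evidence the agent
    leans on reputation, and with much direct evidence on reliability.
    """
    beta = n_direct / (n_direct + k)
    return beta * reliability + (1 - beta) * reputation

print(trr_trust(0.9, 0.4, n_direct=1))   # few interactions: closer to reputation
print(trr_trust(0.9, 0.4, n_direct=20))  # many interactions: closer to reliability
```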
