Similar Literature
20 similar records found (search time: 15 ms)
1.
In open multi-agent systems, trust models are an important tool for agents to achieve effective interactions. However, in such open systems the agents do not necessarily use the same, or even similar, trust models, leading to semantic differences between trust evaluations in different agents. Hence, to successfully use communicated trust evaluations, agents need to align their trust models. We show that currently proposed solutions, such as common ontologies or ontology alignment methods, lead to additional problems, and we propose a novel approach. We show how trust alignment can be formed by considering the interactions that agents share, and describe a mathematical framework that formulates precisely how the interactions support trust evaluations for both agents. We show how this framework can be used in the alignment process and explain how an alignment should be learned. Finally, we demonstrate this alignment process in practice, using a first-order regression algorithm to learn an alignment and test it in an example scenario.

2.
Applied Soft Computing, 2007, 7(2): 492–505
E-commerce markets can increase their efficiency through the use of intelligent agents that negotiate and execute contracts on behalf of their owners. The measurement and computation of trust to secure interactions between autonomous agents is crucial for the success of automated e-commerce markets. Building a knowledge-sharing network among peer agents helps to overcome trust-related boundaries in an environment where minimal human intervention is desired. Nevertheless, a risk management model that allows individual customisation to meet the different security needs of agent owners is vital. The calculation and measurement of trust in unsupervised virtual communities such as multi-agent environments involves complex aspects such as credibility ratings for opinions delivered by peer agents, or the assessment of past experiences with the peer node one wishes to interact with. The deployment of suitable algorithms and models imitating human reasoning can help to solve these problems. This paper not only proposes a customisable trust evaluation model based on fuzzy logic but also demonstrates the integration of post-interaction processes such as business interaction reviews and credibility adjustment. Fuzzy logic provides a natural framework for dealing with uncertainty, and the tolerance of fuzzy-based systems for imprecise data inputs makes fuzzy reasoning especially attractive for the subjective tasks of trust evaluation, business-interaction review and credibility adjustment.
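As an illustration of the kind of fuzzy reasoning the abstract describes (this is our own minimal sketch, not the paper's model), the following combines two inputs, past experience and peer credibility, with triangular membership functions and two hand-written rules, then defuzzifies by centroid:

```python
# Minimal fuzzy trust evaluator (illustrative only). Inputs are in [0, 1];
# the input/output fuzzy sets and the two rules are our own assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_trust(experience, credibility):
    low = lambda x: tri(x, -0.5, 0.0, 0.5)
    high = lambda x: tri(x, 0.5, 1.0, 1.5)
    # Rule 1: IF experience is high AND credibility is high THEN trust is high.
    fire_high = min(high(experience), high(credibility))
    # Rule 2: IF experience is low OR credibility is low THEN trust is low.
    fire_low = max(low(experience), low(credibility))
    # Centroid defuzzification over a discretised universe [0, 1].
    xs = [i / 100 for i in range(101)]
    weights = [max(min(fire_high, high(x)), min(fire_low, low(x))) for x in xs]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, xs)) / total if total else 0.5

print(fuzzy_trust(0.9, 0.8))  # good history + credible peers -> trust above 0.5
print(fuzzy_trust(0.2, 0.3))  # poor history -> trust below 0.5
```

A full model would add more input variables (e.g. the post-interaction review score) and more rules, but the fire-and-aggregate structure stays the same.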

3.
Contracting with an Uncertain Level of Trust (cited by 10: 0 self-citations, 10 by others)
The paper investigates the impact of trust on market efficiency and bilateral contracts. We prove that a market in which agents are trusted to the degree they deserve to be trusted is as efficient as a market with complete trustworthiness; in other words, complete trustworthiness is not a necessary condition for market efficiency. We prove that distrust can significantly reduce market efficiency, and we show how to solve the problem using appropriately designed multiagent contracts. The problem of trust is studied in the context of a bilateral negotiation game between a buyer and a seller. It is shown that if the seller's trust equals the buyer's trustworthiness, then the social welfare, the amount of trade, and the agents' utility functions are maximized. The paper also studies the efficiency of advance payment contracts as a tool for improving trustworthiness, and proves that such contracts maximize the social welfare and the amount of trade. Finally, the paper studies how to make agents truthfully reveal their level of trustworthiness: an incentive-compatible contract is defined in which agents do not benefit from lying about their trustworthiness. The analysis and solutions proposed in this paper could help agent designers avoid many market failures and produce efficient interaction mechanisms.

4.
Trust models are mechanisms that allow agents to build trust without relying on a trusted central authority. Our goal was to develop a trust model that operates with values that humans easily understand and manipulate: qualitative and ordinal values. The result is a trust model that computes trust from experiences created in interactions and from opinions obtained from third-party agents. The trust model, termed the qualitative trust model (QTM), uses qualitative and ordinal values for assessing experiences, expressing opinions and estimating trust. We treat such values appropriately: we never convert them to numbers, but merely use their relative order. To aggregate a collection of such values, we propose an aggregation method based on comparing distributions and show some of its properties; the method can be used in other domains and can be seen as an alternative to the median and similar methods. To cope with lying agents, QTM estimates the trustworthiness of opinion providers with a modified version of the weighted majority algorithm, and additionally combines trustworthiness with social links between agents; such links are obtained implicitly by observing how agents provide opinions about each other. Finally, we compare QTM against a set of well-known trust models and demonstrate that it consistently performs well, on par with other quantitative models, and in many cases even outperforms them, particularly when the number of direct experiences is low.
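The weighted majority algorithm that QTM reportedly modifies can be sketched as follows (a textbook version, not QTM's actual variant; the provider names and penalty factor are invented):

```python
# Classic weighted majority over opinion providers: every provider starts
# with weight 1, and a provider whose opinion disagreed with the observed
# outcome has its weight multiplied by beta < 1.

def update_weights(weights, opinions, outcome, beta=0.5):
    """Penalise every provider whose opinion ('good'/'bad') was wrong."""
    return {p: (w if opinions[p] == outcome else w * beta)
            for p, w in weights.items()}

def weighted_vote(weights, opinions):
    """Aggregate opinions, each counted with its provider's current weight."""
    score = {"good": 0.0, "bad": 0.0}
    for p, o in opinions.items():
        score[o] += weights[p]
    return max(score, key=score.get)

weights = {"a": 1.0, "b": 1.0, "c": 1.0}
# Provider "c" keeps lying; after three rounds its influence has shrunk.
for _ in range(3):
    opinions = {"a": "good", "b": "good", "c": "bad"}
    weights = update_weights(weights, opinions, outcome="good")
print(weights)  # {'a': 1.0, 'b': 1.0, 'c': 0.125}
```

QTM additionally combines these learned weights with the social links mentioned in the abstract; that part is omitted here.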

5.
With the rapid development and wide application of Internet technology, Internet-based virtual societies have emerged, such as P2P systems, grids, wireless networks, multi-agent networks and wireless sensor networks, causing the concept of trust to migrate from the world of human interaction to the world of virtual interaction. In recent years researchers have proposed various trust management models for these virtual settings, and the diversity of these models gives rise to multiple trust domains in practical applications. Some of the literature has addressed how to achieve trusted interaction between agents across multiple trust domains. This paper analyses the concepts related to multiple trust domains and the reasons they arise, reviews a selection of representative trust management models for multiple trust domains and the methods they use, analyses the problems in current research, and outlines future research directions. The survey shows that trusted interaction between agents across multiple trust domains remains a difficult problem in current trust management.

6.
A Trustworthiness Evaluation Method for Service Agents (cited by 2: 1 self-citation, 1 by others)
朱曼玲, 金芝. 《软件学报》 (Journal of Software), 2011, 22(11): 2593–2609
This paper proposes a computing framework based on service agents and, from a socio-cognitive perspective, builds a trust ontology for service agents that supports reasoning about trust information. Based on this ontology, a set of trust-reasoning computation rules is proposed to support the calculation of trust values and to help service agents make rational selection decisions. A case study shows that the method effectively helps service requesters evaluate trust and select services.

7.
A model of a trust-based recommendation system on a social network (cited by 3: 0 self-citations, 3 by others)
In this paper, we present a model of a trust-based recommendation system on a social network. The idea of the model is that agents use their social network to reach information and their trust relationships to filter it. We investigate how the dynamics of trust among agents affect the performance of the system by comparing it to a frequency-based recommendation system. Furthermore, we identify network density, preference heterogeneity among agents, and knowledge sparseness as crucial factors for the performance of the system. The system self-organises in a state with performance near the optimum; performance at the global level is an emergent property of the system, achieved without explicit coordination, from the local interactions of agents.

8.
Artificial societies—distributed systems of autonomous agents—are becoming increasingly important in open distributed environments, especially in e-commerce. Agents require trust and reputation concepts to identify communities of agents with which to interact reliably. We have observed in real environments that adversaries tend to focus on exploiting the trust and reputation model. These vulnerabilities motivate a new evaluation criterion for trust and reputation models, called exploitation resistance, which reflects the ability of a trust model to remain unaffected by agents who try to manipulate it. To examine whether a given trust and reputation model is exploitation-resistant, researchers require a flexible, easy-to-use, and general framework that can specify heterogeneous agents with different trust models and behaviours. This paper introduces the Distributed Analysis of Reputation and Trust (DART) framework. The DART environment is decentralised and game-theoretic. Not only is the proposed environment model compatible with the characteristics of open distributed systems, it also allows agents to have different types of interactions. Besides direct, witness, and introduction interactions, agents in our environment model can engage in reporting interactions, which represent a decentralised reporting mechanism in distributed environments. The proposed environment model provides various metrics, at both micro and macro levels, for analysing the implemented trust and reputation models. Using DART, researchers have empirically demonstrated the vulnerability of well-known trust models to both individual and group attacks.

9.
Research on agent trust is becoming increasingly important because trust ensures good interactions among software agents in large-scale open systems. Moreover, individual agents often interact with long-term coalitions, such as e-commerce web sites, and so should choose a coalition based on both utility and trust. Unfortunately, few studies have examined the credit of agent coalitions in detail. To this end, a long-term coalition credit model (LCCM) is presented, and the relationship between coalition credit and coalition payoff is also examined. LCCM consists of internal trust, based on agents' direct interactions, and external reputation, based on agents' direct observation. The generality of LCCM is demonstrated through experiments in both cooperative and competitive domains. Experimental results show that LCCM computes coalition credit efficiently and properly reflects the effect of various factors on coalition credit. Another important advantage, a basic property of credit, is that LCCM can effectively filter inaccurate or deceptive information from interactions.

10.
Target sales rebate (TSR) contracts have been shown to be useful in coordinating supply chains with risk-neutral agents. However, there have been few studies of cases with risk-sensitive agents. Based on the classic Markowitz portfolio theory in finance, we therefore carry out a mean–variance (MV) analysis of supply chains under TSR contracts. We study a supply chain with a single supplier and a single risk-averse retailer, and propose TSR contracts for achieving coordination. We demonstrate how TSR contracts can coordinate the supply chain while taking into consideration the degree of risk aversion of the retailer, and find that the supplier can coordinate the channel with flexible TSR contracts. In addition, we extend the supply chain model to include the retailer's sales-effort decision, and derive conditions under which TSR contracts coordinate the supply chain with retailer sales effort.
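The rebate mechanism itself is simple to state. Here is a hedged numeric sketch, assuming a per-unit rebate r paid by the supplier for every unit sold beyond a target T (our own toy payoff function, not the paper's model):

```python
# Retailer's realised profit under a target sales rebate contract:
# revenue from units actually sold, plus rebate r on sales above the
# target, minus the wholesale cost of the whole order.

def retailer_profit(sales, price, wholesale, order_qty, target, rebate):
    sold = min(sales, order_qty)                # cannot sell more than ordered
    revenue = price * sold
    rebate_income = rebate * max(0, sold - target)
    return revenue + rebate_income - wholesale * order_qty

# Sell 120 units at price 10; wholesale 6, order 150, target 100, rebate 2:
print(retailer_profit(sales=120, price=10, wholesale=6,
                      order_qty=150, target=100, rebate=2))  # 1200 + 40 - 900 = 340
```

The rebate kicks in only above the target, which is what gives the supplier a lever to steer the retailer's order quantity; the MV analysis in the paper adds the retailer's risk attitude on top of this payoff.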

11.

Instead of establishing trust by defining compliance-based standards, such as protocols augmented by cryptographic methods, it is shown that trust can emerge as a self-organizing phenomenon in a complex dynamical system. It is assumed that trust can be modeled on the basis of an intrinsic property, called trustworthiness, of every individual i. Trustworthiness is an objective measure telling other individuals whether or not it is desirable to engage in an interaction with i. Trustworthiness cannot be perceived directly; building trust therefore amounts to estimating trustworthiness. Subjective criteria such as outer appearance are important for building trust, as they allow the handling of unknown agents for whom data from previous interactions do not exist. Here, trustworthiness is grounded in the strategies of agents who engage in an extended version of the iterated prisoner's dilemma. Trust is represented as a preference to be grouped together with agents bearing a certain label to play a game. It is shown that stable relations of trust can emerge and that the coevolution of trust boosts the evolution of cooperation.
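The iterated prisoner's dilemma in which this work grounds trustworthiness can be sketched with standard payoffs and two textbook strategies (the paper's extended variant adds labels and group preferences, which this toy omits):

```python
# Iterated prisoner's dilemma with the standard payoff matrix
# (C = cooperate, D = defect); each strategy sees the opponent's history.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=5):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_b), strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: opp[-1] if opp else "C"  # reciprocate the last move
always_defect = lambda opp: "D"

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (15, 15)
print(play(tit_for_tat, always_defect))  # exploited once, then mutual defection: (4, 9)
```

Cooperative strategies playing each other earn more over repeated rounds, which is the mechanism by which stable trust relations can pay off in such models.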

12.
Trust management and trust theory revision (cited by 3: 0 self-citations, 3 by others)
A theory of trust for a given system consists of a set of rules that describe the trust of agents in the system. In a given logical framework, such a theory is generally established based on the agents' initial trust in the security mechanisms of the system, and provides a foundation for reasoning about agent beliefs as well as the security properties the system may satisfy. However, trust changes dynamically. When agents lose trust or gain new trust in a dynamic environment, a theory established on their initial trust must be revised, otherwise it can no longer be used for any security purpose. This paper investigates the factors influencing agents' trust and discusses how to revise theories of trust in dynamic environments. A methodology for revising and managing theories of trust for multiagent systems is proposed, comprising a method for modeling trust changes, a method for expressing theory changes, and a technique for obtaining a new theory from a given trust change. The proposed approach is very general and can be applied to obtain an evolving theory of trust for agent-based systems.

13.
General competence trust among supply chain partners, that is, trust that a partner holds the general ability to fulfil contracts, is a critical factor for effective cooperation in a supply chain, especially in the current financial crisis. Supply chain trust diagnosis (SCTD) evaluates whether or not a partner holds such competence. This research is an early investigation into diagnosing supply chain competence trust with an inductive case-based reasoning ensemble (ICBRE). The method consists of five levels: the information level, the level of ratios of general competence states, the inductive case-based reasoning level, the ensemble level, and the diagnosis result level. Knowledge for diagnosing competence trust, which comprises a case base, is hidden in data represented by ratios of general competence states. The inductive approach is combined with randomness to construct diverse, high-quality member inductive case-based reasoning methods. Finally, simple voting integrates the outputs of the member methods to produce the final diagnosis of whether a partner holds the general ability to fulfil contracts. We statistically validated the results of the ICBRE method by comparing them with those of multivariate discriminant analysis, logistic regression, single Euclidean case-based reasoning, and single inductive case-based reasoning. The results indicate that the ICBRE method significantly improves the predictive capability of case-based reasoning on this problem and outperforms all the comparative models, thanks to the group decision of several decision-making agents and the absence of the strict assumptions required by statistical methods.
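The ensemble-with-simple-voting idea can be sketched as follows; the case data, feature names, and feature subsets are invented for illustration, and plain nearest-neighbour retrieval stands in for the paper's inductive member methods:

```python
# Each ensemble member is a 1-nearest-neighbour case-based reasoner that
# looks at a different subset of "competence ratio" features; a simple
# majority vote over the members produces the final trust diagnosis.

def nearest_label(case_base, query, features):
    dist = lambda case: sum((case[0][f] - query[f]) ** 2 for f in features)
    return min(case_base, key=dist)[1]

def ensemble_diagnose(case_base, query, feature_subsets):
    votes = [nearest_label(case_base, query, fs) for fs in feature_subsets]
    return max(set(votes), key=votes.count)  # simple majority voting

# Each case: ({competence ratios}, diagnosed trustworthy?)
cases = [({"liquidity": 0.9, "delivery": 0.95, "quality": 0.9}, True),
         ({"liquidity": 0.2, "delivery": 0.4, "quality": 0.3}, False),
         ({"liquidity": 0.8, "delivery": 0.3, "quality": 0.85}, True)]
subsets = [("liquidity",), ("delivery", "quality"), ("liquidity", "quality")]

print(ensemble_diagnose(cases, {"liquidity": 0.85, "delivery": 0.9, "quality": 0.88}, subsets))
```

In the paper the member diversity comes from the inductive approach combined with randomness rather than fixed feature subsets, but the vote-integration step has this shape.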

14.
Multiagent systems (MASs) are increasingly popular for modeling distributed environments that are highly complex and dynamic, such as e-commerce, smart buildings, and smart grids. Typically, agents are assumed to be goal-driven with limited abilities, which requires them to work with other agents to accomplish complex tasks. Trust is considered significant in MASs for making interactions effective, especially when agents cannot be sure that potential partners share the same core beliefs about the system or make accurate statements about their competencies and abilities. Because of the imprecise and dynamic nature of trust in MASs, we propose a hybrid trust model that uses fuzzy logic and Q-learning, as an improvement over purely Q-learning-based trust evaluation. Q-learning is used to estimate trust over the long term, fuzzy inference aggregates different trust factors, and suspension serves as a short-term response to dynamic changes. The performance of the proposed model is evaluated by simulation. The results indicate that the model can help agents select trustworthy partners to interact with, and that it performs better than some popular trust models in the presence of misbehaving interaction partners.
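The Q-learning half of such a hybrid can be sketched minimally (our own formulation, not the paper's: the trust value is treated as a Q-value updated toward each interaction's binary reward):

```python
# One-step Q-learning update without a successor state:
# Q <- Q + alpha * (reward - Q). Applied to trust, each interaction's
# reward (1 = satisfactory, 0 = unsatisfactory) nudges the long-term
# estimate; the learning rate alpha controls how fast it moves.

def q_update(q, reward, alpha=0.2):
    return q + alpha * (reward - q)

trust = 0.5                       # neutral prior
for outcome in [1, 1, 1, 0, 1]:  # mostly satisfactory interactions
    trust = q_update(trust, outcome)
print(round(trust, 3))  # 0.676
```

In the full hybrid model this estimate is one input among several; fuzzy inference aggregates it with other trust factors, and a suspension mechanism overrides it after sudden misbehavior.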

15.
Trust is a hot topic in multi-agent system (MAS) research. To address the trust problems caused by the dynamism and uncertainty of MASs, a trust model based on probability theory is proposed. Compared with existing trust models, this model takes into account both the completeness of trust information and the dynamics of trust: when estimating an agent's trust relationships, it introduces a confidence measure of trust precision and a time-decay factor. Simulation experiments show that introducing the time-decay factor and the confidence measure makes the evaluation of trust relationships between agents more effective.
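The two ingredients the abstract highlights, a time-decay factor and a confidence value, can be sketched as follows (our own illustrative formulation, not the paper's exact model):

```python
# Each past interaction outcome is weighted by decay**age before forming a
# Laplace-smoothed Beta-style trust estimate; confidence grows with the
# total amount of (decayed) evidence.

def decayed_trust(outcomes, decay=0.9):
    """outcomes: list of (age_in_steps, success_bool), newest has age 0."""
    s = sum(decay ** age for age, ok in outcomes if ok)       # decayed successes
    f = sum(decay ** age for age, ok in outcomes if not ok)   # decayed failures
    trust = (s + 1) / (s + f + 2)        # smoothed expectation in (0, 1)
    confidence = (s + f) / (s + f + 2)   # more evidence -> closer to 1
    return trust, confidence

# A recent failure hurts trust more than an equally counted old failure:
print(decayed_trust([(0, False), (5, True)]))
print(decayed_trust([(5, False), (0, True)]))
```

The decay factor implements the "dynamics" aspect (old evidence fades), and the confidence value implements the "completeness" aspect (sparse evidence yields a cautious, low-confidence estimate).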

16.
In many dynamic open systems, agents have to interact with one another to achieve their goals. Such agents may be self-interested and, when trusted to perform an action for another, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. There is therefore a need for a model of trust and reputation that ensures good interactions among software agents in large-scale open systems. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory, taking account of past interactions between agents; when personal experience between agents is lacking, the model draws on reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
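TRAVOS's use of probability theory is commonly presented via the Beta distribution: with m successful and n unsuccessful past interactions, the expected trust value is (m + 1) / (m + n + 2). A minimal sketch of that point estimate:

```python
# Expected value of a Beta(m + 1, n + 1) distribution over binary
# interaction outcomes, used as the direct-experience trust estimate.

def travos_trust(successes, failures):
    return (successes + 1) / (successes + failures + 2)

print(travos_trust(0, 0))  # no history: neutral 0.5
print(travos_trust(8, 2))  # 9/12 = 0.75
```

The full model goes further than this sketch: it also computes a confidence in the estimate and, when confidence is low, folds in third-party reputation reports after discounting reporters judged likely to be inaccurate.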

17.
Trust management is the most promising approach to the security problems of open multi-agent systems, and trust acquisition is one of its foundations. Within the framework of Dempster-Shafer evidence theory, this paper proposes a new method of evidence acquisition. The service quality of a single interaction between agents provides one piece of evidence about the trustworthiness of the service provider; multiple service deliveries provide multiple independent pieces of evidence, and combining these pieces yields a more accurate evidential trust evaluation. Compared with the commonly used approach of applying thresholds to a histogram of multiple service qualities, this method is less sensitive to threshold parameters and requires fewer interactions between individual agents.
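Dempster's rule of combination, the standard evidence-fusion step in Dempster-Shafer theory, can be sketched on a binary frame {trustworthy (T), untrustworthy (U)} with "TU" denoting ignorance (the mass values below are invented for illustration):

```python
# Combine two independent basic mass assignments with Dempster's rule:
# multiply masses of intersecting focal elements, discard conflicting
# products (T vs U), and renormalise by 1 - conflict.

def combine(m1, m2):
    keys = ("T", "U", "TU")
    out = {k: 0.0 for k in keys}
    conflict = 0.0
    for a in keys:
        for b in keys:
            # Intersection of focal elements on this binary frame.
            inter = a if a == b or b == "TU" else (b if a == "TU" else None)
            if inter is None:
                conflict += m1[a] * m2[b]   # T vs U: contradictory evidence
            else:
                out[inter] += m1[a] * m2[b]
    return {k: v / (1 - conflict) for k, v in out.items()}

service1 = {"T": 0.6, "U": 0.1, "TU": 0.3}  # evidence from one interaction
service2 = {"T": 0.7, "U": 0.1, "TU": 0.2}  # evidence from another
print(combine(service1, service2))          # belief in T sharpens above both inputs
```

Two mildly positive, independent pieces of evidence combine into a sharper belief in trustworthiness than either alone, which is the effect the abstract describes as accumulating evidence over repeated services.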

18.

In multi-agent system (MAS) applications, teamwork among agents is essential, as the agents are required to collaborate and pool resources to execute given tasks and complete their objectives successfully. A vital part of the collaboration is sharing information and resources to optimize the agents' efforts in achieving the given objectives. In such a collaborative environment, trust among the agents plays a critical role in ensuring efficient cooperation. This study develops a trust evaluation model that can empirically evaluate the trust of one agent in another. The proposed model is built on the temporal difference learning method, incorporating experience gained through interactions into trust evaluation. Simulation experiments are conducted to evaluate the performance of the model against some of the most recent models reported in the literature; the results indicate that the proposed model estimates trust more effectively than the comparison models.

19.
Making components contract aware (cited by 1: 0 self-citations, 1 by others)
Components have long promised to encapsulate data and programs into a box that operates predictably without requiring that users know the specifics of how it does so. Many advocates have predicted that components will bring about widespread software reuse, spawning a market for components usable with such mainstream software buses as the Common Object Request Broker Architecture (CORBA) and the Distributed Component Object Model (DCOM). In the Windows world, at least, this prediction is becoming a reality. Yet recent reports indicate mixed results when using and reusing components in mission-critical settings. Such results raise disturbing questions. How can you trust a component? What if the component behaves unexpectedly, either because it is faulty or simply because you misused it? Before we can trust a component in mission-critical applications, we must be able to determine, reliably and in advance, how it will behave. In this article the authors define a general model of software contracts and show how existing mechanisms could be used to turn traditional components into contract-aware ones.
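The general idea of a contract-aware component can be illustrated with runtime precondition/postcondition checks in the design-by-contract sense; the decorator and toy component below are our own example, not the authors' model:

```python
# Checking a component's contract at its boundary: a precondition guards
# what callers may pass in, a postcondition guards what the component may
# return, so misuse or faults are detected instead of silently propagating.

import functools

def contract(pre, post):
    def wrap(fn):
        @functools.wraps(fn)
        def checked(*args):
            assert pre(*args), f"precondition violated: {args}"
            result = fn(*args)
            assert post(result), f"postcondition violated: {result}"
            return result
        return checked
    return wrap

@contract(pre=lambda items: all(x >= 0 for x in items),
          post=lambda total: total >= 0)
def charge_total(items):
    """Toy component: sums non-negative line-item amounts."""
    return sum(items)

print(charge_total([3, 4]))  # 7
# charge_total([3, -4])      # would raise AssertionError: misuse detected early
```

A contract violation here points at the boundary where the misuse happened, which is exactly the "determine in advance how it will behave" property the article argues components need.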

20.
Developing, maintaining, and disseminating trust in open, dynamic environments is crucial. We propose self-organizing referral networks as a means of establishing trust in such environments. A referral network consists of autonomous agents that model others in terms of their trustworthiness and disseminate information on others' trustworthiness. An agent may request a service from another; a requested agent may provide the service or give a referral to someone else. Possibly with its user's help, each agent can judge the quality of service obtained. Importantly, the agents autonomously and adaptively decide with whom to interact and choose which referrals to issue, if any. These choices drive the evolution of the referral network, whereby agents move closer to those they trust. This paper studies guidelines for engineering self-organizing referral networks by investigating their properties via simulation. By controlling the agents' actions appropriately, different referral networks can be generated. The paper first shows how the exchange of referrals affects service selection; it then identifies interesting network topologies and the conditions under which they emerge. Based on the link structure of the network, some agents can be identified as authorities, and the paper shows how and when such authorities emerge. The observations from these simulations are formulated into design recommendations for developing robust, self-organizing referral networks.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). 京ICP备09084417号