Similar Articles
 20 similar articles found (search time: 31 ms)
1.
A trust assurance framework is a fundamental component of a virtual computing environment. Trust management in such an environment is uncertain and dynamic, so the trust assurance framework should be subjective, evidence-based, and context-sensitive. To address the security problems of service selection for virtual communities and of trustworthiness computation for autonomous elements in virtual computing environments, a trust model based on Bayesian analysis is proposed.
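The Bayesian treatment summarized above can be illustrated with a minimal sketch, assuming a Beta prior over an element's trustworthiness; the abstract does not specify the exact model, so the class and parameter names below are hypothetical:

```python
# Hypothetical sketch of Bayesian trust estimation with a Beta prior.
# Each interaction outcome is treated as Bernoulli evidence that
# updates the posterior; trust is the posterior mean.

class BetaTrust:
    def __init__(self, alpha=1.0, beta=1.0):
        # Beta(1, 1) is the uniform prior: no evidence yet.
        self.alpha = alpha
        self.beta = beta

    def observe(self, success: bool):
        # A successful interaction raises alpha, a failure raises beta.
        if success:
            self.alpha += 1
        else:
            self.beta += 1

    def trust(self) -> float:
        # Posterior mean of the Beta distribution.
        return self.alpha / (self.alpha + self.beta)

t = BetaTrust()
for outcome in [True, True, False, True]:
    t.observe(outcome)
print(round(t.trust(), 3))  # 3 successes, 1 failure -> (1+3)/(2+4) = 0.667
```

The posterior mean shrinks toward 0.5 when evidence is scarce, which matches the subjective, evidence-based character the abstract calls for.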

2.
To address the uncertainty of heterogeneous wireless networks, an access-selection algorithm based on trust degree is proposed. Four parameters (direct trust degree, recommendation trust degree, recommender credibility, and a trust timestamp) are introduced to compute each network's trust degree, which is then used to trust-weight the network performance metrics. Taking the user's personal preferences into account as well, the candidate networks are ranked by closeness to the ideal solution to obtain the best access network. Simulation results show that, by jointly considering multiple network performance metrics and their trust degrees, the method improves the security of network selection for users in heterogeneous wireless networks.
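A minimal sketch of the trust-weighted ranking-by-closeness-to-ideal (TOPSIS-style) step, assuming benefit-type attributes and vector normalization; the attribute scores, preference weights, and trust degrees below are illustrative, not from the paper:

```python
import math

# Hypothetical sketch of trust-weighted network selection.
# Each candidate network has benefit-type attribute scores; its computed
# trust degree scales those scores before the closeness-to-ideal ranking.

def select_network(scores, pref_weights, trust):
    n_attr = len(pref_weights)
    # Trust-weight each network's attribute row by its trust degree.
    weighted = [[s * t for s in row] for row, t in zip(scores, trust)]
    # Vector-normalize each attribute column, then apply user preferences.
    norms = [math.sqrt(sum(row[j] ** 2 for row in weighted)) for j in range(n_attr)]
    v = [[pref_weights[j] * row[j] / norms[j] for j in range(n_attr)] for row in weighted]
    ideal = [max(col) for col in zip(*v)]   # ideal solution per attribute
    anti = [min(col) for col in zip(*v)]    # anti-ideal solution

    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))

    # Relative closeness to the ideal solution; higher is better.
    closeness = [dist(r, anti) / (dist(r, ideal) + dist(r, anti)) for r in v]
    return max(range(len(scores)), key=closeness.__getitem__)

scores = [[0.9, 0.6, 0.8],   # network A: e.g. bandwidth, delay score, coverage
          [0.7, 0.9, 0.7],   # network B
          [0.8, 0.8, 0.5]]   # network C
pref = [0.5, 0.3, 0.2]       # user preference weight per attribute
trust = [0.6, 0.9, 0.8]      # trust degree computed for each network
print(select_network(scores, pref, trust))  # -> 1 (network B: solid scores, high trust)
```

Note how network B wins despite network A's better raw first attribute: its higher trust degree lifts its trust-weighted scores across the board.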

3.
Developing, maintaining, and disseminating trust in open, dynamic environments is crucial. We propose self-organizing referral networks as a means for establishing trust in such environments. A referral network consists of autonomous agents that model others in terms of their trustworthiness and disseminate information on others' trustworthiness. An agent may request a service from another; a requested agent may provide the requested service or give a referral to someone else. Possibly with its user's help, each agent can judge the quality of service obtained. Importantly, the agents autonomously and adaptively decide with whom to interact and choose what referrals to issue, if any. The choices of the agents lead to the evolution of the referral network, whereby the agents move closer to those that they trust. This paper studies the guidelines for engineering self-organizing referral networks. To do so, it investigates properties of referral networks via simulation. By controlling the actions of the agents appropriately, different referral networks can be generated. This paper first shows how the exchange of referrals affects service selection. It identifies interesting network topologies and shows under which conditions these topologies emerge. Based on the link structure of the network, some agents can be identified as authorities. Finally, the paper shows how and when such authorities emerge. The observations of these simulations are then formulated into design recommendations that can be used to develop robust, self-organizing referral networks.

4.
For agents to collaborate in open multi-agent systems, each agent must trust in the other agents' ability to complete tasks and willingness to cooperate. Agents need to decide between cooperative and opportunistic behavior based on their assessment of another agent's trustworthiness. In particular, an agent can have two beliefs about a potential partner that tend to indicate trustworthiness: that the partner is competent and that the partner expects to engage in future interactions. This paper explores an approach that models competence as an agent's probability of successfully performing an action, and models belief in future interactions as a discount factor. We evaluate the underlying decision framework's performance given accurate knowledge of the model's parameters in an evolutionary game setting. We then introduce a game-theoretic framework in which an agent can learn a model of another agent online, using the Harsanyi transformation. The learning agents evaluate a set of competing hypotheses about another agent during the simulated play of an indefinitely repeated game. The Harsanyi strategy is shown to demonstrate robust and successful online play against a variety of static, classic, and learning strategies in a variable-payoff Iterated Prisoner's Dilemma setting.
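The competence-plus-discount-factor decision can be sketched under stated assumptions: standard Prisoner's Dilemma payoffs T > R > P, a grim-trigger response to defection, and failed actions yielding the punishment payoff. The paper's actual framework is richer; the payoffs and thresholds here are illustrative:

```python
# Hypothetical sketch: cooperate with a partner of competence p (probability
# of successfully performing an action) and discount factor delta (belief in
# future interactions) when the discounted stream of mutual cooperation beats
# the one-shot temptation followed by permanent punishment (grim trigger).

def should_cooperate(p, delta, T=5.0, R=3.0, P=1.0):
    # Expected per-round value of cooperating with a partner of competence p;
    # a failed action is assumed (for illustration) to yield the punishment payoff.
    coop_round = p * R + (1 - p) * P
    v_cooperate = coop_round / (1 - delta)       # discounted cooperation stream
    v_defect = T + delta * P / (1 - delta)       # defect once, then be punished forever
    return v_cooperate > v_defect

print(should_cooperate(p=0.95, delta=0.9))  # True: patient, competent partner
print(should_cooperate(p=0.95, delta=0.2))  # False: short horizon -> defect
```

The example shows why belief in future interactions matters as much as competence: the same highly competent partner is only worth cooperating with when the discount factor is high enough.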

5.

Virtual training allows the learning and rehearsal of implicit cues, e.g., trustworthy leading action in an emergency evacuation, that cannot be easily understood through merely reading about situations, while mitigating the danger and expense of live rehearsals. We have focused our efforts on designing social agents that can engage in and help to train humans to generate the trustworthy behaviors that help to ensure a successful evacuation. Drawing upon social science research and using a "role-reversal method," we successfully constructed agents that can perceive trustworthiness as humans do. The agents first collect human responses to their own nonverbal cues in controlled experimental training scenarios. Using these results, we obtain optimal parameters for nonverbal cues of trustworthiness, and then can use them to guide agents who evaluate human performance in the same training scenarios. The method enables us to convert social psychological findings into computational mechanisms.

6.

Multiagent systems (MASs) are societies whose individuals are software delegatees (agents) acting on behalf of their owners or delegators (people or organizations). When deployed in an open network such as the Internet, MASs face some trust and security issues. Agents come and go, and interact with strangers. Assumptions about security and general trustworthiness of agents and their deployers are inadequate in this context. In this paper, the design of a security infrastructure applicable to MASs in general is presented. This design addresses both security threats and trust issues. It includes mechanisms for ensuring secure communication among agents and secure naming and resource location services. Two types of trust are addressed: trust that agents will not misbehave and trust that agents really are the delegatees of those they claim to represent. To establish the first type of trust, deployers of agents are made liable for the actions of their agents; to establish the second type of trust, it is proposed that agents prove that they know secrets that only their delegators know.

7.

Executives argue intuitively that trust is critical to effective organizational performance. Although articulated as a cognitive/affective property of individuals, the collective effect of events influencing (and being influenced by) trust judgments must certainly impact organizational behavior. To begin to explore this, we conducted a simulation study of trust and organizational performance. Specifically, we defined a set of computational agents, each with a trust function capable of evaluating the quality of advice from the other agents, and rendering judgments on the trustworthiness of the communicating agent. As agent judgments impact subsequent choices to accept or to generate communications, organizational performance is influenced. We manipulated two agent properties (trustworthiness, benevolence), two organizational variables (group size, group homogeneity/liar-to-honest ratio), and one environmental variable (stable, unstable). Results indicate that in homogeneous groups, honest groups did better than groups of liars, but under environmental instability, benevolent groups did worse. Under all conditions for heterogeneous groups, it only took one to three liars to degrade organizational performance.

8.
Trust models are mechanisms that allow agents to build trust without relying on a trusted central authority. Our goal was to develop a trust model that would operate with values that humans easily understand and manipulate: qualitative and ordinal values. The result is a trust model that computes trust from experiences created in interactions and from opinions obtained from third-party agents. The trust model, termed qualitative trust model (QTM), uses qualitative and ordinal values for assessing experiences, expressing opinions and estimating trust. We treat such values appropriately; we never convert them to numbers, but merely use their relative order. To aggregate a collection of such values, we propose an aggregation method that is based on comparing distributions and show some of its properties; the method can be used in other domains and can be seen as an alternative to median and similar methods. To cope with lying agents, QTM estimates trustworthiness in opinion providers with a modified version of the weighted majority algorithm, and additionally combines trustworthiness with social links between agents; such links are obtained implicitly by observing how agents provide opinions about each other. Finally, we compare QTM against a set of well-known trust models and demonstrate that it consistently performs well and on par with other quantitative models, and in many cases even outperforms them, particularly when the number of direct experiences is low.

9.
Multiple types of users (i.e. patients and care providers) have experiences with the same technologies in healthcare environments and may have different processes for developing trust in those technologies. The objective of this study was to assess how patients and care providers make decisions about the trustworthiness of mutually used medical technology in an obstetric work system. Using a grounded theory methodology, we conducted semi-structured interviews with 25 patients who had recently given birth and 12 obstetric healthcare providers to examine the decision-making process for developing trust in technologies used in an obstetric work system. We expected the two user groups to have similar criteria for developing trust in the technologies, though we found patients and physicians differed in processes for developing trust. Trust in care providers, the technologies' characteristics and how care providers used technology were all related to trust in medical technology for the patient participant group. Trustworthiness of the system and trust in self were related to trust in medical technology for the physician participant group. Our findings show that users with different perspectives of the system have different criteria for developing trust in medical technologies.

10.
In this article, we introduce trust ontologies. An ontology represents a set of concepts that are commonly shared and agreed to by all parties in a particular domain. Here, we introduce generic and specific trust ontologies. These ontologies include the following: an agent trust ontology and trustworthiness; agents include sellers, service providers, Web sites, brokers, shops, suppliers, buyers, or reviewers. A services trust ontology and trustworthiness assists in measuring the quality of service that agents provide in the service‐oriented environment such as sales, orders, track and trace, warehousing, logistics, education, governance, advertising, entertainment, trading, online databases, virtual community services, security, information services, opinions, and e‐reviews. A goods or products trust ontology and trustworthiness is useful for measuring the quality of products such as commercial products, information products, entertainment products, or second‐hand products. We present a trust ontology that is suitable for all types of agents that exist in the service‐oriented environment. As agent trust is measured through the quality of goods and services, we introduce two additional distinct concepts of service trust ontology and product trust ontology. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 519–545, 2007.

11.
Contracting With Uncertain Level Of Trust
The paper investigates the impact of trust on market efficiency and bilateral contracts. We prove that a market in which agents are trusted to the degree they deserve to be trusted is as efficient as a market with complete trustworthiness. In other words, complete trustworthiness is not a necessary condition for market efficiency. We prove that distrust could significantly reduce market efficiency, and we show how to solve the problem by using appropriately designed multiagent contracts. The problem of trust is studied in the context of a bilateral negotiation game between a buyer and a seller. It is shown that if the seller's trust equals the buyer's trustworthiness, then the social welfare, the amount of trade, and the agents' utility functions are maximized. The paper also studies the efficiency of advance payment contracts as a tool for improving trustworthiness. It is proved that advance payment contracts maximize the social welfare and the amount of trade. Finally, the paper studies the problem of how to make agents truthfully reveal their level of trustworthiness. An incentive-compatible contract is defined, in which agents do not benefit from lying about their trustworthiness. The analysis and the solutions proposed in this paper could help agent designers avoid many market failures and produce efficient interaction mechanisms.

12.

An agent-society of the future is envisioned to be as complex as a human society. Just like human societies, such multiagent systems (MAS) deserve an in-depth study of the dynamics, relationships, and interactions of the constituent agents. An agent in a MAS may have only approximate a priori estimates of the trustworthiness of another agent. But it can learn from interactions with other agents, resulting in more accurate models of these agents and their dependencies together with the influences of other environmental factors. Such models are proposed to be represented as Bayesian or belief networks. An objective mechanism is presented to enable an agent to elicit crucial information from the environment regarding the true nature of the other agents. This mechanism allows the modeling agent to choose actions that will produce guaranteed minimal improvement of the model accuracy. The working of the proposed maximin entropy procedure is demonstrated in a multiagent scenario.

13.
Statistical relational learning of trust
The learning of trust and distrust is a crucial aspect of social interaction among autonomous, mentally-opaque agents. In this work, we address the learning of trust based on past observations and context information. We argue that from the truster's point of view trust is best expressed as one of several relations that exist between the agent to be trusted (trustee) and the state of the environment. Besides attributes expressing trustworthiness, additional relations might describe commitments made by the trustee with regard to the current situation, like: a seller offers a certain price for a specific product. We show how to implement and learn context-sensitive trust using statistical relational learning in the form of a Dirichlet process mixture model called the Infinite Hidden Relational Trust Model (IHRTM). The practicability and effectiveness of our approach is evaluated empirically on user-ratings gathered from eBay. Our results suggest that (i) the inherent clustering achieved in the algorithm allows the truster to characterize the structure of a trust-situation and provides meaningful trust assessments; (ii) utilizing the collaborative filtering effect associated with relational data does improve trust assessment performance; (iii) by learning faster and transferring knowledge more effectively we improve cold-start performance and can cope better with dynamic behavior in open multiagent systems. The latter is demonstrated with interactions recorded from a strategic two-player negotiation scenario.

14.
15.
For commerce (electronic or traditional) to be effective, there must be a degree of trust between buyers and sellers. In traditional commerce, this kind of trust is based on such things as societal laws and customs, and on the intuition people tend to develop about each other during interpersonal interactions. The trustworthiness of these factors is based, to a large extent, on the geographical proximity between buyers and sellers. But this proximity is lost in e-commerce. In conventional electronic marketplaces the trust among participants is supported by a central server which imposes certain trading rules on all transactions. But such centralized marketplaces have serious drawbacks, among them lack of scalability and high cost. In this paper we propose the concept of a Decentralized Electronic Marketplace (DEM), which allows buyers and sellers to engage in commercial transactions subject to an explicitly stated set of trading rules, called the law of this marketplace, which they can trust to be observed by their trading partners. This trust is due to a decentralized, and thus scalable, mechanism that enforces the stated law of the DEM. We implement an electronic marketplace for airline tickets in order to illustrate the feasibility of the proposed concepts for a decentralized and secure electronic marketplace.

16.
17.
Collaboration in virtual project teams heavily relies on interpersonal trust, for which perceived professional trustworthiness is an important determinant. In face-to-face teams, colleagues form a first impression of each other's trustworthiness based on signs and signals that are 'naturally' available. However, virtual project team members do not have the same opportunities to assess trustworthiness. This study provides insight into the information elements that virtual project team members value to assess professional trustworthiness in the initial phase of collaboration. The trustworthiness perception formed initially strongly influences the interpersonal trust formed during later collaboration. We expect trustors in virtual teams to especially value information elements (small containers for personal data that make specific information available) that provide them with relevant cues of trust-warranting properties of a trustee. We identified a list of fifteen information elements that were highly valued across trustors (n = 226) to inform their trustworthiness assessments. We then analyzed explanations for preferences with the help of a theory-grounded coding scheme for perceived trustworthiness. Results show that respondents value those particular information elements that provide them with multiple cues (signaling multiple trust-warranting properties) to assess the trustworthiness of a trustee. Information elements that provide unique cues (signaling a specific trust-warranting property) could not be identified. Insight into these information preferences can inform the design of artefacts, such as personal profile templates, to support acquaintanceships and social awareness, especially in the initial phase of a virtual project team.

18.

Existing group recommender systems generate a consensus function to aggregate individual preferences into a group preference. However, these systems have difficulty gathering rating-scores and validating their reliability, since the aggregation strategy requires user rating-scores. To solve these problems, we propose Group Recommendation based on Social Affinity and Trustworthiness (GRSAT), which derives both social affinity and trustworthiness from the users' watching history and content features, without rating-scores. Our experiments show that GRSAT achieves outstanding group-recommendation performance compared with the other consensus functions, across varying numbers of movies and users, on both biased and unbiased groups.

19.
Trust relationships occur naturally in many diverse contexts such as collaborative systems, e-commerce, interpersonal interactions, social networks, and semantic sensor web. As agents providing content and services become increasingly removed from the agents that consume them, the issue of robust trust inference and update becomes critical. There is a need to find online substitutes for traditional (direct or face-to-face) cues to derive measures of trust, and create efficient and robust systems for managing trust in order to support decision-making. Unfortunately, there is neither a universal notion of trust that is applicable to all domains nor a clear explication of its semantics or computation in many situations. We motivate the trust problem, explain the relevant concepts, summarize research in modeling trust and gleaning trustworthiness, and discuss challenges confronting us. The goal is to provide a comprehensive broad overview of the trust landscape, with the nitty-gritties of a handful of approaches. We also provide details of the theoretical underpinnings and comparative analysis of Bayesian approaches to binary and multi-level trust, to automatically determine trustworthiness in a variety of reputation systems including those used in sensor networks, e-commerce, and collaborative environments. Ultimately, we need to develop expressive trust networks that can be assigned objective semantics.
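The Dirichlet generalization behind the multi-level Bayesian trust mentioned above can be sketched as follows; the rating levels, their scores, and the uniform prior are illustrative assumptions, not taken from the survey:

```python
# Hypothetical sketch of multi-level Bayesian trust: ratings fall into
# k ordered levels, the per-level counts update a Dirichlet posterior,
# and trust is the posterior expectation of a score mapping levels to [0, 1].
# (Binary trust with a Beta posterior is the k = 2 special case.)

LEVELS = ["bad", "poor", "ok", "good"]   # k = 4 rating levels (illustrative)
SCORE = [0.0, 1 / 3, 2 / 3, 1.0]         # value assigned to each level

def trust_from_ratings(counts, prior=1.0):
    """Posterior-mean trust under a symmetric Dirichlet(prior, ..., prior)."""
    total = sum(counts) + prior * len(counts)
    probs = [(c + prior) / total for c in counts]   # posterior level probabilities
    return sum(p * s for p, s in zip(probs, SCORE))

counts = [1, 0, 2, 5]   # observed ratings per level: mostly 'good'
print(round(trust_from_ratings(counts), 3))  # -> 0.694
```

With no observations at all the estimate is the prior mean 0.5, and it moves toward the empirical rating distribution as evidence accumulates, which is the behavior these reputation systems rely on.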

20.
Untrustworthy algorithms and systems are not what artificial intelligence set out to build. An untrustworthy system with fairly high accuracy may improve productivity when experts of comparable skill work alongside it; but under full automation, or when the people working with it are themselves not very reliable, an untrustworthy system loses its value and its reason to exist. A trustworthy system, by contrast, can perform excellently, reaching and sustaining over the long term the level of the expert group that built it. Trustworthy systems will demonstrate the superiority and irreplaceability of artificial intelligence not only in efficiency but also in quality, and will therefore be an engine of the knowledge economy. It has been shown that an untrustworthy algorithm will eventually undermine the trustworthiness of the other trustworthy algorithms in a system, and should be discarded. Building trustworthy algorithms and systems takes far more effort and is far more difficult. Trustworthy algorithms are in essence applicable only to definable, decidable classes. Implementing trustworthy algorithms and systems for certain definable domains, based on class and OOP theory, has definite practical uses. It has also been shown that an integration of several trustworthy algorithms remains trustworthy; thus the ultimate goal can be approached through continual inheritance and integration.
