Similar Articles
 20 similar articles retrieved (search time: 15 ms)
1.
Recent studies have demonstrated that people show social reactions when interacting with human-like virtual agents. For instance, human users behave in a socially desirable way, show increased cooperation or apply human-like communication. It has, however, so far not been tested whether users are prone to mimic the artificial agent’s behavior although this is a widely cited phenomenon of human–human communication that seems to be especially indicative of the sociality of the situation. We therefore conducted an experiment, in which we analyzed whether humans reciprocate an agent’s smile. In a between-subjects design, 104 participants conducted an 8-min small-talk conversation with an agent that either did not smile, showed occasional smiles, or displayed frequent smiles. Results show that although smiling did not have a distinct impact on the evaluation of the agent, the human interaction partners themselves smiled longer when the agent was smiling.  相似文献   

2.
This article explores the relation between consistency of social cues and persuasion by an artificial agent. Including (minimal) social cues in Persuasive Technology (PT) increases the probability that people attribute human-like characteristics to that technology, which in turn can make that technology more persuasive (see, e.g., Nass, Steuer, Tauber, & Reeder, 1993). PT in the social actor role can be equipped with a variety of social cues to create opportunities for applying social influence strategies (for an overview, see Fogg, 2003). However, multiple social cues may not always be perceived as being consistent, which could decrease their perceived human-likeness and their persuasiveness. In the current article, we investigate the relation between consistency of social cues and persuasion by an artificial agent. Findings of two studies show that consistency of social cues increases people’s recognition and recall of artificial agents’ emotional expressions, and make those agents more persuasive. These findings show the importance of the combined meaning of social cues in the design of persuasive artificial agents.  相似文献   

3.
Learning from rewards generated by a human trainer observing an agent in action has proven to be a powerful method for teaching autonomous agents to perform challenging tasks, especially for non-technical users. Since the efficacy of this approach depends critically on the reward the trainer provides, we consider how the interaction between the trainer and the agent should be designed so as to increase the efficiency of the training process. This article investigates the influence of the agent’s socio-competitive feedback on the human trainer’s training behavior and the agent’s learning. The results of our user study with 85 participants suggest that the agent’s passive socio-competitive feedback—showing the performance and scores of agents trained by the trainers in a leaderboard—substantially increases participants’ engagement in the game task and improves the agents’ performance, even though the participants do not play the game directly but instead train the agent to do so. Moreover, making this feedback active—sending the trainer her agent’s performance relative to others—induces more participants to train agents longer and further improves the agent’s learning. Our further analysis shows that agents trained by trainers exposed to both the passive and active social feedback achieved higher performance under a score mechanism that can be optimized from the trainer’s perspective, and that the agent’s additional active social feedback kept participants training their agents longer, yielding policies that score higher under such a mechanism.

4.
As computer interfaces can display more life-like qualities such as speech output and personable characters or agents, it becomes important to understand and assess users’ interaction behavior within a social interaction framework rather than only a narrower machine interaction one. We studied how the appearance of a life-like interface agent influenced people’s interaction with it, using a social interaction framework of making and keeping promises to cooperate. Participants played a social dilemma game with a human confederate via realtime video conferencing or with one of three interface agents: a person-like interface agent, a dog-like interface agent, or a cartoon dog interface agent. Technology improvements from a previous version of the human-like interface led to increased cooperation with it; participants made and kept promises to cooperate with the person-like interface agent as much as with the confederate. Dog owners also made and kept promises to dog-like interface agents. General evaluations of likability and appealingness of the interface agent did not lead people to cooperate with it. Our findings demonstrate the importance of placing user interface studies within a social interaction framework as interfaces become more social.  相似文献   

5.
There has been growing interest on agents that represent people’s interests or act on their behalf such as automated negotiators, self-driving cars, or drones. Even though people will interact often with others via these agent representatives, little is known about whether people’s behavior changes when acting through these agents, when compared to direct interaction with others. Here we show that people’s decisions will change in important ways because of these agents; specifically, we showed that interacting via agents is likely to lead people to behave more fairly, when compared to direct interaction with others. We argue this occurs because programming an agent leads people to adopt a broader perspective, consider the other side’s position, and rely on social norms—such as fairness—to guide their decision making. To support this argument, we present four experiments: in Experiment 1 we show that people made fairer offers in the ultimatum and impunity games when interacting via agent representatives, when compared to direct interaction; in Experiment 2, participants were less likely to accept unfair offers in these games when agent representatives were involved; in Experiment 3, we show that the act of thinking about the decisions ahead of time—i.e., under the so-called “strategy method”—can also lead to increased fairness, even when no agents are involved; and, finally, in Experiment 4 we show that participants were less likely to reach an agreement with unfair counterparts in a negotiation setting. We discuss theoretical implications for our understanding of the nature of people’s social behavior with agent representatives, as well as practical implications for the design of agents that have the potential to increase fairness in society.  相似文献   

6.
In this work, we address a relatively unexplored aspect of designing agents that learn from human reward. We investigate how an agent’s non-task behavior can affect a human trainer’s training and agent learning. We use the TAMER framework, which facilitates the training of agents by human-generated reward signals, i.e., judgements of the quality of the agent’s actions, as the foundation for our investigation. Then, starting from the premise that the interaction between the agent and the trainer should be bi-directional, we propose two new training interfaces to increase a human trainer’s active involvement in the training process and thereby improve the agent’s task performance. One provides information on the agent’s uncertainty, a metric calculated from data coverage; the other provides information on its performance. Our results from a 51-subject user study show that these interfaces can induce the trainers to train longer and give more feedback. The agent’s performance, however, increases only in response to the addition of performance-oriented information, not to the sharing of uncertainty levels. These results suggest that the organizational maxim about human behavior, “you get what you measure”—i.e., sharing metrics with people causes them to focus on optimizing those metrics while de-emphasizing other objectives—also applies to the training of agents. Using principal component analysis, we show how trainers in the two conditions train agents differently. In addition, by simulating the influence of the agent’s uncertainty-informative behavior on a human’s training behavior, we show that trainers can be distracted by the agent sharing its uncertainty levels about its actions, giving poor feedback for the sake of reducing the agent’s uncertainty without improving the agent’s performance.
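The TAMER framework referenced above treats the trainer’s judgements as a signal to be modeled directly and acted on greedily, rather than as a discounted return to be maximized. As a rough illustration only (a minimal sketch under invented assumptions: the tabular state space, learning rate, and simulated trainer below are stand-ins, not the authors’ implementation), such a training loop might look like this:

```python
# Illustrative TAMER-style sketch (assumption: not the authors' code).
# The agent fits a table H(s, a) predicting the human trainer's feedback
# and acts greedily on it; the "trainer" is simulated for self-containment.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, LR = 10, 4, 0.2

H = np.zeros((N_STATES, N_ACTIONS))          # predicted human reward

def simulated_trainer(state, action):
    """Stand-in for a human: rewards the action matching state % N_ACTIONS."""
    return 1.0 if action == state % N_ACTIONS else -1.0

state = 0
for step in range(2000):
    # Act greedily w.r.t. the predicted human reward (ties broken randomly).
    best = np.flatnonzero(H[state] == H[state].max())
    action = int(rng.choice(best))

    feedback = simulated_trainer(state, action)

    # Supervised update toward the trainer's scalar feedback, credited to the
    # action just taken (TAMER proper also handles feedback delay).
    H[state, action] += LR * (feedback - H[state, action])

    state = (state + 1) % N_STATES           # toy state transition
```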

7.
Designers of embodied agents constantly strive to create agents that appear more human-like, with the belief that increasing the human-likeness of agents will improve users’ interactions with agents. While designers have focused on visual realism, less attention has been paid to the effects of agents’ behavioral realism on users’ responses. This paper presents an empirical study that compared three theories of agent realism: Realism Maximization Theory, Uncanny Valley Theory, and Consistency Theory. Results of this study showed that people responded best to an embodied agent when it demonstrated moderately realistic, inconsistent behavior. These results support Uncanny Valley Theory and demonstrate the powerful influence of agent behavior on users’ responses.  相似文献   

8.

In this article, we expose some of the issues raised by the critics of the neoclassical approach to rational agent modeling and we propose a formal approach for the design of artificial rational agents that includes some of the functions of emotions found in the human system. We suggest that emotions and rationality are closely linked in the human mind (and in the body, for that matter) and, therefore, need to be included in architectures for designing rational artificial agents, whether these agents are to interact with humans, to model humans' behaviors and actions, or both. We describe an Affective Knowledge Representation (AKR) scheme to represent emotion schemata, which we developed to guide the design of a variety of socially intelligent artificial agents. Our approach focuses on the notion of "social expertise" of socially intelligent agents in terms of their external behavior and internal motivational goal-based abilities. AKR, which uses probabilistic frames, is derived from combining multiple emotion theories into a hierarchical model of affective phenomena useful for artificial agent design. AKR includes a taxonomy of affect, mood, emotion, and personality, and a framework for emotional state dynamics using probabilistic Markov Models.  相似文献   
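As a toy illustration of the emotional-state dynamics that AKR models with probabilistic Markov models (a hedged sketch only: the affective states and transition probabilities below are invented examples, not parameters of the AKR scheme), a first-order Markov chain over affective states can be sampled as follows:

```python
# Illustrative sketch of emotional-state dynamics as a Markov chain
# (assumption: states and probabilities are invented examples).
import numpy as np

rng = np.random.default_rng(42)
STATES = ["neutral", "joy", "distress", "anger"]

# Row i gives P(next state | current state i); each row sums to 1.
TRANSITIONS = np.array([
    [0.70, 0.15, 0.10, 0.05],   # from neutral
    [0.30, 0.60, 0.05, 0.05],   # from joy
    [0.25, 0.05, 0.55, 0.15],   # from distress
    [0.30, 0.05, 0.15, 0.50],   # from anger
])

def step(state_idx):
    """Sample the agent's next emotional state from the current one."""
    return int(rng.choice(len(STATES), p=TRANSITIONS[state_idx]))

state = STATES.index("neutral")
trajectory = []
for _ in range(10):
    state = step(state)
    trajectory.append(STATES[state])
print(" -> ".join(trajectory))
```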

9.
This work addresses the challenge of creating virtual agents that are able to portray culturally appropriate behavior when interacting with other agents or humans. Because culture influences how people perceive their social reality it is important to have agent models that explicitly consider social elements, such as existing relational factors. We addressed this necessity by integrating culture into a novel model for simulating human social behavior. With this model, we operationalized a particular dimension of culture—individualism versus collectivism—within the context of an interactive narrative scenario that is part of an agent-based tool for intercultural training. Using this scenario we conducted a cross-cultural study in which participants from a collectivistic country (Portugal) were compared with participants from an individualistic country (the Netherlands) in the way they perceived and interacted with agents whose behavior was either individualistic or collectivistic, according to the configuration of the proposed model. In the obtained results, Portuguese subjects rated the collectivistic agents more positively than the Dutch but both countries had a similarly positive opinion about the individualistic agents. This experiment sheds new light on how people from different countries differ when assessing the social appropriateness of virtual agents, while also raising new research questions on this matter.  相似文献   

10.
Off-the-shelf conversational agents are permeating people’s everyday lives. In these artificial intelligence devices, trust plays a key role in users’ initial adoption and successful utilization. Factors enhancing trust toward conversational agents include appearances, voice features, and communication styles. Synthesizing such work will be useful in designing evidence-based, trustworthy conversational agents appropriate for various contexts. We conducted a systematic review of the experimental studies that investigated the effect of conversational agents’ and users’ characteristics on trust. From a full-text review of 29 articles, we identified five agent design themes affecting trust toward conversational agents: social intelligence of the agent, voice characteristics and communication style, look of the agent, non-verbal communication, and performance quality. We also found that participants’ demographics, personality, or use context moderate the effect of these themes. We discuss implications for designing trustworthy conversational agents and responsibilities around stereotyping and social norm building through agent design.

11.
Pedagogical agent research seeks to exploit Reeves and Nass's media equation theory, which holds that users respond to interactive media as if they were social actors. Investigations have tended to focus on the media used to realize the pedagogical agent, e.g., the use of animated talking heads and voices, and the results have been mixed. This paper focuses instead on the manner in which a pedagogical agent communicates with learners, i.e., on the extent to which it exhibits social intelligence. A model of socially intelligent tutorial dialog was developed based on politeness theory, and implemented in an agent interface within an online learning system called virtual factory teaching system. A series of Wizard-of-Oz studies was conducted in which subjects either received polite tutorial feedback that promotes learner face and mitigates face threat, or received direct feedback that disregards learner face. The polite version yielded better learning outcomes, and the effect was amplified in learners who expressed a preference for indirect feedback, who had less computer experience, and who lacked engineering backgrounds. These results confirm the hypothesis that learners tend to respond to pedagogical agents as social actors, and suggest that research should focus less on the media in which agents are realized, and place more emphasis on the agent's social intelligence.  相似文献   

12.
In the previous research, we demonstrated that people distinguish between human and nonhuman intelligence by assuming that humans are more likely to engage in intentional goal-directed behaviors than computers or robots. In the present study, we tested whether participants who respond relatively quickly when making predictions about an entity are more or less likely to distinguish between human and nonhuman agents on the dimension of intentionality. Participants responded to a series of five scenarios in which they chose between intentional and nonintentional actions for a human, a computer, and a robot. Results indicated that participants who chose quickly were more likely to distinguish human and nonhuman agents than participants who deliberated more over their responses. We suggest that the short-response time participants were employing a first-line default to distinguish between human intentionality and more mechanical nonhuman behavior, and that the slower, more deliberative participants engaged in deeper second-line reasoning that led them to change their predictions for the behavior of a human agent.  相似文献   

13.
During the 1950s, there was a burst of enthusiasm about whether artificial intelligence might surpass human intelligence. Since then, technology has changed society so dramatically that the focus of study has shifted toward society’s ability to adapt to technological change. Technology and rapid communications weaken the capacity of society to integrate into the broader social structure those people who have had little or no access to education. (Most of the recent use of communications by the excluded has been disruptive, not integrative.) Interweaving of socioeconomic activity and large-scale systems had a dehumanizing effect on people excluded from social participation by these trends. Jobs vanish at an accelerating rate. Marketing creates demand for goods which stress the global environment, even while the global environment no longer yields readily accessible resources. Mining and petroleum firms push into ever more challenging environments (e.g., deep mines and seabed mining) to meet resource demands. These activities are expensive, and resource prices rise rapidly, further excluding groups that cannot pay for these resources. The impact of large-scale systems on society leads to mass idleness, with the accompanying threat of violent reaction as unemployed masses seek to blame both people in power as well as the broader social structure for their plight. Perhaps, the impact of large-scale systems on society has already eroded essential qualities of humanness. Humans, when they feel “socially useless,” are dehumanized. (At the same time, machines (at any scale) seem incapable of emotion or empathy.) Has the cost of technological progress been too high to pay? These issues are addressed in this paper.  相似文献   

14.
Empirical studies have repeatedly shown that autonomous artificial entities, so-called embodied conversational agents, elicit social behavior on the part of the human interlocutor. Various theoretical approaches have tried to explain this phenomenon: According to the Threshold Model of Social Influence (Blascovich et al., 2002), the social influence of real persons who are represented by avatars will always be high, whereas the influence of an artificial entity depends on the realism of its behavior. Conversely, the Ethopoeia concept (Nass & Moon, 2000) predicts that automatic social reactions are triggered by situations as soon as they include social cues. The present study evaluates whether participants’ belief that they are interacting with either an avatar (a virtual representation of a human) or an agent (an autonomous virtual person) leads to different social effects. We used a 2 × 2 design with two levels of agency (agent or avatar) and two levels of behavioral realism (showing feedback behavior versus showing no behavior). We found that the belief of interacting with either an avatar or an agent barely resulted in differences with regard to the evaluation of the virtual character or behavioral reactions, whereas higher behavioral realism affected both. We discuss to what extent the results support the Ethopoeia concept.

15.
As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell et al., 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been demonstrated (Ruffman et al., 2008), with older adults showing a deficit in the recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck and Reichenbach, 2005; Courgeon et al., 2009; Courgeon et al., 2011; Breazeal, 2003); however, little research has compared in depth younger and older adults’ ability to label a virtual agent’s facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand whether those differences are influenced by the intensity of the emotion, the dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition of emotions expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a possible explanation for age-related differences in emotion recognition. First, our findings show age-related differences in the recognition of emotions expressed by a virtual agent, with older adults showing lower recognition for the emotions of anger, disgust, fear, happiness, sadness, and neutral expressions. These age-related differences might be explained by older adults having difficulty discriminating similarities in the configural arrangement of facial features for certain emotions; for example, older adults often mislabeled fear as the similar emotion of surprise. Second, our results did not provide evidence that dynamic formation improves emotion recognition, but, in general, the intensity of the emotion improved recognition. Lastly, we learned that emotion recognition, for older and younger adults, differed by character type, from best to worst: human, synthetic human, and then iCat. Our findings provide guidance for design, as well as the development of a framework of age-related differences in emotion recognition.

16.
FreeWalk is a social interaction platform where people and agents can socially and spatially interact with one another. FreeWalk has evolved to support heterogeneous interaction styles including meetings, cross-cultural encounters, and evacuation drills. Each of them is usually supported by an individual virtual environment. This evolution extended the capability to control social interaction. The first prototype only provides people with an environment in which they can gather to talk with one another while the third prototype provides them with a whole situation to behave according to their assigned roles and tasks. FreeWalk1 is a spatial videoconferencing system. In this system, the positions of participants make spontaneous simultaneous conversations possible. Spatial movements are integrated with video-mediated communication. FreeWalk1 is able to make social interaction more casual and relaxed than telephone-like telecommunication media. In contrast to conventional videoconferencing systems, people formed concurrent multiple groups to greet and chat with others. In FreeWalk2, a social agent acts as an in-between of people to reduce the problem of the low social context in virtual spaces. When the agent notes an awkward pause in a conversation, it approaches those involved in the conversation with a suggestion for a new topic to talk about. We used this agent to support cross-cultural communication between Japan and US. Our agent strongly influenced people's impressions of their partners, and also, their stereotypes about their partner's nationality. FreeWalk3 is a virtual city simulator to conduct virtual evacuation drills. This system brings social interaction into crisis management simulation. People can join a virtual scene of a disaster at home. Social agents can also join to play their roles assigned by simulation designers. The system architecture has a split control interface to divide control of multiple agents into high-level instruction for them and simulation of their low-level actions. The interface helps simulation designers to control many agents efficiently.  相似文献   

17.
The ability to recognize facial emotions is a target behaviour when treating people with social impairment. When assessing this ability, the most widely used facial stimuli are photographs. Although their use has been shown to be valid, photographs are unable to capture the dynamic aspects of human expressions. This limitation can be overcome by creating virtual agents with feasible expressed emotions. The main objective of the present study was to create a new set of dynamic virtual faces with high realism that could be integrated into a virtual reality (VR) cyberintervention to train people with schizophrenia in the full repertoire of social skills. A set of highly realistic virtual faces was created based on the Facial Action Coding System. Facial movement animation was also included so as to mimic the dynamism of human facial expressions. Consecutive healthy participants (n = 98) completed a facial emotion recognition task using both natural faces (photographs) and virtual agents expressing five basic emotions plus a neutral one. Repeated-measures ANOVA revealed no significant difference in participants’ recognition accuracy between the two presentation conditions. However, anger was better recognized in the VR images, and disgust was better recognized in photographs. Age, participant gender, and reaction times were also explored. Implications of the use of virtual agents with realistic human expressions in cyberinterventions are discussed.

18.
We examined how human mental workload and the corresponding eye movement behaviors are affected by the stages and levels of autonomy in routine and autonomy failure conditions in human-autonomy teams (HAT). Thirty participants performed monitoring and diagnosing tasks with the autonomous agent in a three-factor experiment. The factors included information processing stage, level of autonomy, and agent operation condition. The results indicated that the later the agent-supported information processing stage or the higher the autonomy level, the higher the participants’ mental workload following autonomous agent failure. Compared to the continuous manual operation condition, the HAT performance did not decline following autonomous agent failure at the cost of increased mental workload. The eye movement results indicated a top-down compensatory control mechanism of attention, indicating the risk of team performance decline following autonomous agent failure. These findings can be applied in designing autonomous agents and setting human mental workload levels in a HAT.  相似文献   

19.
While Reinforcement Learning (RL) is not traditionally designed for interactive supervisory input from a human teacher, several works in both robot and software agents have adapted it for human input by letting a human trainer control the reward signal. In this work, we experimentally examine the assumption underlying these works, namely that the human-given reward is compatible with the traditional RL reward signal. We describe an experimental platform with a simulated RL robot and present an analysis of real-time human teaching behavior found in a study in which untrained subjects taught the robot to perform a new task. We report three main observations on how people administer feedback when teaching a Reinforcement Learning agent: (a) they use the reward channel not only for feedback, but also for future-directed guidance; (b) they have a positive bias to their feedback, possibly using the signal as a motivational channel; and (c) they change their behavior as they develop a mental model of the robotic learner. Given this, we made specific modifications to the simulated RL robot, and analyzed and evaluated its learning behavior in four follow-up experiments with human trainers. We report significant improvements on several learning measures. This work demonstrates the importance of understanding the human-teacher/robot-learner partnership in order to design algorithms that support how people want to teach and simultaneously improve the robot's learning behavior.  相似文献   
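As a hedged sketch of the setting described above (not the paper’s platform: the tabular task, epsilon-greedy policy, and simulated feedback function are invented stand-ins), the human’s real-time signal can simply take the place of the environment reward in an ordinary Q-learning update:

```python
# Illustrative sketch (assumption: not the paper's platform): tabular
# Q-learning in which the reward term comes from a human trainer's
# real-time feedback instead of the environment.
import numpy as np

rng = np.random.default_rng(1)
N_STATES, N_ACTIONS = 8, 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))

def human_feedback(state, action):
    """Stand-in for the trainer's +1/-1 button presses; a real study would
    read this from an interface, and (per the findings above) the signal
    often carries guidance and a positive bias, not just evaluation."""
    return 1.0 if action == state % N_ACTIONS else -1.0

state = 0
for step in range(5000):
    # Epsilon-greedy action selection.
    if rng.random() < EPSILON:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[state]))

    reward = human_feedback(state, action)          # human reward channel
    next_state = (state + 1) % N_STATES             # toy transition

    # Standard Q-learning update with the human-given reward.
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])
    state = next_state
```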

20.
This paper studies to what extent agent development changes one’s own strategy. While this question has many general implications, it is of special interest to the study of peer designed agents (PDAs), which are computer agents developed by non-experts. This emerging technology has been widely advocated in recent literature for the purpose of replacing people in simulations and investigating human behavior. Its main premise is that strategies programmed into these agents reliably reflect, to some extent, the behavior used by their programmers in real life. We show that PDA development has an important side effect that has not been addressed to date—the process that merely attempts to capture one’s strategy is also likely to affect the developer’s strategy. This result has many implications concerning the appropriate design of PDA-based simulations as well as the validity of using PDAs for studying individual decision making. The phenomenon is demonstrated experimentally, using two very different application domains and several performance measures. Our findings suggest that the effects on one’s strategy arise both in situations where it is potentially possible for people to reason about the optimal strategy (in which case PDA development will enhance the use of an optimal strategy) and in those where calculating the optimal strategy is computationally challenging (in which case PDA development will push people to use more effective strategies, on average). Since in our experiments PDA development actually improved the developer’s strategy, PDA development can be suggested as a means for improving people’s problem-solving skills. Finally, we show that the improvement achieved in people’s strategies through agent development is not attributed to the expressive aspect of agent development per se; rather, there is a crucial additional gain from the process of designing and programming one’s strategy into an agent.
