Similar documents
A total of 20 similar documents were found (search time: 577 ms).
1.
As robots move into more human-centric environments, we require methods to develop robots that can interact naturally with humans. Doing so requires testing in the real world and addressing multidisciplinary challenges. Our research is focused on child–robot interaction, which includes very young children, for example toddlers, and children diagnosed with autism. More traditional forms of human–robot communication, such as speech or gesture recognition, may not be appropriate for these users, whereas touch may provide a more natural and appropriate means of communication in such instances. In this paper, we present our findings on these topics obtained from a project involving a spherical robot that acquires information about natural touch by analysing sensory patterns over time to characterize that information. More specifically, from this project we have derived important factors for future consideration; we describe our iterative experimental methodology of testing in and out of the ‘wild’ (lab-based and real-world), and outline the discoveries that were made by doing so.
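The abstract does not spell out how the sensory patterns are analysed; as a purely illustrative sketch (the feature names and thresholds below are assumptions, not taken from the project), a touch event sampled as a pressure time series might be characterised roughly as follows:

```python
import numpy as np

def characterize_touch(pressure, dt=0.01):
    """Summarize a single touch event from a sampled pressure trace.

    pressure : 1-D array of sensor readings for one contact episode
    dt       : sample period in seconds
    Returns a crude label plus the features it was derived from.
    """
    duration = len(pressure) * dt
    peak = float(np.max(pressure))
    mean = float(np.mean(pressure))

    # Illustrative thresholds only -- a real system would learn these
    # from labelled interaction data rather than hard-coding them.
    if duration < 0.15 and peak > 0.6:
        label = "tap"
    elif duration > 1.0 and peak > 0.8:
        label = "squeeze/hold"
    else:
        label = "stroke"
    return {"label": label, "duration_s": duration, "peak": peak, "mean": mean}

# Example: a short, sharp contact sampled at 100 Hz
trace = np.concatenate([np.linspace(0, 0.9, 5), np.linspace(0.9, 0, 5)])
print(characterize_touch(trace))
```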

2.
As humanoid social robots have developed rapidly in recent years and have been tested in social situations, comparing them to humans provides insights into practical as well as philosophical concerns. This study uses the theoretical framework of communication constraints, derived from human–human communication research, to compare whether people apply social-oriented constraints and task-oriented constraints differently to human targets versus humanoid social robot targets. A total of 230 students from the University of Hawaii at Manoa participated in the study. The participants completed a questionnaire that measured their concern for the five communication constraints (feelings, non-imposition, disapproval, clarity, and effectiveness) in situations involving humans or robots. The results show that people were more concerned with avoiding hurting a human's feelings, avoiding inconveniencing a human interactive partner, and avoiding being disliked by a human, and less concerned with avoiding hurting a robot's feelings, avoiding inconveniencing a robot partner, and avoiding being disliked by a robot. However, people did not differ in their concern for the two task-oriented constraints (clarity and effectiveness) in response to humans versus humanoid robots. The results suggest that people are more likely to emphasize the social-oriented constraints in communication with humans.

3.
4.
The automatic tendency to anthropomorphize our interaction partners and to draw on experience acquired in earlier interaction scenarios suggests that social interaction with humanoid robots is more pleasant and intuitive than that with industrial robots. An objective method for evaluating the quality of human–robot interaction is based on the phenomenon of motor interference (MI): face-to-face observation of a different (incongruent) movement by another individual leads to higher variance in one's own movement trajectory. In social interaction, MI is a consequence of the tendency to imitate the movements of other individuals and goes along with mutual rapport, a sense of togetherness, and sympathy. Although MI occurs when observing a human agent, it disappears in the case of an industrial robot moving with piecewise constant velocity. Using a robot with a human-like appearance, a recent study revealed that its movements led to MI only if they were based on human prerecordings (biological velocity), but not on a constant (artificial) velocity profile. However, it remained unclear which aspect of the prerecorded human movement triggered MI: the biological velocity profile or the variability in the movement trajectory. To investigate this issue, we applied a quasi-biological minimum-jerk velocity profile (excluding variability in the movement trajectory as an influencing factor of MI) to the motion of a humanoid robot, which was observed by subjects performing congruent or incongruent arm movements. The increase in variability of subjects' movements occurred both for observation of a human agent and for the robot performing incongruent movements, suggesting that an artificial human-like movement velocity profile is sufficient to facilitate the perception of humanoid robots as interaction partners.
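The minimum-jerk profile mentioned here has a well-known closed form; the sketch below generates such a point-to-point trajectory and its bell-shaped velocity profile (the exact parameterisation applied to the robot in the study is an assumption on my part):

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=100):
    """Point-to-point minimum-jerk trajectory of duration T seconds.

    Returns (t, position, velocity); the velocity is the smooth,
    bell-shaped profile characteristic of human reaching movements.
    """
    t = np.linspace(0.0, T, n)
    tau = t / T
    pos = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    vel = (xf - x0) / T * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
    return t, pos, vel

t, pos, vel = minimum_jerk(x0=0.0, xf=0.4, T=1.2)   # e.g. a 40 cm arm movement
print(f"peak velocity: {vel.max():.3f} m/s at t = {t[vel.argmax()]:.2f} s")
```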

5.
The objective of this study was to examine the extent to which a model of linguistic etiquette in human–human interaction could be applied to the human–robot interaction (HRI) domain, and how different etiquette strategies proposed through the model might influence the performance of humans and robots, as mediated by manipulations of robot physical features, in a simulated medicine delivery task. A “wizard of Oz” experiment was conducted in which either a humanoid robot or a mechanical-looking robot presented medicine-reminding utterances (following different etiquette strategies) to participants who were engaged in a primary cognitive task (a Sudoku puzzle). Results revealed that the etiquette model partially extends to the HRI domain. Participants were not sensitive to positive language from robots (e.g., appreciation of human values/wants), and such a strategy did not succeed in supporting or enhancing the “positive face” of human users. Both a “bald” strategy (no linguistic courtesy) and a mixed strategy (saving both positive face and “negative face”, i.e., minimizing user imposition) resulted in moderate user-perceived etiquette (PE) scores. However, individual differences suggested that such robot linguistic strategies should be applied with caution. In contrast, a negative-face-saving strategy (supporting user freedom of choice) promoted user task and robot performance (in terms of user response time to robot requests) and resulted in the highest PE score. There was also evidence that humanoid robot features provide additional social cues that may be used by patients and support human and robot performance, but not PE. These results provide a basis for determining appropriate etiquette strategies and robot appearance to promote better collaborative task performance in future health care delivery applications of service robots.

6.
Robots have been envisaged as both workers and partners of humans from the earliest period of their history. Therefore, robots should become artificial entities that can interact socially with human beings in social communities. Recent advances in technology have added various functions to robots. The development of actuators and grippers shows us infinite possibilities for factory automation, and robots can now walk and perform very smoothly. All of these functions have been developed as solutions for improving robot movement and performance. However, many problems remain in communication between robots and humans. Communication robots provide one approach to the realization of embodied interfaces. Furthermore, the unsolved problems of human–robot communication can be clarified by adopting the concept of subtractive methods. In this article, we consider the minimal design of robots from the viewpoint of designing communication. By minimal design, we mean eliminating the nonessential portions and keeping only the most fundamental functions. We expect that the simple and clean nature of minimally designed objects will allow humans to interact with these robots without losing interest too quickly. By exploiting the fact that humans have “a natural dislike for the absence of reasoning,” artificial entities built according to minimal design principles can draw out the human drive to relate with others. We propose a method of designing a robot that has “character” and is situated in a social context from the viewpoint of minimal design. This work was presented in part at the 10th International Symposium on Artificial Life and Robotics, Oita, Japan, February 4–6, 2005.

7.
In this article, a novel form of human–machine interaction based on the machine's recognition of human intention is presented. This work is motivated by the desire that intelligent machines such as robots imitate human–human interaction, that is, minimize the need for a classical direct human–machine interface and communication. A philosophical and technical background for intention recognition is discussed. Here, the intention–action–state scenario is modified and modeled with Dynamic Bayesian Networks to facilitate probabilistic intention inference. The recognized intention then drives the interactive behavior of the machine so that it complies with the human intention in light of the real state of the world. An illustrative example of a human commanding a mobile robot remotely is given and discussed in detail.
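The abstract names Dynamic Bayesian Networks without giving the model; the following is a minimal forward-filtering sketch over an invented two-intention toy model (the states, probabilities, and observations are illustrative assumptions, not the paper's network):

```python
import numpy as np

# Hypothetical two-intention model: the human either wants the robot
# to "approach" a target or to "stay".  At each step we observe a noisy
# cue (e.g. a joystick nudge) and update the belief over intentions.
INTENTIONS = ["approach", "stay"]
TRANSITION = np.array([[0.9, 0.1],      # P(intent_t = j | intent_{t-1} = i)
                       [0.2, 0.8]])
OBS_MODEL  = np.array([[0.8, 0.2],      # P(obs | intent): obs 0 = "nudge forward"
                       [0.3, 0.7]])     #                  obs 1 = "no input"

def filter_step(belief, obs):
    """One forward-filtering step of the (toy) intention DBN."""
    predicted = TRANSITION.T @ belief          # time update
    posterior = predicted * OBS_MODEL[:, obs]  # measurement update
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])
for obs in [0, 0, 1, 0]:                       # a short stream of observed cues
    belief = filter_step(belief, obs)
    print(dict(zip(INTENTIONS, belief.round(3))))
```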

8.
In this paper we argue that substitution-based function allocation methods (such as MABA-MABA, or Men-Are-Better-At/Machines-Are-Better-At lists) cannot provide progress on human–automation co-ordination. Quantitative ‘who does what’ allocation does not work because the real effects of automation are qualitative: it transforms human practice and forces people to adapt their skills and routines. Rather than re-inventing or refining substitution-based methods, we propose that the more pressing question on human–automation co-ordination is ‘How do we make them get along together?’

9.
It is remarkable how much robotics research is promoted by appealing to the idea that the only way to deal with a looming demographic crisis is to develop robots to look after older persons. This paper surveys and assesses the claims made on behalf of robots in relation to their capacity to meet the needs of older persons. We consider each of the roles that has been suggested for robots in aged care and attempt to evaluate how successful robots might be in these roles. We do so from the perspective of writers concerned primarily with the quality of aged care, paying particular attention to the social and ethical implications of the introduction of robots, rather than from the perspective of robotics, engineering, or computer science. We emphasise the importance of the social and emotional needs of older persons (which, we argue, robots are incapable of meeting) in almost any task involved in their care. Even if robots were to become capable of filling some service roles in the aged-care sector, economic pressures on the sector would most likely ensure that the result was a decrease in the amount of human contact experienced by older persons being cared for, which would itself be detrimental to their well-being. This means that the prospects for the ethical use of robots in the aged-care sector are far more limited than they first appear. More controversially, we believe that it is not only misguided, but actually unethical, to attempt to substitute robot simulacra for genuine social interaction. A subsidiary goal of this paper is to draw attention to the discourse about aged care and robotics and to locate it in the context of broader social attitudes towards older persons. We conclude by proposing a deliberative process involving older persons as a test for the ethics of the use of robots in aged care. We dedicate this paper to the memory of Jean Woodroffe, whose strength and courage at the end of her life journey inspired the authors' interest in aged-care issues.

10.
Reduced uncertainty through human communication in complex environments
This paper describes and analyzes the central role of human–human communication in a dynamic, high-risk environment. The empirical example is a UN peace-enforcing and peace-keeping operation in which uncertainty about the situation in the environment and about the organization's own capability were intertwined, requiring extensive control activities and, hence, special attention to communication between humans. Theoretically, the focus lies on what efficient communication means, how to understand and use social relations, and how to use technology to make socio-technical systems cooperative systems as well. We conclude that “control” is largely based on the ability to communicate and that efficient human–human communication is grounded in relations between individuals, which preferably should be based on physical meetings. Uncertainty, and how humans cope with it through interpersonal communication, is exemplified and discussed. In theoretical terms, relating the study to systems science and its application in organizational life and cognitive engineering, the case illustrates that an organization is not only an economy but also an adaptive social structure. But neither cognition nor control is an end state. The organization's raison d'être in this kind of operation is cooperation rather than confrontation. Its use of force is strictly regulated by Rules of Engagement (ROE). In the organization, strong emotions may govern, interpersonal trust can be built, and rule-sets for further cooperation established. Without considering the power of such aspects, economic rationality and detached cognitive thinking may end up in perfect, but less relevant, support technologies where people act in roles rather than as whole persons.

11.
A basic goal in human–robot interaction is to establish a communication mode between the two parties that humans perceive as effective and natural: effective in the sense of being responsive to the information needs of the humans, and natural in the sense of communicating information in modes familiar to humans. This paper sets out the framework for a robot guide for visitors in art collections and other assistive environments, which incorporates the principles of effectiveness and naturalness. The human–robot interaction takes place in natural language in the form of a dialogue session during which the robot describes exhibits but also recommends exhibits that might be of interest to the visitors. It is also possible for the robot to explain its reasoning to the visitors, with a view to increasing transparency and consequently trust in the robot's suggestions. Furthermore, the robot leads the visitors to the location of the desired exhibit. The framework is general enough to be implemented on different hardware, including portable computational devices. The framework is based on a cognitive model comprising four modules: a reactive, a deliberative, a reflective, and an affective one. An initial implementation of a dialogue system realising this cognitive model is presented.
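The four-module cognitive model is only outlined in the abstract; the skeleton below is a hypothetical illustration of how reactive, deliberative, reflective, and affective modules could be composed in such a guide robot (all interfaces and data are assumptions, not the paper's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Visitor:
    interests: list = field(default_factory=list)
    mood: str = "neutral"

class ReactiveModule:
    """Immediate, shallow responses (greetings, safety stops)."""
    def respond(self, utterance):
        return "Hello, welcome to the collection!" if "hello" in utterance.lower() else None

class DeliberativeModule:
    """Plans which exhibits to describe or recommend."""
    def recommend(self, visitor, exhibits):
        return [e for e in exhibits if set(e["tags"]) & set(visitor.interests)]

class ReflectiveModule:
    """Explains the robot's own reasoning to increase transparency."""
    def explain(self, exhibit, visitor):
        shared = set(exhibit["tags"]) & set(visitor.interests)
        return f"I suggested '{exhibit['name']}' because you mentioned {', '.join(sorted(shared))}."

class AffectiveModule:
    """Adapts the delivery to the visitor's apparent mood."""
    def adjust_tone(self, text, visitor):
        return text + " I hope you enjoy it!" if visitor.mood == "positive" else text

# A minimal interaction turn that exercises all four modules
exhibits = [{"name": "Sunflowers", "tags": ["van Gogh", "still life"]},
            {"name": "Guernica", "tags": ["Picasso", "war"]}]
visitor = Visitor(interests=["van Gogh"], mood="positive")
reactive, deliberative = ReactiveModule(), DeliberativeModule()
reflective, affective = ReflectiveModule(), AffectiveModule()

print(reactive.respond("Hello robot"))
for exhibit in deliberative.recommend(visitor, exhibits):
    print(affective.adjust_tone(reflective.explain(exhibit, visitor), visitor))
```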

12.
This paper develops a semantics with control over scope relations, using Vermeulen's stack-valued assignments as information states. This makes available a limited form of scope reuse and name switching. The goal is to have a general system that fixes the available scoping effects to those characteristic of natural language. The resulting system is called Scope Control Theory, since it provides a theory about what scope has to be like in natural language. The theory is shown to replicate a wide range of grammatical dependencies, including options for, and constraints on, ‘donkey’, ‘binding’, ‘movement’, ‘Control’, and ‘scope marking’ dependencies.

13.
For a robot to cohabit with people, it should be able to learn people's nonverbal social behavior from experience. In this paper, we propose a novel machine learning method for recognizing gestures used in interaction and communication. Our method enables robots to learn gestures incrementally during human–robot interaction in an unsupervised manner. It allows the user to leave the number and types of gestures undefined prior to learning. The proposed method (HB-SOINN) is based on a self-organizing incremental neural network and the hidden Markov model. We have added an interactive learning mechanism to HB-SOINN to prevent a single cluster from failing as a result of polysemy, i.e., being assigned more than one meaning. For example, the sentence “Keep on going left slowly” carries three meanings: “keep on” (1), “going left” (2), and “slowly” (3). We experimentally tested the clustering performance of the proposed method on data obtained by measuring gestures with a motion capture device. The results show that the classification performance of HB-SOINN exceeds that of conventional clustering approaches. In addition, we found that the interactive learning function improves the learning performance of HB-SOINN.
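HB-SOINN itself is not reproduced here; the sketch below only illustrates the core idea behind a self-organizing incremental network, inserting a new node when an input lies too far from all existing nodes and otherwise nudging the nearest node toward it (the threshold, learning rate, and synthetic data are arbitrary illustrative choices):

```python
import numpy as np

class IncrementalClusterer:
    """Toy SOINN-style clusterer: the number of clusters is not fixed in advance."""

    def __init__(self, new_node_threshold=1.0, learning_rate=0.1):
        self.nodes = []                      # cluster prototypes, grown on the fly
        self.threshold = new_node_threshold
        self.lr = learning_rate

    def update(self, x):
        x = np.asarray(x, dtype=float)
        if not self.nodes:
            self.nodes.append(x.copy())
            return 0
        dists = [np.linalg.norm(x - n) for n in self.nodes]
        winner = int(np.argmin(dists))
        if dists[winner] > self.threshold:   # novel pattern -> create a new cluster
            self.nodes.append(x.copy())
            return len(self.nodes) - 1
        self.nodes[winner] += self.lr * (x - self.nodes[winner])  # adapt existing cluster
        return winner

# Feed in two well-separated synthetic "gesture feature" streams
rng = np.random.default_rng(0)
clusterer = IncrementalClusterer()
data = np.vstack([rng.normal([0, 0], 0.2, (20, 2)), rng.normal([5, 5], 0.2, (20, 2))])
labels = [clusterer.update(x) for x in data]
print("clusters discovered:", len(clusterer.nodes))
```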

14.
In order to satisfy the need for enhanced user affinity for robots, we are attempting to give robots a “consciousness” such as that identified in humans and animals. We developed software to control a robot's actions, including emotion, by introducing an evaluation function for action choice into a hierarchical structure model; this connected the robot's consciousness with its actions. We named the process Consciousness-based Architecture (CBA). However, it is difficult to change the consciousness of the robot using this CBA model alone. In order to induce and change the robot's consciousness and actions autonomously, some motivation is required. Therefore, a motivation model has been developed to induce and change them autonomously, and it is combined with CBA. To test CBA including the motivation model, a robot arm (Conbe-I) was developed with a small Web camera built into its fingers. CBA was installed on the Conbe-I, and the autonomous actions performed to catch an object were examined. The motivation model was devised to describe the robot's interest in the target object and the robot's desire. To build this motivation model, we studied the action of dopamine, which adds activity to the robot in conjunction with the incentive to perform an action. In this paper, we describe the expression of emotion by a robot that incorporates this motivation model into the Conbe-I. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.
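Neither CBA nor the motivation model is specified in detail in the abstract; the sketch below is a speculative illustration of how a dopamine-like motivation signal might rise with incentive and reward, decay over time, and gate action selection (the constants and the gating rule are assumptions, not taken from the paper):

```python
def step_motivation(motivation, incentive, reward, decay=0.05, gain=0.3):
    """One update of a dopamine-like motivation level in [0, 1].

    incentive : salience of the target object currently seen by the camera
    reward    : 1.0 if the last grasp attempt succeeded, else 0.0
    """
    motivation += gain * (incentive + reward)   # dopamine-like boost
    motivation -= decay                          # passive decay toward apathy
    return min(max(motivation, 0.0), 1.0)

def choose_action(motivation, act_threshold=0.5):
    """Action selection gated by the motivation level."""
    return "reach_for_object" if motivation >= act_threshold else "observe"

motivation = 0.2
for t, (incentive, reward) in enumerate([(0.3, 0), (0.6, 0), (0.7, 1), (0.1, 0)]):
    motivation = step_motivation(motivation, incentive, reward)
    print(f"t={t}: motivation={motivation:.2f} -> {choose_action(motivation)}")
```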

15.
This article describes a multimodal command language for home robot users, and a robot system that interprets users' messages in the language through microphones, visual and tactile sensors, and control buttons. The command language comprises a set of grammar rules, a lexicon, and nonverbal events detected in hand gestures, in readings of tactile sensors attached to the robots, and in buttons on the controllers in the users' hands. Prototype humanoid systems that immediately execute commands in the language are also presented, along with preliminary experiments on face-to-face interactions and teleoperations. Subjects unfamiliar with the language were able to command the humanoids and complete their tasks with brief documents at hand, given a short demonstration beforehand. The command understanding system, operating on PCs, responded to multimodal commands without significant delay. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.

16.
An important aspect of present-day humanoid robot research is to make such robots look realistic and human-like, both in appearance and in motion and mannerism. In this paper, we focus our study on advanced control leading to realistic motion coordination of a humanoid robot's neck and eyes while tracking an object. The motivating application for such control is conversational robotics, in which a robot head “actor” should be able to detect and make eye contact with a human subject. In such a scenario, the 3D position and orientation of an object of interest in space should be tracked by the redundant head–eye mechanism partly through its neck and partly through its eyes. We propose an optimization approach, combined with real-time visual feedback, to generate realistic robot motion and make it robust. We also present experimental results showing that the neck–eye motion obtained from the proposed algorithm is realistic compared to the head–eye motion of humans.
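The optimization formulation is not given in the abstract; as a hedged illustration, the sketch below splits a desired gaze shift between the neck and the eyes by minimizing a weighted quadratic effort, one common way to obtain human-looking head–eye coordination (the weights and the eye range are assumptions):

```python
def split_gaze_shift(target_deg, w_neck=3.0, w_eye=1.0, eye_limit_deg=35.0):
    """Split a desired horizontal gaze shift between neck and eyes.

    Minimises the quadratic effort  w_neck*neck^2 + w_eye*eye^2
    subject to  neck + eye = target, then clamps the eye contribution
    to its mechanical range and hands the remainder to the neck.
    """
    eye = target_deg * w_neck / (w_neck + w_eye)     # eyes are 'cheaper', so they move more
    eye = max(-eye_limit_deg, min(eye_limit_deg, eye))
    neck = target_deg - eye
    return neck, eye

for target in (10.0, 40.0, 90.0):                    # small, medium, large gaze shifts
    neck, eye = split_gaze_shift(target)
    print(f"target {target:5.1f} deg -> neck {neck:5.1f} deg, eyes {eye:5.1f} deg")
```

With these illustrative weights the eyes carry most of a small gaze shift and the neck takes over only for large shifts, which qualitatively matches human head–eye behaviour.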

17.
The current concept of robots has been greatly influenced by the image of robots from science fiction. Since robots were introduced into human society as partners with humans, the importance of human–robot interaction has grown. In this paper, we have designed seven musical sounds for the English-teacher robot Silbot: five that express intention and two that express emotion. To identify the sound design considerations, we analyzed the sounds of the robots R2-D2 and Wall-E from two popular movies, Star Wars and Wall-E, respectively. From the analysis, we found that intonation, pitch, and timbre are the dominant musical parameters for expressing intention and emotion. To check the validity of the designed sounds, we performed a recognition rate experiment. The experiment showed that the five sounds designed for intentions and the two designed for emotions are sufficient to convey the intended intentions and emotions.
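The designed sounds themselves are not reproduced here; the snippet below merely illustrates how simple “intention” cues can be synthesized by varying intonation (a gliding pitch contour), using NumPy and Python's standard wave module (the frequencies and contours are invented, not Silbot's actual sounds):

```python
import numpy as np
import wave

def tone_with_contour(f_start, f_end, duration=0.4, rate=44100, amp=0.4):
    """Sine tone whose pitch glides from f_start to f_end (a simple intonation cue)."""
    t = np.linspace(0.0, duration, int(rate * duration), endpoint=False)
    freq = np.linspace(f_start, f_end, t.size)            # linear pitch contour
    phase = 2 * np.pi * np.cumsum(freq) / rate            # integrate frequency to get phase
    return (amp * np.sin(phase) * 32767).astype(np.int16)

# Rising intonation is often heard as a question/prompt, falling as confirmation.
cues = {"prompt.wav": (440, 880), "confirm.wav": (660, 330)}
for name, (f0, f1) in cues.items():
    with wave.open(name, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)            # 16-bit samples
        wf.setframerate(44100)
        wf.writeframes(tone_with_contour(f0, f1).tobytes())
```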

18.
Socio-ethics covers the relation of the individual with the group and with society, as the individual acquires the skills for social life with others and the conduct of ‘normal responsible behaviour’ (Leal in AI Soc 9:29–32, 1995) that guides moral action. To consider what it means to be socially skilled in everyday human interaction, and the ethical issues arising from the new conditions of interaction that come with the integration of intelligent interactive artefacts, we provide an analysis of these phenomena at multiple levels and draw on a variety of application domains, for example healthcare and interactive media.

19.
Traditional human–computer interaction (HCI) allowed researchers and practitioners to share and rely on the ‘five E's’ of usability: the principle that interactive systems should be designed to be effective, efficient, engaging, error tolerant, and easy to learn. A recent trend in HCI, however, is that academic researchers as well as practitioners are becoming increasingly interested in user experiences, i.e., in understanding and designing for relationships between users and artifacts that are, for instance, affective, engaging, fun, playable, sociable, creative, involving, meaningful, exciting, ambiguous, and curious. In this paper, it is argued that built into this shift in perspective is a concurrent shift in accountability that is drawing attention to a number of ethical, moral, social, cultural, and political issues that have traditionally been de-emphasized in a field of research guided by usability concerns. Not surprisingly, this shift in accountability has also received scant attention in HCI. To be able to find any answers to the question of what makes a good user experience, the field of HCI needs to develop a philosophy of technology. One building block for such a philosophy of technology in HCI is presented here. Albert Borgmann argues that we need to be cautious and rethink the relationship, as well as the often-assumed correspondence, between what we consider useful and what we think of as good in technology. This junction (that some technologies may be both useful and good, while technologies that are useful for some purposes might also be harmful, or less good, in a broader context) is at the heart of Borgmann's understanding of technology. Borgmann's notion of the device paradigm is a valuable contribution to HCI, as it points out that we increasingly experience the world with, through, and by information technologies, and that most of these technologies tend to be designed to provide commodities that effortlessly grant our wishes without demanding anything in return, such as patience, skill, or effort. This paper argues that Borgmann's work is relevant and makes a valuable contribution to HCI in at least two ways: first, as a different way of seeing that raises important social, cultural, ethical, and moral issues from which contemporary HCI cannot escape; and second, as guidance on how specific values might be incorporated into the design of interactive systems that foster engagement with reality.

20.
WOZ experiments for understanding mutual adaptation
A robot that is easy to teach not only has to be able to adapt to humans but also has to be easy to adapt to. In order to develop a robot with mutual adaptation ability, we believe it will be beneficial to first observe the mutual adaptation behaviors that occur in human–human communication. In this paper, we propose a human–human WOZ (Wizard-of-Oz) experimental setting that can help us observe and understand how the mutual adaptation process occurs between human beings in nonverbal communication. By analyzing the experimental results, we obtained three important findings: alignment-based action, symbol-emergent learning, and environmental learning.
