Similar Articles (20 results)
1.
Although humor is well known as a social lubricant that can defuse conflict between two parties, its efficacy in human–robot interaction has barely been tested. This study compared the characteristics of humor performed by a robot and by a human to identify the types of jokes a robot may acceptably tell. In the experiment, a human actor performed disparaging (racist and sexist) jokes and non-disparaging (human-condition and sexual) jokes, and a robot counterpart mimicked the same performance. Fifty-eight university students, 30 male and 28 female, with mean age 23.10 (SD = 2.00), watched the randomly assigned jokes performed either by the robot or by the human actor. The participants rated perceived humorousness, offensiveness, willingness to share the joke with others, and the perceived social presence and social attraction of the actor. The results showed that participants perceived non-disparaging jokes to be more humorous when performed by the human actor. On the other hand, participants exhibited less disgust toward disparaging jokes when they were performed by the robot. This suggests that humor can be an effective way to enrich interaction between humans and robots, but that the acceptable types of humor must be carefully selected.

2.
Book reviews     
《Ergonomics》2012,55(5):531-542
Abstract

Industrial robots often operate at high speed, with unpredictable motion patterns and erratic idle times. Serious injuries and deaths have occurred due to operator misperception of these robot design and performance characteristics. The main objective of the research project was to study human perceptual aspects of hazardous robotics workstations. Two laboratory experiments were designed to investigate workers' perceptions of two industrial robots with different physical configurations and performance capabilities. Twenty-four subjects participated in the study. All subjects were chosen from local industries, and had had considerable exposure to robots and other automated equipment in their working experience. Experiment 1 investigated the maximum speed of robot arm motions that workers, who were experienced with operation of industrial robots, judged to be ‘safe’ for monitoring tasks. It was found that the selection of safe speed depends on the size of the robot and the speed with which the robot begins its operation. Speeds of less than 51 cm/s and 63 cm/s for large and small robots, respectively, were perceived as safe, i.e., ones that did not result in workers feeling uneasy or endangered when working in close proximity to the robot and monitoring its actions. Experiment 2 investigated the minimum value of robot idle time (inactivity) perceived by industrial workers as system malfunction, and an indication of the ‘safe-to-approach’ condition. It was found that idle times of 41 s and 28 s or less for the small and large robots, respectively, were perceived by workers to be a result of system malfunction. About 20% of the workers waited only 10 s or less before deciding that the robot had stopped because of system malfunction. The idle times were affected by the subjects' prior exposure to a simulated robot accident. Further interpretations of the results and suggestions for operational limitations of robot systems are discussed.

3.
Bunraku puppetry, one of the UNESCO intangible cultural heritages, produces some of the most beautiful puppet motions in the world. Bunraku puppet motions can express emotions without evoking the so-called ‘Uncanny Valley.’ We try to convert these emotional motions into robot affective motions so that robots can interact with human beings more comfortably. To that end, in the present paper we present a robot motion design framework using Bunraku affective motions that are based on the so-called ‘Jo-Ha-Kyū,’ and convert a few simple Bunraku motions into robot motions using a deep learning method. Our preliminary experiments show that Jo-Ha-Kyū can be incorporated into robot motion design smoothly, and that some simple affective robot motions can be designed using our proposed framework.

4.
张婷 《系统仿真技术》2013,(4):327-331,338
This paper describes the behavioral responses of children with autism spectrum disorder (ASD) during human–robot interaction with the humanoid robot NAO and in an ordinary classroom setting, and reports evaluation results from both the intervention sessions and the classroom. Extensive experimental results show that the children's autistic behaviors were greatly reduced during NAO-based interactive intervention compared with the ordinary classroom, and that their symptoms were effectively controlled. It can therefore be concluded that the humanoid robot NAO can serve as a platform for intervention therapy for children with autism spectrum disorder.

5.
《Advanced Robotics》2013,27(6):767-784
The purpose of this study is to develop an interactive and emotional robot designed with passive characteristics and a delicate interaction concept using the interaction response ‘bruises and complexion color due to emotional stimuli’. In order to overcome the mismatch of cue realism issues that cause the Uncanny Valley, an emotional interaction robot, Mung, was developed with a simple design composed of a body and two eyes. Mung can recognize human emotions from human–robot or human–human verbal communications. The developed robot expresses its emotions according to its emotional state, modeled by a mass-spring-damper system with an elastic-hysteresis spring. The robot displays a bruise when it is in a negative emotional state, just as a human bruises when physically hurt, and it shows a natural complexion when its emotional wound has healed. The effectiveness of the emotional expression using color, with the concepts of bruise and complexion colors, was assessed qualitatively, and the feasibility of the developed robot was tested in several exhibitions and field trials.
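The mass-spring-damper emotional model described in this abstract can be sketched numerically. The sketch below is a minimal illustration with a plain linear spring (the paper uses an elastic-hysteresis spring); the constants `k`, `c`, `m` and the bruise threshold are assumptions for illustration, not values from the paper.

```python
def emotion_step(x, v, stimulus, k=4.0, c=1.0, m=1.0, dt=0.01):
    """One Euler step of a mass-spring-damper emotional state.
    x: displacement from the neutral mood, v: its rate of change,
    stimulus: external emotional input acting as a force."""
    a = (stimulus - c * v - k * x) / m   # F = ma with spring and damper
    v += a * dt
    x += v * dt
    return x, v

# A sustained negative stimulus pushes the state below neutral (the
# "bruise" zone); once the stimulus stops, damping pulls it back.
x, v = 0.0, 0.0
for _ in range(200):             # 2 s of negative stimulus
    x, v = emotion_step(x, v, stimulus=-5.0)
bruised = x < -0.5               # hypothetical bruise threshold
for _ in range(2000):            # 20 s of recovery, no stimulus
    x, v = emotion_step(x, v, stimulus=0.0)
recovered = abs(x) < 0.05        # complexion back to neutral
```

With these constants the system is underdamped, so the emotional state oscillates slightly while it decays back to neutral, which is one way to read the robot's gradual "healing".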

6.
Investigation into robot-assisted intervention for children with autism spectrum disorder (ASD) has gained momentum in recent years. Therapists involved in interventions must overcome the communication impairments generally exhibited by children with ASD by adeptly inferring the affective cues of the children to adjust the intervention accordingly. Similarly, a robot must also be able to understand the affective needs of these children—an ability that the current robot-assisted ASD intervention systems lack—to achieve effective interaction that addresses the role of affective states in human–robot interaction and intervention practice. In this paper, we present a physiology-based affect-inference mechanism for robot-assisted intervention where the robot can detect the affective states of a child with ASD as discerned by a therapist and adapt its behaviors accordingly. This paper is the first step toward developing “understanding” robots for use in future ASD intervention. Experimental results with six children with ASD from a proof-of-concept experiment (i.e., a robot-based basketball game) are presented. The robot learned the individual liking level of each child with regard to the game configuration and selected appropriate behaviors to present the task at his/her preferred liking level. Results show that the robot automatically predicted individual liking level in real time with 81.1% accuracy. This is the first time, to our knowledge, that the affective states of children with ASD have been detected via a physiology-based affect recognition technique in real time. This is also the first time that the impact of affect-sensitive closed-loop interaction between a robot and a child with ASD has been demonstrated experimentally.

7.
In this article, an android robot head is proposed for stage performances. As is well known, an android robot is a type of humanoid robot which is considered to be more like a human than many other types. An android robot has human-like joint structures and artificial skin, and so is the robot closest to a human in appearance. To date, several android robots have been developed, but most of them have been made for research purposes or exhibitions. In this article, attention is drawn to the more commercial value of an android robot, especially in the acting field. EveR-3, the android robot described here, has already been used in commercial plays in the theater, and through these it has been possible to learn which features of an android robot are necessary for it to function as an actor. A new 9-DOF head has been developed for stage performances. The DOF are reduced when larger motors are used to make exaggerated expressions, because exaggerated expressions are more important on the stage than detailed, complex expressions. LED lights are installed in both cheeks to emphasize emotional expressions by changes in color, in the way that make-up is used to achieve a similar effect on human faces. From these trials, a new head which is more suitable for stage performances has been developed.

8.
Use of technology often has unpleasant side effects, which may include strong, negative emotional states that arise during interaction with computers. Frustration, confusion, anger, anxiety and similar emotional states can affect not only the interaction itself, but also productivity, learning, social relationships, and overall well-being. This paper suggests a new solution to this problem: designing human–computer interaction systems to actively support users in their ability to manage and recover from negative emotional states. An interactive affect–support agent was designed and built to test the proposed solution in a situation where users were feeling frustration. The agent, which used only text and buttons in a graphical user interface for its interaction, demonstrated components of active listening, empathy, and sympathy in an effort to support users in their ability to recover from frustration. The agent's effectiveness was evaluated against two control conditions, which were also text-based interactions: (1) users’ emotions were ignored, and (2) users were able to report problems and ‘vent’ their feelings and concerns to the computer. Behavioral results showed that users chose to continue to interact with the system that had caused their frustration significantly longer after interacting with the affect–support agent, in comparison with the two controls. These results support the prediction that the computer can undo some of the negative feelings it causes by helping a user manage his or her emotional state.

9.
In the previous research, we demonstrated that people distinguish between human and nonhuman intelligence by assuming that humans are more likely to engage in intentional goal-directed behaviors than computers or robots. In the present study, we tested whether participants who respond relatively quickly when making predictions about an entity are more or less likely to distinguish between human and nonhuman agents on the dimension of intentionality. Participants responded to a series of five scenarios in which they chose between intentional and nonintentional actions for a human, a computer, and a robot. Results indicated that participants who chose quickly were more likely to distinguish human and nonhuman agents than participants who deliberated more over their responses. We suggest that the short-response-time participants were employing a first-line default to distinguish between human intentionality and more mechanical nonhuman behavior, and that the slower, more deliberative participants engaged in deeper second-line reasoning that led them to change their predictions for the behavior of a human agent.

10.
11.
Using computers as an assistive technology for people with various types of physical and perceptual disabilities has been studied extensively. However, research on computer technology used by individuals with Down syndrome is limited. This paper reports an empirical study that investigated the use of three input techniques (keyboard and mouse, word prediction, and speech recognition) by children and young adults with Down syndrome and neurotypical children. The results suggest that the performance of the participants with Down syndrome varies substantially. The high-performing participants with Down syndrome are capable of using the keyboard or the word prediction software to generate text at approximately 6 words per minute with error rates below 5%, which is similar to the performance of the younger neurotypical participants. No significant difference was observed between the keyboard condition and the word prediction condition. The recognition error rate observed under the speech input condition is very high for the participants with Down syndrome. The neurotypical children achieved better performance than the participants with Down syndrome on the input tasks and demonstrated different preferences when interacting with the input techniques. Limitations of this study and implications for future research are also discussed.
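As a rough illustration of the word-prediction input technique studied here, the sketch below completes a typed prefix with the most frequent dictionary words. The vocabulary and counts are invented for the example; real word-prediction software also uses sentence context and user adaptation.

```python
def predict(prefix, vocab_freq, n=3):
    """Return the n most frequent vocabulary words starting with prefix."""
    matches = [w for w in vocab_freq if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -vocab_freq[w])[:n]

# hypothetical frequency-ranked vocabulary
vocab = {"the": 100, "they": 40, "there": 30, "then": 25, "robot": 10}
top = predict("the", vocab)   # three best completions of "the"
```

Selecting a predicted word replaces several keystrokes with one, which is why text-entry rate rather than raw typing speed is the relevant comparison in the study.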

12.
The present study investigates how children from two different cultural backgrounds (Pakistani, Dutch) and two different age groups (8- and 12-year-olds) experience interacting with a social robot (iCat) during collaborative game play. We propose a new method to evaluate children’s interaction with such a robot, by asking whether playing a game with a state-of-the-art social robot like the iCat is more similar to playing this game alone or with a friend. A combination of self-report scores, perception test results and behavioral analyses indicate that child–robot interaction in game playing situations is highly appreciated by children, although more by Pakistani and younger children than by Dutch and older children. Results also suggest that children enjoyed playing with the robot more than playing alone, but enjoyed playing with a friend even more. In a similar vein, we found that children were more expressive in their non-verbal behavior when playing with the robot than when they were playing alone, but less expressive than when playing with a friend. Our results not only stress the importance of using new benchmarks for evaluating child–robot interaction but also highlight the significance of cultural differences for the design of social robots.

13.
《Ergonomics》2012,55(10):901-921
To improve the understanding of factors affecting automobile seat cushion comfort in static conditions (i.e. without vibration), relationships between the static physical characteristics of a seat cushion and seat comfort have been investigated. The static seat comfort of four automobile cushions, with the same foam hardness but different foam compositions, was investigated using Scheffé's method of paired comparisons. The comfort judgements were correlated with sample stiffness, given by the gradient of a force-deflection curve at 490 N (= 50 kgf). Samples with lower stiffness were judged to be more comfortable than samples with greater stiffness. A similar comfort evaluation was conducted using five rectangular foam samples of the same composition but different foam hardness (and a wider range than in the first experiment). There was no linear relationship between the sample stiffness and seat comfort for these samples. Static seat cushion comfort seemed to be affected by two factors, a ‘bottoming feeling’ and a ‘foam hardness feeling’. The bottoming feeling was reflected in the sample stiffness when loaded to 490 N, while the foam hardness feeling was reflected in foam characteristics at relatively low forces. The pressures underneath the buttocks of subjects were compared with the comfort judgements. The total pressure over a 4 cm × cm area beneath the ischial bones was correlated with static seat comfort, even when the differences among samples were great; samples with less total pressure in this area were judged to be more comfortable than samples with greater total pressure. It is concluded that the pressure beneath the ischial bones may reflect both comfort factors: the bottoming feeling and the foam hardness feeling.
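The stiffness measure used here, the gradient of the force-deflection curve at 490 N, can be estimated from indentation data as below. The sample data points and the function name are hypothetical, for illustration only.

```python
def stiffness_at(force_target, deflections_mm, forces_N):
    """Estimate cushion stiffness (N/mm) as the local gradient of the
    force-deflection curve at a target force, using the finite-difference
    slope between the two samples that bracket the target."""
    for i in range(1, len(forces_N)):
        if forces_N[i] >= force_target:
            dF = forces_N[i] - forces_N[i - 1]
            dx = deflections_mm[i] - deflections_mm[i - 1]
            return dF / dx
    raise ValueError("target force outside the measured range")

# hypothetical indentation test: deflection (mm) vs. applied load (N)
defl = [0, 10, 20, 30, 40, 50, 60]
load = [0, 60, 150, 280, 430, 520, 700]
k_490 = stiffness_at(490.0, defl, load)   # gradient near 490 N, in N/mm
```

A flatter curve near 490 N (smaller gradient) corresponds to the lower-stiffness samples that the subjects judged more comfortable.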

14.
In this article, we describe and interpret a set of acoustic and linguistic features that characterise emotional/emotion-related user states – confined to the one database processed: four classes in a German corpus of children interacting with a pet robot. To this end, we collected a very large feature vector consisting of more than 4000 features extracted at different sites. We performed extensive feature selection (Sequential Forward Floating Search) for seven acoustic and four linguistic types of features, ending up with a small number of ‘most important’ features which we try to interpret by discussing the impact of different feature and extraction types. We establish different measures of impact and discuss the mutual influence of acoustics and linguistics.
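Sequential Forward Floating Search, the selection procedure named in this abstract, can be sketched in a few lines. This simplified version compares raw subset scores directly when deciding whether to drop a feature (textbook SFFS compares against the best known subset of each size), and the toy score function and feature names are invented.

```python
def sffs(features, score, k):
    """Greedy forward selection with a 'floating' backward step: add the
    best remaining feature, then drop any feature whose removal strictly
    improves the score. `score` maps a frozenset of features to a number."""
    selected = []
    while len(selected) < k:
        # forward step: add the single best remaining feature
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(frozenset(selected + [f])))
        selected.append(best)
        # floating step: conditionally remove features while that helps
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for f in list(selected):
                rest = frozenset(s for s in selected if s != f)
                if score(rest) > score(frozenset(selected)):
                    selected.remove(f)
                    improved = True
                    break
    return selected

# toy score: 'a' and 'b' are individually strong but redundant together
weights = {"a": 3, "b": 3, "c": 2, "d": 1, "e": 0}
def score(subset):
    redundancy = 4 if {"a", "b"} <= subset else 0
    return sum(weights[f] for f in subset) - redundancy

chosen = sffs(list(weights), score, k=3)   # skips the redundant pair
```

The floating step is what distinguishes SFFS from plain forward selection: it can undo an earlier greedy choice once later additions make it redundant.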

15.
Abstract

Searle (1980, 1989) has produced a number of arguments purporting to show that computer programs, no matter how intelligently they may act, lack ‘intentionality’. Recently, Harnad (1989) has accepted Searle’s arguments as having ‘shaken the foundations of Artificial Intelligence’ (p. 5). To deal with Searle’s arguments, Harnad has introduced the need for ‘noncomputational devices’ (e.g. transducers) to realize ‘symbol grounding’. This paper critically examines both Searle’s and Harnad’s arguments and concludes that the foundations of AI remain unchanged by these arguments, that the Turing Test remains adequate as a test of intentionality, and that the philosophical position of computationalism remains perfectly reasonable as a working hypothesis for the task of describing and embodying intentionality in brains and machines.

16.
Recently, researchers have tried to better understand human behaviors so as to let robots act in more human ways, which means a robot may have its own emotions as defined by its designers. To achieve this goal, in this study we designed and simulated a robot, named Shiau_Lu, which is empowered with six universal human emotions: happiness, anger, fear, sadness, disgust and surprise. When we input a sentence to Shiau_Lu through voice, it recognizes the sentence by invoking the Google speech recognition method running on an Android system, and outputs a sentence to reveal its current emotional state. Each input sentence affects the strengths of the six emotional variables used to represent the six emotions, one variable per emotion, after which the emotional variables change to new states. The consequent fuzzy inference process infers and determines the most significant emotion as the primary emotion, with which an appropriate output sentence is chosen from the robot's Output-sentence database as a response to the input. With the new states of the six emotional variables, when the robot encounters another sentence, the above process repeats and another output sentence is selected and replied. Artificial intelligence and psychological theories of human behavior have been applied to the robot to simulate how emotions are influenced by the outside world through language. The robot may also help autistic children interact more with the world around them and relate themselves well to the outside world.
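A minimal sketch of the primary-emotion selection step might look as follows: each of the six emotional variables is fuzzified with a triangular membership function and the emotion with the highest membership degree wins. The membership parameters and the 0–100 scale are assumptions for illustration, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def primary_emotion(state):
    """Fuzzify each emotional variable's strength into a 'high'
    membership degree and return the emotion with the largest degree."""
    degree = {e: tri(v, 40.0, 100.0, 160.0) for e, v in state.items()}
    return max(degree, key=degree.get)

# hypothetical emotional state after processing an input sentence
state = {"happiness": 70.0, "anger": 20.0, "fear": 10.0,
         "sadness": 35.0, "disgust": 5.0, "surprise": 55.0}
dominant = primary_emotion(state)
```

The dominant emotion would then index into the output-sentence database to pick a reply, while the updated variable values carry over to the next input.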

17.
Children's participation in information and communication technology (ICT) design is an established interdisciplinary research field. Methods for children's participation have been developed, but a closer link between theory and design has been called for, as well as an examination of various participants influencing children's participation in ICT design. This paper addresses these gaps by introducing the research strategy of nexus analysis as a promising theoretical framework. Especially the concepts of ‘interaction order’ and ‘historical body’ are utilised in the analysis of six empirical studies on ICT design with children. The analysis shows that through the participating children there were also ‘others’ involved, multiple voices to be heard, often invisible but informing design. Some of these ‘others’ have already been acknowledged in literature but the issue has not been examined in depth and common vocabulary for this is lacking. Some practical implications will be offered by illustrating how to consider these concepts in different phases of ICT design: when establishing relationships with children, involving children as participant designers and analysing the results of these participative processes.

18.
ABSTRACT

The design of humanoid robots’ emotional behaviors has attracted many scholars’ attention. However, users’ emotional responses to humanoid robots’ emotional behaviors, which differ from robots’ traditional behaviors, remain poorly understood. This study aims to investigate the effect of a humanoid robot’s emotional behaviors on users’ emotional responses using subjective reporting, pupillometry, and electroencephalography. Five categories of the humanoid robot’s emotional behaviors expressing joy, fear, neutral, sadness, or anger were designed, selected, and presented to users. Results show that users have a significant positive emotional response to the humanoid robot’s joy behavior and a significant negative emotional response to its sadness behavior, indicated by the metrics of reported valence and arousal, pupil diameter, frontal midline relative theta power, and frontal alpha asymmetry score. The results suggest that a humanoid robot’s emotional behaviors can evoke significant emotional responses in users, and this evocation might relate to the recognition of these emotional behaviors. In addition, the study provides a multimodal physiological method of evaluating users’ emotional responses to a humanoid robot’s emotional behaviors.
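Of the metrics listed, the frontal alpha asymmetry (FAA) score has a compact standard definition: the difference of log alpha-band powers between right and left frontal electrodes (commonly F4 and F3). The band-power values below are made-up numbers for illustration.

```python
import math

def faa_score(left_alpha_power, right_alpha_power):
    """Frontal alpha asymmetry: ln(right) - ln(left) alpha-band power
    (e.g. F4 vs. F3). Because alpha power is inversely related to cortical
    activity, positive values are commonly read as relatively greater
    left-frontal activation (associated with approach/positive affect)."""
    return math.log(right_alpha_power) - math.log(left_alpha_power)

# hypothetical alpha-band powers (in µV²) from two frontal electrodes
score = faa_score(left_alpha_power=4.0, right_alpha_power=6.0)
```

In a study like this one, the FAA score would be computed per trial and compared across the five behavior categories.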

19.
We present a method through which domestic service robots can comprehend natural language instructions. For each action type, a variety of natural language expressions can be used; for example, the instruction ‘Go to the kitchen’ can also be expressed as ‘Move to the kitchen.’ We are of the view that natural language instructions are intuitive and, therefore, constitute one of the most user-friendly robot instruction methods. In this paper, we propose a method that enables robots to comprehend instructions spoken by a human user in his/her natural language. The proposed method combines action-type classification, which is based on a support vector machine, and slot extraction, which is based on conditional random fields, both of which are required in order for a robot to execute an action. Further, by considering the co-occurrence relationship between the action type and the slots along with the speech recognition score, the proposed method can avoid degradation of the robot’s comprehension accuracy in noisy environments, where inaccurate speech recognition can be problematic. We conducted experiments using a Japanese instruction dataset collected using a questionnaire-based survey. Experimental results show that the robot’s comprehension accuracy is higher in a noisy environment using our method than when using a baseline method with only a 1-best speech recognition result.
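The scoring idea in this abstract, combining the action classifier, the slot extractor, action-slot co-occurrence, and the speech-recognition score, can be sketched as a weighted re-ranking of recognition hypotheses. Everything below (function names, scores, weights) is a hypothetical stand-in for the paper's SVM + CRF pipeline.

```python
def best_interpretation(hypotheses, act_score, slot_score, cooc,
                        w=(1.0, 1.0, 1.0)):
    """Re-rank (action, slots, asr_score) hypotheses by a weighted sum of
    action-classifier confidence, slot-extractor confidence, action-slot
    co-occurrence, and the speech recognizer's own score."""
    def total(h):
        action, slots, asr = h
        co = sum(cooc.get((action, s), 0.0) for s in slots)
        return (w[0] * act_score.get(action, 0.0)
                + w[1] * sum(slot_score.get(s, 0.0) for s in slots)
                + w[2] * co
                + asr)
    return max(hypotheses, key=total)

# noisy ASR slightly prefers "grasp", but "go" co-occurs with "kitchen"
hyps = [("go", ("kitchen",), 0.6), ("grasp", ("kitchen",), 0.9)]
act = {"go": 0.8, "grasp": 0.7}
slot = {"kitchen": 0.9}
cooc = {("go", "kitchen"): 1.0, ("grasp", "kitchen"): 0.0}
action, slots, _ = best_interpretation(hyps, act, slot, cooc)
```

The example shows the mechanism the abstract credits for noise robustness: even when the top speech-recognition hypothesis is wrong, a semantically coherent action-slot pairing can outscore it.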

20.
For human–robot interaction to proceed in a smooth, natural manner, robots must adhere to human social norms. One such human convention is the use of expressive moods and emotions as an integral part of social interaction. Such expressions are used to convey messages such as “I’m happy to see you” or “I want to be comforted,” and people’s long-term relationships depend heavily on shared emotional experiences. Thus, we have developed an affective model for social robots. This generative model attempts to create natural, human-like affect and includes distinctions between immediate emotional responses, the overall mood of the robot, and long-term attitudes toward each visitor to the robot, with a focus on developing long-term human–robot relationships. This paper presents the general affect model as well as particular details of our implementation of the model on one robot, the Roboceptionist. In addition, we present findings from two studies that demonstrate the model’s potential.
