Similar Documents
20 similar documents retrieved (search time: 118 ms).
1.
UnWind is a musical biofeedback interface that blends nature sounds and sedative music into a form of New-Age music for relaxation exercises. The nature sounds respond to the user's physiological data, functioning as an informative layer for the biofeedback display, while the sedative music aims to induce calmness and evoke positive emotions. UnWind thus combines the benefits of biofeedback and sedative music to facilitate deep breathing, moderate arousal, and promote mental relaxation. We evaluated UnWind in a 2 × 2 factorial experiment with music and biofeedback as independent factors. After a stressful task, forty young adults each performed the relaxation exercise under one of four conditions: nature sounds only (NS), nature sounds with music (NM), auditory biofeedback with nature sounds (NSBFB), or UnWind musical biofeedback (NMBFB). The results revealed a significant interaction effect between music and biofeedback on the improvement of heart rate variability. The combination of music and nature sounds also showed benefits in lowering arousal and reducing self-reported anxiety. We conclude with a discussion of UnWind for biofeedback and the wider potential of blending nature sounds with music as a musical interface.
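The abstract reports improvements in heart rate variability (HRV) but does not name the metric; RMSSD is a common time-domain HRV index, sketched below as a minimal, hypothetical illustration of how such an outcome measure is computed from R-R intervals — not the paper's actual analysis.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between R-R intervals (ms).
    Higher RMSSD generally indicates greater parasympathetic (relaxation) activity."""
    if len(rr_intervals_ms) < 2:
        raise ValueError("need at least two R-R intervals")
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Example: slightly varying R-R intervals around 800 ms
print(round(rmssd([800, 810, 790, 805, 795]), 2))   # → 14.36
```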

2.
《Advanced Robotics》2013,27(1-2):47-67
Depending on the emotion of speech, the meaning of the speech or the intention of the speaker differs. Therefore, speech emotion recognition, as well as automatic speech recognition, is necessary for precise communication between humans and robots in human–robot interaction. In this paper, a novel feature extraction method is proposed for speech emotion recognition using separation of phoneme classes. In feature extraction, the signal variation caused by different sentences usually overrides the emotion variation, which lowers the performance of emotion recognition. However, because the proposed method extracts features from the parts of speech that correspond to limited ranges of the spectral center of gravity (CoG) and formant frequencies, the effects of phoneme variation on the features are reduced. Based on the range of the CoG, obstruent sounds are discriminated from sonorant sounds. Moreover, the sonorant sounds are categorized into four classes by the resonance characteristics revealed by their formant frequencies. The results show that the proposed method, evaluated on corpora from 30 different speakers, improves emotion recognition accuracy over other methods at the 99% significance level. Furthermore, the proposed method was applied to extract several features, including prosodic and phonetic features, and was implemented on 'Mung' robots as an emotion recognizer for users.
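The spectral center of gravity driving the obstruent/sonorant split can be sketched in a few lines; the 3 kHz threshold and the naive DFT below are illustrative assumptions, not the paper's actual parameters.

```python
import cmath
import math

def spectral_cog(frame, sample_rate):
    """Center of gravity (spectral centroid) of one speech frame, in Hz.
    Naive DFT over the first half of the spectrum; fine for illustration."""
    n = len(frame)
    cog_num = cog_den = 0.0
    for k in range(n // 2):
        # magnitude of the k-th DFT bin
        mag = abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                      for t in range(n)))
        freq = k * sample_rate / n
        cog_num += freq * mag
        cog_den += mag
    return cog_num / cog_den if cog_den else 0.0

def is_obstruent(frame, sample_rate, threshold_hz=3000.0):
    """Obstruents concentrate energy at high frequencies (threshold is hypothetical)."""
    return spectral_cog(frame, sample_rate) > threshold_hz

# A low-frequency sinusoid (sonorant-like) has a low CoG
tone = [math.sin(2 * math.pi * 250 * t / 8000) for t in range(256)]
print(is_obstruent(tone, 8000))   # → False
```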

3.
Auditory displays are developed and investigated for mobile service robots in a human–machine environment. The service robot domain was chosen as an example for future use of auditory displays within multimedia process supervision and control applications in industrial, transportation, and medical systems. The design of directional sounds and of additional sounds for robot states as well as the design of more complicated robot sound tracks are explained. Basic musical elements and robot-movement sounds are combined. Experimental studies on the auditory perception of directional sounds as well as of sound tracks for the predictive display of intended robot trajectories in a simulated supermarket scenario are described.

4.
This research aims to devise an anthropomorphic robotic head with a human-like face and a sheet of artificial skin that can read a randomly composed simplified musical notation and sing the corresponding song. The face robot is composed of an artificial facial skin that can express a number of facial expressions via motions driven by internal servo motors. Two cameras, one installed inside each eyeball, provide the vision capability for reading simplified musical notations. Computer vision techniques are then used to interpret the simplified musical notations and the lyrics of the corresponding songs. Voice synthesis techniques enable the face robot to sing by enunciating synthesized sounds, and its mouth patterns change automatically to match the emotions conveyed by the lyrics. Experiments show that the face robot can successfully read and then accurately sing an arbitrarily assigned song.

5.
When people interact with communication robots in daily life, their attitudes and emotions toward the robots affect their behavior. From the perspective of robotics design, we need to investigate the influences of these attitudes and emotions on human-robot interaction. This paper reports our empirical study on the relationships between people's attitudes and emotions and their behavior toward a robot. In particular, we focused on negative attitudes, anxiety, and communication avoidance behavior, which have important implications for robotics design. For this purpose, we used two psychological scales that we had developed: the Negative Attitudes toward Robots Scale (NARS) and the Robot Anxiety Scale (RAS). In the experiment, subjects and a humanoid robot engaged in simple interactions, including scenes of meeting, greeting, self-disclosure, and physical contact. The experimental results indicated a relationship between negative attitudes and emotions and communication avoidance behavior. A gender effect was also suggested.

6.
CEOs of big companies may travel frequently to convey their philosophies and policies to employees working at worldwide branches. Video technology makes it possible to deliver such lectures anywhere and anytime very easily; however, 2-dimensional video systems lack realism. If natural, realistic lectures could be given through humanoid robots, CEOs would not need to meet the employees in person, saving the time and money spent on travel. We propose a substitute robot for a remote person: a humanoid robot that reproduces the lecturer's facial expressions and body movements, and that can project the lecturer anywhere in the world instantaneously with the feeling of being at a live performance. The development involves two major tasks: facial expression recognition/reproduction and body language reproduction. For the former, we proposed a facial expression recognition method based on a neural network model that recognizes five emotional states in real time: surprise, anger, sadness, happiness, and neutral. We also developed a facial robot to reproduce the recognized emotion on the robot's face, and showed through experiments that the robot could reproduce the speaker's emotions with its face. For the latter, we proposed a degradation control method that reproduces the natural movement of the lecturer even when a robot rotary joint fails. As the fundamental stage of our research on this sub-system, we proposed a control method for the front-view movement model, i.e., a 2-dimensional model.

7.
Recently, researchers have tried to better understand human behaviors so as to let robots act in more human ways, which means a robot may have its own emotions as defined by its designers. To achieve this goal, we designed and simulated a robot, named Shiau_Lu, which is empowered with six universal human emotions: happiness, anger, fear, sadness, disgust, and surprise. When a sentence is spoken to Shiau_Lu, it recognizes the sentence by invoking the Google speech recognition method on an Android system and outputs a sentence that reveals its current emotional state. Each input sentence affects the strength of the six emotional variables used to represent the six emotions, one variable per emotion, after which the variables transition to new states. A fuzzy inference process then determines the most significant emotion as the primary emotion, according to which an appropriate response to the input is chosen from the robot's output-sentence database. With the new states of the six emotional variables, the process repeats when the robot encounters another sentence, and another output sentence is selected in reply. Artificial intelligence and psychological theories of human behavior are applied so that the robot can simulate how emotions are influenced by the outside world through language. The robot may also help autistic children interact more with the world around them and relate better to it.
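The update-then-select loop described above can be sketched as follows; the decay factor, clamping, and per-sentence impact values are hypothetical stand-ins for the paper's fuzzy inference process, shown only to make the state dynamics concrete.

```python
EMOTIONS = ["happiness", "anger", "fear", "sadness", "disgust", "surprise"]

class EmotionState:
    """Six emotional variables in [0, 1]; each input sentence nudges them, then
    the strongest one is chosen as the primary emotion (decay is hypothetical)."""
    def __init__(self, decay=0.9):
        self.values = {e: 0.0 for e in EMOTIONS}
        self.decay = decay

    def update(self, impacts):
        # impacts: mapping emotion -> signed nudge derived from the input sentence
        for e in EMOTIONS:
            v = self.values[e] * self.decay + impacts.get(e, 0.0)
            self.values[e] = min(1.0, max(0.0, v))   # clamp to [0, 1]

    def primary(self):
        return max(EMOTIONS, key=lambda e: self.values[e])

state = EmotionState()
state.update({"happiness": 0.6, "surprise": 0.3})   # e.g. "What a pleasant surprise!"
state.update({"happiness": 0.2})
print(state.primary())   # → happiness
```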

8.
《Advanced Robotics》2013,27(4):311-326
The purpose of this paper is to construct a methodology for smooth communication between humans and robots. The focus here is on a mindreading mechanism, which is indispensable in human-human communication, and we propose a model of utterance understanding based on this mechanism. Concretely speaking, we apply the model of a mindreading system to a model of human-robot communication, and we implement a robot interface system based on the proposed model. The characteristic of our interface system is its ability to construct a relationship between a human and a robot by having an agent that interacts with the person migrate from the person's mobile PC to the robot. Psychological experiments were carried out to test the following hypothesis: by reading a robot's mind based on such a relationship, a person can easily estimate the robot's intention and can even understand the robot's unclear utterances made with synthesized speech sounds. The results of the experiments statistically supported our hypothesis.

9.
10.
An experimental investigation into how the appearance of an agent, such as a robot or PC, affects people's interpretations of the agent's attitudes is presented. People are generally said to create stereotypical behavioral models of agents in their minds based on the agents' appearances, and these appearances significantly affect the way people interact with them. It is therefore important to address the following research question: how does an agent's appearance affect its interactions with people? A preliminary experiment was first conducted to select eight artificial sounds from which people can infer two primitive attitudes (positive or negative). An experiment was then conducted in which participants were presented with the selected artificial sounds through three kinds of agents: a MindStorms robot, an AIBO robot, and a laptop PC, and were asked to select the correct attitudes based on the sounds each agent expressed. The results showed that participants had higher interpretation rates when the PC presented the sounds and lower rates with the MindStorms and AIBO robots, even though the sounds were identical. These results contribute to design policy for interactive agents: what types of appearance should agents have to interact effectively with people, and what kinds of information should they express?

11.

We create robots to help people, and we believe that our robots can be more helpful if they look and behave as humans do. People usually prefer help from those whom they like, and some people are likeable for their personality. A robot, especially an android, may likewise become more likeable if it expresses a likeable personality, so that its help is more readily accepted. As a first step, we investigated how the robotic eyes of an android may express personality. Human eyes are expressive, and we postulate that they express personality. We experimented to see whether our test subjects could perceive distinct personalities in an android we programmed to imitate the eye movements of two people. The results showed that the android did express distinct personalities, though not without limitations.

12.
A team of small, low-cost robots, rather than a single large, complex robot, is useful in operations such as search and rescue and urban exploration. However, the performance of such a team is limited by the restricted mobility of its members. We propose solutions based on physical cooperation among mobile robots to improve overall mobility, focusing on the development of the low-level system components. Recognizing that small robots need to overcome discrete obstacles, we develop specific analytical maneuvers to negotiate each obstacle, where a maneuver is built from a sequence of fundamental cooperative behaviors. In this paper we present cooperative behaviors that are achieved through interactions among robots via un-actuated links, avoiding the need for additional actuation. We analyze the cooperative lift behavior and demonstrate that useful maneuvers such as gap crossing can be built from it. We prove that the requirements on ground friction and wheel torque set fundamental limits for physical cooperation. Using design guidelines based on static analysis, we have developed simple, low-cost hardware to illustrate cooperative gap crossing with two robots. We have also developed a complete dynamic model of two-robot cooperation, which leads to the control design. A novel connecting-link design is proposed that can change the system configuration with no additional actuators. A decentralized control architecture is designed for the two-robot system, in which each robot controls its own state with no information about the state of the other, avoiding the need for continuous communication between the robots. Simulation and hardware results demonstrate a successful implementation on the gap-crossing example. Finally, we analytically prove that robot dynamics can be used to reduce the friction requirements and demonstrate, in simulation, the application of this idea to the cooperative lifting behavior. Correspondence to: Jonathan Luntz.
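The friction and torque limits mentioned above can be illustrated with a simple static feasibility check; the single-robot simplification and all numbers below are hypothetical, not taken from the paper's analysis.

```python
def max_push_force(mass_kg, wheel_radius_m, motor_torque_nm,
                   friction_coeff, g=9.81):
    """Largest horizontal force a wheeled robot can exert before either the
    wheels slip (friction limit) or the motors stall (torque limit)."""
    friction_limit = friction_coeff * mass_kg * g      # traction bound
    torque_limit = motor_torque_nm / wheel_radius_m    # actuation bound
    return min(friction_limit, torque_limit)

def can_perform_maneuver(required_force_n, **robot):
    return max_push_force(**robot) >= required_force_n

# Hypothetical small robot: 2 kg, 3 cm wheels, 0.5 N·m total torque, mu = 0.6
robot = dict(mass_kg=2.0, wheel_radius_m=0.03, motor_torque_nm=0.5,
             friction_coeff=0.6)
print(can_perform_maneuver(10.0, **robot))   # friction bound 11.8 N governs → True
```

The `min` of the two bounds captures the paper's point that either ground friction or wheel torque can be the fundamental limit on physical cooperation.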

13.
Exploring the design space of robots: Children's perspectives
Children's perceptions and evaluations of different robot designs are an important unexplored area within robotics research, considering that many robots are specifically designed for children. To examine children's feelings and attitudes towards robots, a large sample of children (N = 159) evaluated 40 robot images by completing a questionnaire for each image, which enquired about robot appearance, robot personality dimensions, and robot emotions. Results showed that, depending on a robot's appearance, children clearly distinguished robots in terms of their intentions (i.e. friendly vs. unfriendly), their capability to understand, and their emotional expression. A principal components analysis of the children's ratings of the robots' personality attributes revealed two dimensions, labelled 'Behavioural Intention' and 'Emotional Expression'. Robots were classified according to their scores on these two dimensions, and a content analysis of their appearance was conducted to identify salient features of different robot personalities. Children judged human-like robots as aggressive, but human–machine robots as friendly. Results on children's perceptions of the robots' behavioural intentions provided tentative empirical support for the Uncanny Valley hypothesized by Mori (1970), which describes a situation where robots are very human-like yet still distinguishable from humans, evoking a feeling of discomfort or repulsion. The paper concludes with a discussion of design implications for robots and the use of robots in educational contexts.

14.
We describe the design process of an affective control toy, named SenToy, used to control a synthetic character in a computer game. SenToy allows players to influence the emotions of a synthetic character placed in FantasyA, a 3D virtual game. By expressing gestures associated with anger, fear, surprise, sadness, and joy through SenToy, players influence the emotions of the character they control in the game. When designing SenToy, we hypothesized that players would manipulate the toy to express emotions using a particular set of gestures, drawn from the literature on how we express emotions through bodily movements and from emotion theories. To evaluate this idea we performed a Wizard of Oz study [1]. The results show that there are behaviours players readily adopt to express emotions through gestures with the toy, though not necessarily the ones extracted from the literature. The study also provided some indication of what type of toy we should build, in particular its 'look and feel'. Correspondence to: Ms A. Paiva, IST & Instituto de Engenharia de Sistemas e Computadores, Rua Alves Redol 9, 1000 Lisboa, Portugal. Email: Ana.Paiva@inesc.pt

15.

Research in the field of social robotics suggests that enhancing social cues in robots can elicit more social responses from users. It is, however, not clear how users respond socially to persuasive social robots, or whether such reactions are more pronounced when the robots feature more interactive social cues. In the current research, we examine social responses toward persuasive attempts by a robot featuring different numbers of interactive social cues. A laboratory experiment assessed participants' psychological reactance, liking, trusting beliefs, and compliance toward a persuasive robot that presented users with either no interactive social cues (random head movements and random social praise), a low number of interactive social cues (head mimicry), or a high number of interactive social cues (head mimicry and properly timed social praise). Results show that the persuasive robot with the highest number of interactive social cues invoked lower reactance and was liked more than the robots in the other two conditions. Furthermore, the results suggest that trusting beliefs toward persuasive robots can be enhanced by praise, as presented in both the no-cues and high-cues conditions. However, interactive social cues did not contribute to higher compliance.

16.
Bunraku puppets, part of a UNESCO intangible cultural heritage, can perform some of the most beautiful puppet motions in the world, expressing emotions without evoking the so-called 'Uncanny Valley.' We try to convert these emotional motions into affective robot motions so that robots can interact with human beings more comfortably. To that end, in the present paper we present a robot motion design framework using Bunraku affective motions based on the so-called 'Jo-Ha-Kyū,' and convert a few simple Bunraku motions into robot motions using a deep learning method. Our preliminary experiments show that Jo-Ha-Kyū can be incorporated into robot motion design smoothly, and that some simple affective robot motions can be designed using the proposed framework.
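Jo-Ha-Kyū denotes a three-phase progression: a slow beginning (jo), a build-up (ha), and a swift finish (kyū). A toy speed profile in that spirit might look as follows; the breakpoints and speed values are hypothetical and unrelated to the paper's learned model.

```python
def jo_ha_kyu_speed(t, jo_end=0.4, ha_end=0.8):
    """Normalized speed at time t in [0, 1]: slow start (jo), acceleration (ha),
    rapid finish (kyu). Piecewise-linear; breakpoints are hypothetical."""
    t = min(1.0, max(0.0, t))
    if t < jo_end:                       # jo: gentle ramp 0.0 -> 0.3
        return 0.3 * t / jo_end
    if t < ha_end:                       # ha: build-up 0.3 -> 0.7
        return 0.3 + 0.4 * (t - jo_end) / (ha_end - jo_end)
    # kyu: sprint 0.7 -> 1.0
    return 0.7 + 0.3 * (t - ha_end) / (1.0 - ha_end)

# Sample the profile at a few points along the motion
print([round(jo_ha_kyu_speed(t), 2) for t in (0.0, 0.2, 0.6, 0.9, 1.0)])
# → [0.0, 0.15, 0.5, 0.85, 1.0]
```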

17.
This paper deals with spontaneous behavior for cooperation through interaction in a distributed autonomous robot system. Although a human gives the robots evaluation functions for the cooperative relations among them, each robot decides its behavior depending on its environment, its experience, and the behavior of the other robots, and acquires a model of the other robots' behavior through learning. Inspired by biological systems, the robots' behaviors are interpreted as emotional by an observer of the system; in psychology, emotions are considered to play important roles in the generation of motivation and in behavior selection. In this paper, the robots' behaviors are interpreted as follows: each robot feels frustration when its behavior decision does not fit its environment, and it then changes its behavior to alter its situation actively and spontaneously. The results show the potential of emotion-driven intelligent behavior. This work was presented, in part, at the International Symposium on Artificial Life and Robotics, Oita, Japan, February 18–20, 1996.

18.
A facial expression emotion recognition based human-robot interaction (FEER-HRI) system is proposed, for which a four-layer system framework is designed. The FEER-HRI system enables robots not only to recognize human emotions but also to generate facial expressions that adapt to human emotions. A facial emotion recognition method based on 2D Gabor features, the uniform local binary pattern (LBP) operator, and a multiclass extreme learning machine (ELM) classifier is presented and applied to real-time facial expression recognition on the robots. The robots' own facial expressions are represented by simple cartoon symbols displayed on an LED screen, which are easily understood by humans. Four scenarios, i.e., guiding, entertainment, home service, and scene simulation, are performed in the human-robot interaction experiment, in which smooth communication is realized through facial expression recognition of humans and facial expression generation by robots within 2 seconds. Prospective applications of the FEER-HRI system include home service, smart homes, safe driving, and so on.
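The uniform LBP operator in the pipeline above can be illustrated on a single 3×3 patch; this is a generic textbook sketch, not the paper's exact implementation (neighbor ordering and radius may differ).

```python
def lbp_code(patch):
    """Basic 8-neighbour LBP code of the centre pixel of a 3x3 patch.
    Neighbours are read clockwise from the top-left; bit = 1 if >= centre."""
    c = patch[1][1]
    ring = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, p in enumerate(ring):
        if p >= c:
            code |= 1 << i
    return code

def is_uniform(code):
    """'Uniform' patterns have at most two 0/1 transitions in the circular code;
    uniform-LBP histograms keep these patterns and pool all the rest."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

patch = [[9, 9, 9],
         [1, 5, 9],
         [1, 1, 1]]            # a bright edge across the patch
code = lbp_code(patch)
print(code, is_uniform(code))  # → 15 True
```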

19.
To address spatial conflicts among multiple robots in dynamic, unstructured environments, a multi-robot conflict resolution method based on emotion values is proposed. The method lets each robot autonomously determine its avoidance radius with respect to other robots according to its emotion value, with no need to preset fixed collision-avoidance priorities or to negotiate between robots. Simulation results show that the method is an effective approach to multi-robot conflict resolution.
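The abstract does not give the emotion-to-radius mapping; the linear mapping and the yielding rule below are hypothetical illustrations of how a locally computed, emotion-dependent avoidance radius removes the need for fixed priorities or negotiation.

```python
def avoidance_radius(emotion, r_min=0.2, r_max=1.0):
    """Map an emotion value in [0, 1] to an avoidance radius in metres:
    a more 'agitated' robot keeps a larger clearance (mapping is hypothetical)."""
    e = min(1.0, max(0.0, emotion))
    return r_min + (r_max - r_min) * e

def who_yields(emotion_a, emotion_b):
    """The robot with the larger avoidance radius gives way; no negotiation or
    fixed priority is needed, since each radius is computed locally."""
    return "A" if avoidance_radius(emotion_a) > avoidance_radius(emotion_b) else "B"

print(avoidance_radius(0.5))   # → 0.6
print(who_yields(0.8, 0.3))    # → A
```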

20.
This study examined the effect of adding an emotion regulation feature to fitness trackers. Applying the theoretical framework of emotion regulation, we argue that such a feature can mitigate tracker users' downward emotions when they fail to meet their fitness goals, so that users remain continuously motivated to meet those goals. To answer our hypotheses and research questions, we conducted a 2 (emotional intensity: low vs. high) × 3 (emotion regulation strategy: no regulation vs. cognitive change vs. attention deployment) online between-subjects experiment (N = 228). Our results indicate that the emotion regulation function successfully regulated users' downward emotions, which enhanced their state psychological well-being and perceived self-efficacy for exercise, and in turn facilitated more favorable fitness outcomes. We discuss design implications based on these results.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号