Similar Literature
A total of 20 similar documents were retrieved.
1.
This paper introduces the integrated system of a smart-device-based cognitive robot partner called iPhonoid-C. Interaction with a robot partner requires many elements, including verbal communication, nonverbal communication, and embodiment. A robot partner should be able to understand human sentences as well as nonverbal information such as human gestures. In the proposed system, the robot has an emotional model connecting input information from the human with the robot's behavior. Since emotions are involved in natural human communication and have a significant impact on human actions, it is important to develop an emotional model for the robot partner to enhance human–robot interaction. In the proposed system, human sentences and gestures influence the robot's emotional state, and the robot then performs gestural and facial expressions and generates sentences according to that state. The proposed cognitive method is validated on a real robot partner.
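A minimal sketch of the kind of emotional model described above: human verbal and gestural inputs shift a two-dimensional (valence, arousal) state, and the robot's expression is selected from that state. All names and numeric constants below are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    valence: float = 0.0   # unpleasant (-1) .. pleasant (+1)
    arousal: float = 0.0   # relaxed (-1) .. excited (+1)

def update_emotion(state, valence_in, arousal_in, decay=0.9, gain=0.3):
    """Blend new input into the state, decaying toward neutral over time."""
    state.valence = max(-1.0, min(1.0, decay * state.valence + gain * valence_in))
    state.arousal = max(-1.0, min(1.0, decay * state.arousal + gain * arousal_in))

def select_behavior(state):
    """Map the emotion quadrant to a gestural/facial expression label."""
    if state.valence >= 0:
        return "happy_gesture" if state.arousal >= 0 else "content_smile"
    return "agitated_gesture" if state.arousal >= 0 else "sad_face"

state = EmotionState()
update_emotion(state, valence_in=0.8, arousal_in=0.5)  # e.g. a friendly greeting
print(select_behavior(state))                          # -> happy_gesture
```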

2.
In human–human communication we adapt to new gestures and new interlocutors using intelligence and contextual information. To achieve natural gesture-based interaction between humans and robots, the system should likewise adapt to new users, gestures, and robot behaviors. This paper presents an adaptive visual gesture recognition method for human–robot interaction using a knowledge-based software platform. The system is capable of recognizing users, static gestures comprising face and hand poses, and dynamic gestures of the face in motion. It learns new users and poses using a multi-cluster approach, and combines computer vision with knowledge-based techniques to adapt to new users, gestures, and robot behaviors. In the proposed method, a frame-based knowledge model is defined for person-centric gesture interpretation and human–robot interaction, implemented using the frame-based Software Platform for Agent and Knowledge Management (SPAK). The effectiveness of the method is demonstrated by an experimental human–robot interaction system using the humanoid robot 'Robovie'.
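As a rough illustration of the multi-cluster idea, the sketch below keeps several prototype clusters per user/pose class and spawns a new cluster whenever a sample is far from all existing prototypes, which is what lets such a system adapt to new users and poses online. The class name, threshold, and distance measure are assumptions, not the SPAK implementation.

```python
import numpy as np

class MultiClusterLearner:
    def __init__(self, new_cluster_threshold=2.0):
        self.clusters = {}   # label -> list of (centroid, sample_count)
        self.threshold = new_cluster_threshold

    def learn(self, label, feature):
        feature = np.asarray(feature, dtype=float)
        protos = self.clusters.setdefault(label, [])
        if protos:
            dists = [np.linalg.norm(feature - c) for c, _ in protos]
            i = int(np.argmin(dists))
            if dists[i] < self.threshold:          # update the nearest prototype
                c, n = protos[i]
                protos[i] = ((c * n + feature) / (n + 1), n + 1)
                return
        protos.append((feature, 1))                # otherwise start a new cluster

    def classify(self, feature):
        feature = np.asarray(feature, dtype=float)
        candidates = ((np.linalg.norm(feature - c), label)
                      for label, protos in self.clusters.items()
                      for c, _ in protos)
        _, best_label = min(candidates, key=lambda t: t[0], default=(None, None))
        return best_label
```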

3.
Given the growth of and competition among mobile messenger applications (MMAs), attracting users' attention and enhancing their loyalty have become major challenges for MMA service providers. This study provides a theoretical view for understanding the mechanisms that lead to user loyalty toward MMAs. Although emotions and the dedication-constraint model are the two main research disciplines through which the formation of user loyalty has been investigated, few studies have unified them. A theoretical model is developed by synthesizing emotional responses with the dedication-constraint model. Based on the ambivalent view of emotions, we examine the distinct effects of positive and negative emotions on user loyalty to MMAs. Moreover, we identify an encompassing set of antecedents to affective and calculative commitment in the MMA context. Structural equation modeling (SEM) is used to test the research model on a sample of 300 KakaoTalk users in South Korea. Our findings reveal that user loyalty to MMAs is jointly shaped by dedication- and constraint-based mechanisms and by emotional responses. Affective commitment significantly influences user loyalty both directly and indirectly through positive emotions. Calculative commitment has significant positive effects on positive emotions and user loyalty, but it is also positively related to negative emotions. Perceived usefulness, perceived enjoyment, and trust significantly influence affective commitment to MMAs, while social norms significantly affect calculative commitment. Theoretical and managerial implications and future research directions are discussed.
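For readers who want to reproduce this kind of analysis, a hypothetical SEM specification in the lavaan-style syntax of the Python semopy package is sketched below. The latent structure follows the model described above, but every indicator name and the CSV file are placeholders, not the study's actual instrument.

```python
import pandas as pd
import semopy

desc = """
# measurement model (latent =~ observed indicators)
AffectiveCommitment =~ ac1 + ac2 + ac3
CalculativeCommitment =~ cc1 + cc2 + cc3
PositiveEmotion =~ pe1 + pe2
NegativeEmotion =~ ne1 + ne2
Loyalty =~ loy1 + loy2 + loy3

# structural model
PositiveEmotion ~ AffectiveCommitment + CalculativeCommitment
NegativeEmotion ~ CalculativeCommitment
Loyalty ~ AffectiveCommitment + CalculativeCommitment + PositiveEmotion + NegativeEmotion
"""

data = pd.read_csv("kakaotalk_survey.csv")  # hypothetical file of item scores
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```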

4.
5.
Remote communication between people typically relies on audio and vision, although current mobile devices increasingly detect touch gestures such as swiping. These gestures could be adapted to interpersonal communication using tactile technology capable of producing touch stimulation on a user's hand. It has been suggested that such mediated social touch would allow new forms of emotional communication. The aim was to study whether vibrotactile stimulation that imitates human touch can convey intended emotions from one person to another. For this purpose, devices were used that converted the touch gestures of squeezing and finger touch into vibrotactile stimulation: when one user squeezed his device or touched it with a finger, the other user felt corresponding vibrotactile stimulation on her device via four vibrating actuators. In an experiment, participant dyads comprising a sender and a receiver communicated variations in the affective dimensions of valence and arousal using the devices. The sender's task was to create stimulation that would convey an unpleasant, pleasant, relaxed, or aroused emotional intention to the receiver. Both the sender and the receiver rated the stimulation on valence and arousal scales so that the match between the sender's intended emotion and the receiver's interpretation could be measured. The results showed that squeezing was better at communicating unpleasant and aroused emotional intentions, while finger touch was better at communicating pleasant and relaxed ones. The results can inform the development of technology that enables people to communicate via touch by choosing the touch gesture that matches the desired emotion.
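The mapping from sensed gesture to the four-actuator array can be pictured with a small sketch like the following; the gesture encoding and the amplitude curves are assumptions made for illustration, not the devices' actual firmware.

```python
def gesture_to_actuators(gesture, intensity):
    """Return per-actuator drive amplitudes (0..1) for a 4-actuator array.

    gesture   -- 'squeeze' (whole-hand) or 'finger' (localized touch)
    intensity -- normalized sensor reading in 0..1
    """
    intensity = max(0.0, min(1.0, intensity))
    if gesture == "squeeze":
        # a squeeze drives all actuators together: broad, high-energy stimulus
        return [intensity] * 4
    if gesture == "finger":
        # a finger touch drives mainly one actuator: gentle, localized stimulus
        return [0.6 * intensity, 0.15 * intensity, 0.0, 0.0]
    raise ValueError(f"unknown gesture: {gesture}")

print(gesture_to_actuators("squeeze", 0.9))  # aroused/unpleasant-leaning stimulus
print(gesture_to_actuators("finger", 0.4))   # relaxed/pleasant-leaning stimulus
```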

6.
Advanced Robotics, 2013, 27(3-4): 363-381
Music has long been used to strengthen bonds between humans. In our research, we develop musical co-player robots in the hope that music may improve human–robot symbiosis as well. In this paper, we underline the importance of non-verbal, visual communication for ensemble synchronization at the start of, during, and at the end of a piece. We propose three cues for inter-player communication, and present a theremin-playing, singing robot that can detect them and adapt its play to a human flutist. Experiments with two naive flutists suggest that the system can recognize naturally occurring flutist gestures without specialized user training. In addition, we show how audio-visual aggregation allows the robot to adapt quickly to tempo changes.

7.
In human–robot interaction scenarios, an intelligent robot should be able to synthesize behavior adapted to the human's profile (i.e., personality). Recent studies have discussed the effect of personality traits on human verbal and nonverbal behavior. The dynamic characteristics of the gestures and postures generated during nonverbal communication can differ according to personality traits, which can similarly influence the verbal content of human speech. This research maps human verbal behavior to a corresponding combined verbal and nonverbal robot behavior based on the extraversion–introversion personality dimension. We explore human–robot personality matching and the similarity-attraction principle, as well as the differing effects on interaction of an adapted combined robot behavior expressed through speech and gestures versus an adapted speech-only robot behavior. Experiments with the humanoid NAO robot are reported.

8.
This article provides the first survey of computational models of emotion in reinforcement learning (RL) agents. The survey focuses on agent/robot emotions and mostly sets aside human user emotions. Emotions are recognized as functional in decision-making by influencing motivation and action selection. Therefore, computational emotion models are usually grounded in the agent's decision-making architecture, of which RL is an important subclass. Studying emotions in RL-based agents is useful for three research fields. For machine learning (ML) researchers, emotion models may improve learning efficiency. For the interactive ML and human–robot interaction community, emotions can communicate state and enhance user investment. Lastly, it allows affective modelling researchers to investigate their emotion theories in a successful class of AI agents. This survey provides background on emotion theory and RL. It systematically addresses (1) from what underlying dimensions (e.g. homeostasis, appraisal) emotions can be derived and how these can be modelled in RL agents, (2) what types of emotions have been derived from these dimensions, and (3) how these emotions may either influence the learning efficiency of the agent or serve as social signals. We also systematically compare evaluation criteria and draw connections to important RL sub-domains such as (intrinsic) motivation and model-based RL. In short, this survey provides a practical overview for engineers wanting to implement emotions in their RL agents, and identifies challenges and directions for future emotion-RL research.
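As a concrete instance of the survey's first question, the sketch below derives a joy/distress-like signal from homeostasis and returns it as an intrinsic reward an RL agent could add to its task reward. The drive variables, set-points, and weights are illustrative assumptions.

```python
import numpy as np

SETPOINTS = np.array([0.8, 0.5])      # desired levels, e.g. energy, safety

def homeostatic_reward(internal_state, prev_state):
    """Intrinsic reward = reduction of drive (distance from the set-points).

    A falling drive reads as positive affect ('joy'); a rising drive as
    negative affect ('distress'), which can double as a social signal.
    """
    drive_now = np.linalg.norm(SETPOINTS - internal_state)
    drive_before = np.linalg.norm(SETPOINTS - prev_state)
    return drive_before - drive_now

prev = np.array([0.4, 0.5])
now = np.array([0.6, 0.5])            # energy recovered toward its set-point
r = homeostatic_reward(now, prev)     # positive -> 'joy'-like signal
print(round(r, 3))
```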

9.
The iCat is a user-interface robot able to express a range of emotions through its facial features. This article summarizes our research into whether we can increase the believability and likability of the iCat for its human partners through the application of gaze behaviour. Gaze behaviour serves several functions during social interaction, such as mediating conversation flow, communicating emotional information, and avoiding distraction by restricting visual input. Several types of eye and head movements are necessary to realize these functions. We designed and evaluated a gaze behaviour system for the iCat that implements realistic models of the major types of eye and head movements found in living beings: vergence, the vestibulo-ocular reflex, smooth pursuit, and gaze shifts. We discuss how these models are integrated into the iCat's software environment and can be used to create complex interaction scenarios. We report on user tests and draw conclusions for future evaluation scenarios.
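A schematic arbitration rule for the movement types listed above might look like the following; the thresholds and the priority ordering are assumptions made for the sketch, not the iCat system's actual parameters.

```python
def select_gaze_mode(target_angle, gaze_angle, target_velocity, head_velocity):
    """Pick a gaze behaviour given angles (deg) and velocities (deg/s)."""
    error = abs(target_angle - gaze_angle)
    if abs(head_velocity) > 5.0:
        return "vestibulo_ocular_reflex"   # counter-rotate eyes to stabilize gaze
    if error > 15.0:
        return "gaze_shift"                # large, fast saccade-like jump
    if abs(target_velocity) > 2.0:
        return "smooth_pursuit"            # track the slowly moving target
    return "vergence"                      # fine depth/fixation adjustment

print(select_gaze_mode(target_angle=30, gaze_angle=0,
                       target_velocity=0, head_velocity=0))  # -> gaze_shift
```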

10.
User psychology is a human–technology interaction research approach that uses psychological concepts, theories, and findings to structure problems of human–technology interaction. As the notion of user experience has become central in human–technology interaction research and in product development, it is necessary to investigate the user psychology of user experience. This analysis of emotional human–technology interaction is based on the psychological theory of basic emotions. Three studies (two laboratory experiments and one field study) are used to investigate the basic emotions and the emotional mind involved in user experience. The first and second experiments study the measurement of subjective emotional experiences during novel human–technology interaction scenarios in a laboratory setting; the third study explores these aspects in a real-world environment. As a result, a bipolar competence–frustration model is proposed that can be used to understand the emotional aspects of user experience.

11.
Because of the inter-wheel coupling and system nonlinearities of a four-wheel-drive robot, even optimal control accuracy for each individual drive motor does not guarantee ideal motion control for the robot as a whole. To address this problem, a robot speed compensation control method based on brain emotional learning is proposed. Building on a computational model of brain emotional learning, a compensation controller is designed that fuses the robot's overall speed-tracking error with its integral and derivative. Online learning of the weights of the nodes inside the computational model adjusts the controller parameters in real time, achieving adaptive compensation of the four wheel speeds. Simulation experiments show that the method effectively reduces the influence of nonlinear disturbances on the system, achieves high steady-state control accuracy and fast response, and greatly improves the robot's overall speed- and trajectory-tracking accuracy.
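A minimal BELBIC-style compensator in the spirit of this controller is sketched below: the sensory inputs are the overall speed-tracking error plus its integral and derivative, and the amygdala and orbitofrontal weights are updated online. The learning gains and the reward definition are illustrative assumptions.

```python
class BELCompensator:
    def __init__(self, n_inputs=3, alpha=0.05, gamma=0.05):
        self.v = [0.0] * n_inputs   # amygdala (excitatory) weights
        self.w = [0.0] * n_inputs   # orbitofrontal (inhibitory) weights
        self.alpha, self.gamma = alpha, gamma

    def step(self, s, reward):
        """s = sensory inputs [error, integral, derivative]; reward = emotional cue."""
        a = sum(si * vi for si, vi in zip(s, self.v))   # amygdala output
        o = sum(si * wi for si, wi in zip(s, self.w))   # orbitofrontal output
        mo = a - o                                      # model output = compensation
        for i, si in enumerate(s):
            self.v[i] += self.alpha * si * max(0.0, reward - a)  # excitatory learning
            self.w[i] += self.gamma * si * (mo - reward)         # inhibitory correction
        return mo

bel = BELCompensator()
# one control tick: error e, its running integral and derivative, and a
# reward signal built from the overall speed-tracking error
e, e_int, e_der = 0.2, 0.05, -0.1
compensation = bel.step([e, e_int, e_der], reward=0.5 * e)
print(compensation)
```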

12.
Service robotics is currently a highly active research area, with enormous societal potential. Since service robots interact directly with people, finding natural and easy-to-use user interfaces is of fundamental importance. While past work has predominantly focused on issues such as navigation and manipulation, relatively few robotic systems are equipped with flexible user interfaces that permit controlling the robot by natural means. This paper describes a gesture interface for the control of a mobile robot equipped with a manipulator. The interface uses a camera to track a person and recognize gestures involving arm motion. A fast, adaptive tracking algorithm enables the robot to track and follow a person reliably through office environments with changing lighting conditions. Two alternative methods for gesture recognition are compared: a template-based approach and a neural network approach. Both are combined with the Viterbi algorithm for the recognition of gestures defined through arm motion (in addition to static arm poses). Results are reported in the context of an interactive clean-up task, where a person guides the robot to specific locations that need to be cleaned and instructs it to pick up trash.
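Since both recognition methods feed their per-frame scores into the Viterbi algorithm, a self-contained decoder is sketched below; the toy gesture-phase states and probabilities are assumptions for illustration.

```python
import numpy as np

def viterbi(log_emis, log_trans, log_prior):
    """log_emis: (T, S) per-frame state scores; returns the best state path."""
    T, S = log_emis.shape
    delta = log_prior + log_emis[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emis[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# toy states: 0 = rest, 1 = arm-raising, 2 = pointing
log_trans = np.log([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3], [0.1, 0.1, 0.8]])
log_prior = np.log([0.9, 0.05, 0.05])
log_emis = np.log(np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2],
                            [0.1, 0.5, 0.4], [0.1, 0.2, 0.7]]))
print(viterbi(log_emis, log_trans, log_prior))  # -> [0, 1, 2, 2]
```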

13.
In addressing user experience issues, users' perceptions and emotions need to be taken into account. This study examines the relationships between perceived usability/aesthetics and emotional valence/arousal/engagement through an experiment using 15 existing websites from various domains and questionnaire items developed to measure users' responses. According to the experimental results, both perceived usability and perceived aesthetics were positively correlated with emotional valence and negatively correlated with emotional engagement. No specific relationship was found between perceived usability/aesthetics and emotional arousal. Perceived aesthetics potentially had a greater impact on valence than perceived usability, whereas engagement appeared to be more influenced by perceived usability than by perceived aesthetics. These findings can serve as a basis for applying users' emotional responses in each dimension to product-use situations in the chain of perceptions, emotions, and behaviors.

14.
Service robots have been developed to assist nurses in routine patient services. Prior research has recognized that patients' emotional experiences with robots may be as important as robot task performance for user acceptance and assessments of effectiveness. The objective of this study was to understand the effect of different service robot interface features on elderly users' perceptions and emotional responses in a simulated medicine delivery task. Twenty-four participants sat in a simulated patient room while a service robot delivered a bag of "medicine" to them. Repeated trials presented variations on three robot features: facial configuration, voice messaging, and interactivity. Participant heart rate (HR) and galvanic skin response (GSR) were collected. Ratings of robot humanness (perceived anthropomorphism, PA) were collected post-trial, along with subjective ratings of arousal (bored–excited) and valence (unhappy–happy) using the self-assessment manikin (SAM) questionnaire. Results indicated that the presence of all three robot features promoted higher PA, arousal, and valence compared with a control condition (a robot without any of the features). Participants' physiological responses varied with events in their interaction with the robot, and the three feature types had different utility for stimulating arousal, valence, and physiological responses. In general, adding anthropomorphic and interactive features to service robots promoted positive emotional responses (increased excitement (GSR) and happiness (HR)) in elderly users. Results from this study could serve as a basis for affective robot interface design guidelines that promote positive user emotional experiences.

15.
In robotics, human–robot interaction has recently been receiving a great deal of attention. In this paper, we describe a multi-modal system for generating a map of the environment through interaction between a human and a home robot. The system enables people to teach a newcomer robot the attributes of objects and places in a room through speech commands and hand gestures. The robot learns the size, position, and topological relations of objects, and produces a map of the room based on knowledge learned through communication with the human. The developed system consists of several components: natural language processing, posture recognition, object localization, and map generation. It combines multiple sources of information with model matching to detect and track a human hand, so that the user can point toward an object of interest and guide the robot either to go near it or to locate that object's position in the room. Object positions are determined by monocular camera vision and a depth-from-focus method.

16.
In this work, we propose a mapping-function-based feature transformation framework for developing a consonant–vowel (CV) recognition system for emotional environments. Expressing emotions is an effective way of conveying messages in human conversation, but the characteristics of CV units differ from one emotion to another, and the performance of existing CV recognition systems degrades in emotional environments. We therefore propose mapping functions based on artificial neural network and GMM models to increase the accuracy of CV recognition in the emotional environment. The proposed mapping functions transform emotional features into neutral features at the CV and phone levels to minimize the mismatch between training and testing environments. Vowel onset and offset points are used to identify vowel, consonant, and transition segments; transition segments are identified as the initial 15% of speech samples between the vowel onset and offset points. The average performance of the CV recognition system increases significantly using feature mapping at the phone level in three emotional environments (anger, happiness, and sadness).
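The ANN variant of the mapping function can be sketched with scikit-learn as a multi-output regressor from emotional-speech features to their neutral counterparts; the parallel MFCC data here is synthetic and the network size is an assumption, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# placeholder parallel features: rows are frames, columns are 13 MFCCs
emotional_feats = rng.normal(size=(500, 13))
neutral_feats = emotional_feats * 0.8 + rng.normal(scale=0.1, size=(500, 13))

mapper = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
mapper.fit(emotional_feats, neutral_feats)   # learn emotional -> neutral map

test_frame = rng.normal(size=(1, 13))        # a frame from, say, angry speech
neutralized = mapper.predict(test_frame)     # feed this to the CV recognizer
```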

17.
McKibben-type pneumatic actuators are widely used for their self-stabilization characteristics, together with a simple actuator model and a simple control method. However, how these characteristics affect the stability of robot motion has not been sufficiently discussed. The purpose of this paper is to analyze how various characteristics of the McKibben pneumatic actuator (MPA) influence the stability of the movements it generates. We first introduce two static MPA models proposed in previous research and verify them through validation experiments; the models are kept as simple as possible for stability analysis. Next, we show experimentally that MPA tension decreases monotonically with contraction velocity. Finally, the model is applied to the same simple robot model as in the previous study, and the stability of motions generated by the actuators is analyzed using control theory. The analysis verifies that the stability of a constant posture follows from the relatively simple static MPA model, the verified tension–velocity dependency, and the interaction between the actuator's properties and the robot's mechanical structure. This result suggests that the properties of the MPA, particularly the verified velocity-dependent property, can contribute to the self-stability of a robot driven by these actuators, and that it is important to consider the interaction between the mechanical structure and the actuator.
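For concreteness, the sketch below implements a common simplified static McKibben model (the Tondu–Lopez form) with a linear velocity-dependent tension drop added to reflect the monotone dependency verified in the paper. All parameter values, and the linear form of the drop, are assumptions rather than the paper's models.

```python
import math

def mpa_tension(pressure, contraction, velocity,
                r0=0.008, alpha0=math.radians(23), k_v=50.0):
    """Tension [N] of a McKibben actuator.

    pressure     -- gauge pressure [Pa]
    contraction  -- contraction ratio epsilon = (L0 - L) / L0
    velocity     -- contraction velocity [1/s]; tension falls as it rises
    r0, alpha0   -- initial inner radius [m] and braid angle [rad]
    """
    a = 3.0 / math.tan(alpha0) ** 2
    b = 1.0 / math.sin(alpha0) ** 2
    static = math.pi * r0 ** 2 * pressure * (a * (1 - contraction) ** 2 - b)
    return max(0.0, static - k_v * velocity)   # monotone tension-velocity drop

print(round(mpa_tension(4e5, contraction=0.1, velocity=0.0), 1))
print(round(mpa_tension(4e5, contraction=0.1, velocity=0.5), 1))  # lower tension
```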

18.
In designing and developing a gesture recognition system, it is crucial to know the characteristics of the gestures selected to control, for example, the end effector of a robot arm. We conducted an experiment to collect a set of user-defined gestures and investigate their characteristics for controlling primitive motions of an end effector in human–robot collaboration. We recorded 152 gestures from 19 volunteers by presenting virtual robotic arm movements to the participants and then asking them to think of and perform gestures that would cause those motions. The hands were the body parts used most often for gesture articulation, even when participants were holding tools and objects with both hands; a number of participants used one- and two-handed gestures interchangeably; gestures were performed consistently across all pairs of reversible gestures; and participants expected better recognition performance for gestures that were easy to think of and perform. These findings are expected to be useful as guidelines for creating gesture sets for controlling robotic arms according to natural user behavior.

19.
For a robot to cohabit with people, it should be able to learn people's nonverbal social behavior from experience. In this paper, we propose a novel machine learning method for recognizing gestures used in interaction and communication. Our method enables robots to learn gestures incrementally during human–robot interaction in an unsupervised manner, and allows the number and types of gestures to be left undefined prior to learning. The proposed method (HB-SOINN) is based on a self-organizing incremental neural network and the hidden Markov model. We added an interactive learning mechanism to HB-SOINN to prevent a single cluster from failing due to polysemy, i.e., being assigned more than one meaning. For example, the sentence "Keep on going left slowly" carries three meanings: "keep on" (1), "going left" (2), and "slowly" (3). We experimentally tested the clustering performance of the proposed method on gesture data recorded with a motion capture device. The results show that the classification performance of HB-SOINN exceeds that of conventional clustering approaches. In addition, we found that the interactive learning function improves the learning performance of HB-SOINN.

20.
Whang MC, Lim JS, Boucsein W. Human Factors, 2003, 45(4): 623-634
Despite rapid advances in technology, computers remain incapable of responding to human emotions. An exploratory study was conducted to determine which physiological parameters might be useful for differentiating among four emotional states along two dimensions: pleasantness versus unpleasantness and arousal versus relaxation. The four emotions were induced by exposing 26 undergraduate students to different combinations of olfactory and auditory stimuli, selected in a pretest from 12 stimuli by subjective ratings of arousal and valence. Changes in electroencephalographic (EEG), heart rate variability, and electrodermal measures were used to differentiate the four emotions. EEG activity separates pleasantness from unpleasantness only in the aroused domain, not in the relaxed domain, where electrodermal parameters are the differentiating measures. All three classes of parameters contribute to separating arousal from relaxation in the positive valence domain, whereas the latency of the electrodermal response is the only differentiating parameter in the negative domain. We discuss how such a psychophysiological approach might be incorporated into a systemic model of a computer responsive to affective communication from the user.
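The differentiation pattern the study reports could be wired into a rule-style classifier like the sketch below; the feature names, the arousal index, and all thresholds are placeholders, not values from the study.

```python
def classify_emotion(eeg_valence_index, eda_amplitude, arousal_index):
    """Four-quadrant emotion guess from normalized physiological features.

    Mirrors the reported pattern: EEG separates valence in the aroused
    domain, while electrodermal measures separate it in the relaxed domain.
    """
    if arousal_index > 0.5:                        # aroused domain
        return ("pleasant-aroused" if eeg_valence_index > 0.5
                else "unpleasant-aroused")
    # relaxed domain: electrodermal parameters differentiate valence
    return "pleasant-relaxed" if eda_amplitude < 0.5 else "unpleasant-relaxed"

print(classify_emotion(eeg_valence_index=0.8, eda_amplitude=0.2,
                       arousal_index=0.7))         # -> pleasant-aroused
```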
