Similar Literature
20 similar records found (search time: 31 ms)
1.
Future robots will work in hospitals, elderly care centers, schools, and homes. Similarity with humans can facilitate interaction with a variety of users who don't have robotics expertise, so it makes sense to take inspiration from humans when developing robots. However, humanlike appearance can also be deceiving, convincing users that robots can understand and do much more than they actually can. Developing a humanlike appearance must go hand in hand with increasing robots' cognitive, social, and perceptive capabilities. This installment of Trends & Controversies explores different aspects of human-inspired robots.

2.
In affective robot research, facial expressions with distinct personalities are an important basis for enhancing an emotional robot's realism. To give an emotional robot richer and more delicate expressions, human personality traits are introduced into the robot: by analyzing personality theory and emotion-model theory, the emotional intensities of robots with different personalities are obtained. Combined with the mapping between facial expressions in the Facial Action Coding System and the robot's control points, a method for realizing the basic expressions of emotional robots with different personalities is derived. A face model of the emotional robot was built in SolidWorks, and the face model of the SHFR-Ⅲ emotional robot was set up as an elastic body in the ANSYS engineering software. Using finite-element simulation, the expression-synthesis method was explored, yielding the control-region loads and simulation results for realizing the basic expressions of SHFR-Ⅲ under different personalities. Finally, expression experiments with different personalities were conducted on the SHFR-Ⅲ emotional robot according to the simulation results. The experimental results show that finite-element expression simulation can guide the SHFR-Ⅲ emotional robot to realize humanlike basic facial expressions with different personalities.

3.
《Advanced Robotics》2013,27(4):341-355
The human face serves a variety of different communicative functions in social interaction. The face mediates person identification, the perception of emotional expressions and lipreading. Perceiving the direction of social attention, and facial attractiveness, also affects interpersonal behaviour. This paper reviews these different uses made of facial information, and considers their computational demands. The possible link between the perception of faces and deeper levels of social understanding is emphasized through a discussion of developmental deficits affecting social cognition. Finally, the implications for the development of communication between robots and humans are discussed. It is concluded that it could be useful both for robots to understand human faces, and also to display humanlike facial gestures themselves.

4.
As virtual humans approach photorealistic perfection, they risk making real humans uncomfortable. This intriguing phenomenon, known as the uncanny valley, is well known but not well understood. In an effort to demystify the causes of the uncanny valley, this paper proposes several perceptual, cognitive, and social mechanisms that have already helped address riddles like empathy, mate selection, threat avoidance, cognitive dissonance, and psychological defenses. In the four studies described herein, a computer generated human character's facial proportions, skin texture, and level of detail were varied to examine their effect on perceived eeriness, human likeness, and attractiveness. In Study I, texture photorealism and polygon count increased human likeness. In Study II, texture photorealism heightened the accuracy of human judgments of ideal facial proportions. In Study III, atypical facial proportions were shown to be more disturbing on photorealistic faces than on other faces. In Study IV, a mismatch in the size and texture of the eyes and face was especially prone to make a character eerie. These results contest the depiction of the uncanny valley as a simple relation between comfort level and human likeness. This paper concludes by introducing a set of design principles for bridging the uncanny valley.

5.
Past work on creating robots that can make convincing emotional expressions has concentrated on the quality of those expressions, and on assessing people's ability to recognize them in neutral contexts, without any strong emotional valence. It would be interesting to find out whether observers' judgments of the facial cues of a robot would be affected by a surrounding emotional context. This paper takes its inspiration from the contextual effects found on our interpretation of the expressions on human faces and computer avatars, and looks at the extent to which they also apply to the interpretation of the facial expressions of a mechanical robot head. The kinds of contexts that affect the recognition of robot emotional expressions, the circumstances under which such contextual effects occur, and the relationship between emotions and the surrounding situation, are observed and analyzed. Design implications for believable emotional robots are drawn.

6.
The term "uncanny valley" goes back to an article by the Japanese roboticist Masahiro Mori (Mori 1970, 2005). He put forward the hypothesis that humanlike objects, such as certain kinds of robots, elicit emotional responses similar to those elicited by real humans, in proportion to their degree of human likeness. Yet once a certain degree of similarity is reached, the emotional response abruptly turns to repulsion. The corresponding dip in the supposed function is called the uncanny valley. The present paper proposes a philosophical explanation of why we feel empathy with inanimate objects in the first place, and why the uncanny valley occurs when these objects become very humanlike. The core of this explanation, which is informed by recently developing empirical research on the matter, is a form of empathy involving a kind of imaginative perception. However, as will be shown, imaginative perception fails in the case of very humanlike objects.

7.
A robot's face is its symbolic feature, and facial expressions are its best means of conveying emotional information to people. Moreover, a robot's facial expressions play an important role in human-robot emotional interaction. This paper proposes general rules for the design and realization of expressions in mascot-type facial robots, which are developed to evoke friendly feelings in humans. The number and type of control points for the six basic expressions or emotions were determined through a questionnaire. A linear affect-expression space model is provided to realize continuous and varied expressions effectively, and the effectiveness of the proposed method is shown through experiments using a simulator and an actual robot system.
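The linear affect-expression space idea above can be illustrated with a small sketch. The basis matrix, the number of control points, and every displacement value below are hypothetical placeholders, not the paper's measured mapping; the sketch only shows how a linear map lets emotion intensity vectors blend into continuous expressions.

```python
import numpy as np

# Hypothetical sketch: an emotion intensity vector over 6 basic emotions is
# mapped linearly onto displacements of facial control points, so blended
# emotions produce continuously interpolated expressions.

N_EMOTIONS = 6          # happiness, sadness, anger, fear, surprise, disgust
N_CONTROL_POINTS = 4    # e.g. brows, eyelids, mouth corners, jaw (illustrative)

# Each column holds the control-point displacements for one pure emotion at
# full intensity (all values are made up for illustration).
EXPRESSION_BASIS = np.array([
    [ 0.0, -0.5,  0.8,  0.6,  1.0,  0.2],   # brow raise/furrow
    [ 0.1,  0.3, -0.2,  0.9,  0.7,  0.4],   # eyelid opening
    [ 1.0, -0.8, -0.4, -0.3,  0.2, -0.6],   # mouth-corner lift
    [ 0.3,  0.1,  0.5,  0.2,  0.9,  0.1],   # jaw drop
])

def affect_to_expression(emotion):
    """Map an emotion intensity vector (values in [0, 1]) to control points."""
    emotion = np.clip(np.asarray(emotion, dtype=float), 0.0, 1.0)
    return EXPRESSION_BASIS @ emotion

# Pure full-intensity happiness reproduces the first basis column.
happy = affect_to_expression([1, 0, 0, 0, 0, 0])

# A 50/50 happiness-surprise blend is the linear interpolation of the two.
blend = affect_to_expression([0.5, 0, 0, 0, 0.5, 0])
print(happy, blend)
```

Because the map is linear, any point in the affect space yields a well-defined expression, which is the property the paper exploits for continuous expression control.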

8.
This study investigates the relationship between facial attractiveness and facial proportions. We generated facial images with different proportions using computer software, thus avoiding the influence of hairstyle, facial expression, skin tone, and texture on the perception of facial attractiveness. By analyzing the relationship between the facial proportions of 432 computer-generated facial images and their attractiveness ratings, we identified the optimum proportions for an attractive female face and further established a model that predicts facial attractiveness from four principal components of facial proportions with good predictability (R² = 0.64).
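A rough sketch of the pipeline described above (principal components of facial proportions feeding a linear predictor of ratings) can be written as follows. The data here are synthetic random stand-ins, not the paper's 432 faces, so the resulting R² is illustrative only.

```python
import numpy as np

# Illustrative sketch (not the paper's actual data or model): reduce facial
# proportion measurements to a few principal components, then fit a linear
# model from those components to attractiveness ratings.

rng = np.random.default_rng(0)

n_faces, n_proportions = 432, 10
X = rng.normal(size=(n_faces, n_proportions))        # synthetic proportion vectors

# Synthetic "ground truth": ratings depend on the proportions plus noise.
true_w = rng.normal(size=n_proportions)
y = X @ true_w + 0.5 * rng.normal(size=n_faces)      # attractiveness ratings

# PCA via SVD of the centered data; keep the top 4 components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:4]                                   # (4, n_proportions)
scores = Xc @ components.T                            # (n_faces, 4)

# Least-squares fit: ratings ~ intercept + 4 principal-component scores.
A = np.column_stack([np.ones(n_faces), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```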

9.
For effective interaction between humans and socially adept, intelligent service robots, a key capability required of this class of sociable robots is the successful interpretation of visual data. Beyond crucial techniques such as human face detection and recognition, an important next step for enabling intelligence and empathy in social robots is emotion recognition. In this paper, an automated, interactive computer vision system is investigated for human facial expression recognition and tracking based on facial structure features and movement information. Twenty facial features are adopted because they are informative and prominent enough to reduce ambiguity during classification. An unsupervised learning algorithm, distributed locally linear embedding (DLLE), is introduced to recover the inherent properties of scattered data lying on a manifold embedded in high-dimensional facial images, and the selected person-dependent facial expression images in a video are classified using DLLE. In addition, facial expression motion energy is introduced to describe facial muscle tension during expressions for person-independent tracking and recognition; this method takes advantage of optical flow, which tracks the movement of feature points. Finally, experimental results show that the approach separates different expressions successfully.
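The motion-energy idea can be sketched compactly: given feature-point trajectories such as optical flow would supply, sum the squared frame-to-frame displacements. The function and the toy trajectories below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical sketch of "motion energy": given tracked positions of facial
# feature points across frames (as optical flow would supply), sum the squared
# displacement magnitudes to describe muscle movement over time.

def motion_energy(tracks):
    """tracks: array (n_frames, n_points, 2) of feature-point positions.
    Returns per-frame motion energy, shape (n_frames - 1,)."""
    flow = np.diff(tracks, axis=0)             # frame-to-frame displacements
    return np.sum(flow ** 2, axis=(1, 2))      # sum of squared magnitudes

# Toy data: 3 frames, 2 feature points; one point moves 1 px right per frame.
tracks = np.array([
    [[0.0, 0.0], [5.0, 5.0]],
    [[1.0, 0.0], [5.0, 5.0]],
    [[2.0, 0.0], [5.0, 5.0]],
])
print(motion_energy(tracks))   # → [1. 1.]
```

A still face yields zero energy, while large expression changes spike it, which is what makes the quantity usable as a person-independent descriptor.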

10.
Taking the term "companion" in a broad sense to include robot caregivers, playmates, assistive devices, and toys, we examine ethical issues that emerge from designing companion robots for children. We focus on the relative importance and potential ethical implications of creating robots with certain types of esthetic features. We include an examination of whether robots ought to be made to appear or act humanlike, and whether robots should be gendered. In our estimation, this line of ethical inquiry may even provide insight into the nature and appropriateness of existing institutions and widely accepted interactions among human beings.

11.
The attractiveness of human faces can be predicted with a high degree of accuracy if we represent the faces as feature vectors and compute their relative distances from two prototypes: the average of attractive faces and the average of unattractive faces. Moreover, the degree of attractiveness, defined in terms of the relative distance, exhibits a high degree of correlation with the average rating scores given by human assessors. These findings motivate a bi-prototype theory that relates facial attractiveness to the averages of attractive and unattractive faces rather than the average of all faces, as previously hypothesized by some researchers.
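The relative-distance scoring at the heart of the bi-prototype theory is easy to sketch. The two prototype vectors below are made-up two-dimensional stand-ins for the averages of attractive and unattractive face feature vectors.

```python
import numpy as np

# Minimal sketch of the bi-prototype idea: score a face by its relative
# distance to two prototypes, the average attractive face and the average
# unattractive face (feature vectors here are invented for illustration).

def biprototype_score(face, proto_attractive, proto_unattractive):
    """Return a score in [0, 1]; higher = closer to the attractive prototype."""
    d_a = np.linalg.norm(face - proto_attractive)
    d_u = np.linalg.norm(face - proto_unattractive)
    return d_u / (d_a + d_u)      # relative distance

proto_a = np.array([1.0, 1.0])    # stand-in: average attractive face
proto_u = np.array([-1.0, -1.0])  # stand-in: average unattractive face

print(biprototype_score(proto_a, proto_a, proto_u))      # → 1.0 (at attractive prototype)
print(biprototype_score(np.zeros(2), proto_a, proto_u))  # → 0.5 (midway between both)
```

Note the contrast with a single-prototype (averageness) account: the score depends on where a face sits between the two class averages, not on its distance from the grand mean of all faces.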

12.
This paper describes a parallel computing system and a software algorithm for real-time interaction between a human user and a synthesized, humanlike, moving image. The realistic humanlike agent on a monitor can recognize the palm position and finger motion (finger sign) of the user. She/he then tracks and gazes at the hand position and changes her/his facial expression in response to the finger sign in real time. This visual software agent (VSA) is expected to play an important role in building an advanced human interface; we regard this type of interactive agent as a visual software robot. To achieve real-time image recognition and image synthesis, we have developed a parallel visual computer system, the transputer network with visual interface to transputers (TN-VIT). The image-synthesis speed of the TN-VIT is about 24 frames/s, including finger-sign recognition. Some samples of synthesized images and experimental results are shown.

13.
The ability to recognize facial emotions is a target behaviour when treating people with social impairment. When assessing this ability, the most widely used facial stimuli are photographs. Although their use has been shown to be valid, photographs cannot capture the dynamic aspects of human expressions. This limitation can be overcome by creating virtual agents with believable expressed emotions. The main objective of the present study was to create a new set of highly realistic dynamic virtual faces that could be integrated into a virtual reality (VR) cyberintervention to train people with schizophrenia in the full repertoire of social skills. A set of highly realistic virtual faces was created based on the Facial Action Coding System, with facial movement animation included to mimic the dynamism of human facial expressions. Consecutive healthy participants (n = 98) completed a facial emotion recognition task using both natural faces (photographs) and virtual agents expressing five basic emotions plus a neutral one. Repeated-measures ANOVA revealed no significant difference in recognition accuracy between the two presentation conditions; however, anger was better recognized in the VR images, and disgust was better recognized in photographs. Age, participant gender, and reaction times were also explored. Implications of using virtual agents with realistic human expressions in cyberinterventions are discussed.

14.
A facial expression emotion recognition based human-robot interaction (FEER-HRI) system is proposed, for which a four-layer system framework is designed. The FEER-HRI system enables the robots not only to recognize human emotions but also to generate facial expressions for adapting to human emotions. A facial emotion recognition method based on 2D Gabor features, the uniform local binary pattern (LBP) operator, and a multiclass extreme learning machine (ELM) classifier is presented and applied to real-time facial expression recognition on the robots. Facial expressions of the robots are represented by simple cartoon symbols displayed on an LED screen equipped in the robots, which humans can easily understand. Four scenarios (guiding, entertainment, home service, and scene simulation) are performed in the human-robot interaction experiment, in which smooth communication is realized by facial expression recognition of humans and facial expression generation of robots within 2 seconds. As prospective applications, the FEER-HRI system can be applied to home service, smart homes, safe driving, and so on.

15.
Robotics researchers and cognitive scientists are becoming more and more interested in so-called sociable robots. These machines normally have expressive power (facial features, voice, etc.) as well as abilities for locating, paying attention to, and addressing people. The design objective is to make robots that are able to sustain natural interactions with people, a capacity that falls within what is classed as social intelligence in humans. This position paper argues that reproducing social intelligence, as opposed to other types of human ability, may lead to fragile performance, in the sense that tested cases may perform rather differently from future (untested) cases and situations. This limitation stems from the fact that our social abilities, which appear early in life, are mainly unconscious in origin, in contrast with other human abilities that we carry out using conscious effort and for which we can easily conceive algorithms and representations. This novel perspective is deemed useful for defining the obstacles and limitations of a field that is generating increasing interest. Taking these issues into account, a development approach suited to the problem is proposed, and its use is demonstrated in the development of CASIMIRO, a robotic head with basic interaction abilities.

16.
杨璞  易法令  刘王飞  杨远发 《微机发展》2006,16(11):131-133
The human face is an important channel of human communication, carrying speech and complex expressions such as joy, anger, sorrow, and delight. The construction and deformation of realistic 3D face models is therefore an active research topic in computer graphics, and generating realistic facial expressions and motions on a 3D face model is one of its difficult problems. This paper introduces the Dirichlet Free-Form Deformation (DFFD) algorithm, based on Delaunay triangulation and the Dirichlet/Voronoi diagram, to address this problem. The DFFD technique is described in detail and applied to deform a generic face according to the MPEG-4 facial definition parameters (FDP). A hierarchical control scheme combining FDPs with facial animation parameters (FAP) is also proposed; this two-level arrangement of control points produces smooth deformations of the 3D face model, so that various facial expressions can be rendered smoothly and accurately.

17.
Multimodal interfaces incorporating embodied conversational agents enable the development of novel concepts for interaction management in responsive human–machine interfaces. Such interfaces provide several additional nonverbal communication channels, such as natural visualized speech, facial expression, and different body motions. In order to simulate reactive humanlike communicative behavior and attitude, the realization of motion relies on different behavioral analyses and realization tactics. This article proposes a novel environment for "online" visual modeling of humanlike communicative behavior, named EVA-framework. In this study we focus on visual speech and nonverbal behavior synthesis using hierarchical XML-based behavioral events and expressively adjustable motion templates. The main goal of the presented abstract motion notation scheme, named EVA-Script, is to enable the synthesis of unique and responsive behavior.

18.
Cognitive capabilities such as perception, reasoning, learning, and planning turn technical systems into systems that "know what they are doing." Starting from the human brain, the Cluster of Excellence "CoTeSys" investigates cognition for technical systems such as vehicles, robots, and factories. Technical systems that are cognitive will be much easier to interact and cooperate with, and will be more robust, flexible, and efficient. For understanding its environment and interacting with humans, a cognitive system's most important sense is vision. The talk presents recent results on using the visual sensor for building environment models, self-localization of autonomous mobile robots, navigation, simultaneous tracking of groups of humans and robots, face detection, evaluation of gaze direction and facial expression, and emotional communication between humans and robots.

19.
A video database of moving faces and people
We describe a database of static images and video clips of human faces and people that is useful for testing algorithms for face and person recognition, head/eye tracking, and computer graphics modeling of natural human motions. For each person there are nine static "facial mug shots" and a series of video streams. The videos include a "moving facial mug shot," a facial speech clip, one or more dynamic facial expression clips, two gait videos, and a conversation video taken at a moderate distance from the camera. Complete data sets are available for 284 subjects and duplicate data sets, taken subsequent to the original set, are available for 229 subjects.

20.
In the last decade we have witnessed rapid growth in humanoid robotics, which has already become an autonomous research field. Humanoid robots (or simply humanoids) are expected in all situations of humans' everyday life, "living" and cooperating with us. They will work in services, homes, and hospitals, and they are even expected to get involved in sports; hence, they will have to be capable of performing diverse kinds of tasks. This forces researchers to develop appropriate mathematical models to support the simulation, design, and control of these systems. Another important fact is that today's, and especially tomorrow's, humanoid robots will be more and more humanlike in shape and behavior. A dynamic model developed for an advanced humanoid robot may become a very useful tool for the dynamic analysis of human motion in different tasks (walking, running and jumping, manipulation, various sports, etc.). We therefore derive a general model and speak of a human-and-humanoid simulation system. The basic idea is to start from a human/humanoid considered as a free spatial system (a "flier"). Particular problems (walking, jumping, etc.) are then considered as different contact tasks: interaction between the flier and various objects (either single bodies or separate dynamic systems).

