Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Dance has been used universally as a form of human expression for thousands of years, yet this common human behaviour and communication method has received little attention in computer-based technology, even within the field of virtual human research. This paper presents an experimental study investigating the impact of watching dancing virtual characters on human emotions. The study analysed the responses of 55 participants, a mix of dancers and non-dancers, who watched a dancing virtual character perform three different dances representing anger, sadness and happiness, shown in different display orders. The participants' reported changes in their feelings of anger, sadness and happiness depended significantly on which emotion the dancing character portrayed, and the emotional change did not rely on correct recognition of the depicted emotion. For experimental control, our characters were faceless and danced without music. Our results suggest that merely watching a dancing virtual character may provide some of the benefits associated with dancing in circumstances where dancing is not desirable or feasible, justifying further research into a personalised character with a face and music that adapts to the user's emotions and preferences.

2.
Video conferencing provides an environment in which multiple users linked over a network can hold meetings. Since large quantities of audio and video data must be transferred to multiple users in real time, research into reducing the amount of transferred data has drawn attention. Such methods extract and transfer only the features of a user from the video data and then reconstruct the conference using virtual humans. Their disadvantage is that only the positions and features of the hands and head are extracted and reconstructed, while the other virtual body parts do not follow the user. To enable a virtual human to accurately mimic the user's entire body in a 3D virtual conference, we examined which features should be extracted to express a user more clearly and how they can be reproduced by a virtual human. This 3D video conferencing estimates the user's pose by comparing predefined images with a photographed image of the user and generates a virtual human that takes the estimated pose. However, this requires predefining a diverse set of images for pose estimation, and it is difficult to define behaviors that express poses correctly. This paper proposes a framework that automatically generates both the pose-images used to estimate a user's pose and the behaviors required to represent a user with a virtual human in a 3D video conference, and describes how the framework is applied to a 3D video conference on the basis of the automatically generated data. In the experiment, the proposed framework was implemented on a mobile device and the generation process for the virtual human's poses and behaviors was verified. Finally, by applying programming by demonstration, we developed a system that automatically collects the various data necessary for a video conference without any prior knowledge of the video conferencing system.
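The core estimation step described above, comparing the captured frame against predefined pose-images and adopting the closest match, could look roughly like the following minimal sketch. All names (`estimate_pose`, the template labels) and the mean-squared-error matching criterion are illustrative assumptions, not the paper's actual algorithm.

```python
# Template-based pose estimation sketch: the captured frame is compared
# against predefined pose images; the closest match gives the pose the
# virtual human should take. Matching metric is an assumption.
import numpy as np

def estimate_pose(frame: np.ndarray, templates: dict) -> str:
    """Return the pose label whose template image is closest to the frame."""
    def distance(a, b):
        return float(np.mean((a.astype(np.float32) - b.astype(np.float32)) ** 2))
    return min(templates, key=lambda label: distance(frame, templates[label]))

# Usage: the templates could be auto-generated, as the framework proposes.
templates = {"sitting": np.zeros((120, 160)), "raising_hand": np.ones((120, 160))}
print(estimate_pose(np.ones((120, 160)) * 0.9, templates))  # -> "raising_hand"
```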

3.
Emotional experience can effectively increase the interest of virtual reality system users, and the emotion design of virtual humans is becoming a core technology for building virtual environments; however, current virtual human emotion models are still at an early stage. This paper surveys research on emotion models and discusses their unresolved problems. Based on related emotion-model research and results from cognitive science, a new approach to building virtual human emotion models is proposed, whose goal is to improve the efficiency of virtual human emotion design: the virtual human's emotional state is controlled by the emotion model, so that the virtual human can possess perception, motivation, emotion and personality and exhibit appropriate autonomous emotions in the virtual environment. Emotion-design software can integrate soft computing theory and human-computer interaction techniques, providing an efficient tool for building humanized graphical interfaces.

4.
Virtual reality applications with virtual humans, such as virtual reality exposure therapy, health coaches and negotiation simulators, are developed for different contexts and usually for users from different countries. The emphasis on a virtual human's emotional expression depends on the application: some virtual reality applications need the virtual human to express emotion while speaking, some while listening, and some during both phases. Although studies have investigated how humans perceive a virtual human's emotion in each phase separately, few have compared the two phases in parallel. This study aims to fill this gap and, in addition, investigates the cultural interpretation of the virtual human's emotion, especially with respect to its valence. The experiment was conducted with both Chinese and non-Chinese participants, who were asked to rate the valence of seven emotional expressions (ranging from negative to neutral to positive, during both speaking and listening) of a Chinese virtual lady. The results showed a high correlation in valence ratings between the two groups, indicating that the valence of the emotional expressions was recognized just as easily by people whose cultural background differed from the virtual human's. In addition, participants tended to perceive the virtual human's expressed valence as more intense in the speaking phase than in the listening phase; the additional vocal emotional expression in the speaking phase is put forward as a likely cause of this phenomenon.

5.
Future factories will feature strong integration of physical machines and cyber-enabled software, working seamlessly to improve manufacturing production efficiency. In these digitally enabled, network-connected factories, each physical machine on the shop floor can have a 'virtual twin' in cyberspace. This virtual twin is populated with data streaming in from the physical machine to represent a near real-time, as-is state of the machine, virtualizing the machine resource for external factory manufacturing systems. This paper describes how streaming data can be stored in a scalable and flexible document-schema database such as MongoDB, the data store that makes up the virtual twin system. We present an architecture that allows third-party software apps to interface with the virtual manufacturing machines (VMMs), evaluate our database schema against query statements, and provide examples of how third-party apps can interface with manufacturing machines using the VMM middleware. Finally, we discuss an operating system architecture for VMMs across the manufacturing cyberspace, which necessitates command and control of the various virtualized manufacturing machines, opening new possibilities for cyber-physical systems in manufacturing.
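As a rough illustration of how streaming machine state might land in a MongoDB-backed virtual twin, consider the sketch below. The database and collection names, document fields, and index are assumptions for illustration; the paper's actual schema is not reproduced here.

```python
# Minimal sketch: each streamed sample becomes one flexible document,
# so the schema can evolve with the machine's sensors.
from datetime import datetime, timezone
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
twins = client["factory"]["machine_state"]
twins.create_index([("machine_id", ASCENDING), ("timestamp", ASCENDING)])

twins.insert_one({
    "machine_id": "mill-07",            # hypothetical machine identifier
    "timestamp": datetime.now(timezone.utc),
    "spindle_rpm": 4200,
    "feed_rate_mm_min": 850,
    "status": "RUNNING",
})

# A third-party app could query the near real-time as-is state like this:
latest = twins.find({"machine_id": "mill-07"}).sort("timestamp", -1).limit(1)
print(list(latest))
```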

6.
The recognition of emotion in human speech has gained increasing attention in recent years due to the wide variety of applications that benefit from such technology. Detecting emotion from speech can be viewed as a classification task: assigning, out of a fixed set, an emotion category (e.g. happiness or anger) to a speech utterance. In this paper, we tackle two emotions, namely happiness and anger. The parameters extracted from a speech signal depend on the speaker and the spoken word as well as the emotion; to isolate the emotion, we keep the spoken utterance and the speaker constant and vary only the emotion. Different features are extracted to identify the parameters responsible for emotion, and the wavelet packet transform (WPT) is found to be emotion-specific. We performed experiments using three methods: the first uses WPT and compares the number of coefficients greater than a threshold in different bands; the second computes band energy ratios using WPT and compares these ratios across bands; the third is a conventional method using MFCC. The recognition rates obtained using WPT for angry, happy and neutral speech are 85 %, 65 % and 80 % respectively, compared with 75 %, 45 % and 60 % respectively using MFCC. Based on the WPT features, a model is proposed for emotion conversion, namely neutral-to-angry and neutral-to-happy.
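A minimal sketch of the second method (WPT band energy ratios) follows. The wavelet family ('db4') and decomposition depth are assumptions; the abstract does not specify them.

```python
# Wavelet packet decomposition of a speech frame, then per-band
# energy ratios: the feature the second method compares across bands.
import numpy as np
import pywt

def wpt_energy_ratios(signal: np.ndarray, wavelet: str = "db4", level: int = 3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    bands = [node.data for node in wp.get_level(level, order="natural")]
    energies = np.array([np.sum(np.square(b)) for b in bands])
    return energies / energies.sum()  # each band's share of total energy

# Comparing these ratio vectors across utterances is what separates
# angry/happy from neutral speech in the described method.
rng = np.random.default_rng(0)
print(wpt_energy_ratios(rng.standard_normal(1024)))
```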

7.
Research on autonomous virtual humans is a new field at the intersection of artificial life and computer animation. However, because human mental activity is a holistic process in which parameters such as motivation and perception are interrelated and mutually influential, current research remains partial and limited. Drawing on Maslow's theory, this paper proposes, within a motivation-model framework, a simplified autonomous behavior selection mechanism for virtual humans controlled by inhibition and fatigue models. Experimental results show that the method effectively solves the problem of how a virtual human arbitrates and selects among multiple mutually inhibiting behaviors in a resource-limited dynamic virtual environment, and demonstrate that the approach can be applied to the field of intelligent interaction.
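The arbitration idea, behaviors suppressing each other while an active behavior accumulates fatigue, can be sketched as below. The update rules and coefficients are illustrative assumptions, not the paper's equations.

```python
# Simplified behavior arbitration with mutual inhibition and fatigue.

def select_behavior(motivation, fatigue, inhibition=0.3):
    # Effective score: own motivation minus fatigue, minus inhibition
    # proportional to the strongest competing motivation.
    scores = {}
    for name, m in motivation.items():
        rival = max(v for k, v in motivation.items() if k != name)
        scores[name] = m - fatigue[name] - inhibition * rival
    return max(scores, key=scores.get)

motivation = {"eat": 0.8, "rest": 0.5, "explore": 0.6}
fatigue = {b: 0.0 for b in motivation}
for step in range(5):
    active = select_behavior(motivation, fatigue)
    fatigue[active] += 0.25                              # acting tires the behavior
    for b in fatigue:
        if b != active:
            fatigue[b] = max(0.0, fatigue[b] - 0.1)      # idle behaviors recover
    print(step, active)   # the winner eventually yields to a rested rival
```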

8.
In this paper, we suggest a new genetic programming approach to music emotion classification. Our approach is based on Thayer's arousal-valence plane, one of the representative human emotion models, which holds that human emotion is determined by psychological arousal and valence. We map music pieces onto the arousal-valence plane and classify music emotion in that space. We extract 85 acoustic features from music signals, rank them by information gain, and choose the top k features in the feature-selection process. To map music pieces from the feature space onto the arousal-valence space, we apply genetic programming, designed to find an optimal formula that maps given music pieces to the arousal-valence space so that music emotions are effectively classified. The k-NN and SVM methods, which are widely used in classification, are then used to classify music emotions in the arousal-valence space. To verify our method, we compare it with six existing methods on the same music data set; the experiment confirms that the proposed method outperforms the others.
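The feature-selection and classification stages might be sketched as follows; the GP mapping step is omitted, mutual information stands in for information gain (a common equivalent for discrete targets), and the data is a random placeholder rather than real music features.

```python
# Rank 85 acoustic features, keep the top k, then classify with k-NN and SVM.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 85))     # 200 clips x 85 acoustic features (placeholder)
y = rng.integers(0, 4, size=200)       # 4 quadrants of Thayer's arousal-valence plane

selector = SelectKBest(mutual_info_classif, k=20).fit(X, y)
X_top = selector.transform(X)

for clf in (KNeighborsClassifier(n_neighbors=5), SVC(kernel="rbf")):
    clf.fit(X_top, y)
    print(type(clf).__name__, "train accuracy:", clf.score(X_top, y))
```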

9.
Research on an HMM-Based Artificial Psychology Modeling Method
An HMM-based artificial emotion model is proposed. The model treats the human emotional process as a two-layer stochastic process; by adjusting the model's initial parameters, psychological models with different personality traits can be constructed. At the same time, acting as an emotion engine, the model can predict the final outcome of the emotional process in probabilistic form. System results show that the emotional responses produced by the model are realistic and natural.
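The two-layer stochastic view can be illustrated with a standard HMM forward recursion: hidden emotional states evolve by a transition matrix (whose initial values would encode "personality"), observable expressions follow an emission matrix, and the recursion yields the probabilistic outcome. All numbers below are illustrative assumptions.

```python
# Forward algorithm over a toy emotion HMM.
import numpy as np

states = ["calm", "happy", "angry"]
pi = np.array([0.6, 0.3, 0.1])            # initial distribution ~ temperament
A = np.array([[0.7, 0.2, 0.1],            # hidden-state transition probabilities
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])
B = np.array([[0.6, 0.3, 0.1],            # P(observed expression | state)
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

def forward(observations):
    alpha = pi * B[:, observations[0]]
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha / alpha.sum()             # posterior over the final emotion

# Expressions observed over time: 0=neutral face, 1=smile, 2=frown.
print(dict(zip(states, forward([0, 1, 1]))))
```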

10.
Most studies use facial expression to recognize a user's emotion; however, gestures such as nodding, shaking the head, or stillness can also be indicators of emotion. In our research, we use both facial expression and gestures to detect and recognize a user's emotion. The pervasive Microsoft Kinect sensor captures video data, from which several features representing facial expressions and gestures are extracted. An in-house extensible markup language-based genetic programming engine (XGP) evolves the emotion recognition module of our system. To improve the computational performance of the recognition module, we implemented and compared several approaches for an automated voting system, including directed evolution, collaborative filtering via canonical voting, and a genetic algorithm. The experimental results indicate that XGP is feasible for evolving emotion classifiers, and verify that collaborative filtering improves the generality of recognition. From a psychological viewpoint, the results show that different people may express their emotions differently, as emotion classifiers evolved for particular users might not apply successfully to other users.
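The automated-voting idea can be sketched as a simple majority vote over several evolved classifiers. The individual classifiers below are placeholder functions; XGP's actual evolved programs are not reproduced.

```python
# Majority vote over an ensemble of (toy) evolved emotion classifiers.
from collections import Counter

def vote(classifiers, sample):
    predictions = [clf(sample) for clf in classifiers]
    return Counter(predictions).most_common(1)[0][0]

# Three toy classifiers over a (smile_strength, nod_rate) feature pair.
classifiers = [
    lambda s: "happy" if s[0] > 0.5 else "neutral",
    lambda s: "happy" if s[0] + s[1] > 0.8 else "neutral",
    lambda s: "happy" if s[1] > 0.6 else "neutral",
]
print(vote(classifiers, (0.7, 0.2)))  # -> "happy" (2 of 3 votes)
```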

11.
As technology advances, robots and virtual agents will be introduced into home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent's social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell et al., 2000). Age is important to consider because age-related differences in emotion recognition of human facial expressions have been demonstrated (Ruffman et al., 2008), with older adults showing a deficit in recognizing negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck and Reichenbach, 2005; Courgeon et al., 2009, 2011; Breazeal, 2003); however, little research has compared in depth younger and older adults' ability to label a virtual agent's facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand whether they are influenced by the intensity of the emotion, the dynamic formation of the emotion (i.e., a neutral expression developing into an emotional expression through motion), or the type of virtual character, which differs in human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition of expressions by three types of virtual characters differing in human-likeness (non-humanoid iCat, synthetic human, and human), and also investigated the role of configural and featural processing as a possible explanation for age-related differences in emotion recognition. First, our findings show age-related differences in the recognition of emotions expressed by a virtual agent, with older adults showing lower recognition for the emotions of anger, disgust, fear, happiness, sadness, and neutral. These age-related differences might be explained by older adults having difficulty discriminating similarity in the configural arrangement of facial features for certain emotions; for example, older adults often mislabeled fear as the similar emotion surprise. Second, our results did not provide evidence that dynamic formation improves emotion recognition; but, in general, the intensity of the emotion did improve recognition. Lastly, we found that emotion recognition, for both older and younger adults, differed by character type, from best to worst: human, synthetic human, and then iCat. Our findings provide guidance for design, as well as for the development of a framework of age-related differences in emotion recognition.

12.
Evaluating a Computational Model of Emotion
Spurred by a range of potential applications, there has been a growing body of research on computational models of human emotion. To advance the development of these models, it is critical that we evaluate them against the phenomena they purport to model. In this paper, we present a method for evaluating an emotion model that compares the behavior of the model against human behavior, using a standard clinical instrument for assessing human emotion and coping. We use this method to evaluate the Emotion and Adaptation (EMA) model of emotion of Gratch and Marsella. The evaluation highlights strengths of the approach and identifies where the model needs further development.

13.
Current computational techniques for emotion recognition have successfully associated emotional changes with EEG signals, so that emotions can be identified and classified from EEG signals if appropriate stimuli are applied. However, automatic recognition is usually restricted to a small number of emotion classes, mainly due to signal features and noise, EEG constraints, and subject-dependent issues. To address these issues, this paper proposes a novel feature-based emotion recognition model for EEG-based brain-computer interfaces. Unlike other approaches, our method explores a wider set of emotion types and incorporates additional features relevant to signal pre-processing and recognition tasks, based on a dimensional model of emotions: valence and arousal. It aims to improve the accuracy of emotion classification by combining mutual-information-based feature selection methods with kernel classifiers. Experiments using our approach on standard EEG datasets show its promise when compared with state-of-the-art computational methods.
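A pipeline in this spirit, mutual-information feature selection feeding a kernel classifier, could be sketched as below; the feature counts and random data are placeholder assumptions, not the paper's EEG datasets.

```python
# Mutual-information feature selection + RBF-kernel SVM, cross-validated.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.standard_normal((120, 64))     # 120 trials x 64 EEG-derived features
y = rng.integers(0, 2, size=120)       # e.g. high/low valence labels

model = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=16)),
    ("clf", SVC(kernel="rbf", C=1.0)),
])
print(cross_val_score(model, X, y, cv=5).mean())
```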

14.
Graphical Models, 2008, 70(4): 57-75
This paper studies inside-looking-out camera pose estimation for the virtual studio. The camera pose estimation process, i.e., estimating a camera's extrinsic parameters, is based on closed-form geometrical approaches that exploit simple corner detection of 3D cubic virtual studio landmarks. We first examine the parameters affecting the pose estimation process for the virtual studio, including all characteristic landmark parameters, such as landmark lengths, landmark corner angles and their installation position errors, as well as camera parameters such as lens focal length and CCD resolution. Through computer simulation we investigate and analyze the effect of all these parameters on the camera extrinsic parameters, i.e., the camera rotation and position matrices. We found that, under noise in the effective pose estimation parameters, the camera translation vector is affected more than the other extrinsic parameters. Therefore, we present a novel iterative geometrical noise cancellation method for the closed-form camera pose estimation process, based on collinearity theory, which reduces the estimation error of the camera translation vector, the major contributor to the estimation errors of the camera extrinsic parameters. To validate our method, we tested it in a complete virtual studio simulation; the results are of the same order as those of some commercial systems, such as the BBC and InterSense IS-1200 VisTracker.
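The paper's closed-form method is not reproduced here; as a stand-in, the sketch below estimates camera extrinsics (rotation and the noise-sensitive translation vector) from detected corners of one face of a cubic landmark using OpenCV's solvePnP. The landmark size, intrinsics, and corner coordinates are assumed values.

```python
# Extrinsics from landmark corner correspondences via solvePnP (stand-in).
import numpy as np
import cv2

side = 0.5  # landmark edge length in metres (assumption)
object_points = np.array([[0, 0, 0], [side, 0, 0], [side, side, 0],
                          [0, side, 0]], dtype=np.float32)  # one landmark face
image_points = np.array([[310, 240], [420, 238], [424, 352],
                         [308, 355]], dtype=np.float32)     # detected corners
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix from the rotation vector
print("rotation:\n", R, "\ntranslation:", tvec.ravel())
```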

15.
Recent trends towards e-Science offer us the opportunity to think about the specific epistemological changes created by computational empowerment of scientific practices. In fact, we can say that a computational epistemology exists that requires our attention. By 'computational epistemology' I mean the computational processes implied or required to achieve human knowledge. In that category we can include AI, supercomputers, expert systems, distributed computation, imaging technologies, virtual instruments, middleware, robotics, grids and databases. Although several authors talk about the extended mind and computational extensions of the human body, most of these proposals do not analyze the deep epistemological implications of computational empowerment in scientific practices. At the same time, we must identify the principal concept for e-Science: information. Why should we think about a new epistemology for e-Science? Because several processes exist around scientific information that require a good epistemological model to be understood.

16.
Human–human interaction consists of various nonverbal behaviors that are often emotion-related. To establish rapport, it is essential that the listener respond with a reactive emotion that makes sense given the speaker's emotional state. However, human–robot interactions generally fail in this regard because most spoken dialogue systems play only a question-answer role. Aiming for natural conversation, we examine an emotion processing module, consisting of a user emotion recognition function and a reactive emotion expression function, for a spoken dialogue system to improve human–robot interaction. For the emotion recognition function, we propose a method that combines valence from prosody and sentiment from text by decision-level fusion, which considerably improves performance. Moreover, this method reduces fatal recognition errors, thereby improving the user experience. For the reactive emotion expression function, the system's emotion is divided into an emotion category and an emotion level, which are predicted using the parameters estimated by the recognition function, on the basis of distributions inferred from human–human dialogue data. As a result, the emotion processing module can recognize the user's emotion from his or her speech and express a matching reactive emotion. An evaluation with ten participants demonstrated that a system enhanced with this module is effective for conducting natural conversation.
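Decision-level fusion of this kind might look like the sketch below: each modality produces its own valence decision, and the decisions are combined afterwards. The weight and the example scores are illustrative assumptions, not the paper's fusion rule.

```python
# Late fusion of prosody valence and text sentiment.

def fuse_valence(prosody_valence: float, text_sentiment: float,
                 w_prosody: float = 0.4) -> float:
    """Weighted sum of two per-modality valence decisions,
    each assumed to lie in [-1, 1]."""
    return w_prosody * prosody_valence + (1 - w_prosody) * text_sentiment

score = fuse_valence(prosody_valence=-0.2, text_sentiment=0.8)
label = "positive" if score > 0.1 else "negative" if score < -0.1 else "neutral"
print(score, label)  # text rescues a flat-sounding but positive utterance
```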

17.
An Animation Model for the Emotional Behavior of Virtual Humans
In recent years, virtual human behavior animation has become a new branch of computer animation. Previous research has mostly focused on animating localized emotional expression of virtual humans, such as facial animation, without considering, for a given virtual scene, the causes of emotion. Emotion is the result of the interaction between a virtual human and its virtual environment, yet in the field of computer animation the emotions of virtual humans have so far not been clearly described. Therefore, based on psychological theory, this paper proposes an animation model of virtual human emotional behavior. First, the concepts of an emotion set and an emotion expression set are introduced, and a mapping from emotional states to emotional expressions is established. Second, the causes of emotion are analyzed in detail and the concept of an emotion source is introduced: if the strength of an emotional stimulus exceeds the emotion's resistance strength, that emotion is generated. Furthermore, emotional states can be described by a finite state machine, from which the flow of emotional change is derived. Finally, the emotional behavior animation of a virtual human was implemented on a PC using the Microsoft Direct3D API.
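A minimal sketch of this model follows: an emotion is triggered only when a stimulus exceeds that emotion's resistance strength, and state changes obey a finite state machine. The state names, thresholds, and transitions are illustrative assumptions, not the paper's actual tables.

```python
# Emotion FSM with stimulus-vs-resistance triggering.

RESISTANCE = {"happy": 0.4, "angry": 0.6, "sad": 0.5}
# FSM: which emotional states are reachable from the current state.
TRANSITIONS = {
    "neutral": {"happy", "angry", "sad"},
    "happy": {"neutral", "sad"},
    "angry": {"neutral"},
    "sad": {"neutral", "happy"},
}

def step(state: str, stimulus_emotion: str, strength: float) -> str:
    # The stimulus takes effect only if it beats the emotion's resistance
    # and the FSM allows the transition from the current state.
    if strength > RESISTANCE[stimulus_emotion] and \
            stimulus_emotion in TRANSITIONS[state]:
        return stimulus_emotion
    return state

state = "neutral"
for emo, s in [("angry", 0.5), ("angry", 0.7), ("happy", 0.9)]:
    state = step(state, emo, s)
    print(emo, s, "->", state)   # weak anger is resisted; strong anger wins
```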

18.
In this paper, we present a system that visualizes the expressive quality of a music performance using a virtual head. We provide a mapping through several parameter spaces: on the input side, we elaborate a mapping between values of acoustic cues and emotion as well as expressivity parameters; on the output side, we propose a mapping between these parameters and the behaviors of the virtual head. This mapping ensures coherency between the acoustic source and the animation of the virtual head. After presenting background information on human behavior expressivity, we introduce our model of expressivity and explain how we elaborated the mapping between the acoustic and behavior cues. We then describe the implementation of a working system that controls the behavior of a human-like head, varying with the emotional and acoustic characteristics of the musical execution. Finally, we present the tests we conducted to validate our mapping between the emotive content of the music performance and the expressivity parameters.
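The two-stage mapping could be sketched as below: acoustic cues first map to expressivity parameters, which then drive the head's behavior. The cue names, parameter ranges, and linear rules are illustrative assumptions, not the paper's mapping.

```python
# Stage 1: acoustic cues -> expressivity parameters in [0, 1].
def acoustics_to_expressivity(tempo_bpm: float, loudness: float) -> dict:
    return {
        "temporal_extent": min(1.0, tempo_bpm / 180.0),  # faster -> quicker moves
        "power": loudness,                               # louder -> stronger moves
        "fluidity": 1.0 - loudness * 0.5,                # harsher sound -> jerkier
    }

# Stage 2: expressivity parameters -> head-animation controls.
def expressivity_to_behavior(params: dict) -> dict:
    return {
        "nod_speed": params["temporal_extent"],
        "eyebrow_raise": params["power"],
        "motion_smoothing": params["fluidity"],
    }

print(expressivity_to_behavior(acoustics_to_expressivity(140, 0.8)))
```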

19.
This paper introduces OpenPsi, a computational model for emotion generation and function that formalizes part of Dörner's PSI theory, an extensive psychological model of the human brain covering knowledge representation, perception and bounded rationality. We also borrow some technical ideas from MicroPsi, one of the concrete implementations of PSI theory by Joscha Bach. The proposed emotional model is applied to control a virtual robot living in a game world inspired by Minecraft. Simulation experiments were performed and evaluated for three different scenarios, and the emergent emotions fit these circumstances quite well. The dynamics of the affective model are also analyzed using Lewis's dynamic theory of emotions: evidence of the phase transitions suggested by Lewis is observed in the simulations, including the trigger, self-amplification and self-stabilization phases. These experimental results show that the proposed model is a promising approach to modeling both emotion emergence and emotion dynamics.

20.
Extracting and understanding emotion is highly important for interaction between humans and machine communication systems, and the most expressive way to display a human's emotion is through facial expressions. This paper proposes a multiple-emotion recognition system that can recognize combinations of up to three different emotions using an active appearance model (AAM), the proposed classification standard, and a k-nearest neighbor (k-NN) classifier in mobile environments. The AAM captures expression variations, which are calculated by the proposed classification standard according to changes in human expressions in real time. The proposed k-NN can classify the basic emotions (normal, happy, sad, angry, surprise) as well as more ambiguous emotions obtained by combining the basic ones in real time, and each recognized emotion carries a strength. Whereas most previous emotion recognition methods recognize various kinds of single emotions, this paper recognizes various emotions as combinations of the five basic emotions. For ease of understanding, the recognized result is presented in three ways on a mobile camera screen. The experiments yielded an average recognition rate of 85 %, with a rate of 40 % for the optimized emotions. The implemented system can also serve as an example of augmented reality, displaying a combination of real face video and virtual animation with the user's avatar.
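One way k-NN over AAM expression parameters could yield a combination of up to three emotions with strengths is for the k nearest labelled samples to vote, with vote shares taken as strengths. The sketch below uses placeholder feature vectors and labels, not real AAM output, and the vote-share rule is an assumption.

```python
# k-NN emotion mixture: top-3 voted labels with normalized strengths.
import numpy as np
from collections import Counter

def knn_emotion_mix(sample, train_X, train_y, k=5, max_mix=3):
    dists = np.linalg.norm(train_X - sample, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_y[i] for i in nearest)
    top = votes.most_common(max_mix)
    total = sum(count for _, count in top)
    return {label: count / total for label, count in top}

rng = np.random.default_rng(1)
train_X = rng.standard_normal((50, 8))   # 8 AAM shape/appearance parameters
train_y = rng.choice(["normal", "happy", "sad", "angry", "surprise"], 50)
print(knn_emotion_mix(rng.standard_normal(8), train_X, train_y))
```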
