Similar Documents (20 records found)
1.
Studied the development of the recognition of emotional facial expressions in children and of the factors influencing recognition accuracy. 80 elementary school students (aged 5–8 yrs) were asked to identify the emotions expressed in a series of facial photographs. Recognition performances were analyzed in relation to the type of emotion expressed (i.e., happiness, fear, anger, surprise, sadness, or disgust) and the intensity of the emotional expression. Age differences were determined. (English abstract) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
If emotions guide consciousness, people may recognize degraded objects in center view more accurately if they either fear the objects or are disgusted by them. Therefore, we studied whether recognition of spiders and snakes correlates with individual differences in spider fear, snake fear, and disgust sensitivity. Female students performed a recognition task with pictures of spiders, snakes, flowers, and mushrooms as well as blanks. Pictures were backward masked to reduce picture visibility. Signal detection analyses showed that recognition of spiders and snakes was correlated with disgust sensitivity but not with fear of spiders or snakes. Further, spider fear correlated with the tendency to misinterpret blanks as threatening (response bias). These findings suggest that effects on recognition and response biases to emotional pictures vary for different emotions and emotional traits. Whereas fear may induce response biases, disgust may facilitate recognition.
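The abstract above distinguishes sensitivity (recognition accuracy) from response bias using signal detection analysis. As a hypothetical illustration only (not the authors' code, and the counts below are invented), d′ and the criterion c can be computed from hit and false-alarm counts like this:

```python
# Illustrative signal detection computation: d' (sensitivity) and
# c (response bias) from raw trial counts. A negative c means a liberal,
# e.g. threat-oriented, tendency to report a target even on blank trials.
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute d' and criterion c with a log-linear correction so that
    hit or false-alarm rates of exactly 0 or 1 do not yield infinite z-scores."""
    # Log-linear correction: add 0.5 to each cell, 1.0 to each denominator.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2.0
    return d_prime, criterion

# Invented example: an observer who often reports "spider" on blank trials.
d, c = dprime_and_criterion(hits=40, misses=10, false_alarms=20, correct_rejections=30)
print(round(d, 2), round(c, 2))
```

Under this standard formulation, a participant could show an elevated d′ (the recognition facilitation linked to disgust sensitivity above) independently of a shifted c (the bias linked to spider fear).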

3.
Preschool children, 2 to 5 years of age, and adults posed the six facial expressions of happiness, surprise, anger, fear, sadness, and disgust before a videotape camera. Their poses were scored subsequently using the MAX system. The number of poses that included all components of the target expression (complete expressions) as well as the frequency of those that included only some of the components of the target expressions (partial expressions) were analyzed. Results indicated that 2-year-olds as a group failed to pose any face. Three-year-olds were a transitional group, posing happiness and surprise expressions but none of the remaining faces to any degree. Four- and 5-year-olds were similar to one another and differed from adults only on surprise and anger expressions. Adults were able to pose both these expressions. No group, including adults, posed fear and disgust well. Posing of happiness showed no change after 3 years of age. Consistent differences between partial and complete poses were observed particularly for the negative expressions of sadness, fear, and disgust. Implications of these results for socialization theories of emotion are discussed.

4.
We investigated adults' voluntary control of 20 facial action units theoretically associated with 6 basic emotions (happiness, fear, anger, surprise, sadness, and disgust). Twenty young adults were shown video excerpts of facial action units and asked to reproduce them as accurately as possible. Facial Action Coding System (FACS; Ekman & Friesen, 1978a) coding of the facial productions showed that young adults succeeded in activating 18 of the 20 target actions units, although they often coactivated other action units. Voluntary control was clearly better for some action units than for others, with a pattern of differences between action units consistent with previous work in children and adolescents.

5.
The authors compared the accuracy of emotion decoding for nonlinguistic affect vocalizations, speech-embedded vocal prosody, and facial cues representing 9 different emotions. Participants (N = 121) decoded 80 stimuli from 1 of the 3 channels. Accuracy scores for nonlinguistic affect vocalizations and facial expressions were generally equivalent, and both were higher than scores for speech-embedded prosody. In particular, affect vocalizations showed superior decoding over the speech stimuli for anger, contempt, disgust, fear, joy, and sadness. Further, specific emotions that were decoded relatively poorly through speech-embedded prosody were more accurately identified through affect vocalizations, suggesting that emotions that are difficult to communicate in running speech can still be expressed vocally through other means. Affect vocalizations also showed superior decoding over faces for anger, contempt, disgust, fear, sadness, and surprise. Facial expressions showed superior decoding scores over both types of vocal stimuli for joy, pride, embarrassment, and “neutral” portrayals. Results are discussed in terms of the social functions served by various forms of nonverbal emotion cues and the communicative advantages of expressing emotions through particular channels.

6.
Language and music are closely related in our minds. Does musical expertise enhance the recognition of emotions in speech prosody? Forty highly trained musicians were compared with 40 musically untrained adults (controls) in the recognition of emotional prosody. For purposes of generalization, the participants were from two age groups, young (18–30 years) and middle adulthood (40–60 years). They were presented with short sentences expressing six emotions (anger, disgust, fear, happiness, sadness, surprise) and neutrality, by prosody alone. In each trial, they performed a forced-choice identification of the expressed emotion (reaction times, RTs, were collected) and an intensity judgment. A robust effect of expertise was found: musicians were more accurate than controls, similarly across emotions and age groups. This effect cannot be attributed to socioeducational background, general cognitive or personality characteristics, because these did not differ between musicians and controls; perceived intensity and RTs were also similar in both groups. Furthermore, basic acoustic properties of the stimuli like fundamental frequency and duration were predictive of the participants' responses, and musicians and controls were similarly efficient in using them. Musical expertise was thus associated with cross-domain benefits to emotional prosody. These results indicate that emotional processing in music and in language engages shared resources.

7.
Studied differences between dimensional and categorical judgments of static and dynamic spontaneous facial expressions of emotion. In the 1st part of the study, 25 university students were presented with either static or dynamic facial expressions of emotions (i.e., joy, fear, anger, surprise, disgust, and sadness) and asked to evaluate the similarity of 21 pairs of stimuli on a 7-point scale. Results were analyzed using a multidimensional scaling procedure. In the 2nd part of the study, Ss were asked to categorize the expressed emotions according to their intensity. Differences in the categorization of static and dynamic stimuli were analyzed. Results from the similarity rating task and the categorization task were compared. (English abstract)

8.
The ability to perceive and interpret facial expressions of emotion improves throughout childhood. Although newborns have rudimentary perceptive abilities allowing them to distinguish several facial expressions, it is only at the end of the first year that infants seem to be able to assign meaning to emotional signals. The meaning infants assign to facial expressions is very broad, as it is limited to the judgment of emotional valence. Meaning becomes more specific between the second and the third year of life, as children begin to categorize facial signals in terms of discrete emotions. While the facial expressions of happiness, anger and sadness are accurately categorized by the third year, the categorization of expressions of fear, surprise and disgust shows a much slower developmental pattern. Moreover, the ability to judge the sincerity of facial expressions shows a slower developmental pattern, probably because of the subtle differences between genuine and non-genuine expressions. The available evidence indicates that school age children can distinguish genuine smiles from masked smiles and false smiles.

9.
Neuropsychological studies report more impaired responses to facial expressions of fear than disgust in people with amygdala lesions, and vice versa in people with Huntington's disease. Experiments using functional magnetic resonance imaging (fMRI) have confirmed the role of the amygdala in the response to fearful faces and have implicated the anterior insula in the response to facial expressions of disgust. We used fMRI to extend these studies to the perception of fear and disgust from both facial and vocal expressions. Consistent with neuropsychological findings, both types of fearful stimuli activated the amygdala. Facial expressions of disgust activated the anterior insula and the caudate-putamen; vocal expressions of disgust did not significantly activate either of these regions. All four types of stimuli activated the superior temporal gyrus. Our findings therefore (i) support the differential localization of the neural substrates of fear and disgust; (ii) confirm the involvement of the amygdala in the emotion of fear, whether evoked by facial or vocal expressions; (iii) confirm the involvement of the anterior insula and the striatum in reactions to facial expressions of disgust; and (iv) suggest a possible general role for the superior temporal gyrus in the perception of emotional expressions.

10.
The ability to identify facial expressions of happiness, sadness, anger, surprise, fear, and disgust was studied in 48 nondisabled children and 76 children with learning disabilities aged 9 through 12. On the basis of their performance on the Rey Auditory-Verbal Learning Test and the Benton Visual Retention Test, the LD group was divided into three subgroups: those with verbal deficits (VD), nonverbal deficits (NVD), and both verbal and nonverbal (BD) deficits. The measure of ability to interpret facial expressions of affect was a shortened version of Ekman and Friesen's Pictures of Facial Affect. Overall, the nondisabled group had better interpretive ability than the three learning disabled groups, and the VD group had better ability than the NVD and BD groups. Although the identification level of the nondisabled group differed from that of the VD group only for surprise, it was superior to that of the NVD and BD groups for four of the six emotions. Happiness was the easiest to identify, and the remaining emotions in ascending order of difficulty were anger, surprise, sadness, fear, and disgust. Older subjects did better than younger ones only for fear and disgust, and boys and girls did not differ in interpretive ability. These findings are discussed in terms of the need to take note of the heterogeneity of the learning disabled population and the particular vulnerability to social imperception of children with nonverbal deficits.

11.
This study examined the recognition of the facial prototypes included in the expressive emotional repertoire proposed by Ekman and Friesen (1978a) and by Wiggers (1982). The prototypes were shown to 74 decoders who had to rate the intensity of the emotion or emotions being portrayed. The results indicated that the majority of the prototypes, except those of fear and disgust, clearly signaled the predicted emotion. The various prototypes related to the same emotion were found to differ in their signal value, some of them being better recognized and more specific than others. Some prototypes of fear and disgust were found to signal mixed rather than pure emotions. The results also revealed that the level of recognition of emotional expression varies according to the encoder, which suggests that interindividual differences in facial anatomy influence the perception of emotion.

13.
Emotion theorists assume certain facial displays to convey information about the expresser's emotional state. In contrast, behavioral ecologists assume them to indicate behavioral intentions or action requests. To test these contrasting positions, over 2,000 online participants were presented with facial expressions and asked what they revealed: feeling states, behavioral intentions, or action requests. The majority of the observers chose feeling states as the message of facial expressions of disgust, fear, sadness, happiness, and surprise, supporting the emotions view. Only the anger display tended to elicit more choices of behavioral intention or action request, partially supporting the behavioral ecology view. The results support the view that facial expressions communicate emotions, with emotions being multicomponential phenomena that comprise feelings, intentions, and wishes.

14.
Emotion identification appears to decline with age, and deficient visual scanning may contribute to this effect. Eye movements of 20 older adults (OAs) and 20 younger adults (YAs) with normal saccades were recorded while viewing facial expressions. OAs made fewer fixations overall, and they made a higher proportion of fixations to the lower halves of faces. Topographical distribution of fixations predicted better OA accuracy for identifying disgust than other negative emotions. Impaired OA accuracy for fear and anger was specific to vision, with normal identification of these emotions in the auditory domain. Age-related frontal-lobe atrophy may affect the integrity of the frontal eye fields, with consequent scanning abnormalities that contribute to difficulties in identifying certain emotions.

15.
Little research has focused on children's decoding of emotional meaning in expressive body movement; none has considered which movement cues children use to detect emotional meaning. The current study investigated the general ability to decode happiness, sadness, anger, and fear in dance forms of expressive body movement and the specific ability to detect differences in the intensity of anger and happiness when the relative amount of movement cue specifying each emotion was systematically varied. Four-year-olds (n = 25), 5-year-olds (n = 25), 8-year-olds (n = 29), and adults (n = 24) completed an emotion contrast task and 2 emotion intensity tasks. Decoding ability exceeding chance levels was demonstrated for sadness by 4-year-olds; for sadness, fear, and happiness by 5-year-olds; and for all emotions by 8-year-olds and adults. Children as young as 5 years were shown to rely on emotion-specific movement cues in their decoding of anger and happiness intensity. The theoretical significance of these effects across development is discussed.

16.
This exploratory study investigated the effects of terrorism on children's ability to recognize emotions. A sample of 101 exposed and 102 nonexposed children (mean age = 11 years), balanced for age and gender, were assessed 20 months after a terrorist attack in Beslan, Russia. Two trials controlled for children's ability to match a facial emotional stimulus with an emotional label and their ability to match an emotional label with an emotional context. The experimental trial evaluated the relation between exposure to terrorism and children's free labeling of mixed emotion facial stimuli created by morphing between 2 prototypical emotions. Repeated measures analyses of covariance revealed that exposed children correctly recognized pure emotions. Four log-linear models were performed to explore the association between exposure group and category of answer given in response to different mixed emotion facial stimuli. Model parameters indicated that, compared with nonexposed children, exposed children (a) labeled facial expressions containing anger and sadness significantly more often than expected as anger, and (b) produced fewer correct answers in response to stimuli containing sadness as a target emotion.

17.
18.
The communication of emotion via touch.
The study of emotional communication has focused predominantly on the facial and vocal channels but has ignored the tactile channel. Participants in the current study were allowed to touch an unacquainted partner on the whole body to communicate distinct emotions. Of interest was how accurately the person being touched decoded the intended emotions without seeing the tactile stimulation. The data indicated that anger, fear, disgust, love, gratitude, and sympathy were decoded at greater than chance levels, as well as happiness and sadness, 2 emotions that have not been shown to be communicated by touch to date. Moreover, fine-grained coding documented specific touch behaviors associated with different emotions. The findings are discussed in terms of their contribution to the study of emotion-related communication.

20.
Very few large-scale studies have focused on emotional facial expression recognition (FER) in 3-year-olds, an age of rapid social and language development. We studied FER in 808 healthy 3-year-olds using verbal and nonverbal computerized tasks for four basic emotions (happiness, sadness, anger, and fear). Three-year-olds showed differential performance on the verbal and nonverbal FER tasks, especially with respect to fear. That is to say, fear was one of the most accurately recognized facial expressions as matched nonverbally and the least accurately recognized facial expression as labeled verbally. Sex influenced neither emotion-matching nor emotion-labeling performance after adjusting for basic matching or labeling ability. Three-year-olds made systematic errors in emotion-labeling. Namely, happy expressions were often confused with fearful expressions, whereas negative expressions were often confused with other negative expressions. Together, these findings suggest that 3-year-olds' FER skills strongly depend on task specifications. Importantly, fear was the most sensitive facial expression in this regard. Finally, in line with previous studies, we found that recognized emotion categories are initially broad, including emotions of the same valence, as reflected in the nonrandom errors of 3-year-olds.
