Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Age differences in emotion recognition from lexical stimuli and facial expressions were examined in a cross-sectional sample of adults aged 18 to 85 (N = 357). Emotion-specific response biases differed by age: Older adults were disproportionately more likely to incorrectly label lexical stimuli as happiness, sadness, and surprise and to incorrectly label facial stimuli as disgust and fear. After these biases were controlled, findings suggested that older adults were less accurate at identifying emotions than were young adults, but the pattern differed across emotions and task types. The lexical task showed stronger age differences than the facial task, and for lexical stimuli, age groups differed in accuracy for all emotional states except fear. For facial stimuli, in contrast, age groups differed only in accuracy for anger, disgust, fear, and happiness. Implications for age-related changes in different types of emotional processing are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Photographs (study 1) or line-drawing representations (study 2) of posed facial expressions and a list of emotion words (happiness, surprise, fear, disgust, anger, sadness, neutral) were presented to two groups of observers who were asked to match the photographs or line drawings, respectively, with the emotion categories provided. A multidimensional-scaling procedure was applied to the judgment data. Two dimensions were revealed: pleasantness–unpleasantness and upper-face–lower-face dominance. Furthermore, the similarity of the two-dimensional structures derived first from the judgments of photographs and second from the line drawings suggests that line drawings are a viable alternative to photographs in facial-expression research.

3.
The ability to identify facial expressions of happiness, sadness, anger, surprise, fear, and disgust was studied in 48 nondisabled children and 76 children with learning disabilities aged 9 through 12. On the basis of their performance on the Rey Auditory-Verbal Learning Test and the Benton Visual Retention Test, the LD group was divided into three subgroups: those with verbal deficits (VD), nonverbal deficits (NVD), and both verbal and nonverbal (BD) deficits. The measure of ability to interpret facial expressions of affect was a shortened version of Ekman and Friesen's Pictures of Facial Affect. Overall, the nondisabled group had better interpretive ability than the three learning disabled groups, and the VD group had better ability than the NVD and BD groups. Although the identification level of the nondisabled group differed from that of the VD group only for surprise, it was superior to that of the NVD and BD groups for four of the six emotions. Happiness was the easiest to identify, and the remaining emotions in ascending order of difficulty were anger, surprise, sadness, fear, and disgust. Older subjects did better than younger ones only for fear and disgust, and boys and girls did not differ in interpretive ability. These findings are discussed in terms of the need to take note of the heterogeneity of the learning disabled population and the particular vulnerability to social imperception of children with nonverbal deficits.

4.
In 2 studies, the authors developed and validated a new set of standardized emotion expressions, which they referred to as the University of California, Davis, Set of Emotion Expressions (UCDSEE). The precise components of each expression were verified using the Facial Action Coding System (FACS). The UCDSEE is the first FACS-verified set to include the 3 "self-conscious" emotions known to have recognizable expressions (embarrassment, pride, and shame), as well as the 6 previously established "basic" emotions (anger, disgust, fear, happiness, sadness, and surprise), all posed by the same 4 expressers (African and White males and females). This new set has numerous potential applications in future research on emotion and related topics. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Little research has focused on children's decoding of emotional meaning in expressive body movement; none has considered which movement cues children use to detect emotional meaning. The current study investigated the general ability to decode happiness, sadness, anger, and fear in dance forms of expressive body movement and the specific ability to detect differences in the intensity of anger and happiness when the relative amount of movement cue specifying each emotion was systematically varied. Four-year-olds (n = 25), 5-year-olds (n = 25), 8-year-olds (n = 29), and adults (n = 24) completed an emotion contrast task and 2 emotion intensity tasks. Decoding ability exceeding chance levels was demonstrated for sadness by 4-year-olds; for sadness, fear, and happiness by 5-year-olds; and for all emotions by 8-year-olds and adults. Children as young as 5 years were shown to rely on emotion-specific movement cues in their decoding of anger and happiness intensity. The theoretical significance of these effects across development is discussed.

7.
We investigated adults' voluntary control of 20 facial action units theoretically associated with 6 basic emotions (happiness, fear, anger, surprise, sadness, and disgust). Twenty young adults were shown video excerpts of facial action units and asked to reproduce them as accurately as possible. Facial Action Coding System (FACS; Ekman & Friesen, 1978a) coding of the facial productions showed that young adults succeeded in activating 18 of the 20 target action units, although they often coactivated other action units. Voluntary control was clearly better for some action units than for others, with a pattern of differences between action units consistent with previous work in children and adolescents. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Recent research has shown that pride, like the "basic" emotions of anger, disgust, fear, happiness, sadness, and surprise, has a distinct, nonverbal expression that can be recognized by adults (J. L. Tracy & R. W. Robins, 2004b). In 2 experiments, the authors examined whether young children can identify the pride expression and distinguish it from expressions of happiness and surprise. Results suggest that (a) children can recognize pride at above-chance levels by age 4 years; (b) children recognize pride as well as they recognize happiness; (c) pride recognition, like happiness and surprise recognition, improves from age 3 to 7 years; and (d) children's ability to recognize pride cannot be accounted for by the use of a process of elimination (i.e., an exclusion rule) to identify an unknown entity. These findings have implications for the development of emotion recognition and children's ability to perceive and communicate pride. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Studied the development of the recognition of emotional facial expressions in children and of the factors influencing recognition accuracy. 80 elementary school students (aged 5–8 yrs) were asked to identify the emotions expressed in a series of facial photographs. Recognition performances were analyzed in relation to the type of emotion expressed (i.e., happiness, fear, anger, surprise, sadness, or disgust) and the intensity of the emotional expression. Age differences were determined. (English abstract) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Emotion theorists assume certain facial displays to convey information about the expresser's emotional state. In contrast, behavioral ecologists assume them to indicate behavioral intentions or action requests. To test these contrasting positions, over 2,000 online participants were presented with facial expressions and asked what they revealed: feeling states, behavioral intentions, or action requests. The majority of the observers chose feeling states as the message of facial expressions of disgust, fear, sadness, happiness, and surprise, supporting the emotions view. Only the anger display tended to elicit more choices of behavioral intention or action request, partially supporting the behavioral ecology view. The results support the view that facial expressions communicate emotions, with emotions being multicomponential phenomena that comprise feelings, intentions, and wishes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
The processing of emotional expressions is fundamental for normal socialization and interaction. Reduced responsiveness to the expressions of sadness and fear has been implicated in the development of psychopathy (R. J. R. Blair, 1995). The current study investigates the ability of adult psychopathic individuals to process vocal affect. Psychopathic and nonpsychopathic adults, defined by the Hare Psychopathy Checklist-Revised (PCL-R; R. D. Hare, 1991), were presented with neutral words spoken with intonations conveying happiness, disgust, anger, sadness, and fear and were asked to identify the emotion of the speaker on the basis of prosody. The results indicated that psychopathic inmates were particularly impaired in the recognition of fearful vocal affect. These results are interpreted with reference to the low-fear and violence inhibition mechanism models of psychopathy. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Two studies provided direct support for a recently proposed dialect theory of communicating emotion, positing that expressive displays show cultural variations similar to linguistic dialects, thereby decreasing accurate recognition by out-group members. In Study 1, 60 participants from Quebec and Gabon posed facial expressions. Dialects, in the form of activating different muscles for the same expressions, emerged most clearly for serenity, shame, and contempt and also for anger, sadness, surprise, and happiness, but not for fear, disgust, or embarrassment. In Study 2, Quebecois and Gabonese participants judged these stimuli and stimuli standardized to erase cultural dialects. As predicted, an in-group advantage emerged for nonstandardized expressions only and most strongly for expressions with greater regional dialects, according to Study 1. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Investigated the degree to which 4–5 yr olds (n = 48) can enact expressions of emotion recognizable by peers and adults; the study also examined whether accuracy of recognition was a function of age and whether the expression was posed or spontaneous. Adults (n = 103) were much more accurate than children in recognizing neutral states, slightly more accurate in recognizing happiness and anger, and equally accurate in recognizing sadness. Children's spontaneous displays of happiness were more recognizable than posed displays, but for other emotions there was no difference between the recognizability of posed and spontaneous expressions. Children were highly accurate in identifying the facial expressions of happiness, sadness, and anger displayed by their peers. Sex and ethnicity of the child whose emotion was displayed interacted to influence only adults' recognizability of anger. Results are discussed in terms of the social learning and cognitive developmental factors influencing (a) adults' and children's decoding (recognition) of emotional expressions in young children and (b) encoding (posing) of emotional expressions by young children. (20 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
The ability to perceive and interpret facial expressions of emotion improves throughout childhood. Although newborns have rudimentary perceptive abilities allowing them to distinguish several facial expressions, it is only at the end of the first year that infants seem to be able to assign meaning to emotional signals. The meaning infants assign to facial expressions is very broad, as it is limited to the judgment of emotional valence. Meaning becomes more specific between the second and the third year of life, as children begin to categorize facial signals in terms of discrete emotions. While the facial expressions of happiness, anger and sadness are accurately categorized by the third year, the categorization of expressions of fear, surprise and disgust shows a much slower developmental pattern. Moreover, the ability to judge the sincerity of facial expressions shows a slower developmental pattern, probably because of the subtle differences between genuine and non-genuine expressions. The available evidence indicates that school age children can distinguish genuine smiles from masked smiles and false smiles. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
In an experiment with 20 undergraduates, video recordings of actors' faces covered with black makeup and white spots were played back to the Ss so that only the white spots were visible. The results demonstrate that moving displays of happiness, sadness, fear, surprise, anger, and disgust were recognized more accurately than static displays of the white spots at the apex of the expressions. This indicates that facial motion, in the absence of information about the shape and position of facial features, is informative about these basic emotions. Normally illuminated dynamic displays of these expressions, however, were recognized more accurately than displays of moving spots. The relative effectiveness of upper and lower facial areas for the recognition of the 6 emotions was also investigated using normally illuminated and spots-only displays. In both instances, the results indicate that different facial regions are more informative for different emotions. (20 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Evolutionary accounts of emotion typically assume that humans evolved to quickly and efficiently recognize emotion expressions because these expressions convey fitness-enhancing messages. The present research tested this assumption in 2 studies. Specifically, the authors examined (a) how quickly perceivers could recognize expressions of anger, contempt, disgust, embarrassment, fear, happiness, pride, sadness, shame, and surprise; (b) whether accuracy is improved when perceivers deliberate about each expression's meaning (vs. respond as quickly as possible); and (c) whether accurate recognition can occur under cognitive load. Across both studies, perceivers quickly and efficiently (i.e., under cognitive load) recognized most emotion expressions, including the self-conscious emotions of pride, embarrassment, and shame. Deliberation improved accuracy in some cases, but these improvements were relatively small. Discussion focuses on the implications of these findings for the cognitive processes underlying emotion recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
The authors compared the accuracy of emotion decoding for nonlinguistic affect vocalizations, speech-embedded vocal prosody, and facial cues representing 9 different emotions. Participants (N = 121) decoded 80 stimuli from 1 of the 3 channels. Accuracy scores for nonlinguistic affect vocalizations and facial expressions were generally equivalent, and both were higher than scores for speech-embedded prosody. In particular, affect vocalizations showed superior decoding over the speech stimuli for anger, contempt, disgust, fear, joy, and sadness. Further, specific emotions that were decoded relatively poorly through speech-embedded prosody were more accurately identified through affect vocalizations, suggesting that emotions that are difficult to communicate in running speech can still be expressed vocally through other means. Affect vocalizations also showed superior decoding over faces for anger, contempt, disgust, fear, sadness, and surprise. Facial expressions showed superior decoding scores over both types of vocal stimuli for joy, pride, embarrassment, and "neutral" portrayals. Results are discussed in terms of the social functions served by various forms of nonverbal emotion cues and the communicative advantages of expressing emotions through particular channels. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Facial autonomic responses may contribute to emotional communication and reveal individual affective style. In this study, the authors examined how observed pupillary size modulates processing of facial expression, extending the finding that incidentally perceived pupils influence ratings of sadness but not those of happy, angry, or neutral facial expressions. Healthy subjects rated the valence and arousal of photographs depicting facial muscular expressions of sadness, surprise, fear, and disgust. Pupil sizes within the stimuli were experimentally manipulated. Subjects themselves were scored with an empathy questionnaire. Diminishing pupil size linearly enhanced intensity and valence judgments of sad expressions (but not fear, surprise, or disgust). At debriefing, subjects were unaware of differences in pupil size across stimuli. These observations complement an earlier study showing that pupil size directly influences processing of sadness but not other basic emotional facial expressions. Furthermore, across subjects, the degree to which pupil size influenced sadness processing correlated with individual differences in empathy score. Together, these data demonstrate a central role of sadness processing in empathetic emotion and highlight the salience of implicit autonomic signals in affective communication. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
This study compared young and older adults' ability to recognize bodily and auditory expressions of emotion and to match bodily and facial expressions to vocal expressions. Using emotion discrimination and matching techniques, participants assessed emotion in voices (Experiment 1), point-light displays (Experiment 2), and still photos of bodies with faces digitally erased (Experiment 3). Older adults performed worse, at least some of the time, at recognizing anger, sadness, fear, and happiness in bodily expressions and anger in vocal expressions. Compared with young adults, older adults also found it more difficult to match auditory expressions to facial expressions (5 of 6 emotions) and bodily expressions (3 of 6 emotions). (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Studies of emotion signaling inform claims about the taxonomic structure, evolutionary origins, and physiological correlates of emotions. Emotion vocalization research has tended to focus on a limited set of emotions: anger, disgust, fear, sadness, surprise, happiness, and, for the voice, also tenderness. Here, we examine how well brief vocal bursts can communicate 22 different emotions: 9 negative (Study 1) and 13 positive (Study 2), and whether prototypical vocal bursts convey emotions more reliably than heterogeneous vocal bursts (Study 3). Results show that vocal bursts communicate emotions like anger, fear, and sadness, as well as seldom-studied states like awe, compassion, interest, and embarrassment. Ancillary analyses reveal family-wise patterns of vocal burst expression. Errors in classification were more common within emotion families (e.g., "self-conscious," "pro-social") than between emotion families. The three studies reported highlight the voice as a rich modality for emotion display that can inform fundamental constructs about emotion. (PsycINFO Database Record (c) 2010 APA, all rights reserved)


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号