20 similar articles found; search time: 0 ms
1.
Hunter Edyta Monika; Phillips Louise H.; MacPherson Sarah E. 《Canadian Metallurgical Quarterly》2010,25(4):779
Efficient navigation of our social world depends on the generation, interpretation, and combination of social signals within different sensory systems. However, the influence of healthy adult aging on multisensory integration of emotional stimuli remains poorly explored. This article comprises 2 studies that directly address issues of age differences on cross-modal emotional matching and explicit identification. The first study compared 25 younger adults (19–40 years) and 25 older adults (60–80 years) on their ability to match cross-modal congruent and incongruent emotional stimuli. The second study looked at performance of 20 younger (19–40) and 20 older adults (60–80) on explicit emotion identification when information was presented congruently in faces and voices or only in faces or in voices. In Study 1, older adults performed as well as younger adults on tasks in which congruent auditory and visual emotional information were presented concurrently, but there were age-related differences in matching incongruent cross-modal information. Results from Study 2 indicated that though older adults were impaired at identifying emotions from 1 modality (faces or voices alone), they benefited from congruent multisensory information as age differences were eliminated. The findings are discussed in relation to social, emotional, and cognitive changes with age. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
2.
The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on this literature and extends this work. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion–identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
3.
Despite the fact that facial expressions of emotion have signal value, there is surprisingly little research examining how that signal can be detected under various conditions, because most judgment studies utilize full-face, frontal views. We remedy this by obtaining judgments of frontal and profile views of the same expressions displayed by the same expressors. We predicted that recognition accuracy when viewing faces in profile would be lower than when judging the same faces from the front. Contrary to this prediction, there were no differences in recognition accuracy as a function of view, suggesting that emotions are judged equally well regardless of the angle from which they are viewed. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
4.
Reports an error in "Facial expressions of emotion influence memory for facial identity in an automatic way" by Arnaud D'Argembeau and Martial Van der Linden (Emotion, 2007[Aug], Vol 7[3], 507-515). The image printed for Figure 3 was incorrect. The correct image is provided in the erratum. (The following abstract of the original article appeared in record 2007-11660-005.) Previous studies indicate that the encoding of new facial identities in memory is influenced by the type of expression displayed by the faces. In the current study, the authors investigated whether or not this influence requires attention to be explicitly directed toward the affective meaning of facial expressions. In a first experiment, the authors found that facial identity was better recognized when the faces were initially encountered with a happy rather than an angry expression, even when attention was oriented toward facial features other than expression. Using the Remember/Know/Guess paradigm in a second experiment, the authors found that the influence of facial expressions on the conscious recollection of facial identity was even more pronounced when participants' attention was not directed toward expressions. It is suggested that the affective meaning of facial expressions automatically modulates the encoding of facial identity in memory. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
5.
Three experiments tested the hypothesis that explaining emotional expressions using specific emotion concepts at encoding biases perceptual memory for those expressions. In Experiment 1, participants viewed faces expressing blends of happiness and anger and created explanations of why the target people were expressing one of the two emotions, according to concepts provided by the experimenter. Later, participants attempted to identify the facial expressions in computer movies, in which the previously seen faces changed continuously from anger to happiness. Faces conceptualized in terms of anger were remembered as angrier than the same faces conceptualized in terms of happiness, regardless of whether the explanations were told aloud or imagined. Experiments 2 and 3 showed that explanation is necessary for the conceptual biases to emerge fully and extended the finding to anger-sadness expressions, an emotion blend more common in real life. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
6.
Masuda Takahiko; Ellsworth Phoebe C.; Mesquita Batja; Leu Janxin; Tanida Shigehito; Van de Veerdonk Ellen 《Canadian Metallurgical Quarterly》2008,94(3):365
Two studies tested the hypothesis that in judging people's emotions from their facial expressions, Japanese, more than Westerners, incorporate information from the social context. In Study 1, participants viewed cartoons depicting a happy, sad, angry, or neutral person surrounded by other people expressing the same emotion as the central person or a different one. The surrounding people's emotions influenced Japanese participants', but not Westerners', perceptions of the central person. These differences reflect differences in attention, as indicated by eye-tracking data (Study 2): Japanese looked at the surrounding people more than did Westerners. Previous findings on East-West differences in contextual sensitivity generalize to social contexts, suggesting that Westerners see emotions as individual feelings, whereas Japanese see them as inseparable from the feelings of the group. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
7.
Facial expressions of emotion are key cues to deceit (M. G. Frank & P. Ekman, 1997). Given that the literature on aging has shown an age-related decline in decoding emotions, we investigated (a) whether there are age differences in deceit detection and (b) if so, whether they are related to impairments in emotion recognition. Young and older adults (N = 364) were presented with 20 interviews (crime and opinion topics) and asked to decide whether each interview subject was lying or telling the truth. There were 3 presentation conditions: visual, audio, or audiovisual. In older adults, reduced emotion recognition was related to poor deceit detection in the visual condition for crime interviews only. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
8.
Phillips Louise H.; Channon Shelley; Tunstall Mary; Hedenstrom Anna; Lyons Kathryn 《Canadian Metallurgical Quarterly》2008,8(2):184
Decoding facial expressions of emotion is an important aspect of social communication that is often impaired following psychiatric or neurological illness. However, little is known of the cognitive components involved in perceiving emotional expressions. Three dual-task studies explored the role of verbal working memory in decoding emotions. Concurrent working memory load substantially interfered with choosing which emotional label described a facial expression (Experiment 1). A key factor in the magnitude of interference was the number of emotion labels from which to choose (Experiment 2). In contrast, the ability to decide that two faces represented the same emotion in a discrimination task was relatively unaffected by concurrent working memory load (Experiment 3). Different methods of assessing emotion perception make substantially different demands on working memory. Implications for clinical disorders that affect both working memory and emotion perception are considered. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
9.
Fugate Jennifer M. B.; Gouzoules Harold; Barrett Lisa Feldman 《Canadian Metallurgical Quarterly》2010,10(4):544
Categorical perception (CP) occurs when continuously varying stimuli are perceived as belonging to discrete categories. As a result, perceivers are more accurate at discriminating between stimuli of different categories than between stimuli within the same category (Harnad, 1987; Goldstone, 1994). The current experiments investigated whether the structural information in the face is sufficient for CP to occur. Alternatively, a perceiver's conceptual knowledge, by virtue of expertise or verbal labeling, might contribute. In two experiments, people who differed in their conceptual knowledge (in the form of expertise, Experiment 1; or verbal label learning, Experiment 2) categorized chimpanzee facial expressions. Expertise alone did not facilitate CP. Only when perceivers first explicitly learned facial expression categories with a label were they more likely to show CP. Overall, the results suggest that the structural information in the face alone is often insufficient for CP; CP is facilitated by verbal labeling. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
10.
In this study I used a temporal bisection task to test whether greater overestimation of time due to negative emotion is moderated by individual differences in negative emotionality. The effects of fearful facial expressions on time perception were also examined. After a training phase, participants estimated the duration of facial expressions (anger, happiness, fearfulness) and a neutral-baseline facial expression. Consistent with the operation of an arousal-based process, the duration of angry expressions was consistently overestimated relative to other expressions and the baseline condition. In support of a role for individual differences in negative emotionality on time perception, temporal bias due to angry and fearful expressions was positively correlated with individual differences in self-reported negative emotionality. The results are discussed in relation both to the literature on attentional bias to facial expressions in anxiety and fearfulness and to the hypothesis that angry expressions evoke a fear-specific response. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
11.
Lynch Thomas R.; Rosenthal M. Zachary; Kosson David S.; Cheavens Jennifer S.; Lejuez C. W.; Blair R. J. R. 《Canadian Metallurgical Quarterly》2006,6(4):647
Individuals with borderline personality disorder (BPD) have been hypothesized to exhibit significant problems associated with emotional sensitivity. The current study examined emotional sensitivity (i.e., low threshold for recognition of emotional stimuli) in BPD by comparing 20 individuals with BPD and 20 normal controls on their accuracy in identifying emotional expressions. Results demonstrated that, as facial expressions morphed from neutral to maximum intensity, participants with BPD correctly identified facial affect at an earlier stage than did healthy controls. Participants with BPD were more sensitive than healthy controls in identifying emotional expressions in general, regardless of valence. These findings could not be explained by participants with BPD responding faster with more errors. Overall, results appear to support the contention that heightened emotional sensitivity may be a core feature of BPD. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
12.
Infants' responsiveness to others' affective expressions was investigated in the context of a peekaboo game. Forty 4-month-olds participated in a peekaboo game in which the typical happy/surprised expression was systematically replaced with a different emotion, depending on group assignment. Infants viewed three typical peekaboo trials followed by a change (anger, fear, or sadness) or no-change (happiness/surprise) trial, repeated over two blocks. Infants' looking time and affective responsiveness were measured. Results revealed differential patterns of visual attention and affective responsiveness to each emotion. These results underscore the importance of contextual information for facilitating recognition of emotion expressions as well as the efficacy of using converging measures to assess such understanding. Infants as young as 4 months appear to discriminate and respond in meaningful ways to others' emotion expressions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
13.
Two studies provide evidence for the role of cultural familiarity in recognizing facial expressions of emotion. For Chinese located in China and the United States, Chinese Americans, and non-Asian Americans, accuracy and speed in judging Chinese and American emotions were greater with greater participant exposure to the group posing the expressions. Likewise, Tibetans residing in China and Africans residing in the United States were faster and more accurate when judging emotions expressed by host versus nonhost society members. These effects extended across generations of Chinese Americans, seemingly independent of ethnic or biological ties. Results suggest that the universal affect system governing emotional expression may be characterized by subtle differences in style across cultures, which become more familiar with greater cultural contact. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
14.
Tracy Jessica L.; Robins Richard W.; Schriber Roberta A. 《Canadian Metallurgical Quarterly》2009,9(4):554
In 2 studies, the authors developed and validated a new set of standardized emotion expressions, which they referred to as the University of California, Davis, Set of Emotion Expressions (UCDSEE). The precise components of each expression were verified using the Facial Action Coding System (FACS). The UCDSEE is the first FACS-verified set to include the three “self-conscious” emotions known to have recognizable expressions (embarrassment, pride, and shame), as well as the 6 previously established “basic” emotions (anger, disgust, fear, happiness, sadness, and surprise), all posed by the same 4 expressers (African and White males and females). This new set has numerous potential applications in future research on emotion and related topics. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
15.
In his book The Expression of the Emotions in Man and Animals, Charles Darwin (1872/1965) defended the argument that emotion expressions are evolved and adaptive (at least at some point in the past) and serve an important communicative function. The ideas he developed in his book had an important impact on the field and spawned rich domains of inquiry. This article presents Darwin's three principles in this area and then discusses some of the research topics that developed out of his theoretical vision. In particular, the focus is on five issues: (a) the question of what emotion expressions express, (b) the notion of basic emotions, (c) the universality of emotion expressions, (d) the question of emotion prototypes, and (e) the issue of animal emotions, all of which trace their roots to Darwin's discussion of his first two principles. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
16.
[Correction Notice: An erratum for this article was reported in Vol 7(4) of Emotion (see record 2007-17748-022). The image printed for Figure 3 was incorrect. The correct image is provided in the erratum.] Previous studies indicate that the encoding of new facial identities in memory is influenced by the type of expression displayed by the faces. In the current study, the authors investigated whether or not this influence requires attention to be explicitly directed toward the affective meaning of facial expressions. In a first experiment, the authors found that facial identity was better recognized when the faces were initially encountered with a happy rather than an angry expression, even when attention was oriented toward facial features other than expression. Using the Remember/Know/Guess paradigm in a second experiment, the authors found that the influence of facial expressions on the conscious recollection of facial identity was even more pronounced when participants' attention was not directed toward expressions. It is suggested that the affective meaning of facial expressions automatically modulates the encoding of facial identity in memory. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
17.
The ability to allocate attention to emotional cues in the environment is an important feature of adaptive self-regulation. Existing data suggest that physically abused children overattend to angry expressions, but the attentional mechanisms underlying such behavior are unknown. The authors tested 8-11-year-old physically abused children to determine whether they displayed specific information-processing problems in a selective attention paradigm using emotional faces as cues. Physically abused children demonstrated delayed disengagement when angry faces served as invalid cues. Abused children also demonstrated increased attentional benefits on valid angry trials. Results are discussed in terms of the influence of early adverse experience on children's selective attention to threat-related signals as a mechanism in the development of psychopathology. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
18.
Experimental studies indicate that recognition of emotions, particularly negative emotions, decreases with age. However, there is no consensus on the age at which the decrease in emotion recognition begins, how selective this decline is to negative emotions, or whether it applies to both facial and vocal expression. In the current cross-sectional study, 607 participants ranging in age from 18 to 84 years (mean age = 32.6 ± 14.9 years) were asked to recognize emotions expressed either facially or vocally. In general, older participants were found to be less accurate at recognizing emotions, with the most distinctive age difference pertaining to a specific group of negative emotions. Both modalities revealed an age-related decline in the recognition of sadness and—to a lesser degree—anger, starting at about 30 years of age. Although age-related differences in the recognition of expression of emotion were not mediated by personality traits, 2 of the Big 5 traits, openness and conscientiousness, made an independent contribution to emotion-recognition performance. Implications of age-related differences in facial and vocal emotion expression and early onset of the selective decrease in emotion recognition are discussed in terms of previous findings and relevant theoretical models. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
19.
Dailey Matthew N.; Joyce Carrie; Lyons Michael J.; Kamachi Miyuki; Ishi Hanae; Gyoba Jiro; Cottrell Garrison W. 《Canadian Metallurgical Quarterly》2010,10(6):874
Facial expressions are crucial to human social communication, but the extent to which they are innate and universal versus learned and culture-dependent is a subject of debate. Two studies explored the effect of culture and learning on facial expression understanding. In Experiment 1, Japanese and U.S. participants interpreted facial expressions of emotion. Each group was better than the other at classifying facial expressions posed by members of the same culture. In Experiment 2, this reciprocal in-group advantage was reproduced by a neurocomputational model trained in either a Japanese cultural context or an American cultural context. The model demonstrates how each of us, interacting with others in a particular cultural context, learns to recognize a culture-specific facial expression dialect. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
20.
P. Rozin and A. B. Cohen (see record 2003-02341-009) contend that confusion is an emotion because it is valenced, it has a distinct facial expression, and it has a distinct internal state. On the basis of these criteria, they call for further study of this unstudied state and challenge emotion researchers to consider "confusion" to be an emotion. The author agrees with Rozin and Cohen (2003) that confusion is an affective state, is valenced, has an (internal) object, may be expressed facially, and that laypersons may, under certain circumstances, consider it an emotion. However, its expression is likely to be an expressive component of emotions for which goal obstruction is central. Further, confusion may not be as commonly considered an emotion by laypersons as Rozin and Cohen contend. Finally, confusion is not unstudied; it is simply that, most of the time, the researchers studying it are not emotion researchers. (PsycINFO Database Record (c) 2010 APA, all rights reserved)