Found 20 similar documents (search time: 31 ms)
1.
Despite the fact that facial expressions of emotion have signal value, there is surprisingly little research examining how that signal can be detected under various conditions, because most judgment studies utilize full-face, frontal views. We remedy this by obtaining judgments of frontal and profile views of the same expressions displayed by the same expressors. We predicted that recognition accuracy when viewing faces in profile would be lower than when judging the same faces from the front. Contrary to this prediction, there were no differences in recognition accuracy as a function of view, suggesting that emotions are judged equally well regardless of the angle from which they are viewed. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
2.
Gaze direction influences younger adults' perception of emotional expressions, with direct gaze enhancing the perception of anger and joy, while averted gaze enhances the perception of fear. Age-related declines in emotion recognition and eye-gaze processing have been reported, indicating that there may be age-related changes in the ability to integrate these facial cues. As there is evidence of a positivity bias with age, age-related difficulties integrating these cues may be greatest for negative emotions. The present research investigated age differences in the extent to which gaze direction influenced explicit perception (e.g., anger, fear and joy; Study 1) and social judgments (e.g., of approachability; Study 2) of emotion faces. Gaze direction did not influence the perception of fear in either age group. In both studies, age differences were found in the extent to which gaze direction influenced judgments of angry and joyful faces, with older adults showing less integration of gaze and emotion cues than younger adults. Age differences were greatest when interpreting angry expressions. Implications of these findings for older adults' social functioning are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
3.
Isaacowitz Derek M.; Löckenhoff Corinna E.; Lane Richard D.; Wright Ron; Sechrest Lee; Riedel Robert; Costa Paul T. 《Psychology and Aging》2007,22(1):147
Age differences in emotion recognition from lexical stimuli and facial expressions were examined in a cross-sectional sample of adults aged 18 to 85 (N = 357). Emotion-specific response biases differed by age: Older adults were disproportionately more likely to incorrectly label lexical stimuli as happiness, sadness, and surprise and to incorrectly label facial stimuli as disgust and fear. After these biases were controlled, findings suggested that older adults were less accurate at identifying emotions than were young adults, but the pattern differed across emotions and task types. The lexical task showed stronger age differences than the facial task, and for lexical stimuli, age groups differed in accuracy for all emotional states except fear. For facial stimuli, in contrast, age groups differed only in accuracy for anger, disgust, fear, and happiness. Implications for age-related changes in different types of emotional processing are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
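The bias-control step in an analysis like this can be made concrete. Below is a minimal Python sketch of Wagner's (1993) unbiased hit rate, a standard index for separating accuracy from emotion-specific labeling biases; whether these authors used this exact index is an assumption, and the confusion matrix is invented for illustration.

```python
# Sketch of Wagner's (1993) unbiased hit rate (Hu), one standard way to
# control the emotion-specific response biases the abstract describes.
# The confusion matrix below is illustrative, not the study's data.
import numpy as np

# Rows = stimulus emotion presented, columns = label chosen.
conf = np.array([[40.0,  5.0,  5.0],
                 [10.0, 30.0, 10.0],
                 [ 8.0, 12.0, 30.0]])

hits = np.diag(conf)                # correct identifications per emotion
stim_totals = conf.sum(axis=1)      # how often each emotion was shown
resp_totals = conf.sum(axis=0)      # how often each label was used (bias lives here)

# Hu = hits^2 / (row total * column total): high only when an emotion is
# frequently recognized and its label is not simply overused.
unbiased_hit_rate = hits**2 / (stim_totals * resp_totals)
print(unbiased_hit_rate)
```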
4.
de Jong Peter J.; Koster Ernst H. W.; van Wees Rineke; Martens Sander 《Emotion》2010,10(5):727
There is considerable evidence indicating that people are primed to monitor social signals of disapproval. Thus far, studies on selective attention have concentrated predominantly on the spatial domain, whereas the temporal consequences of identifying socially threatening information have received only scant attention. Therefore, this study focused on temporal attention costs and examined how the presentation of emotional expressions affects subsequent identification of task-relevant information. High (n = 30) and low (n = 31) socially anxious women were exposed to a dual-target rapid serial visual presentation (RSVP) paradigm. Emotional faces (neutral, happy, angry) were presented as the first target (T1) and neutral letter stimuli (p, q, d, b) as the second target (T2). Irrespective of social anxiety, the attentional blink was relatively large when angry faces were presented as T1. This apparent prioritized processing of angry faces is consistent with evolutionary models, stressing the importance of being especially attentive to potential signals of social threat. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
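To make the paradigm concrete, here is a minimal sketch of how the attentional blink is typically scored in such a dual-target RSVP task: T2 accuracy conditional on a correct T1 report, split by T1 emotion and lag. The column names, lag values, and data are illustrative assumptions, not the study's materials.

```python
# Sketch: scoring the attentional blink as conditional T2|T1 accuracy.
# All values below are illustrative, not the study's data.
import pandas as pd

trials = pd.DataFrame({
    "t1_emotion": ["angry", "angry", "happy", "neutral", "angry", "happy"],
    "lag":        [2, 7, 2, 2, 7, 7],        # T1-T2 separation in RSVP items
    "t1_correct": [True, True, True, True, True, False],
    "t2_correct": [False, True, True, True, True, True],
})

# Only trials with a correct T1 report enter the blink measure.
valid = trials[trials["t1_correct"]]
blink = valid.groupby(["t1_emotion", "lag"])["t2_correct"].mean()
print(blink)  # lower accuracy at short lags after angry T1 faces = larger blink
```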
5.
Masuda Takahiko; Ellsworth Phoebe C.; Mesquita Batja; Leu Janxin; Tanida Shigehito; Van de Veerdonk Ellen 《Journal of Personality and Social Psychology》2008,94(3):365
Two studies tested the hypothesis that in judging people's emotions from their facial expressions, Japanese, more than Westerners, incorporate information from the social context. In Study 1, participants viewed cartoons depicting a happy, sad, angry, or neutral person surrounded by other people expressing the same emotion as the central person or a different one. The surrounding people's emotions influenced Japanese but not Western participants' perceptions of the central person. These differences reflect differences in attention, as indicated by eye-tracking data (Study 2): Japanese looked at the surrounding people more than did Westerners. Previous findings on East-West differences in contextual sensitivity generalize to social contexts, suggesting that Westerners see emotions as individual feelings, whereas Japanese see them as inseparable from the feelings of the group. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
6.
Moulson Margaret C.; Fox Nathan A.; Zeanah Charles H.; Nelson Charles A. 《Developmental Psychology》2009,45(1):17
To examine the neurobiological consequences of early institutionalization, the authors recorded event-related potentials (ERPs) from 3 groups of Romanian children--currently institutionalized, previously institutionalized but randomly assigned to foster care, and family-reared children--in response to pictures of happy, angry, fearful, and sad facial expressions of emotion. At 3 assessments (baseline, 30 months, and 42 months), institutionalized children showed markedly smaller amplitudes and longer latencies for the occipital components P1, N170, and P400 compared to family-reared children. By 42 months, ERP amplitudes and latencies of children placed in foster care were intermediate between the institutionalized and family-reared children, suggesting that foster care may be partially effective in ameliorating adverse neural changes caused by institutionalization. The age at which children were placed into foster care was unrelated to their ERP outcomes at 42 months. Facial emotion processing was similar in all 3 groups of children; specifically, fearful faces elicited larger amplitude and longer latency responses than happy faces for the frontocentral components P250 and Nc. These results have important implications for understanding the role that experience plays in shaping the developing brain. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
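For readers unfamiliar with ERP scoring, a minimal sketch of how a component's peak amplitude and latency are commonly extracted follows; the sampling rate, the N170 search window, and the waveform are assumptions for illustration, not the authors' parameters.

```python
# Sketch: extracting peak amplitude and latency for an ERP component.
# Sampling rate, time window, and waveform are illustrative assumptions.
import numpy as np

fs = 500                                   # Hz (assumed)
erp = np.random.randn(500)                 # stand-in for a 1-s averaged ERP (microvolts)
times = np.arange(erp.size) / fs * 1000    # ms relative to face onset

def peak_in_window(erp, times, lo_ms, hi_ms, polarity=-1):
    """Return (amplitude, latency_ms) of the extreme point in a window.
    polarity=-1 finds the most negative peak (e.g., N170), +1 the most positive."""
    mask = (times >= lo_ms) & (times <= hi_ms)
    seg, seg_t = erp[mask], times[mask]
    i = np.argmin(seg) if polarity < 0 else np.argmax(seg)
    return seg[i], seg_t[i]

amp, lat = peak_in_window(erp, times, 130, 200, polarity=-1)  # assumed N170 window
print(f"N170: {amp:.2f} uV at {lat:.0f} ms")
```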
7.
Two studies (N = 68, ages 2;0–3;11; N = 80, ages 2;6–4;11) explore the idea that, rather than starting with a separate mental category for each discrete emotion, children start with two broad categories (positive and negative) and then differentiate within each until adult-like categories form. Children generated emotion labels for (a) facial expressions or (b) stories about an emotion's cause and consequence. Emotions included were happiness, anger, fear, sadness, and disgust. Both conditions yielded the predicted pattern of differentiation. These studies of younger children found the face more powerful in eliciting correct emotion labels than had prior research, which typically relied on older preschoolers. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
8.
Three experiments tested the hypothesis that explaining emotional expressions using specific emotion concepts at encoding biases perceptual memory for those expressions. In Experiment 1, participants viewed faces expressing blends of happiness and anger and created explanations of why the target people were expressing one of the two emotions, according to concepts provided by the experimenter. Later, participants attempted to identify the facial expressions in computer movies, in which the previously seen faces changed continuously from anger to happiness. Faces conceptualized in terms of anger were remembered as angrier than the same faces conceptualized in terms of happiness, regardless of whether the explanations were told aloud or imagined. Experiments 2 and 3 showed that explanation is necessary for the conceptual biases to emerge fully and extended the finding to anger-sad expressions, an emotion blend more common in real life. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
9.
In 2 experiments, the authors tested predictions from cognitive models of social anxiety regarding attentional biases for social and nonsocial cues by monitoring eye movements to pictures of faces and objects in high social anxiety (HSA) and low social anxiety (LSA) individuals. Under no-stress conditions (Experiment 1), HSA individuals initially directed their gaze toward neutral faces, relative to objects, more often than did LSA participants. However, under social-evaluative stress (Experiment 2), HSA individuals showed reduced biases in initial orienting and maintenance of gaze on faces (cf. objects) compared with the LSA group. HSA individuals were also relatively quicker to look at emotional faces than neutral faces but looked at emotional faces for less time, compared with LSA individuals, consistent with a vigilant-avoidant pattern of bias. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
10.
The type of visual information needed for categorizing faces and nonface objects was investigated by manipulating spatial frequency scales available in the image during a category verification task addressing basic and subordinate levels. Spatial filtering had opposite effects on faces and airplanes that were modulated by categorization level. The absence of low frequencies impaired the categorization of faces similarly at both levels, whereas the absence of high frequencies was inconsequential throughout. In contrast, basic-level categorization of airplanes was equally impaired by the absence of either low or high frequencies, whereas at the subordinate level, the absence of high frequencies had more deleterious effects. These data suggest that categorization of faces either at the basic level or by race is based primarily on their global shape but also on the configuration of details. By contrast, basic-level categorization of objects is based on their global shape, whereas category-specific diagnostic details determine the information needed for their subordinate categorization. The authors conclude that the entry point in visual recognition is flexible and determined conjointly by the stimulus category and the level of categorization, which reflects the observer’s recognition goal. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
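A minimal sketch of the kind of spatial-frequency manipulation this abstract describes, using Gaussian blurring to build low-pass and high-pass versions of an image; the cutoff (sigma) and the stand-in image are assumptions, since the study's exact filtering parameters are not given here.

```python
# Sketch: low-pass / high-pass spatial-frequency versions of a face image.
# The sigma value and random image are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

face = np.random.rand(256, 256)            # stand-in for a grayscale face

low_pass = gaussian_filter(face, sigma=8)  # coarse shape only ("no high SF")
high_pass = face - low_pass                # fine detail only ("no low SF")

# Rescale the high-pass residual into a displayable 0-1 intensity range.
high_pass_display = (high_pass - high_pass.min()) / np.ptp(high_pass)
```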
11.
Dotsch Ron; Wigboldus Daniël H. J.; van Knippenberg Ad 《Journal of Personality and Social Psychology》2011,100(6):999
Three studies show that social categorization is biased at the level of category allocation. In all studies, participants categorized faces. In Studies 1 and 2, participants overallocated faces with criminal features—a stereotypical negative trait—to the stigmatized Moroccan category, especially if they were prejudiced. In contrast, the stereotype-irrelevant negative trait stupid did not lead to overallocation to the Moroccan category. In Study 3, using the stigmatized category homosexual, the previously used negative trait criminal—irrelevant to the homosexual stereotype—did not lead to overallocation, but the stereotype-relevant positive trait femininity did. These results demonstrate that normative fit is higher for faces with stereotype-relevant features regardless of valence. Moreover, individual differences in implicit prejudice predicted the extent to which stereotype-relevant traits elicited overallocation: Whereas more negatively prejudiced people showed greater overallocation of faces associated with negative stereotype-relevant traits, they showed less overallocation of faces associated with positive stereotype-relevant traits. These results support our normative fit hypothesis: In general, normative fit is better for faces with stereotypical features. Moreover, normative fit is enhanced for prejudiced individuals when these features are evaluatively congruent. Social categorization thus may be biased in itself. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
12.
There is evidence that specific regions of the face such as the eyes are particularly relevant for the decoding of emotional expressions, but it has not been examined whether scan paths of observers vary for facial expressions with different emotional content. In this study, eye-tracking was used to monitor scanning behavior of healthy participants while looking at different facial expressions. Locations of fixations and their durations were recorded, and a dominance ratio (i.e., eyes and mouth relative to the rest of the face) was calculated. Across all emotional expressions, initial fixations were most frequently directed to either the eyes or the mouth. For sad facial expressions in particular, participants directed the initial fixation to the eyes more frequently than for any other expression. In happy facial expressions, participants fixated the mouth region for a longer time across all trials. For fearful and neutral facial expressions, the dominance ratio indicated that both the eyes and mouth are equally important. However, in sad and angry facial expressions, the eyes received more attention than the mouth. These results confirm the relevance of the eyes and mouth in emotional decoding, but they also demonstrate that not all facial expressions with different emotional content are decoded equally. Our data suggest that people look at regions that are most characteristic for each emotion. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
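One plausible reading of the abstract's dominance ratio, fixation time on the eyes and mouth relative to the rest of the face, is sketched below; the area-of-interest labels, the exact formula, and the data are assumptions.

```python
# Sketch: a dominance ratio (eyes + mouth vs. rest of face) from fixations.
# AOI labels, formula, and durations are illustrative assumptions.
import pandas as pd

fixations = pd.DataFrame({
    "aoi":    ["eyes", "mouth", "cheek", "eyes", "forehead", "mouth"],
    "dur_ms": [310, 220, 90, 400, 120, 180],
})

core = fixations["aoi"].isin(["eyes", "mouth"])
ratio = fixations.loc[core, "dur_ms"].sum() / fixations.loc[~core, "dur_ms"].sum()
print(f"dominance ratio: {ratio:.2f}")  # > 1 means eyes/mouth dominate
```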
13.
The present study was designed to examine the operation of depression-specific biases in the identification or labeling of facial expression of emotions. Participants diagnosed with major depression and social phobia and control participants were presented with faces that expressed increasing degrees of emotional intensity, slowly changing from a neutral to a full-intensity happy, sad, or angry expression. The authors assessed individual differences in the intensity of facial expression of emotion that was required for the participants to accurately identify the emotion being expressed. The depressed participants required significantly greater intensity of emotion than did the social phobic and the control participants to correctly identify happy expressions and less intensity to identify sad than angry expressions. In contrast, social phobic participants needed less intensity to correctly identify the angry expressions than did the depressed and control participants and less intensity to identify angry than sad expressions. Implications of these results for interpersonal functioning in depression and social phobia are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
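A minimal sketch of how the identification threshold in such a morph paradigm can be scored, taking the lowest intensity at which the participant labels the target emotion correctly; the 10% step size, the first-correct criterion, and the responses are illustrative assumptions.

```python
# Sketch: scoring the intensity threshold in a neutral-to-emotion morph task.
# Step size, criterion, and responses are illustrative assumptions.
intensities = list(range(10, 101, 10))   # % emotional intensity shown
responses = ["neutral", "neutral", "neutral", "sad", "sad",
             "sad", "sad", "sad", "sad", "sad"]
target = "sad"

# Threshold = first intensity step with a correct identification.
threshold = next(i for i, r in zip(intensities, responses) if r == target)
print(f"intensity required: {threshold}%")
```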
14.
The claim that specific discrete emotions can be universally recognized from human facial expressions is based mainly on the study of expressions that were posed. The current study (N=50) examined recognition of emotion from 20 spontaneous expressions from Papua New Guinea photographed, coded, and labeled by P. Ekman (1980). For the 16 faces with a single predicted label, endorsement of that label ranged from 4.2% to 45.8% (mean 24.2%). For 4 faces with 2 predicted labels (blends), endorsement of one or the other ranged from 6.3% to 66.6% (mean 38.8%). Of the 24 labels Ekman predicted, 11 were endorsed at an above-chance level, and 13 were not. Spontaneous expressions do not achieve the level of recognition achieved by posed expressions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
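The above-chance test for label endorsement can be illustrated with a one-sample binomial test; the chance level depends on how many response options judges had, which the abstract does not state, so the 1/6 value and the endorsement count below are assumptions.

```python
# Sketch: testing whether a label's endorsement rate exceeds chance.
# The chance level (1/6) and endorsement count are assumptions.
from scipy.stats import binomtest

n_judges = 50            # N reported in the abstract
endorsed = 23            # ~45.8% of 50, the abstract's highest endorsement
chance = 1 / 6           # assumed: six emotion labels available

result = binomtest(endorsed, n_judges, chance, alternative="greater")
print(result.pvalue)     # small p -> endorsed above chance level
```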
15.
Lipp Ottmar V.; Price Sarah M.; Tellegen Cassandra L. 《Emotion》2009,9(2):248
The decrease in recognition performance after face inversion has been taken to suggest that faces are processed holistically. Three experiments, 1 with schematic and 2 with photographic faces, were conducted to assess whether face inversion also affected visual search for and implicit evaluation of facial expressions of emotion. The 3 visual search experiments yielded the same differences in detection speed between different facial expressions of emotion for upright and inverted faces. Threat superiority effects, faster detection of angry than of happy faces among neutral background faces, were evident in 2 experiments. Face inversion did not affect explicit or implicit evaluation of face stimuli as assessed with verbal ratings and affective priming. Happy faces were evaluated as more positive than angry, sad, or fearful/scheming ones regardless of orientation. Taken together these results seem to suggest that the processing of facial expressions of emotion is not impaired if holistic processing is disrupted. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
16.
Seemingly trivial social talk provides fertile ground for emotion sharing (a narrator and audience's realization that they experience the same emotional response toward a target), which in turn creates a coalition between the narrator and the audience, configures the narrator and audience's relationship with the target, and coordinates their target-directed action. In this article, the authors use 4 studies to investigate this thesis. In Studies 1 and 2--where participants rated scenarios in which narrators told them anecdotes--the authors found that when there was emotion sharing (a) participants were more bonded with narrators, (b) the narrator and audience's relationship with the target (as reflected in action tendencies) was determined by the emotionality of the anecdotes, and (c) they coordinated their target-directed actions. Study 3 demonstrated that this effect was indeed due to emotion sharing. Study 4 provided behavioral evidence for the effects of emotion sharing using a 2-person trust game. Together, these studies reveal that the everyday act of social talk is a powerful act that is able to shape the social triad of the narrator, the audience, and the social target, with powerful consequences for social structure and group action. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
17.
Two experiments competitively test 3 potential mechanisms (negativity inhibiting responses, feature-based accounts, and evaluative context) for the response latency advantage for recognizing happy expressions by investigating how the race of a target can moderate the strength of the effect. Both experiments indicate that target race modulates the happy face advantage, such that European American participants displayed the happy face advantage for White target faces, but displayed a response latency advantage for angry (Experiments 1 and 2) and sad (Experiment 2) Black target faces. This pattern of findings is consistent with an evaluative context mechanism and inconsistent with negativity inhibition and feature-based accounts of the happy face advantage. Thus, the race of a target face provides an evaluative context in which facial expressions are categorized. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
18.
Lynch Thomas R.; Rosenthal M. Zachary; Kosson David S.; Cheavens Jennifer S.; Lejuez C. W.; Blair R. J. R. 《Emotion》2006,6(4):647
Individuals with borderline personality disorder (BPD) have been hypothesized to exhibit significant problems associated with emotional sensitivity. The current study examined emotional sensitivity (i.e., low threshold for recognition of emotional stimuli) in BPD by comparing 20 individuals with BPD and 20 normal controls on their accuracy in identifying emotional expressions. Results demonstrated that, as facial expressions morphed from neutral to maximum intensity, participants with BPD correctly identified facial affect at an earlier stage than did healthy controls. Participants with BPD were more sensitive than healthy controls in identifying emotional expressions in general, regardless of valence. These findings could not be explained by participants with BPD responding faster with more errors. Overall, results appear to support the contention that heightened emotional sensitivity may be a core feature of BPD. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
19.
This paper examines the relationship between degree of handedness and degree of cerebral lateralization on a task of processing positive facial emotion in right-handed individuals. Three hundred and thirteen right-handed participants (157 women) were given two behavioral tests of lateralization: a handedness questionnaire and a chimeric faces test. Two further handedness measures were taken: familial left-handedness and writing posture. Regression analysis showed that both degree of handedness and sex were predictive of degree of lateralization. Individuals who were strongly right-handed were also more strongly lateralized to the right hemisphere for the task. Men were more strongly lateralized than women. Data were reanalyzed for men and women separately. The relationship between handedness and lateralization remained for men only. Neither familial left-handedness nor writing posture was associated with cerebral lateralization for men or women. The results suggest a positive relationship between degree of handedness and degree of cerebral lateralization, and further that there is a sex difference in this relationship. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
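The two laterality scores such a regression typically uses can be written out explicitly; the Edinburgh-style handedness quotient and this chimeric-faces bias formula are common conventions, assumed here rather than taken from the paper itself.

```python
# Sketch: conventional degree-of-handedness and chimeric-faces scores.
# Both formulas are standard conventions, assumed rather than the paper's own.

def laterality_quotient(right: int, left: int) -> float:
    """Edinburgh-style LQ = 100 * (R - L) / (R + L);
    +100 = strongly right-handed, -100 = strongly left-handed."""
    return 100 * (right - left) / (right + left)

def chimeric_bias(left_field_choices: int, n_trials: int) -> float:
    """Proportion of trials where the half-face in the left visual field
    (projecting to the right hemisphere) drove the judgment, centered on zero."""
    return left_field_choices / n_trials - 0.5

print(laterality_quotient(18, 2))   # 80.0 -> strong right-hander
print(chimeric_bias(28, 36))        # > 0 -> right-hemisphere lateralization
```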
20.
Parr Lisa A.; Waller Bridget M.; Vick Sarah J.; Bard Kim A. 《Emotion》2007,7(1):172
The Chimpanzee Facial Action Coding System (ChimpFACS) is an objective, standardized observational tool for measuring facial movement in chimpanzees based on the well-known human Facial Action Coding System (FACS; P. Ekman & W. V. Friesen, 1978). This tool enables direct structural comparisons of facial expressions between humans and chimpanzees in terms of their common underlying musculature. Here the authors provide data on the first application of the ChimpFACS to validate existing categories of chimpanzee facial expressions using discriminant functions analyses. The ChimpFACS validated most existing expression categories (6 of 9) and, where the predicted group memberships were poor, the authors discuss potential problems with ChimpFACS and/or existing categorizations. The authors also report the prototypical movement configurations associated with these 6 expression categories. For all expressions, unique combinations of muscle movements were identified, and these are illustrated as peak intensity prototypical expression configurations. Finally, the authors suggest a potential homology between these prototypical chimpanzee expressions and human expressions based on structural similarities. These results contribute to our understanding of the evolution of emotional communication by suggesting several structural homologies between the facial expressions of chimpanzees and humans and facilitating future research. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
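For readers who want the analysis concrete, a discriminant-function validation of the kind described can be approximated with scikit-learn: classify expression category from action-unit vectors and compare predicted with a priori group membership. The data shapes, category names, and random data are assumptions, not actual ChimpFACS codings.

```python
# Sketch: a discriminant-function validation of expression categories.
# Action-unit data and category names are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(90, 15)).astype(float)  # 90 expressions x 15 action units
y = rng.choice(["bared-teeth", "play face", "scream"], size=90)  # a priori labels

lda = LinearDiscriminantAnalysis().fit(X, y)
predicted = lda.predict(X)

# High agreement between predicted and a priori membership would count as
# validating a category; poor agreement flags a problematic categorization.
print((predicted == y).mean())
```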