Similar Articles
20 similar articles found (search time: 15 ms)
1.
[Correction Notice: An erratum for this article was reported in Vol 9(4) of Emotion (see record 2009-11528-009). In this article a symbol was incorrectly omitted from Figure 1, part C. To see the complete article with the corrected figure, please go to http://dx.doi.org/10.1037/a0014681.] People make trait inferences based on facial appearance despite little evidence that these inferences accurately reflect personality. The authors tested the hypothesis that these inferences are driven in part by structural resemblance to emotional expressions. The authors first had participants judge emotionally neutral faces on a set of trait dimensions. The authors then submitted the face images to a Bayesian network classifier trained to detect emotional expressions. By using a classifier, the authors can show that neutral faces perceived to possess various personality traits contain objective resemblance to emotional expression. In general, neutral faces that are perceived to have positive valence resemble happiness, faces that are perceived to have negative valence resemble disgust and fear, and faces that are perceived to be threatening resemble anger. These results support the idea that trait inferences are in part the result of an overgeneralization of emotion recognition systems. Under this hypothesis, emotion recognition systems, which typically extract accurate information about a person's emotional state, are engaged during the perception of neutral faces that bear subtle resemblance to emotional expressions. These emotions could then be misattributed as traits. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
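The analysis in this abstract — training an expression classifier, then scoring emotionally neutral faces for objective resemblance to each expression — can be sketched in miniature. The sketch below substitutes a hand-rolled Gaussian naive-Bayes classifier for the authors' Bayesian network, and the two "facial measurements" (mouth curvature, brow angle) and all data values are invented for illustration; nothing here reproduces the original stimuli or model.

```python
import math
import random

# Toy Gaussian naive-Bayes "expression detector", standing in for the
# authors' Bayesian network classifier. Features are hypothetical facial
# measurements (mouth curvature, brow angle); all data are synthetic.

def fit_gaussian_nb(samples):
    """samples: {label: [feature_vectors]} -> per-class means and variances."""
    model = {}
    for label, rows in samples.items():
        n, dim = len(rows), len(rows[0])
        means = [sum(r[i] for r in rows) / n for i in range(dim)]
        vars_ = [sum((r[i] - means[i]) ** 2 for r in rows) / n + 1e-6
                 for i in range(dim)]
        model[label] = (means, vars_)
    return model

def log_likelihood(model, label, x):
    means, vars_ = model[label]
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, means, vars_))

def resemblance(model, x):
    """Posterior over expression classes for one face (uniform priors)."""
    logs = {lab: log_likelihood(model, lab, x) for lab in model}
    mx = max(logs.values())
    exps = {lab: math.exp(v - mx) for lab, v in logs.items()}
    z = sum(exps.values())
    return {lab: e / z for lab, e in exps.items()}

random.seed(0)
# Synthetic training faces: [mouth_curvature, brow_angle]
happy = [[random.gauss(0.8, 0.1), random.gauss(0.1, 0.1)] for _ in range(50)]
angry = [[random.gauss(-0.3, 0.1), random.gauss(-0.7, 0.1)] for _ in range(50)]
model = fit_gaussian_nb({"happy": happy, "angry": angry})

# A "neutral" face whose features subtly lean toward a smile: the classifier
# quantifies that subtle resemblance even though the face is not smiling.
neutral_face = [0.25, -0.05]
scores = resemblance(model, neutral_face)
print(scores)
```

The key point the example captures is methodological: because the classifier is trained only on expressions, any above-chance score it assigns a neutral face is an objective measure of structural resemblance, independent of human raters.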

2.
A new model of mental representation is applied to social cognition: the attractor field model. Using the model, the authors predicted and found a perceptual advantage but a memory disadvantage for faces displaying evaluatively congruent expressions. In Experiment 1, participants completed a same/different perceptual discrimination task involving morphed pairs of angry-to-happy Black and White faces. Pairs of faces displaying evaluatively incongruent expressions (i.e., happy Black, angry White) were more likely to be labeled as similar and were less likely to be accurately discriminated from one another than faces displaying evaluatively congruent expressions (i.e., angry Black, happy White). Experiment 2 replicated this finding and showed that objective discriminability of stimuli moderated the impact of attractor field effects on perceptual discrimination accuracy. In Experiment 3, participants completed a recognition task for angry and happy Black and White faces. Consistent with the attractor field model, memory accuracy was better for faces displaying evaluatively incongruent expressions. Theoretical and practical implications of these findings are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
A central question in perception is how stimuli are selected for access to awareness. This study investigated the impact of emotional meaning on detection of faces using the attentional blink paradigm. Experiment 1 showed that fearful faces were detected more frequently than neutral faces, and Experiment 2 revealed preferential detection of fearful faces compared with happy faces. To rule out image artifacts as a cause for these results, Experiment 3 manipulated the emotional meaning of neutral faces through fear conditioning and showed a selective increase in detection of conditioned faces. These results extend previous reports of preferential detection of emotional words or schematic objects and suggest that fear conditioning can modulate detection of formerly neutral stimuli. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
When 2 different visual targets presented among different distracters in a rapid serial visual presentation (RSVP) are separated by 400 ms or less, detection and identification of the 2nd target are reduced relative to longer time intervals. This phenomenon, termed the attentional blink (AB), is attributed to the temporary engagement of a limited-capacity attentional system by the 1st target, which reduces resources available for processing the 2nd target. Although AB has been reliably obtained with many stimulus types, it has not been found for faces (E. Awh et al., 2004). In the present study, the authors investigate the underpinnings of this immunity. Unveiling circumstances in which AB occurs within and across faces and other categories, the authors demonstrate that a multichannel model cannot account for the absence of AB effects on faces. The authors suggest instead that perceptual salience of the face within the distracter series as well as the available resources determine whether or not faces are blinked in RSVP. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Although trustworthiness judgments based on a stranger's face occur rapidly (Willis & Todorov, 2006), their accuracy is unknown. We examined the accuracy of trustworthiness judgments of the faces of 2 groups differing in trustworthiness (Nobel Peace Prize recipients/humanitarians vs. America's Most Wanted criminals). Participants viewed 34 faces each for 100 ms or 30 s and rated their trustworthiness. Subsequently, participants were informed about the nature of the 2 groups and estimated group membership for each face. Judgments formed with extremely brief exposure were similar in accuracy and confidence to those formed after a long exposure. However, initial judgments of untrustworthy (criminals') faces were less accurate (M=48.8%) than were those of trustworthy faces (M=62.7%). Judgment accuracy was above chance for trustworthy targets only at Time 1 and slightly above chance for both target types at Time 2. Participants relied on perceived kindness and aggressiveness to inform their rapidly formed intuitive decisions. Thus, intuition plays a minor facilitative role in reading faces. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Do threatening or negative faces capture attention? The authors argue that evidence from visual search, spatial cuing, and flanker tasks is equivocal and that perceptual differences may account for effects attributed to emotional categories. Empirically, the authors examine the flanker task. Although they replicate previous results in which a positive face flanked by negative faces suffers more interference than a negative face flanked by positive faces, further results indicate that face perception is not necessary for the flanker-effect asymmetry and that the asymmetry also occurs with nonemotional stimuli. The authors conclude that the flanker-effect asymmetry with affective faces cannot be unambiguously attributed to emotional differences and may well be due to purely perceptual differences between the stimuli. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Despite the fact that faces are typically seen in the context of dynamic events, there is little research on infants' perception of moving faces. L. E. Bahrick, L. J. Gogate, and I. Ruiz (2002) demonstrated that 5-month-old infants discriminate and remember repetitive actions but not the faces of the women performing the actions. The present research tested an attentional salience explanation for these findings: that dynamic faces are discriminable to infants, but more salient actions compete for attention. Results demonstrated that 5-month-old infants discriminated faces in the context of actions when they had longer familiarization time (Experiment 1) and following habituation to a single person performing 3 different activities (Experiment 2). Further, 7-month-old infants who have had more experience with social events also discriminated faces in the context of actions. Overall, however, discrimination of actions was more robust and occurred earlier in processing time than discrimination of dynamic faces. These findings support an attentional salience hypothesis and indicate that faces are not special in the context of actions in early infancy. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
The authors previously reported that normal subjects are better at discriminating happy from neutral faces when the happy face is located to the viewer's right of the neutral face; conversely, discrimination of sad from neutral faces is better when the sad face is shown to the left, supporting a role for the left hemisphere in processing positive valence and for the right hemisphere in processing negative valence. Here, the authors extend this same task to subjects with unilateral cerebral damage (31 right, 28 left). Subjects with right damage performed worse when discriminating sad faces shown on the left, consistent with the prior findings. However, subjects with either left or right damage actually performed superior to normal controls when discriminating happy faces shown on the left. The authors suggest that perception of negative valence relies preferentially on the right hemisphere, whereas perception of positive valence relies on both left and right hemispheres. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Two studies compared hemispatial bias for perceiving chimeric faces in patients having either atypical or typical depression and healthy controls. A total of 245 patients having major depressive disorder (MDD) or dysthymia (164 with atypical features) and 115 controls were tested on the Chimeric Faces Test. Atypical depression differed from typical depression and controls in showing abnormally large right hemisphere bias. This was present in patients having either MDD or dysthymia and was not related to anxiety, physical anhedonia, or vegetative symptoms. In contrast, patients having MDD with melancholia showed essentially no right hemisphere bias. This is further evidence that atypical depression is a biologically distinct subtype and underscores the importance of this diagnostic distinction for neurophysiologic studies. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Three experiments tested the hypothesis that explaining emotional expressions using specific emotion concepts at encoding biases perceptual memory for those expressions. In Experiment 1, participants viewed faces expressing blends of happiness and anger and created explanations of why the target people were expressing one of the two emotions, according to concepts provided by the experimenter. Later, participants attempted to identify the facial expressions in computer movies, in which the previously seen faces changed continuously from anger to happiness. Faces conceptualized in terms of anger were remembered as angrier than the same faces conceptualized in terms of happiness, regardless of whether the explanations were told aloud or imagined. Experiments 2 and 3 showed that explanation is necessary for the conceptual biases to emerge fully and extended the finding to anger-sad expressions, an emotion blend more common in real life. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Prior studies of emotion suggest that young adults should have enhanced memory for negative faces and that this enhancement should be reduced in older adults. Several studies have not shown these effects but were conducted with procedures different from those used with other emotional stimuli. In this study, researchers examined age differences in recognition of faces with emotional or neutral expressions, using trial-unique stimuli, as is typically done with other types of emotional stimuli. They also assessed the influence of personality traits and mood on memory. Enhanced recognition for negative faces was found in young adults but not in older adults. Recognition of faces was not influenced by mood or personality traits in young adults, but lower levels of extraversion and better emotional sensitivity predicted better negative face memory in older adults. These results suggest that negative expressions enhance memory for faces in young adults, as negative valence enhances memory for words and scenes. This enhancement is absent in older adults, but memory for emotional faces is modulated in older adults by personality traits that are relevant to emotional processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
We frequently encounter groups of similar objects in our visual environment: a bed of flowers, a basket of oranges, a crowd of people. How does the visual system process such redundancy? Research shows that rather than code every element in a texture, the visual system favors a summary statistical representation of all the elements. The authors demonstrate that although it may facilitate texture perception, ensemble coding also occurs for faces—a level of processing well beyond that of textures. Observers viewed sets of faces varying in emotionality (e.g., happy to sad) and assessed the mean emotion of each set. Although observers retained little information about the individual set members, they had a remarkably precise representation of the mean emotion. Observers continued to discriminate the mean emotion accurately even when they viewed sets of 16 faces for 500 ms or less. Modeling revealed that perceiving the average facial expression in groups of faces was not due to noisy representation or noisy discrimination. These findings support the hypothesis that ensemble coding occurs extremely fast at multiple levels of visual analysis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
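The statistical logic behind an ensemble code — a precise mean despite imprecise members — can be illustrated with a short simulation. This is a generic averaging argument, not the authors' model: if each of n faces is encoded with independent noise, the mean of the noisy encodings has its error shrunk by roughly √n relative to any single member. The "emotionality" units, set size of 16, and noise level below are arbitrary illustrative choices.

```python
import random
import statistics

# Toy illustration of ensemble coding: averaging n independent noisy
# per-face encodings yields a mean-emotion estimate far more precise than
# memory for any individual face. Values are arbitrary "emotionality"
# units; this is a statistical sketch, not the authors' fitted model.

random.seed(1)
true_emotions = [random.uniform(-1, 1) for _ in range(16)]  # a 16-face set
true_mean = statistics.mean(true_emotions)

noise_sd = 0.5   # encoding noise applied to each individual face
trials = 2000
member_err, mean_err = [], []
for _ in range(trials):
    encoded = [e + random.gauss(0, noise_sd) for e in true_emotions]
    member_err.append(abs(encoded[0] - true_emotions[0]))        # one member
    mean_err.append(abs(statistics.mean(encoded) - true_mean))   # the ensemble

print(statistics.mean(member_err), statistics.mean(mean_err))
# For a 16-face set the mean-emotion error should be roughly a quarter
# (1/sqrt(16)) of the single-member error.
```

This is why poor memory for individual set members is compatible with accurate discrimination of the set's mean emotion: the averaging operation itself cancels most of the per-face noise.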

13.
Researchers have used several composite face paradigms to assess holistic processing of faces. In the selective attention paradigm, participants decide whether one face part (e.g., top) is the same as a previously seen face part. Their judgment is affected by whether the irrelevant part of the test face is the same as or different than the relevant part of the study face. This failure of selective attention implies holistic processing. However, the authors show that this task alone cannot distinguish between perceptual and decisional sources of holism. The distinction can be addressed by the complete identification paradigm, in which both face parts are judged to be same or different, combined with analyses based on general recognition theory (F. G. Ashby & J. T. Townsend, 1986). The authors used a different paradigm, sequential responses, to relate these 2 paradigms empirically and theoretically. Sequential responses produced the same results as did selective attention and complete identification. Moreover, disruptions of holistic processing by systematic misalignment of the faces corresponded with systematic and significant changes in the decisional components, but not in the perceptual components, that were extracted using general recognition theory measures. This finding suggests a significant decisional component of holistic face processing in the composite face task. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Immediate repetition priming for faces was examined across a range of prime durations in a threshold identification task. Similar to word repetition priming results, short duration face primes produced positive priming whereas long duration face primes eliminated or reversed this effect. A habituation model of such priming effects predicted that the speed of identification should relate to the prime duration needed to achieve negative priming. We used face priming to test this prediction in two ways. First, we examined the relationship between priming effects and individual differences in the target duration needed for threshold performance. Second, we compared priming of upright and inverted faces. As predicted, the transition from positive to negative priming as a function of prime duration occurred more slowly for inverted faces and for individuals with longer threshold target durations. Additional experiments ruled out alternative explanations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
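The habituation account this abstract tests can be sketched with a toy discrete-time model (in the spirit of neural habituation models of priming, e.g., Huber & O'Reilly's dynamics, though the equations and all parameter values below are invented for illustration): a prime drives activation through a depletable resource; residual activation facilitates the target, while resource depletion impairs it, so priming flips from positive to negative as prime duration grows — and flips later when encoding is slower, as for inverted faces.

```python
# Toy habituation model of repetition priming. A prime builds activation
# `a` through a depletable resource `r`; net priming at target onset is
# residual facilitation minus a habituation cost. Equations and parameters
# are illustrative assumptions, not the authors' fitted model.

def net_priming(prime_steps, rate):
    """Net priming after a prime of `prime_steps` time steps.

    rate = encoding speed (assumed lower for inverted faces)."""
    a, r = 0.0, 1.0
    for _ in range(prime_steps):
        a += rate * r * (1.0 - a)   # activation rises while resources last
        r -= 0.05 * a * r           # resources deplete with use
    return a - 2.0 * (1.0 - r)      # facilitation minus habituation cost

def crossover(rate):
    """Shortest prime duration at which net priming turns negative."""
    for steps in range(1, 200):
        if net_priming(steps, rate) < 0:
            return steps
    return None

upright = crossover(rate=0.5)    # fast encoding: upright faces
inverted = crossover(rate=0.2)   # slow encoding: inverted faces
print(upright, inverted)
# Slower encoding needs a longer prime before priming flips from positive
# to negative, matching the prediction the abstract describes.
```

The qualitative behavior, not the numbers, is the point: any model in which facilitation saturates early while habituation keeps accruing predicts a positive-to-negative crossover whose timing scales with encoding speed.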

15.
Problems with face recognition are frequent in older adults. However, the mechanisms involved have only been partially discovered. In particular, it is unknown to what extent these problems may be related to changes in configural face processing. Here, we investigated the face inversion effect (FIE) together with the ability to detect modifications in the vertical or horizontal second-order relations between facial features. We used a same/different unfamiliar face discrimination task with 33 young and 33 older adults. The results showed dissociations in the performances of older versus younger adults. There was a lack of inversion effect during the recognition of original faces by older adults. However, for modified faces, older adults showed a pattern of performance similar to that of young participants, with preserved FIE for vertically modified faces and no detectable FIE for horizontally modified faces. Most importantly, the detection of vertical modifications was preserved in older relative to young adults whereas the detection of horizontal modifications was markedly diminished. We conclude that age has dissociable effects on configural face-encoding processes, with a relative preservation of vertical compared to horizontal second-order relations processing. These results help to understand some divergent results in the literature and may explain the spared familiar face identification abilities in the daily lives of older adults. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

16.
Although lightness perception is clearly influenced by contextual factors, it is not known whether knowledge about the reflectance of specific objects also affects their lightness. Recent research by O. H. MacLin and R. Malpass (2003) suggests that subjects label Black faces as darker than White faces, so in the current experiments, an adjustment methodology was used to test the degree to which expectations about the relative skin tone associated with faces of varying races affect the perceived lightness of those faces. White faces were consistently judged to be relatively lighter than Black faces, even for racially ambiguous faces that were disambiguated by labels. Accordingly, relatively abstract expectations about the relative reflectance of objects can affect their perceived lightness. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
From birth, infants are exposed to a wealth of emotional information in their interactions. Much research has been done to investigate the development of emotion perception, and factors influencing that development. The current study investigates the role of familiarity on 3.5-month-old infants' generalization of emotional expressions. Infants were assigned to one of two habituation sequences: in one sequence, infants were visually habituated to parental expressions of happy or sad. At test, infants viewed either a continuation of the habituation sequence, their mother depicting a novel expression, an unfamiliar female depicting the habituated expression, or an unfamiliar female depicting a novel expression. In the second sequence, a new sample of infants was matched to the infants in the first sequence. These infants viewed the same habituation and test sequences, but the actors were unfamiliar to them. Only those infants who viewed their own mothers and fathers during the habituation sequence increased looking. They dishabituated looking to maternal novel expressions, the unfamiliar female's novel expression, and the unfamiliar female depicting the habituated expression, especially when sad parental expressions were followed by an expression change to happy or to a change in person. Infants are guided in their recognition of emotional expressions by the familiarity of their parents, before generalizing to others. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

18.
The authors used connectionist modeling to extend previous research on emotion overgeneralization effects. Study 1 demonstrated that neutral expression male faces objectively resemble angry expressions more than female faces do, female faces objectively resemble surprise expressions more than male faces do, White faces objectively resemble angry expressions more than Black or Korean faces do, and Black faces objectively resemble happy and surprise expressions more than White faces do. Study 2 demonstrated that objective resemblance to emotion expressions influences trait impressions even when statistically controlling possible confounding influences of attractiveness and babyfaceness. It further demonstrated that emotion overgeneralization is moderated by face race and that racial differences in emotion resemblance contribute to White perceivers' stereotypes of Blacks and Asians. These results suggest that intergroup relations may be strained not only by cultural stereotypes but also by adaptive responses to emotion expressions that are overgeneralized to groups whose faces subtly resemble particular emotions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
It has been claimed that exposure to distorted faces of one sex induces perceptual aftereffects for test faces that are of the same sex, but not for test faces of the other sex (A. C. Little, L. M. DeBruine, & B. C. Jones, 2005). This result suggests that male and female faces have separate neural coding. Given the high degree of visual similarity between faces of different sexes, this result is surprising. The authors reinvestigated male and female face coding using a different face distortion. In Experiment 1, participants adapted to distorted faces from one sex (e.g., male contracted faces) and were tested with faces of both sexes. Aftereffects were found for both male and female faces, suggesting the existence of common coding mechanisms. In Experiments 2 and 3, participants adapted to oppositely distorted faces from both sexes (male contracted and female expanded faces). Weak opposite aftereffects were found for male and female faces, suggesting the existence of sex-selective face coding mechanisms. Taken together, these results indicate that both common and sex-selective mechanisms code male and female faces. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
The most familiar emotional signals consist of faces, voices, and whole-body expressions, but so far research on emotions expressed by the whole body is sparse. The authors investigated recognition of whole-body expressions of emotion in three experiments. In the first experiment, participants performed a body expression-matching task. Results indicate good recognition of all emotions, with fear being the hardest to recognize. In the second experiment, two alternative forced choice categorizations of the facial expression of a compound face-body stimulus were strongly influenced by the bodily expression. This effect was a function of the ambiguity of the facial expression. In the third experiment, recognition of emotional tone of voice was similarly influenced by task irrelevant emotional body expressions. Taken together, the findings illustrate the importance of emotional whole-body expressions in communication either when viewed on their own or, as is often the case in realistic circumstances, in combination with facial expressions and emotional voices. (PsycINFO Database Record (c) 2010 APA, all rights reserved)


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号