Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features, especially the smiling mouth, is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
The “face in the crowd effect” refers to the finding that threatening or angry faces are detected more efficiently among a crowd of distractor faces than happy or nonthreatening faces. Work establishing this effect has primarily utilized schematic stimuli, and efforts to extend the effect to real faces have yielded inconsistent results. The failure to consistently translate the effect from schematic to human faces raises questions about its ecological validity. The present study assessed the face in the crowd effect using a visual search paradigm that placed veridical faces, verified to exemplify prototypical emotional expressions, within heterogeneous crowds. Results confirmed that angry faces were found more quickly and accurately than happy expressions in crowds of both neutral and emotional distractors. These results are the first to extend the face in the crowd effect beyond homogeneous crowds to more ecologically valid conditions and thus provide compelling evidence for its legitimacy as a naturalistic phenomenon. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
In 2 experiments, participants were presented with schematic faces with emotional expressions (threatening, friendly) in a neutral-faces context or neutral expressions in an emotional-faces context. These conditions were compared with detection performance in displays containing key features of emotional faces not forming the perceptual gestalt of a face. Supporting the notion of a threat detection advantage, Experiment 1 found that threatening faces were detected faster than friendly faces, whereas no difference emerged between the corresponding feature conditions. Experiment 2 increased task difficulty with a backward masking procedure and found corresponding results. In neither of the studies was the threat detection advantage associated with reduced accuracy. However, features were, in general, detected faster than faces when task difficulty was high. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
The anger-superiority hypothesis states that angry faces are detected more efficiently than friendly faces. Previous research used schematized stimuli, which minimizes perceptual confounds but violates ecological validity. The authors argue that a confounding of appearance and meaning is unavoidable and even unproblematic if real faces are presented. Four experiments tested carefully controlled photos in a search-asymmetry design. Experiments 1 and 2 revealed more efficient detection of an angry face among happy faces than vice versa. Experiment 3 indicated that the advantage was due to the mouth, but not to the eyes, and Experiment 4, using upright and inverted thatcherized faces, suggested a perceptual basis. The results are in line with a sensory-bias hypothesis that facial expressions evolved to exploit extant capabilities of the visual system. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Threatening, friendly, and neutral faces were presented to test the hypothesis of the facilitated perceptual processing of threatening faces. Dense sensor event-related brain potentials were measured while subjects viewed facial stimuli. Subjects had no explicit task for emotional categorization of the faces. Assessing early perceptual stimulus processing, threatening faces elicited an early posterior negativity compared with nonthreatening neutral or friendly expressions. Moreover, at later stages of stimulus processing, facial threat also elicited augmented late positive potentials relative to the other facial expressions, indicating the more elaborate perceptual analysis of these stimuli. Taken together, these data demonstrate the facilitated perceptual processing of threatening faces. Results are discussed within the context of an evolved module of fear (A. Ohman & S. Mineka, 2001). (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
In a face-in-the-crowd setting, the authors examined visual search for photographically reproduced happy, angry, and fearful target faces among neutral distractor faces in 3 separate experiments. Contrary to the hypothesis, happy targets were consistently detected more quickly and accurately than angry and fearful targets, as were targets with direct gaze compared with averted gaze. There was no consistent effect of social anxiety. A facial emotion recognition experiment suggested that the happy search advantage could be due to the ease of processing happy faces. In the final experiment with perceptually controlled schematic faces, the authors reported more effective detection of angry than happy faces. This angry advantage was most obvious for highly socially anxious individuals when their social fear was experimentally enhanced. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
The present study investigated age-related variations in judgments of the duration of angry facial expressions compared with neutral facial expressions. Children aged 3, 5, and 8 years were tested on a temporal bisection task using angry and neutral female faces. Results revealed that, in all age groups, children judged the duration of angry faces to be longer than that of neutral faces. Findings are discussed in the framework of internal clock models and the adaptive function of emotion. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Two studies examined the information that defines a threatening facial display. The first study identified those facial characteristics that distinguish between representations of threatening and nonthreatening facial displays. Masks that presented either threatening or nonthreatening facial displays were obtained from a number of non-Western cultures and scored for the presence of those facial features that discriminated between such displays in the drawings of two American samples. Threatening masks contained a significantly higher number of these characteristics across all cultures examined. The second study determined whether the information provided by the facial display might be primary, nonrepresentational visual patterns rather than facial features with obvious denotative meaning (e.g., diagonal lines rather than downturned eyebrows). The subjective response to sets of diagonal, angular, and curvilinear visual stimuli revealed that the nonrepresentational features of angularity and diagonality in the visual stimulus appeared able to evoke the subjective responses that convey the meaning of threat. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Rapid evaluation of ecologically relevant stimuli may lead to their preferential access to awareness. Continuous flash suppression allows assessment of affective processing under conditions in which stimuli have been rendered invisible due to the strongly suppressive nature of dynamic noise relative to static images. The authors investigated whether fearful expressions emerge from suppression into awareness more quickly than images of neutral or happy expressions. Fearful faces were consistently detected faster than neutral or happy faces. Responses to inverted faces were slower than those to upright faces but showed the same effect of emotional expression, suggesting that some key feature or features in the inverted faces remained salient. When using stimuli solely representing the eyes, a similar bias for detecting fear emerged, implicating the importance of information from the eyes in the preconscious processing of fear expressions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
J. B. Halberstadt and P. M. Niedenthal (2001) reported that explanations of target individuals' emotional states biased memory for their facial expressions in the direction of the explanation. The researchers argued for, but did not test, a 2-stage model of the explanation effect, such that verbal explanation increases attention to facial features at the expense of higher level featural configuration, making the faces vulnerable to conceptual reintegration in terms of available emotion categories. The current 4 experiments provided convergent evidence for the "featural shift" hypothesis by examining memory for both faces and facial features following verbal explanation. Featural attention was evidenced by verbalizers' better memory for features relative to control participants and reintegration by a weaker explanation bias for features and configurally altered faces than for whole, unaltered faces. The results have implications for emotion, attribution, language, and the interaction of implicit and explicit processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Research has shown that neutral faces are better recognized when they had been presented with happy rather than angry expressions at study, suggesting that emotional signals conveyed by facial expressions influenced the encoding of novel facial identities in memory. An alternative explanation, however, would be that the influence of facial expression resulted from differences in the visual features of the expressions employed. In this study, this possibility was tested by manipulating facial expression at study versus test. In line with earlier studies, we found that neutral faces were better recognized when they had been previously encountered with happy rather than angry expressions. On the other hand, when neutral faces were presented at study and participants were later asked to recognize happy or angry faces of the same individuals, no influence of facial expression was detected. As the two experimental conditions involved exactly the same amount of changes in the visual features of the stimuli between study and test, the results cannot be simply explained by differences in the visual properties of different facial expressions and may instead reside in their specific emotional meaning. The findings further suggest that the influence of facial expression is due to disruptive effects of angry expressions rather than facilitative effects of happy expressions. This study thus provides additional evidence that facial identity and facial expression are not processed completely independently. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

12.
Identification of other people's emotion from quickly presented stimuli, including facial expressions, is fundamental to many social processes, including rapid mimicry and empathy. This study examined extraction of valence from brief emotional expressions in adults with autism spectrum disorder (ASD), a condition characterized by impairments in understanding and sharing of emotions. Control participants were individuals with reading disability and typical individuals. Participants were shown images for durations in the range of microexpressions (15 ms and 30 ms), thus reducing the reliance on higher level cognitive skills. Participants detected whether (a) emotional faces were happy or angry, (b) neutral faces were male or female, and (c) neutral images were animals or objects. Individuals with ASD performed selectively worse on emotion extraction, with no group differences for gender or animal–object tasks. The emotion extraction deficit remains even when controlling for gender, verbal ability, and age and is not accounted for by speed-accuracy tradeoffs. The deficit in rapid emotional processing may contribute to ASD difficulties in mimicry, empathy, and related processes. The results highlight the role of rapid early emotion processing in adaptive social–emotional functioning. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Theoretical models of attention for affective information have assigned a special status to the cognitive processing of emotional facial expressions. One specific claim in this regard is that emotional faces automatically attract visual attention. In three experiments, the authors investigated attentional cueing by angry, happy, and neutral facial expressions that were presented under conditions of limited awareness. In these experiments, facial expressions were presented in a masked (14 ms or 34 ms, masked by a neutral face) and unmasked fashion (34 ms or 100 ms). Compared with trials containing neutral cues, delayed responding was found on trials with emotional cues in the unmasked, 100-ms condition, suggesting stronger allocation of cognitive resources to emotional faces. However, in both masked and unmasked conditions, the hypothesized cueing of visual attention to the location of emotional facial expression was not found. On the contrary, attentional cueing was weaker for emotional than for neutral faces in the unmasked, 100-ms condition. These data suggest that briefly presented emotional faces influence cognitive processing but do not automatically capture visual attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Adult-like attentional biases toward fearful faces can be observed in 7-month-old infants. It is possible, however, that infants merely allocate attention to simple features such as enlarged fearful eyes. In the present study, 7-month-old infants (n = 15) were first shown individual emotional faces to determine their visual scanning patterns of the expressions. Second, an overlap task was used to examine the latency of attention disengagement from centrally presented faces. In both tasks, the stimuli were fearful, happy, and neutral facial expressions, and a neutral face with fearful eyes. Eye-tracking data from the first task showed that infants scanned the eyes more than other regions of the face; however, there were no differences in scanning patterns across expressions. In the overlap task, infants were slower in disengaging attention from fearful as compared to happy and neutral faces and also to neutral faces with fearful eyes. Together, these results provide evidence that threat-related stimuli tend to hold attention preferentially in 7-month-old infants and that the effect does not reflect a simple response to differentially salient eyes in fearful faces. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
The decrease in recognition performance after face inversion has been taken to suggest that faces are processed holistically. Three experiments, 1 with schematic and 2 with photographic faces, were conducted to assess whether face inversion also affected visual search for and implicit evaluation of facial expressions of emotion. The 3 visual search experiments yielded the same differences in detection speed between different facial expressions of emotion for upright and inverted faces. Threat superiority effects, faster detection of angry than of happy faces among neutral background faces, were evident in 2 experiments. Face inversion did not affect explicit or implicit evaluation of face stimuli as assessed with verbal ratings and affective priming. Happy faces were evaluated as more positive than angry, sad, or fearful/scheming ones regardless of orientation. Taken together, these results suggest that the processing of facial expressions of emotion is not impaired if holistic processing is disrupted. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Fearful faces receive privileged access to awareness relative to happy and nonemotional faces. We investigated whether this advantage depends on currently available attentional resources. In an attentional blink paradigm, observers detected faces presented during the attentional blink period that could depict either a fearful or a happy expression. Perceptual load of the blink-inducing target was manipulated by increasing flanker interference. For the low-load condition, fearful faces were detected more often than happy faces, replicating previous reports. More important, this advantage for fearful faces disappeared for the high-load condition, during which fearful and happy faces were detected equally often. These results suggest that the privileged access of fearful faces to awareness does not occur mandatorily, but instead depends on attentional resources. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on this literature and extends this work. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion–identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
The present study was designed to examine the operation of depression-specific biases in the identification or labeling of facial expression of emotions. Participants diagnosed with major depression and social phobia and control participants were presented with faces that expressed increasing degrees of emotional intensity, slowly changing from a neutral to a full-intensity happy, sad, or angry expression. The authors assessed individual differences in the intensity of facial expression of emotion that was required for the participants to accurately identify the emotion being expressed. The depressed participants required significantly greater intensity of emotion than did the social phobic and the control participants to correctly identify happy expressions and less intensity to identify sad than angry expressions. In contrast, social phobic participants needed less intensity to correctly identify the angry expressions than did the depressed and control participants and less intensity to identify angry than sad expressions. Implications of these results for interpersonal functioning in depression and social phobia are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Three experiments evaluated whether facial expression can modulate the allocation of focused attention. Identification of emotionally expressive target faces was typically faster when they were flanked by identical (compatible) faces compared with when they were flanked by different (incompatible) faces. This flanker compatibility effect was significantly smaller when target faces expressed negative compared with positive emotion (see Experiment 1A); however, when the faces were altered to disrupt emotional expression, yet retain feature differences, equal flanker compatibility effects were observed (see Experiment 1B). The flanker compatibility effect was also found to be smaller for negative target faces compared with neutral target faces, and for both negative and neutral target faces compared with positive target faces (see Experiment 2). These results suggest that the constriction of attention is influenced by facial expressions of emotion. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
This study compared young and older adults’ ability to recognize bodily and auditory expressions of emotion and to match bodily and facial expressions to vocal expressions. Using emotion discrimination and matching techniques, participants assessed emotion in voices (Experiment 1), point-light displays (Experiment 2), and still photos of bodies with faces digitally erased (Experiment 3). Older adults were worse, at least some of the time, at recognizing anger, sadness, fear, and happiness in bodily expressions and anger in vocal expressions. Compared with young adults, older adults also found it more difficult to match auditory expressions to facial expressions (5 of 6 emotions) and bodily expressions (3 of 6 emotions). (PsycINFO Database Record (c) 2010 APA, all rights reserved)


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号