Similar Documents
20 similar documents found (search time: 31 ms)
1.
The present study was designed to examine the operation of depression-specific biases in the identification or labeling of facial expression of emotions. Participants diagnosed with major depression and social phobia and control participants were presented with faces that expressed increasing degrees of emotional intensity, slowly changing from a neutral to a full-intensity happy, sad, or angry expression. The authors assessed individual differences in the intensity of facial expression of emotion that was required for the participants to accurately identify the emotion being expressed. The depressed participants required significantly greater intensity of emotion than did the social phobic and the control participants to correctly identify happy expressions and less intensity to identify sad than angry expressions. In contrast, social phobic participants needed less intensity to correctly identify the angry expressions than did the depressed and control participants and less intensity to identify angry than sad expressions. Implications of these results for interpersonal functioning in depression and social phobia are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Two experiments competitively test 3 potential mechanisms (negativity inhibiting responses, feature-based accounts, and evaluative context) for the response latency advantage for recognizing happy expressions by investigating how the race of a target can moderate the strength of the effect. Both experiments indicate that target race modulates the happy face advantage, such that European American participants displayed the happy face advantage for White target faces, but displayed a response latency advantage for angry (Experiments 1 and 2) and sad (Experiment 2) Black target faces. This pattern of findings is consistent with an evaluative context mechanism and inconsistent with negativity inhibition and feature-based accounts of the happy face advantage. Thus, the race of a target face provides an evaluative context in which facial expressions are categorized. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
An information-processing paradigm was used to examine attentional biases in clinically depressed participants, participants with generalized anxiety disorder (GAD), and nonpsychiatric control participants for faces expressing sadness, anger, and happiness. Faces were presented for 1,000 ms, at which point depressed participants had directed their attention selectively to depression-relevant (i.e., sad) faces. This attentional bias was specific to the emotion of sadness; the depressed participants did not exhibit attentional biases to the angry or happy faces. This bias was also specific to depression; at 1,000 ms, participants with GAD were not attending selectively to sad, happy, or anxiety-relevant (i.e., angry) faces. Implications of these findings for both the cognitive and the interpersonal functioning of depressed individuals are discussed and directions for future research are advanced. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Studies have found that older compared with young adults are less able to identify facial expressions and have worse memory for negative than for positive faces, but those studies have used only young faces. Studies finding that both age groups are more accurate at recognizing faces of their own than other ages have used mostly neutral faces. Thus, age differences in processing faces may not extend to older faces, and preferential memory for own age faces may not extend to emotional faces. To investigate these possibilities, young and older participants viewed young and older faces presented either with happy, angry, or neutral expressions; participants identified the expressions displayed and then completed a surprise face recognition task. Older compared with young participants were less able to identify expressions of angry young and older faces and (based on participants’ categorizations) remembered angry faces less well than happy faces. There was no evidence of an own age bias in memory, but self-reported frequency of contact with young and older adults and awareness of own emotions played a role in expression identification of and memory for young and older faces. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
We investigated age differences in biased recognition of happy, neutral, or angry faces in 4 experiments. Experiment 1 revealed increased true and false recognition for happy faces in older adults, which persisted even when changing each face’s emotional expression from study to test in Experiment 2. In Experiment 3, we examined the influence of reduced memory capacity on the positivity-induced recognition bias, which showed the absence of emotion-induced memory enhancement but a preserved recognition bias for positive faces in patients with amnestic mild cognitive impairment compared with older adults with normal memory performance. In Experiment 4, we used semantic differentials to measure the connotations of happy and angry faces. Younger and older participants regarded happy faces as more familiar than angry faces, but the older group showed a larger recognition bias for happy faces. This finding indicates that older adults use a gist-based memory strategy based on a semantic association between positive emotion and familiarity. Moreover, older adults’ judgments of valence were more positive for both angry and happy faces, supporting the hypothesis of socioemotional selectivity. We propose that the positivity-induced recognition bias might be based on fluency, which in turn is based on both positivity-oriented emotional goals and on preexisting semantic associations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
The authors previously reported that normal subjects are better at discriminating happy from neutral faces when the happy face is located to the viewer's right of the neutral face; conversely, discrimination of sad from neutral faces is better when the sad face is shown to the left, supporting a role for the left hemisphere in processing positive valence and for the right hemisphere in processing negative valence. Here, the authors extend this same task to subjects with unilateral cerebral damage (31 right, 28 left). Subjects with right damage performed worse when discriminating sad faces shown on the left, consistent with the prior findings. However, subjects with either left or right damage actually performed superior to normal controls when discriminating happy faces shown on the left. The authors suggest that perception of negative valence relies preferentially on the right hemisphere, whereas perception of positive valence relies on both left and right hemispheres. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
The decrease in recognition performance after face inversion has been taken to suggest that faces are processed holistically. Three experiments, 1 with schematic and 2 with photographic faces, were conducted to assess whether face inversion also affected visual search for and implicit evaluation of facial expressions of emotion. The 3 visual search experiments yielded the same differences in detection speed between different facial expressions of emotion for upright and inverted faces. Threat superiority effects, faster detection of angry than of happy faces among neutral background faces, were evident in 2 experiments. Face inversion did not affect explicit or implicit evaluation of face stimuli as assessed with verbal ratings and affective priming. Happy faces were evaluated as more positive than angry, sad, or fearful/scheming ones regardless of orientation. Taken together these results seem to suggest that the processing of facial expressions of emotion is not impaired if holistic processing is disrupted. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Neuroimaging data suggest that emotional information, especially threatening faces, automatically captures attention and receives rapid processing. While this is consistent with the majority of behavioral data, behavioral studies of the attentional blink (AB) additionally reveal that aversive emotional first target (T1) stimuli are associated with prolonged attentional engagement or “dwell” time. One explanation for this difference is that few AB studies have utilized manipulations of facial emotion as the T1. To address this, schematic faces varying in expression (neutral, angry, happy) served as the T1 in the current research. Results revealed that the blink associated with an angry T1 face was, primarily, of greater magnitude than that associated with either a neutral or happy T1 face, and also that initial recovery from this processing bias was faster following angry, compared with happy, T1 faces. The current data therefore provide important information regarding the time-course of attentional capture by angry faces: Angry faces are associated with both the rapid capture and rapid release of attention. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

9.
This study was designed to examine attentional biases in the processing of emotional faces in currently and formerly depressed participants and healthy controls. Using a dot-probe task, the authors presented faces expressing happy or sad emotions paired with emotionally neutral faces. Whereas both currently and formerly depressed participants selectively attended to the sad faces, the control participants selectively avoided the sad faces and oriented toward the happy faces, a positive bias that was not observed for either of the depressed groups. These results indicate that attentional biases in the processing of emotional faces are evident even after individuals have recovered from a depressive episode. Implications of these findings for understanding the roles of cognitive and interpersonal functioning in depression are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
An angry face is expected to be detected faster than a happy face because of an early, stimulus-driven analysis of threat-related properties. However, it is unclear to what extent results from the visual search approach—the face-in-the-crowd task—mirror this automatic analysis. The paper outlines a model of automatic threat detection that combines the assumption of a neuronal system for threat detection with contemporary theories of visual search. The model served as a guideline for the development of a new face-in-the-crowd task. The development involved three preliminary studies that provided a basis for the selection of angry and happy facial stimuli resembling each other in respect to perceptibility, homogeneity, and intensity. With these stimuli a signal detection version of the search task was designed and tested. For crowds composed of neutral faces, the sensitivity measure d′ confirmed the expected detection advantage of angry faces compared to happy faces. However, the emotional expression made no difference if a neutral face had to be detected in a crowd composed of either angry or happy faces. Results are in line with the assumption of a stimulus-driven shift of attention giving rise to the superior detection of angry target faces. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

11.
12.
Emotion is usually not discussed as a relevant variable in rational models of decision making, but it may be one. The present electroencephalographic study demonstrates the influence of emotional primes (angry, happy faces) on purchase decisions. In a within-subject design, pictures of an apartment were shown to participants who then had to make Go/NoGo decisions on whether to rent it. Their decision was to be based either on its price or on its brightness. In two thirds of the trials, emotional prime pictures of happy versus unhappy faces preceded the purchase target (apartment); in one third of the trials no prime was given. Response certainty was evaluated by means of reaction times (RT) and peak amplitude of the event-related potential N200. Facial primes accelerated decisions (RT) irrespective of affective expression. Positive face primes elicited larger N200 amplitudes during purchase decision compared to negative ones. Price-based decisions were made faster and elicited larger N200 than brightness-based decisions. These results support the cognitive-tuning model of decision making and validate the N200 as a sensitive measure for the interplay of cognitive and affective aspects in decision making. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Cognitive models of psychopathology posit that the content or focus of information-processing biases (e.g., attentional biases) is disorder specific: Depression is hypothesized to be characterized by attentional biases specifically for depression-relevant stimuli (e.g., sad facial expressions), whereas anxiety should relate particularly to attentional biases to threat-relevant stimuli (e.g., angry faces). However, little research has investigated this specificity hypothesis and none with a sample of youths. The present study examined attentional biases to emotional faces (sad, angry, and happy compared with neutral) in groups of pure depressed, pure anxious, comorbid depressed and anxious, and control youths (ages 9–17 years; N = 161). Consistent with cognitive models, pure depressed and pure anxious youths exhibited attentional biases specifically to sad and angry faces, respectively, whereas comorbid youths exhibited attentional biases to both facial expressions. In addition, control youths exhibited attentional avoidance of sad faces, and comorbid boys avoided happy faces. Overall, findings suggest that cognitive biases and processing of particular emotional information are specific to pure clinical depression and anxiety, and results inform etiological models of potentially specific processes that are associated with internalizing disorders among youths. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features, especially the smiling mouth, is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Difficulties in the ability to update stimuli in working memory (WM) may underlie the problems with regulating emotions that lead to the development and perpetuation of mood disorders such as depression. To examine the ability to update affective material in WM, the authors had diagnosed depressed and never-disordered control participants perform an emotion 2-back task in which participants were presented with a series of happy, sad, and neutral faces and were asked to indicate whether the current face had the same (match-set) or different (break-set or no-set) emotional expression as that presented 2 faces earlier. Participants also performed a 0-back task with the same emotional stimuli to serve as a control for perceptual processing. After transforming reaction times to control for baseline group differences, depressed and nondepressed participants exhibited biases in updating emotional content that reflect the tendency to keep negative information and positive information, respectively, active in WM. Compared with controls, depressed participants were both slower to disengage from sad stimuli and faster to disengage from happy facial expressions. In contrast, nondepressed controls took longer to disengage from happy stimuli than from neutral or sad stimuli. These group differences in reaction times may reflect both protective and maladaptive biases in WM that underlie the ability to effectively regulate negative affect. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
This study tested the hypothesis that interhemispheric communication about emotional stimuli is influenced by situational factors that alter emotional relevance. Under evaluative or nonevaluative conditions, participants matched angry and happy faces within a single visual field or across opposite visual fields. An overall across-field advantage (AFA) reflected the benefit of sharing information between the hemispheres. The AFA was greater for angry than for happy faces in the evaluation condition but did not differ for angry and happy faces in the no-evaluation condition. Examination of individual differences indicated that, in the evaluation condition, high trait levels of worry were associated with poorer interhemispheric communication of angry faces, supporting a threat-avoidance conception of worry. Thus, both situational factors and individual differences affected interhemispheric communication about emotional faces. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Facial expressions serve as cues that encourage viewers to learn about their immediate environment. In studies assessing the influence of emotional cues on behavior, fearful and angry faces are often combined into one category, such as “threat-related,” because they share similar emotional valence and arousal properties. However, these expressions convey different information to the viewer. Fearful faces indicate the increased probability of a threat, whereas angry expressions embody a certain and direct threat. This conceptualization predicts that a fearful face should facilitate processing of the environment to gather information to disambiguate the threat. Here, we tested whether fearful faces facilitated processing of neutral information presented in close temporal proximity to the faces. In Experiment 1, we demonstrated that, compared with neutral faces, fearful faces enhanced memory for neutral words presented in the experimental context, whereas angry faces did not. In Experiment 2, we directly compared the effects of fearful and angry faces on subsequent memory for emotional faces versus neutral words. We replicated the findings of Experiment 1 and extended them by showing that participants remembered more faces from the angry face condition relative to the fear condition, consistent with the notion that anger differs from fear in that it directs attention toward the angry individual. Because these effects cannot be attributed to differences in arousal or valence processing, we suggest they are best understood in terms of differences in the predictive information conveyed by fearful and angry facial expressions. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

18.
This study investigated the identification of facial expressions of emotion in currently nondepressed participants who had a history of recurrent depressive episodes (recurrent major depression; RMD) and never-depressed control participants (CTL). Following a negative mood induction, participants were presented with faces whose expressions slowly changed from neutral to full intensity. Identification of facial expressions was measured by the intensity of the expression at which participants could accurately identify whether faces expressed happiness, sadness, or anger. There were no group differences in the identification of sad or angry expressions. Compared with CTL participants, however, RMD participants required significantly greater emotional intensity in the faces to correctly identify happy expressions. These results indicate that biases in the processing of emotional facial expressions are evident even after individuals have recovered from a depressive episode. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
A new model of mental representation is applied to social cognition: the attractor field model. Using the model, the authors predicted and found a perceptual advantage but a memory disadvantage for faces displaying evaluatively congruent expressions. In Experiment 1, participants completed a same/different perceptual discrimination task involving morphed pairs of angry-to-happy Black and White faces. Pairs of faces displaying evaluatively incongruent expressions (i.e., happy Black, angry White) were more likely to be labeled as similar and were less likely to be accurately discriminated from one another than faces displaying evaluatively congruent expressions (i.e., angry Black, happy White). Experiment 2 replicated this finding and showed that objective discriminability of stimuli moderated the impact of attractor field effects on perceptual discrimination accuracy. In Experiment 3, participants completed a recognition task for angry and happy Black and White faces. Consistent with the attractor field model, memory accuracy was better for faces displaying evaluatively incongruent expressions. Theoretical and practical implications of these findings are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Findings of 7 studies suggested that decisions about the sex of a face and the emotional expressions of anger or happiness are not independent: Participants were faster and more accurate at detecting angry expressions on male faces and at detecting happy expressions on female faces. These findings were robust across different stimulus sets and judgment tasks and indicated bottom-up perceptual processes rather than just top-down conceptually driven ones. Results from additional studies in which neutrally expressive faces were used suggested that the connections between masculine features and angry expressions and between feminine features and happy expressions might be a property of the sexual dimorphism of the face itself and not merely a result of gender stereotypes biasing the perception. (PsycINFO Database Record (c) 2011 APA, all rights reserved)


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号