Similar literature
20 similar records found (search time: 15 ms)
1.
[Correction Notice: An erratum for this article was reported in Vol 11(4) of Emotion (see record 2011-13790-001). Figure 1, which should have been printed in color, was inadvertently printed in black and white. The online version has been corrected.] Faces of unknown persons are processed to infer those persons' intentions not only when they depict full-blown emotions, but also at rest, or when the faces do not signal any strong feelings. We explored the brain processes involved in these inferences to test whether they are similar to those found when judging full-blown emotions. We recorded the event-related brain potentials (ERPs) elicited by faces of unknown persons who, when photographed, were not asked to adopt any particular expression. During the ERP recording, participants had to decide whether each face appeared to be that of a positively, negatively, ambiguously, or neutrally intentioned person. The early posterior negativity (EPN) was smaller for neutrally categorized faces than for the other faces, suggesting that the automatic processes it indexes are similar to those evoked by full-blown expressions and thus that these processes might be involved in the decoding of intentions. In contrast, in the same 200–400 ms time window, ERPs were not more negative at anterior sites for neutrally intentioned faces. Second, the peaks of the late positive potentials (LPPs), maximal at parietal sites around 700 ms post-onset, were not significantly smaller for neutrally intentioned faces. Third, the slow positive waves that followed the LPP were larger for faces that took more time to categorize, that is, for ambiguously intentioned faces. These three sets of unexpected results may indicate processes similar to those triggered by full-blown emotions, but they call into question the characteristics of these processes. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

2.
Seeck et al. found that event-related potentials (ERPs) evoked by repeated and non-repeated face photographs differ as early as 50–70 ms post-onset. They thus suggested that faces are recognized at these latencies, in contrast with current opinions in the ERP literature. However, the similar latencies obtained by George et al. for stimuli not perceived as faces suggest that Seeck et al.'s differences could index repetition rather than face recognition per se. To address this issue, we used matched faces of known and unknown persons. We found the earliest differences between the ERPs to these faces between 76 and 130 ms. These results, which are consistent with other data, suggest that the differentiation of faces takes approximately 100 ms of processing time in humans.

3.
Age-related slowing in recognizing famous names and faces was investigated with event-related brain potentials (ERPs). In a group of young adults, item repetition induced early (220–340 ms) and late (400–700 ms) ERP modulations, apparently signaling the access to, respectively, domain-specific representations of faces and names and domain-general semantic knowledge about the persons. These repetition effects and other ERP components were then used as process-specific time markers in middle-aged and elderly participants. For both faces and names, the elderly participants' responses were slowed, but repetition priming in reaction times was not. The ERP latencies suggested that most of the age-related slowing occurred in the access to domain-specific representations and during response decision, whereas sensory and perceptual processing was largely spared.

4.
In this study, participants recognized target faces or names that were preceded by prime faces or names of related or unrelated persons. Associative priming effects on reaction times (RTs) and event-related brain potentials (ERPs) were equivalent for within-domain (e.g., face–face) and cross-domain (e.g., name–face) priming. Moreover, the ERP modulation due to priming was topographically equivalent for face and name targets. This suggests that priming of faces and names modulated the activity of the same brain sources, arguing for a postperceptual locus of priming. When combined effects of priming and target degradation were investigated, ERP priming effects were confined to time segments beyond 300 ms, whereas degradation effects started within the first 150 ms and were independent of priming. The findings suggest that associative priming in person recognition acts on domain-independent representations of person identity but not on perceptual processing.

5.
Findings of 7 studies suggested that decisions about the sex of a face and the emotional expressions of anger or happiness are not independent: Participants were faster and more accurate at detecting angry expressions on male faces and at detecting happy expressions on female faces. These findings were robust across different stimulus sets and judgment tasks and indicated bottom-up perceptual processes rather than just top-down conceptually driven ones. Results from additional studies in which neutrally expressive faces were used suggested that the connections between masculine features and angry expressions and between feminine features and happy expressions might be a property of the sexual dimorphism of the face itself and not merely a result of gender stereotypes biasing the perception.

6.
This study investigated the maternal concerns and emotions that may regulate one form of sensitive parenting, support for children's immediate desires or intentions. While reviewing a videotape of interactions with their 1-year-olds, mothers who varied in depressive symptoms reported the concerns and emotions they had during the interaction. Concerns reflected outcomes either for children (child-oriented concerns) or for mothers themselves (parent-oriented concerns). Child-oriented concerns were associated with fewer negative emotions and more supportive behavior. Supportive parenting was high among mothers who experienced high joy and worry and low anger, sadness, and guilt. However, these relations depended on whether emotions were child or parent oriented: supportive behavior occurred more when emotions were child oriented. In addition, as depressive symptoms increased, mothers reported fewer child-oriented concerns, fewer child-oriented positive emotions, and more parent-oriented negative emotions; they also displayed less supportive behavior. The findings suggest that support for children's immediate intentions may be regulated by parents' concerns, immediate emotions, and depressive symptoms.

7.
Emotion theorists assume certain facial displays to convey information about the expresser's emotional state. In contrast, behavioral ecologists assume them to indicate behavioral intentions or action requests. To test these contrasting positions, over 2,000 online participants were presented with facial expressions and asked what they revealed: feeling states, behavioral intentions, or action requests. The majority of the observers chose feeling states as the message of facial expressions of disgust, fear, sadness, happiness, and surprise, supporting the emotions view. Only the anger display tended to elicit more choices of behavioral intention or action request, partially supporting the behavioral ecology view. The results support the view that facial expressions communicate emotions, with emotions being multicomponential phenomena that comprise feelings, intentions, and wishes.

8.
Identification of other people's emotion from quickly presented stimuli, including facial expressions, is fundamental to many social processes, including rapid mimicry and empathy. This study examined extraction of valence from brief emotional expressions in adults with autism spectrum disorder (ASD), a condition characterized by impairments in understanding and sharing of emotions. Control participants were individuals with reading disability and typical individuals. Participants were shown images for durations in the range of microexpressions (15 ms and 30 ms), thus reducing the reliance on higher level cognitive skills. Participants detected whether (a) emotional faces were happy or angry, (b) neutral faces were male or female, and (c) neutral images were animals or objects. Individuals with ASD performed selectively worse on emotion extraction, with no group differences for the gender or animal-object tasks. The emotion extraction deficit remains even when controlling for gender, verbal ability, and age and is not accounted for by speed-accuracy tradeoffs. The deficit in rapid emotional processing may contribute to ASD difficulties in mimicry, empathy, and related processes. The results highlight the role of rapid early emotion processing in adaptive social-emotional functioning.

9.
10.
Event-related brain potentials (ERPs) were recorded during the test phases of two experiments. In experiment 1, subjects first studied two consecutively presented word lists. At test, they were presented with pairs of words and were required to judge which word had been presented most recently. The test pairs were composed of two previously studied words, one drawn from each list (Old+Old pairs), one previously studied word and one new word (Old+New pairs), or two unstudied words (New+New pairs). At temporo-parietal electrodes, ERPs to Old+Old and Old+New pairs were both reliably more positive-going than those to New+New pairs. At electrode sites overlying prefrontal cortex, ERPs to Old+Old pairs attracting correct recency judgements were more positive, from around 300 ms onwards, than those elicited by the other classes of item, which did not differ from one another. In experiment 2, the test task was changed to one that required discrimination between Old+New items on the one hand, and Old+Old and New+New pairs on the other. ERPs to Old+Old and Old+New pairs once again differed from those to New+New pairs at temporo-parietal sites, but no differences were evident between the ERPs from frontal electrode sites. In line with the evidence from lesion studies, these findings suggest that judgements of relative recency depend upon processes, supported by the prefrontal cortex, additional to those that are necessary for recognition memory. They further suggest that these processes are activated rapidly and selectively in response to pairs of studied items when these must be discriminated on the basis of their relative recency of occurrence.

11.
This study investigated age differences in cognitive and affective facets of empathy: the ability to perceive another's emotions accurately, the capacity to share another's emotions, and the ability to behaviorally express sympathy in an empathic episode. Participants, 80 younger (mean age = 32 years) and 73 older (mean age = 59 years) adults, viewed eight film clips, each portraying a younger or an older adult thinking aloud about an emotionally engaging topic that was relevant to either younger adults or older adults. In comparison to their younger counterparts, older adults generally reported and expressed greater sympathy while observing the target persons, and they were better able to share the emotions of the target persons who talked about a topic that was relevant to older adults. Age-related deficits in the cognitive ability to accurately perceive another's emotions were evident only when the target person talked about a topic of little relevance to older adults. In sum, the present performance-based evidence speaks for multidirectional age differences in empathy.

12.
Cognitive and neurobiological accounts of clinical anxiety and depression were examined via event-related brain potentials (ERPs) recorded from patients with panic disorder and healthy controls as they performed an old/new recognition memory task with emotionally negative and neutral words. The emotive connotation of words systematically influenced the control subjects', but not the patients', ERP effects at prefrontal sites in a latency range (approximately 300–500 ms) generally assumed to reflect a greater contribution of automatic than of controlled memory processes. This provides evidence for dysfunctional inhibitory modulation of affective information processing in panic disorder. The ERP effects after 700 ms, however, suggest that some patients may adopt conscious strategies to minimize the impact of these early processing abnormalities on overt behavior.

13.
Despite the fact that facial expressions of emotion have signal value, there is surprisingly little research examining how that signal can be detected under various conditions, because most judgment studies use full-face, frontal views. We remedy this by obtaining judgments of frontal and profile views of the same expressions displayed by the same expressors. We predicted that recognition accuracy would be lower when viewing faces in profile than when judging the same faces from the front. Contrary to this prediction, there were no differences in recognition accuracy as a function of view, suggesting that emotions are judged equally well regardless of the angle from which they are viewed.

14.
Our purpose in the present meta-analysis was to examine the extent to which discrete emotions elicit changes in cognition, judgment, experience, behavior, and physiology; whether these changes are correlated, as would be expected if emotions organize responses across these systems; and which factors moderate the magnitude of these effects. A total of 687 studies (4,946 effect sizes; 49,473 participants) that elicited the discrete emotions of happiness, sadness, anger, and anxiety as independent variables in adults were included. Consistent with discrete emotion theory, there were (a) moderate differences among discrete emotions; (b) differences among discrete negative emotions; and (c) correlated changes in behavior, experience, and physiology (cognition and judgment were mostly uncorrelated with other changes). Valence, valence–arousal, and approach–avoidance models of emotion were not as clearly supported: these factors are likely important components of emotion, but they could not fully account for the pattern of results. Most emotion elicitations were effective, although their efficacy varied with the emotions being compared; picture presentations were overall the most effective elicitors of discrete emotions. Stronger effects of emotion elicitations were associated with happiness versus negative emotions, self-reported experience, a greater proportion of women (for elicitations of happiness and sadness), omission of a cover story, and participants tested alone rather than in groups. Conclusions are limited by the inclusion of only some discrete emotions, the exclusion of studies that did not elicit discrete emotions, the few available effect sizes for some contrasts and moderators, and the methodological rigor of the included studies.

15.
The role of gender categories in prototype formation during face recognition was investigated in 2 experiments. Participants were asked to learn individual faces and then to recognize them. During recognition, the learned individual faces were mixed with blended faces (morphed composites) made from faces of the same or of different genders. The results of the 2 experiments showed that blends made from learned individual faces were recognized, even though they had never been seen before. In Experiment 1, this effect was stronger when the blended faces belonged to the same gender category (same-sex blends), but it also emerged across gender categories (cross-sex blends). Experiment 2 further showed that the prototype effect for same-sex blends was not affected by presentation order: the effect was equally strong whether the faces were presented one after the other during learning or alternated with faces of the opposite gender. By contrast, the prototype effect across gender categories was highly sensitive to the temporal proximity of the faces entering a blend and almost disappeared when other faces were intermixed. These results indicate that distinct neural populations code for female and male faces, although the formation of a facial representation can also be mediated by both populations. The implications for face-space properties and face-encoding processes are discussed.

16.
Tested the generality of G. A. Marlatt and J. R. Gordon's (1980, 1985) model of dietary lapse. 46 adults with insulin-dependent diabetes mellitus (IDDM) and 43 obese adults with non-IDDM were interviewed regarding dietary violations. Most episodes occurred in a limited range of high-risk situations. Although the 2 groups lapsed in similar situations, there was a tendency for the IDDM Ss to report a larger proportion of lapses in situations characterized by negative emotions. Approximately 27% of the lapses occurred when the S was busy with a competing activity or had no choice, and these lapses did not fit into Marlatt and Gordon's coding schema. Violations usually were errors of omission rather than errors of commission. Results suggest that most instances of nonadherence were intermittent lapses that did not develop into full-blown relapses.

17.
In previous research, the emotions associated with repressors' memorial representations were found to be more discrete than those associated with nonrepressors'. In each of the 3 experiments reported here, repressive discreteness was apparent in repressors' appraisals of emotional stimuli at the time they were encoded. In 1 experiment, Ss appraised individual facial expressions of emotion. Repressors judged the dominant emotions in these faces as no less intense than did nonrepressors, but they appraised the blend of nondominant emotions as less intense than did nonrepressors. In the remaining 2 experiments, Ss appraised crowds of emotional faces as well as crowds of geometric shapes. In both crowd experiments, the repressive discreteness was evident in appraisals of crowds of emotional faces but not in appraisals of crowds of geometric shapes. The repressive discreteness effect did not appear to reflect a general repressor–nonrepressor difference in the appraisal of stimulus features. Rather, the results suggested that repressive discreteness may be constrained to appraisals of emotions.

18.
A sudden visual onset is thought to 'attract attention to its location' within less than 100 ms. We attempted to measure the effect of this attentional process on the event-related potential (ERP) to a probe presented about 140 ms after the onset, and to delineate the spatiotemporal characteristics of such an effect, if any. ERPs were recorded from 30 channels from 6 subjects while they performed a target detection task. Both targets and probes could be located in each of the 4 quadrants (eccentricities 6.1 degrees and 7 degrees, respectively). For a given single target, the subsequent probe was either presented near the location of the target ('valid target') or at the diagonal opposite ('invalid target'). Appropriate 'neutral' conditions (probes preceded by no target, or by simultaneous targets in all quadrants) were applied, and ERPs to probes were corrected for the contribution of the ERPs to targets. The earliest effect of (in)validity was found at about 120 ms after probe onset for lower field probes. This effect consisted of enhanced posterior positivity for valid relative to neutral relative to invalid conditions. This positivity was superposed on a contralateral, extrastriate negative ongoing wave peaking at about 150 ms ('N150'). Source localization suggested that the (in)validity effects originate from deep medial parietal areas. The source corresponding to the N150 activity was not influenced by (in)validity. An earlier deflection to the probe at 80 ms ('NP80') depended on location, but not on (in)validity, and seemed to be of striate origin. Results are discussed in terms of a model postulating an attention-independent 'input module' from which activation is fed to a 'location module' embodying the actual attention mechanism.

19.
We frequently encounter groups of similar objects in our visual environment: a bed of flowers, a basket of oranges, a crowd of people. How does the visual system process such redundancy? Research shows that rather than code every element in a texture, the visual system favors a summary statistical representation of all the elements. The authors demonstrate that although it may facilitate texture perception, ensemble coding also occurs for faces, a level of processing well beyond that of textures. Observers viewed sets of faces varying in emotionality (e.g., happy to sad) and assessed the mean emotion of each set. Although observers retained little information about the individual set members, they had a remarkably precise representation of the mean emotion. Observers continued to discriminate the mean emotion accurately even when they viewed sets of 16 faces for 500 ms or less. Modeling revealed that perceiving the average facial expression in groups of faces was not due to noisy representation or noisy discrimination. These findings support the hypothesis that ensemble coding occurs extremely fast at multiple levels of visual analysis.

20.
Naïve theories of behavior hold that actions are caused by an agent's intentions, and the subsequent success of an action is measured by the satisfaction of those intentions. However, when an action is not as successful as intended, the expected causal link between intention and action may distort perception of the action itself. Four studies found evidence of an intention bias in perceptions of action. Actors perceived actions to be more successful when given a prior choice (e.g., choosing between 2 words to type) and also when they felt greater motivation for the action (e.g., hitting pictures of disliked people). When the intent was to fail (e.g., singing poorly), choice led to worse estimates of performance. A final experiment suggested that intention bias works independently of self-enhancement motives. In observing another actor hit pictures of Hillary Clinton and Barack Obama, shots were distorted to match the actor's intentions, even when it opposed personal wishes. Together these studies indicate that judgments of action may be automatically distorted and that these inferences arise from the expected consistency between intention and action in agency.
