Similar Literature
20 similar records found (search time: 441 ms)
1.
This research examined the relationship between individual differences in working memory capacity and the self-regulation of emotional expression and emotional experience. Four studies revealed that people higher in working memory capacity suppressed expressions of negative emotion (Study 1) and positive emotion (Study 2) better than did people lower in working memory capacity. Furthermore, compared to people lower in working memory capacity, people higher in capacity more capably appraised emotional stimuli in an unemotional manner and thereby experienced (Studies 3 and 4) and expressed (Study 4) less emotion in response to those stimuli. These findings indicate that cognitive ability contributes to the control of emotional responding. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Extant research suggests that targets' emotion expressions automatically evoke similar affect in perceivers. The authors hypothesized that the automatic impact of emotion expressions depends on group membership. In Experiments 1 and 2, an affective priming paradigm was used to measure immediate and preconscious affective responses to same-race or other-race emotion expressions. In Experiment 3, spontaneous vocal affect was measured as participants described the emotions of an ingroup or outgroup sports team fan. In these experiments, immediate and spontaneous affective responses depended on whether the emotional target was ingroup or outgroup. Positive responses to fear expressions and negative responses to joy expressions were observed in outgroup perceivers, relative to ingroup perceivers. In Experiments 4 and 5, discrete emotional responses were examined. In a lexical decision task (Experiment 4), facial expressions of joy elicited fear in outgroup perceivers, relative to ingroup perceivers. In contrast, facial expressions of fear elicited less fear in outgroup than in ingroup perceivers. In Experiment 5, felt dominance mediated emotional responses to ingroup and outgroup vocal emotion. These data support a signal-value model in which emotion expressions signal environmental conditions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Drawing upon the literatures on beliefs about magical contagion and property transmission, we examined people's belief in a novel mechanism of human-to-human contagion, emotional residue. This is the lay belief that people's emotions leave traces in the physical environment, which can later influence others or be sensed by others. Studies 1–4 demonstrated that Indians are more likely than Americans to endorse a lay theory of emotions as substances that move in and out of the body, and to claim that they can sense emotional residue. However, when the belief in emotional residue is measured implicitly, both Indians and Americans believe to a similar extent that emotional residue influences the moods and behaviors of those who come into contact with it (Studies 5–7). Both Indians and Americans also believe that closer relationships and a larger number of people yield more detectable residue (Study 8). Finally, Study 9 demonstrated that beliefs about emotional residue can influence people's behaviors. Together, these findings suggest that emotional residue is likely to be an intuitive concept, one that people in different cultures acquire even without explicit instruction. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

4.
One reason for the universal appeal of music lies in the emotional rewards that music offers to its listeners. But what makes these rewards so special? The authors addressed this question by progressively characterizing music-induced emotions in 4 interrelated studies. Studies 1 and 2 (n = 354) were conducted to compile a list of music-relevant emotion terms and to study the frequency of both felt and perceived emotions across 5 groups of listeners with distinct music preferences. Emotional responses varied greatly according to musical genre and type of response (felt vs. perceived). Study 3 (n = 801)--a field study carried out during a music festival--examined the structure of music-induced emotions via confirmatory factor analysis of emotion ratings, resulting in a 9-factor model of music-induced emotions. Study 4 (n = 238) replicated this model and found that it accounted for music-elicited emotions better than the basic emotion and dimensional emotion models. A domain-specific device to measure musically induced emotions is introduced--the Geneva Emotional Music Scale. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Three experiments examined the impact of incidental emotions on implicit intergroup evaluations. Experiment 1 demonstrated that for unknown social groups, two negative emotions that are broadly applicable to intergroup conflict (anger and disgust) both created implicit bias where none had existed before. However, for known groups about which perceivers had prior knowledge, emotions increased implicit prejudice only if the induced emotion was applicable to the outgroup stereotype. Disgust increased bias against disgust-relevant groups (e.g., homosexuals) but anger did not (Experiment 2); anger increased bias against anger-relevant groups (e.g., Arabs) but disgust did not (Experiment 3). Consistent with functional theories of emotion, these findings suggest that negative intergroup emotions signal specific types of threat. If the emotion-specific threat is applicable to prior expectations of a group, the emotion ratchets up implicit prejudice toward that group. However, if the emotion-specific threat is not applicable to the target group, evaluations remain unchanged. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Studies of emotion signaling inform claims about the taxonomic structure, evolutionary origins, and physiological correlates of emotions. Emotion vocalization research has tended to focus on a limited set of emotions: anger, disgust, fear, sadness, surprise, happiness, and for the voice, also tenderness. Here, we examine how well brief vocal bursts can communicate 22 different emotions: 9 negative (Study 1) and 13 positive (Study 2), and whether prototypical vocal bursts convey emotions more reliably than heterogeneous vocal bursts (Study 3). Results show that vocal bursts communicate emotions like anger, fear, and sadness, as well as seldom-studied states like awe, compassion, interest, and embarrassment. Ancillary analyses reveal family-wise patterns of vocal burst expression. Errors in classification were more common within emotion families (e.g., 'self-conscious,' 'pro-social') than between emotion families. The three studies reported highlight the voice as a rich modality for emotion display that can inform fundamental constructs about emotion. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Decoding facial expressions of emotion is an important aspect of social communication that is often impaired following psychiatric or neurological illness. However, little is known of the cognitive components involved in perceiving emotional expressions. Three dual task studies explored the role of verbal working memory in decoding emotions. Concurrent working memory load substantially interfered with choosing which emotional label described a facial expression (Experiment 1). A key factor in the magnitude of interference was the number of emotion labels from which to choose (Experiment 2). In contrast, the ability to decide that two faces represented the same emotion in a discrimination task was relatively unaffected by concurrent working memory load (Experiment 3). Different methods of assessing emotion perception make substantially different demands on working memory. Implications for clinical disorders that affect both working memory and emotion perception are considered. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Efficient navigation of our social world depends on the generation, interpretation, and combination of social signals within different sensory systems. However, the influence of healthy adult aging on multisensory integration of emotional stimuli remains poorly explored. This article comprises 2 studies that directly address issues of age differences on cross-modal emotional matching and explicit identification. The first study compared 25 younger adults (19–40 years) and 25 older adults (60–80 years) on their ability to match cross-modal congruent and incongruent emotional stimuli. The second study looked at performance of 20 younger (19–40) and 20 older adults (60–80) on explicit emotion identification when information was presented congruently in faces and voices or only in faces or in voices. In Study 1, older adults performed as well as younger adults on tasks in which congruent auditory and visual emotional information were presented concurrently, but there were age-related differences in matching incongruent cross-modal information. Results from Study 2 indicated that though older adults were impaired at identifying emotions from 1 modality (faces or voices alone), they benefited from congruent multisensory information as age differences were eliminated. The findings are discussed in relation to social, emotional, and cognitive changes with age. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
This exploratory study investigated the effects of terrorism on children’s ability to recognize emotions. A sample of 101 exposed and 102 nonexposed children (mean age = 11 years), balanced for age and gender, were assessed 20 months after a terrorist attack in Beslan, Russia. Two trials controlled for children’s ability to match a facial emotional stimulus with an emotional label and their ability to match an emotional label with an emotional context. The experimental trial evaluated the relation between exposure to terrorism and children’s free labeling of mixed emotion facial stimuli created by morphing between 2 prototypical emotions. Repeated measures analyses of covariance revealed that exposed children correctly recognized pure emotions. Four log-linear models were performed to explore the association between exposure group and category of answer given in response to different mixed emotion facial stimuli. Model parameters indicated that, compared with nonexposed children, exposed children (a) labeled facial expressions containing anger and sadness significantly more often than expected as anger, and (b) produced fewer correct answers in response to stimuli containing sadness as a target emotion. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Our purpose in the present meta-analysis was to examine the extent to which discrete emotions elicit changes in cognition, judgment, experience, behavior, and physiology; whether these changes are correlated as would be expected if emotions organize responses across these systems; and which factors moderate the magnitude of these effects. Studies (687; 4,946 effects, 49,473 participants) were included that elicited the discrete emotions of happiness, sadness, anger, and anxiety as independent variables with adults. Consistent with discrete emotion theory, there were (a) moderate differences among discrete emotions; (b) differences among discrete negative emotions; and (c) correlated changes in behavior, experience, and physiology (cognition and judgment were mostly not correlated with other changes). Valence, valence–arousal, and approach–avoidance models of emotion were not as clearly supported. There was evidence that these factors are likely important components of emotion but that they could not fully account for the pattern of results. Most emotion elicitations were effective, although the efficacy varied with the emotions being compared. Picture presentations were overall the most effective elicitor of discrete emotions. Stronger effects of emotion elicitations were associated with happiness versus negative emotions, self-reported experience, a greater proportion of women (for elicitations of happiness and sadness), omission of a cover story, and participants alone versus in groups. Conclusions are limited by the inclusion of only some discrete emotions, exclusion of studies that did not elicit discrete emotions, few available effect sizes for some contrasts and moderators, and the methodological rigor of included studies. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

11.
The authors used connectionist modeling to extend previous research on emotion overgeneralization effects. Study 1 demonstrated that neutral expression male faces objectively resemble angry expressions more than female faces do, female faces objectively resemble surprise expressions more than male faces do, White faces objectively resemble angry expressions more than Black or Korean faces do, and Black faces objectively resemble happy and surprise expressions more than White faces do. Study 2 demonstrated that objective resemblance to emotion expressions influences trait impressions even when statistically controlling possible confounding influences of attractiveness and babyfaceness. It further demonstrated that emotion overgeneralization is moderated by face race and that racial differences in emotion resemblance contribute to White perceivers’ stereotypes of Blacks and Asians. These results suggest that intergroup relations may be strained not only by cultural stereotypes but also by adaptive responses to emotion expressions that are overgeneralized to groups whose faces subtly resemble particular emotions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Metacognitive emotion regulation strategies involve deliberately changing thoughts or goals to alleviate negative emotions. Adults commonly engage in this type of emotion regulation, but little is known about the developmental roots of this ability. Two studies were designed to assess whether 5- and 6-year-old children can generate such strategies and, if so, the types of metacognitive strategies they use. In Study 1, children described how story protagonists could alleviate negative emotions. In Study 2, children recalled times that they personally had felt sad, angry, and scared and described how they had regulated their emotions. In contrast to research suggesting that young children cannot use metacognitive regulation strategies, the majority of children in both studies described such strategies. Children were surprisingly sophisticated in their suggestions for how to cope with negative emotions and tailored their regulatory responses to specific emotional situations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Emotion theorists have long debated whether valence, which ranges from pleasant to unpleasant states, is an irreducible aspect of the experience of emotion or whether positivity and negativity are separable in experience. If valence is irreducible, it follows that people cannot feel happy and sad at the same time. Conversely, if positivity and negativity are separable, people may be able to experience such mixed emotions. The authors tested several alternative interpretations for prior evidence that happiness and sadness can co-occur in bittersweet situations (i.e., those containing both pleasant and unpleasant aspects). One possibility is that subjects who reported mixed emotions merely vacillated between happiness and sadness. The authors tested this hypothesis in Studies 1–3 by asking subjects to complete online continuous measures of happiness and sadness. Subjects reported more simultaneously mixed emotions during a bittersweet film clip than during a control clip. Another possibility is that subjects in earlier studies reported mixed emotions only because they were explicitly asked whether they felt happy and sad. The authors tested this hypothesis in Studies 4–6 with open-ended measures of emotion. Subjects were more likely to report mixed emotions after the bittersweet clip than the control clip. Both patterns occurred even when subjects were told that they were not expected to report mixed emotions (Studies 2 and 5) and among subjects who did not previously believe that people could simultaneously feel happy and sad (Studies 3 and 6). These results provide further evidence that positivity and negativity are separable in experience. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

14.
The results of 5 experiments indicate that people report more intense emotions during anticipation of, than during retrospection about, emotional events that were positive (Thanksgiving Day), negative (annoying noises, menstruation), routine (menstruation), and hypothetical (all-expenses-paid ski vacation). People's tendency to report more intense emotion during anticipation than during retrospection was associated with a slight, but only occasionally significant, tendency for people to expect future emotions to be more intense than they remembered past emotions having been. The greater evocativeness of anticipation than retrospection was also associated with and statistically mediated by participants' tendency to report mentally simulating future emotional events more extensively than they report mentally simulating past emotional events. The conclusion that anticipation is more evocative than retrospection has implications for research methodology, clinical practice, decision making, and well-being. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
16.
The authors examined similarities and differences between (1) listeners’ perceptions of emotions conveyed by 30-s pieces of music and (2) their emotional responses to the same pieces. Using identical scales, listeners rated how happy and how sad the music made them feel, and the happiness and the sadness expressed by the music. The music was manipulated to vary in tempo (fast or slow) and mode (major or minor). Feeling and perception ratings were highly correlated but perception ratings were higher than feeling ratings, particularly for music with consistent cues to happiness (fast-major) or sadness (slow-minor), and for sad-sounding music in general. Associations between the music manipulations and listeners’ feelings were mediated by their perceptions of the emotions conveyed by the music. Happiness ratings were elevated for fast-tempo and major-key stimuli, sadness ratings were elevated for slow-tempo and minor-key stimuli, and mixed emotional responses (higher happiness and sadness ratings) were elevated for music with mixed cues to happiness and sadness (fast-minor or slow-major). Listeners also exhibited ambivalence toward sad-sounding music. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Appraisal theories of emotion hold that it is the way a person interprets a situation--rather than the situation itself--that gives rise to one emotion rather than another emotion (or no emotion at all). Unfortunately, most prior tests of this foundational hypothesis have simultaneously varied situations and appraisals, making an evaluation of this assumption difficult. In the present study, participants responded to a standardized laboratory situation with a variety of different emotions. Appraisals predicted the intensity of individual emotions across participants. In addition, subgroups of participants with similar emotional response profiles made comparable appraisals. Together, these findings suggest that appraisals may be necessary and sufficient to determine different emotional reactions toward a particular situation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Appraisal theories of emotion propose that the emotions people experience correspond to their appraisals of their situation. In other words, individual differences in emotional experiences reflect differing interpretations of the situation. We hypothesized that in similar situations, people in individualist and collectivist cultures experience different emotions because of culturally divergent causal attributions for success and failure (i.e., agency appraisals). In a test of this hypothesis, American and Japanese participants recalled a personal experience (Study 1) or imagined themselves to be in a situation (Study 2) in which they succeeded or failed, and then reported their agency appraisals and emotions. Supporting our hypothesis, cultural differences in emotions corresponded to differences in attributions. For example, in success situations, Americans reported stronger self-agency emotions (e.g., proud) than did Japanese, whereas Japanese reported a stronger situation-agency emotion (lucky). Also, cultural differences in attribution and emotion were largely explained by differences in self-enhancing motivation. When Japanese and Americans were induced to make the same attribution (Study 2), cultural differences in emotions became either nonsignificant or were markedly reduced. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

19.
Gaze direction influences younger adults' perception of emotional expressions, with direct gaze enhancing the perception of anger and joy and averted gaze enhancing the perception of fear. Age-related declines in emotion recognition and eye-gaze processing have been reported, indicating that there may be age-related changes in the ability to integrate these facial cues. As there is evidence of a positivity bias with age, age-related difficulties integrating these cues may be greatest for negative emotions. The present research investigated age differences in the extent to which gaze direction influenced explicit perception (e.g., anger, fear and joy; Study 1) and social judgments (e.g., of approachability; Study 2) of emotion faces. Gaze direction did not influence the perception of fear in either age group. In both studies, age differences were found in the extent to which gaze direction influenced judgments of angry and joyful faces, with older adults showing less integration of gaze and emotion cues than younger adults. Age differences were greatest when interpreting angry expressions. Implications of these findings for older adults' social functioning are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
The most familiar emotional signals consist of faces, voices, and whole-body expressions, but so far research on emotions expressed by the whole body is sparse. The authors investigated recognition of whole-body expressions of emotion in three experiments. In the first experiment, participants performed a body expression-matching task. Results indicate good recognition of all emotions, with fear being the hardest to recognize. In the second experiment, two-alternative forced-choice categorizations of the facial expression of a compound face-body stimulus were strongly influenced by the bodily expression. This effect was a function of the ambiguity of the facial expression. In the third experiment, recognition of emotional tone of voice was similarly influenced by task-irrelevant emotional body expressions. Taken together, the findings illustrate the importance of emotional whole-body expressions in communication either when viewed on their own or, as is often the case in realistic circumstances, in combination with facial expressions and emotional voices. (PsycINFO Database Record (c) 2010 APA, all rights reserved)


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)