Similar Articles
20 similar articles found (search time: 31 ms)
1.
Human face perception is a finely tuned, specialized process. When comparing faces between species, therefore, it is essential to consider how people make these observational judgments. Comparing facial expressions may be particularly problematic, given that people tend to consider them categorically as emotional signals, which may affect how accurately specific details are processed. The bared-teeth display (BT), observed in most primates, has been proposed as a homologue of the human smile (J. A. R. A. M. van Hooff, 1972). In this study, judgments of similarity between BT displays of chimpanzees (Pan troglodytes) and human smiles varied in relation to perceived emotional valence. When a chimpanzee BT was interpreted as fearful, observers tended to underestimate the magnitude of the relationship between certain features (the extent of lip corner raise) and human smiles. These judgments may reflect the combined effects of categorical emotional perception, configural face processing, and perceptual organization in mental imagery and may demonstrate the advantages of using standardized observational methods in comparative facial expression research. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
The Chimpanzee Facial Action Coding System (ChimpFACS) is an objective, standardized observational tool for measuring facial movement in chimpanzees based on the well-known human Facial Action Coding System (FACS; P. Ekman & W. V. Friesen, 1978). This tool enables direct structural comparisons of facial expressions between humans and chimpanzees in terms of their common underlying musculature. Here the authors provide data on the first application of the ChimpFACS to validate existing categories of chimpanzee facial expressions using discriminant function analyses. The ChimpFACS validated most existing expression categories (6 of 9) and, where the predicted group memberships were poor, the authors discuss potential problems with ChimpFACS and/or existing categorizations. The authors also report the prototypical movement configurations associated with these 6 expression categories. For all expressions, unique combinations of muscle movements were identified, and these are illustrated as peak intensity prototypical expression configurations. Finally, the authors suggest a potential homology between these prototypical chimpanzee expressions and human expressions based on structural similarities. These results contribute to our understanding of the evolution of emotional communication by suggesting several structural homologies between the facial expressions of chimpanzees and humans and facilitating future research. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
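
For readers unfamiliar with how category validation via discriminant analysis works, the sketch below shows one way such an analysis could be set up in Python. The action-unit names, expression categories, and coding data are invented placeholders, and scikit-learn's linear discriminant analysis stands in for the discriminant functions used by the authors; this is an illustration of the general approach, not the authors' analysis.

```python
# Illustrative sketch: testing whether expression categories can be recovered from
# facial action-unit codes with a linear discriminant analysis.
# The action units, categories, and data below are invented for demonstration only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
action_units = ["AU10", "AU12", "AU16", "AU22", "AU25", "AU26"]  # hypothetical subset
categories = ["bared-teeth", "play face", "scream"]              # hypothetical labels

# Fake coding data: rows are observed expressions, columns are AU presence (0/1).
X = rng.integers(0, 2, size=(90, len(action_units)))
y = rng.choice(categories, size=90)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)  # predicted group-membership accuracy
print(f"cross-validated classification accuracy: {scores.mean():.2f}")

# "Prototypical configuration" per category: the AUs most frequently present in each group.
for cat in categories:
    mean_profile = X[y == cat].mean(axis=0)
    top = [au for au, m in zip(action_units, mean_profile) if m > 0.5]
    print(cat, "->", top)
```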

3.
The study of the spontaneous expressions of blind individuals offers a unique opportunity to understand basic processes concerning the emergence and source of facial expressions of emotion. In this study, the authors compared the expressions of congenitally and noncongenitally blind athletes in the 2004 Paralympic Games with each other and with those produced by sighted athletes in the 2004 Olympic Games. The authors also examined how expressions change from 1 context to another. There were no differences between congenitally blind, noncongenitally blind, and sighted athletes, either at the level of individual facial actions or in facial emotion configurations. Blind athletes did produce more overall facial activity, but this additional activity was confined to head and eye movements. The blind athletes' expressions differentiated whether they had won or lost a medal match at 3 different points in time, and there were no cultural differences in expression. These findings provide compelling evidence that the production of spontaneous facial expressions of emotion is not dependent on observational learning but simultaneously demonstrate a learned component in the social management of expressions, even among blind individuals. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on and extends this work. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion–identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Examined intermodal perception of vocal and facial expressions in 2 experiments with 16 5- and 16 7-mo-olds. Two filmed facial expressions were presented with a single vocal expression characteristic of 1 of the facial expressions (angry or happy). The lower third of each face was obscured, so Ss could not simply match lip movements to the voice. Overall findings indicate that only 7-mo-olds increased their fixation to a facial expression when it was sound-specified. Older infants evidently detected information that was invariant across the presentations of a single affective expression, despite degradation of temporal synchrony information. The 5-mo-olds' failure to look differentially is explained by the possibilities that (1) 5-mo-olds may need to see the whole face for any discrimination of expressions to occur; (2) they cannot discriminate films of happy and angry facial expressions even with the full face available; or (3) they rely heavily on temporal information for the discrimination of facial expressions and/or the intermodal perception of bimodally presented expressions, although not for articulatory patterns. Preferences for a particular expression were not found: Infants did not look longer at the happy or the angry facial expression, independent of the sound manipulation, suggesting that preferences for happy expressions found in prior studies may rest on attention to the "toothy" smile. (25 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Objective: Individuals with schizophrenia have difficulty interpreting social and emotional cues such as facial expression, gaze direction, body position, and voice intonation. Nonverbal cues are powerful social signals but are often processed implicitly, outside the focus of attention. The aim of this research was to assess implicit processing of social cues in individuals with schizophrenia. Method: Patients with schizophrenia or schizoaffective disorder and matched controls performed a primary task of word classification with social cues in the background. Participants were asked to classify target words (LEFT/RIGHT) by pressing a key that corresponded to the word, in the context of facial expressions with eye gaze averted to the left or right. Results: Although facial expression and gaze direction were irrelevant to the task, these facial cues influenced word classification performance. Participants were slower to classify target words (e.g., LEFT) that were incongruent to gaze direction (e.g., eyes averted to the right) compared to target words (e.g., LEFT) that were congruent to gaze direction (e.g., eyes averted to the left), but this only occurred for expressions of fear. This pattern did not differ for patients and controls. Conclusion: The results showed that threat-related signals capture the attention of individuals with schizophrenia. These data suggest that implicit processing of eye gaze and fearful expressions is intact in schizophrenia. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
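
As an illustration of how a gaze-target congruency effect of this kind could be quantified from trial-level data, here is a minimal pandas sketch. The column names and reaction times are invented; the study's actual design and analysis may differ.

```python
# Illustrative sketch: computing a gaze-target congruency effect on reaction times,
# separately for each background facial expression. Data and column names are invented.
import pandas as pd

trials = pd.DataFrame({
    "expression": ["fear", "fear", "fear", "fear", "neutral", "neutral", "neutral", "neutral"],
    "gaze":       ["left", "left", "right", "right", "left", "left", "right", "right"],
    "target":     ["LEFT", "RIGHT", "LEFT", "RIGHT", "LEFT", "RIGHT", "LEFT", "RIGHT"],
    "rt_ms":      [512, 561, 570, 518, 530, 534, 529, 533],
})

# A trial is congruent when the target word matches the gaze direction.
trials["congruent"] = trials["gaze"].str.upper() == trials["target"]
effect = trials.groupby(["expression", "congruent"])["rt_ms"].mean().unstack()
effect["congruency_cost_ms"] = effect[False] - effect[True]  # incongruent minus congruent
print(effect)
```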

7.
Although many psychological models suggest that human beings are invariably motivated to avoid negative stimuli, more recent theories suggest that people are frequently motivated to approach angering social challenges in order to confront and overcome them. To examine these models, the current investigation sought to determine whether angry facial expressions potentiate approach-motivated motor behaviors. Across 3 studies, individuals were faster to initiate approach movements toward angry facial expressions than to initiate avoidance movements away from such facial expressions. This approach advantage differed significantly from participants’ responses to both emotionally neutral (Studies 1 & 3) and fearful (Study 2) facial expressions. Furthermore, this pattern was most apparent when physical approach appeared to be effective in overcoming the social challenge posed by angry facial expressions (Study 3). The results are discussed in terms of the processes underlying anger-related approach motivation and the conditions under which they are likely to arise. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Impaired facial expression recognition has been associated with features of major depression, which could underlie some of the difficulties in social interactions in these patients. Patients with major depressive disorder and age- and gender-matched healthy volunteers judged the emotion of 100 facial stimuli displaying different intensities of sadness and happiness and neutral expressions presented for short (100 ms) and long (2,000 ms) durations. Compared with healthy volunteers, depressed patients demonstrated subtle impairments in discrimination accuracy and a predominant bias against identifying mildly happy expressions as happy. The authors suggest that, in depressed patients, the inability to accurately identify subtle changes in facial expression displayed by others in social situations may underlie their impaired interpersonal functioning. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Very few large-scale studies have focused on emotional facial expression recognition (FER) in 3-year-olds, an age of rapid social and language development. We studied FER in 808 healthy 3-year-olds using verbal and nonverbal computerized tasks for four basic emotions (happiness, sadness, anger, and fear). Three-year-olds showed differential performance on the verbal and nonverbal FER tasks, especially with respect to fear. That is, fear was among the most accurately recognized facial expressions when matched nonverbally and the least accurately recognized when labeled verbally. Sex did not influence emotion-matching or emotion-labeling performance after adjusting for basic matching or labeling ability. Three-year-olds made systematic errors in emotion-labeling. Namely, happy expressions were often confused with fearful expressions, whereas negative expressions were often confused with other negative expressions. Together, these findings suggest that 3-year-olds' FER skills strongly depend on task specifications. Importantly, fear was the facial expression most sensitive to these task differences. Finally, in line with previous studies, we found that recognized emotion categories are initially broad, including emotions of the same valence, as reflected in the nonrandom errors of 3-year-olds. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
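
One common way to expose such systematic labeling errors is a confusion matrix of displayed versus chosen emotions. The sketch below illustrates the idea with invented responses; it is not the study's data or analysis pipeline.

```python
# Illustrative sketch: tabulating which emotion labels children give to each displayed
# expression as a row-normalized confusion matrix. Responses are invented placeholders.
import pandas as pd

responses = pd.DataFrame({
    "shown":   ["happy", "happy", "fear", "fear", "sad", "anger", "anger", "sad"],
    "labeled": ["happy", "fear",  "fear", "happy", "anger", "anger", "sad", "sad"],
})
confusion = pd.crosstab(responses["shown"], responses["labeled"], normalize="index")
print(confusion.round(2))  # each row shows the proportion of labels given to that expression
```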

10.
Although positive and negative images enhance the visual processing of young adults, recent work suggests that a life-span shift in emotion processing goals may lead older adults to avoid negative images. To examine this tendency for older adults to regulate their intake of negative emotional information, the current study investigated age-related differences in the perceptual boost received by probes appearing over facial expressions of emotion. Visually evoked event-related potentials were recorded from the scalp over cortical regions associated with visual processing as a probe appeared over facial expressions depicting anger, sadness, happiness, or no emotion. The activity of the visual system in response to each probe was operationalized as the P1 component of the event-related potential evoked by the probe. For young adults, the visual system was more active (i.e., greater P1 amplitude) when the probes appeared over any of the emotional facial expressions. However, for older adults, the visual system displayed reduced activity when the probe appeared over angry facial expressions. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
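
To make the P1 measure concrete, the following sketch shows one way a probe-locked P1 amplitude could be extracted from epoched EEG data. The sampling rate, channel selection, 80-130 ms window, and the random "epochs" array are all assumptions for illustration, not the recording parameters of the study.

```python
# Illustrative sketch: quantifying the probe-evoked P1 as the mean amplitude in an
# assumed 80-130 ms post-probe window over occipital channels. The epoch array and
# parameters below are placeholders, not the study's recording setup.
import numpy as np

fs = 500                                 # sampling rate in Hz (assumed)
epoch_start = -0.2                       # epoch begins 200 ms before probe onset
epochs = np.random.randn(40, 2, 350)     # trials x occipital channels x samples (fake data)

erp = epochs.mean(axis=(0, 1))                    # average over trials and channels
times = epoch_start + np.arange(erp.size) / fs    # time axis in seconds
p1_window = (times >= 0.080) & (times <= 0.130)
p1_amplitude = erp[p1_window].mean()
print(f"P1 mean amplitude: {p1_amplitude:.3f} (arbitrary units)")
```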

11.
The present study was designed to examine the operation of depression-specific biases in the identification or labeling of facial expressions of emotion. Participants diagnosed with major depression or social phobia and control participants were presented with faces that expressed increasing degrees of emotional intensity, slowly changing from a neutral to a full-intensity happy, sad, or angry expression. The authors assessed individual differences in the intensity of facial expression of emotion that was required for the participants to accurately identify the emotion being expressed. The depressed participants required significantly greater intensity of emotion than did the social phobic and the control participants to correctly identify happy expressions and less intensity to identify sad than angry expressions. In contrast, social phobic participants needed less intensity to correctly identify the angry expressions than did the depressed and control participants and less intensity to identify angry than sad expressions. Implications of these results for interpersonal functioning in depression and social phobia are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
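
A simple way to operationalize the required intensity is the lowest morph step at which a participant first labels the emotion correctly. The sketch below illustrates this with invented trial data; the authors' actual scoring procedure may differ.

```python
# Illustrative sketch: estimating the lowest morph intensity at which an emotion is
# correctly identified, per participant and emotion. Trial data are invented placeholders.
import pandas as pd

trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "emotion":     ["happy"] * 4 + ["sad"] * 4,
    "intensity":   [10, 20, 30, 40, 10, 20, 30, 40],   # percent of full expression
    "correct":     [0, 0, 1, 1, 0, 1, 1, 1],
})

thresholds = (trials[trials["correct"] == 1]
              .groupby(["participant", "emotion"])["intensity"]
              .min()
              .rename("identification_threshold_pct"))
print(thresholds)
```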

12.
Humans have the ability to replicate the emotional expressions of others even when they undergo different emotions. Such distinct expressive responses, especially positive ones, play a central role in the everyday social communication of humans and may give the responding individuals important advantages in cooperation and communication. The present work examined laughter in chimpanzees to test whether nonhuman primates also use their expressions in such distinct ways. The approach was first to examine the form and occurrence of laugh replications (laughter after the laughter of others) and spontaneous laughter of chimpanzees during social play and then to test whether their laugh replications represented laugh-elicited laugh responses (laughter triggered by the laughter of others) by using a quantitative method designed to measure responses in natural social settings. The results of this study indicated that chimpanzees produce laugh-elicited laughter that is distinct in form and occurrence from their spontaneous laughter. These findings provide the first empirical evidence that nonhuman primates have the ability to replicate the expressions of others by producing expressions that differ in their underlying emotions and social implications. The data further showed that the laugh-elicited laugh responses of the subjects were closely linked to play maintenance, suggesting that chimpanzees might gain important cooperative and communicative advantages by responding with laughter to the laughter of their social partners. Notably, some chimpanzee groups in this study responded more with laughter than others, an outcome that provides empirical support for a socialization of expressions in great apes similar to that of humans. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

13.
For clear and unambiguous social categories, person perception occurs quite accurately from minimal cues. This article addresses the perception of an ambiguous social category (male sexual orientation) from minimal cues. Across 5 studies, the authors examined individuals' actual and self-assessed accuracy when judging male sexual orientation from faces and facial features. Although participants were able to make accurate judgments from multiple facial features (i.e., hair, the eyes, and the mouth area), their perceived accuracy was calibrated with their actual accuracy only when making judgments based on hairstyle, a controllable feature. These findings suggest different processes for extracting social category information during perception: explicit judgments based on obvious cues (hairstyle) and intuitive judgments based on nonobvious cues (information from the eyes and mouth area). Differences in the accuracy of judgments based on targets' controllability of cues and perceivers' awareness of those cues provide insight into the processes underlying intuitive predictions and intuitive judgments. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
A broader understanding of the neural basis of social behavior in primates requires the use of species-specific stimuli that elicit spontaneous but reproducible and tractable behaviors. In this context of natural behaviors, individual variation can provide further information about the factors that influence social interactions. To approximate natural social interactions similar to those documented by field studies, we used unedited video footage to induce in viewer monkeys spontaneous facial expressions and looking patterns in the laboratory setting. Three adult male monkeys (Macaca mulatta), previously behaviorally and genetically (5-HTTLPR) characterized, were monitored while they watched 10 s video segments depicting unfamiliar monkeys (movie monkeys) displaying affiliative, neutral, and aggressive behaviors. The gaze and head orientation of the movie monkeys alternated between “averted” from and “directed” at the viewer. The viewers were not reinforced for watching the movies; thus, their looking patterns indicated their interest and social engagement with the stimuli. The behavior of the movie monkey accounted for differences in the looking patterns and facial expressions displayed by the viewers. We also found multiple significant differences in the behavior of the viewers that correlated with their interest in these stimuli. These socially relevant dynamic stimuli elicited spontaneous social behaviors, such as eye-contact-induced reciprocation of facial expression, gaze aversion, and gaze following, that were previously not observed in response to static images. This approach opens a unique opportunity for understanding the mechanisms that trigger spontaneous social behaviors in humans and nonhuman primates. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

15.
We present an overview of a new multidisciplinary research program that focuses on haptic processing of human facial identity and facial expressions of emotion. A series of perceptual and neuroscience experiments with live faces and/or rigid three-dimensional facemasks is outlined. To date, several converging methodologies have been adopted: behavioural experimental studies with neurologically intact participants, neuropsychological behavioural research with prosopagnosic individuals, and neuroimaging studies using fMRI techniques. In each case, we have asked what would happen if the hands were substituted for the eyes. We confirm that humans can haptically determine both identity and facial expressions of emotion in facial displays at levels well above chance. Clearly, face processing is a bimodal phenomenon. The processes and representations that underlie such patterns of behaviour are also considered. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
The effects of focal brain lesions on the decoding of emotional concepts in facial expressions were investigated. Facial emotions are hierarchically organized patterns comprising (1) structural surface features, (2) discrete (primary) emotional categories, and (3) secondary dimensions, such as valence and arousal. Categorical decoding was measured by (1) selection of category labels and selection of the named emotion category and (2) matching one facial expression with two choice expressions. Dimensional decoding was assessed by matching one face with two different expressions with regard to valence or arousal. Seventy patients with well-documented cerebral lesions and 15 matched hospital controls participated in the study. Twenty-seven had left brain damage (LBD; 10 frontal, 10 temporal, 7 parietal); 37 had right brain damage (RBD; 15 frontal, 11 temporal, 11 parietal). Six additional patients had lesions involving both frontal lobes. Patients with right temporal and parietal lesions were markedly impaired in the decoding of primary emotions. The same patients also showed reduced arousal decoding. In contrast to several patients with frontal and left hemisphere lesions, emotional conceptualization and face discrimination were not independent in these groups. No group differences were observed in valence decoding. However, right frontal lesions appeared to interfere with the discrimination of negative valence. Moreover, a distraction by structural features was noted in RBD patients when facial identities were varied across stimulus and response pictures in matching tasks with differing conceptual load. Our results suggest that focal brain lesions differentially affect the comprehension of emotional meaning in faces, depending on the level of conceptual load and the interference of structural surface features.

17.
This study investigates the discrimination accuracy of emotional stimuli in subjects with major depression compared with healthy controls using photographs of facial expressions of varying emotional intensities. The sample included 88 unmedicated male and female subjects, aged 18–56 years, with major depressive disorder (n = 44) or no psychiatric illness (n = 44), who judged the emotion of 200 facial pictures displaying an expression between 10% (90% neutral) and 80% (nuanced) emotion. Stimuli were presented in 10% increments to generate a range of intensities, each presented for a 500-ms duration. Compared with healthy volunteers, depressed subjects showed very good recognition accuracy for sad faces but impaired recognition accuracy for other emotions (e.g., harsh, surprise, and sad expressions) of subtle emotional intensity. Recognition accuracy improved for both groups as a function of increased intensity for all emotions. Finally, as depressive symptoms increased, recognition accuracy increased for sad faces but decreased for surprised faces. Moreover, depressed subjects showed an impaired ability to accurately identify subtle facial expressions, indicating that depressive symptoms influence the accuracy of emotion recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
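
The relation between morph intensity and recognition accuracy reported here is often summarized by fitting a psychometric (logistic) curve to per-intensity accuracy. The sketch below shows such a fit on invented group means; it is illustrative only and not the analysis used in the study.

```python
# Illustrative sketch: summarizing how recognition accuracy grows with expression
# intensity by fitting a logistic curve to per-intensity accuracy. Values are invented.
import numpy as np
from scipy.optimize import curve_fit

intensity = np.array([10, 20, 30, 40, 50, 60, 70, 80])                  # percent emotion
accuracy  = np.array([0.15, 0.25, 0.40, 0.55, 0.70, 0.82, 0.90, 0.94])  # fake group means

def logistic(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, intensity, accuracy, p0=[40, 0.1])
print(f"intensity at 50% accuracy: {x0:.1f}%, slope: {k:.3f}")
```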

18.
Emotion information processing may occur in 2 modes that are differently represented in conscious awareness. Fast online processing involves coarse-grained analysis of salient features and is not represented in conscious awareness; offline processing takes hundreds of milliseconds to generate a fine-grained analysis and is represented in conscious awareness. These processing modes may be studied using event-related electroencephalogram theta synchronization as a marker of emotion processing. Two experiments were conducted that differed in the mode of emotional information presentation. In the explicit mode, subjects were explicitly instructed to evaluate the emotional content of presented stimuli; in the implicit mode, their attention was directed to other features of the stimulus. In the implicit mode, theta synchronization was most pronounced in the early processing stage, whereas in the explicit mode, it was more pronounced in the late processing stage. The early processing stage was more pronounced in men, whereas the late processing stage was more pronounced in women. Implications of these gender differences in emotion processing for well-documented differences in social behavior are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
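
For orientation, event-related theta synchronization is typically quantified as a stimulus-locked increase in 4-8 Hz power relative to a pre-stimulus baseline. The sketch below illustrates this on a simulated signal, contrasting an early and a late post-stimulus window; the window boundaries, filter settings, and data are assumptions, not the study's parameters.

```python
# Illustrative sketch: event-related theta (4-8 Hz) synchronization as a change in band
# power relative to a pre-stimulus baseline, compared between an early and a late
# post-stimulus window. The signal below is simulated, not study data.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250
t = np.arange(-0.5, 1.0, 1 / fs)                 # one epoch, stimulus onset at t = 0
eeg = np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * 6 * t) * (t > 0)

b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
theta_power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2   # instantaneous theta power

baseline = theta_power[t < 0].mean()
early = theta_power[(t >= 0.0) & (t < 0.3)].mean() / baseline
late  = theta_power[(t >= 0.3) & (t < 0.8)].mean() / baseline
print(f"theta power vs baseline - early: {early:.2f}x, late: {late:.2f}x")
```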

19.
In Exp I, 4 groups of 9 infants each (mean ages 2.1, 4.2, 8.1, and 19.2 mo) were videotaped as they received a diphtheria–pertussis–tetanus inoculation. Their facial movements for the 1st 10 sec following needle penetration were coded and analyzed. There was no relationship between expressions of affect and sex or social class. Pain produced (a) a distinct distress expression, whose prominence as an immediate response to pain decreased with age, and (b) the anger expression, whose prominence as an immediate response increased with age. In Exp II, the indices of facial affect signals derived from the entire period from needle penetration to soothing were analyzed for 18 Ss from Exp I. There were no effects of sex on soothing time or total time each affect was expressed. Ss above and below the median on the ability to be soothed differed significantly in soothing time and in duration and pattern of affect expressions. In particular, slow soothers showed a proportionately greater duration of anger expression than fast soothers. The distress and anger expressions changed with age. (17 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Fifty children and adolescents with ADHD were tested for their ability to recognize the 6 basic facial expressions of emotion depicted in Ekman and Friesen's normed photographs. Subjects were presented with sets of 6 photographs of faces, each portraying a different basic emotion, and stories portraying those emotions were read to them. After each story, the subject was asked to point to the photograph in the set that depicted the emotion described. Overall, the children correctly identified the emotions on 74% of the presentations. The highest level of accuracy in recognition was for happiness, followed by sadness, with fear being the emotional expression that was mistaken most often. When compared with findings from children in the general population, the children with ADHD showed deficits in their ability to accurately recognize facial expressions of emotion. These findings have important implications for the remediation of social skill deficits commonly seen in children with ADHD.
