Similar Articles
1.
There is a long history of attempts to explain why music is perceived as expressing emotion. The relationship between pitches serves as an important cue for conveying emotion in music. The musical interval referred to as the minor third is generally thought to convey sadness. We reveal that the minor third also occurs in the pitch contour of speech conveying sadness. Bisyllabic speech samples conveying four emotions were recorded by 9 actresses. Acoustic analyses revealed that the relationship between the 2 salient pitches of the sad speech samples tended to approximate a minor third. Participants rated the speech samples for perceived emotion, and the use of numerous acoustic parameters as cues for emotional identification was modeled using regression analysis. The minor third was the most reliable cue for identifying sadness. Additional participants rated musical intervals for emotion, and their ratings verified the historical association between the musical minor third and sadness. These findings support the theory that human vocal expressions and music share an acoustic code for communicating sadness. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
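The interval between two pitches can be expressed in equal-tempered semitones from their fundamental frequencies; a minor third is 3 semitones, a frequency ratio of about 1.189. The sketch below illustrates this arithmetic with hypothetical F0 values (not data from the study):

```python
import math

def interval_in_semitones(f0_high: float, f0_low: float) -> float:
    """Interval between two pitches in equal-tempered semitones (12 per octave)."""
    return 12 * math.log2(f0_high / f0_low)

# Hypothetical F0 values (Hz) for the two salient pitches of a sad utterance.
# A minor third spans 3 semitones, i.e., a ratio of 2**(3/12) ~= 1.189.
high, low = 220.0, 185.0
print(round(interval_in_semitones(high, low), 2))  # ~3.0: approximately a minor third
```

The same computation, run on measured pitch pairs, is one way the "tended to approximate a minor third" claim could be checked.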

2.
Studied the degree of familiarity, the age of acquisition, and verbal associations with regard to 144 musical excerpts drawn from the repertory of tunes that is expected to be shared by all French-speaking Quebec university students. Human Ss: 60 normal male and female Canadian adults (aged 21–45 yrs) (university students) (Group 1). 60 normal male and female Canadian adults (aged 19–43 yrs) (university students) (Group 2). The excerpts were presented to all Ss in synthesized, monophonic recordings. Ss in Group 1 were asked to indicate their degree of familiarity with each excerpt and the age at which they learned the excerpt. Ss in Group 2 were asked to indicate whether the original tune was vocal or instrumental and to provide verbal associations for the excerpts. The degree of familiarity, developmental period of acquisition, frequency of verbal associations, and dominant verbal association were determined for each excerpt. (English abstract) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Recorded 20 adult male French Canadians of various social status as they read a short passage. Their voices were played to judges who rated the speakers on semantic-differential type rating scales and estimated their social status levels. Results support earlier studies in that judges could recognize social status with a level of accuracy comparable to a correlation of .80+ between estimated and actual social status. However, judges were nearly as accurate in judging social status from vocal qualities alone as when both vocal and verbal (content) aspects of speech were free to vary. This level of accuracy was entirely attributable to the ability of judges to discriminate white-collar workers as a whole from blue-collar workers as a whole. No smaller subgroups were discriminable. In a 2nd study, the same French Canadian tapes were played to 26 English-speaking college Ss who knew no French. The accuracy of these judges was on a level comparable to a correlation of about .70, suggesting that the majority of the vocal cues by which social status is discriminated are general across at least these 2 languages. (French summary) (20 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Judging emotion from the nonverbal properties of speech requires elimination of verbal cues. 3 methods of doing this are investigated: (a) a constant, ambiguous set of words for various emotional expressions, (b) filtering out the frequencies which permit word recognition, (c) speech in a language unknown to the listener. 7 actors portrayed the emotions, which were judged by 27 Ss, under all 3 conditions. Constant verbal content virtually requires artificially prepared situations. Filtered speech judgments depend partially on different individual differences from judgments of normal speech. Foreign speech (here, Japanese) may have different nonverbal cues from English. (16 ref.) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Examined whether spontaneous facial expressions provide observers with sufficient information to distinguish accurately which of 7 affective states (6 emotional and 1 neutral) is being experienced by another person. Six undergraduate senders' facial expressions were covertly videotaped as they watched emotionally loaded slides. After each slide, senders nominated the emotion term that best described their affective reaction and also rated the pleasantness and strength of that reaction. Similar nominations of emotion terms and ratings were later made by 53 undergraduate receivers who viewed the senders' videotaped facial expressions. The central measure of communication accuracy was the match between senders' and receivers' emotion nominations. Overall accuracy was significantly greater than chance, although it was not impressive in absolute terms. Only happy, angry, and disgusted expressions were recognized at above-chance rates, whereas surprised expressions were recognized at rates that were significantly worse than chance. Female Ss were significantly better senders than were male Ss. Although neither sex was found to be better at receiving facial expressions, female Ss were better receivers of female senders' expressions than of male senders' expressions. Female senders' neutral and surprised expressions were more accurately recognized than were those of male senders. The only sex difference found for decoding emotions was a tendency for male Ss to be more accurate at recognizing anger. (25 ref) (PsycINFO Database Record (c) 2011 APA, all rights reserved)
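With 7 affective states, guessing at random yields an expected accuracy of 1/7, and "significantly greater than chance" can be tested with a one-sided binomial tail. The sketch below uses hypothetical trial counts (not figures from the study) to show the logic:

```python
from math import comb

def binom_p_at_least(k: int, n: int, p: float) -> float:
    """One-sided P(X >= k) for X ~ Binomial(n, p): exact binomial tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: a receiver labels 42 expressions drawn from 7 affect states,
# so chance accuracy is 1/7 (expected 6 correct); suppose 12 labels match.
n, k, chance = 42, 12, 1 / 7
p_value = binom_p_at_least(k, n, chance)
print(p_value < 0.05)  # True: this accuracy exceeds chance at the .05 level
```

The same tail, computed on the lower side, is how a rate "significantly worse than chance" (as reported for surprised expressions) would be established.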

6.
Studied the influence of the value attached to aggression by a peer group on the relationship between aggression and peer status. Human Ss: 171 normal male Canadian school-age children (mean age 8 yrs) (3rd-grade students from 14 classes). 106 normal male Canadian school-age children (mean age 10 yrs) (5th-grade students from 9 classes). Ss completed interviews, questionnaires, and peer nomination inventories to assess their peer status and attitudes toward aggression (ATA). Teachers filled out a behavior problem checklist for each S. Interactions between aggressive behavior and peer status were analyzed, and for each grade level, results from the 3 classes with the highest group ATA scores were compared to results from the 3 classes with the lowest group ATA scores. (English abstract) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Investigated a category system of emotion words used in training beginning counselors to respond appropriately to client emotions. 10 experienced counselors (mean age 45.7 yrs), 32 counseling students (mean age 30.7 yrs), and 38 noncounselors (mean age 25.7 yrs) performed a free sort of 45 emotion words selected from categories labeled "depressed," "fearful," and "angry" proposed by D. C. Hammond et al (1977). Free sorts for each group of Ss were compared with the categories proposed by Hammond et al and with each other by means of quadratic assignment. Empirically derived intensity categories used by the Ss in each group also were identified. Comparisons were statistically significant, suggesting that Ss' categories were similar to the hypothesized categories. The categories used by each group, however, were more similar to each other than to the hypothesized categories. Fewer than 20% of the words were classified reliably across the 4 category systems. Use of the proposed category system in counselor training, therefore, would not be expected to improve counselors' ability to accurately identify and label client emotions. (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Two female Es and 1 male E individually made a home visit to each of 30 male and 30 female infants representing 8-, 10-, and 12-mo age groups; visits for each S occurred within a 10-day period. Recordings of Ss' visual, motor, and vocal behavior and facial expressions were used to determine a general response score (an affect category and intensity score) for each visit. In order for Ss' responses to strangers to be considered stable, the S had to receive the same category score at each of the 3 visits and behavior leading to this score had to be similar at each visit. Findings indicate that a significant number of Ss were stable in their responses to strangers; however, only positive responses were stable. Results question the assumption in developmental literature that a child's response to a stranger remains constant for a particular period; a re-evaluation of accepted guideposts of infant social and emotional development is indicated. (24 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
The authors compared the accuracy of emotion decoding for nonlinguistic affect vocalizations, speech-embedded vocal prosody, and facial cues representing 9 different emotions. Participants (N = 121) decoded 80 stimuli from 1 of the 3 channels. Accuracy scores for nonlinguistic affect vocalizations and facial expressions were generally equivalent, and both were higher than scores for speech-embedded prosody. In particular, affect vocalizations showed superior decoding over the speech stimuli for anger, contempt, disgust, fear, joy, and sadness. Further, specific emotions that were decoded relatively poorly through speech-embedded prosody were more accurately identified through affect vocalizations, suggesting that emotions that are difficult to communicate in running speech can still be expressed vocally through other means. Affect vocalizations also showed superior decoding over faces for anger, contempt, disgust, fear, sadness, and surprise. Facial expressions showed superior decoding scores over both types of vocal stimuli for joy, pride, embarrassment, and "neutral" portrayals. Results are discussed in terms of the social functions served by various forms of nonverbal emotion cues and the communicative advantages of expressing emotions through particular channels. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Recordings were made of female teachers reciting congruent and contradictory messages. In the congruent condition, the verbal and tonal elements of the communication were consistent. In the contradictory condition, the tone of voice was incongruent with the words spoken. All the congruent and contradictory vocal expressions were arranged in random order on a single tape and played to "normal" and "disturbed" boys of different ages (Grades 2, 4, and 6). Significant effects were found for type of speech and age: Ss reacted more negatively to contradictory than to congruent speech, and younger Ss responded more negatively to contradiction than did older ones. The disturbed Ss reacted significantly more negatively than normal Ss to contradictory messages. (15 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Compared 20 thought-disordered (TD) manics and schizophrenics (mean age 31 yrs) to 20 manic and schizophrenic patients (mean age 31.25 yrs) without thought disorder (NTD) and to 10 normal Ss (mean age 30.9 yrs) on the rating scales of cohesion and reference performance in speech developed by S. R. Rochester and J. R. Martin (1979). TD manics and schizophrenics differed from NTD Ss and the normal group in their more frequent use of unclear references as well as in their less frequent use of effective cohesion and reference strategies. Speech elements of the TD Ss were classified into disordered and nondisordered segments, and the same natural language analysis was completed for each category of speech segments. Nondisordered speech segments of TD Ss were quite similar to the overall speech performance of NTD Ss and the normal group. There were no cohesion or reference performance differences between TD manics and TD schizophrenics in their disordered speech segments. Findings are interpreted as validation of the usefulness of the Rochester and Martin rating system for identifying aspects of speech performance that are related to clinically rated thought disorder. (22 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Five studies investigated the young infant's ability to produce identifiable emotion expressions as defined in differential emotions theory. Trained judges applied emotion-specific criteria in selecting expression stimuli from videotape recordings of 54 1–9 mo old infants' responses to a variety of incentive events, ranging from playful interactions to the pain of inoculations. Four samples of untrained Ss (130 undergraduates and 62 female health service professionals) confirmed the social validity of infants' emotion expressions by reliably identifying expressions of interest, joy, surprise, sadness, anger, disgust, contempt, and fear. Brief training resulted in significant increases in the accuracy of discrimination of infants' negative emotion expressions for low-accuracy Ss. Construct validity for the 8 emotion expressions identified by untrained Ss and for a consistent pattern of facial responses to unanticipated pain was provided by expression identifications derived from an objective, theoretically structured, anatomically based facial movement coding system. (21 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Studied interactions between cognitive development and self-monitoring abilities. Human Ss: 19 normal male and female Canadian school-age children (aged 8 yrs). 20 normal male and female Canadian school-age children (aged 10 yrs). 21 normal male and female Canadian school-age children (aged 12 yrs). 21 normal male and female Canadian adolescents (aged 14 yrs). Ss were asked to solve 8 proportionality problems, using a balance scale. Cognitive development was determined by the strategy used and performance on each problem. Indicators of self-monitoring included amount of time spent on strategy planning, self-evaluation of expected performance, persistence, changes in strategies following failure, and verbal explanations of failure. Interactions between cognitive level and indicators of self-monitoring were analyzed, and the influence of task complexity was determined. (English abstract) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
15.
In Exp I, 4 groups of 9 infants each (mean ages 2.1, 4.2, 8.1, and 19.2 mo) were videotaped as they received a diphtheria–pertussis–tetanus inoculation. Their facial movements for the 1st 10 sec following needle penetration were coded and analyzed. There was no relationship between expressions of affect and sex or social class. Pain produced (a) a distinct distress expression, whose prominence as immediate response to pain decreased with age, and (b) the anger expression, whose prominence as immediate response increased with age. In Exp II, the indices of facial affect signals derived from the entire period from needle penetration to soothing were analyzed for 18 Ss from Exp I. There were no effects of sex on soothing time or total time each affect was expressed. Ss above and below the median on the ability to be soothed differed significantly in soothing time and in duration and pattern of affect expressions. In particular, slow soothers showed a proportionately greater duration of anger expression than fast soothers. The distress and anger expressions changed with age. (17 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Studied the extent to which a stereotype of Mexican or Chicano students as fatalistic is supported by their locus of control scores. Data came from Rotter's Internal–External Locus of Control Scale scores of male college students in 4 nations: US (86), Mexico (57), Ireland (47), and West Germany (54). Data show the Mexican Ss to be significantly more internally oriented than Ss from each of the other nations. Locus of control scores (determined with a scale developed by H. Levenson, 1974) for 151 Anglo and 95 Chicano senior high school students were also compared. Scores for Chicanos were nearly identical to those obtained from Anglo students. Only Chicano male high school students not planning to enter college showed any tendency toward a more external locus of control. It is concluded that to the extent a perceived external locus of control would be indicative of a fatalistic outlook, such a perception is lacking in most data from Mexican and Chicano respondents. (22 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Investigated the effects of vocal masking on the associative structure and content of ideas elicited in an imagery task, using 16 college graduates as Ss. When Ss were unable to hear their own voices, their images showed significantly greater indications of overall drive expression (unweighted drive), more intense drive expression (weighted drive), and a significant increase in morality references. The degree of drive expression was positively correlated with speech editing behaviors (aborted sentences, incomplete words, etc.), and negatively correlated with language editing behaviors (e.g., use of qualifying expressions). Findings are discussed in terms of the reciprocal activity of speech and language editing in relation to (a) drive expression, (b) the functional significance of hearing one's own voice, and (c) the contribution of the total experimental situation (e.g., the masking noise) to the effects obtained. (15 ref.) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Studied the dimensions of a brief French version of the WCQ (S. Folkman and R. S. Lazarus, 1988) in a sample of 506 married and cohabiting couples. Human Ss: 506 normal male Canadian adults (mean age 37 yrs). 506 normal female Canadian adults (mean age 36 yrs). Each S completed the WCQ, and data were treated with factorial analyses to identify strong and stable dimensions of marital coping. (English abstract) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
20.
This study compared young and older adults’ ability to recognize bodily and auditory expressions of emotion and to match bodily and facial expressions to vocal expressions. Using emotion discrimination and matching techniques, participants assessed emotion in voices (Experiment 1), point-light displays (Experiment 2), and still photos of bodies with faces digitally erased (Experiment 3). Older adults were worse, at least some of the time, at recognizing anger, sadness, fear, and happiness in bodily expressions and anger in vocal expressions. Compared with young adults, older adults also found it more difficult to match auditory expressions to facial expressions (5 of 6 emotions) and bodily expressions (3 of 6 emotions). (PsycINFO Database Record (c) 2010 APA, all rights reserved)
