Similar Documents
20 similar documents retrieved
1.
A number of different cues allow listeners to perceive musical meter. Three experiments examined effects of melodic and temporal accents on perceived meter in excerpts from folk songs scored in 6/8 or 3/4 meter. Participants matched excerpts with 1 of 2 metrical drum accompaniments. Melodic accents included contour change, melodic leaps, registral extreme, melodic repetition, and harmonic rhythm. Two experiments with isochronous melodies showed that contour change and melodic repetition predicted judgments. For longer melodies in the 2nd experiment, variables predicted judgments best at the beginning of excerpts. The final experiment, with rhythmically varied melodies, showed that temporal accents, tempo, and contour change were the strongest predictors of meter. The authors' findings suggest that listeners combine multiple melodic and temporal features to perceive musical meter. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Previous research indicates that temporal accents (TAs; accents due to time changes) play a strong role in meter perception, but evidence favoring a role for melodic accents (MAs; accents due to pitch changes) is mixed. The authors claim that this mixed support for MAs is the result of a failure to control for accent salience and addressed this hypothesis in Experiment 1. Listeners rated the metrical clarity of 13-tone melodies in which the magnitude and pattern of MAs and TAs were varied. Results showed that metrical clarity increased with both MA and TA magnitude. In Experiment 2, listeners were asked to rate metrical clarity in melodies with combinations of MA and TA patterns to allow the authors to ascertain whether these two accent types combined additively or interactively in meter perception. With respect to the additive or interactive debate, the findings highlighted the importance of (a) accent salience, (b) scoring methods, and (c) conceptual versus statistical interpretations of data. Implications for dynamic attending and neuropsychological investigations are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Joint Accent Structure (JAS) is a construct that uses temporal relationships between different accents in a melodic pattern as indices of its complexity. The present study examines the role of different JASs in real-time attending to simple musical events. 39 adults with or without musical training were instructed to selectively attend to and synchronize finger taps with accents in 2 experiments that examined attentional tracking of musical patterns having a concordant or discordant JAS. Results indicate that tapping was more variable with discordant than with concordant JAS patterns, both with respect to produced inter-accent time periods and with respect to the phase of taps relative to accent onsets. These findings are interpreted in terms of real-time attending and its control by event time structure. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Prior knowledge shapes our experiences, but which prior knowledge shapes which experiences? This question is addressed in the domain of music perception. Three experiments were used to determine whether listeners activate specific musical memories during music listening. Each experiment provided listeners with one of two musical contexts that was presented simultaneously with a melody. After a listener was familiarized with melodies embedded in contexts, the listener heard melodies in isolation and judged the fit of a final harmonic or metrical probe event. The probe event matched either the familiar (but absent) context or an unfamiliar context. For both harmonic (Experiments 1 and 3) and metrical (Experiment 2) information, exposure to context shifted listeners' preferences toward a probe matching the context that they had been familiarized with. This suggests that listeners rapidly form specific musical memories without explicit instruction, and that these memories are then activated during music listening. These data pose an interesting challenge for models of music perception which implicitly assume that the listener's knowledge base is predominantly schematic or abstract. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

5.
The authors examined how the structural attributes of tonality and meter influence musical pitch–time relations. Listeners heard a musical context followed by probe events that varied in pitch class and temporal position. Tonal and metric hierarchies contributed additively to the goodness-of-fit of probes, with pitch class exerting a stronger influence than temporal position (Experiment 1), even when listeners attempted to ignore pitch (Experiment 2). Speeded classification tasks confirmed this asymmetry. Temporal classification was biased by tonal stability (Experiment 3), but pitch classification was unaffected by temporal position (Experiment 4). Experiments 5 and 6 ruled out explanations based on the presence of pitch classes and temporal positions in the context, unequal stimulus quantity, and discriminability. The authors discuss how typical Western music biases attention toward pitch and distinguish between dimensional discriminability and salience. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
To explore the relationship between the processing of melodic and rhythmic patterns in speech and music, we tested the prosodic and musical discrimination abilities of two "amusic" subjects who suffered from music perception deficits secondary to bilateral brain damage. Prosodic discrimination was assessed with sentence pairs where members of a pair differed by intonation or rhythm, and musical discrimination was tested using musical-phrase pairs derived from the prosody of the sentence pairs. This novel technique was chosen to make task demands as comparable as possible across domains. One amusic subject showed good performance on both linguistic and musical discrimination tasks, while the other had difficulty with both tasks. In both subjects, level of performance was statistically similar across domains, suggesting shared neural resources for prosody and music. Further tests suggested that prosody and music may overlap in the processes used to maintain auditory patterns in working memory.

7.
Research on reality monitoring (the process by which people distinguish memories of real events from memories of imagined events) suggests that the occurrence of imagined events can inflate the perceived frequency of corresponding real events. Two experiments examined how such failures in reality monitoring can contribute to the maintenance of social stereotypes. When subjects imagined members of occupational groups in the initial experiment, they tended to incorporate stereotyped traits into their imaginations, with specific traits determined by the contexts being imagined. This suggests that imagined events do correspond with stereotype-confirming real events. In the second experiment, subjects read sentences that presented traits (stereotyped and nonstereotyped) in association with occupations with uniform frequency. They also imagined members of each occupation in situations relevant to particular stereotypic traits. In subsequent judgments of presentation frequency, subjects overestimated their exposure to stereotypic occupation–trait combinations, which replicated earlier studies. Subjects further overestimated the presentation frequency of imagined stereotypic combinations, which indicated the failure to distinguish self-generated images from actual presentations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Harmonic priming studies have shown that a musical context with its tonal center influences target chord processing. In comparison with targets following baseline contexts, which do not establish a specific tonal center, processing is facilitated for a strongly related target functioning as the tonic, but inhibited for unrelated (out-of-key) and less related (subdominant) targets. This study investigated cost and benefit patterns for the processing of the 3 most important chords of the harmonic hierarchy. Response time patterns reflected the chords' ranking: Processing was fastest for the tonic, followed by the dominant, and then the subdominant. The comparison with baseline contexts replicated the benefit of processing for tonic targets (Experiments 1 and 3) and the cost of processing for subdominant targets (Experiment 3), while dominant targets were situated at baseline level (Experiments 1 to 3). Findings indicate that listeners implicitly understand fine differences in tonal stabilities and confirm the special status of the tonic as the most expected and solely facilitated chord at the end of a tonal context. Findings are discussed with reference to sensory and cognitive approaches to music perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
This paper is a sequel to a study which showed that the dominant dimension for perceptual discrimination among normal voices was the male-female categorization and which also suggested that discrimination within the male-female categories utilized distinct dimensions. The present study eliminates the male-female axis by treating the gender groups separately and making the within-category dimensions available for more sensitive analysis. The purpose was to determine the number and nature of perceptual parameters needed to explain judgments of voice similarity depending on talker sex and whether the stimulus sample was a sustained vowel or a short phrase. The similarity judgments were submitted to multidimensional analysis via INDSCAL and the resulting dimensions were interpreted in terms of available acoustic measures and unidimensional voice-quality ratings of pitch, breathiness, hoarseness, nasality, and effort. The decisions of the listeners appeared to be influenced by both the sex of the speaker and the stimulus sample, although fundamental frequency (f0) was important for all judgments. Aside from the f0 dimensions, judgments concerning male voices were related to vocal tract parameters, while similarity judgments of female voices were related to perceived glottal/vocal tract differences. Formant structure was apparently important in judging the similarity of vowels for both sexes, while perceptual glottal/temporal attributes may have been used as cues in the judgments of phrases.

10.
The effect of deviations from temporal expectations on tempo discrimination was studied in 3 experiments using isochronous auditory sequences. Temporal deviations consisted of advancing or delaying the onset of a comparison pattern relative to an "expected" onset, defined by an extension of the periodicity of a preceding standard pattern. An effect of onset condition was most apparent when responses to faster and slower comparison patterns were analyzed separately and onset conditions were mixed. Under these conditions, early onsets produced more "faster" judgments and lower thresholds for tempo increases, and late onsets produced more "slower" judgments and lower thresholds for tempo decreases. In another experiment, pattern tempo had a similar effect: Fast tempos led to lower thresholds for tempo increases and slow tempos led to lower thresholds for tempo decreases. Findings support oscillator-based approaches to time discrimination.

11.
The role of attributions in judgments of sex discrimination was examined in 2 laboratory experiments. In Study 1, participants read 1 of 12 brief scenarios in which limited information about the strength of evidence against a fictitious corporation and occupational gender stereotype were manipulated. Results suggested that attributions mediated the relationships between participants' gender, strength of evidence, and discrimination judgments. In Study 2, participants were provided with 1 of 3 detailed, typewritten summaries of evidence presented in a sex discrimination trial. Results indicated that jurors' gender was again significantly related to attributions and to sex discrimination judgments even in the face of substantial objective information related to the case. The variance in observers' judgments associated with gender, however, appeared to be greatest when information about the organization's guilt or innocence was equivocal. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Tested response time to alterations. Metric rhythm and harmonic rhythm of 13-note tonal sequences were either matched or mismatched. Metric rhythm (3/4 or 4/4 meter) was induced by dynamic accents. Harmonic rhythm was induced by implied chord progressions initiated on the first note and on either every third or every fourth note. Responses were not always faster for matched rhythms or for alterations occurring on the dynamic accent. Responses were consistently faster for sequences presented in 4/4 meter. Musically untrained Ss performed similarly to trained Ss, but were slower and more variable. Accuracy of recall on a music dictation task also favored 4/4 meter rather than matched rhythms. Coding of pitch content may have been facilitated by the structural framework of 4/4 meter rather than by expectancies arising from the match of temporal and pitch organization. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
This paper addresses the question of frequency discrimination of hearing for non-stationary (short) tone stimuli (duration ≤ 125 ms). Shortening of the stimulus duration leads to widening of the frequency spectrum of the tone. It can be shown that no acoustical uncertainty relation holds for hearing, and thus some nonlinear elements must be present in hearing physiology. We present neurophysiological and psychoacoustical findings supporting the hypothesis that frequency discrimination of non-stationary short tone stimuli is performed in neural networks of the auditory system. Neural network architectures that could process the temporal and place excitation patterns originating in the cochlea are suggested. We show how these networks (a temporal coincidence network processing the temporal code and a lateral inhibition network processing the place code) can be combined to show performance consistent with auditory physiology. They might explain the frequency discrimination of hearing for non-stationary short tone stimuli. We show that psychophysical relations based on these networks fit the experimentally determined data.
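The abstract names two network motifs (temporal coincidence and lateral inhibition) without giving equations. As a minimal sketch of the second motif only, the Python fragment below shows how a lateral-inhibition stage could sharpen a broad place-code excitation pattern; the kernel weight and rectification rule are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch only: a toy lateral-inhibition stage over a "place code"
# (per-channel excitation along the cochlea). The inhibition weight and
# half-wave rectification are assumptions, not parameters from the paper.

def lateral_inhibition(excitation, inhibit_weight=0.4):
    """Sharpen a place-code excitation pattern by subtracting a fraction of
    each channel's neighbors, then half-wave rectifying the result."""
    sharpened = []
    for i, value in enumerate(excitation):
        left = excitation[i - 1] if i > 0 else 0.0
        right = excitation[i + 1] if i < len(excitation) - 1 else 0.0
        sharpened.append(max(0.0, value - inhibit_weight * (left + right)))
    return sharpened

# A short tone yields a broad excitation peak; inhibition narrows it,
# which is one way such a network could support frequency discrimination.
broad_peak = [0.1, 0.4, 0.9, 1.0, 0.9, 0.4, 0.1]
print(lateral_inhibition(broad_peak))
```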

14.
A temporally based theory of attending is proposed that assumes that the structure of world events affords different attending modes. Future-oriented attending supports anticipatory behaviors and occurs with highly coherent temporal events. Time judgments, given this attending mode, are influenced by the way an event's ending confirms or violates temporal expectancies. Analytic attending supports other activities (e.g., grouping, counting), and if it occurs with events of low temporal coherence, then time judgments depend on the attending levels involved. A weighted contrast model describes over- and underestimations of event durations. The model applies to comparative duration judgments of equal and unequal time intervals; its rationale extends to temporal productions/extrapolations. Two experiments compare predictions of the contrast model with those derived from other traditional approaches. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
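The abstract describes the weighted contrast model only qualitatively. Purely as an illustrative sketch (the linear form, weight, and variable names are assumptions, not the published equations), the idea that an ending later or earlier than expected leads to over- or underestimation of duration could look like this:

```python
# Toy sketch of a weighted-contrast duration judgment: the judged duration is
# biased in proportion to how much the event's ending violates the expected
# ending. The linear form and weight are illustrative assumptions.

def judged_duration(onset, expected_end, actual_end, weight=0.5):
    """Return a judged duration for an event spanning onset..actual_end,
    given an expectancy that it would end at expected_end."""
    actual = actual_end - onset
    contrast = (actual_end - expected_end) / (expected_end - onset)
    return actual * (1.0 + weight * contrast)

print(judged_duration(0.0, 2.0, 2.2))  # ends late  -> duration overestimated (> 2.2)
print(judged_duration(0.0, 2.0, 1.8))  # ends early -> duration underestimated (< 1.8)
```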

15.
Rhythm and pitch are the 2 primary dimensions of music. They are interesting psychologically because simple, well-defined units combine to form highly complex and varied patterns. This article brings together the major developments in research on how these dimensions are perceived and remembered, beginning with psychophysical results on time and pitch perception. Progressively larger units are considered, moving from basic psychological categories of temporal and frequency ratios, to pulse and scale, to metrical and tonal hierarchies, to the formation of musical rhythms and melodies, and finally to the cognitive representation of large-scale musical form. Interactions between the dimensions are considered, and major theoretical proposals are described. The article identifies various links between musical structure and perceptual and cognitive processes, suggesting psychological influences on how sounds are patterned in music. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Numerosity discrimination was examined when items were varied in space-time position rather than in space only. Observers were instructed to indicate which of two adjacent streams of visual events contained more items. The precision of numerosity discrimination of dynamic events was not remarkably different from that of static patterns. Two basic numerosity biases previously found for static dot patterns--inhibitory overestimation and satellite underestimation--were demonstrated for items distributed randomly over a spatiotemporal interval. It was also demonstrated that two streams, equated in the number and luminous energy of items, are not judged equal in their visible number if items in one of these two streams have longer duration than items in the second stream. These findings can be accounted for by the occupancy model of perceived numerosity (Allik & Tuulmets, 1991a) if it is supposed that the impact that each element has on its neighborhood is spread along both spatial and temporal coordinates. Perceived numerosity decreases with both spatial and temporal proximity between the visual items. Space and time have interchangeable effects on perceived numerosity: the amount of numerosity bias caused by the spatial proximity of items can also be produced by the properly chosen temporal proximity of items.
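The occupancy model is only summarized verbally above. The toy Python sketch below (a discretized space-time grid; the radii and grid are assumptions, not the published parameterization of Allik & Tuulmets, 1991a) illustrates the core claim that each item "occupies" a spatiotemporal neighborhood and that perceived numerosity tracks the total occupied region, so items close together in space or in time overlap and are effectively undercounted.

```python
# Toy sketch of the occupancy idea: each item fills a neighborhood in space
# and time; judged number tracks the total occupied cells, so nearby items
# (in space *or* time) overlap and contribute less. Grid size and radii are
# illustrative assumptions.

def occupied_volume(items, space_radius=1, time_radius=1):
    """items: list of (x, y, t) integer coordinates. Returns the number of
    distinct spatiotemporal cells covered by any item's neighborhood."""
    occupied = set()
    for x, y, t in items:
        for dx in range(-space_radius, space_radius + 1):
            for dy in range(-space_radius, space_radius + 1):
                for dt in range(-time_radius, time_radius + 1):
                    occupied.add((x + dx, y + dy, t + dt))
    return len(occupied)

# Same number of items, clustered in space-time versus spread out:
clustered = [(0, 0, 0), (1, 0, 0), (0, 1, 1)]
spread    = [(0, 0, 0), (5, 0, 0), (0, 5, 5)]
print(occupied_volume(clustered) < occupied_volume(spread))  # True: clustering lowers occupancy
```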

17.
Expressive timing methods are described that map pianists' musical thoughts to sounded performance. In Experiment 1, 6 pianists performed the same musical excerpt on a computer-monitored keyboard. Each performance contained 3 expressive timing patterns: chord asynchronies, rubato patterns, and overlaps (staccato and legato). Each pattern was strongest in experienced pianists' performances and decreased when pianists attempted to play unmusically. In Experiment 2, pianists performed another musical excerpt and notated their musical intentions on an unedited score. The notated interpretations correlated with the presence of the 3 methods: The notated melody preceded other events in chords (chord asynchrony); events notated as phrase boundaries showed the greatest tempo changes (rubato); and the notated melody showed the most consistent amount of overlap between adjacent events (staccato and legato). These results suggest that the mapping of musical thought to musical action is rule-governed, and that the same rules produce different interpretations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
This study is concerned with the question of whether, and to what extent, listeners' previous exposure to music in everyday life and expertise resulting from formal musical training play a role in making expressive timing judgments in music. This was investigated using a Web-based listening experiment in which listeners with a wide range of musical backgrounds were asked to compare 2 recordings of the same composition (15 pairs, grouped in 3 musical genres), 1 of which was tempo-transformed (manipulating the expressive timing). The results show that expressive timing judgments are not so much influenced by expertise levels, as suggested by the expertise hypothesis, but by exposure to a certain musical idiom, as suggested by the exposure hypothesis. As such, the current study provides evidence for the idea that some musical capabilities are acquired through mere exposure to music, and that these abilities are more likely enhanced by active listening (exposure) than by formal musical training (expertise). (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Two experiments explored the relation between melodic expectancy and melodic memory. In Experiment 1, listeners rated the degree to which different endings confirmed their expectations for a set of melodies. After providing these expectancy ratings, listeners received a recognition memory test in which they discriminated previously heard melodies from new melodies. Recognition memory in this task positively correlated with perceived expectancy, and was related to the estimated tonal coherence of these melodies. Experiment 2 extended these results, demonstrating better recognition memory for high expectancy melodies, relative to medium and low expectancy melodies. This experiment also observed asymmetrical memory confusions as a function of perceived expectancy. These findings fit with a model of musical memory in which schematically central events are better remembered than schematically peripheral events.

20.
Investigated judgments of the frequency of test items (Y) that were highly similar to studied items (X) to test a prediction made by several memory models: that the judged frequency of Y should be proportional to the judged frequency of X. Whether stimuli were pictures or words, judged frequency of Y was bimodally distributed with 1 mode at zero, suggesting that frequency judgments involve a 2-stage process in which a zero judgment is made if there is a mismatch between retrieved information and the test item. Nonzero judgments, taken by themselves, were consistent with the prediction of proportionality. In 2 experiments, the percentage of zero judgments made to Y increased with repetition of X, but in 2 others the percentage did not change beyond frequency = 1. The percentage of "new" judgments in recognition memory followed this same pattern. Because the judged frequency of X increased even as X–Y discrimination showed no improvement, the result is characterized as "registration without learning." (PsycINFO Database Record (c) 2010 APA, all rights reserved)
