Similar articles
1.
When presented with alternating low and high tones, listeners are more likely to perceive 2 separate streams of tones (“streaming”) than a single coherent stream when the frequency separation (Δf) between tones is greater and the number of tone presentations is greater (“buildup”). However, the same large-Δf sequence reduces streaming for subsequent patterns presented after a gap of up to several seconds. Buildup occurs at a level of neural representation with sharp frequency tuning. The authors used adaptation to demonstrate that the contextual effect of prior Δf arose from a representation with broad frequency tuning, unlike buildup. Separate adaptation did not occur in a representation of Δf independent of frequency range, suggesting that any frequency-shift detectors undergoing adaptation are also frequency specific. A separate effect of prior perception was observed, dissociating stimulus-related (i.e., Δf) and perception-related (i.e., 1 stream vs. 2 streams) adaptation. Viewing a visual analogue to auditory streaming had no effect on subsequent perception of streaming, suggesting adaptation in auditory-specific brain circuits. These results, along with previous findings on buildup, suggest that processing in at least 3 levels of auditory neural representation underlies segregation and formation of auditory streams. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
The common assumption that perceptual sensitivities are related to neural representations of sensory stimuli has seldom been directly demonstrated. The authors analyzed the similarity of spike trains evoked by complex sounds in the rat auditory cortex and related cortical responses to performance in an auditory task. Rats initially learned to identify 2 highly different periodic, frequency-modulated sounds and then were tested with increasingly similar sounds. Rats correctly classified most novel sounds; their accuracy was negatively correlated with acoustic similarity. Rats discriminated novel sounds with slower modulation more accurately than sounds with faster modulation. This asymmetry was consistent with similarities in cortical representations of the sounds, demonstrating that perceptual sensitivities to complex sounds can be predicted from the cortical responses they evoke. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
We have described the acoustic pathway from the ear to the diencephalon in a sound-producing fish (Pollimyrus) based on simultaneous neurophysiological recordings from single neurons and injections of biotin pathway tracers at the recording sites. Fundamental transformations of auditory information from highly phase-locked and entrained responses in primary eighth nerve afferents and first-order medullary neurons to more weakly phase-locked responses in the auditory midbrain were revealed by physiological recordings. Anatomical pathway tracing uncovered a bilateral array of both first- and second-order medullary nuclei and a perilemniscal nucleus. Interconnections within the medullary auditory areas were extensive. Medullary nuclei projected to the auditory midbrain by means of the lateral lemniscus. Midbrain auditory areas projected to both ipsilateral and contralateral optic tecta and to an array of three nuclei in the auditory thalamus. The significance of these findings to the elucidation of mechanisms for the analysis of communication sounds and spatial hearing in this vertebrate animal is discussed.

4.
Change blindness, or the failure to detect (often large) changes to visual scenes, has been demonstrated in a variety of different situations. Failures to detect auditory changes are far less studied, and thus little is known about the nature of change deafness. Five experiments were conducted to explore the processes involved in change deafness by measuring explicit change detection as well as auditory object encoding. The experiments revealed that considerable change deafness occurs, even though auditory objects are encoded quite well. Familiarity with the objects did not affect detection or recognition performance. Whereas spatial location was not an effective cue, fundamental frequency and the periodicity/aperiodicity of the sounds provided important cues for the change-detection task. Implications for the mechanisms responsible for change deafness and auditory sound organization are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
An important aspect of the analysis of auditory “scenes” relates to the perceptual organization of sound sequences into auditory “streams.” In this study, we adapted two auditory perception tasks, used in recent human psychophysical studies, to obtain behavioral measures of auditory streaming in ferrets (Mustela putorius). One task involved the detection of shifts in the frequency of tones within an alternating tone sequence. The other task involved the detection of a stream of regularly repeating target tones embedded within a randomly varying multitone background. In both tasks, performance was measured as a function of various stimulus parameters, which previous psychophysical studies in humans have shown to influence auditory streaming. Ferret performance in the two tasks was found to vary as a function of these parameters in a way that is qualitatively consistent with the human data. These results suggest that auditory streaming occurs in ferrets, and that the two tasks described here may provide a valuable tool in future behavioral and neurophysiological studies of the phenomenon. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
This study explored the extent to which sequential auditory grouping affects the perception of temporal synchrony. In Experiment 1, listeners discriminated between 2 pairs of asynchronous “target” tones at different frequencies, A and B, in which the B tone either led or lagged. Thresholds were markedly higher when the target tones were temporally surrounded by “captor tones” at the A frequency than when the captor tones were absent or at a remote frequency. Experiment 2 extended these findings to asynchrony detection, revealing that the perception of synchrony, one of the most potent cues for simultaneous auditory grouping, is not immune to competing effects of sequential grouping. Experiment 3 examined the influence of ear separation on the interactions between sequential and simultaneous grouping cues. The results showed that, although ear separation could facilitate perceptual segregation and impair asynchrony detection, it did not prevent the perceptual integration of simultaneous sounds. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
The division of the auditory cortex into various fields, functional aspects of these fields, and neuronal coding in the primary auditory cortical field (AI) are reviewed with stress on features that may be common to mammals. On the basis of 14 topographies and clustered distributions of neuronal response characteristics in the primary auditory cortical field, a hypothesis is developed of how a certain complex acoustic pattern may be encoded in an equivalent spatial activity pattern in AI, generated by time-coordinated firing of groups of neurons. The auditory cortex, demonstrated specifically for AI, appears to perform sound analysis by synthesis, i.e. by combining spatially distributed coincident or time-coordinated neuronal responses. The dynamics of sounds and the plasticity of cortical responses are considered as a topic for research.

8.
The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about five percentage points better accuracy for sounds that were contextually incongruous with the background scene (e.g., a rooster crowing in a hospital). Further studies with naive (untrained) listeners showed that this incongruency advantage (IA) is level-dependent: there is no advantage for incongruent sounds lower than a Sound/Scene ratio (So/Sc) of −7.5 dB, but there is about five percentage points better accuracy for sounds with greater So/Sc. Testing a new group of trained listeners on a larger corpus of sounds and scenes showed that the effect is robust and not confined to a specific stimulus set. Modeling using spectral-temporal measures showed that neither analyses based on acoustic features, nor semantic assessments of sound-scene congruency can account for this difference, indicating the IA is a complex effect, possibly arising from the sensitivity of the auditory system to new and unexpected events, under particular listening conditions. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

9.
Previous findings on streaming are generalized to sequences composed of more than 2 subsequences. A new paradigm identified whether listeners perceive complex sequences as a single unit (integrative listening) or segregate them into 2 (or more) perceptual units (stream segregation). Listeners heard 2 complex sequences, each composed of 1, 2, 3, or 4 subsequences. Their task was to detect a temporal irregularity within 1 subsequence. In Experiment 1, the smallest frequency separation under which listeners were able to focus on 1 subsequence was unaffected by the number of co-occurring subsequences; nonfocused sounds were not perceptually organized into streams. In Experiment 2, detection improved progressively, not abruptly, as the frequency separation between subsequences increased from 0.25 to 6 auditory filters. The authors propose a model of perceptual organization of complex auditory sequences. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
To investigate the role of temporal processing in language lateralization, we monitored asymmetry of cerebral activation in human volunteers using positron emission tomography (PET). Subjects were scanned during passive auditory stimulation with nonverbal sounds containing rapid (40 msec) or extended (200 msec) frequency transitions. Bilateral symmetric activation was observed in the auditory cortex for slow frequency transitions. In contrast, left-biased asymmetry was observed in response to rapid frequency transitions due to reduced response of the right auditory cortex. These results provide direct evidence that auditory processing of rapid acoustic transitions is lateralized in the human brain. Such functional asymmetry in temporal processing is likely to contribute to language lateralization from the lowest levels of cortical processing.

11.
Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex was decreased (Experiments 1 and 2) or increased (Experiments 3, 4, and 5) in intensity on half of trials; the task was simply to identify those trials. Prior to each trial, a pure tone inducer was introduced either at the same frequency as the target component or at the frequency of a different component of the complex. Consistent with a frequency-specific form of disruption, discrimination performance was impaired when the inducing tone matched the frequency of the following decrement or increment. A timbre memory model (TMM) is proposed incorporating channel-specific interference allied to inhibition of attending in the coding of sounds in the context of memory traces of recent sounds. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
The authors examined the effect of preceding context on auditory stream segregation. Low tones (A), high tones (B), and silences (-) were presented in an ABA- pattern. Participants indicated whether they perceived 1 or 2 streams of tones. The A tone frequency was fixed, and the B tone was the same as the A tone or had 1 of 3 higher frequencies. Perception of 2 streams in the current trial increased with greater frequency separation between the A and B tones (Δf). Larger Δf in previous trials modified this pattern, causing less streaming in the current trial. This occurred even when listeners were asked to bias their perception toward hearing 1 stream or 2 streams. The effect of previous Δf was not due to response bias because simply perceiving 2 streams in the previous trial did not cause less streaming in the current trial. Finally, the effect of previous Δf was diminished, though still present, when the silent duration between trials was increased to 5.76 s. The time course of this context effect on streaming implicates the involvement of auditory sensory memory or neural adaptation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
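The ABA- triplet stimulus described above (and used in several of the streaming studies in this list) is straightforward to synthesize. Below is a minimal sketch; the parameter values (500 Hz A tone, 100-ms tones, 16-kHz sampling rate, 10 triplets) are illustrative assumptions, not values taken from the study.

```python
import math

def aba_sequence(delta_semitones, a_hz=500.0, tone_ms=100.0,
                 fs=16000, n_triplets=10):
    """Synthesize an ABA- triplet sequence as a list of samples.

    delta_semitones sets the A-B frequency separation (Δf); the '-' is
    a silent gap equal in duration to one tone. All default parameter
    values are illustrative, not taken from the study.
    """
    b_hz = a_hz * 2 ** (delta_semitones / 12.0)
    n = int(fs * tone_ms / 1000.0)  # samples per tone

    def tone(f):
        return [math.sin(2 * math.pi * f * i / fs) for i in range(n)]

    gap = [0.0] * n  # the silent '-' interval
    triplet = tone(a_hz) + tone(b_hz) + tone(a_hz) + gap
    return triplet * n_triplets
```

Increasing `delta_semitones` widens Δf, the manipulation the study found promotes hearing 2 streams rather than a single galloping ABA- rhythm.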

13.
A popular theoretical account of developmental language and literacy disorders implicates poor auditory temporal processing in their etiology, but evidence from studies using behavioral measures has yielded inconsistent results. The mismatch negativity (MMN) component of the auditory event-related potential has been recommended as an alternative, relatively objective, measure of the brain's ability to discriminate sounds that is suitable for children with limited attention or motivation. A literature search revealed 26 studies of the MMN in individuals with dyslexia or specific language impairment and 4 studies of infants or children at familial risk of these disorders. Findings were highly inconsistent. Overall, attenuation of the MMN and atypical lateralization in the clinical group were most likely to be found in studies using rapidly presented stimuli, including nonverbal sounds. The MMN literature offers tentative support for the hypothesis that auditory temporal processing is impaired in language and literacy disorders, but the field is plagued by methodological inconsistencies, low reliability of measures, and low statistical power. The article concludes with recommendations for improving this state of affairs. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
The naming of manipulable objects in older and younger adults was evaluated across auditory, visual, and multisensory conditions. Older adults were less accurate and slower in naming across conditions, and all subjects were more impaired and slower to name action sounds than pictures or audiovisual combinations. Moreover, there was a sensory by age group interaction, revealing lower accuracy and increased latencies in auditory naming for older adults unrelated to hearing insensitivity but modest improvement to multisensory cues. These findings support age-related deficits in object action naming and suggest that auditory confrontation naming may be more sensitive than visual naming. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Level-invariant detection refers to findings that thresholds in tone-in-noise detection are unaffected by roving-level procedures that degrade energy cues. Such data are inconsistent with ideas that detection is based on the energy passed by an auditory filter. A hypothesis that detection is based on a level-invariant temporal cue is advanced. Simulations of a leaky-integrator model, consisting of a bandpass filter, half-wave rectification, and a lowpass filter, account for thresholds in band-widening experiments. The decision variable is calculated from the discrete Fourier transform of the leaky-integrator output. A counterintuitive finding is the apparent dissociation of the phenomenon of critical bands estimated from band-widening experiments and the theory of auditory filters. Physiological plausibility is demonstrated by showing that a leaky integrator describes the discharge cadence of primary afferents for tone-in-noise stimuli as well as for complex periodic sounds. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
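The leaky-integrator pipeline described above (bandpass filter, half-wave rectification, lowpass filter, with a decision variable computed from the DFT of the integrator output) might be sketched as follows. This is a minimal illustration, not the study's model: the one-pole filters, cutoff frequencies, and the AC/DC ratio statistic are all assumptions standing in for the paper's actual filter shapes and decision variable.

```python
import math

def one_pole_lowpass(x, fs, cutoff_hz):
    """First-order 'leaky' lowpass filter (a leaky integrator)."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / fs)
    out, acc = [], 0.0
    for v in x:
        acc += alpha * (v - acc)  # leak toward the input
        out.append(acc)
    return out

def leaky_integrator_output(x, fs, lo_hz=800.0, hi_hz=1200.0, lp_hz=50.0):
    """Bandpass -> half-wave rectification -> lowpass.

    The bandpass stage is approximated as the difference of two one-pole
    lowpasses; all cutoff values are illustrative assumptions.
    """
    band = [h - l for h, l in zip(one_pole_lowpass(x, fs, hi_hz),
                                  one_pole_lowpass(x, fs, lo_hz))]
    rect = [max(v, 0.0) for v in band]      # half-wave rectification
    return one_pole_lowpass(rect, fs, lp_hz)

def decision_variable(y):
    """Toy statistic from the DFT of the integrator output:
    ratio of summed AC magnitudes to the DC term (illustrative)."""
    n = len(y)
    dc = abs(sum(y))
    ac = 0.0
    for k in range(1, n // 2 + 1):
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(y))
        im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(y))
        ac += math.hypot(re, im)
    return ac / (dc + 1e-12)
```

Because the decision statistic is a ratio of spectral terms, a uniform change in input level scales numerator and denominator alike, which is the sense in which a temporal cue of this kind can be level-invariant.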

16.
Two pairs of experiments studied the effects of attention and of unilateral neglect on auditory streaming. The first pair showed that the buildup of auditory streaming in normal participants is greatly reduced or absent when they attend to a competing task in the contralateral ear. It was concluded that the effective buildup of streaming depends on attention. The second pair showed that patients with an attentional deficit toward the left side of space (unilateral neglect) show less stream segregation of tone sequences presented to their left than to their right ears. Streaming in their right ears was similar to that for stimuli presented to either ear of healthy and of brain-damaged controls, who showed no across-ear asymmetry. This result is consistent with an effect of attention on streaming, constrains the neural sites involved, and reveals a qualitative difference between the perception of left- and right-sided sounds by neglect patients. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Here we report that training-associated changes in neural activity can precede behavioral learning. This finding suggests that speech-sound learning occurs at a pre-attentive level which can be measured neurophysiologically (in the absence of a behavioral response) to assess the efficacy of training. Children with biologically based perceptual learning deficits as well as people who wear cochlear implants or hearing aids undergo various forms of auditory training. The effectiveness of auditory training can be difficult to assess using behavioral methods because these populations are communicatively impaired and may have attention and/or cognitive deficits. Based on our findings, if neurophysiological changes are seen during auditory training, then the training method is effectively altering the neural representation of the speech/sounds and changes in behavior are likely to follow.

18.
This study was designed to assess the potential benefits of using spatial auditory warning signals in a simulated driving task. In particular, the authors assessed the possible facilitation of responses (braking or accelerating) to potential emergency driving situations (the rapid approach of a car from the front or from behind) seen through the windshield or the rearview mirror. Across 5 experiments, the authors assessed the efficacy of nonspatial-nonpredictive (neutral), spatially nonpredictive (50% valid), and spatially predictive (80% valid) car horn sounds, as well as symbolic predictive and spatially presented symbolic predictive verbal cues (the words "front" or "back") in directing the participant's visual attention to the relevant direction. The results suggest that spatially predictive semantically meaningful auditory warning signals may provide a particularly effective means of capturing attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
The basic functional organization of the cat primary auditory cortex is discussed as it is revealed by electrophysiological studies of the distribution of elementary receptive field (RF) parameters. RFs of cortical neurons have been shown to vary considerably from neuron to neuron; additionally, specific RF properties vary independently. Furthermore, some of the RF properties are nonhomogeneously distributed across the auditory cortex and can be interpreted as forming "maps" that represent specific stimulus information in a topographic way. Accordingly, the functional organization of the primary auditory cortex is interpreted as a series of superimposed independent parameter maps. The consequences of such a layout for the spatial and temporal coding of pure tones and speech sounds is illustrated and ramifications for the interpretation of far-field event-related potentials are discussed.

20.
Activation of auditory cortex during silent lipreading
Watching a speaker's lips during face-to-face conversation (lipreading) markedly improves speech perception, particularly in noisy conditions. With functional magnetic resonance imaging it was found that these linguistic visual cues are sufficient to activate auditory cortex in normal hearing individuals in the absence of auditory speech sounds. Two further experiments suggest that these auditory cortical areas are not engaged when an individual is viewing nonlinguistic facial movements but appear to be activated by silent meaningless speechlike movements (pseudospeech). This supports psycholinguistic evidence that seen speech influences the perception of heard speech at a prelexical stage.
