Similar Documents
20 similar documents found.
1.
Developmental dyslexia is generally believed to result from impaired linguistic processing rather than from deficits in low-level sensory function. Challenging this view, we studied the perception of non-verbal acoustic stimuli and low-level auditory evoked potentials in dyslexic adults. Compared with matched controls, dyslexics were selectively impaired in tasks (frequency discrimination and binaural unmasking) which rely on decoding neural discharges phase-locked to the fine structure of the stimulus. Furthermore, this ability to use phase-locking was related to reading ability. In addition, the evoked potential reflecting phase-locked discharges was significantly smaller in dyslexics. These results demonstrate a low-level auditory impairment in dyslexia traceable to the brainstem nuclei.

2.
The common assumption that perceptual sensitivities are related to neural representations of sensory stimuli has seldom been directly demonstrated. The authors analyzed the similarity of spike trains evoked by complex sounds in the rat auditory cortex and related cortical responses to performance in an auditory task. Rats initially learned to identify 2 highly different periodic, frequency-modulated sounds and then were tested with increasingly similar sounds. Rats correctly classified most novel sounds; their accuracy was negatively correlated with acoustic similarity. Rats discriminated novel sounds with slower modulation more accurately than sounds with faster modulation. This asymmetry was consistent with similarities in cortical representations of the sounds, demonstrating that perceptual sensitivities to complex sounds can be predicted from the cortical responses they evoke. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
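The core quantity in this kind of analysis is a similarity measure between spike trains. Below is a minimal sketch that smooths each train with an exponential kernel and correlates the results, a common convention; the abstract does not specify the paper's exact metric, and all function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def smooth(spike_times, t_max=1.0, dt=0.001, tau=0.01):
    """Bin a spike train and convolve it with a causal exponential kernel."""
    bins = np.zeros(int(t_max / dt))
    idx = (np.asarray(spike_times) / dt).astype(int)
    bins[idx[idx < len(bins)]] = 1.0
    kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)
    return np.convolve(bins, kernel)[: len(bins)]

def similarity(train_a, train_b):
    """Pearson correlation between smoothed spike trains, in [-1, 1]."""
    a, b = smooth(train_a), smooth(train_b)
    return float(np.corrcoef(a, b)[0, 1])

# Two trains sharing most spike times are rated as highly similar:
print(similarity([0.10, 0.25, 0.40], [0.10, 0.26, 0.40]))
```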

3.
Results from 3 auditory tasks revealed that small changes made in stimuli commonly encountered in everyday life are more easily discriminated than are the same changes made in stimuli not as commonly encountered. The tasks required discrimination of a frequency difference in 1 tone of 6-tone chords or nonchords, discrimination of a duration difference in 1 note of common tunes or nontunes, and discrimination of the deletion of a band of frequencies from speech sounds played forward or backward. Different groups of college-aged listeners served in the different tasks. If future research shows this difference in discriminability to be a general property of commonly encountered stimuli, attributable to a difference in the way they are processed in the nervous system, then discrimination tests of this sort could become useful for assessing whether stimuli have made the transition from one form of processing to the other. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Progress in understanding the auditory processing of complex sounds has been made through coordinated psychophysical, physiological and theoretical studies of periodicity pitch and combination tones. Periodicity pitch is the basis for human perception of musical notes and of the pitch of voiced speech. The mechanism of perception involves harmonic pattern recognition on the complex Fourier frequency spectra generated by auditory frequency analysis. Combination tones are perceptible distortion tones generated within the cochlea by nonlinear interaction of component stimulus tones. Perception of periodicity pitch is quantitatively accounted for by a two-stage process: frequency analysis subject to random errors and significant nonlinearities, followed by a pattern recognizer that operates very efficiently to measure the period of musical and speech sounds. The basic characteristic of the first stage is a Gaussian standard error function that quantifies the randomness in aural estimation of the frequencies of component tones in a complex tone stimulus. Efficient aural measurement of neural spike intervals from the eighth nerve provides a physiological account for the psychophysical characteristic of aural frequency analysis with complex sounds. Although cochlear filtering is an essential stage in auditory frequency analysis, neural time following, rather than details of the filter characteristics, is the decisive factor in determining the precision of aural frequency measurement. It is likely that peripheral auditory coding is similar for sounds in periodicity pitch and in speech perception, although the 'second stage' representing central processing would differ.
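The two-stage model can be made concrete in a few lines: stage one jitters each partial's frequency with Gaussian error, and stage two fits consecutive harmonic numbers to the noisy partials and returns the best-fitting fundamental. This is a minimal sketch in the spirit of such optimum-processor models, not the authors' implementation; the error size, search limits, and least-squares fit are illustrative assumptions.

```python
import numpy as np

def estimate_pitch(partials_hz, sigma_rel=0.01, max_lowest_harmonic=20, seed=0):
    """Two-stage periodicity-pitch sketch.
    Stage 1: perturb each partial by Gaussian error proportional to its
    frequency (the 'standard error function' of aural frequency analysis).
    Stage 2: a pattern recognizer fits consecutive harmonic numbers
    n, n+1, ... to the noisy partials by least squares and returns the
    fundamental with the smallest relative misfit."""
    rng = np.random.default_rng(seed)
    x = np.sort(partials_hz) * (1.0 + sigma_rel * rng.standard_normal(len(partials_hz)))
    best_f0, best_err = None, np.inf
    for n in range(1, max_lowest_harmonic + 1):
        h = np.arange(n, n + len(x))             # consecutive harmonic numbers
        f0 = np.dot(h, x) / np.dot(h, h)         # least-squares fundamental
        err = np.mean(((x - h * f0) / x) ** 2)   # relative misfit
        if err < best_err:
            best_f0, best_err = f0, err
    return best_f0

# A "missing fundamental" complex: harmonics 3, 4, 5 of 200 Hz. The
# recognizer recovers a pitch near 200 Hz even though no 200-Hz
# component is present in the stimulus.
print(round(estimate_pitch(np.array([600.0, 800.0, 1000.0])), 1))
```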

5.
Although copious research has investigated the role of phonology in reading, little research has investigated the precise nature of the entailed speech representations. The present study examined the similarity of "inner speech" in reading to overt speech. Two lexical decision experiments (in which participants gave speeded word/nonword classifications to letter strings) assessed the effects of implicit variations in vowel and word-initial consonant length. Responses were generally slower for phonetically long stimuli than for phonetically short stimuli, despite equal orthographic lengths. Moreover, the phonetic length effects displayed principled interactions with common factors known to affect lexical decisions, such as word frequency and the similarity of words to nonwords. Both phonetic length effects were stronger among slower readers. The data suggest that acoustic representations activated in silent reading are best characterized as inner speech rather than as abstract phonological codes.

6.
School-age children with specific language impairment (SLI) and age-matched controls were tested for immediate recall of digits presented visually, auditorily, or audiovisually. Recall tasks compared speaking and pointing response modalities. Each participant was tested at a level that was consistent with her or his auditory short-term memory span. Traditional effects of primacy, recency, and modality (an auditory recall advantage) were obtained for both groups. The groups performed similarly when audiovisual stimuli were paired with a spoken response, but children with SLI had smaller recency effects together with an unusually poor recall when visually presented items were paired with a pointing response. Such results cannot be explained on the basis of an auditory or speech deficit per se, and suggest that children with SLI have difficulty either retaining or using phonological codes, or both, during tasks that require multiple mental operations. Capacity limitations, involving the rapid decay of phonological representations and/or performance limitations related to the use of less demanding and less effective coding and retrieval strategies, could have contributed to the working memory deficiencies in the children with SLI.

7.
Irrelevant auditory stimuli disrupt immediate serial recall. In the equipotentiality hypothesis, D. M. Jones and W. J. Macken (see record 1993-20312-001) made the controversial prediction that speech and tones have an equivalent disruptive effect. In the present study, 5 experiments tested their hypothesis. Experiments 1–4 showed that meaningful speech disrupts recall more than do tones. Experiments 3 and 4 provided some evidence that meaningful speech disrupts recall more than does meaningless speech, and Experiment 4 showed that even meaningless speech disrupts recall more than do tones. Using slightly different experimental procedures, Experiment 5 showed that letters disrupt recall more than do tones. Implications of these results for a number of theories of primary memory and the irrelevant speech effect are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Three experiments investigate psychological, methodological, and domain-specific characteristics of loudness change in response to sounds that continuously increase in intensity (up-ramps), relative to sounds that decrease (down-ramps). Timbre (vowel, violin), layer (monotone, chord), and duration (1.8 s, 3.6 s) were manipulated in Experiment 1. Participants judged global loudness change between pairs of spectrally identical up-ramps and down-ramps. It was hypothesized that loudness change is overestimated in up-ramps, relative to down-ramps, using simple speech and musical stimuli. The hypothesis was supported and the proportion of up-ramp overestimation increased with stimulus duration. Experiment 2 investigated recency and a bias for end-levels by presenting paired dynamic stimuli with equivalent end-levels and steady-state controls. Experiment 3 used single stimulus presentations, removing artifacts associated with paired stimuli. Perceptual overestimation of loudness change is influenced by (1) intensity region of the dynamic stimulus; (2) differences in stimulus end-level; (3) order in which paired items are presented; and (4) duration of each item. When methodological artifacts are controlled, overestimation of loudness change in response to up-ramps remains. The relative influence of cognitive and sensory mechanisms is discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
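The key stimulus property here, up-ramps and down-ramps that are time reversals of one another and therefore spectrally identical, is easy to sketch. The level range, duration, and 440-Hz sine carrier below are illustrative assumptions (the experiments used vowel and violin timbres).

```python
import numpy as np

def ramp_stimulus(direction="up", dur_s=1.8, fs=44_100,
                  level_lo_db=-30.0, level_hi_db=0.0, carrier_hz=440.0):
    """Sine carrier whose level moves linearly in dB between level_lo_db
    and level_hi_db. A down-ramp is the time-reversed envelope of the
    up-ramp, so the paired stimuli are spectrally identical."""
    t = np.arange(int(dur_s * fs)) / fs
    level_db = np.linspace(level_lo_db, level_hi_db, t.size)
    if direction == "down":
        level_db = level_db[::-1]
    return 10.0 ** (level_db / 20.0) * np.sin(2 * np.pi * carrier_hz * t)

up, down = ramp_stimulus("up"), ramp_stimulus("down")
n = int(0.1 * 44_100)  # RMS over the first 100 ms of each stimulus:
print(np.sqrt(np.mean(up[:n] ** 2)), np.sqrt(np.mean(down[:n] ** 2)))
# The up-ramp starts ~30 dB softer than the down-ramp, as intended.
```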

9.
Participants made speeded target-nontarget responses to singly presented auditory stimuli in 2 tasks. In within-dimension conditions, participants listened for either of 2 target features taken from the same dimension; in between-dimensions conditions, the target features were taken from different dimensions. Judgments were based on the presence or absence of either target feature. Speech sounds, defined relative to sound identity and locale, were used in Experiment 1, whereas tones, comprising pitch and locale components, were used in Experiments 2 and 3. In all cases, participants performed better when the target features were taken from the same dimension than when they were taken from different dimensions. Data suggest that the auditory and visual systems exhibit the same higher level processing constraints. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɒg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously; Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that sufficient processing time is needed for the auditory stimulus to access its associated meaning and modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
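Sensitivity and response criterion in a yes/no detection task are standard signal detection quantities. Below is a minimal sketch of the textbook computation, with a log-linear correction for extreme proportions; the trial counts in the usage line are made up for illustration, and this is not the authors' analysis code.

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, fas, crs):
    """Compute sensitivity d' = z(H) - z(F) and criterion
    c = -(z(H) + z(F)) / 2 from trial counts, using a log-linear
    correction so rates of 0 or 1 do not yield infinite z-scores."""
    h = (hits + 0.5) / (hits + misses + 1.0)  # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1.0)       # corrected false-alarm rate
    z = NormalDist().inv_cdf
    return z(h) - z(f), -(z(h) + z(f)) / 2.0

# Hypothetical counts for one participant in one priming condition:
d, c = dprime_criterion(hits=42, misses=8, fas=12, crs=38)
print(f"d' = {d:.2f}, c = {c:.2f}")
```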

11.
12.
Three same–different matching experiments used strings of letters as stimuli to explore the influence of orthography, familiarity, and lexical meaningfulness on visual code formation for words. In Experiment 1, larger effects of lexical meaningfulness occurred under conditions of visual stimulus degradation than when stimuli were bright and easy to resolve. Experiments 2 and 3 included diagnostics of two strategies (selective or divided attention) and showed that either could occur, but that only one produced the interaction. In the selective, or visual, pattern of results observed in Experiment 2, lexicality interacted with the visual quality of the stimulus display, and there was no influence of phonological confusability between the strings being matched. In the divided attention, or multicode, pattern observed in Experiment 3, lexicality and visual quality produced additive effects, while phonological confusability interfered with "different" decisions. This suggests that decisions were based on multiple, potentially redundant codes (visual and phonological) and that in such a situation the facilitatory influence of lexicality on visual code formation does not occur. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Phoneme identification with audiovisually discrepant stimuli is influenced by information in the visual signal (the McGurk effect). Additionally, lexical status affects identification of auditorily presented phonemes. The present study tested for lexical influences on the McGurk effect. Participants identified phonemes in audiovisually discrepant stimuli in which the lexical status of the auditory component and of a visually influenced percept was independently varied. Visually influenced (McGurk) responses were more frequent when they formed a word and when the auditory signal was a nonword (Experiment 1). Lexical effects were larger for slow than for fast responses (Experiment 2), as with auditory speech, and were replicated with stimuli matched on physical properties (Experiment 3). These results are consistent with models in which lexical processing of speech is modality independent. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
The P300 component of the event-related brain potential (ERP) was elicited with auditory stimuli in two different tasks. The oddball paradigm presented both target and standard stimuli; the single-stimulus paradigm presented a target but no standard tone stimulus, with the inter-target interval the same as that for the oddball condition. Experiment 1 manipulated stimulus intensity and Experiment 2 manipulated tone stimulus frequency, with the relative target probability maintained at 0.20 for both tasks. P300 amplitude and latency were highly similar for the oddball and single-stimulus procedures in both experiments across independent variables. The findings suggest that the single-stimulus paradigm may prove useful in experimental and applied contexts that require very simple ERP task conditions.
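The two paradigms differ only in whether non-target positions carry a standard tone or silence, which is what keeps the inter-target intervals matched. A minimal sketch of the sequence logic; the trial count and seed are arbitrary, and this illustrates the design rather than the authors' software.

```python
import random

def oddball_sequence(n_trials=200, p_target=0.20, seed=1):
    """Oddball paradigm: every trial presents a tone, a target with
    probability .20 and a standard otherwise."""
    rng = random.Random(seed)
    return ["target" if rng.random() < p_target else "standard"
            for _ in range(n_trials)]

def single_stimulus_sequence(oddball):
    """Single-stimulus paradigm: identical target positions, but the
    standards are replaced by silent intervals, so the inter-target
    interval distribution exactly matches the oddball condition."""
    return [trial if trial == "target" else "silence" for trial in oddball]

odd = oddball_sequence()
print(odd[:12])
print(single_stimulus_sequence(odd)[:12])
```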

15.
This study explored the relation between phonological short-term memory and auditory-sensory processing in 7–9-year-old children. Twenty-four participants performed a pseudoword repetition test. The mismatch-negativity (MMN) component of auditory event-related brain potentials was obtained from the 9 participants with the highest and the 9 participants with the lowest scores on the test. The MMN indexes short-term auditory-sensory memory, including auditory-sensory representations for speech. It was recorded in response to just-perceptible /baga/–/baka/ bisyllabic contrasts and easily discriminable 1000/1100-Hz tone contrasts, with interstimulus intervals of 350 and 2,000 ms. The high and low repeaters differed significantly in MMN amplitude to speech stimuli at the shorter interstimulus interval. Thus, the accuracy of auditory-sensory processing seems to affect phonological short-term representations in school-age children and therefore may play a role in vocabulary development. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
17.
The purpose of the present study was to determine whether preweanling rats respond differentially to the intensity and energy source of a stimulus. Previous studies have suggested that infants process compound stimuli based on net stimulus intensity regardless of the energy source of the compound's elements, but more direct tests of the infant's response to the stimulus attributes of intensity and energy source have been needed. This response was tested with auditory and visual stimuli that had been equated in terms of perceived intensity (low and high; Experiment 1). Intensity level and energy source of the stimuli were then varied independently within nonassociative (Experiment 2) and associative (Experiment 3) procedures. The overall results indicate that stimuli of low perceived intensity were processed in terms of their intensity, whereas high-intensity stimuli were processed on the basis of energy source. These results are relevant to contemporary issues of cognitive development in humans and animals.

18.
Four experiments examined transfer of noncorresponding spatial stimulus-response associations to an auditory Simon task for which stimulus location was irrelevant. Experiment 1 established that, for a horizontal auditory Simon task, transfer of spatial associations occurs after 300 trials of practice with an incompatible mapping of auditory stimuli to keypress responses. Experiments 2-4 examined transfer effects within the auditory modality when the stimuli and responses were varied along vertical and horizontal dimensions. Transfer occurred when the stimuli and responses were arrayed along the same dimension in practice and transfer but not when they were arrayed along orthogonal dimensions. These findings indicate that prior task-defined associations have less influence on the auditory Simon effect than on the visual Simon effect, possibly because of the stronger tendency for an auditory stimulus to activate its corresponding response. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
[Correction Notice: An erratum for this article was reported in Vol 38(1) of Canadian Journal of Psychology / Revue Canadienne de Psychologie (see record 2007-03769-001). A programming error occurred in preparing Figure 6. The correct figure looks quite similar to Figure 7, except that the upper eight components are considerably smaller in Figure 6 than in Figure 7. Values quoted in the text remain unchanged.] Processing in the peripheral auditory system of the human ear profoundly alters the characteristics of all acoustic signals impinging on the ear. Some of the first-order properties of this peripheral processing are now reasonably well understood: the periphery behaves as a heavily overlapped set of filters, with increasingly broader bandwidths at high frequencies, which results in good spectral resolution at low frequencies and good temporal resolution at high frequencies. Results of an examination of speech and music by this system are discussed. An attempt is then made to synthesize several papers on auditory and visual psychophysics, and to speculate on auditory-signal processing analogous to visual-color processing. Several simplified auditory representations of speech are proposed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
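The claim that auditory-filter bandwidths broaden with center frequency is commonly quantified today by the equivalent rectangular bandwidth (ERB). The Glasberg and Moore (1990) approximation below postdates this article and is offered only as an illustrative sketch of the trend it describes.

```python
def erb_hz(f_hz: float) -> float:
    """Equivalent rectangular bandwidth of the auditory filter centered
    at f_hz, per the Glasberg & Moore (1990) approximation:
    ERB = 24.7 * (4.37 * f / 1000 + 1)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# Bandwidth grows roughly linearly with center frequency above ~500 Hz,
# giving fine spectral resolution at low frequencies and coarse (but
# temporally fast) resolution at high frequencies:
for f in (100, 500, 1000, 4000, 8000):
    print(f"{f:5d} Hz -> ERB ~ {erb_hz(f):6.1f} Hz")
```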

20.
Auditory nerve single-unit population studies have demonstrated that phase-locking plays a dominant role in the neural encoding of the spectrum of speech sounds. Since the scalp-recorded human frequency-following response (FFR) reflects synchronous, phase-locked activity in a population of neurons in the rostral auditory brainstem, it was reasoned that the human FFR might preserve information about certain acoustic features of speech sounds. FFRs to three different two-tone approximations of vowels (symbols, see text) were obtained from 10 normal-hearing human adults at 85, 75, 65 and 55 dB nHL. Spectrum analyses of the FFRs revealed distinct peaks at frequencies corresponding to the first and second formants across all levels, suggesting that phase-locked activity among two distinct populations of neurons is indeed preserved in the FFR. Also, the FFR spectrum for the vowels (symbols, see text) revealed a robust component at the 2F1-F2 frequency, suggesting that the human FFR contains a neural representation of cochlear nonlinearity. Finally, comparison of FFRs to the vowel approximations and to the individual components at F1 and F2 revealed effects that may be suggestive of two-tone synchrony suppression and/or lateral inhibition. These results suggest that the scalp-recorded FFR may be used to evaluate not only neural encoding of speech sounds but also processes associated with cochlear nonlinearity.
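A 2F1-F2 component is the signature of a compressive nonlinearity acting on a two-tone input. A minimal sketch: pass a two-tone vowel approximation through a memoryless cubic nonlinearity and inspect the spectrum. The formant values (730 and 1090 Hz, roughly /a/) and the cubic coefficient are illustrative assumptions, not the study's stimuli.

```python
import numpy as np

fs = 10_000                              # sampling rate (Hz), illustrative
t = np.arange(0, 0.5, 1.0 / fs)          # 500 ms of signal
f1, f2 = 730.0, 1090.0                   # two-tone vowel approximation (F1, F2)
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Memoryless cubic nonlinearity as a stand-in for cochlear distortion;
# the x**3 term creates energy at 2*F1 - F2 (here 370 Hz), among others.
y = x + 0.1 * x**3

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1.0 / fs)
for label, f in (("F1", f1), ("F2", f2), ("2F1-F2", 2 * f1 - f2)):
    k = int(np.argmin(np.abs(freqs - f)))
    print(f"{label:7s} {f:7.1f} Hz -> amplitude {spectrum[k]:.4f}")
```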
