Similar Literature
20 similar documents retrieved.
1.
Investigated whether an experimentally imposed 80 dB(A) noise differentially affected the psychomotor, serial memory for words and pictures, incidental memory, visual recall, paired associates, perceptual learning, and coding performance of 109 predominantly Black, low socioeconomic status (SES) 5-yr-old children attending daycare centers near and far from elevated subways. Ss were administered psychomotor tasks at the beginning and end of a noisy or quiet play condition and cognitive and perceptual tasks during a noisy or quiet test condition. Factorial analyses revealed that only psychomotor performance was significantly impaired by the 80 dB(A) noise. A significant interaction between daycare center environmental noise level and test condition was found on the coding task, with Ss from relatively noisy daycare centers performing best under noisy test conditions and Ss from quieter centers performing best under quiet conditions. Findings suggest that noise does not have a universal effect on children's performance, but that the effects vary as a function of the nature of the task and previous exposure to noise. (39 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Reviews studies of basic auditory processes (absolute and differential thresholds, masking) in infants. It is argued that autonomic and physiological responses to sounds may measure "attentional" thresholds rather than thresholds of hearing. A behavioral technique for threshold assessment involving reinforcement of a head turn to a sound source is described and used to determine thresholds for sounds in quiet and in background noise. (French abstract) (84 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
The ability to integrate information from different sensory systems is a fundamental characteristic of the brain. Because different bits of information are derived from different sensory channels, their synthesis markedly enhances the detection and identification of external stimuli. The neural substrate for such "multisensory" integration is provided by neurons that receive convergent input from two or more sensory modalities. Many such multisensory neurons are found in the superior colliculus (SC), a midbrain structure that plays a significant role in overt attentive and orientation behaviors. The various principles governing the integration of visual, auditory, and somatosensory inputs in SC neurons have been explored in several species. Thus far, the evidence suggests a remarkable conservation of integrative features during vertebrate evolution. One of the most robust of these principles is based on spatial relationships: a striking enhancement in activity is induced in a multisensory neuron when two different sensory stimuli (e.g., visual and auditory) are in spatial concordance, whereas a profound response depression can be induced when these cues are spatially discordant. The most extensive physiological observations have been made in cat, and in this species the same principles that have been shown to govern multisensory integration at the level of the individual SC neuron have also been shown to govern overt attentive and orientation responses to multisensory stimuli. Most surprising, however, is the critical role played by association (i.e., anterior ectosylvian) cortex in facilitating these midbrain processes. In the absence of the modulating corticotectal influences, multisensory SC neurons in cat are unable to integrate the different sensory cues converging upon them in an adult-like fashion, and are unable to mediate overt multisensory behaviors. This situation appears quite similar to that observed during early postnatal life. When multisensory SC neurons first appear, they are able to respond to multiple sensory inputs but are unable to synthesize these inputs to significantly enhance or degrade their responses. During ontogeny, individual multisensory neurons develop this capacity abruptly, but at very different ages, until the mature population condition is reached after several postnatal months. It appears likely that the abrupt onset of this capacity in any individual SC neuron reflects the maturation of inputs from anterior ectosylvian cortex. Presumably, the functional coupling of cortex with an individual SC neuron is essential to initiate and maintain that neuron's capability for multisensory integration throughout its life.

4.
This report describes research with 128 kindergarten children, the purpose of which was to assess the stability of verbal behavior and the relationship of verbal and nonverbal abilities and self-concept to talkativeness in the classroom. The children were divided into verbal and quiet groups on the basis of teacher rankings in the fall of kindergarten. Rankings in the spring term indicated that about one third of the quiet children became more verbal, thus making for a subdivision of the quiet children into the "reticent" group, who remained quiet, and the "mixed" group, who became more talkative. This distinction proved important because reticent children obtained lower parental ratings of communication skills at home and lower scores on a variety of language tests administered in Grade 1 than did verbal children; the mixed group obtained intermediate scores. No differences were observed among reticent, mixed, and verbal children on a general measure of self-concept. These findings are discussed in light of the literatures on shyness and classroom discourse. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
The current study addressed the question of whether audiovisual (AV) speech can improve speech perception in older and younger adults in a noisy environment. Event-related potentials (ERPs) were recorded to investigate age-related differences in the processes underlying AV speech perception. Participants performed an object categorization task in three conditions, namely auditory-only (A), visual-only (V), and AV speech. Both age groups revealed an equivalent behavioral AV speech benefit over unisensory trials. ERP analyses revealed an amplitude reduction of the auditory P1 and N1 on AV speech trials relative to the summed unisensory (A + V) response in both age groups. These amplitude reductions are interpreted as an indication of multisensory efficiency, as fewer neural resources were recruited to achieve better performance. Of interest, the observed P1 amplitude reduction was larger in older adults. Younger and older adults also showed an earlier auditory N1 on AV speech trials relative to A and A + V trials, an effect that was again greater in the older adults. The degree of multisensory latency shift was predicted by basic auditory functioning (i.e., higher hearing thresholds were associated with larger latency shifts) in both age groups. Together, the results show that AV speech processing is not only intact in older adults, but that the facilitation of neural responses occurs earlier and to a greater extent in older adults than in younger adults. Thus, older adults appear to benefit more from additional visual speech cues than younger adults, possibly to compensate for more impoverished unisensory inputs resulting from sensory aging. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
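A brief note on the comparison used in this abstract: contrasting the multisensory ERP with the sum of the unisensory ERPs is the standard additive-model test of audiovisual interaction. The sketch below is our own shorthand for that test (the symbol MSI and the subscripted ERP notation are not the authors'):

% Additive-model test of audiovisual interaction at latency t
% (generic formulation; notation is ours, not the authors').
\[
  \mathrm{MSI}(t) \;=\; \mathrm{ERP}_{AV}(t) \;-\; \bigl[\mathrm{ERP}_{A}(t) + \mathrm{ERP}_{V}(t)\bigr]
\]
% A reduced (sub-additive) auditory P1/N1 on AV speech trials corresponds to MSI(t) < 0
% at those peak latencies, which the authors interpret as multisensory efficiency.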

6.
This study investigated multisensory interactions in the perception of auditory and visual motion. When auditory and visual apparent motion streams are presented concurrently in opposite directions, participants often fail to discriminate the direction of motion of the auditory stream, whereas perception of the visual stream is unaffected by the direction of auditory motion (Experiment 1). This asymmetry persists even when the perceived quality of apparent motion is equated for the 2 modalities (Experiment 2). Subsequently, it was found that this visual modulation of auditory motion is caused by an illusory reversal in the perceived direction of sounds (Experiment 3). This "dynamic capture" effect occurs over and above ventriloquism among static events (Experiments 4 and 5), and it generalizes to continuous motion displays (Experiment 6). These data are discussed in light of related multisensory phenomena and their support for a "modality appropriateness" interpretation of multisensory integration in motion perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially-nonpredictive visual, auditory, or audiovisual cues in capturing participants' spatial attention towards a location where to-be-remembered visual stimuli were or were not presented (cued/uncued trials, respectively). The results suggest that the effect of peripheral visual cues in biasing the access of information into VSWM depends on the size of the attentional focus, while auditory cues had no direct effect in biasing VSWM. Finally, spatially congruent multisensory cues showed an enlarged attentional effect in VSWM as compared to unimodal visual cues, as a likely consequence of multisensory integration. This latter result sheds new light on the interplay between spatial attention and VSWM, pointing to the special role exerted by multisensory (audiovisual) cues. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

8.
The effect of fast (syllabic) compression with overshoot reduction was studied in moderately hearing-impaired and in severely hearing-impaired listeners in quiet and in noisy situations. A test battery of everyday masking noises was selected using multidimensional scaling techniques. Four relevant noises were selected: multi-talker babble, noise from an industrial plant, noise from a printing office, and a city-noise background. The speech measurements show that only selected patients benefit from syllabic compression, i.e., listeners with a poor speech discrimination score. The effect in noisy surroundings was tested at the critical signal-to-noise ratio of each patient, showing whether or not they benefited from compression in the most critical condition. It turns out that the effect depends largely on the speech discrimination score and the modulation of the noise signal. When the speech discrimination score is good, compression tends to impair the results. When the speech discrimination score is poor, compression helps if the noise is modulated.

9.
Multisensory depression is a fundamental index of multisensory integration in superior colliculus (SC) neurons. It is initiated when one sensory stimulus (auditory) located outside its modality-specific receptive field degrades or eliminates the neuron's responses to another sensory stimulus (visual) presented within its modality-specific receptive field. The present experiments demonstrate that the capacity of SC neurons to engage in multisensory depression is strongly dependent on influences from two cortical areas (the anterior ectosylvian and rostral lateral suprasylvian sulci). When these cortices are deactivated, the ability of SC neurons to synthesize visual-auditory inputs in this way is compromised; multisensory responses are disinhibited, becoming more vigorous and in some cases indistinguishable from responses to the visual stimulus alone. Although obtaining a more robust multisensory SC response when cortex is nonfunctional than when it is functional may seem paradoxical, these data may help explain previous observations that the loss of these cortical influences permits visual orientation behavior in the presence of a normally disruptive auditory stimulus.

10.
Operant conditioning procedures were used to measure the effects of bilateral olivocochlear lesions on the cat's discrimination thresholds for changes in the second formant frequency (ΔF2) of the vowel /ε/. Three cats were tested with the formant discrimination task under quiet conditions and in the presence of continuous broadband noise at signal-to-noise ratios (S/Ns) of 23, 13, and 3 dB. In quiet, vowel levels of 50 and 70 dB produced average ΔF2s of 42 and 47 Hz, respectively, and these thresholds did not change significantly in low levels of background noise (S/Ns = 23 and 13 dB). Average ΔF2s increased to 94 and 97 Hz for vowel levels of 50 and 70 dB in the loudest level of background noise (S/N = 3 dB). Average ΔF2 thresholds in quiet and in lower noise levels were only slightly affected when the olivocochlear bundle was lesioned by making bilateral cuts into the floor of the IVth ventricle. In contrast, post-lesion ΔF2 thresholds in the highest noise level were significantly larger than pre-lesion values; the most severely affected subject showed post-lesion discrimination thresholds well over 200 Hz for both 50 and 70 dB vowels. These results suggest that olivocochlear feedback may enhance speech processing in high levels of ambient noise.

11.
Bonobos (Pan paniscus; n = 4), chimpanzees (Pan troglodytes; n = 12), gorillas (Gorilla gorilla; n = 8), and orangutans (Pongo pygmaeus; n = 6) were presented with 2 cups (1 baited) and given visual or auditory information about their contents. Visual information consisted of letting subjects look inside the cups. Auditory information consisted of shaking the cup so that the baited cup produced a rattling sound. Subjects correctly selected the baited cup both when they saw and when they heard the food. Nine individuals were above chance in both visual and auditory conditions. More important, subjects as a group selected the baited cup when only the empty cup was either shown or shaken, which means that subjects chose correctly without having seen or heard the food (i.e., inference by exclusion). Control tests showed that subjects were not more attracted to noisy cups, did not avoid shaken noiseless cups, and did not learn to use auditory information as a cue during the study. It is concluded that subjects understood that the food caused the noise, not simply that the noise was associated with the food. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
We studied the relationship between auditory activity in the midbrain and selective phonotaxis in females of the treefrog, Pseudacris crucifer. Gravid females were tested in two-stimulus playback tests using synthetic advertisement calls of different frequencies (2600 versus 2875 Hz; 2800 versus 3500 Hz; 2600 versus 3500 Hz). Tests were conducted with and without a background of synthesized noise, which was filtered to resemble the spectrum of a chorus of spring peepers. There were no significant preferences for calls of any frequency in the absence of background noise. With background noise, females preferred calls of 3500 Hz to those of 2600 Hz. Multi-unit recordings of neural responses to synthetic sounds were made from the torus semicircularis of the same females following the tests of phonotaxis. We measured auditory threshold at 25 frequencies (1800-4200 Hz) as well as the magnitude of the neural response when stimulus amplitude was held constant and frequency was varied. This procedure yielded isointensity response contours, which we obtained at six amplitudes in the absence of noise and at the stimulus amplitude used during the phonotaxis tests with background noise. Individual differences in audiograms and isointensity responses were poorly correlated with behavioural data except for the test of 2600 Hz versus 3500 Hz calls in noise. The shape of the neural response contours changed with stimulus amplitude and in the presence of the simulated frog chorus. At 85 dB sound pressure level (SPL), the level at which females were tested, the contours of females were quite flat. The contours were more peaked at lower SPLs as well as during the broadcast of chorus noise and white noise at an equivalent spectrum level (45-46 dB/Hz). Peaks in the isointensity response plots of most females occurred at stimulus frequencies ranging from 3200 to 3400 Hz, frequencies close to the median best excitatory frequency (BEF) of 3357 Hz but higher than the mean of the mid-frequency of the male advertisement call (3011 Hz). Addition of background noise may cause a shift in the neural response-intensity level functions. Our results highlight the well-known nonlinearity of the auditory system and the danger inherent in focusing solely on threshold measures of auditory sensitivity when studying the proximate basis of female choice. The results also show an unexpected effect of the natural and noisy acoustic environment on behaviour and responses of the auditory system. Copyright 1998 The Association for the Study of Animal Behaviour.
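For orientation on the two level measures quoted above (an overall level of 85 dB SPL for the stimuli versus a spectrum level of 45-46 dB/Hz for the noise), the standard conversion for flat-spectrum noise is sketched below; the 10-kHz bandwidth in the example is an illustrative assumption, not a value reported in the study.

% Relation between overall level and spectrum level for flat-spectrum noise
% (standard acoustics; the bandwidth below is an assumed, illustrative value).
\[
  L_{\text{overall}}\ (\mathrm{dB\,SPL}) \;=\; N_{0}\ (\mathrm{dB/Hz}) \;+\; 10\log_{10}\!\left(\frac{\Delta f}{1\,\mathrm{Hz}}\right)
\]
% Example: a spectrum level of 45 dB/Hz spread over an assumed 10-kHz band gives
% 45 + 10*log10(10000) = 85 dB SPL overall.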

13.
Evaluated the influence of physical properties of sensory stimuli (visual intensity, direction, and velocity; auditory intensity and location) on sensory activity and multisensory integration of superior colliculus (SC) neurons in awake, behaving primates. Two male monkeys were trained to fixate a central visual fixation point while visual and/or auditory stimuli were presented in the periphery. Visual stimuli were always presented within the contralateral receptive field of the neuron, whereas auditory stimuli were presented at either ipsi- or contralateral locations. Sixty-six of the 84 SC neurons responsive to these sensory stimuli had stronger responses when the visual and auditory stimuli were combined at contralateral locations than when the auditory stimulus was located on the ipsilateral side. This trend was significant across the population of auditory-responsive neurons. In addition, 31 SC neurons were presented with a battery of tests in which the quality of one stimulus of a pair was systematically manipulated. Eight of these neurons showed preferential responses to stimuli with specific physical properties, and these preferences were not significantly altered when multisensory stimulus combinations were presented. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Acoustic signals are generally encoded in the peripheral auditory system of vertebrates by a duality scheme. For frequency components that fall within the excitatory tuning curve, individual eighth nerve fibers can encode the effective spectral energy by a spike-rate code, while simultaneously preserving the signal waveform periodicity of lower frequency components by phase-locked spike-train discharges. To explore how robust this duality of representation may be in the presence of noise, we recorded the responses of auditory fibers in the eighth nerve of the Tokay gecko to tonal stimuli when masking noise was added simultaneously. We found that their spike-rate functions reached plateau levels fairly rapidly in the presence of noise, so the ability to signal the presence of a tone by a concomitant change in firing rate was quickly lost. On the other hand, their synchronization functions maintained a high degree of phase-locked firings to the tone even in the presence of high-intensity masking noise, thus enabling a robust detection of the tonal signal. Critical ratios (CR) and critical bandwidths showed that in the frequency range where units are able to phase-lock to the tonal periodicity, the CR bands were relatively narrow and the bandwidths were independent of noise level. However, for higher frequency tones, where phase-locking fails and only spike-rate codes apply, the CR bands were much wider and depended upon noise level, so that their ability to filter tones out of a noisy background degraded with increasing noise levels. The greater robustness of phase-locked temporal encoding, as contrasted with spike-rate coding, demonstrates an important advantage of using lower frequency signals for communication in noisy environments.
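As a rough guide to the critical-ratio (CR) measure discussed above, the conventional definition and the bandwidth estimate derived from it are sketched below; the numbers in the example are illustrative and are not taken from this study.

% Conventional critical-ratio computation (textbook definition; example values are
% illustrative, not data from this study).
\[
  \mathrm{CR}\ (\mathrm{dB}) \;=\; L_{\text{tone at masked threshold}}\ (\mathrm{dB\,SPL})
  \;-\; N_{0}\ (\mathrm{dB\,SPL/Hz})
\]
\[
  \text{estimated critical band:}\quad \Delta f \;\approx\; 10^{\mathrm{CR}/10}\ \mathrm{Hz}
\]
% Example: a tone detected at 55 dB SPL in noise with a 30 dB SPL/Hz spectrum level gives
% CR = 25 dB, i.e. an estimated band of roughly 10^{2.5} ≈ 316 Hz.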

15.
Individual differences in objective effects of noise on performance were analyzed with respect to their distribution, temporal stability, and the precision of measurement to be attained. Seventy-two subjects had to memorize sequences of visually presented digits while being exposed to one of three auditory background conditions which were randomly mixed on a trial-by-trial basis: (1) foreign speech; (2) pink noise; and (3) silence. Individual "irrelevant speech effects," operationalized by the difference in recall errors under speech and in silence, were normally distributed over a wide range extending from slight facilitation to severe disruption. When 25 subjects repeated the experiment after four weeks, the individual differences were replicated with a reliability of r_tt = 0.45. Internal consistency, a measure of the precision with which individual effects can be measured in a single session, was moderate (α = 0.55). However, both retest and consistency coefficients are severely attenuated by the use of (sound-minus-silence) difference scores, the reliability of which is bound to be considerably lower than that of the original error scores whenever these are correlated. Given that the original error rates in a specific auditory condition can be determined with reliabilities approaching 0.85, it may be concluded that individual performance decrements due to noise can be reliably measured in the "irrelevant speech" paradigm. Self-report measures of noise susceptibility collected to explore potential sources of the large inter-individual variation exhibited only weak relationships with the objectively measured noise effects: Subjects were quite inaccurate in assessing their individual impairment in the three auditory conditions, and a questionnaire-based measure of general noise sensitivity only accounted for a small portion of the variance in objectively measured performance decrements, although in both cases the predictive relationship was much stronger in female than in male subjects.
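The attenuation the authors describe follows from the textbook formula for the reliability of a difference score. Assuming equal variances for the two component scores, it reduces to the expression below; the correlation value in the example is made up for illustration, not reported in the paper.

% Reliability of a difference score D = X - Y under the simplifying assumption of equal
% variances for X and Y (standard psychometric result; example values are hypothetical).
\[
  r_{DD} \;=\; \frac{\tfrac{1}{2}\,(r_{XX} + r_{YY}) \;-\; r_{XY}}{1 \;-\; r_{XY}}
\]
% Example: if the component error scores each have reliability 0.85 and correlate at an
% assumed 0.70, then r_DD = (0.85 - 0.70) / (1 - 0.70) = 0.50, in the vicinity of the
% retest value of 0.45 reported above.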

16.
OBJECTIVE: To define mechanisms accounting for transient deafness in three children (two siblings, ages 3 and 6, and an unrelated child, age 15) when they become febrile. DESIGN: Audiometric tests (pure-tone audiometry, speech and sentence comprehension), tympanometry, middle ear muscle reflex thresholds, otoacoustic emissions (OAEs), and electrophysiological methods (auditory brain stem responses [ABRs], sensory evoked potentials, peripheral nerve conduction velocities) were used to test the children when they were afebrile and febrile. RESULTS: ABRs, when afebrile, were abnormal, with a profound delay of waves IV-V and absence of waves I-III. The ABR in one of the children, tested when febrile, showed no ABR components. Measures of cochlear receptor function using OAEs were normal in both febrile and afebrile states. Cochlear microphonic potentials were present in the three children, and a summating potential was likely present in two. When afebrile, there was a mild threshold elevation for all frequencies in the 15-yr-old and a mild elevation of thresholds for just low frequencies in the two siblings. Speech comprehension in quiet was normal but impaired in noise. One of the siblings, tested when febrile, had a profound elevation (>80 dB) of pure-tone thresholds, and speech comprehension was absent. Acoustic reflexes subserving the middle ear muscles and the olivocochlear bundle were absent both when febrile and when afebrile. No other peripheral or cranial nerve abnormalities were found in any of the children. Sensory nerve action potentials from the median nerve in one of the children showed no abnormalities on warming of the hand to 39 degrees C. CONCLUSION: These children have an auditory neuropathy manifested by a disorder of auditory nerve function in the presence of normal cochlear outer hair cell functions. They develop a conduction block of the auditory nerves when their core body temperature rises due, most likely, to a demyelinating disorder of the auditory nerve. The auditory neuropathy in the two affected siblings is likely to be inherited as a recessive disorder.

17.
OBJECTIVE: To determine whether there is a relationship between the presumed complexity of auditory processing and the time course of recovery of auditory function in children with a history of otitis media with effusion (OME). DESIGN: Longitudinal testing over a 1-year period following insertion of tympanostomy tubes in clinical and control groups. SUBJECTS: A total of 34 children with a history of OME were tested. Twenty-five were tested both just before the placement of tympanostomy tubes and on up to 3 separate occasions (1 month, 6 months, and 1 year) after the placement of the tubes. With subject attrition, there were 27, 16, and 10 listeners at the 1-month, 6-month, and 1-year tests, respectively. An age-matched control group of 29 children was tested. METHODS: The comodulation masking release (CMR) paradigm was used to measure the ability of the listener to detect a signal in a masking noise background that was either simple (1 amplitude modulation pattern) or more complex (2 amplitude modulation patterns). RESULTS: Children with a history of OME had reduced masking release before and 1 month after insertion of tympanostomy tubes for both the simple and complex CMR tasks. After surgery, the CMR results for the simple task were not significantly different from those of controls by 6 months, but CMR for the complex task remained significantly reduced even 1 year after surgery. CONCLUSION: Our results suggest a slower recovery of auditory function for more complex auditory tasks in children with a history of OME.
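As background on the CMR measure used in this study: comodulation masking release is conventionally expressed as a difference in masked thresholds between a reference masker and a comodulated masker. The sketch below is the generic textbook formulation, not necessarily the exact metric computed by the authors.

% Generic definition of comodulation masking release (CMR); conventional formulation,
% not necessarily the authors' exact computation.
\[
  \mathrm{CMR}\ (\mathrm{dB}) \;=\; T_{\text{reference masker}} \;-\; T_{\text{comodulated masker}}
\]
% T denotes the masked detection threshold of the signal. A positive CMR reflects benefit
% from coherent amplitude modulation across frequency bands; a reduced CMR indicates
% poorer use of that across-frequency envelope information.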

18.
Forty female clerical workers were randomly assigned to a control condition or to 3-hr exposure to low-intensity noise designed to simulate typical open-office noise levels. The simulated open-office noise elevated workers' urinary epinephrine levels, but not their norepinephrine or cortisol levels, and it produced behavioral aftereffects (fewer attempts at unsolvable puzzles) indicative of motivational deficits. Participants were also less likely to make ergonomic, postural adjustments in their computer work station while working under noisy, relative to quiet, conditions. Postural invariance is a risk factor for musculoskeletal disorder. Although participants in the noise condition perceived their work setting as significantly noisier than those working under quiet conditions did, the groups did not differ in perceived stress. Potential health consequences of long-term exposure to low-intensity office noise are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Adults with severe or severe-to-profound hearing losses constitute between 11% and 13.5% of the hearing-impaired population. A detailed investigation of the speech recognition of adults with severe (n = 20) or severe-to-profound (n = 14) hearing loss was conducted at The University of Melbourne. Each participant took part in a series of speech recognition tasks while wearing his or her currently fitted hearing aid(s). The assessments included closed-set tests of consonant recognition and vowel recognition, combined with open-set tests of monosyllabic word recognition and sentence recognition. Sentences were presented in quiet and in noise at +10 dB SNR to replicate an environment more typical of everyday listening conditions. Although the results demonstrated wide variability in performance, some general trends were observed. As expected, vowels were generally well perceived compared with consonants. Monosyllabic word recognition scores for both the adults with a severe hearing impairment (M = 67.2%) and the adults with a severe-to-profound hearing impairment (M = 38.6%) could be predicted from the segmental tests, with an allowance for lexical effects. Scores for sentences presented in quiet showed additional linguistic effects and a significant decrease in performance with the addition of background noise (from 82.9% to 74.1% for adults with a severe hearing loss and from 55.8% to 34.2% for adults with a severe-to-profound hearing loss). Comparisons were made between the participants and a group of adults using a multiple-channel cochlear implant. This comparison indicated that some adults with a severe or severe-to-profound hearing loss may benefit from the use of a cochlear implant. The results of this study support the contention that cochlear implant candidacy should not rely solely on audiometric thresholds.

20.
The main aim of this work was to study the interaction between auditory and kinesthetic stimuli and its influence on motor control. The study was performed on healthy subjects and patients with Parkinson's disease (PD). Thirty-five right-handed volunteers (young participants, age-matched healthy participants, and PD patients) were studied with three different motor tasks (slow cyclic movements, fast cyclic movements, and slow continuous movements) and under the action of kinesthetic stimuli and sounds at different beat rates. The action of kinesthesia was evaluated by comparing real movements with virtual movements (movements imagined but not executed). The fast cyclic task was accelerated by kinesthetic but not by auditory stimuli. The slow cyclic task changed with the beat rate of sounds but not with kinesthetic stimuli. The slow continuous task showed an integrated response to both sensory modalities. These data show that the influence of multisensory integration on movement changes with the motor task and that some motor patterns are modulated by the simultaneous action of auditory and kinesthetic information, a cross-modal integration that differed in PD patients. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
