Similar documents
20 similar documents found (search time: 46 ms)
1.
Twenty subjects made graphic ratings of the perceived laterality of amplitude modulated sounds that were presented through earphones. A 200 Hz modulation frequency was combined with carrier frequencies of 2200 Hz, 3200 Hz, 4200 Hz, and 5200 Hz. The modulator sinusoid was delayed to either ear by temporal intervals ranging from zero to 0.6 ms. A significant interaction of carrier frequency with the linear trend for interaural temporal disparity indicated that the slopes of laterality ratings on temporal disparity decreased with carrier frequency. A significant interaction of carrier frequency with the difference in ratings for the 0.6 ms delays to the two ears indicated that the range in laterality ratings decreased with carrier frequency. The results indicate that the amount of laterality is a decreasing function of carrier frequency for high frequency sounds, which may be a consequence of greater weight being given to zero intensity difference as frequency increases.
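A minimal Python sketch of how this class of stimulus could be synthesized, with the interaural delay applied to the modulator only; the sample rate, duration, and 0.3 ms delay are illustrative choices, not the study's exact parameters.

```python
import numpy as np

fs = 48_000          # sample rate in Hz (assumed)
dur = 0.5            # stimulus duration in s (assumed)
fc = 3200.0          # carrier frequency in Hz, one of the values used in the study
fm = 200.0           # modulation frequency in Hz, as in the study
itd = 0.3e-3         # modulator delay in s (illustrative, within the 0-0.6 ms range)

t = np.arange(int(fs * dur)) / fs
carrier = np.sin(2 * np.pi * fc * t)

# The carrier is identical at the two ears; only the modulating envelope is delayed.
env_left = 0.5 * (1.0 + np.sin(2 * np.pi * fm * t))
env_right = 0.5 * (1.0 + np.sin(2 * np.pi * fm * (t - itd)))

left = env_left * carrier
right = env_right * carrier
stereo = np.column_stack([left, right])   # two-channel signal for earphone presentation
```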

2.
A high-resolution method of spectral analysis, of the class generally called "maximum entropy method," was used in a study of aortic porcine valve closing sounds in 37 patients (ages 19 to 76). Spectra from 27 normal xenografts, implanted from 2 weeks to 61 months previously, were characterized by a dominant frequency peak, F1, at 89 ± 15 Hz (mean ± SD), with a lower amplitude peak, F2, at 154 ± 25 Hz. Eight of nine patients with aortic porcine valve dysfunction were proved surgically to have leaflet degeneration or infection and had either F1 (139 ± 54 Hz) and/or F2 (195 ± 74 Hz) significantly higher than normal (p < .001). In two patients with paravalvar leak but no leaflet abnormality, F1 and F2 were in the normal range. Estimation of F1 and F2 was highly reproducible and was unaffected by duration of implant up to 5 years. Spectral analysis of aortic porcine valve closing sounds by the maximum entropy method may be useful for detection of intrinsic xenograft dysfunction.
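Burg's algorithm is the best-known member of the maximum-entropy family; the sketch below is a generic NumPy implementation that could be used to locate peaks analogous to F1 and F2. It is not the authors' exact analysis, and the model order, sampling rate, and variable names are assumptions.

```python
import numpy as np

def burg_ar(x, order):
    """Burg (maximum-entropy) estimate of AR coefficients a and error power rho."""
    x = np.asarray(x, dtype=float)
    rho = np.dot(x, x) / len(x)          # prediction-error power
    f, b = x[1:].copy(), x[:-1].copy()   # forward / backward prediction errors
    a = np.array([1.0])
    for m in range(order):
        k = -2.0 * np.dot(b, f) / (np.dot(f, f) + np.dot(b, b))   # reflection coefficient
        a = np.r_[a, 0.0] + k * np.r_[a, 0.0][::-1]               # Levinson-type update
        rho *= 1.0 - k * k
        if m < order - 1:                # update the error series for the next stage
            f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
    return a, rho

def ar_spectrum(a, rho, fs, nfft=8192):
    """Power spectral density implied by the fitted AR model."""
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    psd = rho / np.abs(np.fft.rfft(a, nfft)) ** 2
    return freqs, psd

# Hypothetical use on a digitized valve-closing sound segment `s2` sampled at `fs`:
#   a, rho = burg_ar(s2 - s2.mean(), order=12)
#   freqs, psd = ar_spectrum(a, rho, fs)
#   f1 = freqs[np.argmax(psd)]           # dominant peak, analogous to F1
```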

3.
Twenty-five subjects made graphic ratings of the perceived lateral position within the head of sounds presented through headphones. The stimuli were high-frequency pure tones and amplitude modulated sounds. For the amplitude modulated sounds, a 200 Hz modulation frequency was combined with carrier frequencies of 2200 Hz, 3200 Hz, 4200 Hz, and 5200 Hz, which were also the pure tone frequencies. Interaural level differences in the signals ranged from zero to 12 dB. The rate of lateralization was defined as the slope of the linear trend relating laterality ratings to interaural level differences. The rate of lateralization was found to be a decreasing function of frequency. The laterality ratings of amplitude modulated signals were nearly identical to those for pure tones. This result suggests that, for high frequency signals, conflicting temporal information that a source is centered is suppressed in favor of information from level differences that the source is off-center.
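As a toy illustration of the "rate of lateralization" measure defined above, the snippet below fits a straight line to hypothetical mean ratings; the numbers are invented for the example, not data from the study.

```python
import numpy as np

# Hypothetical mean laterality ratings (arbitrary units) at each interaural level
# difference; the values are invented for illustration only.
ild_db = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
ratings = np.array([0.0, 1.8, 3.4, 4.7, 5.6])

# "Rate of lateralization" = slope of the linear trend of rating on ILD.
slope, intercept = np.polyfit(ild_db, ratings, 1)
print(f"rate of lateralization ≈ {slope:.2f} rating units per dB")
```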

4.
Although continuous auscultation has been used during surgery as a monitor of cardiac function for many years, the effect of anesthetics on heart sounds has never been quantified. We determined the root mean squared amplitude and frequency characteristics (peak frequency, spectral edge, and power ratios) of the first (S1) and second (S2) heart sounds in 19 healthy children during induction of anesthesia with halothane. In all patients, halothane decreased the amplitude of S1 (R² = 0.87 ± 0.12) and S2 (R² = 0.66 ± 0.33) and the high-frequency components (>80 Hz) of these sounds. These changes were clearly audible and preceded decreases in heart rate and blood pressure. The spectral edge decreased for S1 in 18 patients (R² = 0.73 ± 0.24) and for S2 in 13 patients (R² = 0.58 ± 0.25). Peak frequency did not change. The rapidity with which myocardial depression and its associated changes in heart sound characteristics occurred confirms that continuous auscultation of heart sounds is a useful clinical tool for hemodynamic monitoring of anesthetized infants and children. Implications: Heart sound characteristics can be used to monitor cardiac function during halothane anesthesia in children. The changes occur rapidly and precede noticeable changes in heart rate and blood pressure.
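A sketch of how RMS amplitude, spectral edge, and peak frequency might be computed from a digitized heart-sound segment; the Welch estimator and the 95% edge criterion are assumptions, since the abstract does not specify the exact definitions used.

```python
import numpy as np
from scipy.signal import welch

def heart_sound_features(x, fs, edge_fraction=0.95):
    """RMS amplitude, spectral edge, and peak frequency of a heart-sound segment."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
    cumulative = np.cumsum(pxx) / np.sum(pxx)
    spectral_edge = f[np.searchsorted(cumulative, edge_fraction)]
    peak_freq = f[np.argmax(pxx)]
    return rms, spectral_edge, peak_freq
```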

5.
Superior laryngeal nerve (SLN) stimulation can activate the brainstem swallowing mechanism to produce a complete swallowing sequence consisting of oropharyngeal, oesophageal and lower oesophageal sphincter (LOS) components. However, little is known of the effect of SLN stimulation (peripheral sensory input from the pharynx) on the characteristics of oesophageal motor activity, especially in the smooth muscle portion. The present study examined the effect of varying stimulus train length and frequency on each of the three components of the reflex. Acute studies were performed in urethane-anaesthetized cats. Oesophageal motility was monitored using conventional manometric techniques, and oropharyngeal swallowing by the mylohyoid electromyogram. SLN stimulus train length (1-10 s) and frequency (5-30 Hz) were varied independently. Increased train length or frequency resulted in (1) an increase in oropharyngeal swallowing and in the incidence of the complete swallowing response, (2) an increase in latency to onset of the oesophageal peristaltic wave, (3) a reduction of the amplitude of the evoked peristaltic contraction in the smooth muscle portion, without altering its velocity, and (4) increased LOS relaxation and increased LOS after-contraction. The LOS contraction was abolished by atropine (100 µg kg⁻¹). Therefore, increased SLN stimulation not only results in excitation of the central swallowing program and the oropharyngeal stage of swallowing, but has major effects on the oesophageal and LOS stages of swallowing. Afferent SLN stimuli can influence the control mechanisms for each stage, inhibiting or exciting the stages in different ways.

6.
The three-dimensional (3-D) properties of the translational vestibulo-ocular reflexes (translational VORs) during lateral and fore-aft oscillations in complete darkness were studied in rhesus monkeys at frequencies between 0.16 and 25 Hz. In addition, constant velocity off-vertical axis rotations extended the frequency range to 0.02 Hz. During lateral motion, horizontal responses were in phase with linear velocity in the frequency range of 2-10 Hz. At both lower and higher frequencies, phase lags were introduced. Torsional response phase changed more than 180 degrees in the tested frequency range such that torsional eye movements, which could be regarded as compensatory to "an apparent roll tilt" at the lowest frequencies, became anticompensatory at all frequencies above approximately 1 Hz. These results suggest two functionally different frequency bandwidths for the translational VORs. In the low-frequency spectrum (<0.5 Hz), horizontal responses compensatory to translation are small and high-pass-filtered whereas torsional response sensitivity is relatively frequency independent. At higher frequencies, however, both horizontal and torsional response sensitivity and phase exhibit a similar frequency dependence, suggesting a common role during head translation. During up-down motion, vertical responses were in phase with translational velocity at 3-5 Hz but phase leads progressively increased for lower frequencies (>90 degrees at frequencies <0.2 Hz). No consistent dependence on static head orientation was observed for the vertical response components during up-down motion and the horizontal and torsional response components during lateral translation. The frequency response characteristics of the translational VORs were fitted by "periphery/brain stem" functions that related the linear acceleration input, transduced by primary otolith afferents, to the velocity signals providing the input to the velocity-to-position neural integrator and the oculomotor plant. The lowest-order, best-fit periphery/brain stem model that approximated the frequency dependence of the data consisted of a second-order transfer function with two alternating poles (at 0.4 and 7.2 Hz) and zeros (at 0.035 and 3.4 Hz). In addition to clear differentiator dynamics at low frequencies (less than approximately 0.5 Hz), there was no frequency bandwidth where the periphery/brain stem function could be approximated by an integrator, as previously suggested. In this scheme, the oculomotor plant dynamics are assumed to perform the necessary high-frequency integration as required by the reflex. The detailed frequency dependence of the data could only be precisely described by higher-order functions with nonminimum phase characteristics that preclude simple filtering of afferent inputs and might be suggestive of distributed spatiotemporal processing of otolith signals in the translational VORs.
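A sketch of the lowest-order periphery/brain stem function described above, written as a zeros-poles-gain system in SciPy so its gain and phase can be evaluated over the tested frequency range; the overall gain is arbitrary here because the abstract does not report it.

```python
import numpy as np
from scipy import signal

# Two zeros (0.035 and 3.4 Hz) alternating with two poles (0.4 and 7.2 Hz);
# the gain of 1.0 is a placeholder, not a value from the study.
zeros = -2 * np.pi * np.array([0.035, 3.4])
poles = -2 * np.pi * np.array([0.4, 7.2])
system = signal.ZerosPolesGain(zeros, poles, 1.0)

freqs = np.logspace(-2, 1.4, 200)                  # roughly 0.01 to 25 Hz
w, mag_db, phase_deg = signal.bode(system, w=2 * np.pi * freqs)
```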

7.
Functional asymmetries were examined in 59 newborns by recording headturns from midline to binaurally equivalent sounds. Results showed that a robust, asymmetric pattern of headturning occurred in most newborns' responses to binaurally presented unfiltered female speech sounds, with increased rightward orientation demonstrated in five replications. Female speech that was modified by attenuation of frequencies above 500 Hz, as well as speech attenuated below 1500 Hz and above 3000 Hz, resulted in a significant rightward bias in headturning. In contrast, female speech attenuated below 3500 Hz, and continuous, repetitive stimuli such as heartbeat sounds and phrases of speech repeated at the rate of heartbeat (termed heartspeech), failed to generate the rightward orientation bias. These results suggest that female speech sounds, particularly low-frequency sounds related to the naturally occurring prosodic characteristics of speech, are a salient class of stimuli for the organization of lateral biases in orienting in newborns.

8.
Single-neuron activity was recorded from the posterior auditory field (PAF) in the cortex of gas-anesthetized cats. Tone bursts and broadband complex sounds were used for auditory stimulation. Responses to frequency-modulated (FM) sounds, in particular, were studied systematically. Linear FM sweeps were centered around the best frequency (BF) of a neuron and had an excursion large enough to cover its whole frequency tuning range. Rate and direction of change of the FM sweeps were varied. In the majority of PAF neurons (75%) the FM response seemed not to be linear, i.e., their best instantaneous frequency (BIF) varied by more than one octave at different FM rates (FMR). When the difference between BIF and BF at each FMR was used as a measure of linearity, it was within one-third octave only at five or fewer FMR in most PAF neurons (74%). The majority of PAF neurons (70%) preferred moderate FM rates (<200 Hz/ms). Fifty-four percent of all neurons in this area showed band-pass behavior with a clear preference in the middle range of FM rates in at least one direction. Overall, neurons with high-pass behavior in both directions made up only a minor portion (22%) of PAF neurons. When both directions of an FM sweep (low-to-high and high-to-low frequency) were tested, 50% of the neurons were clearly selective for one direction, i.e., the response to one FM direction was at least twice as large as that to the other direction. This selectivity was not necessarily present at the preferred FM rate. In general, FM direction selectivity was equally distributed over FM rates tested. The selectivity of PAF neurons for the rate and direction of FM sounds makes these neurons suitable for the detection and analysis of communication sounds, which often contain FM components with a moderate sweep rate in a particular direction.
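For concreteness, the sketch below generates a linear FM sweep centered on a neuron's best frequency at a specified FM rate; the best frequency, excursion, and sample rate are hypothetical values, not parameters taken from the study.

```python
import numpy as np

fs = 200_000        # sample rate in Hz (assumed)
bf = 8000.0         # hypothetical best frequency of a neuron, in Hz
excursion = 4000.0  # sweep spans bf ± excursion/2, covering the tuning range
fm_rate = 100.0     # FM rate in Hz/ms (a "moderate" rate in the abstract's terms)

dur = (excursion / fm_rate) / 1000.0                        # Hz / (Hz/ms) -> ms -> s
t = np.arange(int(fs * dur)) / fs

f_start, f_end = bf - excursion / 2, bf + excursion / 2     # low-to-high direction
inst_freq = f_start + (f_end - f_start) * t / dur           # linear frequency ramp
phase = 2 * np.pi * np.cumsum(inst_freq) / fs               # integrate frequency to get phase
sweep = np.sin(phase)
# Swapping f_start and f_end gives the high-to-low direction.
```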

9.
This study utilized psychophysical data and acoustical measurements of sonar echoes from artificial fluttering targets to develop insights into the information used by FM bats to discriminate the wingbeat rate of flying insects. Fluttering targets were produced by rotating blades that moved towards the bat, and the animal learned to discriminate between two rates of movement, a reference rate (30 or 50 Hz) and a slower, variable rate. Threshold discrimination performance depended on the rotation rate of the reference target, with a difference value of 9 Hz for the reference rate of 30 Hz and 14 Hz for the reference rate of 50 Hz. Control experiments demonstrated that the bats used sonar echoes from the moving targets to perform the discrimination task. Acoustical measurements showed that the moving target produced a Doppler shift in the echo and a concomitant change in the arrival time of each frequency in the linear period FM sweep. The difference in delay between echoes from moving and stationary parts varied linearly with flutter rate and depended on the characteristics of the bat's sonar sounds. Simulations also showed a reduction in average echo bandwidth with increasing flutter rate, which may account for a higher delay discrimination threshold using the 50-Hz reference rate. This work suggests that Doppler-induced changes in echo delays produced by fluttering targets may contribute to the FM bat's perception of flying insect prey.
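A rough worked example of the echo Doppler shift produced by a reflecting surface moving toward the bat; the blade velocity and call frequency are illustrative values, not measurements from the study.

```python
# Speed of sound, blade velocity, and call frequency are illustrative values only.
c = 343.0            # speed of sound in air, m/s
v = 2.0              # hypothetical instantaneous blade velocity toward the bat, m/s
f_call = 50_000.0    # one frequency within the FM sweep, Hz

f_echo = f_call * (c + v) / (c - v)   # two-way Doppler shift from a moving reflector
delta_f = f_echo - f_call             # approximately 2 * v / c * f_call for v << c
print(f"Doppler shift ≈ {delta_f:.0f} Hz at {f_call / 1000:.0f} kHz")
```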

10.
Periodic envelope or amplitude modulations (AM) with periodicities up to several thousand hertz are characteristic for many natural sounds. Throughout the auditory pathway, signal periodicity is evident in neuronal discharges phase-locked to the envelope. In contrast to lower levels of the auditory pathway, cortical neurons do not phase-lock to periodicities above about 100 Hz. Therefore, we investigated alternative coding strategies for high envelope periodicities at the cortical level. Neuronal responses in the primary auditory cortex (AI) of gerbils to tones and AM were analysed. Two groups of stimuli were tested: (1) AM with a carrier frequency set to the unit's best frequency evoked phase-locked responses which were confined to low modulation frequencies (fms) up to about 100 Hz, and (2) AM with a spectrum completely outside the unit's frequency-response range evoked completely different responses that never showed phase-locking but instead a rate tuning to high fms (50 to about 3000 Hz). In contrast to the phase-locked responses, the best fms determined from these latter responses appeared to be topographically distributed, reflecting a periodotopic organization in the AI. Implications of these results for the cortical representation of the perceptual qualities rhythm, roughness and pitch are discussed.
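Vector strength is a standard measure of phase-locking to the modulation period and is shown here only as a generic illustration; the abstract does not state which synchronization metric was used.

```python
import numpy as np

def vector_strength(spike_times, fm):
    """Synchronization of spikes (times in s) to an AM envelope of modulation frequency fm (Hz).

    Returns a value between 0 (no phase locking) and 1 (perfect phase locking).
    """
    phases = 2 * np.pi * fm * np.asarray(spike_times, dtype=float)
    return np.abs(np.mean(np.exp(1j * phases)))
```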

11.
Eight dogs were bilaterally implanted with stainless steel electrodes in the dorsal and ventral hippocampi (DHp, VHp), basolateral amygdala (BLA) and lateral hypothalamus (LH), and with silver spherical electrodes in the frontal cortex (FC). The EEG of these structures was recorded in the state of wakefulness without any stimulation. Rhythmical activity in the theta (4.4 ± 0.05 Hz) and alpha (10.7 ± 0.2 Hz) ranges was revealed in all the dogs. Rhythm in the beta-2 range (22.4 ± 0.1 Hz) was recorded in four, and in the beta-3 range (37.8 ± 0.5 Hz) in only two of the animals. The mean frequency of the theta rhythm recorded in the LH was higher (p < 0.001) than that in the VHp and BLA. The spectral density in the theta range was higher in the VHp than in the other structures (p < 0.01). The same values for the DHp and BLA were higher than those for the LH (p < 0.001) and FC (p < 0.01). The spectral densities in the right DHp and VHp were higher than in the symmetrical left derivations (p < 0.001). The dogs differed in the expression of the specific rhythms and in their frequency and power. These characteristics depended on the degree of emotional excitation and motor activity of the dogs during recording of the electrical activity.
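A sketch of one way to estimate spectral density within the theta and alpha ranges from an EEG trace; the Welch estimator, segment length, and band boundaries are assumptions, not the analysis parameters of the study.

```python
import numpy as np
from scipy.signal import welch

def band_density(eeg, fs, band):
    """Mean power spectral density of an EEG trace within a frequency band (Hz)."""
    f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))   # ~0.5 Hz frequency resolution
    lo, hi = band
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].mean()

theta_band = (4.0, 8.0)     # assumed band limits
alpha_band = (8.0, 13.0)    # assumed band limits
```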

12.
Low-frequency noise emitted into the environment by technical equipment in residential buildings, including equipment of nearby service or production workshops, was measured. The spectra of noise from installations and equipment in residential buildings and shops contained low-frequency (20-125 Hz) sounds and infrasound (below 20 Hz). Their sources were mainly hydrophore pumps, lifts, cooling machinery, central heating, air conditioning and ventilation installations. The analysed noise was only slightly damped by building partitions and penetrated them more easily than higher-frequency noise, usually without exceeding the permitted levels. Noise dominated by low-frequency components is regarded by inhabitants as troublesome and as causing various adverse psychosomatic effects, such as a sensation of pulsation, somnolence, headaches and nausea. The present system of noise assessment leaves low-frequency noise aside and fails to sufficiently protect inhabitants against this nuisance.

13.
This study investigated the effects of decreased audibility produced by high-pass noise masking on cortical event-related potentials (ERPs) N1, N2, and P3 to the speech sounds /ba/ and /da/ presented at 65 and 80 dB SPL. Normal-hearing subjects pressed a button in response to the deviant sound in an oddball paradigm. Broadband masking noise was presented at an intensity sufficient to completely mask the response to the 65-dB SPL speech sounds, and subsequently high-pass filtered at 4000, 2000, 1000, 500, and 250 Hz. With high-pass masking noise, pure-tone behavioral thresholds increased by an average of 38 dB at the high-pass cutoff and by 50 dB one octave above the cutoff frequency. Results show that as the cutoff frequency of the high-pass masker was lowered, ERP latencies to speech sounds increased and amplitudes decreased. The cutoff frequency where these changes first occurred and the rate of the change differed for N1 compared to N2, P3, and the behavioral measures. N1 showed gradual changes as the masker cutoff frequency was lowered. N2, P3, and behavioral measures showed marked changes below a masker cutoff of 2000 Hz. These results indicate that the decreased audibility resulting from the noise masking affects the various ERP components in a differential manner. N1 is related to the presence of audible stimulus energy, being present whether audible stimuli are discriminable or not. In contrast, N2 and P3 were absent when the stimuli were audible but not discriminable (i.e., when the second formant transitions were masked), reflecting stimulus discrimination. These data have implications regarding the effects of decreased audibility on cortical processing of speech sounds and for the study of cortical ERPs in populations with hearing impairment.
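A sketch of how a high-pass-filtered broadband masker of this kind might be generated; the filter type, order, and zero-phase filtering are assumptions, not the study's stimulus-generation procedure.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def highpass_masker(fs, dur, cutoff_hz, order=8, seed=0):
    """Broadband Gaussian noise high-pass filtered at cutoff_hz."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(fs * dur))
    sos = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, noise)

# e.g., maskers with cutoffs matching the conditions above:
# maskers = {fc: highpass_masker(44_100, 1.0, fc) for fc in (4000, 2000, 1000, 500, 250)}
```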

14.
In utero transmission of external and maternal sounds has been studied in pregnant women and in an animal model of the human species, the sheep. These works, especially the most recent ones, suggest that local and environmental factors interfere in such a way that signals are attenuated in a complex manner as frequency increases. The present work investigated whether a plain rubber sphere filled with water could be considered a reliable nonbiological model for describing the characteristics of sound transmission. A sweeping pure tone, presented externally, was measured inside the rubber sphere using a high signal-to-noise ratio experimental hydrophone. A characteristic three-component curve was observed between 100 and 20,000 Hz. In the first component of the curve (low to mid frequencies, between 100 and 1,000 Hz), the intensity of the inside signal remained stable. The second component of the curve comprised higher frequencies, with the inside pressure falling gradually, demonstrating attenuation of the external signal. The third component of the curve appeared above a critical frequency whose value depended on several model and environment parameters. In this component, a series of rapid peaks and drops in the inside high-frequency pressure was observed, indicating the presence of resonance systems. Analyses were carried out on the effects of several acoustical parameters, including the size of the sphere, the location of the hydrophone in the sphere, the distance between the signal source and the hydrophone, the location of the external reference microphone, and the acoustical structure of the environment. These parameters allowed their respective roles in the in utero transmission of external sounds to be defined. These data were then compared with measurements performed in a biological model (ewes) under comparable acoustical settings. The comparisons confirmed the validity of the measurements, suggesting that the model may be useful in studies of sound transmission in utero.

15.
Inspiratory hypoglossal motoneurons (IHMs) mediate contraction of the genioglossus muscle and contribute to the regulation of upper airway patency. Intracellular recordings were obtained from antidromically identified IHMs in anesthetized, vagotomized cats, and IHM responses to electrical activation of superior laryngeal nerve (SLN) afferent fibers at various frequencies and intensities were examined. SLN stimulus frequencies <2 Hz evoked an excitatory-inhibitory postsynaptic potential (EPSP-IPSP) sequence or only an IPSP in most IHMs that did not change in amplitude as the stimulus was maintained. During sustained stimulus frequencies of 5-10 Hz, there was a reduction in the amplitude of SLN-evoked IPSPs with time with variable changes in the EPSP. At stimulus frequencies >25 Hz, the amplitude of EPSPs and IPSPs was reduced over time. At a given stimulus frequency, increasing stimulus intensity enhanced the decay of the SLN-evoked postsynaptic potentials (PSPs). Frequency-dependent attenuation of SLN inputs to IHMs also occurred in newborn kittens. These results suggest that activation of SLN afferents evokes different PSP responses in IHMs depending on the stimulus frequency. At intermediate frequencies, inhibitory inputs are selectively filtered so that excitatory inputs predominate. At higher frequencies there was no discernible SLN-evoked PSP temporally locked to the SLN stimuli. Alterations in SLN-evoked PSPs could play a role in the coordination of genioglossal contraction during respiration, swallowing, and other complex motor acts where laryngeal afferents are activated.

16.
We studied the relationship between auditory activity in the midbrain and selective phonotaxis in females of the treefrog, Pseudacris crucifer. Gravid females were tested in two-stimulus playback tests using synthetic advertisement calls of different frequencies (2600 versus 2875 Hz; 2800 versus 3500 Hz; 2600 versus 3500 Hz). Tests were conducted with and without a background of synthesized noise, which was filtered to resemble the spectrum of a chorus of spring peepers. There were no significant preferences for calls of any frequency in the absence of background noise. With background noise, females preferred calls of 3500 Hz to those of 2600 Hz. Multi-unit recordings of neural responses to synthetic sounds were made from the torus semicircularis of the same females following the tests of phonotaxis. We measured auditory threshold at 25 frequencies (1800-4200 Hz) as well as the magnitude of the neural response when stimulus amplitude was held constant and frequency was varied. This procedure yielded isointensity response contours, which we obtained at six amplitudes in the absence of noise and at the stimulus amplitude used during the phonotaxis tests with background noise. Individual differences in audiograms and isointensity responses were poorly correlated with behavioural data except for the test of 2600 Hz versus 3500 Hz calls in noise. The shape of the neural response contours changed with stimulus amplitude and in the presence of the simulated frog chorus. At 85 dB sound pressure level (SPL), the level at which females were tested, the contours of females were quite flat. The contours were more peaked at lower SPLs as well as during the broadcast of chorus noise and white noise at an equivalent spectrum level (45-46 dB/Hz). Peaks in the isointensity response plots of most females occurred at stimulus frequencies ranging from 3200 to 3400 Hz, frequencies close to the median best excitatory frequency (BEF) of 3357 Hz but higher than the mean of the mid-frequency of the male advertisement call (3011 Hz). Addition of background noise may cause a shift in the neural response-intensity level functions. Our results highlight the well-known nonlinearity of the auditory system and the danger inherent in focusing solely on threshold measures of auditory sensitivity when studying the proximate basis of female choice. The results also show an unexpected effect of the natural and noisy acoustic environment on behaviour and responses of the auditory system.

17.
18.
The effects of frequency differences between the lead and lag stimuli on auditory apparent motion (AAM--the perception of continuous changes in the location of a sound image over time) were examined in two experiments. In experiment 1, three standard frequencies (500, 1000, and 5000 Hz) and three stimulus onset asynchronies (SOAs; 40, 60, and 100 ms) were tested. Both standard frequency and SOA were constant throughout a session. Eleven comparison frequencies were tested within each session, with the range dependent on the standard frequency. At standard frequencies of 500 and 1000 Hz, AAM was heard when the frequencies of the lead and lag stimuli were within 100 Hz of each other. At 5000 Hz, the range of frequencies producing AAM increased with SOA. In experiment 2, two standards (500 and 5000 Hz) were tested with a wider range of SOAs (10-210 ms), varied within a session, and a narrower range of comparison frequencies. Here, comparison frequency was constant throughout a session. At 500 Hz, the SOAs producing AAM did not depend on comparison frequency. At 5000 Hz, the SOAs producing AAM increased with comparison frequency, consistent with Korte's third law of visual apparent motion.

19.
The presence of a binaurally activated nucleus (nucleus laminaris) in the hindbrain of birds suggests that they may be capable of binaural sound localization. In this report, after verification that pigeons were capable of either homing or scanning for the source of a sound, five subjects were also shown to be (a) capable of localizing a single burst of noise whose duration was too brief for homing or scanning; (b) capable of using either binaural time or intensity disparities for the localization of brief tones; (c) capable of localizing a single tone-pip throughout their frequency range of hearing from 125 Hz to 8 kHz, though having considerable difficulty in their midfrequency range (1–2 kHz); and (d) capable of localizing a single brief burst of narrow-band noise even in their midfrequency range. It is concluded that the capacity of pigeons to localize brief sounds and to use binaural disparity cues for doing so is not qualitatively different from that of mammals and, therefore, that the nucleus laminaris or some similar brain-stem nuclei in pigeons are probably analogous in their contribution to sound localization to the superior olivary complex in mammals.

20.
This paper investigates the cues used by the auditory system in the perceptual organization of sequential sounds. In particular, the ability to organize sounds in the absence of spectral cues is studied. In the first experiment listeners were presented with a tone sequence ABA ABA ..., where the fundamental frequency (f0) of tone A was fixed at 100 Hz and the f0 difference between tones A and B varied across trials between 1 and 11 semitones. Three spectral conditions were tested: pure tones, harmonic complexes filtered with a bandpass region between 500 and 2000 Hz, and harmonic complexes filtered with a bandpass region chosen so that only harmonics above the tenth would be passed by the filter, thus severely limiting spectral information. Listeners generally reported that they could segregate tones A and B into two separate perceptual streams when the f0 interval exceeded about four semitones. This was true for all conditions. The second experiment showed that most listeners were better able to recognize a short atonal melody interleaved with random distracting tones when the distracting tones were in an f0 region 11 semitones higher than the melody than when the distracting tones were in the same f0 region. The results were similar for both pure tones and complex tones comprising only high, unresolved harmonics. The results from both experiments show that spectral separation is not a necessary condition for perceptual stream segregation. This suggests that models of stream segregation that are based solely on spectral properties may require some revision.
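For reference, converting the f0 interval between tones A and B from semitones to frequency, with tone A fixed at 100 Hz as in the first experiment:

```python
f0_A = 100.0                      # fundamental of tone A, as in the first experiment
for semitones in (1, 4, 11):
    f0_B = f0_A * 2 ** (semitones / 12)
    print(f"{semitones:2d} semitones above {f0_A:.0f} Hz -> {f0_B:.1f} Hz")
# 4 semitones (the approximate segregation boundary) corresponds to about 126 Hz.
```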
