Similar Documents
20 similar documents were retrieved.
1.
A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic, both vision and echolocation; visual, vision only; echoic, echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of sample and alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.

2.
The authors previously reported that American shad (Alosa sapidissima) can detect sounds from 100 Hz to 180 kHz, with two regions of best sensitivity, one from 200 to 800 Hz and the other from 25 to 150 kHz [Mann et al., Nature 389, 341 (1997)]. These results demonstrated ultrasonic hearing by shad, but thresholds at lower frequencies were potentially masked by background noise in the experimental room. In this study, the thresholds of American shad were determined in a quieter and smaller tank, along with thresholds for detecting simulated echolocation sounds of bottlenosed dolphins. Shad had lower detection thresholds from 0.2 to 0.8 kHz in the quieter, smaller tank than in the previous experiment with its low-frequency background noise, but similar thresholds at ultrasonic frequencies. Shad were also able to detect echolocation clicks with a threshold of 171 dB re 1 microPa peak to peak. If spherical spreading and an absorption coefficient of 0.02 dB/m for dolphin echolocation clicks are assumed, shad should be able to detect echolocating Tursiops truncatus at ranges up to 187 m. The authors propose that ultrasonic hearing evolved in shad in response to selection pressures from echolocating odontocete cetaceans.
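The 187-m figure follows from a simple one-way sonar budget: the received level equals the source level minus spherical-spreading loss (20 log10 r) minus absorption loss (0.02 dB/m times r), and the maximum range is where the received level falls to the 171 dB re 1 µPa threshold. The abstract does not state the click source level it assumed, so the Python sketch below treats it as an assumption (a nominal 220 dB re 1 µPa peak-to-peak, within the range commonly reported for Tursiops clicks) and solves the budget numerically.

```python
import math

def detection_range_m(source_level_db=220.0,     # assumed click source level (dB re 1 uPa p-p); not stated in the abstract
                      threshold_db=171.0,        # shad detection threshold reported above
                      absorption_db_per_m=0.02): # absorption coefficient assumed in the abstract
    """Solve 20*log10(r) + a*r = SL - threshold for r (spherical spreading plus absorption)."""
    budget_db = source_level_db - threshold_db   # transmission loss the click can sustain
    lo, hi = 1.0, 10_000.0                       # bracket the root in meters
    for _ in range(60):                          # bisection: loss grows monotonically with range
        mid = 0.5 * (lo + hi)
        loss_db = 20.0 * math.log10(mid) + absorption_db_per_m * mid
        lo, hi = (mid, hi) if loss_db < budget_db else (lo, mid)
    return 0.5 * (lo + hi)

print(f"{detection_range_m():.0f} m")            # ~185 m under these assumptions, close to the reported 187 m
```

The estimate scales directly with the assumed source level, which is the main reason that value is flagged here as an assumption rather than a figure taken from the study.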

3.
A single adult female bottlenose dolphin was tested in a series of perceptual studies. On each trial, 4 sine-wave tones were presented that contained a falling frequency contour or some other contour. There were several frequency-transposed exemplars of each contour type in each experiment. The dolphin discriminated contours at a level significantly greater than chance in all experiments. In the 1st 2 experiments, the dolphin demonstrated only modest transfer to novel stimuli and a sensitivity to the absolute frequency of stimuli. In the 3rd experiment, there was no effect of the absolute frequency of stimuli; in the 4th experiment, the dolphin successfully transferred the discrimination to novel stimuli drawn from the octave above the previously heard range. These results demonstrate dolphins' capability to perceive frequency contours, which may underlie the recognition of conspecific whistles.
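To make the frequency-transposition manipulation concrete, the short sketch below builds a hypothetical four-tone falling contour and shifts it up by one octave: the absolute frequencies change, but the ratios between successive tones, i.e. the contour, are preserved. The frequency values are invented for illustration and are not the study's stimuli.

```python
import numpy as np

# Hypothetical falling four-tone contour (Hz); invented values, not the actual stimuli.
base_contour = np.array([8000.0, 6000.0, 4500.0, 3400.0])

def transpose(contour_hz, semitones):
    """Shift every tone by the same number of semitones, preserving frequency ratios."""
    return contour_hz * 2.0 ** (semitones / 12.0)

octave_up = transpose(base_contour, 12)                      # exemplar drawn from the octave above
print(octave_up)                                             # [16000. 12000. 9000. 6800.]
print(np.allclose(base_contour[1:] / base_contour[:-1],
                  octave_up[1:] / octave_up[:-1]))           # True: the contour shape is unchanged
```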

4.
We examined the ability of a bottlenose dolphin (Tursiops truncatus) to recognize aspect-dependent objects using echolocation. An aspect-dependent object such as a cube produces acoustically different echoes at different angles relative to the echolocation signal. The dolphin recognized the objects even though the objects were free to rotate and sway. A linear discriminant analysis and nearest centroid classifier could classify the objects using average amplitude, center frequency, and bandwidth of object echoes. The results show that dolphins can use varying acoustic properties to recognize constant objects and suggest that aspect-independent representations may be formed by combining information gleaned from multiple echoes.
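As a minimal illustration of the classification analysis named above, the sketch below implements a nearest-centroid classifier over the three echo features mentioned in the abstract (average amplitude, center frequency, bandwidth). The feature values are placeholders invented for the example, and the linear-discriminant step used in the study is omitted.

```python
import numpy as np

def fit_centroids(features, labels):
    """Compute one mean feature vector (centroid) per object class."""
    classes = np.unique(labels)
    return classes, np.array([features[labels == c].mean(axis=0) for c in classes])

def classify(classes, centroids, echo_features):
    """Assign an echo to the class whose centroid is nearest in feature space."""
    distances = np.linalg.norm(centroids - echo_features, axis=1)
    return classes[np.argmin(distances)]

# Placeholder echo features: [average amplitude (dB), center frequency (kHz), bandwidth (kHz)]
features = np.array([
    [62.0, 95.0, 30.0], [60.5, 97.0, 28.0],   # echoes from object A (invented values)
    [55.0, 80.0, 45.0], [56.5, 78.0, 47.0],   # echoes from object B (invented values)
])
labels = np.array(["object A", "object A", "object B", "object B"])

classes, centroids = fit_centroids(features, labels)
print(classify(classes, centroids, np.array([61.0, 96.0, 29.0])))   # -> object A
```

In practice the features would be standardized before distances are computed, and averaging information over multiple echoes, as the abstract suggests, would stabilize such aspect-dependent features before classification.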

5.
The control of vocalization depends significantly on auditory feedback in mammals, yet the underlying neuronal mechanisms of this audio-vocal integration are poorly understood in any mammalian species. Echolocating horseshoe bats, however, provide an excellent model system to study audio-vocal (AV) interactions. These bats can precisely control the frequency of their echolocation calls by monitoring the characteristics of the returning echo; they compensate for flight-induced Doppler shifts in the echo frequency by lowering the frequency of the subsequent vocalizations (Schnitzler, 1968; Schuller et al., 1974, 1975). The aim of this study was to investigate the neuronal mechanisms underlying this Doppler-shift compensation (DSC) behavior. For that purpose, the neuronal activity of single units was studied during spontaneous vocalizations of the bats and compared with responses to auditory stimuli such as playback vocalizations and artificially generated acoustic stimuli. The natural echolocation situation was simulated by triggering an acoustic stimulus to the bat's own vocalization and varying the time delay of this artificial "echo" relative to vocalization onset. Single-unit activity was observed before, during, and/or after the bat's vocalization as well as in response to auditory stimuli. However, the activity patterns associated with vocalization differed from those triggered by auditory stimuli, even when the auditory stimuli were acoustically identical to the bat's vocalization. These neurons were called AV neurons. Their distribution was restricted to an area in the paralemniscal tegmentum of the midbrain. When the natural echolocation situation was simulated, the responses of AV neurons depended on the time delay between the onset of vocalization and the beginning of the simulated echo. This delay sensitivity disappeared completely when the act of vocalization was replaced by an auditory stimulus that mimicked acoustic self-stimulation during the emission of an echolocation call. The activity of paralemniscal neurons was correlated with all parameters of echolocation calls and echoes that are relevant in the context of DSC. These results suggest a model for the regulation of vocalization frequencies by inhibitory auditory feedback.
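The Doppler-shift compensation described above can be summarized with the standard two-way Doppler relation: for flight speed v toward a reflector and sound speed c, a call at frequency f returns near f(c + v)/(c - v), so the bat must lower its next call by the inverse factor to keep the echo at its reference frequency. The values below (an 83 kHz reference frequency, 5 m/s flight speed, 343 m/s sound speed) are illustrative assumptions, not measurements from the study.

```python
C = 343.0  # speed of sound in air (m/s), assumed

def echo_frequency(call_hz, speed_m_s):
    """Two-way Doppler-shifted echo frequency for a bat flying toward a stationary reflector."""
    return call_hz * (C + speed_m_s) / (C - speed_m_s)

def compensated_call(reference_hz, speed_m_s):
    """Call frequency to emit so that the returning echo lands on the reference frequency."""
    return reference_hz * (C - speed_m_s) / (C + speed_m_s)

f_ref = 83_000.0   # illustrative reference (resting) frequency, Hz
v = 5.0            # illustrative flight speed, m/s

print(echo_frequency(f_ref, v))       # uncompensated echo: about 85.5 kHz
print(compensated_call(f_ref, v))     # lowered call (about 80.6 kHz) whose echo returns near 83 kHz
```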

6.
Acoustic recordings were used to investigate the cardiac responses of a captive dolphin (Tursiops truncatus) to sound playback stimuli. A suction-cup hydrophone placed on the ventral midline of the dolphin produced a continuous heartbeat signal while the dolphin was submerged. Heartbeats were timed by applying a matched filter to the phonocardiogram. Significant heart rate accelerations were observed in response to playback stimuli involving conspecific vocalizations compared with baseline rates or tank-noise playbacks. This method demonstrates that objective psychophysiological measures can be obtained for physically unrestrained cetaceans. In addition, the results are the first to show cardiac responses to acoustic stimuli in a cetacean at depth. Preliminary evidence suggests that the cardiac response patterns of dolphins are consistent with the physiological defense and startle responses of terrestrial mammals and birds.
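Matched-filter timing of heartbeats, as mentioned above, amounts to cross-correlating the phonocardiogram with a template beat and picking correlation peaks; heart rate then follows from the intervals between detected beats. The sketch below uses a synthetic signal, an arbitrary template, and illustrative threshold and refractory settings; none of these values come from the study.

```python
import numpy as np

def matched_filter_beat_times(pcg, template, fs, threshold=0.7, refractory_s=0.3):
    """Locate heartbeats by cross-correlating a phonocardiogram with a template beat."""
    corr = np.correlate(pcg - pcg.mean(), template - template.mean(), mode="same")
    corr = corr / (np.abs(corr).max() + 1e-12)            # normalize to [-1, 1]
    beat_times, last = [], -np.inf
    for i in range(1, len(corr) - 1):
        is_peak = corr[i] > threshold and corr[i] >= corr[i - 1] and corr[i] >= corr[i + 1]
        if is_peak and (i / fs - last) > refractory_s:    # keep one detection per beat
            beat_times.append(i / fs)
            last = i / fs
    return np.array(beat_times)

# Toy usage: two template-shaped "beats" buried in noise (illustrative only).
fs = 1000
template = np.hanning(80) * np.sin(2 * np.pi * 40 * np.arange(80) / fs)
pcg = 0.05 * np.random.randn(3 * fs)
for start in (500, 1400):
    pcg[start:start + 80] += template
print(matched_filter_beat_times(pcg, template, fs))       # approximately [0.54, 1.44] s
```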

7.
The collection of field data is essential to monitoring in river basins to prevent flooding in areas with intense rainfall. A network of stations equipped with ultrasonic sensors was installed in a wet basin in Northwest Spain to monitor the water surface elevation of the rivers in the area. At one of these stations, interference and anomalies presenting a regular pattern were detected during the summer months. After ruling out other hypotheses, the frequency and intensity of the echolocation calls emitted by bats at specific time intervals were investigated. It was concluded that bat echolocation was the cause of the interference detected.

8.
Experiment 1 tested a dolphin (Tursiops truncatus) for cross-modal recognition of 25 unique pairings of 8 familiar, complexly shaped objects, using the senses of echolocation and vision. Cross-modal recognition was errorless or nearly so for 24 of the 25 pairings under both visual to echoic matching (V-E) and echoic to visual matching (E-V). First-trial recognition occurred for 20 pairings under V-E and for 24 under E-V. Echoic decision time under V-E averaged only 1.88 s. Experiment 2 tested 4 new pairs of objects for 24 trials of V-E and 24 trials of E-V without any prior exposure of these objects. Two pairs yielded performance significantly above chance in both V-E and E-V. Also, the dolphin matched correctly on 7 of 8 1st trials with these pairs. The results support a capacity for direct echoic perception of object shape by this species and demonstrate that prior object exposure is not required for spontaneous cross-modal recognition.

9.
Groups of young domestic chicks (N = 310) were trained to selectively reject quinine-flavored or electrified water distinguished by 1 of several visual and auditory stimuli. With visual discriminative stimuli, the chicks learned within a few trials, but with sounds they learned poorly, although they could hear the sounds. When a compound of flashing light and clicks signaled footshocks for drinking or shock through the water, drinking was completely controlled by the flashing light. In contrast, when the same compound was paired with footshock in a conditioned emotional response (CER) paradigm, behavior was controlled primarily by the clicks. These results constitute a demonstration of stimulus relevance or belongingness, but differ in important ways from other examples of the nonequivalence of stimuli, responses, or reinforcers.

10.
The present study was undertaken to examine the feasibility of using tone pip stimuli rather than conventional clicks in brainstem evoked response (BER) audiometry. Trains of 2000-Hz tone pips, 4000-Hz tone pips, or clicks were presented at seven intensity levels to six normal young adults. Results demonstrated that BERs can be readily elicited by tone pips. This may be attributable to differences in stimulus rise times. Tone pips appear to introduce greater frequency specificity to BER audiometry without a marked loss in the ability to elicit the response.
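The rise-time point above can be sketched numerically: a click spreads its energy across a very wide band, a gated tone pip concentrates energy near its nominal frequency, and slower rise and fall ramps narrow the spectrum further. The pip parameters below, a 2000 Hz carrier with millisecond-scale linear ramps, are illustrative and not the exact stimuli used in the study.

```python
import numpy as np

fs = 48_000                                    # sample rate (Hz), assumed

def tone_pip(freq_hz, rise_ms, plateau_ms):
    """Gated sinusoid with linear rise and fall ramps of the given duration."""
    rise_n, plat_n = int(fs * rise_ms / 1000), int(fs * plateau_ms / 1000)
    env = np.concatenate([np.linspace(0.0, 1.0, rise_n),
                          np.ones(plat_n),
                          np.linspace(1.0, 0.0, rise_n)])
    t = np.arange(env.size) / fs
    return env * np.sin(2 * np.pi * freq_hz * t)

def bandwidth_3db_hz(x):
    """Rough -3 dB bandwidth of a signal's magnitude spectrum."""
    spec = np.abs(np.fft.rfft(x, n=1 << 16))
    freqs = np.fft.rfftfreq(1 << 16, d=1 / fs)
    above = freqs[spec >= spec.max() / np.sqrt(2)]
    return above.max() - above.min()

click = np.zeros(48)
click[0] = 1.0                                                     # 1-sample click: near-flat spectrum
print(bandwidth_3db_hz(tone_pip(2000, rise_ms=2, plateau_ms=1)))   # narrow band around 2 kHz
print(bandwidth_3db_hz(tone_pip(2000, rise_ms=0.5, plateau_ms=1))) # broader: faster rise, more splatter
print(bandwidth_3db_hz(click))                                     # widest of the three
```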

11.
Using echolocation, bats can not only locate objects in space but also discriminate objects of different shape. The acoustic image of an object is its impulse response (IR). The current experiments investigate whether bats just perceive changes in echo composition or whether bats perceive the IR itself through a detailed comparison of the emitted sound with the echo. The bat Megaderma lyra was trained to classify unknown virtual objects according to learned reference objects of different temporal and spectral composition. The bats' spontaneous classification was compared to predictions based on various physical and simulated peripheral auditory representations of the objects. The results show that the bats developed an accurate internal representation of the objects' IRs. In the auditory periphery, the IRs of small objects (…)
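The statement that an object's acoustic image is its impulse response can be made concrete: to first order, the echo is the emitted call convolved with the object's IR, so two reflecting surfaces at slightly different ranges appear as two delayed, scaled copies of the call. The call and IR below are toy examples, not Megaderma lyra signals or the study's virtual objects.

```python
import numpy as np

fs = 250_000                                  # sample rate (Hz), assumed for an ultrasonic call

# Toy emitted call: a 1-ms downward sweep, roughly 80 kHz to 40 kHz (not an actual M. lyra call).
t = np.arange(int(0.001 * fs)) / fs
call = np.sin(2 * np.pi * (80_000 - 20e6 * t) * t) * np.hanning(t.size)

# Toy impulse response: two reflective "glints" 40 microseconds apart with different strengths.
ir = np.zeros(64)
ir[0] = 1.0
ir[round(40e-6 * fs)] = 0.6

echo = np.convolve(call, ir)                  # the echo the bat can compare against its own call
print(call.size, echo.size)                   # echo length = call length + IR length - 1
```

Classifying a virtual object then amounts to comparing such echoes, or their simulated peripheral auditory representations, against those of the learned references.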

12.
The flash-lag effect is a visual illusion wherein intermittently flashed, stationary stimuli seem to trail after a moving visual stimulus despite being flashed synchronously. We tested hypotheses that the flash-lag effect is due to spatial extrapolation, shortened perceptual lags, or accelerated acquisition of moving stimuli, all of which call for an earlier awareness of moving visual stimuli over stationary ones. Participants judged synchrony of a click either to a stationary flash of light or to a series of adjacent flashes that seemingly bounced off or bumped into the edge of the visual display. To be judged synchronous with a stationary flash, audio clicks had to be presented earlier, not later, than clicks that went with events, like a simulated bounce (Experiment 1) or crash (Experiments 2-4), of a moving visual target. Click synchrony to the initial appearance of a moving stimulus was no different than to a flash, but clicks had to be delayed by 30-40 ms to seem synchronous with the final (crash) positions (Experiment 2). The temporal difference was constant over a wide range of motion velocities (Experiment 3). Interrupting the apparent motion by omitting two illumination positions before the last one did not alter subjective synchrony, nor did their occlusion, so the shift in subjective synchrony seems not to be due to brightness contrast (Experiment 4). Click synchrony to the offset of a long-duration stationary illumination was also delayed relative to its onset (Experiment 5). Visual stimuli in motion enter awareness no sooner than do stationary flashes, so motion extrapolation, latency difference, and motion acceleration cannot explain the flash-lag effect.

13.
The inferior colliculus (IC) model of Cai et al. [J. Acoust. Soc. Am. 103, 475-493 (1998)] simulated the binaural response properties of low-frequency IC neurons in response to various acoustic stimuli. This model, however, failed to simulate the sensitivities of IC neurons to dynamically changing temporal features, such as the sharpened dynamic interaural phase difference (IPD) functions. In this paper, the Cai et al. (1998) model is modified so that an adaptation mechanism, namely an additional channel simulating a calcium-activated, voltage-independent potassium channel responsible for afterhyperpolarization, is incorporated in the IC membrane model. Simulations were repeated with this modified model, including the responses to pure tones, binaural beat stimuli, interaural phase-modulated stimuli, binaural clicks, and pairs of binaural clicks. The discharge patterns of the model in response to current injection were also studied and compared with physiological data. The modified model reproduced all the properties that were simulated by the Cai et al. (1998) model. In addition, it showed some properties that were not simulated by that model, such as the sharpened dynamic IPD functions and adapting discharge patterns in response to current injection.
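A minimal sketch of the kind of adaptation mechanism described above: a leaky integrate-and-fire membrane in which each spike increments a slowly decaying potassium-like conductance, producing afterhyperpolarization and an adapting discharge pattern under constant current injection. The simplified dynamics and all parameter values are illustrative assumptions, not the Cai et al. (1998) model itself.

```python
import numpy as np

def adapting_lif(i_inj_na, t_ms=200.0, dt_ms=0.1):
    """Leaky integrate-and-fire membrane with a spike-triggered, slowly decaying AHP conductance."""
    # Illustrative parameters (units: nF, uS, mV, nA, ms); not taken from Cai et al. (1998).
    c_m, g_leak, e_leak = 0.2, 0.01, -65.0
    e_k, tau_ahp, dg_ahp = -90.0, 80.0, 0.002
    v_thresh, v_reset = -50.0, -65.0

    v, g_ahp, spike_times = e_leak, 0.0, []
    for step in range(int(t_ms / dt_ms)):
        i_total = -g_leak * (v - e_leak) - g_ahp * (v - e_k) + i_inj_na
        v += dt_ms * i_total / c_m
        g_ahp -= dt_ms * g_ahp / tau_ahp            # AHP conductance decays between spikes
        if v >= v_thresh:                           # spike: reset and add more AHP conductance
            spike_times.append(step * dt_ms)
            v = v_reset
            g_ahp += dg_ahp
    return np.array(spike_times)

print(np.diff(adapting_lif(i_inj_na=0.4)))          # interspike intervals lengthen: an adapting pattern
```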

14.
The communicative value of body position and facial expression was evaluated by measuring an observer's ability to detect a relationship between nonverbal and verbal behavior that had been simultaneously emitted. The verbal and nonverbal stimuli were collected during 2 different standardized stress interviews. Judges (Js) were shown pairs of photographs together with short written speech samples and were required on each trial to pick the photograph that matched the verbal behavior. In 4 separate experiments with different groups of Js, accurate judgments were obtained. Evidence for a relationship between nonverbal and verbal behavior simultaneously emitted was replicated across 2 different samples of interview behavior and under 3 cue conditions: seeing the head, the body, or the whole person.

15.
In 3 experiments, the authors compared duration judgments of filled stimuli (tones) with unfilled ones (intervals defined by clicks or gaps in tones). Temporal generalization procedures (Experiment 1) and verbal estimation procedures (Experiments 2 and 3) all showed that subjective durations of the tones were considerably longer than those of unfilled intervals defined either by clicks or gaps, with the unfilled intervals being judged as approximately 55%-65% of the duration of the filled ones when real duration was the same. Analyses derived from the pacemaker-switch-accumulator clock model incorporated into scalar timing theory suggested that the filled/unfilled difference in mean estimates was due to higher pacemaker speed in the former case, although conclusively ruling out alternative interpretations in terms of attention remains difficult.
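The pacemaker-switch-accumulator account invoked above can be illustrated with a toy simulation: a switch gates pulses from a Poisson pacemaker into an accumulator for the duration of the stimulus, and a higher pacemaker rate during filled (tone) intervals yields a larger count, hence a longer duration estimate, for the same physical duration. The rates and calibration below are illustrative assumptions, not values fitted to the data.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimated_duration_s(true_duration_s, pacemaker_hz, seconds_per_pulse):
    """Accumulate Poisson pacemaker pulses while the switch is closed, then convert to seconds."""
    pulses = rng.poisson(pacemaker_hz * true_duration_s)
    return pulses * seconds_per_pulse

# Illustrative rates: a faster pacemaker during filled (tone) intervals than during unfilled ones.
filled_hz, unfilled_hz = 100.0, 60.0
calibration_s = 1.0 / filled_hz              # assume the internal clock is calibrated on filled stimuli

filled = [estimated_duration_s(0.6, filled_hz, calibration_s) for _ in range(1000)]
unfilled = [estimated_duration_s(0.6, unfilled_hz, calibration_s) for _ in range(1000)]
print(np.mean(filled), np.mean(unfilled))    # unfilled mean is about 60% of filled, as in the 55%-65% range above
```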

16.
The bases for variations of the middle-component (8-50 ms) auditory averaged electroencephalic response (AER) to clicks during the averaging process were explored by examining (1) the trend of the mean variance of the averaged waveform, (2) averaged waveforms generated by successive blocks of stimuli fractionated from a total of 512 stimuli, and (3) averaged waveforms generated by successive, but partially overlapping, blocks of responses recorded for a train of 512 stimuli. These analyses indicated no systematic changes in peak latencies or peak-to-peak amplitudes as stimulation progressed. Fluctuations in the middle AER waveform are more readily explained by non-stationarity of the background electrophysiologic noise. Previously reported amplitude reductions with increasing stimulus number can possibly be explained by progressive reduction of the background noise on which the consistent-amplitude middle components are superimposed.
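The interpretation above rests on a basic property of response averaging: a component that is consistent across sweeps keeps its amplitude in the average, while uncorrelated background EEG noise shrinks roughly as 1/sqrt(N) with the number of sweeps N, so apparent waveform changes across blocks can reflect noise non-stationarity rather than a changing response. The toy waveform and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 200

# Toy "middle-latency component": a fixed bump present in every sweep (illustrative shape only).
component = 0.5 * np.exp(-0.5 * ((np.arange(n_samples) - 100) / 10.0) ** 2)

def average_of_sweeps(n_sweeps, noise_sd=2.0):
    """Average n_sweeps simulated responses: the fixed component plus independent background noise."""
    sweeps = component + noise_sd * rng.standard_normal((n_sweeps, n_samples))
    return sweeps.mean(axis=0)

for n in (32, 128, 512):
    residual = average_of_sweeps(n) - component
    print(n, round(residual.std(), 3))         # residual background noise falls roughly as 1/sqrt(n)
```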

17.
Previous studies have shown that the lateral nucleus of the amygdala (AL) is essential in auditory fear conditioning and that neurons in the AL respond to auditory stimuli. The goals of the present study were to determine whether neurons in the AL are also responsive to somatosensory stimuli and, if so, whether single neurons in the AL respond to both auditory and somatosensory stimulation. Single-unit activity was recorded in the AL in anesthetized rats during the presentation of acoustic (clicks) and somatosensory (footshock) stimuli. Neurons in the dorsal subdivision of the AL responded to both somatosensory and auditory stimuli, whereas neurons in the ventrolateral AL responded only to somatosensory stimuli and neurons in the ventromedial AL responded to neither type of stimulus. These findings indicate that the dorsal AL is a site of auditory and somatosensory convergence and may therefore be a focus of convergence of conditioned and unconditioned stimuli (CS and UCS) in auditory fear conditioning.

18.
Three squirrel monkeys were shown lists of 3 items drawn from a pool of 150 slides containing colored pictures of natural objects and scenes. A delayed matching technique was used to probe recognition memory for each serial position on different trials. Four experiments on the effects of picture-exposure time and off time were conducted. In agreement with human picture memory experiments, accuracy improved as exposure duration increased from 0.3 to 6 sec. In contrast to research on humans, off time after picture exposure did not improve accuracy relative to a condition with no off time. Further, a comparison of different off-time conditions showed no difference between off times spent in darkness and off times filled either with filler pictures or white light. This finding differs from the well-known observation that illumination interpolated between sample and comparison stimuli interferes with delayed matching.

19.
Using the magnetic search coil technique to measure eye and ear movements, we trained cats by operant conditioning to look in the direction of light and sound sources with their heads fixed. Cats were able to localize noise bursts, single clicks, or click trains presented from sources located on the horizontal and vertical meridians within their oculomotor range. Saccades to auditory targets were less accurate and more variable than saccades to visual targets at the same spatial positions. Localization accuracy of single clicks was diminished compared with the long-duration stimuli presented from the same sources. Control experiments with novel auditory targets, never associated with visual targets, demonstrated that the cats localized the sound sources using acoustic cues and not from memory. The role of spectral features imposed by the pinna for vertical sound localization was shown by the breakdown in localization of narrow-band (one-sixth of an octave) noise bursts presented from sources along the midsagittal plane. In addition, we show that cats experience summing localization, an illusion associated with the precedence effect. Pairs of clicks presented from speakers at (±18°, 0°) with interclick delays of ±300 µs were perceived by the cat as originating from phantom sources extending from the midline to approximately ±10°.

20.
The ability to lateralize dichotic clicks with either interaural time delays (ITD) or interaural level differences (ILD) was tested in seven multiple sclerosis (MS) subjects who had normal audiograms. Along with the psychoacoustical tests, magnetic resonance images (MRI) of the subjects' brainstems were obtained. After matching each MRI section with the corresponding section of a computerized atlas of the brainstem, the parts of the auditory pathway affected by each MS lesion were determined. Of the seven subjects, two performed normally with both types of interaural asymmetry and had no brainstem lesions involving the auditory pathway. Two subjects performed normally only with level differences, but perceived all the dichotic clicks with different ITDs in the center of the head; both had lesions involving the trapezoid body. Three subjects could not perform normally with either task, perceiving the clicks to the sides and never in the center for both ITDs and ILDs; all three had unilateral lesions of the lateral lemniscus. A multi-level decision-making model is proposed to account for these results.
