Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Oscine birds are among the few animal groups that have vocal learning, and their brains contain a specialized system for song learning and production. We describe here the immunocytochemical distribution of dopamine-beta-hydroxylase (DBH), a noradrenergic marker, in the brain of an oscine, the zebra finch (Taeniopygia guttata). DBH-positive cells were seen in the locus coeruleus, the nucleus subcoeruleus ventralis, the nucleus of the solitary tract, and the caudolateral medulla. Immunoreactive fibers and varicosities had a much wider brain distribution. They were particularly abundant in the hippocampus, septum, hypothalamus, area ventralis of Tsai, and substantia nigra, where they formed dense pericellular arrangements. Significant immunoreactivity was observed in auditory nuclei, including the nucleus mesencephalicus lateralis pars dorsalis, the thalamic nucleus ovoidalis, field L, the shelf of the high vocal center (HVC), and the cup of the nucleus robustus archistriatalis (RA), as well as in song control nuclei, including the HVC, RA, the lateral magnocellular nucleus of the anterior neostriatum, and the dorsomedial nucleus (DM) of the intercollicular complex. Except for the DM, DBH immunoreactivity within song nuclei was comparable to that of surrounding tissues. Conspicuously negative were the lobus paraolfactorius, including song nucleus area X, and the paleostriatum. Our results are in agreement with previous studies of the noradrenergic system performed in nonoscines. More importantly, they provide direct evidence for a noradrenergic innervation of auditory and song control nuclei involved in song perception and production, supporting the notion that noradrenaline is involved in vocal communication and learning in oscines.

2.
Like males of many anuran species, fire-bellied toads (Bombina orientalis) call antiphonally, which demonstrates an auditory input into the call-generating network. Males produce their calls by an inspiratory airstream, which is generated exclusively by contraction of the muscles of the buccal cavity. The painted frog (Discoglossus pictus) possesses a combined inspiratory and expiratory call mechanism, and also uses only buccal muscles. These muscles are controlled by branchial motoneurons, which receive vocal premotor input mainly from the pretrigeminal nucleus. The interconnections between the auditory pathway and the vocal pathway were examined by neuroanatomical tracing and intracellular recording. Mesencephalic auditory nuclei, the laminar and magnocellular nuclei of the torus semicircularis, and tegmental nuclei give rise to strong descending efferents, which, in turn, form collaterals that terminate in vocal premotor nuclei. These findings imply fast audio-vocal interfacing, which is a prerequisite for the control of antiphonal calling.

3.
Operant conditioning and multidimensional scaling procedures were used to study auditory perception of complex sounds in the budgerigar. In a same–different discrimination task, budgerigars learned to discriminate among natural vocal signals. Multidimensional scaling procedures were used to arrange these complex acoustic stimuli in a two-dimensional space reflecting perceptual organization. Results show that budgerigars group vocal stimuli according to functional and acoustical categories. Studies with only contact calls show that birds also make within-category discriminations. The acoustic cues in contact calls most salient to budgerigars appear to be quite complex. There is a suggestion that the sex of the signaler may also be encoded in these calls. The results from budgerigars were compared with the results from humans tested on some of the same sets of complex sounds. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
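The scaling step described above can be sketched with classical (Torgerson) multidimensional scaling, which embeds items in a low-dimensional space from pairwise dissimilarities. The dissimilarity matrix below is hypothetical, standing in for perceptual confusion data among four call types, not the budgerigar data itself:

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions from a
    symmetric n x n dissimilarity matrix d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(b)         # eigenvalues in ascending order
    idx = np.argsort(evals)[::-1][:k]        # keep the top-k dimensions
    scale = np.sqrt(np.maximum(evals[idx], 0.0))
    return evecs[:, idx] * scale             # n x k coordinates

# Hypothetical dissimilarities among four call exemplars: items 0-1 form
# one functional category, items 2-3 another.
d = np.array([[0., 1., 4., 4.],
              [1., 0., 4., 4.],
              [4., 4., 0., 1.],
              [4., 4., 1., 0.]])
coords = classical_mds(d)
# In the recovered 2D space, within-category pairs lie closer together
# than between-category pairs.
```

In the recovered configuration, category structure shows up as spatial clustering, which is how the abstract's "functional and acoustical categories" would appear in the two-dimensional perceptual map.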

4.
Examined the actual vocalizations of 24 auditory self-stimulated and 21 unstimulated wood ducklings to explore the possibility that there is a difference in the kind and/or amount of auditory self-stimulation in the 2 groups. Previous research shows that wood ducklings vocalized copiously when in auditory isolation; however, such self-stimulation was ineffective in maintaining their preference for descending frequency-modulated (FM) notes of the maternal call. Only isolated ducklings that had been exposed to recorded descending sib calls exhibited the normal preference for descending maternal notes in a choice test with ascending and descending maternal calls. Results of the present study with a similar choice test show that although stimulated Ss produced more ascending notes than unstimulated Ss, no differences were found in the overall vocal behavior, vocal reactivity, or specific kinds of frequency modulation produced by Ss that preferred the descending maternal call and other Ss that responded in the choice test. This absence of a difference in vocal production supports the previous conclusion that self-stimulation plays no role in the development or maintenance of the species-typical perceptual preference for descending FM notes. (14 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Here, we examine the connectivity of two previously identified telencephalic stations of the auditory system of adult zebra finches, the neostriatal "shelf" that underlies the high vocal center (HVC) and the archistriatal "cup" adjacent to the robust nucleus of the archistriatum (RA). We used different kinds of neuroanatomical tracers to visualize the projections from the shelf to the HVC. In addition, we show that the shelf projects to the cup and that the cup projects to thalamic, midbrain, and pontine nuclei of the ascending auditory pathway. Our observations extend to songbirds anatomical features that are found in the auditory pathways of a nonoscine bird, the pigeon (Wild et al. [1993] J. Comp. Neurol. 337:32-62), and we suggest that the descending auditory projections found in mammals may also be a general property of the avian brain. Finally, we show that the oscine song control system is closely apposed to auditory pathways at many levels. Our observations may help in understanding the evolution and organization of networks for vocal communication and vocal learning in songbirds.

6.
Two experiments examined the role of compatibility of input and output (I-O) modality mappings in task switching. We define I-O modality compatibility in terms of similarity of stimulus modality and modality of response-related sensory consequences. Experiment 1 included switching between 2 compatible tasks (auditory–vocal vs. visual–manual) and between 2 incompatible tasks (auditory–manual vs. visual–vocal). The resulting switch costs were smaller in compatible tasks compared to incompatible tasks. Experiment 2 manipulated the response–stimulus interval (RSI) to examine the time course of the compatibility effect. The effect on switch costs was confirmed with short RSI, but the effect was diminished with long RSI. Together, the data suggest that task sets are modality specific. Reduced switch costs in compatible tasks may be due to special linkages between input and output modalities, whereas incompatible tasks increase cross-talk, presumably due to dissipating interference of correct and incorrect response modalities. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Computer simulations of a network model of an isofrequency patch of the dorsal cochlear nucleus (DCN) were run to explore possible mechanisms for the level-dependent features observed in the cross-correlograms of pairs of type IV units in the cat and nominal type IV units in the gerbil DCN. The computer model is based on the conceptual model (of a cat) that suggests two sources of shared input to DCN's projection neurons (type IV units): excitatory input from the auditory nerve and inhibitory input from interneurons (type II units). Use of tonal stimuli is thought to cause competition between these sources, resulting in the decorrelation of type IV unit activities at low levels. In the model, P-cells (projection neurons), representing type IV units, receive inhibitory input from I-cells (interneurons), representing type II units. Both sets of model neurons receive a simulated excitatory auditory nerve (AN) input from same-CF AN fibers, where the AN input is modeled as a dead-time modified Poisson process whose intensity is given by a computationally tractable discharge rate versus sound pressure level function. Subthreshold behavior of each model neuron is governed by a set of normalized state equations. The computer model has previously been shown to reproduce the major response properties of both type IV and type II units (e.g., rate-level curves and peri-stimulus time histograms) and the level-dependence of the functional type II-type IV inhibitory interaction. This model is adapted for the gerbil by simulating a reduced population of I-cells. Simulations were carried out for several auditory nerve input levels, and cross-correlograms were computed from the activities of pairs of P-cells for a complete (cat model) and reduced (gerbil model) population of I-cells. The resultant correlograms show central mounds (CMs), indicative of either shared excitatory or inhibitory input, for both spontaneous and tone-evoked driven activities. Similar to experimental results, CM amplitudes are a non-monotonic function of level and CM widths decrease as a function of level. These results are consistent with the hypothesis that shared excitatory input correlates the spontaneous activities of type IV units and shared inhibitory input correlates their driven activities. The results also suggest that the decorrelation of the activities of type IV units can result from a reduced effectiveness of the AN input as a function of increasing level. Thus, competition between the excitatory and inhibitory inputs is not required.
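Two ingredients of such a simulation can be sketched in a few lines: a dead-time modified Poisson process as AN input, and a cross-correlogram of two model cells that share it. This is a simplified illustration, not the paper's model; the rates, dead time, jitter, and bin widths below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def dead_time_poisson(rate, dead, t_max, rng):
    """Spike times of a dead-time modified Poisson process: exponential
    inter-spike intervals plus an absolute refractory period `dead` (s)."""
    times, t = [], 0.0
    while True:
        t += dead + rng.exponential(1.0 / rate)
        if t >= t_max:
            return np.array(times)
        times.append(t)

def cross_correlogram(a, b, bin_w=0.001, max_lag=0.02):
    """Histogram of spike-time differences (b - a) within +/- max_lag."""
    diffs = b[None, :] - a[:, None]
    diffs = diffs[np.abs(diffs) <= max_lag]
    bins = np.arange(-max_lag, max_lag + bin_w, bin_w)
    counts, _ = np.histogram(diffs, bins)
    return counts, bins

# Two toy "P-cells" driven by a shared AN spike train plus independent
# jitter: the shared input produces a central mound in the correlogram.
shared = dead_time_poisson(rate=50.0, dead=0.001, t_max=20.0, rng=rng)
p1 = np.sort(shared + rng.normal(0.0, 0.002, shared.size))
p2 = np.sort(shared + rng.normal(0.0, 0.002, shared.size))
counts, bins = cross_correlogram(p1, p2)
# The histogram peaks near zero lag: the "central mound" (CM).
```

Decorrelation in the full model would then show up as this central mound shrinking as the shared drive becomes less effective with increasing stimulus level.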

8.
Besides their song, which is usually a functionally well-defined communication signal with an elaborate acoustic structure, songbirds also produce a variety of shorter vocalizations named calls. While a considerable amount of work has focused on information coding in songs, little is known about how the acoustic structure of calls supports communication processes. Because male and female zebra finches use calls during most of their interactions and respond to conspecific calls without visual contact, we aimed to identify which acoustic cues in calls are necessary to elicit a vocal response. Using synthetic zebra finch calls, we examined the evoked vocal responses of male and female zebra finches to modified versions of the distance calls. Our results show that the vocal response of zebra finches to female calls requires the full harmonic structure of the call, whereas the frequency downsweep of male calls is necessary to evoke a vocal response. It is likely that both female and male calls must match a similar frequency bandwidth to trigger a response in conspecific individuals. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

9.
Three experiments were designed to test whether perception and action are coordinated in a way that distinguishes sequencing from timing (Pfordresher, 2003). Each experiment incorporated a trial design in which altered auditory feedback (AAF) was presented for varying lengths of time and then withdrawn. Experiments 1 and 2 included AAF that resulted in action-effect asynchronies (delayed auditory feedback) during simple tapping (Experiment 1) and melody production (Experiment 2). Asynchronous AAF immediately slowed production; this effect then diminished rapidly after removal of AAF. By contrast, sequential alterations of feedback pitch during melody production (Experiment 3) had an effect that varied over successive presentations of AAF (by increasing error rates) that lasted after its withdrawal. The presence of auditory feedback after withdrawal of asynchronous AAF (Experiments 1 and 2) led to overcompensation of timing, whereas the presence of auditory feedback did not influence performance after withdrawal of AAF in Experiment 3. Based on these results, we suggest that asynchronous AAF perturbs the phase of an internal timekeeper, whereas alterations to feedback pitch over time degrade the internal representation of sequence structure. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
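The timekeeper account can be caricatured in a few lines: each inter-tap interval is a base period plus a correction pulled toward the most recent feedback onset, so delayed feedback lengthens intervals while it is present. The one-step correction rule and all parameters here are hypothetical, and real recovery is gradual rather than instantaneous:

```python
import numpy as np

def simulate_taps(n_taps, period=0.5, delay=0.1, alpha=0.3,
                  aaf=range(20, 40)):
    """Minimal phase-correction timekeeper (hypothetical parameters).
    Taps whose index falls in `aaf` receive feedback delayed by `delay`
    seconds; the next interval is stretched by a fraction `alpha` of
    that asynchrony. Returns the inter-tap intervals."""
    taps = [0.0]
    for i in range(1, n_taps):
        fb_delay = delay if (i - 1) in aaf else 0.0
        interval = period + alpha * fb_delay   # phase pulled toward feedback
        taps.append(taps[-1] + interval)
    return np.diff(taps)

itis = simulate_taps(60)
# Intervals lengthen (production slows) during the AAF block and return
# to the base period once AAF is withdrawn.
```

Even this crude rule reproduces the qualitative signature in the abstract: asynchronous feedback slows production while it lasts, and the effect does not persist, unlike the sequence-level disruption from pitch alterations.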

10.
Squirrel monkey vocalization can be considered as a suitable model for the study in humans of the neurobiological basis of nonverbal emotional vocal utterances, such as laughing, crying, and groaning. Evaluation of electrical and chemical brain stimulation data, lesioning studies, single-neurone recordings, and neuroanatomical tracing work leads to the following conclusions: The periaqueductal gray and laterally bordering tegmentum of the midbrain represent a crucial area for the production of vocalization. This area collects the various vocalization-triggering stimuli, such as auditory, visual, and somatosensory input from diverse sensory-processing structures, motivation-controlling input from some limbic structures, and volitional impulses from the anterior cingulate cortex. Destruction of this area causes mutism. It is still under dispute whether the periaqueductal region harbors the vocal pattern generator or merely couples vocalization-triggering information to motor-coordinating structures further downward in the brainstem. The periaqueductal region is connected with the phonatory motoneuron pools indirectly via one or several interneurons. The nucleus retroambiguus represents a crucial relay station for the laryngeal and expiratory component of vocalization. The articulatory component reaches the orofacial motoneuron pools via the parvocellular reticular formation. Essential proprioceptive feedback from the larynx and lungs enters the vocal-controlling network via the solitary tract nucleus.

11.
Magnetic resonance images of the vocal tract during sustained production of /r/ by four native American English talkers are employed for measuring vocal-tract dimensions and for morphological analysis of the 3D vocal tract and tongue shapes. Electropalatography contact profiles are used for studying inter- and intra-talker variabilities. The vocal tract during the production of /r/ appears to be characterized by three cavities due to the presence of two supraglottal constrictions: the primary one in the oral cavity, and a secondary one in the pharyngeal cavity. All subjects show a large volume anterior to the oral constriction, which results from an inward-drawn tongue body, an anterior tongue body that is characterized by convex cross sections, and a concave posterior tongue body shape. Inter-subject variabilities are observed in the oral-constriction location and the way the constriction is formed. No systematic differences are found between the 3D vocal tract and tongue shapes of word-initial and syllabic /r/s. Tongue-shaping mechanisms for these sounds and their acoustic implications are discussed.

12.
In 3 experiments, mute Peking ducklings, devocalized as embryos and maintained in auditory isolation, manifested a selective high-frequency perceptual deficit vis-a-vis the maternal call of their species at 24 hrs after hatching. Since it takes a rather specific auditory experiential input to rectify this high-frequency insensitivity at 24 hrs, it was predicted that, in the absence of auditory experience, devocal-isolated Ss would fail to show sufficient endogenously mediated improvement to bring them up to the level of perceptual competence of vocal-communal Ss at any age. This hypothesis proved wrong in that the proportion of devocalized Ss showing a preference for the normal maternal call over the >825-Hz attenuated one became equivalent to the vocal Ss at 48 hrs after hatching, as did their ability to discriminate the normal maternal call from the >1,800-Hz attenuated maternal call. At 65 hrs, however, the devocalized Ss' performance deteriorated back to the level observed at 24 hrs. It is concluded that embryonic exposure to the (sibling) contact-contentment call prevents the perceptual deficit at 24 hrs and the deterioration at 65 hrs. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Does the speech motor control system use invariant vocal tract shape targets when producing vowels and semivowels? A 4-part theoretical treatment favoring models whose only invariant targets are regions in auditory perceptual space over models that posit invariant constriction targets is presented. Auditory target regions are hypothesized to arise during development as an emergent property of neural map formation in the auditory system. Furthermore, speech movements are planned as trajectories in auditory perceptual space. These trajectories are then mapped into articulator movements through a neural mapping that allows motor equivalent variability in constriction locations and degrees when needed. These hypotheses are illustrated using computer simulations of the DIVA model of speech acquisition and production. Finally, several difficult challenges to proponents of constriction theories based on this theoretical treatment are posed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
The current study addressed the question whether audiovisual (AV) speech can improve speech perception in older and younger adults in a noisy environment. Event-related potentials (ERPs) were recorded to investigate age-related differences in the processes underlying AV speech perception. Participants performed an object categorization task in three conditions, namely auditory-only (A), visual-only (V), and AV speech. Both age groups revealed an equivalent behavioral AV speech benefit over unisensory trials. ERP analyses revealed an amplitude reduction of the auditory P1 and N1 on AV speech trials relative to the summed unisensory (A + V) response in both age groups. These amplitude reductions are interpreted as an indication of multisensory efficiency, as fewer neural resources were recruited to achieve better performance. Of interest, the observed P1 amplitude reduction was larger in older adults. Younger and older adults also showed an earlier auditory N1 in AV speech relative to A and A + V trials, an effect that was again greater in the older adults. The degree of multisensory latency shift was predicted by basic auditory functioning (i.e., higher hearing thresholds were associated with larger latency shifts) in both age groups. Together, the results show that AV speech processing is not only intact in older adults, but that the facilitation of neural responses occurs earlier and to a greater extent in older than in younger adults. Thus, older adults appear to benefit more from additional visual speech cues than younger adults, possibly to compensate for more impoverished unisensory inputs because of sensory aging. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

15.
Reviews recent experimental findings that show that the perception of phonetic distinctions relies on the integration of multiple acoustic cues and is sensitive to the surrounding context in specific ways. Most of these effects have correspondences in speech production and are readily explained by the assumption that listeners make continuous use of their tacit knowledge of speech patterns. A general auditory theory that does not make reference to the specific origin and characteristics of speech can, at best, handle only a small portion of the phenomena reviewed here. Special emphasis is placed on studies that obtained different patterns of results depending on whether the same stimuli were perceived as speech or as nonspeech. Findings provide strong empirical evidence for the existence of a speech-specific mode of perception. (4½ p ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
C. T. Best, M. Studdert-Kennedy, S. Manuel, and J. Rubin-Spitz (1989) reported that listeners given speech labels showed categorical-like perception of a series of complex tone analogs to a /la/-/ra/ speech series, whereas nonspeech listeners were unable to classify the stimuli consistently. In 2 experiments, a new training and testing procedure was used with adult listeners given nonspeech instructions. They classified the /la/-/ra/ tone analogs consistently, showed categorical-like perception, and generalized their training to a new, /li/-/ri/ tone analog series. Two sets of auditory attributes were described for coding the /l/-/r/ distinction, and 1 was shown to quantitatively predict listeners' classification of both series. These results are consistent with models of perception in which a rich, abstract auditory code is computed and forms the basis for both speech and nonspeech auditory categories. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
The ventrolateral (VL) thalamus in mammals is a site well situated to show vocalization-related neural activity if there is general or classical motor system involvement in vocal production. It receives input from both the basal ganglia and cerebellum, and forms reciprocal connections with motor cortical areas. The current study examined the activity of cat VL thalamus neurons during instrumentally conditioned vocalization. Units in our sample showed irregular spontaneous firing, which could be modulated by slowly occurring fluctuations in the intensity of vocalization task performance. Two main types of behavioral events were associated with changes in neural firing rate. The first of these was the ingestion of food reward. More than half of all recordings showed phasic bursting patterns during licking; a similar number had increases in firing preparatory to this phasic activity. The second behavioral event modulating unit responses was vocalization. Approximately 60% of recordings showed activity changes time-locked to vocalization. These responses were almost always excitatory, and often involved changes in firing that preceded vocalization onset. No spatial organization of differences in firing pattern between neurons could be distinguished. Our results suggest that VL thalamus may well be involved in mediating vocal behavior, although its functional role remains an object of speculation. Results are compared with previous studies of vocalization-related activity and of VL thalamus activity.

18.
The personal attributes of a talker perceived via acoustic properties of speech are commonly considered to be an extralinguistic message of an utterance. Accordingly, accounts of the perception of talker attributes have emphasized a causal role of aspects of the fundamental frequency and coarse-grain acoustic spectra distinct from the detailed acoustic correlates of phonemes. In testing this view, in four experiments, we estimated the ability of listeners to ascertain the sex or the identity of 5 male and 5 female talkers from sinusoidal replicas of natural utterances, which lack fundamental frequency and natural vocal spectra. Given such radically reduced signals, listeners appeared to identify a talker's sex according to the central spectral tendencies of the sinusoidal constituents. Under acoustic conditions that prevented listeners from determining the sex of a talker, individual identification from sinewave signals was often successful. These results reveal that the perception of a talker's sex and identity are not contingent and that fine-grain aspects of a talker's phonetic production can elicit individual identification under conditions that block the perception of voice quality.
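Sinusoidal replicas of this kind are sums of a few time-varying sine tones that track the formant frequencies of an utterance, discarding the fundamental frequency and broadband spectrum. A minimal synthesis sketch, using schematic, hypothetical formant tracks rather than measured ones:

```python
import numpy as np

def sinewave_replica(formant_tracks, amps, fs=16000):
    """Sum of time-varying sinusoids, one per formant track (a minimal
    sketch of sinewave speech). `formant_tracks` holds per-sample
    frequencies in Hz (one row per tone); `amps` holds linear amplitudes.
    Both are hypothetical inputs here, not measured speech data."""
    out = np.zeros(formant_tracks.shape[1])
    for f, a in zip(formant_tracks, amps):
        phase = 2.0 * np.pi * np.cumsum(f) / fs   # integrate frequency
        out += a * np.sin(phase)
    return out / max(1.0, float(np.max(np.abs(out))))  # normalize to [-1, 1]

# A schematic steady vowel-like segment: three tones near typical F1-F3.
n = 16000  # one second at fs = 16 kHz
tracks = np.vstack([np.full(n, 700.0),    # ~F1
                    np.full(n, 1200.0),   # ~F2
                    np.full(n, 2600.0)])  # ~F3
amps = np.array([1.0, 0.6, 0.3])
y = sinewave_replica(tracks, amps)
```

Because only the tone center frequencies survive, any sex or identity information listeners recover from such signals must reside in the formant trajectories themselves, which is the logic behind the experiments above.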

19.
The sequence of neurophysiological processes elicited in the auditory system by a sound is analyzed in search of the stage at which the processes carrying sensory information cross the borderline beyond which they directly underlie sound perception. Neurophysiological data suggest that this transition occurs when the sensory input is mapped onto the physiological basis of sensory memory in the auditory cortex. At this point, the sensory information carried by the stimulus-elicited process corresponds, for the first time, to that contained by the actual sound percept. Before this stage, the sensory stimulus code is fragmentary, lacks the time dimension, cannot enter conscious perception, and is not accessible to top-down processes (voluntary mental operations). On these grounds, 2 distinct stages of auditory sensory processing, prerepresentational and representational, can be distinguished. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Rising sound intensity provides an important cue for the detection of looming objects. Studies with humans indirectly suggest that rising pitch can also signal a looming object. This link between rising intensity and rising frequency is puzzling because no physical rise in frequency occurs when a sound source approaches. Putative explanations include (a) the idea that the loudness of sound depends on its frequency, (b) the frequent co-occurrence of rising intensity with rising frequency in vocalizations generates an association between the 2 features, and (c) auditory neurons process amplitude- and frequency-modulated sounds similarly. If these hypotheses are valid, then rhesus monkeys (Macaca mulatta), which share some homologies in the vocal production apparatus and auditory system, should also associate rising frequency with rising intensity, and thus should perceive rising frequency as a looming sound source. A head-turning assay and a preferential-looking paradigm revealed that monkeys show an attentional bias toward rising versus falling frequency sounds and link the former to visual looming signals. This suggests that monkeys hear a rising frequency sound as a looming sound source even though, in the real world, no such link exists. (PsycINFO Database Record (c) 2010 APA, all rights reserved)


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号