Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.
Categorization researchers typically present single objects to be categorized. But real-world categorization often involves object recognition within complex scenes. It is unknown how the processes of categorization stand up to visual complexity or why they fail when facing it. The authors filled this research gap by blending the categorization and visual-search paradigms into a visual-search and categorization task in which participants searched for members of target categories in complex displays. Participants have enormous difficulty in this task. Despite intensive and ongoing category training, they detect targets at near-chance levels unless displays are extremely simple or target categories extremely focused. These results, discussed from the perspectives of categorization and visual search, might illuminate societally important instances of visual search (e.g., diagnostic medical screening). (PsycINFO Database Record (c) 2010 APA, all rights reserved)
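Near-chance target detection of this kind is conventionally quantified with the signal-detection sensitivity index d'. A minimal sketch (the function name and the example hit/false-alarm rates are illustrative, not values reported in the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Near-chance performance: hits barely exceed false alarms
print(round(d_prime(0.55, 0.50), 2))  # small d', about 0.13
# Good performance, for comparison
print(round(d_prime(0.90, 0.10), 2))  # d' about 2.56
```

A d' near zero means the observer's hit rate is essentially indistinguishable from guessing.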

2.
The aim of this study was to analyze the timing and topography of brain activity in relation to the cognitive processing of different types of auditory information. We specifically investigated the effects of familiarity on environmental sound identification, an issue which has been little studied with respect to cognitive processes, neural substrates, and time course of brain activity. To address this issue, we implemented and applied an electroencephalographic mapping method named event-related desynchronization, which allows one to assess the dynamics of neuronal activity with high temporal resolution (here, 125 ms); we used 19 recording electrodes with standard positioning. We designed an activation paradigm in which healthy subjects were asked to discriminate binaurally heard sounds belonging to one of two distinct categories, "familiar" (i.e., natural environmental sounds) or "unfamiliar" (i.e., altered environmental sounds). The sounds were selected according to strict preexperimental tests so that the former should engage greater semantic, and the latter greater structural, analysis, which we predicted to preferentially implicate left posterior and right brain regions, respectively. During the stimulations, significant desynchronizations (thought to reflect neuronal activations) were recorded over left hemisphere regions for familiar sounds and right temporofrontal regions for unfamiliar sounds, but with only a few significant differences between the two sound categories and a common bilateral activation in the frontal regions. However, strongly significant differences between familiar and unfamiliar sounds occurred near the end of and following the stimulations, due to synchronizations (thought to reflect deactivations) which appeared over the left posterior regions, as well as the vertex and bilateral frontal cortex, only after unfamiliar sounds. 
These unexpected synchronizations after the unfamiliar stimuli may reflect an awareness of the unfamiliarity of such sounds, which may have induced an inhibition of semantic and episodic representations because the latter could not be associated with meaningless sounds.
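The event-related (de)synchronization measure used above is conventionally expressed as a percentage change in band power relative to a pre-stimulus baseline. A minimal sketch of that formula (the function name and example power values are illustrative):

```python
def erd_percent(baseline_power, activity_power):
    """Pfurtscheller-style band-power change relative to a pre-stimulus
    baseline. Negative values = desynchronization (taken as activation);
    positive values = synchronization (taken as deactivation)."""
    return (activity_power - baseline_power) / baseline_power * 100.0

print(round(erd_percent(10.0, 7.0), 1))   # -30.0: desynchronization
print(round(erd_percent(10.0, 13.0), 1))  # 30.0: synchronization
```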

3.
Auditory stream segregation (or streaming) is a phenomenon in which 2 or more repeating sounds differing in at least 1 acoustic attribute are perceived as 2 or more separate sound sources (i.e., streams). This article selectively reviews psychophysical and computational studies of streaming and comprehensively reviews more recent neurophysiological studies that have provided important insights into the mechanisms of streaming. On the basis of these studies, segregation of sounds is likely to occur beginning in the auditory periphery and continuing at least to primary auditory cortex for simple cues such as pure-tone frequency but at stages as high as secondary auditory cortex for more complex cues such as periodicity pitch. Attention-dependent and perception-dependent processes are likely to take place in primary or secondary auditory cortex and may also involve higher level areas outside of auditory cortex. Topographic maps of acoustic attributes, stimulus-specific suppression, and competition between representations are among the neurophysiological mechanisms that likely contribute to streaming. A framework for future research is proposed.
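The classic psychophysical finding behind streaming is that larger frequency separations and faster presentation rates favor hearing two streams. A toy decision rule in the spirit of van Noorden's fission and temporal-coherence boundaries; the boundary values here are purely illustrative, not fitted to any data:

```python
def streaming_percept(delta_f_semitones, tone_rate_hz):
    """Toy van Noorden-style decision: large frequency separations and
    fast rates favor segregation. Boundary values are illustrative only."""
    fission_boundary = 4.0                          # below: always one stream
    coherence_boundary = 4.0 + 1.5 * tone_rate_hz   # above: always two streams
    if delta_f_semitones < fission_boundary:
        return "integrated"
    if delta_f_semitones > coherence_boundary:
        return "segregated"
    return "bistable"   # ambiguous region: percept can flip over time

print(streaming_percept(1, 5))   # integrated
print(streaming_percept(20, 5))  # segregated
print(streaming_percept(6, 5))   # bistable
```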

4.
Research has shown the existence of perceptual and neural bias toward sounds perceived as sources approaching versus receding from a listener. It has been suggested that a greater biological salience of approaching auditory sources may account for these effects. In addition, these effects may hold only for those sources critical for our survival. In the present study, we bring support to these hypotheses by quantifying the emotional responses to different sounds with changing intensity patterns. In 2 experiments, participants were exposed to artificial and natural sounds simulating approaching or receding sources. The auditory-induced emotional effect was reflected in the performance of participants in an emotion-related behavioral task, their self-reported emotional experience, and their physiology (electrodermal activity and facial electromyography). The results of this study suggest that approaching unpleasant sound sources evoke more intense emotional responses in listeners than receding ones, whereas such an effect of perceived sound motion does not exist for pleasant or neutral sound sources. The emotional significance attributed to the sound source itself, the loudness of the sound, and loudness change duration seem to be relevant factors in this disparity.

5.
The categories that social targets belong to are often activated automatically. Most studies investigating social categorization have used visual stimuli or verbal labels, whereas ethnolinguistic identity theory posits that language is an essential dimension of ethnic identity. Language should therefore be used for social categorization. In 2 experiments, using the "Who Said What?" paradigm, the authors investigated social categorization by using accents (auditory stimuli) and looks (visual stimuli) to indicate ethnicity, either separately or in combination. Given either looks or accents only, the authors demonstrated that ethnic categorization can be based on accents, and the authors found a similar degree of ethnic categorization by accents and looks. When ethnic cues of looks and accents were combined by creating cross categories, there was a clear predominance of accents as meaningful cues for categorization, as shown in the respective parameters of a multinomial model. The present findings are discussed with regard to the generalizability of findings using one channel of presentation (e.g., visual) and the asymmetry found with different presentation channels for the category of ethnicity.

6.
Monkey auditory memory was tested with increasing list lengths of 4, 6, 8, and 10 sounds. Five hundred twenty environmental sounds of 3-s duration were used. In Experiment 1, the monkeys initiated each list by touching the center speaker. They touched 1 of 2 side speakers to indicate whether a single test sound (presented from both side speakers simultaneously) was or was not in the list. The serial-position functions showed prominent primacy effects (good first-item memory) and recency effects (good last-item memory). Experiment 2 repeated the procedure without the list-initiation response and with a variable intertrial interval. The results of both experiments were similar and are discussed in relation to theories and hypotheses of serial-position effects.
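A serial-position function of the kind reported here is just proportion correct plotted against the probed list position. A minimal sketch of the computation on toy trial data (the data and function name are illustrative, not from the study):

```python
from collections import defaultdict

def serial_position_curve(trials):
    """trials: iterable of (probe_position, correct) pairs.
    Returns proportion correct at each serial position."""
    hits, total = defaultdict(int), defaultdict(int)
    for pos, correct in trials:
        total[pos] += 1
        hits[pos] += int(correct)
    return {pos: hits[pos] / total[pos] for pos in sorted(total)}

# toy data showing a primacy effect (position 1) and a recency effect (position 4)
trials = [(1, True), (1, True), (2, False), (2, True),
          (3, False), (3, False), (4, True), (4, True)]
print(serial_position_curve(trials))  # {1: 1.0, 2: 0.5, 3: 0.0, 4: 1.0}
```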

7.
8.

9.
Sound-processing strategies that use the highly non-random structure of natural sounds may confer evolutionary advantage to many species. Auditory processing of natural sounds has been studied almost exclusively in the context of species-specific vocalizations, although these form only a small part of the acoustic biotope. To study the relationships between properties of natural soundscapes and neuronal processing mechanisms in the auditory system, we analysed sound from a range of different environments. Here we show that for many non-animal sounds and background mixtures of animal sounds, energy in different frequency bands is coherently modulated. Co-modulation of different frequency bands in background noise facilitates the detection of tones in noise by humans, a phenomenon known as co-modulation masking release (CMR). We show that co-modulation also improves the ability of auditory-cortex neurons to detect tones in noise, and we propose that this property of auditory neurons may underlie behavioural CMR. This correspondence may represent an adaptation of the auditory system for the use of an attribute of natural sounds to facilitate real-world processing tasks.

10.
11.
Response areas (RAs) of sensory neurones are dynamically modified by attention, denervation of specific afferent input, blocking inhibition, and by prolonged conditioning with extra-RA stimuli. Here we demonstrate in auditory neurones that the RA is also critically influenced by the background to stimuli. When RAs are measured in the presence of non-excitatory extra-RA tones, new RAs arise at frequencies otherwise not excitatory, as a consequence of non-linear receptor organ transduction. The new RAs can become more sensitive than the RA in quiet conditions such that neurones are then effectively tuned to a new frequency. Thus, even in a modestly complex environment, auditory neurones do not signal a fixed range of sounds but effectively code sounds to which they are otherwise unresponsive.  相似文献   

12.
13.
1. Cats with one cochlea destroyed were trained to localize sound. After behavioral measures of the animal's accuracy of localization were made, cortical auditory areas were ablated unilaterally. 2. The results showed: a) like binaural localization, monaural localization of sound in space, as measured by the ability of an animal to move toward a sound source, depends on integrity of auditory cortex; b) it is only ablation of cortex contralateral to the functional ear that seriously affects localizing behavior; ablation of cortex ipsilateral to the intact cochlea has little or no effect on localizing behavior. 3. To explain the results, we suggest that auditory cortex is essential for an organized perception of space including the relation of the animal's position to other objects in space. We also suggest that auditory cortex contralateral to a given ear is necessary in order for the animal to recognize that a stimulus is presented to that ear or, when both ears are intact, to recognize that the stimulus to the given ear differs in some way (intensity, time of arrival, sequential arrangement of sounds) from the stimulus to the opposite ear.

14.
Used 2-choice and 3-choice tests to evaluate the effects of bilateral auditory cortical lesions on pure-tone sound localization by 10 male albino rats. Both tests required that Ss approach a distant sound source to obtain water reinforcement. Stimuli were single noise and tone bursts, 65 msec in duration including 20-msec rise and fall times. Tone frequencies were 2, 4, 8, 16, and 32 kHz adjusted to 40 dB (sound pressure level) above the S's absolute threshold. Five Ss were tested in the 2-choice situation following bilateral ablation of auditory cortex. Some reduction in performance was observed relative to normals, but impairments were not severe. Similar results were obtained for 2 brain-damaged Ss tested in the 3-choice situation. Thus, the ability to localize sounds in space remained intact after complete destruction of auditory cortex, and there was no indication of a frequency-dependent deficit. Findings are considered in relation to the more severe deficits observed in other mammals after lesions of the auditory cortex. (30 ref)

15.
Detection of auditory signals by frog inferior collicular neurons in the presence of spatially separated noise. J. Neurophysiol. 80: 2848-2859, 1998. Psychophysical studies have shown that the ability to detect auditory signals embedded in noise improves when signal and noise sources are widely separated in space; this allows humans to analyze complex auditory scenes, as in the cocktail-party effect. Although these studies established that improvements in detection threshold (DT) are due to binaural hearing, few physiological studies were undertaken, and very little is known about the response of single neurons to spatially separated signal and noise sources. To address this issue we examined the responses of neurons in the frog inferior colliculus (IC) to a probe stimulus embedded in a spatially separated masker. Frogs perform auditory scene analysis because females select mates in dense choruses by means of auditory cues. Results of the extracellular single-unit recordings demonstrate that 22% of neurons (A-type) exhibited improvements in signal DTs when probe and masker sources were progressively separated in azimuth. In contrast, 24% of neurons (V-type) showed the opposite pattern, namely, signal DTs were lowest when probe and masker were colocalized (in many instances lower than the DT to probe alone) and increased when the two sound sources were separated. The remaining neurons demonstrated a mix of these two types of patterns. An intriguing finding was the strong correlation between A-type masking release patterns and phasic neurons and a weaker correlation between V-type patterns and tonic neurons. Although not decisive, these results suggest that phasic units may play a role in release from masking observed psychophysically. 
Analysis of the data also revealed a strong and nonlinear interaction among probe, masker, and masker azimuth and that signal DTs were influenced by two factors: 1) the unit's sensitivity to probe in the presence of masker and 2) the criterion level for estimating DT. For some units, it was possible to examine the interaction between these two factors and gain insights into the variation of DTs with masker azimuth. The implications of these findings are discussed in relation to signal detection in the auditory system.
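A neural detection threshold of the kind discussed here is commonly estimated as the lowest probe level at which the unit's response exceeds its masker-alone baseline by a fixed criterion. A minimal sketch of that criterion rule (the function name, criterion value, and rate-level data are illustrative, not values from the study):

```python
def detection_threshold(levels_db, rates, baseline_rate, criterion):
    """Lowest probe level (dB) whose response exceeds the masker-alone
    baseline by `criterion` spikes/s; None if never reached."""
    for level, rate in sorted(zip(levels_db, rates)):
        if rate >= baseline_rate + criterion:
            return level
    return None

levels = [20, 30, 40, 50, 60]        # probe levels, dB
rates  = [ 5,  6, 12, 25, 40]        # spikes/s with masker present
print(detection_threshold(levels, rates, baseline_rate=5, criterion=5))  # 40
```

Raising or lowering the criterion shifts the estimated DT, which is why the abstract lists the criterion level as a second factor influencing the thresholds.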

16.
Sensory saltation is a spatiotemporal illusion in which the judged positions of stimuli are shifted toward subsequent stimuli that follow closely in time. So far, studies on saltation in the auditory domain have usually employed subjective rating techniques, making it difficult to exactly quantify the extent of saltation. In this study, temporal and spatial properties of auditory saltation were investigated using the "reduced-rabbit" paradigm and a direct-location method. In 3 experiments, listeners judged the position of the 2nd sound within sequences of 3 short sounds by using a hand pointer. When the delay between the 2nd and 3rd sound was short, the target sound was shifted toward the subsequent sound. The magnitude of displacement increased when the temporal and spatial distance between the sounds was reduced. In a 4th experiment, a modified reduced-rabbit paradigm was used to test the hypothesis that auditory saltation is associated with an impairment of target sound localization. The findings are discussed with regard to a spatiotemporal integration approach in which the processing of auditory information is combined with information from subsequent stimuli.

17.
Little research has explored the auditory categorization abilities of mammals. To better understand these processes, the authors tested the abilities of rats (Rattus norvegicus) to categorize multidimensional acoustic stimuli by using a classic category-learning task developed by R. N. Shepard, C. I. Hovland, and H. M. Jenkins (1961). Rats proved to be able to categorize 8 complex sounds on the basis of either the direction or rate of frequency modulation but not on the basis of the range of frequency modulation. Rats' categorization abilities were limited but improved slowly and incrementally, suggesting that learning was not facilitated by selective attention to acoustic dimensions.
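The Shepard, Hovland, and Jenkins task builds 8 stimuli from 3 binary dimensions and partitions them into two 4-item categories of varying rule complexity. A minimal sketch of three of the six classic problem types; mapping the binary dimensions onto FM direction, rate, and range is my assumption for illustration:

```python
from itertools import product

# 8 stimuli from 3 binary dimensions (e.g., FM direction, rate, range)
stimuli = list(product([0, 1], repeat=3))

def type_I(s):   return s[0] == 1                  # single relevant dimension
def type_II(s):  return (s[0] ^ s[1]) == 1         # XOR of two dimensions
def type_VI(s):  return (s[0] ^ s[1] ^ s[2]) == 1  # parity: all three needed

for name, rule in [("I", type_I), ("II", type_II), ("VI", type_VI)]:
    members = [s for s in stimuli if rule(s)]
    print(name, len(members), members)   # each category has 4 members
```

Successful Type I performance on direction or rate alone, as the rats showed, requires attending to only one dimension; the harder types require conjunctions of dimensions.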

18.
Found that 2-day-old White Leghorn and New Hampshire chicks showed an unlearned preference for an ancestral maternal call over a brief, repetitive pure tone burst when choice preference tests were between stationary models emitting maternal call and tone burst sounds. However, other Ss of both breeds showed an unlearned preference for tone burst over maternal call when choice preference tests were between moving models emitting tone burst and call sounds. These same preferences were found in Ss that had been imprinted (exposed) to moving call and tone burst sounds on their 1st posthatch day. The tone bursts were briefer than the call note duration (25 vs 80 msec). Since very brief sound bursts are easier to localize, it is concluded that Ss preferred tone bursts over calls when sound sources were moving because of the greater ease of localizing tone bursts. Along with other recent data, the failure to find imprinting to a maternal call or to tone bursts (i.e., the call and tone burst preferences found were uninfluenced by a brief prior exposure to either sound) suggests the need to question whether or not auditory imprinting occurs in the domestic chick. (12 ref)

19.
In auditory warning design the idea of the strength of the association between sound and referent has been pivotal. Research has proceeded via constructing classification systems of signal-referent associations and then testing predictions about ease of learning of different levels of signal-referent relation strength across and within different types of auditory signal (viz., speech, abstract sounds, and auditory icons). However, progress is hampered by terminological confusions and by neglect of the cognitive contribution (viz., learning) of the person or user. Drawing upon semiotics and cognitive psychology, the authors highlight the indexical (as opposed to iconic) nature of so-called auditory icons, and the authors identify the cogniser as an indispensable element in the tripartite nature of signification. Classifications that neglect this third element, defining signal-referent relation strength only dyadically, yield results confounded by learning; classifications that correctly include the triadic relation yield research predictions that are redundant. These limitations of the standard method of constructing and testing classification systems suggest that auditory warning design must take the cognitive contribution of the user into account at an earlier stage in the design process.

20.
The division of the auditory cortex into various fields, functional aspects of these fields, and neuronal coding in the primary auditory cortical field (AI) are reviewed with stress on features that may be common to mammals. On the basis of 14 topographies and clustered distributions of neuronal response characteristics in the primary auditory cortical field, a hypothesis is developed of how a certain complex acoustic pattern may be encoded in an equivalent spatial activity pattern in AI, generated by time-coordinated firing of groups of neurons. The auditory cortex, demonstrated specifically for AI, appears to perform sound analysis by synthesis, i.e. by combining spatially distributed coincident or time-coordinated neuronal responses. The dynamics of sounds and the plasticity of cortical responses are considered as a topic for research.
