Similar Documents
20 similar documents retrieved.
1.
Duplex perception has been interpreted as revealing distinct systems for general auditory perception and speech perception. The systems yield distinct experiences of the same acoustic signal, one conforming to the acoustic structure itself and the other to its source in vocal-tract activity. However, this interpretation has not been tested by examining whether duplex perception can be obtained for nonspeech sounds that are not plausibly perceived by a specialized system. In 5 experiments, some of the phenomena associated with duplex perception of speech are replicated using the sound of a slamming door. Similarities between 26 university students' responses to syllables and door sounds are striking enough to suggest that two conclusions in the speech literature should be tempered: (1) that duplex perception is special to sounds for which there are perceptual modules, and (2) that duplex perception occurs because distinct systems have rendered different percepts of the same acoustic signal. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
An important aspect of the analysis of auditory “scenes” relates to the perceptual organization of sound sequences into auditory “streams.” In this study, we adapted two auditory perception tasks, used in recent human psychophysical studies, to obtain behavioral measures of auditory streaming in ferrets (Mustela putorius). One task involved the detection of shifts in the frequency of tones within an alternating tone sequence. The other task involved the detection of a stream of regularly repeating target tones embedded within a randomly varying multitone background. In both tasks, performance was measured as a function of various stimulus parameters, which previous psychophysical studies in humans have shown to influence auditory streaming. Ferret performance in the two tasks was found to vary as a function of these parameters in a way that is qualitatively consistent with the human data. These results suggest that auditory streaming occurs in ferrets, and that the two tasks described here may provide a valuable tool in future behavioral and neurophysiological studies of the phenomenon. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
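The two ferret tasks rest on stimuli that are simple to generate: an alternating tone sequence (commonly arranged as repeating ABA- triplets) whose A-B frequency separation and rate govern whether one or two streams are heard, and a regular target stream embedded in a random multitone background. A minimal Python/NumPy sketch of the first kind of stimulus, with assumed tone durations, gaps, and frequency separation rather than the study's actual parameters:

```python
import numpy as np

def tone(freq_hz, dur_s, fs=44100, ramp_s=0.005):
    """Pure tone with raised-cosine onset/offset ramps to avoid clicks."""
    t = np.arange(int(dur_s * fs)) / fs
    y = np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(ramp_s * fs)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    y[:n_ramp] *= ramp
    y[-n_ramp:] *= ramp[::-1]
    return y

def aba_sequence(f_a=1000.0, delta_semitones=6.0, tone_dur=0.1,
                 gap=0.02, n_triplets=10, fs=44100):
    """Repeating ABA- triplets; the trailing '-' is a silent slot.
    A larger A-B separation (delta_semitones) or a shorter tone_dur
    favors hearing two segregated streams (assumed parameter values)."""
    f_b = f_a * 2 ** (delta_semitones / 12.0)
    gap_samps = np.zeros(int(gap * fs))
    triplet = np.concatenate([
        tone(f_a, tone_dur, fs), gap_samps,
        tone(f_b, tone_dur, fs), gap_samps,
        tone(f_a, tone_dur, fs), gap_samps,
        np.zeros(int(tone_dur * fs)),        # silent '-' slot
    ])
    return np.tile(triplet, n_triplets)

seq = aba_sequence()
print(f"generated {len(seq) / 44100:.2f} s of ABA- sequence")
```

Varying the frequency separation and presentation rate in a stimulus like this is the kind of parameter manipulation whose behavioral effect the ferret experiments measured.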

3.
Three experiments examined various facets of the perception of continuous and discontinuous line segments in pigeons. Pigeons were presented with 2 straight lines that were interrupted by a gap. In some instances, the lines were at the same angle and were positioned so that they appeared (to human observers) to form a continuous line. In other instances, the lines were at different angles, or at the same angle but spatially misaligned. The birds were trained to classify each stimulus as continuous or discontinuous using a go/no-go procedure. A series of tests followed in which the birds received novel discontinuous displays made up of familiar line segments, continuous and discontinuous stimuli made up of novel line segments (novel straight lines or curved lines), and familiar displays in which the gap was covered with a gray square. Results from the tests indicated that 2 of the 3 pigeons had learned a continuous-discontinuous categorization and that they appeared to use the relationship between the 2 line segments in discriminating the displays. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
People frequently analyze the actions of other people for the purpose of action coordination. To understand whether such self-relative action perception differs from other-relative action perception, the authors had observers either compare their own walking speed with that of a point-light walker or compare the walking speeds of 2 point-light walkers. In Experiment 1, observers walked, bicycled, or stood while performing a gait-speed discrimination task. Walking observers demonstrated the poorest sensitivity to walking speed, suggesting that perception and performance of the same action alters visual-motion processes. Experiments 2-6 demonstrated that the processes used during self-relative and other-relative action perception differ significantly in their dependence on observers' previous motor experience, current motor effort, and potential for action coordination. These results suggest that the visual analysis of human motion during traditional laboratory studies can differ substantially from the visual analysis of human movement under more realistic conditions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
The perception of complex sounds, such as speech and animal vocalizations, requires the central auditory system to analyze rapid, ongoing fluctuations in sound frequency and intensity. A decline in temporal acuity has been identified as one component of age-related hearing loss. The detection of short, silent gaps is thought to reflect an important fundamental dimension of temporal resolution. In this study, we compared the responses of single neurons in the inferior colliculus (IC) of young and old CBA mice to silent gaps embedded in noise. IC neurons were classified by their temporal discharge patterns. Phasic units, which accounted for the majority of response types encountered, tended to have the shortest minimal gap thresholds (MGTs), regardless of age. We report three age-related changes in neural processing of silent gaps. First, although the shortest MGTs (1-2 msec) were observed in phasic units from both young and old animals, the number of neurons exhibiting the shortest MGTs was much lower in old mice, regardless of the presentation level. Second, in the majority of phasic units, recovery of response to the stimulus after the silent gap was of a lower magnitude and much slower in units from old mice. Finally, the neuronal map representing response latency versus best frequency was found to be altered in the old IC. These results demonstrate a central auditory system correlate for age-related decline in temporal processing at the level of the auditory midbrain.
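Gap-detection stimuli of the sort described here are broadband noise bursts interrupted by a brief silence; the minimal gap threshold (MGT) is the shortest gap that still evokes a reliable response. A hedged Python/NumPy sketch of such a stimulus, with assumed sampling rate, durations, and ramp times rather than the study's actual values:

```python
import numpy as np

def noise_with_gap(total_dur=0.5, gap_dur=0.002, gap_onset=0.25,
                   fs=44100, ramp_s=0.001, rng=None):
    """Broadband Gaussian noise interrupted by a silent gap of gap_dur s.
    Short linear ramps at the gap edges limit spectral splatter
    (all parameter values here are assumed, not the study's)."""
    rng = rng or np.random.default_rng(0)
    y = rng.standard_normal(int(total_dur * fs))
    i0 = int(gap_onset * fs)                 # gap start (samples)
    i1 = i0 + int(gap_dur * fs)              # gap end (samples)
    n_ramp = int(ramp_s * fs)
    ramp = np.linspace(1.0, 0.0, n_ramp)
    y[i0 - n_ramp:i0] *= ramp                # fade out into the gap
    y[i0:i1] = 0.0                           # the silent gap itself
    y[i1:i1 + n_ramp] *= ramp[::-1]          # fade back in
    return y

# Candidate gaps bracketing the 1-2 ms thresholds reported for phasic units
for g in (0.001, 0.002, 0.004, 0.008):
    stim = noise_with_gap(gap_dur=g)
    print(f"gap {g * 1000:.0f} ms: {len(stim)} samples, "
          f"{int(g * 44100)} of them silent at 44.1 kHz")
```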

6.
Lateralized displays are used widely to investigate hemispheric asymmetry in language perception. However, few studies have used lateralized displays to investigate hemispheric asymmetry in visual speech perception, and those that have done so have yielded mixed results. This issue was investigated in the current study by presenting visual speech to either the left hemisphere (LH) or the right hemisphere (RH) using the face as recorded (normal), a mirror image of the normal face (reversed), and chimeric displays constructed by duplicating and reversing just one hemiface (left or right) to form symmetrical images (left-duplicated, right-duplicated). The projection of displays to each hemisphere was controlled precisely by an automated eye-tracking technique. Visual speech perception showed the same, clear LH advantage for normal and reversed displays, a greater LH advantage for right-duplicated displays, and no hemispheric difference for left-duplicated displays. Of particular note is that perception of LH displays was affected greatly by the presence of right-hemiface information, whereas perception of RH displays was unaffected by changes in hemiface content. Thus, when investigated under precise viewing conditions, the indications are not only that the dominant processes of visual speech perception are located in the LH but that these processes are uniquely sensitive to right-hemiface information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
The authors examined age-related differences in the detection of collision events. Older and younger observers were presented with displays simulating approaching objects that would either collide or pass by the observer. In 4 experiments, the authors found that older observers, as compared with younger observers, showed reduced sensitivity in detecting collisions as speed increased, at shorter display durations, and under longer time-to-contact conditions. Older observers also had greater difficulty when the scenario simulated observer motion, suggesting that older observers have difficulty discriminating the expansion of a moving object from the background expansion produced by their own motion. The results of these studies support the expansion sensitivity hypothesis: that age-related decrements in detecting collision events involving moving objects result from a decreased ability to recover expansion information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
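The "expansion information" at issue can be made concrete with the optical variable tau: for an object approaching at constant speed, time-to-contact is approximately the object's current visual angle divided by its rate of expansion, TTC ≈ θ / (dθ/dt). A small worked example under assumed, hypothetical object size, distance, and speed (not the parameters of these displays):

```python
import math

def optical_angle(size_m, distance_m):
    """Visual angle (radians) subtended by an object of given physical size."""
    return 2 * math.atan(size_m / (2 * distance_m))

# Hypothetical approach: a 0.5 m object, 20 m away, closing at 10 m/s
size, d, v, dt = 0.5, 20.0, 10.0, 0.05
theta_now = optical_angle(size, d)
theta_next = optical_angle(size, d - v * dt)
theta_dot = (theta_next - theta_now) / dt    # rate of optical expansion

ttc_tau = theta_now / theta_dot              # estimate from expansion alone
ttc_true = d / v                             # ground truth at constant speed
print(f"tau-based TTC: {ttc_tau:.2f} s, true TTC: {ttc_true:.2f} s")
```

The hypothesis described above amounts to the claim that older observers recover θ/(dθ/dt) less reliably, especially when self-motion adds background expansion.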

8.
Auditory stream segregation (or streaming) is a phenomenon in which 2 or more repeating sounds differing in at least 1 acoustic attribute are perceived as 2 or more separate sound sources (i.e., streams). This article selectively reviews psychophysical and computational studies of streaming and comprehensively reviews more recent neurophysiological studies that have provided important insights into the mechanisms of streaming. On the basis of these studies, segregation of sounds is likely to occur beginning in the auditory periphery and continuing at least to primary auditory cortex for simple cues such as pure-tone frequency but at stages as high as secondary auditory cortex for more complex cues such as periodicity pitch. Attention-dependent and perception-dependent processes are likely to take place in primary or secondary auditory cortex and may also involve higher level areas outside of auditory cortex. Topographic maps of acoustic attributes, stimulus-specific suppression, and competition between representations are among the neurophysiological mechanisms that likely contribute to streaming. A framework for future research is proposed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
The simple task of bouncing a ball on a racket offers a model system for studying how human actors exploit the physics and information of the environment to control their behavior. Previous work shows that people take advantage of a passively stable solution for ball bouncing but can also use perceptual information to actively stabilize bouncing. In this article, we investigate (a) active and passive contributions to the control of bouncing, (b) the visual information in the ball's trajectory, and (c) how it modulates the parameters of racket oscillation. We used a virtual ball bouncing apparatus to manipulate the coefficient of restitution α and gravitational acceleration g during steady-state bouncing (Experiment 1) and sudden transitions (Experiment 2) to dissociate informational variables. The results support a form of mixed control, based on the half-period of the ball's trajectory, in which racket oscillation is actively regulated on every cycle in order to keep the system in or near the passively stable region. The mixed control mode may be a general strategy for integrating passive stability with active stabilization in perception–action systems. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
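A standard way to model this task (not necessarily the authors' implementation) treats the racket as much more massive than the ball, so that at each impact the ball's velocity is reset to v' = (1 + α)·v_racket − α·v_ball, with ballistic flight under gravity g between impacts. A minimal simulation sketch under those assumptions, with illustrative racket amplitude and frequency:

```python
import numpy as np

def simulate_bouncing(alpha=0.8, g=9.81, racket_amp=0.01, racket_freq=1.0,
                      n_hits=20, dt=1e-4):
    """Ball bouncing on a sinusoidally oscillating racket.
    Impact rule (massive racket assumption):
    v_ball' = (1 + alpha) * v_racket - alpha * v_ball."""
    omega = 2 * np.pi * racket_freq
    racket = lambda t: racket_amp * np.sin(omega * t)
    racket_vel = lambda t: racket_amp * omega * np.cos(omega * t)

    t, y, v = 0.0, racket(0.0) + 1e-6, 0.5   # ball starts just above the racket
    hit_times = []
    while len(hit_times) < n_hits:
        t += dt
        v -= g * dt                          # ballistic flight
        y += v * dt
        if y <= racket(t) and v < racket_vel(t):   # ball meets racket from above
            v = (1 + alpha) * racket_vel(t) - alpha * v
            y = racket(t)
            hit_times.append(t)

    periods = np.diff(hit_times)
    print(f"mean bounce period: {periods.mean():.3f} s, "
          f"variability: {periods.std():.3f} s")

simulate_bouncing()
```

Manipulating alpha and g in a simulation like this changes where the passively stable region lies, which is the kind of perturbation the virtual apparatus imposed on participants.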

10.
Two pairs of experiments studied the effects of attention and of unilateral neglect on auditory streaming. The first pair showed that the buildup of auditory streaming in normal participants is greatly reduced or absent when they attend to a competing task in the contralateral ear. It was concluded that the effective buildup of streaming depends on attention. The second pair showed that patients with an attentional deficit toward the left side of space (unilateral neglect) show less stream segregation of tone sequences presented to their left than to their right ears. Streaming in their right ears was similar to that for stimuli presented to either ear of healthy and of brain-damaged controls, who showed no across-ear asymmetry. This result is consistent with an effect of attention on streaming, constrains the neural sites involved, and reveals a qualitative difference between the perception of left- and right-sided sounds by neglect patients. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
In 4 experiments, the authors examined sex differences in audiospatial perception of sounds that moved toward and away from the listener. Experiment 1 showed that both men and women underestimated the time-to-arrival of full-cue looming sounds. However, this perceptual bias was significantly stronger among women than among men. In Experiment 2, listeners estimated the terminal distance of sounds that approached but stopped before reaching them. Women perceived the looming sounds as closer than did men. However, in Experiment 3, with greater statistical power, the authors found no sex difference in the perceived distance of sounds that traveled away from the listener, demonstrating a sex-based specificity for auditory looming perception. Experiment 4 confirmed these results using equidistant looming and receding sounds. The findings suggest that sex differences in auditory looming perception are not due to general differences in audiospatial ability, but rather illustrate the environmental salience and evolutionary importance of perceiving looming objects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Using a habituation/test procedure, the author investigated adults' and infants' perception of auditory–visual temporal synchrony. Participants were familiarized with a bouncing green disk and a sound that occurred each time the disk bounced. Then, they were given a series of asynchrony test trials where the sound occurred either before or after the disk bounced. The magnitude of the auditory–visual temporal asynchrony threshold differed markedly in adults and infants. The threshold for the detection of asynchrony created by a sound preceding a visible event was 65 ms in adults and 350 ms in infants; the threshold for asynchrony created by a sound following a visible event was 112 ms in adults and 450 ms in infants. Also, infants did not respond to asynchronies that exceeded intervals that yielded reliable discrimination. Infants' perception of auditory–visual temporal unity is guided by a synchrony and an asynchrony window, both of which become narrower in development. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

13.
Dutch listeners were exposed to the English theta sound (as in bath), which replaced [f] in /f/-final Dutch words or, for another group, [s] in /s/-final words. A subsequent identity-priming task showed that participants had learned to interpret theta as, respectively, /f/ or /s/. Priming effects were equally strong when the exposure sound was an ambiguous [fs]-mixture and when primes contained unambiguous fricatives. When the exposure sound was signal-correlated noise, listeners interpreted it as the spectrally similar /f/, irrespective of lexical bias during exposure. Perceptual learning about speech is thus constrained by spectral similarity between the input and established phonological categories, but within those limits, adjustments are thorough enough that even nonnative sounds can be treated fully as native sounds. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual sensitivity to physically possible gaits was superior to visual sensitivity to physically impossible gaits, supporting perception-action coupling theories of human movement perception. Visual experience influenced walker-identity perception but not gait-speed discrimination. Thus, both motor experience and visual experience define visual sensitivity to human movement. An ecological perspective can be used to define the conditions necessary for experience-dependent sensitivity to human movement. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Sounds arriving at the eardrum are filtered by the external ear and associated structures in a frequency- and direction-specific manner. When convolved with the appropriate filters and presented to human listeners through headphones, broadband noises can be precisely localized to the corresponding position outside of the head (reviewed in Blauert, 1997). Such a 'virtual auditory space' can be a potentially powerful tool for neurophysiological and behavioral work in other species as well. We are developing a virtual auditory space for the barn owl, Tyto alba, a highly successful auditory predator that has become a well-established model for hearing research. We recorded catalogues of head-related transfer functions (HRTFs) from the frontal hemisphere of 12 barn owls and compared virtual and free sound fields acoustically and by their evoked neuronal responses. HRTFs were recorded by inserting probe-tube microphones to within about 1 or 2 mm of the eardrum; the inner ca. 1 cm of the ear canal was found to contribute little to the directionality of the HRTFs. We recorded HRTFs at frequencies between 2 and 11 kHz, which includes the frequencies most useful to the owl for sound localization (3-9 kHz; Konishi, 1973). Spectra of virtual sounds were within +/- 1 dB in amplitude and +/- 10 degrees in phase of the spectra of free-field sounds measured near the eardrum. The spatial patterns of responses obtained from neurons in the inferior colliculus were almost indistinguishable in response to virtual and to free-field stimulation.
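The rendering step behind such a virtual auditory space is conceptually simple: convolve the source signal with the left- and right-ear head-related impulse responses (the time-domain counterparts of the HRTFs) measured for the desired direction, then present the two channels over earphones. A hedged Python sketch of that step, using random placeholder impulse responses rather than measured owl HRTFs:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48000
rng = np.random.default_rng(1)

# 0.5 s broadband noise source
source = rng.standard_normal(fs // 2)

# Placeholder 256-tap impulse responses for one direction; real HRIRs
# come from probe-tube recordings near the eardrum, as in the study above
hrir_left = rng.standard_normal(256) * np.hanning(256)
hrir_right = rng.standard_normal(256) * np.hanning(256)

# Virtual-space rendering: filter the same source with each ear's HRIR
left = fftconvolve(source, hrir_left, mode="full")
right = fftconvolve(source, hrir_right, mode="full")
binaural = np.stack([left, right], axis=1)   # (n_samples, 2) earphone signal

print("binaural stimulus shape:", binaural.shape)
```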

16.
Human face perception is a finely tuned, specialized process. When comparing faces between species, therefore, it is essential to consider how people make these observational judgments. Comparing facial expressions may be particularly problematic, given that people tend to consider them categorically as emotional signals, which may affect how accurately specific details are processed. The bared-teeth display (BT), observed in most primates, has been proposed as a homologue of the human smile (J. A. R. A. M. van Hooff, 1972). In this study, judgments of similarity between BT displays of chimpanzees (Pan troglodytes) and human smiles varied in relation to perceived emotional valence. When a chimpanzee BT was interpreted as fearful, observers tended to underestimate the magnitude of the relationship between certain features (the extent of lip corner raise) and human smiles. These judgments may reflect the combined effects of categorical emotional perception, configural face processing, and perceptual organization in mental imagery and may demonstrate the advantages of using standardized observational methods in comparative facial expression research. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Sensory saltation is a spatiotemporal illusion in which the judged positions of stimuli are shifted toward subsequent stimuli that follow closely in time. So far, studies on saltation in the auditory domain have usually employed subjective rating techniques, making it difficult to exactly quantify the extent of saltation. In this study, temporal and spatial properties of auditory saltation were investigated using the "reduced-rabbit" paradigm and a direct-location method. In 3 experiments, listeners judged the position of the 2nd sound within sequences of 3 short sounds by using a hand pointer. When the delay between the 2nd and 3rd sound was short, the target sound was shifted toward the subsequent sound. The magnitude of displacement increased when the temporal and spatial distance between the sounds was reduced. In a 4th experiment, a modified reduced-rabbit paradigm was used to test the hypothesis that auditory saltation is associated with an impairment of target sound localization. The findings are discussed with regard to a spatiotemporal integration approach in which the processing of auditory information is combined with information from subsequent stimuli. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
The pine, gopher, or bull snake (Pituophis melanoleucus) makes two different defensive sounds. Hisses are characterized by lack of frequency and amplitude modulation; bellows have a brief initial period of high-amplitude, broad-frequency sound followed by a longer period of lower-amplitude, constant-frequency sound. Both defensive sounds contain distinct harmonic elements. The modulation and harmonic nature of these sounds seems to be unique among snakes. The larynx of Pituophis is unusual in having an epiglottal keel, a dorsal expansion of the cricoid cartilage, previously proposed to contribute to sound production; however, this study shows that it plays only a small role in increasing the amplitude of bellows. Within the larynx of Pituophis is a "vocal cord," the laryngeal septum, which is a flexible, horizontal shelf of tissue that divides the anterior portion of the larynx. Removal of the laryngeal septum alters the defensive sounds and eliminates their harmonic elements. The laryngeal septum is unique among previously described vertebrate vocal cords or folds because it is supported by the cricoid (as opposed to arytenoid) cartilage and is a single (as opposed to bilaterally paired) structure.

19.
The perception of affordances for the actions of other people (actors) was examined. Observers judged the maximum and preferred sitting heights of tall and short actors. Judgments were scaled in centimeters, as a proportion of the observer's leg length, and as a proportion of each actor's leg length. In Experiment 1 observers viewed live actors standing next to a chair. When judgments were scaled by actor leg length, they reflected the actual ordinal relation between the capabilities of the actors. The perception of affordances from kinematic displays was then evaluated. Observers differentiated tall and short actors, but only when the displays contained direct information about relations between the actors and the chair. It is concluded that observers can perceive affordances for the actions of actors and that kinematic displays can be enough to support such percepts if they preserve actor–environment relations that define affordances. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
In the natural world, a number of visual cues indicate that an item is quickly approaching the perceiver. Binocular disparity is one cue for depth, and it has been demonstrated that abrupt changes in disparity, artificially unaccompanied by correlated depth cues, are capable of causing the perception of looming for the observer. An experiment involving 38 undergraduates, using a computer-controlled stereoscopic display, examined the ability of above-threshold changes in disparity (artificial looming) to facilitate response time and accuracy for observers engaged in an object-enumeration task within a cluttered display. Compared with performance using the same stimuli without disparity information (lateral motion), participants were more accurate regardless of the disparity level (9, 12, 24, or 48 minutes of arc) and faster at the two lowest levels of disparity. Participants showed the classic subitizing function, suggesting that target stimuli presented with motion information were segregated from otherwise identical distractor items. It is proposed that binocular disparity information can act as a valid location cuing method in stereoscopic computer displays in which form and color information are to be preserved.
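The disparity magnitudes quoted here (minutes of arc) map onto viewing geometry through the standard small-angle approximation: with interpupillary distance a, fixation distance D, and object distance d, relative disparity is roughly a·(1/d − 1/D) radians. A worked example with assumed values (a 6.3 cm interpupillary distance and a 57 cm viewing distance, not the study's actual display geometry):

```python
import math

def disparity_arcmin(ipd_m, fix_dist_m, obj_dist_m):
    """Relative binocular disparity (arcmin) of an object at obj_dist_m
    while fixating at fix_dist_m, small-angle approximation."""
    disparity_rad = ipd_m * (1.0 / obj_dist_m - 1.0 / fix_dist_m)
    return math.degrees(disparity_rad) * 60.0

ipd, fixation = 0.063, 0.57              # assumed: 6.3 cm IPD, 57 cm screen
for obj in (0.57, 0.55, 0.50, 0.45):      # simulated approach toward the viewer
    d = disparity_arcmin(ipd, fixation, obj)
    print(f"object at {obj * 100:.0f} cm -> {d:5.1f} arcmin of disparity")
```

Under these assumed values, simulated approaches of a few centimeters already produce disparity changes on the order of the 9-48 arcmin levels tested above.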
