Similar Documents
20 similar documents found (search time: 15 ms)
1.
Experiments were conducted investigating unimodal and cross-modal phonetic context effects on /r/ and /l/ identifications to test a hypothesis that context effects arise in early auditory speech processing. Experiment 1 demonstrated an influence of a preceding bilabial stop consonant on the acoustic realization of /r/ and /l/ produced within the stop clusters /ibri/ and /ibli/. In Experiment 2, members of an acoustic /iri/ to /ili/ continuum were paired with an acoustic /ibi/. These dichotic tokens were associated with an increase in "l" identification relative to the /iri/ to /ili/ continuum. In Experiment 3, the /iri/ to /ili/ tokens were dubbed onto a video of a talker saying /ibi/. This condition was associated with a reliable perceptual shift relative to an auditory-only condition in which the /iri/ to /ili/ tokens were presented by themselves, ruling out an account of these context effects as arising during early auditory processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with similar spectrotemporal structure to investigate to what extent humans have adapted to the specific characteristics of natural audiovisual speech. We manipulated spectrotemporal structure of the auditory signal, stimulus length, and task context. Results indicate that the temporal integration window is narrower and more asymmetric for speech than for nonspeech signals. When perceiving audiovisual speech, subjects tolerate visual leading asynchronies, but are nevertheless very sensitive to auditory leading asynchronies that are less likely to occur in natural speech. Thus, speech perception may be fine-tuned to the natural statistics of audiovisual speech, where facial movements always occur before acoustic speech articulation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
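To make the integration-window idea concrete, here is a minimal sketch, assuming the asymmetric window can be caricatured as two half-Gaussians of different width; the widths, and the convention that positive offsets mean the visual signal leads, are illustrative assumptions rather than parameters from the study.

```python
import math

def p_integrated(av_offset_ms, sigma_visual_lead=120.0, sigma_auditory_lead=40.0):
    """Toy asymmetric temporal integration window: probability that the
    auditory and visual streams are perceptually bound. Positive offsets
    mean the visual signal leads (well tolerated for speech); negative
    offsets mean the auditory signal leads (poorly tolerated). The two
    half-Gaussian widths are illustrative, not fitted values."""
    sigma = sigma_visual_lead if av_offset_ms >= 0 else sigma_auditory_lead
    return math.exp(-0.5 * (av_offset_ms / sigma) ** 2)

for offset in (-100, -50, 0, 50, 100, 200):
    print(f"{offset:+4d} ms -> p(bound) = {p_integrated(offset):.2f}")
```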

3.
A replication of the audiovisual test of speech selective adaptation performed by Roberts and Summerfield [Percept. Psychophys. 30, 309-314 (1981)] was conducted. The audiovisual methodology allows for the dissociation of acoustic and phonetic components of an adapting stimulus. Roberts and Summerfield's results have been interpreted to support an auditory basis for selective adaptation. However, their subjects did not consistently report hearing the adaptor as a visually influenced syllable, making this interpretation questionable. In the present experiment, a more compelling audiovisual adaptor was implemented, resulting in a visually influenced percept 99% of the time. Still, systematic adaptation occurred only for the auditory component.

4.
Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Arrays of small squares of 2 colors were presented in various proportions to pigeons on a video screen. Birds pecked differentially at the left or right side of the screen to obtain grain. In Experiment 1, pecking at 1 side was correct when more blue than red elements were displayed; the other was correct with more red than blue. Proportions of responses to the 2 locations reflected the proportions of elements in an orderly manner and were little affected by alterations in spacing or size of elements. When red elements were replaced by green, the discrimination readily transferred to the new arrays. In Experiment 2, 1 side of the screen was correct when uniform red or blue arrays were presented; the other was correct for mixed arrays. Orderly gradients of response location reflected degree of stimulus mixture. Good transfer was obtained with green and blue elements. These results support the robust nature of discriminations of emergent properties of complex arrays when stimuli are equally associated with reinforcement and when response location, and not response rate, indicates stimulus control. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
This study investigated whether compatibility between responses and their consistent sensorial effects influences performance in manual choice reaction tasks. In Experiment 1, responses to the nonspatial stimulus attribute of color were affected by the correspondence between the location of responses and the location of their visual effects. In Experiment 2, a comparable influence was found with nonspatial responses of varying force and nonspatial response effects of varying auditory intensity. Experiment 3 ruled out the hypothesis that acquired stimulus-effect associations may account for this influence of response-effect compatibility. In sum, the results show that forthcoming response effects influence response selection as if these effects were already sensorially present, suggesting that, in line with the classical ideomotor theory, anticipated response effects play a substantial role in response selection. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Participants in Experiments 1 and 2 performed a discrimination and counting task to assess the effect of lead stimulus modality on attentional modification of the acoustic startle reflex. Modality of the discrimination stimuli was changed across subjects. Electrodermal responses were larger during task-relevant stimuli than during task-irrelevant stimuli in all conditions. Larger blink magnitude facilitation was found during auditory and visual task-relevant stimuli, but not for tactile stimuli. Experiment 3 used acoustic, visual, and tactile conditioned stimuli (CSs) in differential conditioning with an aversive unconditioned stimulus (US). Startle magnitude facilitation and electrodermal responses were larger during a CS that preceded the US than during a CS that was presented alone regardless of lead stimulus modality. Although not unequivocal, the present data pose problems for attentional accounts of blink modification that emphasize the importance of lead stimulus modality.

8.
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's “woofing”) and spoken words (e.g., /dɒg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that an auditory stimulus needs sufficient processing time to access its associated meaning before it can modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
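Since the abstract says sensitivity and response criterion were estimated with signal detection theory, a minimal sketch of the standard equal-variance Gaussian computation of d' and c from yes/no detection counts may help; the counts and the log-linear correction below are common conventions chosen for illustration, not the authors' exact analysis.

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: sensitivity d' = z(H) - z(F) and
    criterion c = -(z(H) + z(F)) / 2. A log-linear correction keeps
    z finite when a rate would otherwise be 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Invented counts for illustration (not data from the study):
d, c = dprime_and_criterion(hits=78, misses=22, false_alarms=14, correct_rejections=86)
print(f"d' = {d:.2f}, c = {c:.2f}")
```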

9.
Thresholds for the discrimination of temporal order were determined for selected auditory and visual stimulus dimensions in 10 normal-adult volunteers. Auditory stimuli consisted of binary pure tones varying in frequency or sound pressure level, and visual stimuli consisted of binary geometric forms varying in size, orientation, or color. We determined the effect of psychophysical method and the reliability of performance across stimulus dimensions. Experiment 1 used a single-track adaptive procedure and showed that temporal-order thresholds (TOTs) varied with stimulus dimension, being lowest for auditory frequency, intermediate for size, orientation, and auditory level, and longest for color. Test performance improved over sessions, and the profile of thresholds across stimulus dimensions had a modest reliability. Experiment 2 used a double-interleaved adaptive procedure, and TOTs were ordered similarly to those in Experiment 1. However, TOTs were significantly lower for initially ascending versus descending tracks. With this method, the reliability of the profile across stimulus dimensions and tracks was relatively low. In Experiment 3, psychometric functions were obtained for each of the stimulus dimensions and thresholds were defined as the interpolated 70.7%-correct point. The relative ordering of TOTs was similar to those obtained in the first two experiments. Non-monotonicities were found in some of the psychometric functions, the most prominent being for the color dimension. A cross-experiment comparison of results demonstrates that TOTs and their reliability are significantly influenced by the psychophysical method. Taken together, these results support the notion that the temporal resolution of ordered stimuli involves perceptual mechanisms specific to a given sensory modality or submodality.
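The 70.7%-correct point mentioned above is the level at which a 2-down/1-up transformed staircase converges (Levitt, 1971), so a single adaptive track of that kind is sketched below with a simulated observer standing in for a participant; the starting SOA, step-halving rule, and observer model are illustrative assumptions, not the study's exact procedure.

```python
import random

def two_down_one_up(start=200.0, step=20.0, min_step=5.0,
                    observer_scale=120.0, n_reversals=8):
    """2-down/1-up staircase for a temporal-order judgment: the SOA is
    shortened after two consecutive correct responses and lengthened
    after any error, converging on ~70.7% correct. The threshold is the
    mean SOA at the reversal points. The simulated observer's accuracy
    rises linearly from chance (0.5) with SOA; a toy model only."""
    soa, correct_run, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        p_correct = 0.5 + 0.5 * min(soa / observer_scale, 1.0)
        if random.random() < p_correct:          # correct response
            correct_run += 1
            if correct_run == 2:                 # two in a row -> harder
                correct_run = 0
                if direction == +1:              # track changed direction
                    reversals.append(soa)
                    step = max(step / 2, min_step)
                direction = -1
                soa = max(soa - step, 1.0)
        else:                                    # any error -> easier
            correct_run = 0
            if direction == -1:
                reversals.append(soa)
                step = max(step / 2, min_step)
            direction = +1
            soa += step
    return sum(reversals) / len(reversals)

print(f"estimated temporal-order threshold: {two_down_one_up():.1f} ms")
```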

10.
In this article the operation of a direct visual route to action in response to objects, in addition to a semantically mediated route, is demonstrated. Four experiments were conducted in which participants made gesturing or naming responses to pictures under deadline conditions. There was a cross-over interaction in the number of visual errors relative to the number of semantic plus semantic-visual errors in the two tasks: In gesturing, compared with naming, participants made higher proportions of visual errors and lower proportions of semantic plus semantic-visual errors (Experiments 1, 3, and 4). These results suggest that naming and gesturing are dependent on separate information-processing routes from stimulus to response, with gesturing dependent on a visual route in addition to a semantic route. Partial activation of competing responses from the visual information present in objects (mediated by the visual route to action) leads to high proportions of visual errors under deadline conditions. Also, visual errors do not occur when gestures are made in response to words under a deadline (Experiment 2), which indicates that the visual route is specific to seen objects.

11.
It is well-known that facial orientation affects the processing of static facial information, but similar effects on the processing of visual speech have yet to be explored fully. Three experiments are reported in which the effects of facial orientation on visual speech processing were examined using a talking face presented at 8 orientations through 360 degrees. Auditory and visual forms of the syllables /ba/, /bi/, /ga/, /gi/, /ma/, /mi/, /ta/, and /ti/ were used to produce the following speech stimulus types: auditory, visual, congruent audiovisual, and incongruent audiovisual. Facial orientation did not affect identification of visual speech per se or the near-perfect accuracy of auditory speech report with congruent audiovisual speech stimuli. However, facial orientation did affect the accuracy of auditory speech report with incongruent audiovisual speech stimuli. Moreover, the nature of this effect depended on the type of incongruent visual speech used. Implications for the processing of visual and audiovisual speech are discussed.

12.
The authors investigated the effects of changes in horizontal viewing angle on visual and audiovisual speech recognition in 4 experiments, using a talker's face viewed full face, three quarters, and in profile. When only experimental items were shown (Experiments 1 and 2), identification of unimodal visual speech and visual speech influences on congruent and incongruent auditory speech were unaffected by viewing angle changes. However, when experimental items were intermingled with distractor items (Experiments 3 and 4), identification of unimodal visual speech decreased with profile views, whereas visual speech influences on congruent and incongruent auditory speech remained unaffected by viewing angle changes. These findings indicate that audiovisual speech recognition withstands substantial changes in horizontal viewing angle, but explicit identification of visual speech is less robust. Implications of this distinction for understanding the processes underlying visual and audiovisual speech recognition are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Six experiments demonstrated cross-modal influences from the auditory modality on the visual modality at an early level of perceptual organization. Participants had to detect a visual target in a rapidly changing sequence of visual distractors. A high tone embedded in a sequence of low tones improved detection of a synchronously presented visual target (Experiment 1), but the effect disappeared when the high tone was presented before the target (Experiment 2). Rhythmically based or order-based anticipation was unlikely to account for the effect because the improvement was unaffected by whether there was jitter (Experiment 3) or a random number of distractors between successive targets (Experiment 4). The facilitatory effect was greatly reduced when the tone was less abrupt and part of a melody (Experiments 5 and 6). These results show that perceptual organization in the auditory modality can have an effect on perceptibility in the visual modality. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Cortical representational plasticity has been well documented after peripheral and central injuries or improvements in perceptual and motor abilities. This has led to inferences that the changes in cortical representations parallel and account for the improvement in performance during the period of skill acquisition. There have also been several examples of rapidly induced changes in cortical neuronal response properties, for example, by intracortical microstimulation or by classical conditioning paradigms. This report describes similar rapidly induced changes in a cortically mediated perception in human subjects, the ventriloquism aftereffect, which presumably reflects a corresponding change in the cortical representation of acoustic space. The ventriloquism aftereffect describes an enduring shift in the perception of the spatial location of acoustic stimuli after a period of exposure to spatially disparate and simultaneously presented acoustic and visual stimuli. Exposure to a mismatch of 8 degrees for 20-30 min is sufficient to shift the perception of acoustic space by approximately the same amount across subjects and acoustic frequencies. Given that the cerebral cortex is necessary for the perception of acoustic space, it is likely that the ventriloquism aftereffect reflects a change in the cortical representation of acoustic space. Comparisons between the responses of single cortical neurons in the behaving macaque monkey and the stimulus parameters that give rise to the ventriloquism aftereffect suggest that the changes in the cortical representation of acoustic space may begin as early as the primary auditory cortex.
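As a worked example of how such an aftereffect can be quantified, here is a minimal sketch, assuming the measure of interest is simply the mean change in auditory localization responses from before to after exposure; the numbers are invented for illustration and are not data from the study.

```python
def aftereffect_shift(pre_responses, post_responses):
    """Ventriloquism aftereffect as the mean shift (in degrees) of
    auditory localization responses after audiovisual-mismatch exposure.
    Positive values indicate a shift toward the visual offset."""
    pre = sum(pre_responses) / len(pre_responses)
    post = sum(post_responses) / len(post_responses)
    return post - pre

# Invented localization errors (deg) before/after 20-30 min of exposure
# to an 8-degree audiovisual mismatch:
pre = [0.5, -1.0, 0.8, 0.2, -0.3]
post = [7.9, 6.5, 8.4, 7.2, 8.8]
print(f"aftereffect = {aftereffect_shift(pre, post):+.1f} deg")
```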

15.
Event-related potentials were recorded to brief presentations of four levels of inspiratory flow-resistive loads in young adults. We labeled the loads according to the level of resistance they provided subjectively: sub-threshold (0.34 cmH2O/l per s), near-threshold (4.01 cmH2O/l per s), intermediate (10.4 cmH2O/l per s), and near-occlusion (57.5 cmH2O/l per s). No discernible ERPs were elicited by the undetected, sub-threshold stimulus, but late components of the ERP (P2, N2, and P3) were observed to each of the three larger stimuli. They were related, in part, to behavioral judgments obtained during the stimulus periods. Both the latency and amplitude of the ERP components varied systematically as a function of stimulus magnitude, in a manner comparable to that observed in ERP paradigms using auditory and visual stimuli. Thus, the data show that event-related potentials to breathing are sensitive to physiologic effects of resistive loads present at the onset of inspiration. Respiratory ERPs may be used to infer sensory and perceptual responses to increases in airflow resistance and, accordingly, may relate to the perception of airflow obstruction in patient populations.
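A common way to obtain component latencies and amplitudes like those described here is to take the extreme value of the averaged waveform inside a component-specific search window; the sketch below does exactly that on a synthetic waveform, and the windows, polarities, and waveform are illustrative assumptions rather than the study's parameters.

```python
import math

def gauss(t, mu, sigma, amp):
    return amp * math.exp(-0.5 * ((t - mu) / sigma) ** 2)

def peak_in_window(times_ms, erp_uv, window, polarity):
    """Peak latency/amplitude of an averaged ERP inside a search window;
    polarity is +1 for positive components (P2, P3), -1 for negative (N2)."""
    candidates = [(t, v) for t, v in zip(times_ms, erp_uv)
                  if window[0] <= t <= window[1]]
    return max(candidates, key=lambda tv: polarity * tv[1])

# Synthetic averaged waveform with P2/N2/P3 bumps (all values invented):
times = list(range(0, 600, 4))
erp = [gauss(t, 180, 25, 3.0) + gauss(t, 250, 30, -2.5) + gauss(t, 380, 50, 6.0)
       for t in times]

# Conventional search windows in ms (not necessarily the study's):
for name, window, polarity in (("P2", (140, 220), +1),
                               ("N2", (210, 320), -1),
                               ("P3", (300, 500), +1)):
    latency, amplitude = peak_in_window(times, erp, window, polarity)
    print(f"{name}: peak {amplitude:+.1f} uV at {latency} ms")
```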

16.
Examined the utility of cardiac habituation response recovery as a method for assessing infant cerebral asymmetries in auditory perception in a dichotic listening test. In a within-subjects design, 12 three-month-old infants were given a series of four 10-trial tests during which their cardiac responses were habituated to a pair of dichotic speech syllables or music notes. The 10th trial in each test was a test trial on which one ear received its habituation stimulus while the other ear received a novel stimulus of the same type as the habituation pair (speech or music). Both stimulus type and the ear receiving the novel stimulus were counterbalanced across tests. Overall, the infants' cardiac responses habituated during the tests and showed differential recovery to the novel stimuli. Specifically, greater response recovery occurred when a novel speech syllable was presented to the right ear than to the left ear. Conversely, greater response recovery was found when a novel music note was presented to the left ear than to the right. Results indicate that young infants show a pattern of auditory perceptual asymmetries much like that found in older children and adults. Findings are consistent with the theory that in humans the left hemisphere is superior at processing speech and the right hemisphere superior with nonspeech. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
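For concreteness, a minimal sketch of the habituation/recovery logic, assuming response recovery is scored as the test-trial response minus the mean of the last few habituation trials; the heart-rate values are invented for illustration.

```python
def response_recovery(habituation_responses, test_response, baseline_n=3):
    """Response recovery: test-trial response relative to the mean of the
    final habituation trials (e.g., heart-rate change in bpm). Larger
    values indicate greater recovery to the novel stimulus."""
    baseline = sum(habituation_responses[-baseline_n:]) / baseline_n
    return test_response - baseline

# Invented cardiac responses (bpm change) over nine habituation trials:
habituation = [6.0, 4.8, 4.1, 3.3, 2.8, 2.4, 2.1, 2.0, 1.9]
novel_speech_right = response_recovery(habituation, test_response=5.5)
novel_speech_left = response_recovery(habituation, test_response=2.7)
print(f"right ear: {novel_speech_right:+.1f} bpm, left ear: {novel_speech_left:+.1f} bpm")
```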

17.
Four experiments examined transfer of noncorresponding spatial stimulus-response associations to an auditory Simon task for which stimulus location was irrelevant. Experiment 1 established that, for a horizontal auditory Simon task, transfer of spatial associations occurs after 300 trials of practice with an incompatible mapping of auditory stimuli to keypress responses. Experiments 2-4 examined transfer effects within the auditory modality when the stimuli and responses were varied along vertical and horizontal dimensions. Transfer occurred when the stimuli and responses were arrayed along the same dimension in practice and transfer but not when they were arrayed along orthogonal dimensions. These findings indicate that prior task-defined associations have less influence on the auditory Simon effect than on the visual Simon effect, possibly because of the stronger tendency for an auditory stimulus to activate its corresponding response. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Seeing a talker's face influences auditory speech recognition, but the visible input essential for this influence has yet to be established. Using a new seamless editing technique, the authors examined effects of restricting visible movement to oral or extraoral areas of a talking face. In Experiment 1, visual speech identification and visual influences on identifying auditory speech were compared across displays in which the whole face moved, the oral area moved, or the extraoral area moved. Visual speech influences on auditory speech recognition were substantial and unchanging across whole-face and oral-movement displays. However, extraoral movement also influenced identification of visual and audiovisual speech. Experiments 2 and 3 demonstrated that these results are dependent on intact and upright facial contexts, but only with extraoral movement displays. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in which they had to monitor a rapidly presented central stream of visual letters for occasionally presented target digits) or no perceptual load (in which the central stream was replaced by a fixation point). The results of 3 experiments showed that all 3 cues captured visuo-spatial attention in the no-load condition. By contrast, only the bimodal cues captured visuo-spatial attention in the high-load condition, indicating for the first time that multisensory integration can play a key role in disengaging spatial attention from a concurrent perceptually demanding stimulus. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
A number of recent studies have questioned the idea that lexical selection during speech production is a competitive process. One type of evidence against selection by competition is the observation that in the picture–word interference task semantically related distractors may facilitate the naming of a picture, whereas the selection-by-competition account predicts that they should interfere. In the experiments reported in this article, the authors systematically varied, for a given type of semantic relation (basic-level distractors, e.g., fish, during subordinate-level naming, e.g., carp), the modality in which distractor words were presented (auditory vs. visual) and the proportion of response-congruent trials (i.e., trials allowing the correct naming response to be derived from both the distractor and the target). With auditory distractors, semantic interference was obtained irrespective of the proportion of response-congruent trials (low in Experiment 1, high in Experiment 2). With visual distractors, no semantic effect was obtained with a low proportion of response-congruent trials (Experiment 3), whereas semantic facilitation was obtained with a high proportion of response-congruent trials (Experiment 4). The authors propose that two processes contribute to the semantic effects observed in the picture–word interference paradigm: selection by competition (leading to interference) and response congruency (leading to facilitation). Whether facilitation due to response congruency overrides the interference due to competition depends on the relative strength of the two processes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
