Similar Articles
20 similar articles found (search time: 15 ms)
1.
There is now convincing evidence that an involuntary shift of spatial attention to a stimulus in one modality can affect the processing of stimuli in other modalities, but inconsistent findings across different paradigms have led to controversy. Such inconsistencies have important implications for theories of cross-modal attention. The authors investigated why orienting attention to a visual event sometimes influences responses to subsequent sounds and why it sometimes fails to do so. They examined visual-cue-on-auditory-target effects in two paradigms--implicit spatial discrimination (ISD) and orthogonal cuing (OC)--that have yielded conflicting findings in the past. Consistent with previous research, visual cues facilitated responses to same-side auditory targets in the ISD paradigm but not in the OC paradigm. Furthermore, in the ISD paradigm, visual cues facilitated responses to auditory targets only when the targets were presented directly at the cued location, not when they appeared above or below the cued location. This pattern of results confirms recent claims that visual cues fail to influence responses to auditory targets in the OC paradigm because the targets fall outside the focus of attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Stelmach, Herdman, and McNeil (1994) recently suggested that the perceived duration of attended stimuli is shorter than that of unattended ones. In contrast, the attenuation hypothesis (Thomas & Weaver, 1975) suggests the reverse relation between directed attention and perceived duration. We conducted six experiments to test the validity of the two contradictory hypotheses. In all the experiments, attention was directed to one of two possible stimulus sources. Experiments 1 and 2 employed stimulus durations from 70 to 270 msec. A stimulus appeared in either the visual or the auditory modality. Stimuli in the attended modality were rated as longer than stimuli in the unattended modality. Experiment 3 replicated this finding using a different psychophysical procedure. Experiments 4-6 showed that the finding applies not only to stimuli from different sensory modalities but also to stimuli appearing at different locations within the visual field. The results of all six experiments support the assumption that directed attention prolongs the perceived duration of a stimulus.

3.
Selective attention requires the ability to focus on relevant information and to ignore irrelevant information. The ability to inhibit irrelevant information has been proposed to be the main source of age-related cognitive change (e.g., Hasher & Zacks, 1988). Although age-related distraction by irrelevant information has been extensively demonstrated in the visual modality, studies involving auditory and cross-modal paradigms have revealed a mixed pattern of results. A comparative evaluation of these paradigms according to sensory modality suggests a twofold trend: Age-related distraction is more likely (a) in unimodal than in cross-modal paradigms and (b) when irrelevant information is presented in the visual modality, rather than in the auditory modality. This distinct pattern of age-related changes in selective attention may be linked to the reliance of the visual and auditory modalities on different filtering mechanisms. Distractors presented through the auditory modality can be filtered at both central and peripheral neurocognitive levels. In contrast, distractors presented through the visual modality are primarily suppressed at more central levels of processing, which may be more vulnerable to aging. We propose the hypothesis that age-related distractibility is modality dependent, a notion that might need to be incorporated in current theories of cognitive aging. Ultimately, this might lead to a more accurate account for the mixed pattern of impaired and preserved selective attention found in advancing age.

4.
It is suggested that the distinction between global versus local processing styles exists across sensory modalities. Activation of one way of processing in one modality should affect processing styles in a different modality. In 12 studies, auditory, haptic, gustatory or olfactory global versus local processing was induced, and participants were tested with a measure of their global versus local visual attention; the content of this measure was unrelated to the inductions. In a different set of 4 studies, the effect of local versus global visual processing on the way people listen to a poem or touch, taste, and smell objects was examined. In all experiments, global/local processing in 1 modality shifted to global/local processing in the other modality. A final study found more pronounced shifts when compatible processing styles were induced in 2 rather than 1 modality. Moreover, the study explored mediation by relative right versus left hemisphere activation as measured with the line bisection task and accessibility of semantic associations. It is concluded that the effects reflect procedural rather than semantic priming effects that occurred outside participants' awareness. Because global/local processing has been shown to affect higher order processing, future research may activate processing styles in other sensory modalities to produce similar effects. Furthermore, because global/local processing is triggered by a variety of real world variables, one may explore effects on sensory modalities other than vision. The results are consistent with the global versus local processing model, a systems account (GLOMOsys; Förster & Dannenberg, 2010).

5.
Previous assessments of verbal cross-modal priming have typically been conducted with the visual and auditory modalities. Within-modal priming is always found to be substantially larger than cross-modal priming, a finding that could reflect modality modularity, or alternatively, differences between the coding of visual and auditory verbal information (i.e., geometric vs phonological). The present experiments assessed implicit and explicit memory within and between vision and haptics, where verbal information could be coded in geometric terms. Because haptic perception of words is sequential or letter-by-letter, experiments were also conducted to isolate the effects of simultaneous versus sequential processing from the manipulation of modality. Together, the results reveal no effects of modality change on implicit or explicit tests. The authors discuss representational similarities between vision and haptics as well as image mediation as possible explanations for the results.

6.
A target identification paradigm was used to study cross-modal spatial cuing effects on auditory and visual target identification. Each trial consisted of an auditory or visual spatial cue followed by an auditory or visual target. The cue and target could be either of the same modality (within-modality conditions) or of different modalities (between-modalities conditions). In 3 experiments, a larger cue validity effect was apparent on within-modality trials than on between-modalities trials. In addition, the likelihood of identifying a significant cross-modal cuing effect was observed to depend on the predictability of the cue-target relation. These effects are interpreted as evidence (a) of separate auditory and visual spatial attention mechanisms and (b) that target identification may be influenced by spatial cues of another modality but that this effect is primarily dependent on the engagement of endogenous attentional mechanisms.

8.
In this experiment, a Stroop-like paradigm was used to investigate the ability to attend to visuospatial cues while ignoring distracting stimuli in the auditory or visual modality. In Part 1, the authors investigated whether linguistic cue words (i.e., RIGHT, LEFT, DOWN, and UP) would induce endogenous shifts of attention to visual targets. In Part 2, a relevant distractor stimulus was introduced in a different modality from the endogenous cues to investigate effects of interference. Twenty-five right-handed students served as participants. Auditory and visual linguistic cues were effective in inducing shifts of visual attention when cues were presented alone. Furthermore, introducing a distractor stimulus decreased the efficacy of these cues differently depending on modality, suggesting that language processing and visuospatial attention may share neuronal resources. Implications for unimodal and supramodal mechanisms of selective attention and relevant neuronal networks are discussed.

9.
The relative efficacy of (1) repeated auditory and somesthetic stimulation for the habituation of cardiac acceleration responses and (2) intramodal and cross-modal stimulation for the dishabituation of cardiac responses was studied in 45 full-term 2-day-old infants. Although the stimuli were equally effective initially, repeated presentation of the somesthetic stimulus had a greater decremental effect than repeated presentation of the auditory stimulus. The stimuli were equally effective in producing dishabituation when in a different modality from that of the habituating stimulus (cross-modal) but not when in the same modality (intramodal). Changes in the locus of stimulation without a change in modality were ineffective for producing dishabituation. The findings indicate the human newborn discriminates auditory and somesthetic inputs effectively and equally but does not discriminate contralateral from ipsilateral stimulation in either modality.

10.
A great deal is now known about the effects of spatial attention within individual sensory modalities, especially for vision and audition. However, there has been little previous study of possible cross-modal links in attention. Here, we review recent findings from our own experiments on this topic, which reveal extensive spatial links between the modalities. An irrelevant but salient event presented within touch, audition, or vision, can attract covert spatial attention in the other modalities (with the one exception that visual events do not attract auditory attention when saccades are prevented). By shifting receptors in one modality relative to another, the spatial coordinates of these cross-modal interactions can be examined. For instance, when a hand is placed in a new position, stimulation of it now draws visual attention to a correspondingly different location, although some aspects of attention do not spatially remap in this way. Cross-modal links are also evident in voluntary shifts of attention. When a person strongly expects a target in one modality (e.g. audition) to appear in a particular location, their judgements improve at that location not only for the expected modality but also for other modalities (e.g. vision), even if events in the latter modality are somewhat more likely elsewhere. Finally, some of our experiments suggest that information from different sensory modalities may be integrated preattentively, to produce the multimodal internal spatial representations in which attention can be directed. Such preattentive cross-modal integration can, in some cases, produce helpful illusions that increase the efficiency of selective attention in complex scenes.

11.
A divided attention paradigm was used to investigate whether graphemes and phonemes can mutually activate or inhibit each other during bimodal processing. In 3 experiments, Dutch subjects reacted to visual and auditory targets in single-channel or bimodal stimuli. In some bimodal conditions, the visual and auditory targets were nominally identical or redundant (e.g., visual A and auditory /a/); in others they were not (e.g., visual U and auditory /a/). Temporal aspects of cross-modal activation were examined by varying the stimulus onset asynchrony of visual and auditory stimuli. Cross-modal facilitation, but not inhibition, occurred rapidly and automatically between phoneme and grapheme representations. Implications for current models of bimodal processing and word recognition are discussed.

12.
Age-related deficits in selective attention have often been demonstrated in the visual modality and, to a lesser extent, in the auditory modality. In contrast, a mounting body of evidence has suggested that cross-modal selective attention is intact in aging, especially in visual tasks that require ignoring the auditory modality. Our goal in this study was to investigate age-related differences in the ability to ignore cross-modal auditory and visual distraction and to assess the role of cognitive control demands thereby. In a set of two experiments, 30 young (mean age = 23.3 years) and 30 older adults (mean age = 67.7 years) performed a visual and an auditory n-back task (0 ≤ n ≤ 2), with and without cross-modal distraction. The results show an asymmetry in cross-modal distraction as a function of sensory modality and age: Whereas auditory distraction did not disrupt performance on the visual task in either age group, visual distraction disrupted performance on the auditory task in both age groups. Most important, however, visual distraction was disproportionately larger in older adults. These results suggest that age-related distraction is modality dependent, such that suppression of cross-modal auditory distraction is preserved and suppression of cross-modal visual distraction is impaired in aging.

13.
In 7 experiments we investigated cross-modal links for endogenous covert spatial orienting in hearing and vision. Participants judged the elevation (up vs. down) of auditory or visual targets regardless of their laterality or modality. When participants were informed that targets were more likely on 1 side, elevation judgments were faster on that side, even if the modality of the target was uncertain. When participants expected a target on a particular side in just 1 modality, corresponding shifts of covert attention also took place in the other modality, as evidenced by faster elevation judgments on that side. However, it was possible to "split" auditory and visual attention when targets in the 2 modalities were expected on constant but opposite sides throughout a block, although covert orienting effects were larger when targets were expected on the same side in both modalities. These results show that although endogenous covert attention does not operate exclusively within a supramodal system, there are strong spatial links between auditory and visual attention.

14.
When participants are asked to report a visual target and find a subsequent visual probe, a deficit in probe report accuracy is usually found during an interval of several hundred milliseconds after the target. This attentional blink (AB) deficit has often been attributed to a uniquely visual limitation. In this research, targets and probes were created and defined in terms of auditory information. Target modality (visual or auditory) was fully crossed with probe modality (visual or auditory). In Experiment 1, a robust AB, found in all modality conditions, was equally large for cross-modality and within-modality target and probe combinations. Experiments 2 and 3 ruled out two alternative explanations for cross-modal blinks. Experiment 4 showed that as the rate of presentation was slowed, the AB for auditory probes attenuated more quickly than for visual probes. Results are discussed in terms of a central (amodal) limitation of attention.

15.
Event-related potentials (ERPs) were recorded to trains of rapidly presented auditory and visual stimuli. ERPs in conditions in which subjects attended to different features of visual stimuli were compared with ERPs to the same type of stimuli when subjects attended to different features of auditory stimuli. This design permitted us to study effects of variations in both intramodal and intermodal visual attention on the timing and topography of ERP components in the same experiment. There were no indications that exogenous N110, P140 and N180 components to line gratings of high and low spatial frequencies were modulated by either intra- or intermodal forms of attention. Furthermore, intramodal and intermodal attention effects on ERPs showed similar topographical distributions. These combined findings suggest that the same neural generators in extrastriate occipital areas are involved in both forms of attention. Visual ERPs elicited in the condition in which subjects were engaged in auditory selective attention showed a large positive displacement at the occipital scalp sites relative to ERPs to attended and unattended stimuli in the visual condition. The early onset of this positivity might be associated with a highly confident and early rejection of the irrelevant visual stimuli, when these stimuli are presented among auditory stimuli. In addition, the later onset of selection potentials in the intramodal condition suggests that a more precise stimulus selection is needed when features of visual stimuli are rejected among other features of the same stimulus pattern, than when visual stimuli are rejected among stimuli of another modality.

16.
Notes that orienting attention involuntarily to the location of a sensory event influences responses to subsequent stimuli that appear in different modalities, with one possible exception: orienting attention involuntarily to a sudden light sometimes fails to affect responses to subsequent sounds (e.g., C. Spence and J. Driver, 1997). Here the authors investigated the effects of involuntary attention to a brief flash on the processing of subsequent sounds in a design that eliminates stimulus–response compatibility effects and criterion shifts as confounding factors. 13 18–31 yr olds participated in the study. In addition, the neural processes mediating cross-modal attention were studied by recording event-related brain potentials. The data show that orienting attention to the location of a spatially nonpredictive visual cue modulates behavioral and neural responses to subsequent auditory targets when the stimulus onset asynchrony is short (between 100 and 300 ms). These findings are consistent with the hypothesis that involuntary shifts of attention are controlled by supramodal brain mechanisms rather than by modality-specific ones.

17.
In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either by means of a blocked design or by predictive peripheral cues, and exogenously by means of nonpredictive peripheral cues. The results of 3 experiments demonstrate a reduction in the magnitude of the cross-modal dynamic capture effect on cued trials compared with uncued trials. The introduction of neutral cues (Experiments 4 and 5) confirmed the existence of both attentional costs and benefits. This attention-related reduction in cross-modal dynamic capture was larger when a peripheral cue was used compared with when attention was oriented in a purely endogenous manner. In sum, the results suggest that spatial attention reduces illusory binding by facilitating the segregation of unimodal signals, thereby modulating audiovisual interactions in information processing. Thus, the effect of spatial attention occurs prior to or at the same time as cross-modal interactions involving motion information.

18.
A visual target (T2) containing either 1 or 2 letters, or a random 10-sided polygon, was presented after an auditory target (T1) at a stimulus onset asynchrony (SOA) of either 50, 150, 250, or 600 ms. Task 1 was a speeded pitch discrimination to the tone, and across experiments, T1 was either 1 of 2 tones (2-alternative discrimination [2AD]) or 1 of 4 tones (4-alternative discrimination [4AD]). Memory for the visual information decreased as SOA was reduced when a mask was used, but not when there was no mask. The effects of SOA were larger for the 4AD Task 1 than the 2AD Task 1. The results demonstrate cross-modal, dual-task interference on visual encoding and suggest central interference with the short-term consolidation of visual information in short-term memory.

19.
The purpose of the present study was to compare the relative effectiveness of stimulation of different sensory modalities in eliciting Type 2 theta in the rat in the presence or absence of a ferret. Visual, auditory, and tactile stimuli were presented to rats in both conditions. Tactile stimulation produced more movement than either visual or auditory stimuli when the ferret was present. In both conditions, however, more Type 2 theta was observed in response to tactile or visual stimulation than to auditory stimulation. In the arousal condition, stimulation of the tactile and auditory modalities resulted in significant increases in the amount of Type 2 theta produced. Input to the visual modality produced high levels of Type 2 theta in both low- and high-arousal conditions. It is argued that Type 2 theta is not necessarily a precursor to movement but rather reflects sensory processing in a high state of arousal.

20.
Research on cross-modal performance in nonhuman primates is limited to a small number of sensory modalities and testing methods. To broaden the scope of this research, the authors tested capuchin monkeys (Cebus apella) for a seldom-studied cross-modal capacity in nonhuman primates, auditory-visual recognition. Monkeys were simultaneously played 2 video recordings of a face producing different vocalizations and a sound recording of 1 of the vocalizations. Stimulus sets varied from naturally occurring conspecific vocalizations to experimentally controlled human speech stimuli. The authors found that monkeys preferred to view face recordings that matched presented vocal stimuli. Their preference did not differ significantly across stimulus species or other stimulus features. However, the reliability of the latter set of results may have been limited by sample size. From these results, the authors concluded that capuchin monkeys exhibit auditory-visual cross-modal perception of conspecific vocalizations.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号