Similar Literature
 20 similar records found (search time: 62 ms)
1.
Five experiments were conducted to investigate how subsyllabic, syllabic, and prosodic information is processed in Cantonese monosyllabic word production. A picture-word interference task was used in which a target picture and a distractor word were presented simultaneously or sequentially. In the first 3 experiments with visually presented distractors, null effects on naming latencies were found when the distractor and the picture name shared the onset, the rhyme, the tone, or both the onset and tone. However, significant facilitation effects were obtained when the target and the distractor shared the rhyme + tone (Experiment 2), the segmental syllable (Experiment 3), or the syllable + tone (Experiment 3). Similar results were found in Experiments 4 and 5 with spoken rather than visual distractors. Moreover, a significant facilitation effect was observed in the rhyme-related condition in Experiment 5, and this effect was not affected by the degree of phonological overlap between the target and the distractor. These results are interpreted within an interactive model, which allows feedback from the subsyllabic level to the lexical level during the phonological encoding stage in Cantonese word production. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
The present study demonstrates that incongruent distractor letters at a constant distance from a target letter produce more response competition and negative priming when they share a target's color than when they have a different color. Moreover, perceptual grouping by means of color attenuated the effects of spatial proximity. For example, when all items were presented in the same color, near distractors produced more response competition and negative priming than far distractors (Experiment 3A). However, when near distractors were presented in a different color and far distractors were presented in the same color as the target, the response competition × distractor proximity interaction was eliminated and the proximity × negative priming interaction was reversed (Experiment 3B). A final experiment demonstrated that distractors appearing on the same object as a selected target produced comparable amounts of response competition and negative priming whether they were near or far from the target. This suggests that the inhibitory mechanisms of visual attention can be directed to perceptual groups/objects in the environment and not only to unsegmented regions of visual space.

3.
A number of recent studies have questioned the idea that lexical selection during speech production is a competitive process. One type of evidence against selection by competition is the observation that in the picture–word interference task semantically related distractors may facilitate the naming of a picture, whereas the selection by competition account predicts them to interfere. In the experiments reported in this article, the authors systematically varied, for a given type of semantic relation—that is, basic-level distractors (e.g., fish) during subordinate-level naming (e.g., carp)—the modality in which distractor words were presented (auditory vs. visual) and the proportion of response-congruent trials (i.e., trials allowing for the correct naming response to be derived from both the distractor and the target). With auditory distractors, semantic interference was obtained irrespective of the proportion of response-congruent trials (low in Experiment 1, high in Experiment 2). With visual distractors, no semantic effect was obtained with a low proportion of response-congruent trials (Experiment 3), whereas a semantic facilitation effect was obtained with a high proportion of response-congruent trials (Experiment 4). The authors propose that two processes contribute to semantic effects observed in the picture–word interference paradigm, namely selection by competition (leading to interference) and response congruency (leading to facilitation). Whether facilitation due to response congruency overrules the interference effect because of competition depends on the relative strength of these two processes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
The relationship between semantic–syntactic and phonological levels in speaking was investigated using a picture naming procedure with simultaneously presented visual or auditory distractor words. Previous results with auditory distractors have been used to support the independent stage model (e.g., H. Schriefers, A. S. Meyer, & W. J. M. Levelt, 1990), whereas results with visual distractors have been used to support an interactive view (e.g., P. A. Starreveld & W. La Heij, 1996). Experiment 1 demonstrated that with auditory distractors, semantic effects preceded phonological effects, whereas the reverse pattern held for visual distractors. Experiment 2 indicated that the results for visual distractors followed the auditory pattern when distractor presentation time was limited. Experiment 3 demonstrated an interaction between phonological and semantic relatedness of distractors for auditory presentation, supporting an interactive account of lexical access in speaking. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
When participants are asked to report a visual target and find a subsequent visual probe, a deficit in probe report accuracy is usually found during an interval of several hundred milliseconds after the target. This attentional blink (AB) deficit has often been attributed to a uniquely visual limitation. In this research, targets and probes were created and defined in terms of auditory information. Target modality (visual or auditory) was fully crossed with probe modality (visual or auditory). In Experiment 1, a robust AB, found in all modality conditions, was equally large for cross-modality and within-modality target and probe combinations. Experiments 2 and 3 ruled out two alternative explanations for cross-modal blinks. Experiment 4 showed that as the rate of presentation was slowed, the AB for auditory probes attenuated more quickly than for visual probes. Results are discussed in terms of a central (amodal) limitation of attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Investigated whether distractibility in learning disabled (LD) children could be predicted on the basis of diagnosed visual and auditory learning deficits. 26 children in Grades 2–4 were classified as having visual or auditory reading disorders on the basis of the Illinois Test of Psycholinguistic Abilities. They and 17 normally achieving children from the same grades performed visual and auditory recognition memory tasks with visual or auditory distractors presented on 80% of the trials. Analysis of error frequencies revealed that with distractors, Ss in the 2 LD groups made more errors and did not improve over trials as much as control Ss. However, the predicted interaction between learning disability modality and task or distractor modality did not obtain. Rather, all 3 S groups made more errors when task and distractor were in the same modality. (20 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
This study investigated multisensory interactions in the perception of auditory and visual motion. When auditory and visual apparent motion streams are presented concurrently in opposite directions, participants often fail to discriminate the direction of motion of the auditory stream, whereas perception of the visual stream is unaffected by the direction of auditory motion (Experiment 1). This asymmetry persists even when the perceived quality of apparent motion is equated for the 2 modalities (Experiment 2). Subsequently, it was found that this visual modulation of auditory motion is caused by an illusory reversal in the perceived direction of sounds (Experiment 3). This "dynamic capture" effect occurs over and above ventriloquism among static events (Experiments 4 and 5), and it generalizes to continuous motion displays (Experiment 6). These data are discussed in light of related multisensory phenomena and their support for a "modality appropriateness" interpretation of multisensory integration in motion perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Three experiments studied auditory streaming using sequences of alternating "ABA" triplets, where "A" and "B" were 50-ms tones differing in frequency by Δf semitones and separated by 75-ms gaps. Experiment 1 showed that detection of a short increase in the gap between a B tone and the preceding A tone, imposed on one ABA triplet, was better when the delay occurred early versus late in the sequence, and for Δf = 4 vs. Δf = 8. The results of this experiment were consistent with those of a subjective streaming judgment task. Experiment 2 showed that the detection of a delay 12.5 s into a 13.5-s sequence could be improved by requiring participants to perform a task on competing stimuli presented to the other ear for the first 10 s of that sequence. Hence, adding an additional task demand could improve performance via its effect on the perceptual organization of a sound sequence. The results demonstrate that attention affects streaming in an objective task and that the effects of build-up are not completely under voluntary control. In particular, even though build-up can impair performance in an objective task, participants are unable to prevent this from happening. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

9.
The modality-match effect in recognition refers to superior memory for words presented in the same modality at study and test. Prior research on this effect is ambiguous and inconsistent. The present study demonstrates that the modality-match effect is found when modality is rendered salient at either encoding or retrieval. Specifically, in Experiment 1, visual and auditory study trials were either randomly intermixed or presented in blocks, followed by a standard (old–new) recognition test. The modality-match effect was observed for the mixed but not the blocked condition. Experiment 2 used a modality-judgment test (requiring a seen, heard, or new judgment). The resulting measure of recognition memory exhibited the modality-match effect for both list conditions. These results imply (a) that the modality-match effect is a consistent finding when modality is salient and (b) that the effect arises at retrieval rather than encoding. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
The group scanning model of feature integration theory suggests that Ss search visual displays serially by groups, but process items within each group in parallel. Group size is determined by the discriminability of the targets in the background of distractors. When the target is poorly discriminable, the size of the scanned group will be small, and search will be slow. The model predicts that group size will be smallest when targets of an intermediate value on a perceptual dimension are presented in a heterogeneous background of distractors that have higher and lower values on the same dimension. Experiment 1 (30 Ss) demonstrates this effect; Exp 2 (12 Ss) controls for a possible confound of decision complexity in Exp 1. For simple feature targets, the group scanning model provides a good account of the visual search process. (French abstract) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
[Correction Notice: An erratum for this article was reported in Vol 17(2) of Journal of Experimental Psychology: Applied (see record 2011-11863-002). The copyright for the article was incorrectly listed. The correct copyright information is provided in the erratum.] Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e., digital aeronautical charts) to examine limits of attention in visual search. Undergraduates at a large southern university searched for a target among 4, 8, or 16 distractors in charts with high, medium, or low global clutter. The target was in a high or low local-clutter region of the chart. In Experiment 1, reaction time increased as global clutter increased, particularly when the target was in a high local-clutter region. However, there was no effect of distractor set size, supporting the notion that global clutter is a better measure of attention against competition in complex visual search tasks. As a control, Experiment 2 demonstrated that increasing the number of distractors leads to a typical set size effect when there is no additional clutter (i.e., no chart). In Experiment 3, the effects of global and local clutter were minimized when the target was highly salient. When the target was nonsalient, more fixations were observed in high global clutter charts, indicating that the number of elements competing with the target for attention was also high. The results suggest design techniques that could improve pilots' search performance in aeronautical charts. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

12.
Eyeblink conditioning using a conditioned stimulus (CS) from one sensory modality (e.g., an auditory CS) is greatly enhanced when the subject is previously trained with a CS from a different sensory modality (e.g., a visual CS). The enhanced acquisition to the second-modality CS results from cross-modal savings. The current study was designed to examine the role of the cerebellum in establishing cross-modal savings in eyeblink conditioning with rats. In the first experiment, rats were given paired or unpaired presentations of a CS (tone or light) and an unconditioned stimulus. All rats were then given paired training with a different-modality CS. Only rats given paired training showed cross-modal savings to the second-modality CS. Experiment 2 showed that cerebellar inactivation during initial acquisition to the first-modality CS completely prevented savings when training was switched to the second-modality CS. Experiment 3 showed that cerebellar inactivation during initial cross-modal training also prevented savings to the second-modality stimulus. These results indicate that the cerebellum plays an essential role in establishing cross-modal savings of eyeblink conditioning. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Phoneme identification with audiovisually discrepant stimuli is influenced by information in the visual signal (the McGurk effect). Additionally, lexical status affects identification of auditorily presented phonemes. The present study tested for lexical influences on the McGurk effect. Participants identified phonemes in audiovisually discrepant stimuli in which lexical status of the auditory component and of a visually influenced percept was independently varied. Visually influenced (McGurk) responses were more frequent when they formed a word and when the auditory signal was a nonword (Experiment 1). Lexical effects were larger for slow than for fast responses (Experiment 2), as with auditory speech, and were replicated with stimuli matched on physical properties (Experiment 3). These results are consistent with models in which lexical processing of speech is modality independent. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Advance information about a target's identity improved visual search efficiency in pigeons. Experiments 1 and 2 compared information supplied by visual cues with information supplied by trial sequences. Reaction times (RTs) were lower when visual cues signaled a single target rather than two. RTs were lower (Experiment 1) or accuracy improved (Experiment 2) when a sequence of trials presented a single target rather than a mixture of 2. Experiments 3, 4, and 5 considered the selectivity of visual priming by introducing probe trials that reversed the usual cue–target relationship. RT was higher following such miscues than following the usual 1- or 2-target cuing relationships (Experiment 3); the miscuing effect persisted over variations in the target's concealment (Experiments 4 and 5), but did not occur when the target was presented alone (Experiment 4). Findings indicate that priming modifies an attentional mechanism and suggest that this effect accounts for search images. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
This study explored the extent to which sequential auditory grouping affects the perception of temporal synchrony. In Experiment 1, listeners discriminated between 2 pairs of asynchronous "target" tones at different frequencies, A and B, in which the B tone either led or lagged. Thresholds were markedly higher when the target tones were temporally surrounded by "captor tones" at the A frequency than when the captor tones were absent or at a remote frequency. Experiment 2 extended these findings to asynchrony detection, revealing that the perception of synchrony, one of the most potent cues for simultaneous auditory grouping, is not immune to competing effects of sequential grouping. Experiment 3 examined the influence of ear separation on the interactions between sequential and simultaneous grouping cues. The results showed that, although ear separation could facilitate perceptual segregation and impair asynchrony detection, it did not prevent the perceptual integration of simultaneous sounds. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
A target identification paradigm was used to study cross-modal spatial cuing effects on auditory and visual target identification. Each trial consisted of an auditory or visual spatial cue followed by an auditory or visual target. The cue and target could be either of the same modality (within-modality conditions) or of different modalities (between-modalities conditions). In 3 experiments, a larger cue validity effect was apparent on within-modality trials than on between-modalities trials. In addition, the likelihood of identifying a significant cross-modal cuing effect was observed to depend on the predictability of the cue-target relation. These effects are interpreted as evidence (a) of separate auditory and visual spatial attention mechanisms and (b) that target identification may be influenced by spatial cues of another modality but that this effect is primarily dependent on the engagement of endogenous attentional mechanisms. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Selective attention requires the ability to focus on relevant information and to ignore irrelevant information. The ability to inhibit irrelevant information has been proposed to be the main source of age-related cognitive change (e.g., Hasher & Zacks, 1988). Although age-related distraction by irrelevant information has been extensively demonstrated in the visual modality, studies involving auditory and cross-modal paradigms have revealed a mixed pattern of results. A comparative evaluation of these paradigms according to sensory modality suggests a twofold trend: Age-related distraction is more likely (a) in unimodal than in cross-modal paradigms and (b) when irrelevant information is presented in the visual modality, rather than in the auditory modality. This distinct pattern of age-related changes in selective attention may be linked to the reliance of the visual and auditory modalities on different filtering mechanisms. Distractors presented through the auditory modality can be filtered at both central and peripheral neurocognitive levels. In contrast, distractors presented through the visual modality are primarily suppressed at more central levels of processing, which may be more vulnerable to aging. We propose the hypothesis that age-related distractibility is modality dependent, a notion that might need to be incorporated in current theories of cognitive aging. Ultimately, this might lead to a more accurate account for the mixed pattern of impaired and preserved selective attention found in advancing age. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Unlike the other sensory modalities of precocial infants, the visual modality does not normally become functional until after birth or hatching. Despite this unique developmental status, the role of emerging visual experience on postnatal perceptual organization remains unclear. In this study, bobwhite quail hatchlings were reared in conditions that manipulated postnatal experience with maternal visual cues, either alone or in conjunction with maternal auditory cues. Results revealed that bobwhite chicks require postnatal exposure to both maternal auditory and visual cues following hatching to demonstrate species-specific perceptual preferences. Chicks that received temporally disparate maternal auditory and visual cues or experience with only maternal visual or maternal auditory cues failed to show species-typical perceptual responsiveness. These results suggest that developmental mechanisms involving both visual and auditory sensory experience underlie the emergence of early intersensory integration. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Reports an error in "Measuring search efficiency in complex visual search tasks: Global and local clutter" by Melissa R. Beck, Maura C. Lohrenz and J. Gregory Trafton (Journal of Experimental Psychology: Applied, 2010[Sep], Vol 16[3], 238-250). The copyright for the article was incorrectly listed. The correct copyright information is provided in the erratum. (The following abstract of the original article appeared in record 2010-19027-002.) Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e., digital aeronautical charts) to examine limits of attention in visual search. Undergraduates at a large southern university searched for a target among 4, 8, or 16 distractors in charts with high, medium, or low global clutter. The target was in a high or low local-clutter region of the chart. In Experiment 1, reaction time increased as global clutter increased, particularly when the target was in a high local-clutter region. However, there was no effect of distractor set size, supporting the notion that global clutter is a better measure of attention against competition in complex visual search tasks. As a control, Experiment 2 demonstrated that increasing the number of distractors leads to a typical set size effect when there is no additional clutter (i.e., no chart). In Experiment 3, the effects of global and local clutter were minimized when the target was highly salient. When the target was nonsalient, more fixations were observed in high global clutter charts, indicating that the number of elements competing with the target for attention was also high. The results suggest design techniques that could improve pilots' search performance in aeronautical charts. 
(PsycINFO Database Record (c) 2011 APA, all rights reserved)

20.
A target identification paradigm was used to study cross-modal spatial cuing effects on auditory and visual target identification. Each trial consisted of an auditory or visual spatial cue followed by an auditory or visual target. The cue and target could be either of the same modality (within-modality conditions) or of different modalities (between-modalities conditions). In 3 experiments, a larger cue validity effect was apparent on within-modality trials than on between-modalities trials. In addition, the likelihood of identifying a significant cross-modal cuing effect was observed to depend on the predictability of the cue-target relation. These effects are interpreted as evidence (a) of separate auditory and visual spatial attention mechanisms and (b) that target identification may be influenced by spatial cues of another modality but that this effect is primarily dependent on the engagement of endogenous attentional mechanisms.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号