Similar Documents
20 similar documents retrieved (search time: 31 msec)
1.
Subitizing, the enumeration of 1–4 items, is rapid (40–220 msec/item) and accurate. Counting, the enumeration of 5 items or more, is slow (250–350 msec/item) and error-prone. Why are small numbers of items enumerated differently from large numbers of items? It is suggested that subitizing relies on a preattentive mechanism. Ss could subitize heterogeneously sized multicontour items but not concentric multicontour items, which require attentional processing because preattentive gestalt processes misgroup contours from different items to form units. Similarly, Ss could subitize target items among distractors but only if the targets and distractors differed by a feature, a property derived through preattentive analysis. Thus, subitizing must rely on a mechanism that can handle a few items at once, which operates before attention but after preattentive operations of feature detection and grouping. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
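The slope figures quoted above imply a piecewise-linear relation between numerosity and enumeration latency. The following minimal Python sketch expresses that bilinear relation; the base RT, slopes, and the 4-item limit are illustrative values chosen within the quoted ranges, not parameters fitted to the original data.

```python
import numpy as np

def enumeration_rt(n_items, base_rt=500.0,
                   subitizing_slope=60.0, counting_slope=300.0,
                   subitizing_limit=4):
    """Toy bilinear model of enumeration latency (msec) vs. numerosity.

    Items up to the subitizing limit add a shallow per-item cost;
    items beyond it add a much steeper per-item cost.
    All parameter values are assumptions for illustration.
    """
    n = np.asarray(n_items, dtype=float)
    small = np.minimum(n, subitizing_limit)
    large = np.maximum(n - subitizing_limit, 0)
    return base_rt + subitizing_slope * small + counting_slope * large

if __name__ == "__main__":
    for n in range(1, 9):
        print(f"{n} items: {enumeration_rt(n):.0f} msec (predicted)")
```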

2.
Does spatial attention follow a full preattentive analysis of the visual field, or can attention select from ungrouped regions of the visual field? We addressed this question by testing an apperceptive agnosic patient, J. W., in tasks involving both spatial selection and preattentive grouping. Results suggest that J. W. had intact spatial attention: He was faster to detect targets appearing at cued locations relative to targets appearing at uncued locations. However, his preattentive processes were severely disrupted. Gestalt grouping and symmetry perception, both thought to involve preattentive processes, were impaired in J. W. Also, he could not use gestalt grouping cues to guide spatial attention. These results suggest that spatial attention is not completely dependent on preattentive grouping processes. We argue that preattentive grouping processes and spatial attention may mutually constrain one another in guiding the attentional selection of visual stimuli but that these 2 processes are isolated from one another. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
"Subitizing," the process of enumeration when there are fewer than 4 items, is rapid (40–200 msec/item), effortless, and accurate. "Counting," the process of enumeration when there are more than 4 items, is slow (250–350 msec/item), effortful, and error-prone. Why there is a difference in the way the small and large numbers of items are enumerated? A theory of enumeration is proposed that emerges from a general theory of vision, yet explains the numeric abilities of preverbal infants, children, and adults. It is argued that subitizing exploits a limited-capacity parallel mechanism for item individuation, the FINST mechanism, associated with the multiple target tracking task (Z. W. Pylyshyn, 1989; Pylyshyn and R. Storm, 1988). Two kinds of evidence support the claim that subitizing relies on preattentive information, whereas counting requires spatial attention. First, whenever spatial attention is needed to compute a spatial relation (cf. S. Ullman, 1984) or to perform feature integration (cf. A. Treisman and G. Gelade, 1980), subitizing does not occur (L. M. Trick and Pylyshyn, 1993). Second, the position of the attentional focus, as manipulated by cue validity, has a greater effect on counting than subitizing latencies (Trick & Pylyshyn, 1993). (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

4.
Argues that the single focus of attention, generally assumed to represent the allocation of processing resources, does not totally explain how spatially local information is accessed in the visual field (VF). It is suggested that even if attention is unitary and spatially focused, there is also a more primitive mechanism for simultaneously indexing several places in a VF, thus individuating these places and making them directly accessible for further processing. Considerations suggesting the need for a multiple-locus indexing mechanism are discussed. Empirical evidence for particular properties (multiple-object tracking, cued search, subitizing, and illusory line motion) of the indexing mechanism is reviewed. Evidence suggests that there is an early preattentive stage in vision where a small number of salient items in the VF are indexed and thereby made readily accessible for a variety of visual tasks. (French abstract) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Describes the role of the parietal cortex in visual attention, as gathered from a series of 5 positron emission tomography (PET) experiments manipulating different aspects of selective attention. The parietal-occipital region showed enhancement of visual response when attention was tonically maintained to a specific spatial location. This region behaved in a manner similar to other visual association regions that showed enhanced visual responses when attention was directed to different attributes of visual stimuli (e.g., color, speed). The superior parietal region was activated across different tasks that involve shifts of spatial attention or visual gaze to peripheral locations. The activations in parietal cortex seemed to be related to spatial computations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
A considerable body of recent evidence shows that preattentive processes can carve visual input into candidate objects. Borrowing and modifying terminology from Kahneman & Treisman (1984), this paper investigates the properties of these preattentive object files. Experiments 1-3 show that preattentive object files are loose collections of basic features. Thus, we can know preattentively that an object has the attributes "red" and "vertical" and yet have no idea if any part of the object is red and vertical. Experiment 4 shows that some information about the structure of an object is available preattentively, but Experiments 5-12 search for and fail to find any preattentive representation of overall shape. Appreciation of the overall shape of an object appears to require the binding together of local form features--a process that requires attention.
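One way to picture a "loose collection of basic features" is a bag of features stored without part-level binding: the representation can answer whether "red" and "vertical" are both present somewhere, but not whether any single part is both. The sketch below is a hypothetical data structure meant only to illustrate that distinction; the class and method names are invented, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class PreattentiveObjectFile:
    """Hypothetical object file: an unordered bag of features with no
    record of which part of the object carries which feature."""
    features: set = field(default_factory=set)

    def has_features(self, *wanted):
        # Answerable preattentively: are these features present somewhere?
        return all(f in self.features for f in wanted)

    def has_bound_conjunction(self, *wanted):
        # Not answerable from this representation: binding requires attention.
        raise NotImplementedError("part-level binding is not stored preattentively")

# An object with a red part and a green part, one vertical and one horizontal:
obj = PreattentiveObjectFile(features={"red", "green", "vertical", "horizontal"})
print(obj.has_features("red", "vertical"))   # True: both features are present
# obj.has_bound_conjunction("red", "vertical") would raise, because the bundle
# cannot say whether any one part is both red and vertical.
```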

7.
To investigate the neural mechanisms involved in shifting attention we used positron emission tomography to examine regional cerebral blood flow (rCBF) during a task that demands shifting attention between color and shape. Significant activation was observed in the right dorsal prefrontal cortex and parieto-occipital cortex at all frequencies of attention shifts. The frequency of shifts between categories correlated significantly with rCBF in the rostral part of the supplementary motor area and the left precuneus, whereas the number of successive correct responses correlated with rCBF in the orbitofrontal cortex and the caudate nucleus. This study suggests that several prefrontal regions may participate in the processes of shifting attention in different ways.

8.
Much research has examined preattentive vision: visual representation prior to the arrival of attention. Most vision research concerns attended visual stimuli; very little research has considered postattentive vision. What is the visual representation of a previously attended object once attention is deployed elsewhere? The authors argue that perceptual effects of attention vanish once attention is redeployed. Experiments 1–6 were visual search studies. In standard search, participants looked for a target item among distractor items. On each trial, a new search display was presented. These tasks were compared to repeated search tasks in which the search display was not changed. On successive trials, participants searched the same display for new targets. Results showed that if search was inefficient when participants searched a display the first time, it was inefficient when the same, unchanging display was searched the second, fifth, or 350th time. Experiments 7 and 8 made a similar point with a curve tracing paradigm. The results have implications for an understanding of scene perception, change detection, and the relationship of vision to memory. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
The identification of targets in visual search arrays may be improved by suppressing competing information from the surrounding distractor items. The present study provided evidence that this hypothetical filtering process has a neural correlate, the N2pc component of the event-related potential waveform. The N2pc was observed when a target item was surrounded by competing distractor items but was absent when the array could be rejected as a nontarget on the basis of simple feature information. In addition, the N2pc was eliminated when filtering was discouraged by removing the distractor items, making the distractors relevant, or making all items within an array identical. Combined with previous topographic analyses, these results suggest that attentional filtering occurs in occipital cortex under the control of feedback from higher cortical regions after a preliminary feature-based analysis of the stimulus array. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
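The N2pc is conventionally isolated as a contralateral-minus-ipsilateral difference at posterior electrodes in roughly the 200-300 msec range. The sketch below shows only that arithmetic on made-up waveforms; the electrode names, the time window, and the data are assumptions for illustration, not values taken from this study.

```python
import numpy as np

# Fake averaged waveforms (2-ms samples, microvolts), one posterior channel per hemisphere.
times = np.arange(0, 500, 2)
po7 = np.random.default_rng(1).normal(0, 0.5, times.size)   # left posterior channel (assumed label)
po8 = np.random.default_rng(2).normal(0, 0.5, times.size)   # right posterior channel (assumed label)

def n2pc_difference(left_chan, right_chan, target_side):
    """Contralateral-minus-ipsilateral waveform for one target side."""
    if target_side == "left":
        contra, ipsi = right_chan, left_chan
    else:
        contra, ipsi = left_chan, right_chan
    return contra - ipsi

diff = n2pc_difference(po7, po8, target_side="left")
window = (times >= 200) & (times <= 300)   # assumed N2pc measurement window
print(f"mean contralateral-minus-ipsilateral amplitude: {diff[window].mean():.2f} microV")
```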

10.
For human participants, the duration of visual search for a target defined by a single visual feature is independent of the number of items being viewed. In contrast, search for targets formed by a conjunction of features is characterized by reaction times (RTs) that increase as a linear function of the number of items viewed, suggesting that target detection requires scrutiny of the search array by focal attention. Macaque (Macaca mulatta) and human performance on feature and conjunction search tasks was compared in 7 human Ss and 5 female monkeys, using targets defined by color or motion or by conjunctions of color and motion. Like human participants, monkeys exhibited a dichotomy between feature and conjunction search performance. This finding suggests that humans and macaques engage similar brain mechanisms for the representation of feature and conjunction targets. This behavioral paradigm can thus be used in neurophysiological experiments directed at the mechanisms of feature integration and target selection. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
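The feature/conjunction dichotomy is usually quantified as the slope of RT against display size: near-zero slopes for feature search, substantial positive slopes for conjunction search. A minimal slope-fitting sketch follows; the RT values are invented for illustration only.

```python
import numpy as np

def search_slope(set_sizes, rts):
    """Least-squares slope (msec per item) of RT against display size."""
    slope, _intercept = np.polyfit(set_sizes, rts, deg=1)
    return slope

set_sizes = np.array([4, 8, 16, 32])
feature_rts     = np.array([460, 465, 470, 468])    # made-up, essentially flat
conjunction_rts = np.array([520, 610, 790, 1150])   # made-up, roughly 20 msec/item

print(f"feature search:     {search_slope(set_sizes, feature_rts):5.1f} msec/item")
print(f"conjunction search: {search_slope(set_sizes, conjunction_rts):5.1f} msec/item")
```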

11.
Horizontal and vertical movements of the human eye bring new objects to the center of the visual field, but torsional movements rotate the visual world about its center. Ocular torsion stays near zero during head-fixed gaze shifts, and eye movements to visual targets are thought to be driven by purely horizontal and vertical commands. Here, analysis of eye-head gaze shifts revealed that gaze commands were three-dimensional, with a separate neural control system for torsion. Active torsion optimized gaze control as no two-dimensional system could have, stabilizing the retinal image as quickly as possible when it would otherwise have spun around the fixation point.
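Eye and gaze rotations in this literature are commonly expressed as rotation vectors, r = tan(theta/2) times the unit rotation axis, whose component about the line of sight is the torsion. The sketch below shows that decomposition for an arbitrary axis-angle rotation; the axis convention used here (x = torsional, y = vertical, z = horizontal) and the example numbers are assumptions for illustration, not the paper's own analysis code.

```python
import numpy as np

def rotation_vector(axis, angle_deg):
    """Rotation vector r = tan(angle/2) * unit axis (a common oculomotor convention)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.tan(np.radians(angle_deg) / 2.0) * axis

# Assumed coordinate convention: x about the line of sight (torsion),
# y about the interaural axis (vertical rotation), z about the vertical axis (horizontal rotation).
r = rotation_vector(axis=[0.2, 0.0, 1.0], angle_deg=30.0)   # made-up gaze rotation
print(f"rotation vector: {r}")
print(f"torsional component: {r[0]:.3f} "
      f"(zero would mean a purely horizontal/vertical command)")
```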

12.
Do people have to count to determine visual numerosity, or is there a fast "subitizing" procedure dedicated to small sets of 1 to 3 items? Numerosity naming time and errors were measured in 5 simultanagnosic patients who suffered from severe difficulties in serial counting. Although these patients made close to 100% errors in quantifying sets comprising more than 3 items, they were excellent at quantifying sets of 1, 2, and sometimes 3 items. Their performances in visual search tasks suggested that they suffered from a deficit of serial visual exploration, due to a fundamental inability to use spatial tags to keep track of previously explored locations. The present data suggest that the patients' preserved subitizing abilities were based not on serial processing but rather on a parallel algorithm dedicated to small numerosities. Several ways in which this parallel subitizing algorithm might function are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
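The contrast drawn here, serial counting that must tag already-visited locations versus a parallel read-out limited to very small sets, can be made concrete in a toy enumerator. In the sketch below the "patient-like" failure mode is simulated simply by disabling the visited-location tags; the capacity limit, stopping rule, and all numbers are assumptions, not mechanisms proposed in the paper.

```python
import random

def count_serially(locations, use_spatial_tags=True, max_fixations=20):
    """Toy serial counter: fixate one item at a time.  With spatial tags,
    already-counted locations are excluded and the final count is exact.
    Without tags, items may be revisited and recounted."""
    rng = random.Random(0)
    counted = set()
    count = 0
    for _ in range(max_fixations):
        pool = ([loc for loc in locations if loc not in counted]
                if use_spatial_tags else list(locations))
        if not pool:
            break
        counted.add(rng.choice(pool))
        count += 1
        if not use_spatial_tags and rng.random() < 0.25:
            break  # without tags there is no principled stopping point
    return count

def subitize(locations, capacity=3):
    """Toy parallel read-out: exact only up to a small assumed capacity."""
    return len(locations) if len(locations) <= capacity else None

dots = [(i, 0) for i in range(6)]
print(subitize(dots[:2]))                            # 2: parallel and exact
print(count_serially(dots, use_spatial_tags=True))   # 6: serial counting succeeds
print(count_serially(dots, use_spatial_tags=False))  # unreliable without spatial tags
```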

13.
Covert shifts of visual attention in space have been quantified by measuring the effects of visual cues on the detection of visual targets in humans and monkeys maintaining visual fixation. These observations of "covert orienting" have provided important information regarding the neurobiology of visual attention in primates. This article describes a cued spatial target detection task for physically unrestrained rats. Valid cues (spatially contiguous with the target) enhanced target detection, and invalid cues (spatially discontiguous with the target) degraded target detection. Both visual and auditory cues were effective. These validity effects could not be explained by stimulus additivity or response preparation mechanisms, whereas a cue-independent "alerting effect" appeared to reflect response preparation. The effects compare favorably with primate work and suggest that this method may enable assessment of visual attention shifts in rats. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
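Validity effects of this kind are typically summarized as simple RT differences between invalidly and validly cued trials, with a separate cued-versus-uncued comparison indexing non-spatial alerting. The sketch below just spells out that arithmetic on made-up mean RTs; the numbers and the particular definition of the alerting effect are assumptions, not values or formulas from the study.

```python
# Made-up mean reaction times (msec) for illustration only.
rt_valid   = 310.0   # cue at the target location
rt_invalid = 365.0   # cue away from the target location
rt_no_cue  = 355.0   # no cue at all

validity_effect = rt_invalid - rt_valid                        # spatial orienting effect
alerting_effect = rt_no_cue - (rt_valid + rt_invalid) / 2.0    # cue-independent speeding (one convention)

print(f"validity effect: {validity_effect:.0f} msec")
print(f"alerting effect: {alerting_effect:.1f} msec")
```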

14.
Notes that orienting attention involuntarily to the location of a sensory event influences responses to subsequent stimuli that appear in different modalities, with one possible exception: orienting attention involuntarily to a sudden light sometimes fails to affect responses to subsequent sounds (e.g., C. Spence and J. Driver, 1997). Here the authors investigated the effects of involuntary attention to a brief flash on the processing of subsequent sounds in a design that eliminates stimulus–response compatibility effects and criterion shifts as confounding factors. Thirteen 18–31 yr olds participated in the study. In addition, the neural processes mediating crossmodal attention were studied by recording event-related brain potentials. The data show that orienting attention to the location of a spatially nonpredictive visual cue modulates behavioral and neural responses to subsequent auditory targets when the stimulus onset asynchrony is short (between 100 and 300 ms). These findings are consistent with the hypothesis that involuntary shifts of attention are controlled by supramodal brain mechanisms rather than by modality-specific ones. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
This study investigated the simple reaction time (RT) and event-related potential (ERP) correlates of biasing attention towards a location in the visual field. RTs and ERPs were recorded to stimuli flashed randomly and with equal probability to the left and right visual hemifields in the three blocked, covert attention conditions: (i) attention divided equally to left and right hemifield locations; (ii) attention biased towards the left location; or (iii) attention biased towards the right location. Attention was biased towards left or right by instructions to the subjects, and responses were required to all stimuli. Relative to the divided attention condition, RTs were significantly faster for targets occurring where more attention was allocated (benefits), and slower to targets where less attention was allocated (costs). The early P1 (100-140 msec) component over the lateral occipital scalp regions showed attentional benefits. There were no amplitude modulations of the occipital N1 (125-180 msec) component with attention. Between 200 and 500 msec latency, a late positive deflection (LPD) showed both attentional costs and benefits. The behavioral findings show that when sufficiently induced to bias attention, human observers demonstrate RT benefits as well as costs. The corresponding P1 benefits suggest that the RT benefits of spatial attention may arise as the result of modulations of visual information processing in the extrastriate visual cortex.
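In this design, costs and benefits are differences from the divided-attention baseline: benefit = divided-attention RT minus attended-side RT, cost = unattended-side RT minus divided-attention RT. A small arithmetic sketch with invented numbers (the real values are reported in the study itself):

```python
# Invented mean RTs (msec) for illustration only.
rt_divided    = 340.0   # attention split equally across hemifields
rt_attended   = 318.0   # target on the side given more attention
rt_unattended = 362.0   # target on the side given less attention

benefit = rt_divided - rt_attended      # faster than baseline at the biased location
cost    = rt_unattended - rt_divided    # slower than baseline at the other location

print(f"benefit: {benefit:.0f} msec, cost: {cost:.0f} msec")
```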

16.
Certain local features induce preattentive texture segregation. Recently, components in the visual evoked potential (VEP) associated with preattentive texture segregation (tsVEPs) have been demonstrated. To assess the similarity and dissimilarity of visual processing across visual dimensions, we compared VEPs and tsVEPs in texture segregation by luminance, orientation, motion and stereo disparity. We found tsVEPs across these four visual dimensions to be remarkably similar when compared to the "low-level" VEPs. The tsVEPs were always negative; their implicit time, peak latency and amplitude were (in msec/msec/microV): 91/234/-5.7, luminance; 84/257/-3.9, orientation; 80/295/-8.3, motion; and 95/310/-5.0 for stereo. The cross-correlation function, as a quantitative measure for similarity, on average was higher for the tsVEPs by a factor of 4.2 as compared to the low-level VEPs (P < 0.0001). The results suggest (1) that the tsVEPs represent activity of neural mechanisms that have generalised to some degree across visual dimensions; and (2) that these hypothetical generalisation mechanisms might exist already in the primary visual cortex.
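The similarity measure used here, the cross-correlation between two waveforms, can be computed directly. A minimal sketch on synthetic waveforms follows; the signals and sampling rate are made up, and taking the peak of the normalised cross-correlation is one common choice, not necessarily the exact variant used in the study.

```python
import numpy as np

def peak_normalised_xcorr(a, b):
    """Peak of the normalised cross-correlation between two waveforms (1 = identical)."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.max(np.correlate(a, b, mode="full"))

t = np.arange(0, 0.5, 0.002)                       # 500 msec at 2-ms resolution (assumed)
vep_a = -5.0 * np.exp(-((t - 0.23) / 0.04) ** 2)   # synthetic negativity near 230 msec
vep_b = -4.0 * np.exp(-((t - 0.26) / 0.05) ** 2)   # similar shape, slightly shifted
noise = np.random.default_rng(0).normal(0, 0.3, t.size)

print(f"similar waveforms:  {peak_normalised_xcorr(vep_a, vep_b):.2f}")
print(f"waveform vs. noise: {peak_normalised_xcorr(vep_a, noise):.2f}")
```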

17.
Functional anatomical studies indicate that a set of neural signals in parietal and frontal cortex mediates the covert allocation of attention to visual locations across a wide variety of visual tasks. This frontoparietal network includes oculomotor areas such as the frontal eye field and the supplementary eye field. This anatomical overlap suggests that shifts of attention to the locations of visual objects recruit areas involved in oculomotor programming and execution. Finally, the frontoparietal network may be the source of spatial attentional modulations in the ventral visual system during object recognition or discrimination.

18.
To investigate the functional neuroanatomy associated with retrieving semantic and episodic memories, we measured changes in regional cerebral blood flow (rCBF) with positron emission tomography (PET) while subjects generated single word responses to achromatic line drawings of objects. During separate scans, subjects either named each object, retrieved a commonly associated color of each object (semantic condition), or recalled a previously studied uncommon color of each object (episodic condition). Subjects were also scanned while staring at visual noise patterns to provide a low level perceptual baseline. Relative to the low level baseline, all three conditions revealed bilateral activations of posterior regions of the temporal lobes, cerebellum, and left lateralized activations in frontal regions. Retrieving semantic information, as compared to object naming, activated left inferior temporal, left superior parietal, and left frontal cortices. In addition, small regions of right frontal cortex were activated. Retrieving episodic information, as compared to object naming, activated bilateral medial parietal cortex, bilateral retrosplenial cortex, right frontal cortex, thalamus, and cerebellum. Direct comparison of the semantic and episodic conditions revealed bilateral activation in temporal and frontal lobes in the semantic task (left greater than right), and activation in medial parietal cortex, retrosplenial cortex, thalamus, and cerebellum (but not right frontal regions) in the episodic task. These results support the assertion that distinct neural structures mediate semantic and episodic memory retrieval. However, they also raise questions regarding the specific roles of left temporal and right frontal cortices during episodic memory retrieval, in particular.
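Contrasts of the kind reported here ("condition A relative to baseline B") boil down to voxelwise comparison of rCBF images across conditions, followed by a statistical test over subjects. The sketch below shows that logic on a tiny fake dataset; the array shapes, subject count, threshold, and use of a simple paired t-test are assumptions for illustration, not the actual PET analysis pipeline used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 8, 1000            # assumed toy dimensions

# Fake normalised rCBF images: episodic retrieval vs. object naming.
naming   = rng.normal(50, 5, size=(n_subjects, n_voxels))
episodic = naming + rng.normal(0, 2, size=(n_subjects, n_voxels))
episodic[:, :20] += 4.0                    # pretend 20 voxels are truly more active

# Voxelwise paired t-test of the condition difference across subjects.
t_vals, p_vals = stats.ttest_rel(episodic, naming, axis=0)
activated = np.flatnonzero(p_vals < 0.001)  # arbitrary uncorrected threshold
print(f"{activated.size} voxels pass the (toy) threshold, e.g. indices {activated[:5]}")
```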

19.
Most accounts of visual perception hold that the detection of primitive features occurs preattentively, in parallel across the visual field. Evidence that preattentive vision operates without attentional limitations comes from visual search tasks in which the detection of the presence or absence of a primitive feature is independent of the number of stimuli in a display. If the detection of primitive features occurs preattentively, in parallel and without capacity limitations, then it should not matter where attention is located in the visual field. The present study shows that even though the detection of a red element in an array of gray elements occurred in parallel without capacity limitations, the allocation of attention did have a large effect on search performance. If attention was directed to a particular region of the display and the target feature was presented elsewhere, response latencies increased. Results indicate that the classic view of preattentive vision requires revision. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Describes 2 experiments in which visual and/or auditory location precues preceded visual or auditory targets. 30 observers were required to judge the location of the targets. Conditions were such that involuntary, stimulus-driven (SD) attention shifts were the only ones likely to occur and give rise to cueing effects. It was found that visual precues affected response time to localize both visual targets and auditory targets, but auditory precues affected only the time to localize auditory targets. Moreover, when visual and auditory cues conflicted, visual cues dominated in the visual task but were dominated by auditory cues in the auditory task. Results suggest that involuntary SD attention shifts might be controlled by a modality-specific mechanism for visual tasks, whereas SD shifts of auditory attention are controlled by a supramodal mechanism. (French abstract) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
