Similar Literature
20 similar documents found.
1.
Theories of image segmentation suggest that the human visual system may use two distinct processes to segregate figure from background: a local process that uses local feature contrasts to mark borders of coherent regions and a global process that groups similar features over a larger spatial scale. We performed psychophysical experiments to determine whether and to what extent the global similarity process contributes to image segmentation by motion and color. Our results show that for color, as well as for motion, segmentation occurs first by an integrative process on a coarse spatial scale, demonstrating that for both modalities the global process is faster than one based on local feature contrasts. Segmentation by motion builds up over time, whereas segmentation by color does not, indicating a fundamental difference between the modalities. Our data suggest that segmentation by motion proceeds first via a cooperative linking over space of local motion signals, generating almost immediate perceptual coherence even of physically incoherent signals. This global segmentation process occurs faster than the detection of absolute motion, providing further evidence for the existence of two motion processes with distinct dynamic properties.
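The two routes contrasted in this abstract can be illustrated with a minimal NumPy sketch (a toy, not the authors' stimuli or analysis): a local route that marks borders where neighboring feature values differ, and a global route that pools feature values over a coarse scale before comparing regions. The one-dimensional "motion direction" array, the window size, and the noise level are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stimulus: 80 elements in a row; the left half drifts one way (-1), the right half
    # the other way (+1), with heavy direction noise so many individual local signals are incoherent.
    signal = np.concatenate([np.full(40, -1.0), np.full(40, +1.0)])
    stimulus = signal + rng.normal(0.0, 1.5, size=80)

    # Local route: feature contrast between neighbors marks candidate borders.
    local_border = int(np.argmax(np.abs(np.diff(stimulus))))

    # Global route: pool over a coarse window on each side of every candidate border first.
    w = 20
    edge_score = [stimulus[i:i + w].mean() - stimulus[i - w:i].mean() for i in range(w, 80 - w)]
    global_border = w + int(np.argmax(np.abs(edge_score)))

    print("true border at element 40")
    print("local feature-contrast estimate:", local_border)
    print("coarse-scale pooling estimate  :", global_border)
    # With this much direction noise the pooled (global) estimate tends to stay near the true
    # border, while the purely local estimate is often pulled away by single noisy elements.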

2.
The posterior parietal cortex has long been considered an 'association' area that combines information from different sensory modalities to form a cognitive representation of space. However, until recently little has been known about the neural mechanisms responsible for this important cognitive process. Recent experiments from the author's laboratory indicate that visual, somatosensory, auditory and vestibular signals are combined in areas LIP and 7a of the posterior parietal cortex. The integration of these signals can represent the locations of stimuli with respect to the observer and within the environment. Area MSTd combines visual motion signals, similar to those generated during an observer's movement through the environment, with eye-movement and vestibular signals. This integration appears to play a role in specifying the path on which the observer is moving. All three cortical areas combine different modalities into common spatial frames by using a gain-field mechanism. The spatial representations in areas LIP and 7a appear to be important for specifying the locations of targets for actions such as eye movements or reaching; the spatial representation within area MSTd appears to be important for navigation and the perceptual stability of motion signals.
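The gain-field mechanism mentioned above can be illustrated with a toy computation (a sketch, not the authors' model): a retinotopic response is multiplied by a gain that depends on eye position, so a population of such units implicitly encodes head-centered location. The tuning width, gain slope, and numbers below are arbitrary illustrative choices.

    import numpy as np

    def gain_field_response(retinal_pos, eye_pos, pref_retinal, gain_slope=1.0):
        """Retinotopic Gaussian tuning multiplied by a planar eye-position gain (toy unit)."""
        tuning = np.exp(-0.5 * ((retinal_pos - pref_retinal) / 10.0) ** 2)
        gain = max(0.0, 1.0 + gain_slope * eye_pos / 40.0)
        return tuning * gain

    # Same retinal stimulus (at the unit's preferred retinal position), three eye positions.
    for eye in (-20.0, 0.0, +20.0):
        r = gain_field_response(retinal_pos=0.0, eye_pos=eye, pref_retinal=0.0)
        print(f"eye position {eye:+5.1f} deg -> response {r:.2f}")
    # The retinal tuning peak never moves, but the response is scaled by eye position, so a
    # population of such units jointly carries head-centered location (retinal + eye position)
    # that a downstream readout can recover; the abstract describes the same multiplicative
    # scheme being used to fold vestibular, auditory and somatosensory signals into common frames.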

3.
Describes further evidence for a new neural network theory of biological motion perception. The theory clarifies why parallel streams V1→V2, V1→MT, and V1→V2→MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. It also suggests that the static form system generates emergent boundary segmentations whose outputs are insensitive to direction-of-contrast and to direction-of-motion, whereas the motion form system generates emergent boundary segmentations whose outputs are insensitive to direction-of-contrast but sensitive to direction-of-motion. Data on short- and long-range apparent motion percepts are explained, including beta, split, gamma and reverse-contrast gamma, and delta motions, as well as visual inertia. Also included are the transition from group motion to element motion in response to a Ternus display as the interstimulus interval (ISI) decreases; group motion in response to a reverse-contrast Ternus display even at short ISIs; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, ISI, and motion threshold known as Korte's laws; dependence of motion strength on stimulus orientation and spatial frequency; short-range and long-range form–color interactions; and binocular interactions of flashes to different eyes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Current models of motion perception typically describe mechanisms that operate locally to extract direction and speed information. To deal with the movement of self or objects with respect to the environment, higher-level receptive fields are presumably assembled from the outputs of such local analyzers. We find that the apparent speed of gratings viewed through four spatial apertures depends on the interaction of motion directions among the apertures, even when the motion within each aperture is identical except for direction. Specifically, local motion consistent with a global pattern of radial motion appears 32% faster than that consistent with translational or rotational motion. The enhancement of speed is not reflected in detection thresholds and persists in spite of instructions to fixate a single local aperture and ignore the global configuration. We also find that a two-dimensional pattern of motion is necessary to elicit the effect and that motion contrast alone does not produce the enhancement. These results implicate at least two serial stages of motion-information processing: a mechanism to code the local direction and speed of motion, followed by a global mechanism that integrates such signals to represent meaningful patterns of movement, depending on the configuration of the local motions.
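The two-stage idea can be sketched as follows (illustrative only): local units return a direction at each aperture, and a global stage checks whether the set of local vectors fits a radial, rotational, or translational template. The four aperture positions and the template-matching rule are assumptions made for this sketch, not the authors' stimuli or model.

    import numpy as np

    # Four aperture centers around fixation (illustrative positions) and templates for the global stage.
    centers = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
    radial = centers / np.linalg.norm(centers, axis=1, keepdims=True)      # outward from fixation
    rotational = np.stack([-radial[:, 1], radial[:, 0]], axis=1)           # 90 deg rotated
    translational = np.tile([1.0, 0.0], (4, 1))                            # one common direction

    def classify_global(local_dirs):
        """Score local unit velocity vectors against radial, rotational and translational templates."""
        templates = {"radial": radial, "rotational": rotational, "translational": translational}
        scores = {name: float(np.mean(np.sum(local_dirs * t, axis=1))) for name, t in templates.items()}
        return max(scores, key=scores.get), scores

    # Example: every aperture contains outward motion, i.e. a globally radial (expanding) pattern.
    print(classify_global(radial))
    # The finding above is that the same local speeds look about 32% faster when the local
    # directions fit the radial template than when they fit the rotational or translational ones.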

5.
Real-world objects are complex, containing information at multiple orientations and spatial scales. It is well established that at initial cortical stages of processing, local information about an image is separately represented at multiple spatial scales. However, it is not yet established how these early representations are later integrated across scale to signal useful information about complex stimulus features, such as edges and textures. In the studies reported here, we investigate the scale-integration processes involved in distinguishing among complex patterns. We use a concurrent-response paradigm in which observers simultaneously judge two components of compound gratings that differ widely in spatial frequency. In different experiments, each component takes one of two slightly different values along the dimensions of spatial frequency, contrast, or orientation. Using analyses developed within the framework of a multivariate extension of signal-detection theory, we ask how information about the frequency, contrast, or orientation of the components is or is not integrated across the two grating components. Our techniques permit us to isolate and identify interactions due to excitatory or inhibitory processes from effects due to noise, and to separately assess any attentional limitations that might occur in processing. Results indicate that orientation information is fully integrated across spatial scales within a limited orientation band and that decisions are based entirely on the summed information. Information about spatial frequency and contrast is not summed over spatial scale; cross-scale results show sensory independence. However, our results suggest that observers cannot simultaneously use information about frequency or contrast when it is presented at different spatial scales. Our results provide direct evidence for the existence of a higher-level summing circuit tailored to signal information about orientation. The properties of this mechanism differ substantially from edge-detector mechanisms proposed by Marr and others.
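The contrast between "fully integrated across scales" (orientation) and "sensory independence" (frequency, contrast) can be made concrete with a standard signal-detection toy, not the authors' multivariate analysis: under assumed equal-variance Gaussian noise, summing two noisy channel responses before the decision improves sensitivity by about sqrt(2) (roughly 1.4) relative to judging either channel alone.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    d_single = 1.0                      # assumed per-channel sensitivity

    def channel_samples(signal_present):
        """Noisy responses of two channels (e.g. a low-SF and a high-SF grating component)."""
        mean = d_single if signal_present else 0.0
        return rng.normal(mean, 1.0, size=(n, 2))

    def dprime(signal_dist, noise_dist):
        pooled_sd = np.sqrt(0.5 * (signal_dist.var() + noise_dist.var()))
        return (signal_dist.mean() - noise_dist.mean()) / pooled_sd

    sig, noise = channel_samples(True), channel_samples(False)
    print("single channel d' :", round(float(dprime(sig[:, 0], noise[:, 0])), 2))
    print("summed channels d':", round(float(dprime(sig.sum(axis=1), noise.sum(axis=1)), ), 2))
    # Pre-decision summation yields about sqrt(2) * 1.0 = 1.41, the pattern reported here for
    # orientation across scales; without summation each judgment is limited to the
    # single-channel value, the pattern reported for spatial frequency and contrast.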

6.
7.
Investigated the phenomenon of representational momentum as reported by the 1st 2 authors (see record 1984-16934-001) in cases where visual memories are distorted by implied motions of the elements of a pattern, conducting 3 experiments with 48 undergraduates. It was predicted that these memory distortions should be sensitive not only to the direction of the implied motions but also to changes in the implied velocity. Ss observed a sequence of dot-pattern displays that implied that the dots were moving at either a constant velocity or constant acceleration, but in separate directions. Discrimination functions for recognizing the final pattern in the sequence revealed that Ss' memories had shifted forward, corresponding to small continuations of the implied motions. The induced memory shifts increased in size as the implied velocity and acceleration of the dots increased but were eliminated when the display sequence implied a deceleration of the dots to a final velocity of zero. It is suggested that mentally extrapolated motion may have some of the same inertial properties as actual physical motion. (30 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
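The reported pattern amounts to a small forward extrapolation of the remembered position along the implied motion. The linear sketch below is a toy summary: the gain value and positions are made up for illustration, and the real shifts also grow with implied acceleration and vanish when the implied motion decelerates to rest.

    # Toy representational-momentum sketch: remembered position is displaced forward
    # in proportion to the implied velocity at the moment the pattern disappears.
    def remembered_position(final_pos, implied_velocity, gain=0.05):
        """gain is a hypothetical extrapolation constant (memory shift per unit velocity)."""
        return final_pos + gain * implied_velocity

    for v in (0.0, 4.0, 8.0):                      # implied dot velocities, illustrative units
        shift = remembered_position(final_pos=10.0, implied_velocity=v) - 10.0
        print(f"implied velocity {v:4.1f} -> forward memory shift {shift:+.2f}")
    # Larger implied velocity -> larger forward shift, and no shift when the sequence implies
    # the dots have slowed to a stop, matching the pattern described in the abstract.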

8.
9.
Six experiments investigated the role of global (shape) and local (contour) orientation in visual search for an orientation target. Experiment 1 demonstrated that search for a conjunction of local contours with a distinct global orientation was less efficient than search for a target featurally distinct in terms of both global and local contour orientation. However, Experiments 2 and 4 demonstrated that the presence of a unique line contour was neither sufficient nor necessary to allow efficient search. Experiment 5 found that search for a local orientation difference was strongly impeded by irrelevant variation in global orientation, arguing for a preeminent role for global orientation. Finally, Experiment 6 demonstrated that the orientation search asymmetry holds for the global orientation of stimuli. Taken together, the results are consistent with visual search processes guided predominantly by a representation of global orientation.

10.
Six experiments are reported that were aimed at demonstrating the presence in newborns of a perceptual dominance of global over local visual information in hierarchical patterns, similar to that observed in adults (D. Navon, 1977, 1981). The first four experiments showed that, even though both levels of visual information were detectable by the newborn (Experiments 1A and 1B), global cues enjoyed some advantage over local cues (Experiments 2 and 3). Experiments 4A and 4B demonstrated that the global bias was strictly dependent on the low spatial frequency content of the stimuli and vanished after selective removal of low spatial frequencies. The results are interpreted as suggesting parallels between newborns' visual processing and processing later in development. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Prior to the presentation of a test stimulus, subjects' attentional state was either narrowly focused on a particular location or broadly spread over a large spatial region. In previous studies, it was found that broadly spread attention enhances the sensitivity of relatively large spatial filters (increasing the perceiver's spatial scale), thereby diminishing spatial resolution and enhancing sensitivity to global stimulus structure. In this study it is shown that attentional spread also affects the self-organization of unidirectional versus oscillatory motion patterns for the directionally ambiguous, counterphase presentation of rows of evenly-spaced visual elements (line segments; dots); i.e. qualitatively different motion patterns can be formed for the same stimulus at different spatial scales. Although the degree to which attention is spread along a spatial axis can be controlled by the perceiver, the effects of spread attention are not limited to a single axis. These results, as well as previously observed effects of attentional spread on spatial resolution, are accounted for by a neural model involving large, foveally-centered receptive fields with co-operatively interacting subunits (probably at the level of MST or higher).

12.
We test 3 theories of global and local scene information acquisition, defining global and local in terms of spatial frequencies. By independence theories, high- and low-spatial-frequency information are acquired over the same time course and combine additively. By global-precedence theories, global information acquisition precedes local information acquisition, but they combine additively. By interactive theories, global information also affects local-information acquisition rate. We report 2 digit-recall experiments. In the 1st, we confirmed independence theories. In the 2nd, we disconfirmed both independence theories and interactive theories, leaving global-precedence theories as the remaining alternative. We show that a specific global-precedence theory quantitatively accounted for the data from Experiments 1 and 2 as well as for past data. We discuss how our spatial-frequency definition of spatial scale comports with definitions used by others, and we consider the suggestion by P. G. Schyns and colleagues (e.g., D. J. Morrison & Schyns, 2001) that the visual system may act flexibly rather than rigidly in its use of spatial scales. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
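The structural difference between the three theory classes can be shown with a toy accrual model (purely illustrative onsets, rates, and boost constant; this is not the authors' quantitative model): global and local spatial-frequency information accrue linearly with exposure duration, and the theories differ in whether the two onsets coincide and whether accumulated global information modulates the local rate.

    import numpy as np

    def info(t, onset, base_rate, boost=0.0, global_info=0.0):
        """Linear accrual after `onset`; `boost * global_info` models interactive rate modulation."""
        return np.maximum(0.0, t - onset) * base_rate * (1.0 + boost * global_info)

    for t in (30.0, 80.0, 150.0):                       # exposure durations (ms), illustrative
        g = info(t, onset=0.0, base_rate=1.0)           # global (low-SF) information
        local_ind = info(t, onset=0.0, base_rate=1.0)                               # independence
        local_prec = info(t, onset=50.0, base_rate=1.0)                             # global precedence
        local_int = info(t, onset=50.0, base_rate=1.0, boost=0.01, global_info=g)   # interactive
        print(f"{t:5.0f} ms: global={g:6.1f}  local(indep)={local_ind:6.1f}  "
              f"local(preced)={local_prec:6.1f}  local(interact)={local_int:6.1f}")
    # All three theory classes combine global and local information additively in this sketch;
    # they differ in whether local acquisition starts with (independence) or after (precedence)
    # global acquisition, and in whether accumulated global information boosts the local rate
    # (interactive). Every onset, rate and the boost constant here is made up for illustration.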

13.
To better understand how local motion detectors merge their responses so as to permit the global determination of objects' movements in the visual field, direction discrimination performance was measured using a flexible class of moving dots--two sets of dots translating sinusoidally 90 deg out of phase along orthogonal axes. When dots' velocities are combined, a global motion along a circular trajectory emerges, clockwise or counter-clockwise depending on the sign of the phase lag. However, the results of the present experiments indicate that dot patterns are segregated into distinct, but interacting, streams when each dot motion can be accurately determined. In contrast, perceptual coherence of the global motion occurs when each local motion signal is "blurred" by a "motion noise". Direction discrimination performance then increases regularly with both noise amplitude and noise frequency, i.e., noise speed. Performance also increases when relative motion between dots is added. Testing different dot configurations indicates that performance is better for spatial arrangements that display structural properties (a square shape), as compared to overlapping random distributions. Interestingly, when the delay between stimulus onset and motion onset increases up to 300 msec, performance improves when dot patterns convey some form of structural organization but not when the dots are distributed at random. Relations of these results to existing models of motion integration are considered.
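The stimulus construction can be written down directly (a sketch of the dot kinematics described above; amplitude, frequency, and sample count are arbitrary): two dot sets translate sinusoidally along orthogonal axes with a 90 deg phase lag, so the vector sum of their velocities traces a circular trajectory whose sense of rotation depends on the sign of the lag.

    import numpy as np

    A, f = 1.0, 1.0                                # amplitude and temporal frequency, illustrative
    omega = 2.0 * np.pi * f
    phase_lag = np.pi / 2.0                        # +90 deg; its sign sets clockwise vs counter-clockwise
    t = np.linspace(0.0, 1.0, 8, endpoint=False)   # sample times within one cycle

    # Set 1 translates as A*sin(omega*t) along x, set 2 as A*sin(omega*t + phase_lag) along y,
    # so the two sets' velocities are the derivatives below.
    vx = A * omega * np.cos(omega * t)               # velocity of set-1 dots (x axis)
    vy = A * omega * np.cos(omega * t + phase_lag)   # velocity of set-2 dots (y axis)

    speed = np.hypot(vx, vy)
    direction = np.degrees(np.arctan2(vy, vx))
    print("combined speed constant:", bool(np.allclose(speed, A * omega)))
    print("combined direction (deg):", np.round(direction, 1))
    # The combined velocity vector keeps a constant magnitude while its direction sweeps through
    # 360 deg per cycle, i.e. the circular global trajectory described in the abstract; reversing
    # the sign of phase_lag reverses the sense of rotation.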

14.
It is suggested that the distinction between global versus local processing styles exists across sensory modalities. Activation of one way of processing in one modality should affect processing styles in a different modality. In 12 studies, auditory, haptic, gustatory or olfactory global versus local processing was induced, and participants were tested with a measure of their global versus local visual attention; the content of this measure was unrelated to the inductions. In a different set of 4 studies, the effect of local versus global visual processing on the way people listen to a poem or touch, taste, and smell objects was examined. In all experiments, global/local processing in 1 modality shifted to global/local processing in the other modality. A final study found more pronounced shifts when compatible processing styles were induced in 2 rather than 1 modality. Moreover, the study explored mediation by relative right versus left hemisphere activation as measured with the line bisection task and accessibility of semantic associations. It is concluded that the effects reflect procedural rather than semantic priming effects that occurred out of participants' awareness. Because global/local processing has been shown to affect higher order processing, future research may activate processing styles in other sensory modalities to produce similar effects. Furthermore, because global/local processing is triggered by a variety of real world variables, one may explore effects on other sensory modalities than vision. The results are consistent with the global versus local processing model, a systems account (GLOMOsys; Förster & Dannenberg, 2010). (PsycINFO Database Record (c) 2011 APA, all rights reserved)

15.
Visual search data are given a unified quantitative explanation by a model of how spatial maps in the parietal cortex and object recognition categories in the inferotemporal cortex deploy attentional resources as they reciprocally interact with visual representations in the prestriate cortex. The model's visual representations are organized into multiple boundary and surface representations. Visual search in the model is initiated by organizing multiple items that lie within a given boundary or surface representation into a candidate search grouping. These items are compared with object recognition categories to test for matches or mismatches. Mismatches can trigger deeper searches and recursive selection of new groupings until a target object is identified. The model provides an alternative to Feature Integration and Guided Search models. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
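The search cycle described above (group items that share a boundary or surface representation, compare the grouping with object-recognition categories, and let mismatches trigger deeper, recursive searches) can be paraphrased as a short sketch. The data structures and the match test below are placeholders chosen for illustration; they do not stand in for the model's boundary/surface or category dynamics.

    def visual_search(groupings, target_category, depth=0):
        """Loose recursive paraphrase of the grouping-then-matching search cycle.

        groupings: list of candidate search groupings; each grouping is either a list of item
        labels or a list of nested sub-groupings (placeholder representation).
        """
        for grouping in groupings:
            if all(isinstance(member, str) for member in grouping):
                # Compare the candidate grouping with the object-recognition category.
                if target_category in grouping:          # placeholder match test
                    return grouping, depth
            else:
                # Mismatch at this level: trigger a deeper, recursive search over sub-groupings.
                found = visual_search(grouping, target_category, depth + 1)
                if found is not None:
                    return found
        return None

    # Items pre-organized by shared boundaries/surfaces (illustrative scene).
    scene = [["red-vertical", "red-vertical"],
             [["green-vertical", "green-vertical"], ["red-horizontal", "target:red-vertical"]]]
    print(visual_search(scene, "target:red-vertical"))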

16.
Recently, R. Egly et al. (see record 1994-34191-001) provided evidence for an object-based component of visual orienting in a simple cued reaction time task. However, the effects of objects on visual attention can be due to selection from either of 2 very different types of representations: (1) a truly object-based representation that codes for object structure or (2) a grouped array representation that codes for groups of spatial locations. Are Egly et al.'s results due to selection from an object-based representation or from a grouped array representation? This question was addressed by using a variant of Egly et al.'s task. The findings replicated those of Egly et al. and demonstrated that the selection in this task is mediated through a grouped array representation. The implications of these results for studies of attentional selection are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on: (i) visual neglect; and (ii) reading and counting, reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification.

18.
Random-dot cinematograms (RDCs) consist of multiple local motion signals that can vary in direction and speed. These local motion signals can result in coherent motion: the percept of an overall direction and speed of motion in an RDC. Thresholds were obtained for discriminating differences in the strength of coherent motion. Observers were found to easily discriminate the strength of coherent motion on the basis of the elements' direction or speed under optimal conditions. However, a nonreciprocal relation was evident when this discrimination was performed under nonoptimal conditions. Discrimination of coherent motion that was based on the elements' direction was unaffected, but discrimination that was based on speed was impaired. Results indicate that humans are sensitive to small differences in coherent motion strength and suggest that the visual system processes direction and speed information nonreciprocally. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
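A frame update for a random-dot cinematogram with a given coherence level can be sketched as follows. This is the standard construction with arbitrary parameter values, not the specific displays of the study, which varied both the direction and the speed of the signal elements.

    import numpy as np

    rng = np.random.default_rng(2)

    def rdc_step(positions, coherence, signal_dir_deg, signal_speed, noise_speed):
        """Move a fraction `coherence` of dots in the signal direction; the rest move randomly."""
        n = len(positions)
        is_signal = rng.random(n) < coherence
        angles = np.where(is_signal,
                          np.deg2rad(signal_dir_deg),
                          rng.uniform(0.0, 2.0 * np.pi, n))
        speeds = np.where(is_signal, signal_speed, noise_speed)
        step = np.stack([np.cos(angles), np.sin(angles)], axis=1) * speeds[:, None]
        return positions + step

    dots = rng.uniform(0.0, 10.0, size=(500, 2))            # 500 dots in a 10 x 10 field
    moved = rdc_step(dots, coherence=0.4, signal_dir_deg=0.0, signal_speed=0.1, noise_speed=0.1)
    mean_motion = (moved - dots).mean(axis=0)
    print("mean displacement per frame (x, y):", np.round(mean_motion, 3))
    # Raising `coherence`, or giving the signal dots a distinct speed, strengthens the global
    # (coherent) motion signal; the study above measured how well observers discriminate
    # differences in that strength based on direction versus speed.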

19.
Across two experiments, this study found that the barber pole illusion (i.e. grating pattern appearing to move in the direction of the long axis of a rectangular aperture) is perceived with stereoscopic (cyclopean) motion. The grating and aperture comprising the barber pole display were created from binocular disparity differences embedded in a dynamic random-dot stereogram or from luminance differences. In Experiment 1, observers viewed a square-wave grating moving through a rectangular aperture of 2:1 or 4:1 aspect ratio and indicated whether the grating appeared to move in a direction perpendicular to its orientation or in the direction of the long axis of the aperture. For both stereoscopic and luminance stimuli equally, the grating appeared to move in the direction of the aperture (i.e. the barber pole illusion) more often with the larger aspect ratio than with the smaller aspect ratio. The condition for which a stereoscopic grating moved through a luminance rectangular aperture was also examined: the grating appeared to move in the direction of the aperture (inter-attribute barber pole illusion). In Experiment 2, observers viewed a square-wave grating moving through a rectangular aperture of 3:1 aspect ratio whose sides were indented in order to change the local direction of motion of the line terminators. For both stereoscopic and luminance stimuli, the grating appeared to move more frequently in a direction perpendicular to its orientation with the indented aperture (i.e. the illusion was diminished). Thus, local velocity signals from moving stereoscopic line terminators play a role in the production of the barber pole illusion similar to that of luminance motion signals. This suggests that the generation and propagation of motion signals at cyclopean levels of vision play a part in the representation of coherently-moving rigid surfaces.
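One way to see why aspect ratio matters is a simple terminator-counting sketch, an illustrative account in the spirit of the terminator argument above rather than the paper's model: line endings on the long sides of the aperture move along the long axis, endings on the short sides move along the short axis, and the perceived direction tends to follow the majority.

    def terminator_votes(aspect_ratio):
        """Fraction of line terminators moving along the long vs short axis of the aperture.

        Assumes terminators are spread evenly along the aperture border, so the counts are
        proportional to side lengths (an illustrative simplification).
        """
        long_side, short_side = aspect_ratio, 1.0
        total = 2 * (long_side + short_side)
        return {"along long axis": 2 * long_side / total,
                "along short axis": 2 * short_side / total}

    for ratio in (2.0, 4.0):
        votes = {k: round(v, 2) for k, v in terminator_votes(ratio).items()}
        print(f"{ratio:.0f}:1 aperture ->", votes)
    # 4:1 gives 0.80 vs 0.20 and 2:1 gives 0.67 vs 0.33; the larger majority of terminator
    # signals along the long axis fits the stronger illusion at the larger aspect ratio, and
    # indenting the sides (Experiment 2) changes the terminators' local directions and weakens
    # the illusion, for stereoscopic and luminance gratings alike.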

20.
Previous research has shown that the perception of motion within a local region is influenced by other motions within neighboring areas (eg induced motion). Here, a study is reported of the perceived speed of dots moving within a circular target region, which was surrounded by other motions within a larger surrounding area. The perceived speed of the central dots was found to be fastest when the surround was stationary; it became slower as the speed of motion in the surround was increased. This decrease in the perceived target speed with increases in surround velocity occurred regardless of whether the direction in which the surround moved was the same as or opposite to the motion of the target region. This result cannot be explained by using simple models of perceived speed that depend only upon such factors as the magnitude of relative motion between center and surround. The spatial area over which these motion interactions occur was also investigated.
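The reported pattern (center dots look fastest with a stationary surround, and slower as surround speed grows, regardless of the surround's direction) is the signature of a suppression that depends on surround speed magnitude rather than on relative motion. The divisive sketch below is one minimal form consistent with that pattern; the functional form and constant are assumptions made for illustration, not the authors' model.

    def perceived_center_speed(center_speed, surround_velocity, k=0.15):
        """Toy divisive suppression: depends on |surround velocity|, not on its direction."""
        return center_speed / (1.0 + k * abs(surround_velocity))

    center = 4.0                                    # target dot speed, illustrative units
    for surround in (0.0, +2.0, -2.0, +6.0, -6.0):  # sign = same vs opposite direction
        est = perceived_center_speed(center, surround)
        print(f"surround {surround:+4.1f} -> perceived center speed {est:.2f}")
    # Perceived speed is highest with a stationary surround and falls as surround speed rises,
    # identically for same- and opposite-direction surrounds; a model based only on relative
    # motion between center and surround would instead predict a direction-dependent difference.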

