Similar Literature
20 similar documents found.
1.
An observer moving forwards through the environment experiences a radial pattern of image motion on each retina. Such patterns of optic flow are a potential source of information about the observer's rate of progress, direction of heading and time to reach objects that lie ahead. As the viewing distance changes there must be changes in the vergence angle between the two eyes so that both foveas remain aligned on the object of interest in the scene ahead. Here we show that radial optic flow can elicit appropriately directed (horizontal) vergence eye movements with ultra-short latencies (roughly 80 ms) in human subjects. Centrifugal flow, signalling forwards motion, increases the vergence angle, whereas centripetal flow decreases the vergence angle. These vergence eye movements are still evident when the observer's view of the flow pattern is restricted to the temporal hemifield of one eye, indicating that these responses do not result from anisotropies in motion processing but from a mechanism that senses the radial pattern of flow. We hypothesize that flow-induced vergence is but one of a family of rapid ocular reflexes, mediated by the medial superior temporal cortex, compensating for translational disturbance of the observer.

2.
What visual information do we use to guide movement through our environment? Self-movement produces a pattern of motion on the retina, called optic flow. During translation, the direction of movement (locomotor direction) is specified by the point in the flow field from which the motion vectors radiate - the focus of expansion (FoE) [1-3]. If an eye movement is made, however, the FoE no longer specifies locomotor direction [4], but the 'heading' direction can still be judged accurately [5]. Models have been proposed that remove confounding rotational motion due to eye movements by decomposing the retinal flow into its separable translational and rotational components ([6-7] are early examples). An alternative theory is based upon the use of invariants in the retinal flow field [8]. The assumption underpinning all these models (see also [9-11]), and associated psychophysical [5,12,13] and neurophysiological studies [14-16], is that locomotive heading is guided by optic flow. In this paper we challenge that assumption for the control of direction of locomotion on foot. Here we have explored the role of perceived location by recording the walking trajectories of people wearing displacing prism glasses. The results suggest that perceived location, rather than optic or retinal flow, is the predominant cue that guides locomotion on foot.
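The focus of expansion described above can be located computationally: under pure translation, every flow vector lies on a line through the FoE, so a least-squares intersection of those lines recovers it. The following is an illustrative NumPy sketch of that idea, not code from any of the cited papers:

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares estimate of the FoE: the point from which flow radiates.

    Each flow vector at image point q lies on a line through the FoE, so we
    find the point p minimising the summed squared perpendicular distance
    to all those lines, i.e. solving n_i . p = n_i . q_i in least squares,
    where n_i is the unit normal to flow vector i.
    """
    n = np.stack([-flows[:, 1], flows[:, 0]], axis=1)  # normals to each flow vector
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    b = np.sum(n * points, axis=1)
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe

# Synthetic centrifugal flow expanding from a known point (0.2, -0.1)
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(200, 2))
true_foe = np.array([0.2, -0.1])
flows = pts - true_foe  # each vector points directly away from the FoE
print(focus_of_expansion(pts, flows))  # → approx [0.2, -0.1]
```

With noise-free radial flow the normal equations are consistent, so the estimate matches the true FoE to numerical precision; with noisy flow the same least-squares fit gives the best-fitting radiation point.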

3.
The ability to judge heading during tracking eye movements has recently been examined by several investigators. To assess the use of retinal-image and extra-retinal information in this task, the previous work has compared heading judgments with executed as opposed to simulated eye movements. For eye movement velocities greater than 1 deg/sec, observers seem to require the eye-velocity information provided by extra-retinal signals that accompany tracking eye movements. When those signals are not provided, such as with simulated eye movements, observers perceive their self-motion as curvilinear translation rather than the linear translation plus eye rotation being presented. The interpretation of the previous results is complicated, however, by the fact that the simulated eye movement condition may have created a conflict between two possible estimates of the heading: one based on extra-retinal solutions and the other based on retinal-image solutions. In four experiments, we minimized this potential conflict by having observers judge heading in the presence of rotations consisting of mixtures of executed and simulated eye movements. The results showed that the heading is estimated more accurately when rotational flow is created by executed eye movements alone. In addition, the magnitude of errors in heading estimates is essentially proportional to the amount of rotational flow created by a simulated eye rotation (independent of the total magnitude of the rotational flow). The fact that error magnitude is proportional to the amount of simulated rotation suggests that the visual system attributes rotational flow unaccompanied by an eye movement to a displacement of the direction of translation in the direction of the simulated eye rotation.
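The translational and rotational flow components these studies manipulate follow the standard instantaneous-flow equations (Longuet-Higgins & Prazdny form; sign conventions vary between treatments). A minimal sketch, showing why the rotational term, being depth-independent, cannot be separated from translation at a single image point without depth or extra-retinal cues:

```python
def retinal_flow(x, y, Z, T, Omega):
    """Image flow (u, v) at image point (x, y) for a scene point at depth Z,
    observer translation T = (Tx, Ty, Tz) and rotation Omega = (Ox, Oy, Oz),
    with focal length 1 (Longuet-Higgins & Prazdny form; signs depend on the
    chosen convention). The translational term scales with 1/Z; the
    rotational term does not, which is the crux of the ambiguity."""
    Tx, Ty, Tz = T
    Ox, Oy, Oz = Omega
    # Translational component: radiates from the FoE at (Tx/Tz, Ty/Tz)
    u_t = (-Tx + x * Tz) / Z
    v_t = (-Ty + y * Tz) / Z
    # Rotational component: independent of scene depth Z
    u_r = x * y * Ox - (1 + x**2) * Oy + y * Oz
    v_r = (1 + y**2) * Ox - x * y * Oy - x * Oz
    return u_t + u_r, v_t + v_r

# Pure translation: flow at (0.5, 0.5) radiates from the FoE at (0.2, -0.1)
print(retinal_flow(0.5, 0.5, 2.0, (0.2, -0.1, 1.0), (0.0, 0.0, 0.0)))
```

Evaluating the rotational part at two very different depths returns identical values, which is why a simulated eye rotation can masquerade as a change in the direction of translation.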

4.
Four experiments were directed at understanding the influence of multiple moving objects on curvilinear (i.e., circular and elliptical) heading perception. Displays simulated observer movement over a ground plane in the presence of moving objects depicted as transparent, opaque, or black cubes. Objects either moved parallel to or intersected the observer's path and either retreated from or approached the moving observer. Heading judgments were accurate and consistent across all conditions. The significance of these results for computational models of heading perception and for information in the global optic flow field about observer and object motion is discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3-dimensional virtual reality environment to determine the position of objects on the basis of motion discontinuities and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles so that the goal acts as an attractor of heading and obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas middle temporal, medial superior temporal, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually guided steering, obstacle avoidance, and route selection.
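The goal-as-attractor, obstacle-as-repeller dynamics described above can be caricatured as a one-dimensional heading update rule. The parameter names, values, and falloff terms below are illustrative assumptions, not the model's actual equations:

```python
import math

def steering_rate(phi, goal_angle, obstacle_angles, obstacle_dists,
                  k_g=3.0, k_o=50.0, c1=5.0, c2=0.5):
    """Hedged sketch of attractor/repeller steering dynamics: the goal pulls
    heading phi toward it, while each obstacle pushes phi away, with
    repulsion decaying with angular offset and with distance. All constants
    here are hypothetical, chosen only to demonstrate the structure."""
    # Attraction: proportional to the angular error to the goal
    d_phi = -k_g * (phi - goal_angle)
    # Repulsion: each obstacle contributes a term that vanishes with distance
    for psi, d in zip(obstacle_angles, obstacle_dists):
        d_phi += k_o * (phi - psi) * math.exp(-c1 * abs(phi - psi)) * math.exp(-c2 * d)
    return d_phi

# Heading slightly right of a dead-ahead obstacle: repulsion dominates,
# turning the agent further away from the obstacle
print(steering_rate(0.1, 0.0, [0.0], [1.0]))
```

The fixed point with no obstacles is the goal direction; adding an obstacle near the current heading shifts the dynamics so the heading is steered around it, which is the qualitative behavior the abstract attributes to the model.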

6.
When an individual moves through a cluttered environment, he or she often fixates an object relatively near his or her path in the middle distance and uses pursuit eye movements to follow it while moving forward. On the basis of previous evidence, either motion fields or displacement fields around the fixated object—two alternative representations of the same information—could be used to determine one's direction of self-movement, sometimes called heading or aimpoint. In a series of 5 experiments, the relationship between these representations was explored and it was found that the displacements of identifiable objects, not their motions, are most likely the direct inputs for wayfinding. It may be that these inputs are used in conjunction with a mental map to determine one's aimpoint. A mathematical framework for this process is proposed.

7.
This study examined whether the perception of heading is determined by spatially pooling velocity information. Observers were presented displays simulating observer motion through a volume of 3-D objects. To test the importance of spatial pooling, the authors systematically varied the nonrigidity of the flow field using two types of object motion: adding a unique rotation or translation to each object. Calculations of the signal-to-noise (observer velocity-to-object motion) ratio indicated no decrements in performance when the ratio was .39 for object rotation and .45 for object translation. Performance also increased with the number of objects in the scene. These results suggest that heading is determined by mechanisms that use spatial pooling over large regions.

8.
This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object token or the target object rotated in depth. In Experiments 1 and 2, object memory performance was more accurate when the test object alternatives were displayed within the original scene than when they were displayed in isolation, demonstrating object-to-scene binding. Experiment 3 tested the hypothesis that episodic scene representations are formed through the binding of object representations to scene locations. Consistent with this hypothesis, memory performance was more accurate when the test alternatives were displayed within the scene at the same position originally occupied by the target than when they were displayed at a different position.

9.
In eight experiments, we examined the ability to judge heading during tracking eye movements. To assess the use of retinal-image and extra-retinal information in this task, we compared heading judgments with executed as opposed to simulated eye movements. In general, judgments were much more accurate during executed eye movements. Observers in the simulated eye movement condition misperceived their self-motion as curvilinear translation rather than the linear translation plus eye rotation that was simulated. There were some experimental conditions in which observers could judge heading reasonably accurately during simulated eye movements; these included conditions in which eye movement velocities were 1 deg/sec or less and conditions which made available a horizon cue that exists for locomotion parallel to a ground plane with a visible horizon. Overall, our results imply that extra-retinal, eye-velocity signals are used in determining heading under many, perhaps most, viewing conditions.

10.
How does the visual system retain and combine information about an object across time and space? This question was investigated by manipulating the spatiotemporal continuity and form continuity of 2 perceptual objects over time. In Experiment 1 the objects were viewed in central vision within a single eye fixation, in Experiment 2 they were viewed across a saccadic eye movement, and in Experiment 3 they were viewed at different spatial and retinal locations over time. In all 3 experiments some information about the object was found to be linked to its spatiotemporal continuity, and some information was found to be independent of spatiotemporal continuity. Form continuity was found to produce no effect. The results support a theory of dynamic visual identification according to which information is maintained over time by both episodic object representations and long-term memory representations, neither of which necessarily code specific sensory information.

11.
Recent studies have suggested that humans cannot estimate their direction of forward translation (heading) from the resulting retinal motion (flow field) alone when rotation rates are higher than approximately 1 deg/sec. It has been argued that either oculomotor or static depth cues are necessary to disambiguate the rotational and translational components of the flow field and, thus, to support accurate heading estimation. We have re-examined this issue using visually simulated motion along a curved path towards a layout of random points as the stimulus. Our data show that, in this curvilinear motion paradigm, five of six observers could estimate their heading relatively accurately and precisely (error and uncertainty < approximately 4 deg), even for rotation rates as high as 16 deg/sec, without the benefit of either oculomotor or static depth cues signaling rotation rate. Such performance is inconsistent with models of human self-motion estimation that require rotation information from sources other than the flow field to cancel the rotational flow.

12.
Previous electrophysiological studies in pigeons have shown that the vestibulocerebellum can be divided into two parasagittal zones based on responses to optic flow stimuli. The medial zone responds best to optic flow resulting from self-translation, whereas the lateral zone responds best to optic flow resulting from self-rotation. This information arrives from the retina via a projection from the accessory optic system to the medial column of the inferior olive. In this study we investigated inferior olive projections to translational and rotational zones of the vestibulocerebellum using the retrograde tracer cholera toxin subunit B. Extracellular recordings of Purkinje cell activity (complex spikes) in response to large-field visual stimuli were used to identify the injection sites. We found a distinct segregation of inferior olive cells projecting to translational and rotational zones of the vestibulocerebellum. Translation zone injections resulted in retrogradely labeled cells in the ventrolateral area of the medial column, whereas rotation zone injections resulted in retrogradely labeled cells in the dorsomedial region of the medial column. Motion of any object through space, including self-motion of organisms, can be described with reference to translation and rotation in three-dimensional space. Our results show that, in pigeons, the brainstem visual systems responsible for detecting optic flow are segregated into channels responsible for the analysis of translational and rotational optic flow in the inferior olive, which is only two synapses from the retina.

13.
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects.

14.
What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a control preview. Experiments 2 and 3 showed that this scene preview benefit was not due to the conceptual category of the scene or identification of the target object in the preview. Experiment 4 demonstrated that the scene preview benefit was unaffected by changing the size of the scene from preview to search. Taken together, the results suggest that an abstract (size invariant) visual representation is generated in an initial scene glimpse and that this representation can be retained in memory and used to guide subsequent eye movements.

15.
How does the brain process visual information about self-motion? In monkey cortex, the analysis of visual motion is performed by successive areas specialized in different aspects of motion processing. Whereas neurons in the middle temporal (MT) area are direction-selective for local motion, neurons in the medial superior temporal (MST) area respond to motion patterns. A neural network model attempts to link these properties to the psychophysics of human heading detection from optic flow. It proposes that populations of neurons represent specific directions of heading. We quantitatively compared single-unit recordings in area MST with single-neuron simulations in this model. Predictions were derived from simulations and subsequently tested in recorded neurons. Neuronal activities depended on the position of the singular point in the optic flow. Best responses to opposing motions occurred for opposite locations of the singular point in the visual field. Excitation by one type of motion is paired with inhibition by the opposite motion. Activity maxima often occur for peripheral singular points. The averaged recorded shape of the response modulations is sigmoidal, which is in agreement with model predictions. We also tested whether the activity of the neuronal population in MST can represent the directions of heading in our stimuli. A simple least-mean-square minimization could retrieve the direction of heading from the neuronal activities with a precision of 4.3 degrees. Our results show good agreement between the proposed model and the neuronal responses in area MST and further support the hypothesis that area MST is involved in visual navigation.
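The least-mean-square readout mentioned above can be illustrated as template matching: pick the heading whose predicted population response best matches the observed activity. The sigmoidal tuning curves, unit count, and noise level below are invented for illustration and are not the recorded data:

```python
import numpy as np

def decode_heading(activity, tuning, headings):
    """Least-squares population readout: return the candidate heading whose
    predicted population response (a column of `tuning`) minimises the
    squared error against the observed activity vector.

    tuning:   (n_units, n_headings) predicted response of each unit to each
              candidate heading; activity: (n_units,) observed responses.
    """
    errors = np.sum((tuning - activity[:, None]) ** 2, axis=0)
    return headings[np.argmin(errors)]

# Toy population: sigmoidal tuning over horizontal heading angle (degrees),
# loosely echoing the sigmoidal response modulations reported above
headings = np.linspace(-30.0, 30.0, 121)
centers = np.linspace(-25.0, 25.0, 40)  # one unit per preferred angle
tuning = 1.0 / (1.0 + np.exp(-(headings[None, :] - centers[:, None]) / 5.0))
true_idx = 75  # heading of 7.5 deg
activity = tuning[:, true_idx] + 0.01 * np.random.default_rng(1).normal(size=40)
print(decode_heading(activity, tuning, headings))
```

With modest noise the decoded heading lands within a fraction of a degree of the simulated one; the paper's reported 4.3-degree precision reflects real neuronal variability rather than this toy noise level.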

16.
To understand better the range of conditions supporting stereoscopic vision, we explored the effects of speed, as well as specific optic flow patterns, on judgments of the depth (near or far of fixation) of large targets briefly presented in the upper periphery. The targets had large disparities (1-6 deg) and moved at high speeds (20-100 deg/sec). Motion was either vertical or horizontal, as well as either unidirectional or layered in bands of alternating directions (opponent-motion). High stimulus speeds can extend dmax. The effects are explained by models having linear filters that signal both faster speeds and larger disparities. Stereo depth localization can also be enhanced by opponent-motion even when kinetic depth itself is not apparent. Improvements are greatest with wide-field, horizontal opponent-motion. The results imply that functions such as vection, posture control, and vergence may benefit from disparity information enhanced by optic flow patterns that are commonly available to a moving, binocular observer.

17.
When moving through cluttered environments we use different forms of the same source of information to avoid stationary and moving objects. A stationary obstacle can be avoided by looking at it, registering the differential parallactic displacements on the retina around it during pursuit fixation, and then acting on that information. Such information also specifies one's general heading. A moving obstacle can be avoided by looking at it, registering the displacements reflecting constancy or change in one's gaze-movement angle, and then acting on that information. Such information, however, does not generally specify one's heading. Passing in front of a moving object entails retrograde motion of objects in the deep background; collisions entail the lamellar pattern of optical flow; and passing behind entails more nearly uniform flow against one's direction of motion. Accuracy in the laboratory compares favorably with that of real-world necessities.

18.
The influence of stereoscopic vision on the perception of optic flow fields was investigated in experiments based on a recently described illusion. In this illusion, subjects perceive a shift of the center of an expanding optic flow field when it is transparently superimposed by a unidirectional motion pattern. This illusory shift can be explained by the visual system taking the presented flow pattern as a certain self-motion flow field. Here we examined the dependence of the illusory transformation on differences in depth between the two superimposed motion patterns. Presenting them with different relative binocular disparities, we found a strong variation in the magnitude of the illusory shift. Especially when the translation was in front of the expansion, a highly significant decrease of the illusory shift occurred, down to 25% of its magnitude at zero disparity. These findings confirm the assumption that the motion pattern is interpreted as a self-motion flow field. In a further experiment we presented monocular depth cues by changing dot size and dot density. This caused a reduction of the illusory shift that was distinctly smaller than under stereoscopic presentation. We conclude that the illusory optic flow transformation is modified by depth information, especially by binocular disparity. The findings are linked to the phenomenon of induced motion and are related to neurophysiology.

19.
Although considerable progress has been made in understanding how adults perceive their direction of self-motion, or heading, from optic flow, little is known about how these perceptual processes develop in infants. In 3 experiments, the authors explored how well 3- to 6-month-old infants could discriminate between optic flow patterns that simulated changes in heading direction. The results suggest that (a) prior to the onset of locomotion, the majority of infants discriminate between optic flow displays that simulate only large (> 22°) changes in heading, (b) there is minimal development in sensitivity between 3 and 6 months, and (c) optic flow alone is sufficient for infants to discriminate heading. These data suggest that spatial abilities associated with the dorsal visual stream undergo prolonged postnatal development and may depend on locomotor experience.

20.
Many cells in the dorsal part of the medial superior temporal (MST) region of visual cortex respond selectively to specific combinations of expansion/contraction, translation, and rotation motions. Previous investigators have suggested that these cells may respond selectively to the flow fields generated by self-motion of an observer. These patterns can also be generated by the relative motion between an observer and a particular object. We explored a neurally constrained model based on the hypothesis that neurons in MST partially segment the motion fields generated by several independently moving objects. Inputs to the model were generated from sequences of ray-traced images that simulated realistic motion situations, combining observer motion, eye movements, and independent object motions. The input representation was based on the response properties of neurons in the middle temporal area (MT), which provides the primary input to area MST. After applying an unsupervised optimization technique, the units became tuned to patterns signaling coherent motion, matching many of the known properties of MST cells. The results of this model are consistent with recent studies indicating that MST cells primarily encode information concerning the relative three-dimensional motion between objects and the observer.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)