Similar Literature
20 similar documents found (search time: 46 ms)
1.
The ability to judge heading during tracking eye movements has recently been examined by several investigators. To assess the use of retinal-image and extra-retinal information in this task, the previous work has compared heading judgments with executed as opposed to simulated eye movements. For eye movement velocities greater than 1 deg/sec, observers seem to require the eye-velocity information provided by extra-retinal signals that accompany tracking eye movements. When those signals are not provided, such as with simulated eye movements, observers perceive their self-motion as curvilinear translation rather than the linear translation plus eye rotation being presented. The interpretation of the previous results is complicated, however, by the fact that the simulated eye movement condition may have created a conflict between two possible estimates of the heading: one based on extra-retinal solutions and the other based on retinal-image solutions. In four experiments, we minimized this potential conflict by having observers judge heading in the presence of rotations consisting of mixtures of executed and simulated eye movements. The results showed that the heading is estimated more accurately when rotational flow is created by executed eye movements alone. In addition, the magnitude of errors in heading estimates is essentially proportional to the amount of rotational flow created by a simulated eye rotation (independent of the total magnitude of the rotational flow). The fact that error magnitude is proportional to the amount of simulated rotation suggests that the visual system attributes rotational flow unaccompanied by an eye movement to a displacement of the direction of translation in the direction of the simulated eye rotation.
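The translational and rotational flow components at issue here can be written down with the standard instantaneous motion-field equations: translational flow scales with inverse depth, while rotational flow does not. A minimal sketch (focal length 1; function and variable names are ours, not the authors'):

```python
def retinal_flow(x, y, Z, T, omega):
    """Standard motion-field equations at image point (x, y), focal length 1.
    Translational flow scales with 1/Z; rotational flow is depth-independent."""
    Tx, Ty, Tz = T          # observer translation
    wx, wy, wz = omega      # eye rotation
    u = (-Tx + x * Tz) / Z + (x * y * wx - (1 + x ** 2) * wy + y * wz)
    v = (-Ty + y * Tz) / Z + ((1 + y ** 2) * wx - x * y * wy - x * wz)
    return u, v

# Pure forward translation: the image of the heading point (0, 0) does not move
print(retinal_flow(0.0, 0.0, 2.0, (0, 0, 1), (0, 0, 0)))  # (0.0, 0.0)
```

Because the rotational terms carry no 1/Z factor, flow produced by a simulated eye rotation is point-for-point identical to flow produced by a real one; disambiguating the two must rely on depth variation or on extra-retinal signals, which is the comparison these experiments exploit.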

2.
When we make a smooth eye movement to track a moving object, the visual system must take the eye's movement into account in order to estimate the object's velocity relative to the head. This can be done by using extra-retinal signals to estimate eye velocity and then subtracting expected from observed retinal motion. Two familiar illusions of perceived velocity, the Filehne illusion and the Aubert-Fleischl phenomenon, are thought to be the consequence of the extra-retinal signal underestimating eye velocity. These explanations assume that retinal motion is encoded accurately, which is questionable because perceived retinal speed is strongly affected by several stimulus properties. We develop and test a model of head-centric velocity perception that incorporates errors in estimating eye velocity and in retinal-motion sensing. The model predicts that the magnitude and direction of the Filehne illusion and Aubert-Fleischl phenomenon depend on spatial frequency, and this prediction is confirmed experimentally.
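The subtraction account described above can be sketched as a weighted sum of retinal and extra-retinal velocity estimates. The gain values below are illustrative, not fitted to any data:

```python
def perceived_head_velocity(retinal_v, eye_v, retinal_gain=1.0, eye_gain=0.8):
    """Linear subtraction model: perceived head-centric velocity is the
    retinal-motion estimate plus the extra-retinal eye-velocity estimate.
    An eye_gain below retinal_gain means eye velocity is underestimated."""
    return retinal_gain * retinal_v + eye_gain * eye_v

eye_v = 10.0  # deg/s of smooth pursuit
# Filehne illusion: a stationary background (retinal motion -10 deg/s)
# appears to drift opposite to the pursuit direction
print(perceived_head_velocity(-eye_v, eye_v))  # -2.0
# Aubert-Fleischl phenomenon: the pursued target (zero retinal motion)
# appears slower than its true 10 deg/s
print(perceived_head_velocity(0.0, eye_v))     # 8.0
```

The paper's point is that retinal_gain itself varies with stimulus properties such as spatial frequency, which changes the size and even the direction of both illusions.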

3.
In eight experiments, we examined the ability to judge heading during tracking eye movements. To assess the use of retinal-image and extra-retinal information in this task, we compared heading judgments with executed as opposed to simulated eye movements. In general, judgments were much more accurate during executed eye movements. Observers in the simulated eye movement condition misperceived their self-motion as curvilinear translation rather than the linear translation plus eye rotation that was simulated. There were some experimental conditions in which observers could judge heading reasonably accurately during simulated eye movements; these included conditions in which eye movement velocities were 1 deg/sec or less and conditions which made available a horizon cue that exists for locomotion parallel to a ground plane with a visible horizon. Overall, our results imply that extra-retinal, eye-velocity signals are used in determining heading under many, perhaps most, viewing conditions.

4.
Recent studies have suggested that humans cannot estimate their direction of forward translation (heading) from the resulting retinal motion (flow field) alone when rotation rates are higher than approximately 1 deg/sec. It has been argued that either oculomotor or static depth cues are necessary to disambiguate the rotational and translational components of the flow field and, thus, to support accurate heading estimation. We have re-examined this issue using visually simulated motion along a curved path towards a layout of random points as the stimulus. Our data show that, in this curvilinear motion paradigm, five of six observers could estimate their heading relatively accurately and precisely (error and uncertainty < approximately 4 deg), even for rotation rates as high as 16 deg/sec, without the benefit of either oculomotor or static depth cues signaling rotation rate. Such performance is inconsistent with models of human self-motion estimation that require rotation information from sources other than the flow field to cancel the rotational flow.

5.
Human observers cannot judge heading accurately in the presence of simulated gaze rotations under many conditions [Royden et al. (1994). Vision Research, 34, 3197-3214]. They make errors in the direction of rotation with magnitudes proportional to the rotation rate. Two hypotheses have been advanced to explain this phenomenon. The extra-retinal-signal hypothesis states that the observer's estimate of gaze rotation is always based on an extra-retinal signal such as an efference copy. In the absence of such a signal, the observer assumes that no rotation has taken place and responds accordingly. The retinal-image hypothesis states that visual input dominates when the extra-retinal signal is small or absent; under this hypothesis, errors with simulated rotations are the consequence of faulty visual mechanisms. Perrone and Stone [(1994). Vision Research, 34, 2917-2938] proposed a model that purports to account for these errors using retinal-image information (optic flow) alone; its assumptions make it inefficient under some conditions. The most important of these assumptions is that the fixated target is stationary with respect to the world (the gaze-stabilization constraint). I compared the model's performance to human data from two experiments of Royden et al. [(1994). Vision Research, 34, 3197-3214]. One experiment simulated translation while tracking a target attached to the scene (gaze-stabilized), while the other simulated translation while tracking a target that was not attached (gaze-unstabilized). The incorporation of the gaze-stabilization constraint leads to a predicted asymmetry for the errors in the gaze-unstabilized experiment that is not observed in human data. I conclude that the model as it stands is not consistent with human behavior. It is possible, however, that the predicted asymmetry is masked in human data by a counteracting asymmetry in a hypothetical processing stage subsequent to the heading estimation that extrapolates the observer's future path of self-motion.

6.
When presented with random-dot displays with little depth information, observers cannot determine their direction of self-motion accurately in the presence of rotational flow without appropriate extra-retinal information (Royden CS et al. Vis Res 1994;34:3197-3214). On theoretical grounds, one might expect improved performance when depth information is added to the display (van den Berg AV and Brenner E. Nature 1994;371:700-702). We examined this possibility by having observers indicate perceived self-motion paths when the amount of depth information was varied. When stereoscopic cues and a variety of monocular depth cues were added, observers still misperceived the depicted self-motion when the rotational flow in the display was not accompanied by an appropriate extra-retinal, eye-velocity signal. Specifically, they perceived curved self-motion paths with the curvature in the direction of the simulated eye rotation. The distance to the response marker was crucial to the objective measurement of this misperception. When the marker distance was small, the observers' settings were reasonably accurate despite the misperception of the depicted self-motion. When the marker distance was large, the settings exhibited the errors reported previously by Royden CS et al. Vis Res 1994;34:3197-3214. The path judgement errors observers make during simulated gaze rotations appear to be the result of misattributing path-independent rotation to self-motion along a circular path with path-dependent rotation. An analysis of the information an observer could use to avoid such errors reveals that the addition of depth information is of little use.

7.
What visual information do we use to guide movement through our environment? Self-movement produces a pattern of motion on the retina, called optic flow. During translation, the direction of movement (locomotor direction) is specified by the point in the flow field from which the motion vectors radiate - the focus of expansion (FoE) [1-3]. If an eye movement is made, however, the FoE no longer specifies locomotor direction [4], but the 'heading' direction can still be judged accurately [5]. Models have been proposed that remove confounding rotational motion due to eye movements by decomposing the retinal flow into its separable translational and rotational components ([6-7] are early examples). An alternative theory is based upon the use of invariants in the retinal flow field [8]. The assumption underpinning all these models (see also [9-11]), and associated psychophysical [5,12,13] and neurophysiological studies [14-16], is that locomotive heading is guided by optic flow. In this paper we challenge that assumption for the control of direction of locomotion on foot. Here we have explored the role of perceived location by recording the walking trajectories of people wearing displacing prism glasses. The results suggest that perceived location, rather than optic or retinal flow, is the predominant cue that guides locomotion on foot.
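For pure translation, the FoE can be recovered as the least-squares intersection of the lines carrying the flow vectors, since every vector points radially away from it. A hypothetical sketch (the point layout and FoE location are made up):

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares point closest to every line through an image point
    along its flow vector; for pure translation this is the FoE."""
    A, b = [], []
    for (px, py), (vx, vy) in zip(points, flows):
        n = (-vy, vx)                       # normal to the flow direction
        A.append(n)
        b.append(n[0] * px + n[1] * py)     # line constraint: n . FoE = n . p
    foe, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return foe

# Synthetic radial flow expanding about a hypothetical FoE at (3, -2)
pts = np.array([[0, 0], [10, 5], [-4, 7], [6, -9]], float)
flows = 0.5 * (pts - np.array([3.0, -2.0]))
print(focus_of_expansion(pts, flows))   # ≈ [ 3. -2.]
```

An added eye rotation breaks this radial structure, which is exactly why the FoE no longer specifies locomotor direction during pursuit.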

8.
Many cells in the dorsal part of the medial superior temporal (MST) region of visual cortex respond selectively to specific combinations of expansion/contraction, translation, and rotation motions. Previous investigators have suggested that these cells may respond selectively to the flow fields generated by self-motion of an observer. These patterns can also be generated by the relative motion between an observer and a particular object. We explored a neurally constrained model based on the hypothesis that neurons in MST partially segment the motion fields generated by several independently moving objects. Inputs to the model were generated from sequences of ray-traced images that simulated realistic motion situations, combining observer motion, eye movements, and independent object motions. The input representation was based on the response properties of neurons in the middle temporal area (MT), which provides the primary input to area MST. After applying an unsupervised optimization technique, the units became tuned to patterns signaling coherent motion, matching many of the known properties of MST cells. The results of this model are consistent with recent studies indicating that MST cells primarily encode information concerning the relative three-dimensional motion between objects and the observer.

9.
PURPOSE: This study was designed to assess the correlation between flow velocity, the resistive index and visual field defects in patients with chronic open angle glaucoma in comparison with a nonglaucomatous control population. METHODS: Color Doppler imaging was used to study flow velocity in the central retinal artery and short posterior ciliary arteries of 76 patients with chronic open angle glaucoma and 28 normal subjects. Velocity and resistive index were correlated with visual field abnormalities. RESULTS: The chronic open angle glaucoma patients showed a statistically significant lowering of the end diastolic velocity and a raised resistive index in all vessels studied. The end diastolic velocity of the central retinal arteries of glaucoma patients was significantly correlated with the Mean Deviation of the visual field (right eye, p = 0.0041; left eye, p = 0.0167). The glaucoma patients showed a statistically significant lowering of the end diastolic velocity and a raised resistive index in all vessels supplying those parts of the optic disc that corresponded to visual hemifield defects. CONCLUSION: Open angle glaucoma is associated with changes in central retinal and ciliary artery flow velocity and resistive index which suggest a compromised circulation in this region.

10.
Cells in the dorsal division of the medial superior temporal area (MSTd) have large receptive fields and respond to expansion/contraction, rotation, and translation motions. These same motions are generated as we move through the environment, leading investigators to suggest that area MSTd analyzes the optical flow. One influential idea suggests that navigation is achieved by decomposing the optical flow into the separate and discrete channels mentioned above, that is, expansion/contraction, rotation, and translation. We directly tested whether MSTd neurons perform such a decomposition by examining whether there are cells that are preferentially tuned to intermediate spiral motions, which combine both expansion/contraction and rotation components. The finding that many cells in MSTd are preferentially selective for spiral motions indicates that this simple three-channel decomposition hypothesis for MSTd does not appear to be correct. Instead, there is a continuum of patterns to which MSTd cells are selective. In addition, we find that MSTd cells maintain their selectivity when stimuli are moved to different locations in their large receptive fields. This position invariance indicates that MSTd cells selective for expansion cannot give precise information about the retinal location of the focus of expansion. Thus, individual MSTd neurons cannot code, in a precise fashion, the direction of heading by using the location of the focus of expansion. The only way this navigational information could be accurately derived from MSTd is through the use of a coarse, population encoding. Positional invariance and selectivity for a wide array of stimuli suggest that MSTd neurons encode patterns of motion per se, regardless of whether these motions are generated by moving objects or by motion induced by observer locomotion.
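The spiral-motion continuum probed here can be parameterized by a single angle that mixes expansion and rotation components. A minimal sketch of such stimulus generation (names are ours):

```python
import numpy as np

def spiral_flow(points, spiral_angle):
    """Flow on the expansion/rotation continuum: angle 0 gives pure
    expansion, pi/2 pure counterclockwise rotation; intermediate angles
    give the spiral motions MSTd cells were tested with."""
    radial = points                                        # outward from center
    circular = np.stack([-points[:, 1], points[:, 0]], axis=1)
    return np.cos(spiral_angle) * radial + np.sin(spiral_angle) * circular

pts = np.array([[1.0, 0.0], [0.0, 2.0], [-1.0, -1.0]])
print(spiral_flow(pts, 0.0))         # pure expansion: flow parallel to position
print(spiral_flow(pts, np.pi / 2))   # pure rotation: flow orthogonal to position
```

The finding that many cells prefer intermediate values of spiral_angle is what rules out the discrete three-channel decomposition.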

11.
This report describes the dynamics of the horizontal optokinetic response of the goldfish, and compares them with those of other species. Eye rotational velocity in response to step and sinusoidal rotations of the visual surround was tested using goldfish that had both eyes free to view the surround and to rotate with it. The step response was tested by switching on a visual surround display that was rotating at constant velocity, and then switching off the display, leaving the goldfish in the dark. The step-onset response was characterized by rapid and gradual components; the latter rose with an almost linear trajectory for higher surround velocities. The response was more rapid at step-offset than at step-onset. The step-offset response overshot baseline eye velocity for most goldfish and was oscillatory for the others. The steady-state response increased with constant velocity surround rotation within the range +/- 40 deg/sec but saturated outside that range. Steady-state response gain was higher for nasally-directed than for temporally-directed surround rotations. The frequency response was essentially low-pass, with gain decreasing from about 0.9 and phase lag increasing from zero to 90 deg as surround rotational frequency increased from 0.01 to 3.0 Hz. Sinusoidal response gain decreased as a function of surround peak acceleration. The results indicate that the horizontal optokinetic response of the goldfish is nonlinear and resembles in many respects that of mammals. Models developed to simulate the dynamics of the optokinetic response of mammals can be applied to that of goldfish and reproduce its nonlinear features.
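The reported frequency response (gain falling from about 0.9, phase lag rising toward 90 deg between 0.01 and 3 Hz) is the signature of a first-order low-pass system. A sketch with an assumed corner frequency (0.3 Hz is our illustrative guess, not a value fitted to the goldfish data):

```python
import cmath
import math

def okr_response(freq_hz, corner_hz=0.3, dc_gain=0.9):
    """Gain and phase lag (deg) of a first-order low-pass H(f) = g / (1 + j f/fc)."""
    h = dc_gain / (1 + 1j * freq_hz / corner_hz)
    return abs(h), -math.degrees(cmath.phase(h))

print(okr_response(0.01))  # gain ~0.9, lag ~2 deg
print(okr_response(3.0))   # gain ~0.09, lag ~84 deg
```

The nonlinearities the abstract emphasizes (gain saturation above +/- 40 deg/sec, nasal/temporal asymmetry) are precisely what such a linear sketch cannot capture.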

12.
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3-dimensional virtual reality environment to determine the position of objects on the basis of motion discontinuities and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles so that the goal acts as an attractor of heading and obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas middle temporal, medial superior temporal, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually guided steering, obstacle avoidance, and route selection.
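The attractor/repeller heading dynamics can be caricatured with a first-order differential equation in which the goal pulls heading toward it and each obstacle pushes heading away with strength that decays with angular offset and distance. All constants below are illustrative, not taken from the model:

```python
import math

def heading_rate(heading, goal, obstacles, k_g=2.0, k_o=4.0, c1=1.0, c2=0.4):
    """d(heading)/dt: the goal is an attractor, obstacles are repellers whose
    influence decays with angular offset (c1) and with distance (c2)."""
    rate = -k_g * (heading - goal)
    for obs_angle, obs_dist in obstacles:
        offset = heading - obs_angle
        rate += k_o * offset * math.exp(-c1 * abs(offset)) * math.exp(-c2 * obs_dist)
    return rate

# Euler-integrate: heading settles near the goal, biased away from the obstacle
heading, dt = 0.0, 0.01
for _ in range(2000):
    heading += dt * heading_rate(heading, goal=0.5, obstacles=[(0.3, 4.0)])
print(round(heading, 3))  # settles just beyond the goal, away from the obstacle
```

This is the behavioral-dynamics flavor of steering control; the paper's own model additionally grounds the goal and obstacle representations in optic-flow processing.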

13.
This article describes a self-organizing neural network architecture that transforms optic flow and eye position information into representations of heading, scene depth, and moving object locations. These representations are used to navigate reactively in simulations involving obstacle avoidance and pursuit of a moving target. The network's weights are trained during an action-perception cycle in which self-generated eye and body movements produce optic flow information, thus allowing the network to tune itself without requiring explicit knowledge of sensor geometry. The confounding effect of eye movement during translation is suppressed by learning the relationship between eye movement outflow commands and the optic flow signals that they induce. The remaining optic flow field is due to only observer translation and independent motion of objects in the scene. A self-organizing feature map categorizes normalized translational flow patterns, thereby creating a map of cells that code heading directions. Heading information is then recombined with translational flow patterns in two different ways to form maps of scene depth and moving object locations. Most of the learning processes take place concurrently and evolve through unsupervised learning. Mapping the learned heading representations onto heading labels or motor commands requires additional structure. Simulations of the network verify its performance using both noise-free and noisy optic flow information.

14.
The progressive frontalization of the eyes in mammals causes the left and right visual fields to overlap, producing a region of binocular field with single vision and stereopsis. The horizontal separation of the eyes makes the retinal images of objects lying in this binocular field differ slightly in horizontal and vertical position; these differences are termed disparities. Horizontal disparities are the main cue for stereopsis. In the past decades, numerous physiological studies on monkeys, whose visual system is in many respects similar to that of humans, have shown that a population of visual cells is capable of encoding the amplitude and sign of horizontal disparity. Such disparity detectors were found in cortical visual areas V1, V2, V3, V3A, VP, MT (V5) and MST of monkeys and in the superior colliculus of the cat and opossum. According to their disparity tuning functions, these cells were first grouped into tuned excitatory, tuned inhibitory, near and far sub-groups. Subsequent studies added two more categories, tuned near and tuned far cells. Asymmetries between left and right receptive field position, on and off regions, and intra-receptive field wiring are believed to be the neural mechanisms of disparity detection. Because horizontal disparity alone is insufficient to compute reliable stereopsis, additional information about fixation distance and angle of gaze is required. Thus, while there is unequivocal evidence of cells capable of detecting horizontal disparities, it is not known how horizontal disparity is calibrated. Sensitivity to vertical disparity and information about the vergence angle or eye position may be the source of this additional information.
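Horizontal disparity can be written as the difference between the angles a point subtends at the two eyes, each referenced to the fixation point. A toy geometry sketch (64 mm interocular distance assumed; the sign convention is ours):

```python
import math

def horizontal_disparity_deg(point, fixation=(0.0, 1.0), ipd=0.064):
    """Disparity (deg) = difference between the left- and right-eye angular
    offsets of the point relative to fixation. Eyes sit at (+/- ipd/2, 0);
    coordinates are (lateral, depth) in meters."""
    def rel_angle(eye_x):
        a_point = math.atan2(point[0] - eye_x, point[1])
        a_fix = math.atan2(fixation[0] - eye_x, fixation[1])
        return a_point - a_fix
    return math.degrees(rel_angle(-ipd / 2) - rel_angle(ipd / 2))

print(horizontal_disparity_deg((0.0, 1.0)))  # 0.0: the fixated point has zero disparity
print(horizontal_disparity_deg((0.0, 0.5)))  # nearer than fixation: one sign
print(horizontal_disparity_deg((0.0, 2.0)))  # farther than fixation: opposite sign
```

The calibration problem the abstract raises is visible here: the same disparity value maps to different depths depending on the fixation distance, so vergence or eye-position information is needed to interpret it.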

15.
During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled through use of a motorized chair that rotated the participant around his or her vertical axis. Chair rotation was made appropriate for the steering response of the participant or made inappropriate by rotating a proportion of the veridical amount. Large steering errors resulted from selective manipulation of retinal flow and gaze angle, and the pattern of errors provided strong evidence for an additive model of combination. Vestibular information had little or no effect on steering performance, suggesting that vestibular signals are not integrated with visual information for the control of steering at these speeds.

16.
Experimental data have shown that vibratory roller compactors often exhibit rotational kinematics in addition to translation during operation. This rotation is not considered in roller-integrated measurement systems that estimate soil stiffness based on drum vibration. To model and explore the effect of rotation, a lumped parameter roller/soil model was developed. The machine parameters for this model were tuned from suspended drum testing that isolated the drum from the ground. The model was then verified using field data collected over a range of excitation frequencies on spatially homogenous soil, and over transversely heterogeneous soil using one excitation frequency. Rotational motion was found to significantly influence roller-integrated measurement of soil stiffness based on single position drum vibration data. Rotational motion causes single position measurement system results to be nonunique and to vary depending on the direction of roller travel. Using the model, various alternative measurement schemes were investigated. The directional dependence was eliminated by deriving a measurement at the drum’s center of gravity, and dual-sided measurement is proposed to gain a measure of heterogeneity. A more theoretical approach was also created wherein the contact force between the drum and soil is measured rather than being calculated.
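The roller-integrated stiffness measurement being discussed can be caricatured with the usual force balance for a lumped drum/soil model: estimate the drum-soil contact force from the eccentric excitation force and the measured drum acceleration, then divide by drum displacement. This is a sketch only; the sign convention and all numbers are illustrative, not the paper's:

```python
def soil_stiffness(ecc_force, drum_mass, drum_accel, drum_disp):
    """Lumped-parameter estimate at the instant of peak drum displacement:
    F_contact = F_eccentric - m_drum * a_drum,  k_soil = F_contact / z_drum."""
    contact_force = ecc_force - drum_mass * drum_accel
    return contact_force / drum_disp

# Illustrative numbers only
k = soil_stiffness(ecc_force=100e3, drum_mass=2000.0,
                   drum_accel=-15.0, drum_disp=0.0020)
print(k)  # ≈ 65 MN/m
```

The paper's point is that drum rotation makes such a single-position estimate nonunique; measuring at the drum's center of gravity removes the directional dependence.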

17.
BACKGROUND: Although mounting evidence supports the idea that smooth pursuit abnormality marks the genetic liability to schizophrenia, the precise ocular motor mechanism underlying the abnormality remains unknown. Based on recent findings in schizophrenia, we hypothesize that subtle deficits in the ability to hold online and/or use extraretinal motion information underlie the pursuit abnormality in vulnerable individuals. METHODS: The hypothesis was tested in 69 first-degree, biological relatives of probands with schizophrenia; 26 relatives had schizophrenia spectrum personalities (SSP). Subjects recruited from the community (n=71; 29 with SSP), without a known family history of psychosis, constituted the comparison groups. The traditional smooth pursuit gain measure, which is a ratio of smooth pursuit eye velocity in response to both retinal and extraretinal motion signals and the target velocity, was obtained. In addition, newly developed measures of predictive smooth pursuit (ie, in the presence of only extraretinal motion signals) were obtained. The latter measures were evaluated after the current retinal motion signals were made unavailable by briefly making the target invisible. RESULTS: Relatives, particularly those with SSP, showed significantly poorer predictive pursuit response to extraretinal motion signals (F(2,136)=6.51, P<.005), compared with the community subjects. However, the traditional smooth pursuit gain in response to both retinal and extraretinal motion signals was not different between groups. CONCLUSIONS: These results suggest that relatives of patients with schizophrenia, particularly those with SSP, have specific deficits in predictive pursuit based on only extraretinal motion signals. Normal smooth pursuit gain in response to both retinal and extraretinal motion signals is likely due to compensation based on retinal motion information. The latter suggests normal retinal motion processing and smooth pursuit motor output.
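Both measures described above reduce to the same ratio: smooth (de-saccaded) eye velocity over target velocity, computed either with the target visible (traditional gain) or during target blanking, when only extraretinal signals can drive pursuit (predictive gain). A minimal sketch with made-up velocities:

```python
def pursuit_gain(eye_velocity, target_velocity):
    """Smooth pursuit gain: smooth (de-saccaded) eye velocity / target velocity."""
    return eye_velocity / target_velocity

# Visible target: near-unity gain.  Blanked target (predictive pursuit):
# eye velocity is maintained only from extraretinal signals, so gain drops.
print(pursuit_gain(14.4, 15.0))  # ≈ 0.96
print(pursuit_gain(9.0, 15.0))   # 0.6
```

The study's finding is that the second kind of ratio, not the first, separates relatives of patients from controls.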

18.
The aim of this work was to study the effect of eye position on the activity of neurons of area PO (V6), a cortical region located in the most posterior part of the superior parietal lobule. Experiments were carried out on three awake macaque monkeys. Animals sat in a primate chair in front of a large screen, and fixated a small spot of light projected in different screen locations while the activity of single neurons was extracellularly recorded. Both visual and non-visual neurons were found. About 48% of visual and 32% of non-visual neurons showed eye position-related activity in total darkness, while in approximately 61% of visual neurons the response was modulated by eye position in the orbit. Eye position fields and/or gain fields were different from cell to cell, going from large and quite planar fields up to peak-shaped fields localized in more or less restricted regions of the animal's field of view. The spatial distribution of fixation point locations evoking peak activity in the eye position-sensitive population did not show any evident laterality effect, or significant top/bottom asymmetry. Moreover, the cortical distribution of eye position-sensitive neurons was quite uniform all over the cortical region studied, suggesting the absence of segregation for this property within area PO (V6). In the great majority of visual neurons, the receptive field 'moved' with gaze according to eye displacements, remaining at the same retinotopic coordinates, as is usual for visual neurons. In some cases, the receptive field did not move with gaze, remaining anchored to the same spatial location regardless of eye movements ('real-position cells'). A model is proposed suggesting how eye position-sensitive visual neurons might build up real-position cells in local networks within area PO (V6). The presence in area PO (V6) of real-position cells together with a high percentage of eye position-sensitive neurons, most of them visual in nature, suggests that this cortical area is engaged in the spatial encoding of extrapersonal visual space. Since lesions of the superior parietal lobule in humans produce deficits in visual localization of targets as well as in arm-reaching for them, and taking into account that the monkey's area PO (V6) is reported to be connected with the premotor area 6, we suggest that area PO (V6) supplies the premotor cortex with the visuo-spatial information required for the visual control of arm-reaching movements.

19.
Step-ramp target motion evokes a characteristic sequence of presaccadic smooth eye movement in the direction of the target ramp, catch-up saccades to bring eye position close to the position of the moving target, and postsaccadic eye velocities that nearly match target velocity. I have analyzed this sequence of eye movements in monkeys to reveal a strong postsaccadic enhancement of pursuit eye velocity and to document the conditions that lead to that enhancement. Smooth eye velocity was measured in the last 10 ms before and the first 10 ms after the first saccade evoked by step-ramp target motion. Plots of eye velocity as a function of time after the onset of the target ramp revealed that eye velocity at a given time was much higher if measured after versus before the saccade. Postsaccadic enhancement of pursuit was recorded consistently when the target stepped 3 degrees eccentric on the horizontal axis and moved upward, downward, or away from the position of fixation. To determine whether postsaccadic enhancement of pursuit was invoked by smear of the visual scene during a saccade, I recorded the effect of simulated saccades on the presaccadic eye velocity for step-ramp target motion. The 3 degrees simulated saccade, which consisted of motion of a textured background at 150 degrees/s for 20 ms, failed to cause any enhancement of presaccadic eye velocity. By using a strategically selected set of oblique target steps with horizontal ramp target motion, I found clear enhancement for saccades in all directions, even those that were orthogonal to target motion. When the size of the target step was varied by up to 15 degrees along the horizontal meridian, postsaccadic eye velocity did not depend strongly either on the initial target position or on whether the target moved toward or away from the position of fixation. In contrast, earlier studies and data in this paper show that presaccadic eye velocity is much stronger when the target is close to the center of the visual field and when the target moves toward versus away from the position of fixation. I suggest that postsaccadic enhancement of pursuit reflects activation, by saccades, of a switch that regulates the strength of transmission through the visual-motor pathways for pursuit. Targets can cause strong visual motion signals but still evoke low presaccadic eye velocities if they are ineffective at activating the pursuit system.

20.
1. In young kittens, cortical neurones, which are usually binocularly driven, have their binocularity reduced if one eye is covered, or if the eyes are made strabismic or alternately occluded. Some of the factors causing these changes were analysed. 2. If the contrast of one retinal image is abolished with no difference in mean illumination, the input from that eye is virtually lost. 3. If one eye merely has its mean retinal illumination attenuated, that eye does not specifically lose its influence in the cortex, although there is a reduction in the proportion of binocular units. This change might partly be due to a difference in the timing of signals from the two eyes but is more likely to be caused by a difference in the strength of the discharges. 4. There is little change in binocularity if one image is dimmed but contrast is absent from both. 5. If contours of very different orientation fall simultaneously on corresponding retinal regions, binocularity breaks down, as in the case of strabismus or when different patterns are presented to the two eyes. But as long as the patterns on corresponding retinal points have similar orientation, even if the visual axes are misaligned, binocularity can be maintained. 6. If the eyes are not stimulated simultaneously, binocularity is reduced, even if the contours falling on the two retinae (at different times) are identical. 7. Roughly simultaneous stimulation, with roughly congruent patterns on the two receptive fields, are needed for the upkeep of binocular connexions on to cortical cells.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号