Similar Articles
20 similar articles found (search time: 31 ms)
1.
The ability to judge heading during tracking eye movements has recently been examined by several investigators. To assess the use of retinal-image and extra-retinal information in this task, the previous work has compared heading judgments with executed as opposed to simulated eye movements. For eye movement velocities greater than 1 deg/sec, observers seem to require the eye-velocity information provided by extra-retinal signals that accompany tracking eye movements. When those signals are not provided, such as with simulated eye movements, observers perceive their self-motion as curvilinear translation rather than the linear translation plus eye rotation being presented. The interpretation of the previous results is complicated, however, by the fact that the simulated eye movement condition may have created a conflict between two possible estimates of the heading: one based on extra-retinal solutions and the other based on retinal-image solutions. In four experiments, we minimized this potential conflict by having observers judge heading in the presence of rotations consisting of mixtures of executed and simulated eye movements. The results showed that the heading is estimated more accurately when rotational flow is created by executed eye movements alone. In addition, the magnitude of errors in heading estimates is essentially proportional to the amount of rotational flow created by a simulated eye rotation (independent of the total magnitude of the rotational flow). The fact that error magnitude is proportional to the amount of simulated rotation suggests that the visual system attributes rotational flow unaccompanied by an eye movement to a displacement of the direction of translation in the direction of the simulated eye rotation.

2.
Eye or head rotation would influence perceived heading direction if it were coded by cells tuned only to retinal flow patterns that correspond to linear self-movement. We propose a model for heading detection based on motion templates that are also Gaussian-tuned to the amount of rotational flow. Such retinal flow templates allow explicit use of extra-retinal signals to create templates tuned to head-centric flow as seen by the stationary eye. Our model predicts an intermediate layer of 'eye velocity gain fields' in which 'rate-coded' eye velocity is multiplied with responses of templates sensitive to specific retinal flow patterns. By combining the activities of one retinal flow template and many units with an eye velocity gain field, a new type of unit appears: its preferred retinal flow changes dynamically in accordance with the eye rotation velocity. This unit's activity thereby becomes approximately invariant to the amount of eye rotation. The units with eye velocity gain fields form the motion analogue of the units with eye position gain fields found in area 7a, which, according to our general approach, are needed to transform position from retino-centric to head-centric coordinates. The rotation-tuned templates can also provide rate-coded visual estimates of eye rotation to allow a purely visual compensation for rotational flow. Our model is consistent with psychophysical data that indicate a role for extra-retinal as well as visual rotation signals in the correct perception of heading.
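The multiplicative gain-field stage described above can be illustrated numerically. The following is a minimal one-dimensional sketch, not the authors' implementation: rotational flow is approximated as a uniform shift of the flow pattern, and all tuning widths, grid sizes, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=20)                    # head-centric (translation-only) flow pattern
eye_speeds = np.linspace(-3.0, 3.0, 61)    # preferred eye velocities of the gain-field layer

def template_response(retinal_flow, preferred_flow, sigma=2.0):
    # Gaussian-tuned retinal-flow template.
    d2 = np.sum((retinal_flow - preferred_flow) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def head_centric_response(retinal_flow, eye_velocity, sigma_eye=0.5):
    # Pool many gain-field units: each prefers the head-centric pattern T plus
    # the (approximately uniform) rotational flow for its own preferred eye
    # speed, and its visual response is multiplied by a Gaussian gain on the
    # rate-coded eye-velocity signal.
    total = 0.0
    for v in eye_speeds:
        visual = template_response(retinal_flow, T + v)
        gain = np.exp(-((eye_velocity - v) ** 2) / (2.0 * sigma_eye ** 2))
        total += visual * gain
    return total

# The pooled response is approximately invariant to the eye rotation:
r_fix = head_centric_response(T + 0.0, 0.0)    # fixating eye
r_purs = head_centric_response(T + 2.0, 2.0)   # pursuit at 2 deg/sec
```

Because the extra-retinal gain selects exactly those templates whose preferred rotational flow matches the current eye movement, `r_fix` and `r_purs` come out nearly equal despite very different retinal input.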

3.
In a series of 6 experiments, two hypotheses were tested: that nominal heading perception is determined by the relative motion of images of objects positioned at different depths (R. F. Wang and J. E. Cutting, 1999) and that static depth information contributes to this determination. By manipulating static depth information while holding retinal-image motion constant during simulated self-movement, the authors found that static depth information played a role in determining perceived heading. Some support was also found for the involvement of R. F. Wang and J. E. Cutting's (1999) categories of object-image relative motion in determining perceived heading. However, results suggested an unexpected functional dominance of information about heading relative to apparently near objects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
When presented with random-dot displays with little depth information, observers cannot determine their direction of self-motion accurately in the presence of rotational flow without appropriate extra-retinal information (Royden CS et al., Vis Res 1994;34:3197–3214). On theoretical grounds, one might expect improved performance when depth information is added to the display (van den Berg AV and Brenner E, Nature 1994;371:700–702). We examined this possibility by having observers indicate perceived self-motion paths when the amount of depth information was varied. When stereoscopic cues and a variety of monocular depth cues were added, observers still misperceived the depicted self-motion when the rotational flow in the display was not accompanied by an appropriate extra-retinal, eye-velocity signal. Specifically, they perceived curved self-motion paths with the curvature in the direction of the simulated eye rotation. The distance to the response marker was crucial to the objective measurement of this misperception. When the marker distance was small, the observers' settings were reasonably accurate despite the misperception of the depicted self-motion. When the marker distance was large, the settings exhibited the errors reported previously by Royden et al. The path judgment errors observers make during simulated gaze rotations appear to be the result of misattributing path-independent rotation to self-motion along a circular path with path-dependent rotation. An analysis of the information an observer could use to avoid such errors reveals that the addition of depth information is of little use.

5.
Previous studies have generally considered heading perception to be a visual task. However, since judgments of heading direction are required only during self-motion, there are several other relevant senses which could provide supplementary and, in some cases, necessary information to make accurate and precise judgments of the direction of self-motion. We assessed the contributions of several of these senses using tasks chosen to reflect the reference system used by each sensory modality. Head-pointing and rod-pointing tasks were performed in which subjects aligned either the head or an unseen pointer with the direction of motion during whole body linear motion. Passive visual and vestibular stimulation was generated by accelerating subjects at sub- or supravestibular thresholds down a linear track. The motor-kinesthetic system was stimulated by having subjects actively walk along the track. A helmet-mounted optical system, fixed either on the cart used to provide passive visual or vestibular information or on the walker used in the active walking conditions, provided a stereoscopic display of an optical flow field. Subjects could be positioned at any orientation relative to the heading, and heading judgments were obtained using unimodal visual, vestibular, or walking cues, or combined visual-vestibular and visual-walking cues. Vision alone resulted in reasonably precise and accurate head-pointing judgments (0.3 degrees constant errors, 2.9 degrees variable errors), but not rod-pointing judgments (3.5 degrees constant errors, 5.9 degrees variable errors). Concordant visual-walking stimulation slightly decreased the variable errors and reduced constant pointing errors to close to zero, while head-pointing errors were unaffected.

6.
This study examined whether the perception of heading is determined by spatially pooling velocity information. Observers were presented displays simulating observer motion through a volume of 3-D objects. To test the importance of spatial pooling, the authors systematically varied the nonrigidity of the flow field using two types of object motion: adding a unique rotation or translation to each object. Calculations of the signal-to-noise (observer velocity-to-object motion) ratio indicated no decrements in performance when the ratio was .39 for object rotation and .45 for object translation. Performance also increased with the number of objects in the scene. These results suggest that heading is determined by mechanisms that use spatial pooling over large regions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
In eight experiments, we examined the ability to judge heading during tracking eye movements. To assess the use of retinal-image and extra-retinal information in this task, we compared heading judgments with executed as opposed to simulated eye movements. In general, judgments were much more accurate during executed eye movements. Observers in the simulated eye movement condition misperceived their self-motion as curvilinear translation rather than the linear translation plus eye rotation that was simulated. There were some experimental conditions in which observers could judge heading reasonably accurately during simulated eye movements; these included conditions in which eye movement velocities were 1 deg/sec or less and conditions which made available a horizon cue that exists for locomotion parallel to a ground plane with a visible horizon. Overall, our results imply that extra-retinal, eye-velocity signals are used in determining heading under many, perhaps most, viewing conditions.

8.
To understand better the range of conditions supporting stereoscopic vision, we explored the effects of speed, as well as of specific optic flow patterns, on judgments of depth (near or far of fixation) for large targets briefly presented in the upper periphery. The targets had large disparities (1-6 deg) and moved at high speeds (20-100 deg/sec). Motion was either vertical or horizontal, and either unidirectional or layered in bands of alternating directions (opponent-motion). High stimulus speeds can extend dmax. The effects are explained by models having linear filters that signal both faster speeds and larger disparities. Stereo depth localization can also be enhanced by opponent-motion even when kinetic depth itself is not apparent. Improvements are greatest with wide-field, horizontal opponent-motion. The results imply that functions such as vection, posture control, and vergence may benefit from disparity information enhanced by optic flow patterns that are commonly available to a moving, binocular observer.

9.
The influence of stereoscopic vision on the perception of optic flow fields was investigated in experiments based on a recently described illusion. In this illusion, subjects perceive a shift of the center of an expanding optic flow field when it is transparently superimposed by a unidirectional motion pattern. This illusory shift can be explained by the visual system taking the presented flow pattern as a certain self-motion flow field. Here we examined the dependence of the illusory transformation on differences in depth between the two superimposed motion patterns. Presenting them with different relative binocular disparities, we found a strong variation in the magnitude of the illusory shift. Especially when translation was in front of expansion, a highly significant decrease of the illusory shift occurred, down to 25% of its magnitude at zero disparity. These findings confirm the assumption that the motion pattern is interpreted as a self-motion flow field. In a further experiment we presented monocular depth cues by changing dot size and dot density. This caused a reduction of the illusory shift which is distinctly smaller than under stereoscopic presentation. We conclude that the illusory optic flow transformation is modified by depth information, especially by binocular disparity. The findings are linked to the phenomenon of induced motion and are related to neurophysiology.

10.
According to Einstein's equivalence principle, inertial accelerations during translational motion are physically indistinguishable from gravitational accelerations experienced during tilting movements. Nevertheless, despite ambiguous sensory representation of motion in primary otolith afferents, primate oculomotor responses are appropriately compensatory for the correct translational component of the head movement. The neural computational strategies used by the brain to discriminate the two and to reliably detect translational motion were investigated in the primate vestibulo-ocular system. The experimental protocols consisted of either lateral translations, roll tilts, or combined translation-tilt paradigms. Results using both steady-state sinusoidal and transient motion profiles in darkness or near target viewing demonstrated that semicircular canal signals are necessary sensory cues for the discrimination between different sources of linear acceleration. When the semicircular canals were inactivated, horizontal eye movements (appropriate for translational motion) could no longer be correlated with head translation. Instead, translational eye movements totally reflected the erroneous primary otolith afferent signals and were correlated with the resultant acceleration, regardless of whether it resulted from translation or tilt. Therefore, at least for frequencies in which the vestibulo-ocular reflex is important for gaze stabilization (>0.1 Hz), the oculomotor system discriminates between head translation and tilt primarily by sensory integration mechanisms rather than frequency segregation of otolith afferent information. Nonlinear neural computational schemes are proposed in which not only linear acceleration information from the otolith receptors but also angular velocity signals from the semicircular canals are simultaneously used by the brain to correctly estimate the source of linear acceleration and to elicit appropriate oculomotor responses.
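The canal-plus-otolith integration scheme described above can be sketched computationally. This is an illustrative reconstruction under stated assumptions, not the authors' model: it uses the specific-force convention f = a - g (with g pointing down), tracks gravity in head coordinates with a small-angle rotation update dg/dt = -ω × g, and then attributes the remainder of the otolith signal to translation.

```python
import numpy as np

def rotate(v, omega, dt):
    # Small-angle update: how a space-fixed vector (gravity) evolves in
    # head coordinates while the head rotates at angular velocity omega.
    return v - np.cross(omega, v) * dt

def discriminate(f_series, omega_series, g0, dt):
    """Estimate translational acceleration from an otolith-like specific
    force (f = a - g) by tracking gravity with canal-like angular velocity."""
    g_hat = g0.copy()
    a_est = []
    for f, omega in zip(f_series, omega_series):
        g_hat = rotate(g_hat, omega, dt)   # internal gravity estimate
        a_est.append(f + g_hat)            # residual = translation
    return np.array(a_est)

dt, n = 0.001, 1000
g0 = np.array([0.0, 0.0, -9.81])           # gravity in head frame, pointing down

# Pure roll tilt at 30 deg/s: otoliths alone see a changing linear-force
# vector, but the canal signal lets the estimator assign it all to gravity.
omega = np.tile(np.deg2rad([30.0, 0.0, 0.0]), (n, 1))
g = g0.copy()
f_tilt = []
for w in omega:
    g = rotate(g, w, dt)                   # true gravity in head coordinates
    f_tilt.append(-g)                      # specific force with a = 0
a_tilt = discriminate(np.array(f_tilt), omega, g0, dt)   # ~ zero translation

# Pure translation at 1 m/s^2: the estimator recovers it.
a_true = np.array([1.0, 0.0, 0.0])
f_trans = np.tile(a_true - g0, (n, 1))
a_trans = discriminate(f_trans, np.zeros((n, 3)), g0, dt)
```

With canal input zeroed out (as in the inactivation experiments), `discriminate` would misread the tilt sequence as a genuine linear acceleration, mirroring the erroneous otolith-driven responses reported above.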

11.
This article describes a self-organizing neural network architecture that transforms optic flow and eye position information into representations of heading, scene depth, and moving object locations. These representations are used to navigate reactively in simulations involving obstacle avoidance and pursuit of a moving target. The network's weights are trained during an action-perception cycle in which self-generated eye and body movements produce optic flow information, thus allowing the network to tune itself without requiring explicit knowledge of sensor geometry. The confounding effect of eye movement during translation is suppressed by learning the relationship between eye movement outflow commands and the optic flow signals that they induce. The remaining optic flow field is due to only observer translation and independent motion of objects in the scene. A self-organizing feature map categorizes normalized translational flow patterns, thereby creating a map of cells that code heading directions. Heading information is then recombined with translational flow patterns in two different ways to form maps of scene depth and moving object locations. Most of the learning processes take place concurrently and evolve through unsupervised learning. Mapping the learned heading representations onto heading labels or motor commands requires additional structure. Simulations of the network verify its performance using both noise-free and noisy optic flow information.
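The self-organizing-map stage that categorizes normalized translational flow patterns can be shown with a toy example. This is a heavily simplified sketch of the idea, not the published architecture: a one-dimensional "retina", depth-free radial flow, and a minimal Kohonen update; every parameter here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(-1.0, 1.0, 21)            # image positions on a 1-D retina

def flow_pattern(heading, xs):
    # 1-D radial flow for pure translation: velocity grows with distance
    # from the focus of expansion; normalized, as in the network's input.
    v = xs - heading
    return v / np.linalg.norm(v)

# Minimal 1-D Kohonen map over normalized translational flow patterns.
n_units = 10
w = rng.normal(scale=0.1, size=(n_units, xs.size))
headings = rng.uniform(-0.5, 0.5, size=2000)
for t, h in enumerate(headings):
    p = flow_pattern(h, xs)
    bmu = int(np.argmin(np.sum((w - p) ** 2, axis=1)))   # best-matching unit
    lr = 0.5 * (1.0 - t / len(headings))                 # decaying learning rate
    for j in range(n_units):
        nb = np.exp(-((j - bmu) ** 2) / 2.0)             # neighborhood kernel
        w[j] += lr * nb * (p - w[j])

def map_heading(h):
    # After unsupervised training, the winning unit codes the heading.
    return int(np.argmin(np.sum((w - flow_pattern(h, xs)) ** 2, axis=1)))
```

After training, different headings activate different map cells, giving the place-coded heading representation that the article then recombines with the flow to estimate depth and object motion.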

12.
The motion of objects during motion parallax can be decomposed into 2 observer-relative components: translation and rotation. The depth ratio of objects in the visual field is specified by the inverse ratio of their angular displacement (from translation) or equivalently by the inverse ratio of their rotations. Despite the equal mathematical status of these 2 information sources, it was predicted that observers would be far more sensitive to the translational than rotational components. Such a differential sensitivity is implicitly assumed by the computer graphics technique billboarding, in which 3-dimensional (3-D) objects are drawn as planar forms (i.e., billboards) maintained normal to the line of sight. In 3 experiments, observers were found to be consistently less sensitive to rotational anomalies. The implications of these findings for kinetic depth effect displays and billboarding techniques are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
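The inverse-ratio relation stated above is easy to verify numerically. A minimal sketch follows; the small-angle approximation (angular velocity ≈ lateral speed / depth, for objects near the line of sight) is an assumption made for illustration.

```python
def depth_ratio_from_parallax(theta_dot_1, theta_dot_2):
    """Relative depth of two objects during observer translation: the nearer
    object sweeps a larger visual angle, so z1 / z2 equals the INVERSE ratio
    of the angular displacements (the same holds for the rotation components)."""
    return theta_dot_2 / theta_dot_1

# Geometry check: lateral observer speed v, objects at depths z1 and z2.
v, z1, z2 = 1.5, 2.0, 6.0
theta1 = v / z1        # angular velocity of the near object (small-angle approx.)
theta2 = v / z2        # angular velocity of the far object
ratio = depth_ratio_from_parallax(theta1, theta2)   # recovers z1 / z2
```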

13.
What visual information do we use to guide movement through our environment? Self-movement produces a pattern of motion on the retina, called optic flow. During translation, the direction of movement (locomotor direction) is specified by the point in the flow field from which the motion vectors radiate - the focus of expansion (FoE) [1-3]. If an eye movement is made, however, the FoE no longer specifies locomotor direction [4], but the 'heading' direction can still be judged accurately [5]. Models have been proposed that remove confounding rotational motion due to eye movements by decomposing the retinal flow into its separable translational and rotational components ([6-7] are early examples). An alternative theory is based upon the use of invariants in the retinal flow field [8]. The assumption underpinning all these models (see also [9-11]), and associated psychophysical [5,12,13] and neurophysiological studies [14-16], is that locomotive heading is guided by optic flow. In this paper we challenge that assumption for the control of direction of locomotion on foot. Here we have explored the role of perceived location by recording the walking trajectories of people wearing displacing prism glasses. The results suggest that perceived location, rather than optic or retinal flow, is the predominant cue that guides locomotion on foot.
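The focus of expansion referred to above can be recovered from a pure-translation flow field by least squares: each flow vector must be collinear with the line joining its image point to the FoE, which gives one linear constraint per vector. A minimal sketch with synthetic data (all names and parameter values are illustrative):

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares FoE for a translational flow field. Collinearity of
    (p - foe) with the flow (u, v) means (x - x0) * v - (y - y0) * u = 0,
    i.e. v * x0 - u * y0 = v * x - u * y: one linear equation per vector."""
    u, v = flows[:, 0], flows[:, 1]
    A = np.column_stack([v, -u])
    b = v * points[:, 0] - u * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic radial field expanding from (0.2, -0.1); flow speed scales with
# inverse depth, so magnitudes vary while directions stay radial.
rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, (50, 2))
true_foe = np.array([0.2, -0.1])
depths = rng.uniform(1, 5, 50)
flw = (pts - true_foe) / depths[:, None]
est = focus_of_expansion(pts, flw)       # recovers true_foe
```

Note that this recovery is exactly what breaks down once an eye rotation adds a non-radial component to every vector, which is why the decomposition models cited above are needed in the first place.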

14.
The accuracy of depth judgments that are based on binocular disparity or structure from motion (motion parallax and object rotation) was studied in 3 experiments. In Experiment 1, depth judgments were recorded for computer simulations of cones specified by binocular disparity, motion parallax, or stereokinesis. In Experiment 2, judgments were recorded for real cones in a structured environment, with depth information from binocular disparity, motion parallax, or object rotation about the y-axis. In both of these experiments, judgments from binocular disparity information were quite accurate, but judgments on the basis of geometrically equivalent or more robust motion information reflected poor recovery of quantitative depth information. A 3rd experiment demonstrated stereoscopic depth constancy for distances of 1 to 3 m using real objects in a well-illuminated, structured viewing environment in which monocular depth cues (e.g., shading) were minimized. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
The 2D projection of a rotating Necker cube yields an ambiguous 3D interpretation based on both 2D shape and kinetic depth information. The present study shows that the alternation rate of the two 3D interpretations is constant across rotation speeds up to some critical value (around 25 turns/min for a cube whose sides subtend 2.5 deg) and increases monotonically thereafter. It is proposed that the additional perceptual reversals (PRs) observed at high rotation speeds are due to the increased frequency of the crossovers of the cube's edges. These crossovers yield 2D motion "aliasing" (or discontinuity) and "veridical" (or continuity) motion components. The motion aliasing (or crossover) hypothesis states that, in addition to the inherent ambiguity of the dynamic 2D projection of 3D objects, perceptual motion/perspective reversals will occur any time the discontinuity speed exceeds the continuity speed. It is proposed that the relative strengths of the two components depend on the linear speed of the projected edges and that the discontinuity component exceeds the continuity one in the speed range where contrast sensitivity (or, above threshold, efficiency) is a decreasing function of speed. The motion aliasing hypothesis was tested and supported in a series of independent experiments showing that, for rotation speeds higher than 25 turns/min, the PR rate increases with the crossover frequency at a constant speed, with linear speed at a constant crossover frequency, and with the similarity of the crossing bars in terms of their orientation, polarity, and spatial overlap. In addition, some of these experiments suggest that 2D-shape and kinetic-depth 3D cues combine in such a way that the average PR rate they yield together is the same as the PR rate yielded by each of them independently. In the Discussion section we elaborate on issues related to the perceptual combination of ambiguous 2D-shape and kinetic-depth 3D cues.

16.
We introduce an objective shape-identification task (SIT) for measuring the kinetic depth effect (KDE). A rigidly rotating surface consisting of hills and valleys on an otherwise flat ground was defined by 300 randomly positioned dots. On each trial, Ss identified 1 of 53 shapes and its direction of rotation. Identification accuracy was an objective measure of Ss' perceptual ability to extract 3D structure from 2D motion via KDE. Objective accuracy data were consistent with previous subjective rating judgments of depth and coherence. Along with motion cues, rotating real 3D dot-defined shapes produced a cue of changing dot density. Shortening dot lifetimes to control dot density showed that changing density was neither necessary nor sufficient to account for accuracy; motion alone sufficed. Our SIT was solvable with motion cues from the 6 most relevant locations. We used the dots from these locations in a simplified 2D direction-labeling motion task with 6 perceptually flat flow fields. Ss' performance in the 2D and 3D tasks was equivalent, indicating that the information processing capacity of KDE is not unique. Our proposed structure-from-motion algorithm for the SIT finds relative minima and maxima of local velocity and then assigns 3D depths proportional to velocity. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
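The algorithm named in the final sentence - find relative minima and maxima of local velocity, then assign depth proportional to velocity - can be sketched directly. This is an illustrative reading of the abstract, not the authors' code; the corrugated profile and normalization are assumptions.

```python
import numpy as np

def local_extrema(speeds):
    """Indices of relative minima and maxima of local image speed."""
    s = np.asarray(speeds, dtype=float)
    maxima = [i for i in range(1, len(s) - 1) if s[i] > s[i - 1] and s[i] > s[i + 1]]
    minima = [i for i in range(1, len(s) - 1) if s[i] < s[i - 1] and s[i] < s[i + 1]]
    return minima, maxima

def depth_from_velocity(speeds):
    """Map local image speed directly to relative depth (hills at the speed
    maxima, valleys at the minima), normalized to [0, 1]."""
    s = np.asarray(speeds, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# A corrugated speed profile along one image row: two hills around a valley.
profile = [0.1, 0.4, 0.9, 0.4, 0.2, 0.5, 0.8, 0.3]
minima, maxima = local_extrema(profile)      # valley at index 4, hills at 2 and 6
rel_depth = depth_from_velocity(profile)
```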

17.
Random-dot kinematograms were used to estimate infants' thresholds for shearing motion in the absence of flicker and position cues. The principal advantage of these stimuli is that changes in dot position are camouflaged by the presence of numerous matching dots, thus necessitating the detection of motion before extraction of local pattern features. 13- and 20-wk-old infants were tested with a forced-choice preferential looking technique. The target stimulus resembled a vertically oriented corrugated pattern that oscillated at 1 Hz, if, and only if, shearing motion was detected. Infants were tested at different velocities, ranging from 0.7°/sec to 5.6°/sec, and the results revealed minimum velocity thresholds of 3.5°/sec and 1.2°/sec for 13- and 20-wk-old infants, respectively. Possible interpretations for these results based on position- or flicker-sensitive mechanisms are considered and are found inconsistent with the overall pattern of results. It is concluded that infants detect shearing motion in random-dot displays with a motion-sensitive mechanism. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Four experiments were directed at understanding the influence of multiple moving objects on curvilinear (i.e., circular and elliptical) heading perception. Displays simulated observer movement over a ground plane in the presence of moving objects depicted as transparent, opaque, or black cubes. Objects either moved parallel to or intersected the observer's path and either retreated from or approached the moving observer. Heading judgments were accurate and consistent across all conditions. The significance of these results for computational models of heading perception and for information in the global optic flow field about observer and object motion is discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Most theoretical approaches to perception of heading rely on the directions of image velocity vectors as the primary source of heading information. The research described in this article examined an additional source of information for determining heading: distributions of image velocity magnitudes. Displays simulated observer motion relative to rigid three-dimensional environments. Depth was distributed nonuniformly such that image velocity magnitudes provided, for some display conditions, conflicting heading information relative to the radial directions of the flow field. Results indicated that image velocity magnitudes influenced heading performance, suggesting that the perception of heading is not based solely on the radial structure of the directions of image flow.

20.