Similar Documents
20 similar documents retrieved.
1.
The prediction of future positions of moving objects occurs in cases of actively produced and passively observed movement. Additionally, the moving object may or may not be tracked with the eyes. The authors studied the difference between active and passive movement prediction by asking observers to estimate displacements of an occluded moving target, where the movement was driven by the observer's manual action or was passively observed. In the absence of eye tracking, they found that in the active condition, estimates are more anticipatory than in the passive conditions. Decreasing the congruence between motor action and visual feedback diminished but did not eliminate the anticipatory effect of action. When the target was tracked with the eyes, the effect of manual action disappeared. Results indicate distinct contributions of hand and eye movement signals to the prediction of trajectories of moving objects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
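To make the task concrete: judging an occluded target's displacement amounts to extrapolating its position from its last visible state, and an anticipatory bias shows up as an overestimated displacement. The sketch below is not the authors' model; it is a hypothetical constant-velocity extrapolator with an assumed "anticipation gain" whose only purpose is to illustrate the kind of bias being measured.

```python
# Hypothetical sketch: constant-velocity extrapolation of an occluded target.
# An anticipation gain > 1 produces the forward (anticipatory) bias described above.

def extrapolate_position(last_pos, last_vel, occlusion_time, anticipation_gain=1.0):
    """Estimate target position after `occlusion_time` seconds without visual input.

    anticipation_gain = 1.0 -> veridical constant-velocity extrapolation;
    anticipation_gain > 1.0 -> anticipatory (ahead-of-target) estimates,
    as reported here for the active (self-moved) condition without eye tracking.
    """
    return last_pos + anticipation_gain * last_vel * occlusion_time

# Example (arbitrary numbers): target at 10 deg, moving at 8 deg/s, occluded for 0.5 s.
veridical = extrapolate_position(10.0, 8.0, 0.5, anticipation_gain=1.0)   # 14.0 deg
active    = extrapolate_position(10.0, 8.0, 0.5, anticipation_gain=1.15)  # 14.6 deg
print(veridical, active)
```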

2.
When rotating stripes or other periodic stimuli cross the retina at a critical rate, a reversal in the direction of motion of the stimuli is often seen. This illusion of motion perception was used to explore the roles of retinal and perceived motion in the generation of optokinetic nystagmus. Here we show that optokinetic nystagmus is disrupted during the perception of this illusion. Thus, when perceived and actual motion are in conflict, subjects fail to track the veridical movement. This observation suggests that the perception of motion can directly influence optokinetic nystagmus, even in the presence of a moving retinal image. A conflict in the neural representation of motion in different brain areas may explain these findings.

3.
It is well known that dynamic visual information influences movement control, whereas the role played by background visual information is still largely unknown. Evidence coming mainly from eye movement and manual tracking studies indicates that background visual information modifies motion perception and might influence movement control. The goal of the present study was to test this hypothesis. Subjects had to apply pressure on a strain gauge to displace in a single action a cursor shown on a video display and to immobilize it on a target shown on the same display. In some instances, the visual background against which the cursor moved was unexpectedly perturbed in a direction opposite to (Experiment 1), or in the same direction as (Experiment 2) the cursor controlled by the subject. The results of both experiments indicated that the introduction of a visual perturbation significantly affected aiming accuracy. These results suggest that background visual information is used to evaluate the velocity of the aiming cursor, and that this perceived velocity is fed back to the control system, which uses it for on-line corrections.
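One way to read this conclusion is that the controller does not use the cursor's physical velocity but its velocity relative to the background, so a background shift injects an error into the on-line correction. The fragment below is only an illustrative sketch under that relative-motion assumption; the function names, gains, and controller form are made up and are not the authors' model.

```python
# Illustrative sketch (assumed relative-motion model, not the authors' implementation):
# the perceived cursor velocity is taken relative to the background, and a simple
# proportional-derivative rule uses that percept for on-line corrections.

def perceived_velocity(cursor_vel, background_vel):
    # Background motion opposite to the cursor inflates perceived cursor speed;
    # motion in the same direction deflates it.
    return cursor_vel - background_vel

def online_correction(target_pos, cursor_pos, cursor_vel, background_vel,
                      gain_p=4.0, gain_d=1.0):
    """Correction signal driven by the *perceived* (relative) cursor velocity."""
    v_hat = perceived_velocity(cursor_vel, background_vel)
    return gain_p * (target_pos - cursor_pos) - gain_d * v_hat

# A background moving opposite to the cursor makes the cursor seem faster, so this
# toy controller brakes earlier -- one way such a perturbation can bias the endpoint.
print(online_correction(10.0, 8.0, cursor_vel=5.0, background_vel=0.0))
print(online_correction(10.0, 8.0, cursor_vel=5.0, background_vel=-3.0))
```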

4.
It is well known that during volitional sinusoidal tracking the long-latency reflex modulates in parallel with the volitional EMG activity. In this study, a series of experiments are reported demonstrating several conditions in which an uncoupling of reflex from volitional activity occurs. The paradigm consists of a visually guided task in which the subject tracked a sinusoid with the wrist. The movement was perturbed by constant torque or controlled velocity perturbations at 45-degree intervals of the tracking phase. Volitional and reflex-evoked EMG and wrist displacement as functions of the tracking phase were recorded. The relationship of both short-latency (30-60 ms) and longer-latency (60-100 ms) reflex components to the volitional EMG was evaluated. In reflex tracking, the peak reflex amplitude occurs at phases of tracking which correspond to a maximum of wrist joint angular velocity in the direction of homonymous muscle shortening and a minimum of wrist compliance. Uncoupling of the reflex and volitional EMG was observed in three situations. First, during passive movement of the wrist through the sinusoidal tracking cycle, the perturbation-evoked long-latency stretch reflex peak is modulated as in normal, volitional tracking; with passive joint movement, however, the volitional EMG modulation is undetectable. Second, a subset of subjects demonstrates a normally modulated and positioned long-latency reflex with a single peak, yet these subjects have distinct bimodal peaks of volitional EMG. Third, the imposition of an anti-elastic load (positive position feedback) shifts the volitional EMG envelope by as much as 180 degrees along the tracking phase when compared with conventional elastic loading. Yet the long-latency reflex peak remains at its usual phase in the tracking cycle, corresponding to the maximal velocity in the direction of muscle shortening. Furthermore, comparison of the results from elastic and anti-elastic loads reveals a dissociation of short- and long-latency reflex activity, with the short-latency reflex shifting with the volitional EMG envelope. Comparable results were also obtained for controlled velocity perturbations used to control for changes in joint compliance. The uncoupling of the reflex and volitional EMG activity in the present series of experiments points to a flexible relationship between reflex and volitional control systems, altered by peripheral input and external load.

5.
When viewing a moving object, details may appear blurred if the object's motion is not compensated for by the eyes. Smooth pursuit is a voluntary eye movement that is used to stabilize a moving object. Most studies of smooth pursuit have used small, foveal targets as stimuli (e.g. Lisberger SG and Westbrook LE. J Neurosci 1985;5:1662-1673.). However, in the laboratory, smooth pursuit is poorer when a small object is tracked across a background, presumably due to a conflict between the primitive optokinetic reflex and smooth pursuit. Functionally, this could occur if the motion signal arising from the target and its surroundings were averaged, resulting in a smaller net motion signal. We asked if the smooth pursuit system could spatially summate coherent motion, i.e. if its response would improve when motion in the peripheral retina was in the same direction as motion in the fovea. Observers tracked random-dot cinematograms (RDC) which were devoid of consistent position cues to isolate the motion response. Either the height or the density of the display was systematically varied. Eye speed at the end of the open-loop period was greater for cinematograms than for a single spot. In addition, eye acceleration increased and latency decreased as the size of the aperture increased. Changes in the density produced similar but smaller effects on both acceleration and latency. The improved pursuit for larger motion stimuli suggests that neuronal mechanisms subserving smooth pursuit spatially average motion information to obtain a stronger motion signal.
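The spatial-summation idea can be captured by pooling local motion signals over the stimulated area: a larger or denser coherent stimulus yields a less noisy, stronger pooled drive and therefore faster open-loop acceleration. The snippet below is a schematic pooling model written for illustration only (the noise model and detector counts are assumptions, not the study's analysis).

```python
import numpy as np

# Schematic pooling model (an assumption for illustration): local motion detectors
# covering the stimulus each report the same coherent velocity plus independent noise,
# and the pursuit drive is their average. More detectors (a larger or denser RDC)
# -> a more reliable effective drive.

rng = np.random.default_rng(0)

def pooled_motion_signal(stimulus_velocity, n_detectors, noise_sd=4.0):
    local = stimulus_velocity + rng.normal(0.0, noise_sd, size=n_detectors)
    return local.mean()

for n in (1, 16, 256):  # e.g. a single spot vs. increasingly large/dense cinematograms
    estimates = [pooled_motion_signal(10.0, n) for _ in range(1000)]
    print(f"n={n:4d}  mean drive={np.mean(estimates):5.2f}  sd={np.std(estimates):4.2f}")
```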

6.
A stereoscopic rotational movement aftereffect (MAE) and a stereoscopic bi-directional MAE were generated by rotation of a cyclopean random dot cylinder in depth and by movement of two cyclopean random dot planes in opposite directions, respectively. These two configurations also cross-adapted each other, but not stimuli lacking any disparity. Cross-adaptation MAEs were, however, generated between stereoscopic and non-stereoscopic random dot stimuli moving in a single X/Y plane. Spontaneous reversals in direction of movement were observed with bistable stimuli lacking disparity. Two models of the middle temporal area were considered which might explain both the stereoscopic MAEs and the spontaneous reversals.

7.
This research focused on the response of neurons in the inferior colliculus of the unanesthetized mustached bat, Pteronotus parnelli, to apparent auditory motion. We produced the apparent motion stimulus by broadcasting pure-tone bursts sequentially from an array of loudspeakers along horizontal, vertical, or oblique trajectories in the frontal hemifield. Motion direction had an effect on the response of 65% of the units sampled. In these cells, motion in opposite directions produced shifts in receptive field locations, differences in response magnitude, or a combination of the two effects. Receptive fields typically were shifted opposite the direction of motion (i.e., units showed a greater response to moving sounds entering the receptive field than exiting) and shifts were obtained to horizontal, vertical, and oblique motion orientations. Response latency also shifted as a function of motion direction, and stimulus locations eliciting greater spike counts also exhibited the shortest neural latency. Motion crossing the receptive field boundaries appeared to be both necessary and sufficient to produce receptive field shifts. Decreasing the silent interval between successive stimuli in the apparent motion sequence increased both the probability of obtaining a directional effect and the magnitude of receptive field shifts. We suggest that the observed directional effects might be explained by "spatial masking," where the response of auditory neurons after stimulation from particularly effective locations in space would be diminished. The shift in auditory receptive fields would be expected to shift the perceived location of a moving sound and may explain shifts in localization of moving sources observed in psychophysical studies. Shifts in perceived target location caused by auditory motion might be exploited by auditory predators such as Pteronotus in a predictive tracking strategy to capture moving insect prey.

8.
When 2 targets for pursuit eye movements move in different directions, the eye velocity follows the vector average (S. G. Lisberger & V. P. Ferrera, 1997). The present study investigates the mechanisms of target selection when observers are instructed to follow a predefined horizontal target and to ignore a moving distractor stimulus. Results show that at 140 ms after distractor onset, horizontal eye velocity is decreased by about 25%. Vertical eye velocity deviates by about 1°/s in the direction opposite to the distractor. This deviation varies in size with distractor direction, velocity, and contrast. The effect was present during the initiation and steady-state tracking phases of pursuit but only when the observer had prior information about target motion. Neither vector averaging nor winner-take-all models could predict the response to a moving to-be-ignored distractor during steady-state tracking of a predefined target. The contributions of perceptual mislocalization and spatial attention to the vertical deviation in pursuit are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
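For reference, the two standard selection rules mentioned here, vector averaging and winner-take-all, are easy to state explicitly. The sketch below simply contrasts them for a horizontal target plus an oblique distractor; the stimulus numbers are illustrative and this is not the decoding model used in the study.

```python
import numpy as np

def vector_average(v_target, v_distractor, w_target=0.5):
    """Weighted vector average of two stimulus velocities (2-D, deg/s)."""
    return w_target * np.asarray(v_target) + (1.0 - w_target) * np.asarray(v_distractor)

def winner_take_all(v_target, v_distractor, target_wins=True):
    """Pure selection: the eye follows only one of the two motions."""
    return np.asarray(v_target if target_wins else v_distractor)

v_target = (10.0, 0.0)      # predefined horizontal target, 10 deg/s
v_distractor = (0.0, 8.0)   # to-be-ignored distractor moving upward

print("vector average :", vector_average(v_target, v_distractor))   # [5. 4.]
print("winner-take-all:", winner_take_all(v_target, v_distractor))  # [10. 0.]
# The reported response (slightly reduced horizontal speed plus a small vertical
# deviation AWAY from the distractor) matches neither rule, which is the study's point.
```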

9.
A neural network model based on the anatomy and physiology of the cerebellum is presented that can generate both simple and complex predictive pursuit, while also responding in a feedback mode to visual perturbations from an ongoing trajectory. The model allows the prediction of complex movements by adding two features that are not present in other pursuit models: an array of inputs distributed over a range of physiologically justified delays, and a novel, biologically plausible learning rule that generated changes in synaptic strengths in response to retinal slip errors that arrive after long delays. To directly test the model, its output was compared with the behavior of monkeys tracking the same trajectories. There was a close correspondence between model and monkey performance. Complex target trajectories were created by summing two or three sinusoidal components of different frequencies along horizontal and/or vertical axes. Both the model and the monkeys were able to track these complex sum-of-sines trajectories with small phase delays that averaged 8 and 20 ms in magnitude, respectively. Both the model and the monkeys showed a consistent relationship between the high- and low-frequency components of pursuit: high-frequency components were tracked with small phase lags, whereas low-frequency components were tracked with phase leads. The model was also trained to track targets moving along a circular trajectory with infrequent right-angle perturbations that moved the target along a circle meridian. Before the perturbation, the model tracked the target with very small phase differences that averaged 5 ms. After the perturbation, the model overshot the target while continuing along the expected nonperturbed circular trajectory for 80 ms, before it moved toward the new perturbed trajectory. Monkeys showed similar behaviors with an average phase difference of 3 ms during circular pursuit, followed by a perturbation response after 90 ms. In both cases, the delays required to process visual information were much longer than delays associated with nonperturbed circular and sum-of-sines pursuit. This suggests that both the model and the eye make short-term predictions about future events to compensate for visual feedback delays in receiving information about the direction of a target moving along a changing trajectory. In addition, both the eye and the model can adjust to abrupt changes in target direction on the basis of visual feedback, but do so after significant processing delays.
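The sum-of-sines trajectories and the phase-lag performance measure are straightforward to reproduce. The code below generates such a target and estimates the tracking delay of a simulated (here, artificially lagged) eye trace by shifting-and-matching; the frequencies, amplitudes, and lag are arbitrary choices, and this illustrates only the measure, not the cerebellar network model itself.

```python
import numpy as np

# Build a sum-of-sines target (two frequency components on one axis) and measure
# the delay of a response that lags it by a fixed amount (8 ms, chosen to echo the
# model's average delay reported above).

dt = 0.001                               # 1 ms resolution
t = np.arange(0.0, 10.0, dt)
target = 5.0 * np.sin(2 * np.pi * 0.4 * t) + 2.0 * np.sin(2 * np.pi * 1.3 * t)

true_lag_ms = 8
eye = np.interp(t - true_lag_ms / 1000.0, t, target)   # eye = target delayed by 8 ms

def estimated_lag_ms(target, eye, dt, max_lag_ms=100):
    """Lag (ms) at which shifting the eye trace best matches the target."""
    max_shift = int(max_lag_ms / 1000.0 / dt)
    errors = []
    for s in range(max_shift + 1):
        if s == 0:
            errors.append(np.mean((target - eye) ** 2))
        else:
            errors.append(np.mean((target[:-s] - eye[s:]) ** 2))
    return 1000.0 * dt * int(np.argmin(errors))

print(estimated_lag_ms(target, eye, dt))   # ~8 ms
```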

10.
As a step toward understanding the mechanism by which targets are selected for smooth-pursuit eye movements, we examined the behavior of the pursuit system when monkeys were presented with two discrete moving visual targets. Two rhesus monkeys were trained to select a small moving target identified by its color in the presence of a moving distractor of another color. Smooth-pursuit eye movements were quantified in terms of the latency of the eye movement and the initial eye acceleration profile. We have previously shown that the latency of smooth pursuit, which is normally around 100 ms, can be extended to 150 ms or shortened to 85 ms depending on whether there is a distractor moving in the opposite or same direction, respectively, relative to the direction of the target. We have now measured this effect for a 360 deg range of distractor directions, and distractor speeds of 5-45 deg/s. We have also examined the effect of varying the spatial separation and temporal asynchrony between target and distractor. The results indicate that the effect of the distractor on the latency of pursuit depends on its direction of motion, and its spatial and temporal proximity to the target, but depends very little on the speed of the distractor. Furthermore, under the conditions of these experiments, the direction of the eye movement that is emitted in response to two competing moving stimuli is not a vectorial combination of the stimulus motions, but is solely determined by the direction of the target. The results are consistent with a competitive model for smooth-pursuit target selection and suggest that the competition takes place at a stage of the pursuit pathway that is between visual-motion processing and motor-response preparation.
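A compact way to summarize the latency findings is a toy rule in which the distractor shifts pursuit onset by an amount that depends on its direction relative to the target and on its spatial/temporal proximity, but not on its speed. Only the 100/85/150 ms figures come from the abstract; the functional form, gains, and proximity scaling below are assumptions for illustration, not the authors' competitive model.

```python
import numpy as np

# Toy summary of the latency results (functional form assumed):
# a same-direction distractor shortens pursuit latency, an opposite-direction one
# lengthens it, scaled by spatial/temporal proximity; distractor speed plays no role.

def pursuit_latency_ms(angle_deg, proximity=1.0, base_ms=100.0,
                       shorten_ms=15.0, lengthen_ms=50.0):
    """angle_deg: distractor direction relative to the target (0 = same, 180 = opposite).
    proximity: 0..1, combined spatial/temporal closeness (1 = coincident)."""
    c = np.cos(np.deg2rad(angle_deg))   # +1 same direction as target, -1 opposite
    shift = -shorten_ms * c if c > 0 else -lengthen_ms * c
    return base_ms + proximity * shift

print(round(pursuit_latency_ms(0), 1))                   # 85.0  -> same direction shortens
print(round(pursuit_latency_ms(180), 1))                 # 150.0 -> opposite direction lengthens
print(round(pursuit_latency_ms(90), 1))                  # 100.0 -> orthogonal motion, little effect
print(round(pursuit_latency_ms(180, proximity=0.3), 1))  # 115.0 -> a remote/asynchronous distractor does less
```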

11.
In the stroboscopic version of the Pulfrich effect a filter is able to induce depth shifts in a target as if the latter were moving continuously, rather than merely occupying a series of discrete positions. This was examined in a further series of experiments, in which a visual alignment technique was used to measure the perceived visual direction of an apparently moving target in intervals between its presentations. Results showed that the target has approximately the visual direction that it would have if it were moving continuously. This "filling in" of apparent motion was shown to occur before the level of stereopsis. The possible influence of tracking eye movements is discussed.

12.
The magnitudes of cerebral somatosensory evoked potentials (SEPs), following stimulation of cutaneous or muscle afferents in the upper limb, are reduced during active and passive movements of the fingers. The generalizability of such a movement effect was tested for lower limb events. We measured SEP magnitudes following activation of cutaneous (sural) and mixed (tibial) nerves during the flexion phase of active and passive rhythmic movements of the human lower limb. In eight volunteers, 150 SEPs per condition were recorded from Cz' referenced to Fpz'. Compared to stationary controls, both active and passive movements significantly depressed the early SEP components (P1-N1) (mean values reduced to 12.8% and 9.9%, respectively, for tibial nerve stimulation and to 29.6% and 25.6% for sural nerve stimulation; p < 0.05). The attenuation was still observed when only one leg was moved and with stimulation at an earlier point in the flexion phase of movement. Visual fixation did not significantly affect P1-N1 amplitudes, compared to eyes closed. As previously shown, soleus H reflexes with stable M waves were significantly depressed during the movements (p < 0.05). The general construct may be that centripetal flow initiated from somatosensory receptors during limb movement leads to modulation of both spinal and cortical responses following large diameter cutaneous or muscle afferent activation.

13.
Where do we perceive an object to be when it is moving? Nijhawan [1] has reported that if a stationary test pattern is briefly flashed in spatial alignment with a moving one, the moving element actually appears displaced in the direction in which it is moving. Nijhawan postulates that this may be the result of a mechanism that predicts the future position of the moving element so as to compensate for the fact that the element will have moved position from the time at which the light left it to the time at which the observer becomes aware of it (as a result of the finite time taken for neural transmission). There is an alternative explanation of this effect, however. Changes in the stimulus presentation could affect perceptual latency [2], and therefore the perceived position if in motion (as suggested for the Pulfrich pendulum effect [3] [4]). In other words, if the flashed probe of the Nijhawan demonstration takes longer to reach perceptual awareness than the moving stimulus, the latter will appear to be ahead of the probe. Here, I demonstrate an alternative way of testing this hypothesis. When an illusory movement is induced (via the motion aftereffect) within a stationary pattern, it can be shown that this also produces a change in its perceived spatial position. As the pattern is stationary, one cannot account for this result via the notion of perceptual lags.
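The differential-latency account under test can be stated in one line: if the flashed probe reaches awareness Δt later than the moving stimulus, a stimulus moving at speed v should appear ahead of the probe by roughly v·Δt. The snippet below only evaluates that prediction for some plausible round numbers (not values measured in the paper); the paper's argument is that the motion-aftereffect demonstration cannot be explained by such a lag, because the adapted pattern is stationary.

```python
# Differential-latency prediction for a flash-lag-style displacement
# (illustrative numbers only; not data from the paper).

def predicted_offset_deg(speed_deg_per_s, extra_probe_latency_ms):
    """Spatial lead of the moving stimulus if the flashed probe is perceived
    extra_probe_latency_ms later than the moving stimulus."""
    return speed_deg_per_s * (extra_probe_latency_ms / 1000.0)

print(predicted_offset_deg(10.0, 50.0))   # 0.5 deg lead for 10 deg/s and a 50 ms lag
print(predicted_offset_deg(20.0, 50.0))   # 1.0 deg: the predicted offset scales with speed
```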

14.
Administered 2 tasks bearing on the perception and concept of relative velocity to 7-, 9-, 11-, and 13-yr-old children (24 at each age level). The perceptual task presented an illusion created by the movement of a target on a moving background, while the conceptual task was a modified version of Piaget's technique. In each task the 2 movements involved were either in the same (MS) or in the opposite direction. It was found that the illusion appeared earlier than the corresponding concept under the MS condition. The possibility of a retroactive effect of the concept on the percept is suggested. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
To follow visually a small object moving in front of a textured background, insects and vertebrates can employ a similar strategy: saccadic tracking. In the case of vertebrates, the neural components that generate this behavior are not known in detail. The neural substrate of optomotor behavior in Diptera is relatively well understood. Here a model developed from the dipteran data is found to be capable of saccadic tracking. It is characterized by the following components and functions: (1) Two subsystems contribute to the response, a small-field tracking system and a large-field compensatory optomotor system, as suggested previously (Egelhaaf et al. 1988). (2) Both systems need to be suppressed during saccadic rotation. In the small-field system, the suppression, close to the visual input, is mediated by the activity of the large-field system. In the large-field system, suppression, close to the motor output, is due to efferent signals from the saccade generator. A similar model could also apply to vertebrates. Two implications of the present model are that saccadic tracking does not require object identification, and under saccadic tracking it is the background rather than the object that is stabilized on the retina. If objects are identified under these conditions, this must occur even though their image is not stabilized on the retina.
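The two-subsystem architecture can be sketched as a simple gated loop: a large-field system responds to background motion, a small-field system responds to object motion, large-field activity gates off the small-field input, and efference from the saccade generator gates off the large-field output. The code below is a highly reduced, hypothetical rendering of that block diagram; all gains, thresholds, and the gating rule are assumptions, not the published dipteran model.

```python
# Highly reduced sketch of the two-subsystem scheme described above
# (gains, threshold, and gating rule are assumptions for illustration).

def tracking_step(object_slip, background_slip, saccade_in_progress,
                  k_small=1.0, k_large=0.8, gate_threshold=0.5):
    """Return (small_field_drive, large_field_drive) for one time step.

    object_slip:      retinal slip of the small object (deg/s)
    background_slip:  retinal slip of the textured background (deg/s)
    """
    large_field = k_large * background_slip
    # Suppression near the visual input: strong large-field activity
    # (e.g. the background sweep during a saccade) gates off the small-field pathway.
    small_field = 0.0 if abs(large_field) > gate_threshold else k_small * object_slip
    # Suppression near the motor output: efference from the saccade generator
    # silences the large-field (compensatory) pathway during saccadic rotation.
    if saccade_in_progress:
        large_field = 0.0
    return small_field, large_field

print(tracking_step(object_slip=3.0, background_slip=0.0, saccade_in_progress=False))
print(tracking_step(object_slip=3.0, background_slip=5.0, saccade_in_progress=False))
print(tracking_step(object_slip=3.0, background_slip=5.0, saccade_in_progress=True))
```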

16.
Previous research on adaptation to visual-motor rearrangement suggests that the CNS represents accurately only 1 visual–motor mapping at a time. This idea was examined in 3 experiments where Ss tracked a moving target under repeated alternations between 2 initially interfering mappings (the "normal" mapping characteristic of computer input devices and a 108° rotation of the normal mapping). Alternation between the 2 mappings led to significant reduction in error under the rotated mapping and significant reduction in the adaptation aftereffect ordinarily caused by switching between mappings. Color as a discriminative cue, interference vs decay in adaptation aftereffect, and intermanual transfer were also examined. The results reveal a capacity for multiple concurrent visual–motor mappings, possibly controlled by a parametric process near the motor output stage of processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
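The rotated mapping used here is simply a fixed rotation applied to the hand-to-cursor transformation, and full adaptation can be described as the controller pre-rotating its commands by the inverse angle. The snippet below shows only that geometry; the 108° value comes from the abstract, while treating the "normal" mapping as a 0° rotation and the compensation step are illustrative assumptions.

```python
import numpy as np

def rotate(vec, angle_deg):
    """Apply a 2-D rotation to a hand displacement to get the cursor displacement."""
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return rot @ np.asarray(vec, dtype=float)

hand_move = np.array([1.0, 0.0])             # intended rightward hand movement

cursor_normal  = rotate(hand_move, 0.0)      # "normal" mapping treated as identity here
cursor_rotated = rotate(hand_move, 108.0)    # rotated mapping used in the experiments

# Full adaptation to the rotated mapping amounts to pre-rotating the command
# by -108 degrees, so the cursor again moves in the intended direction.
compensated = rotate(rotate(hand_move, -108.0), 108.0)
print(cursor_normal, cursor_rotated, compensated.round(6))
```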

17.
This study investigates how mechanisms for amplifying 2-D motion contrast influence the assignment of 3-D depth values. The authors found that the direction of movement of a random-dot conveyor belt strongly inclined observers to report that the front surface of a superimposed, transparent, rotating, random-dot sphere moved in a direction opposite to the belt. This motion-contrast effect was direction selective and demonstrated substantial spatial integration. Varying the stereo depth of the belt did not compromise the main effect, precluding a mechanical interpretation (the sphere rolling on the belt). Varying the speed of the surfaces of the sphere also did not greatly affect the interpretation of rotation direction. These results suggest that 2-D center-surround interactions influence 3-D depth assignment by differentially modulating the strength of response to the moving surfaces of an object (their prominence) without affecting featural specificity. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Previous studies have shown that directionally selective (DS) retinal ganglion cells can not only discriminate the direction of a moving object but can also discriminate the sequence of two flashes of light at neighboring locations in the visual field: that is, the cells elicit a DS response to both real and apparent motion. This study examines whether a DS response can be elicited in DS ganglion cells by simply stimulating two neighboring areas of the retina with high external K+. Extracellular recordings were made from ON-OFF DS ganglion cells in superfused rabbit retinas, and the responses of these cells to focal applications of 100 mM KCl to the vitreal surface of the retina were measured. All cells produced a burst of spikes (typically lasting 50-200 ms) when a short pulse (10-50 ms duration) of KCl was ejected from the tip of a micropipette that was placed within the cell's receptive field. When KCl was ejected successively from the tips of two micropipettes that were aligned along the preferred-null axis of a cell, sequence-dependent responses were observed. The response to the second micropipette was suppressed when mimicking motion in the cell's null direction, whereas an enhancement during apparent motion in the opposite direction frequently occurred. Sequence discrimination in these cells was eliminated by the GABA antagonist picrotoxin and by the Ca(2+)-channel blocker omega-conotoxin MVIIC, two drugs that are known to abolish directional selectivity in these ganglion cells. The spatiotemporal properties of the K(+)-evoked sequence-dependent responses are described and compared with previous findings on apparent motion responses of ON-OFF DS ganglion cells.

19.
If subjects adapt to an unambiguous version of a Necker cube, a subsequent ambiguous cube tends to be seen in the opposing perspective. The present experiment shows that this adaptation effect depends on whether the adapting cube is attended. During the adaptation phase, 12 Ss saw 2 superimposed cubes of opposite perspective and different sizes and colors centered on fixation. Ss detected color changes in line segments that defined either the small or large cube. The perception of the subsequent ambiguous cube depended on which of the adapting cubes was task relevant. This attentional effect showed a strong asymmetry. When Ss attended to the small adapting cube, an aftereffect appropriate to the perspective of the cube was found, but when the large adapting cube was attended, no aftereffect was present. This asymmetry may relate to constraints on the spatial distribution of attention. (French abstract) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
When participants are asked to localize the 1st position of a moving stimulus, they mislocalize it in the direction of the movement (Fröhlich effect; F. W. Fröhlich, 1923). This mislocalization points to a delay in the temporal sensation of a moving stimulus. However, the delay is in contrast to findings indicating a faster processing of moving stimuli. This potential dissociation was studied in 6 experiments. After establishing the effect spatially, different temporal tasks were examined under otherwise identical conditions. Simple as well as choice reaction times were shorter to moving than to stationary stimuli. Other tasks (choice reaction to structural features, temporal order judgement, and synchronization), however, produced opposite effects. Results support a view that the output of early stimulus processing directly feeds into the motor system, whereas the processing stages used, for example, for localization judgements are based on later integrative mechanisms. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
