Similar articles
 20 similar articles found (search time: 15 ms)
1.
Observers briefly viewed random dots moving in a given direction and subsequently recalled that direction. When required to remember a single direction, observers performed accurately for memory intervals of up to 8 s; this high-fidelity memory for motion was maintained when observers executed a vigilance task during the memory interval. When observers tried to remember multiple directions of motion, performance deteriorated with increasing number of directions. Still, memory for multiple directions was unchanged over delays of up to 30 s. In a forced-choice experiment, observers viewed 2 successive animation sequences separated by a memory interval; for both sequences, dots moved in any direction within a limited bandwidth. Observers accurately judged which animation sequence was more coherent, even with memory intervals of 30 s. The findings are considered within the context of cognitive bias and memory for other aspects of perception.

2.
1. Our goal was to assess whether visual motion signals related to changes in image velocity contribute to pursuit eye movements. We recorded the smooth eye movements evoked by ramp target motion at constant speed. In two different kinds of stimuli, the onset of target motion provided either an abrupt, step change in target velocity or a smooth target acceleration that lasted 125 ms followed by prolonged target motion at constant velocity. We measured the eye acceleration in the first 100 ms of pursuit. Because of the 100-ms latency from the onset of visual stimuli to the onset of smooth eye movement, the eye acceleration in this 100-ms interval provides an estimate of the open-loop response of the visuomotor pathways that drive pursuit. 2. For steps of target velocity, eye acceleration in the first 100 ms of pursuit depended on the "motion onset delay," defined as the interval between the appearance of the target and the onset of motion. If the motion onset delay was > 100 ms, then the initial eye movement consisted of separable early and late phases of eye acceleration. The early phase dominated eye acceleration in the interval from 0 to 40 ms after pursuit onset and was relatively insensitive to image speed. The late phase dominated eye acceleration in the interval 40-100 ms after the onset of pursuit and had an amplitude that was proportional to image speed. If there was no delay between the appearance of the target and the onset of its motion, then the early component was not seen, and eye acceleration was related to target speed throughout the first 100 ms of pursuit. 3. For step changes of target velocity, the relationship between eye acceleration in the first 40 ms of pursuit and target velocity saturated at target speeds > 10 degrees /s. In contrast, the relationship was nearly linear when eye acceleration was measured in the interval 40-100 ms after the onset of pursuit. 
We suggest that the first 40 ms of pursuit are driven by a transient visual motion input that is related to the onset of target motion (motion onset transient component) and that the next 60 ms are driven by a sustained visual motion input (image velocity component). 4. When the target accelerated smoothly for 125 ms before moving at constant speed, the initiation of pursuit resembled that evoked by steps of target velocity. However, the latency of pursuit was consistently longer for smooth target accelerations than for steps of target velocity.(ABSTRACT TRUNCATED AT 400 WORDS)
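The two-component account proposed here (a speed-insensitive transient dominating the first 40 ms of pursuit, plus a sustained component proportional to image speed over 40-100 ms) can be sketched numerically. The functional forms, gains, and time constants below are illustrative assumptions, not fitted values from the study:

```python
import numpy as np

def pursuit_acceleration(t_ms, image_speed, transient_gain=80.0, sustained_gain=12.0):
    """Toy two-component model of initial pursuit eye acceleration (deg/s^2).

    transient: tied to motion onset, saturates with image speed, decays quickly
               (dominates roughly 0-40 ms after pursuit onset);
    sustained: proportional to image speed, builds up over tens of ms
               (dominates roughly 40-100 ms after pursuit onset).
    All parameter values are illustrative, not the paper's measurements.
    """
    transient = transient_gain * np.tanh(image_speed / 10.0) * np.exp(-t_ms / 20.0)
    sustained = sustained_gain * image_speed * (1.0 - np.exp(-t_ms / 40.0))
    return transient + sustained
```

With this form, doubling target speed barely changes the earliest response (the tanh saturates) but roughly doubles the later, sustained part, mirroring the saturating-versus-linear relationships reported for the two intervals.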

3.
We evaluated the image quality of three-dimensional (3D) spiral CT using a Somatom Plus CT scanner (Siemens, Germany). A T-shaped acrylate plastic model was made, and scanning was performed with a table speed of 5 mm/sec for 24 seconds. The thickness of the X-ray beam was 5 mm, at one second per rotation. Various images of the model were created using the shaded surface display (SSD) method, and the effect of threshold and rotation (spin and tilt) on the quality of the 3D images was studied. Increasing the threshold caused a rapid decrease in the diameter of the stick observed on the film; this phenomenon was particularly remarkable for the transverse stick and the junctional portion of the longitudinal stick. Changing the spin or tilt did not affect the diameter of the stick on the film. It was found that depth perception of the stick could be achieved with the gray-scale technique. We concluded that interpretation of 3D spiral CT images obtained with SSD requires caution, and that a diagnosis should not be made from this image alone.

4.
Describes further evidence for a new neural network theory of biological motion perception. The theory clarifies why parallel streams V1→V2, V1→MT, and V1→V2→MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. It also suggests that the static form system generates emergent boundary segmentations whose outputs are insensitive to direction-of-contrast and to direction-of-motion, whereas the motion form system generates emergent boundary segmentations whose outputs are insensitive to direction-of-contrast but sensitive to direction-of-motion. Data on short- and long-range apparent motion percepts are explained, including beta, split, gamma and reverse-contrast gamma, and delta motions, as well as visual inertia. Also included are the transition from group motion to element motion in response to a Ternus display as the interstimulus interval (ISI) decreases; group motion in response to a reverse-contrast Ternus display even at short ISIs; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, ISI, and motion threshold known as Korte's laws; dependence of motion strength on stimulus orientation and spatial frequency; short-range and long-range form–color interactions; and binocular interactions of flashes to different eyes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
A number of early findings supported the notion that there is a correspondence between eye movements and the visual imagery of dreams. Subsequent studies, however, yielded contradictory results. It is suggested that eye movements might occasionally be related to the visual imagery of dreams, but that the notion of a constant isomorphic relationship between the 2 is untenable. It is also noted that REM bursts might correspond in other ways to dream content, i.e., as an integral and parallel part of the total picture of Stage 1 REM activation. (36 ref.) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Experimental psychologists have recently amassed a great deal of evidence supporting the hypothesis that the visual system can select a particular location over other locations in the visual field for further analysis without overtly orienting the eyes to the selected location. At the same time, we know that during reading and scene perception, the eyes are overtly directed to new regions of the visual field every 200 to 300 msec on average. How are covert shifts of attention and overt movements of the eyes related during complex visual-cognitive tasks? The available evidence from studies of the perceptual span in reading suggests that attention is allocated asymmetrically around the fixation point, with more information acquired in the direction that the eyes are moving. Based on this evidence and evidence from explorations of eye movement control in reading, the author presents a tentative model of the relationship between attention and eye movements, called the Sequential Attention Model. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Eye or head rotation would influence perceived heading direction if it were coded by cells tuned only to retinal flow patterns that correspond to linear self-movement. We propose a model for heading detection based on motion templates that are also Gaussian-tuned to the amount of rotational flow. Such retinal flow templates allow explicit use of extra-retinal signals to create templates tuned to head-centric flow as seen by the stationary eye. Our model predicts an intermediate layer of 'eye velocity gain fields' in which 'rate-coded' eye velocity is multiplied with the responses of templates sensitive to specific retinal flow patterns. By combining the activities of one retinal flow template and many units with an eye velocity gain field, a new type of unit appears: its preferred retinal flow changes dynamically in accordance with the eye rotation velocity. This unit's activity thereby becomes approximately invariant to the amount of eye rotation. The units with eye velocity gain fields form the motion-analogue of the units with eye position gain fields found in area 7a, which, according to our general approach, are needed to transform position from retino-centric to head-centric coordinates. The rotation-tuned templates can also provide rate-coded visual estimates of eye rotation to allow a purely visual compensation for rotational flow. Our model is consistent with psychophysical data that indicate a role for extra-retinal as well as visual rotation signals in the correct perception of heading.
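The multiplicative stage the authors describe, where a retinal-flow template's response is gated by Gaussian tuning to rate-coded eye velocity, can be illustrated with a minimal sketch. The tuning width and parameter names are assumptions for illustration, not the authors' fitted model:

```python
import numpy as np

def gain_field_response(template_response, eye_velocity, preferred_velocity, sigma=5.0):
    """One 'eye velocity gain field' unit (illustrative sketch).

    The response of a template sensitive to a specific retinal flow pattern
    is multiplied by a Gaussian gain tuned to the current rate-coded eye
    velocity (deg/s). Pooling many such units, whose preferred retinal flow
    shifts with preferred eye velocity, yields a unit whose preferred retinal
    flow tracks the ongoing eye rotation, and whose activity is therefore
    approximately invariant to the amount of eye rotation.
    """
    gain = np.exp(-(eye_velocity - preferred_velocity) ** 2 / (2.0 * sigma ** 2))
    return template_response * gain
```

At the unit's preferred eye velocity the template response passes through unchanged; away from it, the same retinal flow pattern is progressively suppressed.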

8.
We measured perceived velocity as a function of contrast for luminance and isoluminant sinusoidal gratings, luminance and isoluminant plaids, and second-order, amplitude-modulated, drift-balanced stimuli. For all types of stimuli perceived velocity was contrast-invariant for fast moving patterns at or above 4 deg/sec. For slowly moving stimuli the log of perceived velocity was a linear function of the log of the contrast. The slope of this perceived velocity-vs-contrast line (velocity gain) was relatively shallow for luminance gratings and luminance plaids, but was steep for isoluminant gratings and isoluminant plaids, as well as for drift-balanced stimuli. Independent variation of spatial and temporal frequency showed that these variables, and not velocity alone, determine the velocity gain. Overall, the results indicate that slow moving stimuli defined by chromaticity or by second-order statistics are processed in a different manner from luminance defined stimuli. We propose that there are a number of independent mechanisms processing motion targets and it is the interplay of these mechanisms that is responsible for the final percept.
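The "velocity gain" measure used here, the slope of log perceived velocity against log contrast, can be estimated with a simple log-log regression. The helper name and the example data are hypothetical, for illustration only:

```python
import numpy as np

def velocity_gain(contrasts, perceived_speeds):
    """Slope of log perceived velocity vs. log contrast ('velocity gain').

    A slope near zero means the percept is contrast-invariant (as reported
    for fast-moving luminance patterns); a steep slope means perceived speed
    grows strongly with contrast (as reported for slowly moving isoluminant
    and drift-balanced stimuli). Illustrative helper, not the paper's code.
    """
    log_c = np.log10(np.asarray(contrasts, dtype=float))
    log_v = np.log10(np.asarray(perceived_speeds, dtype=float))
    slope, _intercept = np.polyfit(log_c, log_v, 1)
    return slope
```

For example, hypothetical data in which perceived speed grows as the square root of contrast yield a velocity gain of 0.5.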

9.
Studied visual masking and visual integration across saccadic eye movements in 4 experiments. In a 5th experiment, 4 randomly chosen dots from a 3 × 5 dot matrix were presented in 1 fixation, and 4 different dots from the matrix were presented in a 2nd fixation. Ss reported the location of the missing dot. When the 1st display was presented just before the saccade (as in Exps I–III), Ss accurately specified the missing dot location when the dots were presented to the same region of the retina but not when they were presented in the same place in space. When the 1st display was presented well before the saccade (as in Exp IV), Ss performed poorly regardless of retinal or spatial overlap. Results indicate the existence of a short-lived retinotopic visual persistence but provide no support for a spatiotopic visual persistence capable of fusing the contents of successive fixations. It is concluded that transsaccadic integration depends instead on an abstract memory that accumulates position and identity information about the contents of successive fixations. Results are discussed in relation to the work by M. L. Davidson et al (see record 1974-10245-001). (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
A line, presented instantaneously, is perceived to be drawn from one end when a dot is flashed at that end prior to the presentation of the line. Although this phenomenon, called illusory line motion, has been attributed to accelerated processing at the locus of attention, preattentive (stimulus-driven) motion mechanisms might also contribute to the line-motion sensation. We tested this possibility in an odd-target-search task. The stimulus display consisted of two, four, or eight pairs of dots and lines. All lines were presented on the same side of the dots (eg right), except for the target line, which was presented on the opposite side (left). Subjects were asked to report the presence or absence of the target, which was presented in half of the trials. Low error rates for target detection (about 10%) even when the display consisted of eight dot-line pairs (ie display size was eight) indicated that illusory line motion could be perceived simultaneously at many locations. The interstimulus interval (ISI) between the dots and lines (0-2176 ms) and the contrast polarity (both dots and lines were brighter than the background, or dots were darker and lines were brighter) were also manipulated. When an ISI of a few hundred milliseconds was inserted, target detection was nearly impossible with larger display sizes. When the contrast polarity was changed, the target-detection performance was impaired significantly, even with no ISI. Moreover, it was found that the effects of display size, ISI, and contrast polarity were comparable in searches for a two-dot apparent-motion target. These results support the idea that preattentive, apparent-motion mechanisms, as well as attentional mechanisms, contribute to illusory line motion.

11.
We examined two ways in which the neural control system for eye-head saccades constrains the motion of the eye in the head. The first constraint involves Listing's law, which holds ocular torsion at zero during head-fixed saccades. During eye-head saccades, does this law govern the eye's motion in space or in the head? Our subjects, instructed to saccade between space-fixed targets with the head held still in different positions, systematically violated Listing's law of the eye in space in a way that approximately, but not perfectly, preserved Listing's law of the eye in head. This finding implies that the brain does not compute desired eye position based on the desired gaze direction alone but also considers head position. The second constraint we studied was saturation, the process where desired-eye-position commands in the brain are "clipped" to keep them within an effective oculomotor range (EOMR), which is smaller than the mechanical range of eye motion. We studied the adaptability of the EOMR by asking subjects to make head-only saccades. As predicted by current eye-head models, subjects failed to hold their eyes still in their orbits. Unexpectedly, though, the range of eye-in-head motion in the horizontal-vertical plane was on average 31% smaller in area than during normal eye-head saccades, suggesting that the EOMR had been reduced by effort of will. Larger reductions were possible with altered visual input: when subjects donned pinhole glasses, the EOMR immediately shrank by 80%. But even with its reduced EOMR, the eye still moved into the "blind" region beyond the pinhole aperture during eye-head saccades. Then, as the head movement brought the saccade target toward the pinhole, the eyes reversed their motion, anticipating or roughly matching the target's motion even though it was still outside the pinhole and therefore invisible. This finding shows that the backward rotation of the eye is timed by internal computations, not by vision. 
When subjects wore slit glasses, their EOMRs shrank mostly in the direction perpendicular to the slit, showing that altered vision can change the shape as well as the size of the EOMR. A recent, three-dimensional model of eye-head coordination can explain all these findings if we add to it a mechanism for adjusting the EOMR.
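The saturation step described in this abstract, where desired-eye-position commands are "clipped" to keep them within the effective oculomotor range (EOMR), can be sketched as follows. The elliptical shape and its radii are illustrative assumptions, not the paper's measured EOMR:

```python
def clip_to_eomr(desired_h, desired_v, range_h=25.0, range_v=20.0):
    """Saturate a desired eye-in-head position command (degrees) to an
    elliptical effective oculomotor range (EOMR).

    Commands inside the range pass through; commands outside are scaled
    radially back onto the boundary. The ellipse and the default radii are
    assumptions for illustration; per the paper, the EOMR itself is
    adjustable (e.g. it shrinks with pinhole or slit glasses).
    """
    r = (desired_h / range_h) ** 2 + (desired_v / range_v) ** 2
    if r <= 1.0:
        return desired_h, desired_v
    scale = r ** -0.5
    return desired_h * scale, desired_v * scale
```

Shrinking `range_h` and `range_v` mimics the reduced EOMR reported with pinhole glasses; the slit-glasses result corresponds to shrinking only one radius, changing the range's shape.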

12.
13.
The effects of foveal and peripheral visual, as well as vestibular, cues on the performance and control behaviour of subjects in two different roll control tasks were studied in a moving base flight simulator with low noise motion characteristics. Two different roll control tasks were used, one being a following task (or compensatory tracking task) where a displayed random signal was to be tracked, the other being a disturbance task in which a random signal perturbed the controlled system and the roll angle was to be kept at zero. Consistent improvement in controller performance was found after adding visual peripheral or vestibular (motion) cues to the basic configuration consisting of a central CRT display. Control behaviour, as expressed by controller transfer functions, was also markedly influenced by the addition of these extra motion cues, the changes in control behaviour being dependent on the type of control task. Some possible causes for this dependence are discussed.

14.
(This reprinted article originally appeared in Psychological Review, 1954, Vol 61, 304–314. The following abstract of the original article appeared in PA, Vol 29:5103.) The question of movement involves at least 3 closely related questions: How do we see the motion of an object? How do we see the stability of the environment? How do we perceive ourselves as moving in a stable environment? The author draws together the experimental evidence on the 3 questions and draws out its implications, including a hypothesis for research. The article concludes with a discussion of the requirements for a psychophysics of kinetic impressions. 19 references. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
An analysis of monkey eye movements in classic conjunction and feature search tasks was made. The task was to find and fixate a target in an array of stimuli. Saccades targeted stimuli accurately (red and green bars, 1.25 x 0.25 degrees), landing most of the time within 1.0 degree of the stimulus center and rarely in blank areas far from any stimulus. Monkeys used target color, but not orientation, to selectively guide search. Saccades moved the point of fixation on the average just beyond the area that could be examined by focal attentive mechanisms during the current fixation, as described in a previous paper (Motter BC, Belky EJ. The zone of focal attention during active visual search. Vis Res 1998;38:1007-22). This distance scales with the density of relevant stimuli in the scene. The saccade targeting data suggest that the locations of items of a particular color, but apparently not of a particular orientation, are available outside the region of focal attention. Color feature selection can apparently block the distracting effects of color-unique distractors during search.

16.
Subjects were required to execute saccadic eye movements in the horizontal plane passing through primary gaze. During the saccades, visual images were projected onto a screen which subtended 40 degrees horizontally and 26 degrees vertically and was centered on primary gaze. Content, contrast, and intensity of the stimulus patterns and the level of illumination of the laboratory background were manipulated to maximise pattern recognition. Little or no detail of the projected images could be discerned under any conditions. Only horizontal laminations were perceived, as blurs of appropriate colour. It is concluded that there is no useful perception of the everyday environment during saccades.

17.
The ability to generate voluntary pursuit eye movements in the absence of retinal-contour motion cues was assessed on the basis of observers' perceptions of depth and motion when they viewed dynamic visual noise with a filter over one eye. The results indicated that the depth-movement phenomenon yielded robust pursuit with the velocity an inverse function of filter density. These data suggest that retinal-contour motion cues are not necessary and that perceived motion is sufficient to drive pursuit.

18.
We measured motion-detection and motion-discrimination performance for different directions of motion, using stochastic motion sequences. Random-dot cinematograms containing 200 dots in a circular aperture were used as stimuli in a two-interval forced-choice procedure. In the motion-detection experiment, observers judged which of two intervals contained weak coherent motion, the other interval containing random motion only. In the direction-discrimination experiment, observers viewed a standard direction of motion followed by comparison motion in a slightly different direction. Observers indicated whether the comparison was clockwise or counterclockwise, relative to the standard. Twelve directions of motion were tested in the detection task and five standard directions (three cardinal directions and two oblique directions) in the discrimination task. Detection thresholds were invariant with direction of motion, but direction-discrimination thresholds were significantly higher for motion in oblique directions, even at low coherence levels. Results from control conditions ruled out monitor artifacts and indicate that the oblique effect is relative to retinal coordinates. These results have broad implications for computational and physiological models of motion perception.
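A stochastic motion sequence of the kind described, in which a coherent fraction of dots steps in a common direction while the remainder step in random directions, can be sketched as follows. This is an illustrative reconstruction, not the authors' stimulus code; the step size and function name are assumptions:

```python
import numpy as np

def dot_displacements(n_dots, coherence, direction_deg, step=0.1, rng=None):
    """Per-dot (dx, dy) displacements for one frame of a random-dot
    cinematogram: a 'coherence' fraction of the dots steps in
    direction_deg, the rest step in uniformly random directions.
    Step size (deg) and parameterization are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_coh = int(round(coherence * n_dots))
    angles = np.empty(n_dots)
    angles[:n_coh] = np.deg2rad(direction_deg)
    angles[n_coh:] = rng.uniform(0.0, 2.0 * np.pi, n_dots - n_coh)
    return step * np.column_stack((np.cos(angles), np.sin(angles)))
```

Setting `coherence=0.0` gives the random-motion interval of the detection task; a small nonzero value gives the weak coherent motion the observer must detect.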

19.
Examined the effects of spatially directed attention on the temporal characteristics of information transmission in the visual system. A 2-stimulus, 2-flash, long-range motion display was used. The results showed that attending to 1 of the 2 stimuli altered the perceived direction of motion, the pattern of motion and no-motion responses, and ratings of motion quality. The results were compared with predictions derived from 2 conceptual frameworks of attention: a temporal-profile model and an additive account. The results support the temporal-profile model. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Where do we perceive an object to be when it is moving? Nijhawan [1] has reported that if a stationary test pattern is briefly flashed in spatial alignment with a moving one, the moving element appears displaced in the direction in which it is moving. Nijhawan postulates that this may be the result of a mechanism that predicts the future position of the moving element, compensating for the fact that the element will have changed position between the time at which the light left it and the time at which the observer becomes aware of it (a result of the finite time taken for neural transmission). There is an alternative explanation of this effect, however. Changes in the stimulus presentation could affect perceptual latency [2], and therefore the perceived position of a moving stimulus (as suggested for the Pulfrich pendulum effect [3,4]). In other words, if the flashed probe of the Nijhawan demonstration takes longer to reach perceptual awareness than the moving stimulus, the latter will appear to be ahead of the probe. Here, I demonstrate an alternative way of testing this hypothesis. When an illusory movement is induced (via the motion aftereffect) within a stationary pattern, it can be shown that this also produces a change in its perceived spatial position. As the pattern is stationary, this result cannot be accounted for by the notion of perceptual lags.
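Under the differential-latency alternative discussed here, the moving element's apparent spatial lead over the flash is simply its speed multiplied by the latency difference. A minimal sketch (the numbers in the example are illustrative, not measured values):

```python
def flash_lag_offset(speed_deg_per_s, latency_diff_ms):
    """Spatial offset (deg) predicted by the differential-latency account:
    if the flashed probe reaches awareness latency_diff_ms later than the
    moving stimulus, the mover appears ahead by speed * delta_t."""
    return speed_deg_per_s * (latency_diff_ms / 1000.0)
```

For instance, a stimulus moving at 10 deg/s with an 80 ms latency difference would appear displaced by 0.8 deg; the motion-aftereffect demonstration in this article is designed to rule out exactly this kind of latency explanation, since a stationary pattern has no speed for a lag to act on.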
