Similar articles (20 results)
1.
Where do we perceive an object to be when it is moving? Nijhawan [1] has reported that if a stationary test pattern is briefly flashed in spatial alignment with a moving one, the moving element actually appears displaced in the direction in which it is moving. Nijhawan postulates that this may be the result of a mechanism that predicts the future position of the moving element so as to compensate for the fact that the element will have moved position from the time at which the light left it to the time at which the observer becomes aware of it (as a result of the finite time taken for neural transmission). There is an alternative explanation of this effect, however. Changes in the stimulus presentation could affect perceptual latency [2], and therefore the perceived position of a stimulus in motion (as suggested for the Pulfrich pendulum effect [3, 4]). In other words, if the flashed probe of the Nijhawan demonstration takes longer to reach perceptual awareness than the moving stimulus, the latter will appear to be ahead of the probe. Here, I demonstrate an alternative way of testing this hypothesis. When illusory movement is induced (via the motion aftereffect) within a stationary pattern, it can be shown that this also produces a change in its perceived spatial position. As the pattern is stationary, one cannot account for this result via the notion of perceptual lags.

2.
When a moving object abruptly disappears, this profoundly influences its localization by the visual system. In Experiment 1, 2 aligned objects moved across the screen, and 1 of them abruptly disappeared. Observers reported seeing the objects misaligned at the time of the offset, with the continuing object leading. Experiment 2 showed that the perceived forward displacement of the moving object depended on speed and that offsets were localized accurately. Two competing representations of position for moving objects are proposed: 1 based on a spatially extrapolated internal model, and the other based on transient signals elicited by sudden changes in the object trajectory that can correct the forward-shifted position. Experiment 3 measured forward displacements for moving objects that disappeared only for a short time or abruptly reduced contrast by various amounts. Manipulating the relative strength of the 2 position representations in this way resulted in intermediate positions being perceived, with weaker motion signals or stronger transients leading to less forward displacement. This 2-process mechanism is advantageous because it uses available information about object position to maximally reduce spatio-temporal localization errors.
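For illustration only (this is not the authors' quantitative model; the linear weighting rule and the parameter names are assumptions), the two-representation idea can be pictured as a weighted mix of an extrapolated position and a transient-corrected position, with the weight set by the relative strength of motion and transient signals:

    # Toy sketch of the proposed 2-process account (assumed linear weighting).
    # extrapolated: position predicted by the internal model (true offset + speed * lag)
    # transient:    position signalled by the offset transient (the true final position)
    # w:            relative strength of the motion signal vs. the transient, in [0, 1]
    def perceived_final_position(final_pos, speed, lag, w):
        extrapolated = final_pos + speed * lag   # forward-shifted representation
        transient = final_pos                    # transient-corrected representation
        return w * extrapolated + (1 - w) * transient

    # Abrupt offset (strong transient): small w -> little forward displacement.
    print(perceived_final_position(0.0, speed=10.0, lag=0.08, w=0.2))   # 0.16
    # Gradual contrast reduction (weak transient): large w -> larger displacement.
    print(perceived_final_position(0.0, speed=10.0, lag=0.08, w=0.8))   # 0.64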

3.
Visual information about time-to-collision between two objects.
Evaluated human observers' sensitivity to visual information specifying a moving object's future time of arrival at a designated position in the field of view in a forced-choice paradigm. A geometrical analysis demonstrated that information specifying a 1st-order temporal relationship (i.e., without taking changes in velocity into account) is available in the combination of the relative rate of dilation of the optical contour of the moving object and the relative rate of constriction of the optical gap separating the moving object from the target position. Observers were sensitive to information contained in the relative rate of constriction of the optical gap if no contour dilation component was present, and to the combination of information contained in the relative rates of dilation of the optical contour of the moving object and constriction of the optical gap if both were present, albeit with a differential weighting of the 2 components.
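For the special case in the abstract where no contour dilation component is present, the 1st-order time of arrival reduces to the familiar tau ratio of the optical gap, as in the minimal Python sketch below (the function name, small-angle geometry, and constant closing rate are assumptions for illustration; the paper's full account additionally weights the contour-dilation term):

    # 1st-order (constant-velocity) time to arrival from the optical gap alone.
    # gap_angle: visual angle separating the moving object from the target position
    # gap_rate:  its rate of change (negative while the gap is constricting)
    def first_order_time_to_arrival(gap_angle, gap_rate):
        if gap_rate >= 0:
            raise ValueError("gap must be constricting for an arrival to be specified")
        return gap_angle / abs(gap_rate)

    # Example: a 10 deg gap closing at 4 deg/s -> 2.5 s, exact only while the
    # closing rate stays constant (the 1st-order assumption).
    print(first_order_time_to_arrival(10.0, -4.0))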

4.
A sound presented in close temporal proximity to a visual stimulus can alter the perceived temporal dimensions of the visual stimulus (temporal ventriloquism). In this article, the authors demonstrate temporal ventriloquism in the flash-lag effect (FLE), a visual illusion in which a flash appears to lag relative to a moving object. In Experiment 1, the magnitude and the variability of the FLE were reduced, relative to a silent condition, when a noise burst was synchronized with the flash. In Experiment 2, the sound was presented before, at, or after the flash (± ~100 ms), and the size of the FLE varied linearly with the delay of the sound. These findings demonstrate that an isolated sound can sharpen the temporal boundaries of a flash and attract its temporal occurrence.

5.
When a visual stimulus is flashed at a given location the moment a second moving stimulus arrives at the same location, observers report the flashed stimulus as spatially lagging behind the moving stimulus (the flash-lag effect). The authors investigated whether the global configuration (perceptual organization) of the moving stimulus influences the magnitude of the flash-lag effect. The results indicate that a flash presented near the leading portion of a moving stimulus lags significantly more than a flash presented near the trailing portion. This result also holds for objects consisting of several elements that group to form a unitary percept of an object in motion. The present study demonstrates a novel interaction between the global configuration of moving objects and the representation of their spatial position and may provide a new and useful tool for the study of perceptual organization.

6.
When moving through cluttered environments we use different forms of the same source of information to avoid stationary and moving objects. A stationary obstacle can be avoided by looking at it, registering the differential parallactic displacements on the retina around it during pursuit fixation, and then acting on that information. Such information also specifies one's general heading. A moving obstacle can be avoided by looking at it, registering the displacements reflecting constancy or change in one's gaze-movement angle, and then acting on that information. Such information, however, does not generally specify one's heading. Passing in front of a moving object entails retrograde motion of objects in the deep background; collisions entail the lamellar pattern of optical flow; and passing behind entails more nearly uniform flow against one's direction of motion. Accuracy in the laboratory compares favorably with that of real-world necessities.

7.
Humans see whole objects from input fragmented in space and time, yet spatiotemporal object perception is poorly understood. The authors propose the theory of spatiotemporal relatability (STR), which describes the visual information and processes that allow visible fragments revealed at different times and places, due to motion and occlusion, to be assembled into unitary perceived objects. They present a formalization of STR that specifies spatial and temporal relations for object formation. Predictions from the theory regarding conditions that lead to unit formation were tested and confirmed in experiments with dynamic and static, occluded and illusory objects. Moreover, the results support the identity hypothesis of a common process for amodal and modal contour interpolation and provide new evidence regarding the relative efficiency of static and dynamic object formation. STR postulates a mental representation, the dynamic visual icon, that briefly maintains shapes and updates positions of occluded fragments to connect them with visible regions. The theory offers a unified account of interpolation processes for static, dynamic, occluded, and illusory objects.

8.
This study describes the discharges of central units in the medulla of the goldfish, Carassius auratus, to hydrodynamic stimuli received by the lateral line. We stimulated the animal with a small object moving in the water and recorded activity of 85 medullary lateral line units in response to different motion directions and to various object distances, velocities, accelerations and sizes. All but one unit increased discharge rate when the moving object passed the fish laterally. Five response types were distinguished based on temporal patterns of unit responses. Ten units were recorded which encoded motion direction by different temporal discharge patterns. In general, discharge rates decreased when object distance was increased and when object speed was decreased. When object size was decreased, discharge rates decreased systematically in one group of units, but they were comparable for all but the smallest object tested in a second group of units. Units responded about equally well whether an object was moved at a constant velocity or was accelerated when it passed the fish. The data indicate that medullary lateral line units in the goldfish can encode motion direction but are not tuned to other aspects of an object moving in the water. The functional properties of units in the medulla of goldfish are similar to those reported for medullary units in the catfish Ancistrus sp., suggesting that the central mechanisms for processing complex hydrodynamic stimuli may be quite similar in fish species that occupy habitats with different hydrodynamic conditions.

9.
The sequence of images generated by motion between observer and object specifies a spatiotemporal signature for that object. Evidence is presented that such spatiotemporal signatures are used in object recognition. Subjects learned novel, three-dimensional, rotating objects from image sequences in a continuous recognition task. During learning, the temporal order of images of a given object was constant. During testing, the order of images in each sequence was reversed, relative to its order during learning. This image sequence reversal produced significant reaction time increases and recognition rate decreases. Results are interpreted in terms of object-specific spatiotemporal signatures.

10.
Four experiments are reported that investigated how the perceived coplanarity (in the third dimension) of a moving target with a frame of reference affects the perceived path of that target. When a target dot and small moving frame appeared coplanar, the dot's perceived trajectory was governed entirely by its changing position relative to the moving frame. However, when the target and a large stationary frame appeared in a different plane than the small moving frame, the motion of the dot was seen independently of the moving frame. The results support a belongingness principle of motion perception: The displacement of an object relative to a frame of reference to which it belongs governs its perceived path of motion.

11.
An observer moving forwards through the environment experiences a radial pattern of image motion on each retina. Such patterns of optic flow are a potential source of information about the observer's rate of progress, direction of heading and time to reach objects that lie ahead. As the viewing distance changes there must be changes in the vergence angle between the two eyes so that both foveas remain aligned on the object of interest in the scene ahead. Here we show that radial optic flow can elicit appropriately directed (horizontal) vergence eye movements with ultra-short latencies (roughly 80 ms) in human subjects. Centrifugal flow, signalling forwards motion, increases the vergence angle, whereas centripetal flow decreases the vergence angle. These vergence eye movements are still evident when the observer's view of the flow pattern is restricted to the temporal hemifield of one eye, indicating that these responses do not result from anisotropies in motion processing but from a mechanism that senses the radial pattern of flow. We hypothesize that flow-induced vergence is but one of a family of rapid ocular reflexes, mediated by the medial superior temporal cortex, compensating for translational disturbance of the observer.
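As a rough illustration of the stimulus and the sign of the reported response (the grid, gain, and function names are assumptions; this is not the authors' analysis), the sketch below builds a radial flow field around a focus of expansion and maps its direction onto the expected vergence change (centrifugal flow increases vergence, centripetal flow decreases it):

    import numpy as np

    # Radial flow field on a grid of positions relative to the focus of expansion.
    # expansion_rate > 0: centrifugal (forward motion); < 0: centripetal (backward).
    def radial_flow(grid_size=21, expansion_rate=1.0):
        xs = np.linspace(-1.0, 1.0, grid_size)
        x, y = np.meshgrid(xs, xs)
        return x, y, expansion_rate * x, expansion_rate * y

    def vergence_sign(x, y, u, v):
        radial = u * x + v * y               # radial component of the flow
        return int(np.sign(radial.mean()))   # +1: vergence should increase; -1: decrease

    x, y, u, v = radial_flow(expansion_rate=1.0)   # centrifugal pattern
    print(vergence_sign(x, y, u, v))               # +1, appropriate for forward motion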

12.
The judged final position of a moving stimulus has been suggested to be shifted in the direction of motion because of mental extrapolation (representational momentum). However, a perceptual explanation is possible: The eyes overshoot the final position of the target, and because of a foveal bias, the judged position is shifted in the direction of motion. To test this hypothesis, the authors replicated previous studies, but instead of having participants indicate where the target vanished, the authors probed participants' perceptual focus by presenting probe stimuli close to the vanishing point. Identification of probes in the direction of target motion was more accurate immediately after target offset than it was with a delay. Another experiment demonstrated that judgments of the final position of a moving target are affected by whether the eyes maintain fixation or follow the target. The results are more consistent with a perceptual explanation than with a memory account.

13.
Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location or identity of the visual object. The experiments also show that the effect is not due to general alerting (because it does not occur with visual cues), nor is it due to top-down cuing of the visual change (because it still occurs when the pip is synchronized with distractors on the majority of trials). Instead, the authors propose that the temporal information of the auditory signal is integrated with the visual signal, generating a relatively salient emergent feature that automatically draws attention. Phenomenally, the synchronous pip makes the visual object pop out from its complex environment, providing a direct demonstration of spatially nonspecific sounds affecting competition in spatial visual processing.

14.
A method is developed which makes it possible to scan and reconstruct an object with cone beam x-rays in a spiral scan path with area detectors much shorter than the length of the object. The method is mathematically exact. If only a region of interest of the object is to be imaged, a top circle scan at the top level of the region of interest and a bottom circle scan at the bottom level of the region of interest are added. The height of the detector is required to cover only the distance between adjacent turns in the spiral projected at the detector. To reconstruct the object, the Radon transform for each plane intersecting the object is computed from the totality of the cone beam data. This is achieved by suitably combining the cone beam data taken at different source positions on the scan path; the angular range of the cone beam data required at each source position can be determined easily with a mask which is the spiral scan path projected on the detector from the current source position. The spiral scan algorithm has been successfully validated with simulated cone beam data.
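For concreteness, here is a minimal sketch of the scan geometry described (source positions only; radius, pitch, and the z range are assumed parameter names, and the reconstruction itself, which combines cone-beam data into Radon-plane values using the projected-spiral mask, is not reproduced):

    import numpy as np

    # Illustrative scan path: a helix covering the region of interest plus a
    # top and a bottom circle at its boundaries, as the method requires.
    def spiral_plus_circles(radius, pitch, z_bottom, z_top, samples_per_turn=256):
        def circle(z):
            a = np.linspace(0.0, 2 * np.pi, samples_per_turn, endpoint=False)
            return np.stack([radius * np.cos(a), radius * np.sin(a),
                             np.full_like(a, z)], axis=1)

        n_turns = (z_top - z_bottom) / pitch
        t = np.linspace(0.0, 2 * np.pi * n_turns, int(samples_per_turn * n_turns))
        helix = np.stack([radius * np.cos(t), radius * np.sin(t),
                          z_bottom + pitch * t / (2 * np.pi)], axis=1)
        # The detector only has to cover one helix pitch as projected onto it,
        # which is what keeps the required detector height small.
        return np.concatenate([circle(z_bottom), helix, circle(z_top)], axis=0)

    path = spiral_plus_circles(radius=500.0, pitch=40.0, z_bottom=0.0, z_top=200.0)
    print(path.shape)   # (N, 3) array of source positions (x, y, z)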

15.
A flashed stimulus is perceived as spatially lagging behind a moving stimulus when they are spatially aligned. When several elements are perceptually grouped into a unitary moving object, a flash presented at the leading edge of the moving stimulus suffers a larger spatial lag than a flash presented at the trailing edge (K. Watanabe, R. Nijhawan, B. Khurana, & S. Shimojo, 2001). By manipulating the flash onset relative to the motion onset, the present study investigated the order of perceptual operations of visual motion grouping and relative visual localization. It was found that the asymmetric mislocalization was observed irrespective of physical and/or perceptual temporal order between the motion and flash onsets. Thus, grouping by motion must be completed to define the leading-trailing relation in a moving object before the visual system explicitly represents the relative positions of moving and flashed stimuli.

16.
To test the effects of complex visual motion stimuli on the responses of single neurons in the middle temporal visual area (MT) and the medial superior temporal area (MST) of the macaque monkey, we compared the response elicited by one object in motion through the receptive field with the response to two simultaneously presented objects moving in different directions through the receptive field. There was an increased response to a stimulus moving in a direction other than the best direction when it was paired with a stimulus moving in the best direction. This increase was significant for all directions of motion of the non-best stimulus, and the magnitude of the difference increased as the difference in the directions of the two stimuli increased. Similarly, there was a decreased response to a stimulus moving in a non-null direction when it was paired with a stimulus moving in the null direction. This decreased response in MT did not reach significance unless the second stimulus added to the null direction moved in the best direction, whereas in MST the decrease was significant when the second stimulus direction differed from the null by 90 degrees or more. Further analysis showed that the two-object responses were better predicted by taking the averaged response of the neuron to the two single-object stimuli than by summation, multiplication, or vector addition of the responses to each of the two single-object stimuli. Neurons in MST showed larger modulations than did neurons in MT with stimuli moving in both the best direction and in the null direction, and the average better predicted the two-object response in area MST than in area MT. This indicates that areas MT and MST probably use similar integrative mechanisms to create their responses to complex moving visual stimuli, but that this mechanism is further refined in MST. These experiments show that neurons in both MT and MST integrate the motion of all directions in their responses to complex moving stimuli. These results with the motion of objects were in sound agreement with those previously reported with the use of random dot patterns for the study of transparent motion in MT and suggest that these neurons use similar computational mechanisms in the processing of object and global motion stimuli.
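The model comparison at the end of the abstract can be written down compactly; the sketch below (names and the particular vector-addition formulation are assumptions for illustration) computes the four candidate predictions for a two-object response from the two single-object responses, averaging being the rule the data favoured:

    import numpy as np

    # Candidate predictions for the response to two simultaneously presented
    # moving objects, given responses r1 and r2 to each object presented alone.
    def two_object_predictions(r1, r2, dir1_deg=0.0, dir2_deg=90.0):
        # One common formulation of vector addition: each response is a vector
        # along its motion direction; the prediction is the length of their sum.
        v1 = r1 * np.array([np.cos(np.radians(dir1_deg)), np.sin(np.radians(dir1_deg))])
        v2 = r2 * np.array([np.cos(np.radians(dir2_deg)), np.sin(np.radians(dir2_deg))])
        return {
            "average": (r1 + r2) / 2.0,   # the rule that best predicted MT/MST responses
            "sum": r1 + r2,
            "product": r1 * r2,
            "vector_addition": float(np.linalg.norm(v1 + v2)),
        }

    # e.g. 40 spikes/s to the best-direction object alone, 10 spikes/s to the other
    print(two_object_predictions(40.0, 10.0))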

17.
When participants are asked to localize the 1st position of a moving stimulus, they mislocalize it in the direction of the movement (Fröhlich effect; F. W. Fröhlich, 1923). This mislocalization points to a delay in the temporal sensation of a moving stimulus. However, the delay is in contrast to findings indicating a faster processing of moving stimuli. This potential dissociation was studied in 6 experiments. After establishing the effect spatially, different temporal tasks were examined under otherwise identical conditions. Simple as well as choice reaction times were shorter to moving than to stationary stimuli. Other tasks (choice reaction to structural features, temporal order judgement, and synchronization), however, produced opposite effects. Results support a view that the output of early stimulus processing directly feeds into the motor system, whereas the processing stages used, for example, for localization judgements are based on later integrative mechanisms.

18.
This review integrates and extends research on the nature and sources of changing perceptions of work in childhood and adolescence by (a) treating those perceptions as a form of social cognition, (b) considering 3 work settings (home, school, and paid work) and 3 aspects of understanding (of categories, procedures, and interconnections among forms of work), and (c) dealing with several correlates: age, gender, cohort, and social position. The review specifies changes and presents a general picture of change and variation based on the accessibility of information, the individual's ability to deal with that information, and the individual's interest. This picture, it is proposed, is extendable to changing perceptions of work at later ages and to content areas other than work.

19.
This paper presents a method for segmentation and tracking of cardiac structures in ultrasound image sequences. The developed algorithm is based on the active contour framework. This approach requires initial placement of the contour close to the desired position in the image, usually an object outline. Best contour shape and position are then calculated, assuming that at this configuration a global energy function, associated with a contour, attains its minimum. Active contours can be used for tracking by selecting a solution from a previous frame as an initial position in a present frame. Such an approach, however, fails for large displacements of the object of interest. This paper presents a technique that incorporates the information on pixel velocities (optical flow) into the estimate of initial contour to enable tracking of fast-moving objects. The algorithm was tested on several ultrasound image sequences, each covering one complete cardiac cycle. The contour successfully tracked boundaries of mitral valve leaflets, aortic root and endocardial borders of the left ventricle. The algorithm-generated outlines were compared against manual tracings by expert physicians. The automated method resulted in contours that were within the boundaries of intraobserver variability.
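A compressed sketch of one tracking step as described, with off-the-shelf routines standing in for the authors' implementation (OpenCV's Farneback optical flow and scikit-image's active_contour; the parameter values and the (row, col) contour convention of recent scikit-image versions are assumptions):

    import cv2
    import numpy as np
    from skimage.segmentation import active_contour

    # One tracking step: push the previous frame's contour along the dense optical
    # flow, then let the snake relax onto the object boundary in the new frame.
    def track_contour(prev_frame, next_frame, contour_rc):
        # Dense flow between consecutive grayscale uint8 frames.
        flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        rows = np.clip(contour_rc[:, 0].astype(int), 0, prev_frame.shape[0] - 1)
        cols = np.clip(contour_rc[:, 1].astype(int), 0, prev_frame.shape[1] - 1)
        # flow[..., 0] is the column (x) displacement, flow[..., 1] the row (y) one.
        shifted = contour_rc + np.stack([flow[rows, cols, 1],
                                         flow[rows, cols, 0]], axis=1)
        # Snake refinement; alpha/beta/gamma are placeholder smoothness settings.
        return active_contour(next_frame.astype(float), shifted,
                              alpha=0.015, beta=10.0, gamma=0.001)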

20.
Previous research indicates that, when shown a collision between a moving and a stationary object, 11-month-old infants believe that the size of the moving object affects how far the stationary object is displaced. The present experiments examined whether 6.5- and 5.5-month-old infants hold the same belief. The infants sat in front of a horizontal track; to the left of the track was an inclined ramp. A wheeled toy bug rested on the track at the bottom of the ramp. The infants were habituated to an event in which a medium-size cylinder rolled down the ramp and hit the bug, propelling it to the middle of the track. Next, the infants saw two test events in which novel cylinders propelled the bug to the end of the track. The two novel cylinders were identical to the habituation cylinder in material but not in size: one was larger (large-cylinder event) and one was smaller (small-cylinder event) than the habituation cylinder. The 6.5-month-old infants, and the 5.5-month-old female infants, looked reliably longer at the small- than at the large-cylinder event. These and control results indicated that the infants (a) believed that the size of the cylinder affected the length of the bug's trajectory and (b) used the habituation event to calibrate their predictions about the test events. Unlike the other infants, the 5.5-month-old male infants tended to look equally at the small- and large-cylinder events. Further results indicated that this negative finding was not due to the infants' (a) failure to remember how far the bug rolled in the habituation event or (b) inability to use the habituation event to calibrate predictions about novel test events. Together, the present results suggest the following conclusions. First, when shown a collision between a moving and a stationary object, infants aged 5.5-6.5 months (a) believe that there is a proportional relation between the size of the moving object and the distance traveled by the stationary object and (b) can engage in calibration-based reasoning about this size/distance relation. Second, female infants precede males by a few weeks in this development, for reasons that may be related to sex differences in the maturation of depth perception.
