Similar Literature
20 similar records found.
1.
This study investigated multisensory interactions in the perception of auditory and visual motion. When auditory and visual apparent motion streams are presented concurrently in opposite directions, participants often fail to discriminate the direction of motion of the auditory stream, whereas perception of the visual stream is unaffected by the direction of auditory motion (Experiment 1). This asymmetry persists even when the perceived quality of apparent motion is equated for the 2 modalities (Experiment 2). Subsequently, it was found that this visual modulation of auditory motion is caused by an illusory reversal in the perceived direction of sounds (Experiment 3). This "dynamic capture" effect occurs over and above ventriloquism among static events (Experiments 4 and 5), and it generalizes to continuous motion displays (Experiment 6). These data are discussed in light of related multisensory phenomena and their support for a "modality appropriateness" interpretation of multisensory integration in motion perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
A model of visual apparent motion is derived from 4 observations on path selection in ambiguous displays in which apparent motion of illuminated dots could, in principle, be perceived along many possible paths: (a) Whereas motion over each path is clearly visible when its stimulus is presented in isolation, motion is usually seen over only 1 path when 2 or more such stimuli are combined (competition). (b) Path selection is nearly independent of viewing distance (scale invariance). (c) At transition points between paths i and j (where apparent motion is equally likely to be perceived along i and j), the time t and distance d between successive points along the paths are described by a log linear d/t relationship. (d) When successive elements along a path differ in orientation or size, the perceived motion along this path is not necessarily weaker than motion along a path composed entirely of identical elements. The model is a form of strength theory in which the path with greatest strength becomes the dominant path. (27 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
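
A minimal sketch, in Python, of the "strength theory" selection rule summarized above; the strength function is a hypothetical placeholder (the paper's fitted form is not reproduced here), chosen only so that a log-linear d/t trade-off emerges at transition points where two paths are equally strong:

    import math

    def path_strength(d_deg, t_ms, a=1.0, b=0.01):
        # Hypothetical placeholder: strength falls with both inter-element
        # distance d and time t. At a transition (two paths equally strong),
        # log(d) + b*t is constant, i.e. a log-linear d/t relation.
        return a - math.log(d_deg) - b * t_ms

    def dominant_path(paths):
        # paths: {label: (distance in deg, inter-element time in ms)};
        # winner-take-all competition picks the strongest path.
        return max(paths, key=lambda k: path_strength(*paths[k]))

    print(dominant_path({"short-fast": (0.5, 40.0), "long-slow": (2.0, 120.0)}))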

3.
Human subjects can perceive global motion or motions in displays containing diverse local motions, implying representation of velocity at multiple scales. The phenomena of flexible global direction judgments, and especially of motion transparency, also raise the issue of whether the representation of velocity at any one scale is single-valued or multi-valued. A new performance-based measure of transparency confirms that the visual system represents directional information for each component of a transparent display. However, results with the locally paired random-dot display introduced by Qian et al. show that representations of multiple velocities do not coexist at the finest spatial scale of motion analysis. Functionally distinct scales of motion processing may be associated with (i) local motion detectors which show a strong winner-take-all interaction; (ii) spatial integration of local signals to disambiguate velocity; (iii) selection of reliable velocity signals as proposed in the model of Nowlan and Sejnowski; (iv) object-based or surface-based representations that are not necessarily organised in a fixed spatial matrix. These possibilities are discussed in relation to the neurobiological organisation of the visual motion pathway.

4.
Eight participants were presented with auditory or visual targets and then indicated each target's remembered position relative to their head eight seconds after actively moving their eyes, head, or body to pull apart the head, retinal, body, and external-space reference frames. Remembered target position was indicated by repositioning sounds or lights. Localization errors were related to head-on-body position, but not to eye-in-head or body-in-space position, for both auditory targets (0.023 dB/deg in the direction of head displacement) and visual targets (0.068 deg/deg in the direction opposite to head displacement). The results indicate that both auditory and visual localization use head-on-body information, suggesting a common coding into body coordinates--the only conversion that requires this information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
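
To make the per-degree figures above concrete, here is a minimal sketch (assumed analysis with hypothetical data, not the authors' code) of estimating a localization-error gain as the least-squares slope of signed error against head-on-body displacement:

    import numpy as np

    # Hypothetical data for one participant: head-on-body displacement (deg)
    # and signed error of the remembered target position (deg).
    head_on_body_deg = np.array([-30.0, -15.0, 0.0, 15.0, 30.0])
    localization_error_deg = np.array([-2.1, -1.0, 0.2, 0.9, 2.0])

    # Least-squares slope = error gain (deg of error per deg of head displacement).
    slope, intercept = np.polyfit(head_on_body_deg, localization_error_deg, 1)
    print(f"error gain = {slope:.3f} deg/deg")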

5.
In 3 experiments, the authors investigated the bidirectional coupling of perception and action in the context of object manipulations and motion perception. Participants prepared to grasp an X-shaped object along one of its 2 diagonals and to rotate it in a clockwise or a counterclockwise direction. Action execution had to be delayed until the appearance of a visual go signal, which induced an apparent rotational motion in either a clockwise or a counterclockwise direction. Stimulus detection was faster when the direction of the induced apparent motion was consistent with the direction of the concurrently intended manual object rotation. Responses to action-consistent motions were also faster when the participants prepared the manipulation actions but signaled their stimulus detections with another motor effector (i.e., with a foot response). Taken together, the present study demonstrates a motor-visual priming effect of prepared object manipulations on visual motion perception, indicating a bidirectional functional link between action and perception beyond object-related visuomotor associations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
A visual target (T2) containing either 1 or 2 letters, or a random 10-sided polygon, was presented after an auditory target (T1) at a stimulus onset asynchrony (SOA) of either 50, 150, 250, or 600 ms. Task 1 was a speeded pitch discrimination to the tone, and across experiments, T1 was either 1 of 2 tones (2-alternative discrimination [2AD]) or 1 of 4 tones (4-alternative discrimination [4AD]). Memory for the visual information decreased as SOA was reduced when a mask was used, but not when there was no mask. The effects of SOA were larger for the 4AD Task 1 than the 2AD Task 1. The results demonstrate cross-modal, dual-task interference on visual encoding and suggest central interference with the short-term consolidation of visual information in short-term memory. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Studies of visual apparent motion have relied on observers' subjective self-reports of experienced motion, for which there is no objective criterion of right or wrong. A new method of phase discrimination is reported that may offer an objective indicator of apparent motion. Ss discriminated the direction of an objective 75-ms phase shift, away from strict temporal alternation of 2 stimulus dots. Accuracy increased from 50% to 100% correct as rate of alternation and distance between the dots was decreased, in conformity with Korte's 3rd law of apparent motion. This and additional evidence suggests that phase discrimination may be mediated by asymmetries between the experienced strengths of leftward and rightward motion. Phase discrimination may also be adaptable to the study of apparent motion and related phenomena in other sensory modalities and other animal species. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
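
For reference, Korte's third law, stated schematically in textbook form (not the authors' notation): the larger the spatial separation d of the two alternating elements, the longer the timing needed for good apparent motion,

    \[ d_1 > d_2 \;\Rightarrow\; \mathrm{SOA}_{\mathrm{opt}}(d_1) > \mathrm{SOA}_{\mathrm{opt}}(d_2), \]

which is consistent with accuracy falling here as either the alternation rate or the dot separation was increased.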

8.
Perceived position depends on many factors, including motion present in a visual scene. Convincing evidence shows that high-level motion perception--which is driven by top-down processes such as attentional tracking or inferred motion--can influence the perceived position of an object. Is high-level motion sufficient to influence perceived position, and is attention to or awareness of motion direction necessary to displace objects' perceived positions? Consistent with previous reports, the first experiment revealed that the perception of motion, even when no physical motion was present, was sufficient to shift perceived position. A second experiment showed that when subjects were unable to identify the direction of a physically present motion stimulus, the apparent locations of other objects were still influenced. Thus, motion influences perceived position by at least two distinct processes. The first involves a passive, preattentive mechanism that does not depend on perceptual awareness; the second, a top-down process that depends on the perceptual awareness of motion direction. Each contributes to perceived position, but independently of the other. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
We examined two ways in which the neural control system for eye-head saccades constrains the motion of the eye in the head. The first constraint involves Listing's law, which holds ocular torsion at zero during head-fixed saccades. During eye-head saccades, does this law govern the eye's motion in space or in the head? Our subjects, instructed to saccade between space-fixed targets with the head held still in different positions, systematically violated Listing's law of the eye in space in a way that approximately, but not perfectly, preserved Listing's law of the eye in head. This finding implies that the brain does not compute desired eye position based on the desired gaze direction alone but also considers head position. The second constraint we studied was saturation, the process where desired-eye-position commands in the brain are "clipped" to keep them within an effective oculomotor range (EOMR), which is smaller than the mechanical range of eye motion. We studied the adaptability of the EOMR by asking subjects to make head-only saccades. As predicted by current eye-head models, subjects failed to hold their eyes still in their orbits. Unexpectedly, though, the range of eye-in-head motion in the horizontal-vertical plane was on average 31% smaller in area than during normal eye-head saccades, suggesting that the EOMR had been reduced by effort of will. Larger reductions were possible with altered visual input: when subjects donned pinhole glasses, the EOMR immediately shrank by 80%. But even with its reduced EOMR, the eye still moved into the "blind" region beyond the pinhole aperture during eye-head saccades. Then, as the head movement brought the saccade target toward the pinhole, the eyes reversed their motion, anticipating or roughly matching the target's motion even though it was still outside the pinhole and therefore invisible. This finding shows that the backward rotation of the eye is timed by internal computations, not by vision. When subjects wore slit glasses, their EOMRs shrank mostly in the direction perpendicular to the slit, showing that altered vision can change the shape as well as the size of the EOMR. A recent, three-dimensional model of eye-head coordination can explain all these findings if we add to it a mechanism for adjusting the EOMR.
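
For readers unfamiliar with the constraint, Listing's law in standard rotation-vector form (a textbook formulation, not this paper's notation): writing each eye orientation as the rotation vector that carries the eye there from primary position,

    \[ \mathbf{r} \;=\; \tan\!\Big(\tfrac{\theta}{2}\Big)\,\hat{\mathbf{n}}, \qquad \mathbf{r}\cdot\hat{\mathbf{g}}_{\mathrm{primary}} \;=\; 0, \]

where θ is the rotation angle, n̂ the rotation axis, and ĝ_primary the primary gaze direction (the normal to Listing's plane); confining r to this plane is what keeps ocular torsion at zero. The question posed above is whether, during eye-head saccades, this constraint is imposed on eye-in-space or on eye-in-head rotation vectors.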

10.
Extrastriate cortical area MT is thought to process behaviorally important visual motion signals. Psychophysical studies suggest that visual motion signals may be analyzed by multiple mechanisms, a "first-order" one based on luminance, and a "second-order" one based upon higher level cues (e.g. contrast, flicker). Second-order motion is visible to human observers, but should be invisible to first-order motion sensors. To learn if area MT is involved in the analysis of second-order motion, we measured responses to first- and second-order gratings of single neurons in area MT (and in one experiment, in area V1) in anesthetized, paralyzed macaque monkeys. For each neuron, we measured directional and spatio-temporal tuning with conventional first-order gratings and with second-order gratings created by spatial modulation of the flicker rate of a random texture. A minority of MT and V1 neurons exhibited significant selectivity for direction or orientation of second-order gratings. In nearly all cells, response to second-order motion was weaker than response to first-order motion. MT cells with significant selectivity for second-order motion tended to be more responsive and more sensitive to luminance contrast, but were in other respects similar to the remaining MT neurons; they did not appear to represent a distinct subpopulation. For those cells selective for second-order motion, we found a correlation between the preferred directions of first- and second-order motion, and weak correlations in preferred spatial frequency. These cells preferred lower temporal frequencies for second-order motion than for first-order motion. A small proportion of MT cells seemed to remain selective and responsive for second-order motion. None of our small sample of V1 cells did. Cells in this small population, but not others, may perform "form-cue invariant" motion processing (Albright, 1992).
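
As a rough illustration of why second-order motion should be invisible to purely luminance-based (first-order) sensors, the sketch below builds a generic contrast-modulated drifting grating on a static noise carrier; note that the study's second-order stimulus instead modulated the flicker rate of a random texture, so this is a simplified stand-in, not the authors' stimulus:

    import numpy as np

    x = np.linspace(0.0, 1.0, 256)      # space (normalized)
    t = np.linspace(0.0, 1.0, 64)       # time (normalized)
    X, T = np.meshgrid(x, t)

    carrier = np.random.choice([-1.0, 1.0], size=x.shape)         # static binary noise
    envelope = 0.5 * (1.0 + np.cos(2 * np.pi * (4 * X - 2 * T)))  # drifting contrast envelope
    stimulus = 0.5 + 0.5 * carrier[None, :] * envelope            # luminance in [0, 1]

    # The noise-averaged luminance is 0.5 everywhere, so a first-order
    # (luminance-based) motion sensor sees no consistent drift; only the
    # contrast envelope moves, which is what a second-order mechanism tracks.
    print(stimulus.shape, round(stimulus.mean(), 3))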

11.
The effects of frequency differences between the lead and lag stimuli on auditory apparent motion (AAM--the perception of continuous changes in the location of a sound image over time) were examined in two experiments. In experiment 1, three standard frequencies (500, 1000, and 5000 Hz) and three stimulus onset asynchronies (SOAs; 40, 60, and 100 ms) were tested. Both standard frequency and SOA were constant throughout a session. Eleven comparison frequencies were tested within each session, with the range dependent on the standard frequency. At standard frequencies of 500 and 1000 Hz, AAM was heard when the frequencies of the lead and lag stimuli were within 100 Hz of each other. At 5000 Hz, the range of frequencies producing AAM increased with SOA. In experiment 2, two standards (500 and 5000 Hz) were tested with a wider range of SOAs (10-210 ms) varied within a session, and a narrower range of comparison frequencies. Here, comparison frequency was constant throughout a session. At 500 Hz, the SOAs producing AAM did not depend on comparison frequency. At 5000 Hz, the SOAs producing AAM increased with comparison frequency, consistent with Korte's third law of visual apparent motion.

12.
Previous studies have generally considered heading perception to be a visual task. However, since judgments of heading direction are required only during self-motion, there are several other relevant senses which could provide supplementary and, in some cases, necessary information to make accurate and precise judgments of the direction of self-motion. We assessed the contributions of several of these senses using tasks chosen to reflect the reference system used by each sensory modality. Head-pointing and rod-pointing tasks were performed in which subjects aligned either the head or an unseen pointer with the direction of motion during whole body linear motion. Passive visual and vestibular stimulation was generated by accelerating subjects at sub- or supravestibular thresholds down a linear track. The motor-kinesthetic system was stimulated by having subjects actively walk along the track. A helmet-mounted optical system, fixed either on the cart used to provide passive visual or vestibular information or on the walker used in the active walking conditions, provided a stereoscopic display of an optical flow field. Subjects could be positioned at any orientation relative to the heading, and heading judgments were obtained using unimodal visual, vestibular, or walking cues, or combined visual-vestibular and visual-walking cues. Vision alone resulted in reasonably precise and accurate head-pointing judgments (0.3 degrees constant errors, 2.9 degrees variable errors), but not rod-pointing judgments (3.5 degrees constant errors, 5.9 degrees variable errors). Concordant visual-walking stimulation slightly decreased the variable errors and reduced constant pointing errors to close to zero, while head-pointing errors were unaffected.
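
For clarity about the two error measures quoted above, a minimal sketch with hypothetical numbers (these are the standard definitions assumed here, not code from the study): the constant error is the mean signed heading error, and the variable error is its standard deviation across repeated judgments.

    import numpy as np

    signed_errors_deg = np.array([0.5, -1.2, 2.0, 0.1, -0.8, 1.3])  # hypothetical trials

    constant_error = signed_errors_deg.mean()       # systematic bias (deg)
    variable_error = signed_errors_deg.std(ddof=1)  # trial-to-trial scatter (deg)
    print(f"constant error = {constant_error:.2f} deg, variable error = {variable_error:.2f} deg")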

13.
Reversals in perceived direction of motion of a grating when its spatial frequency exceeds half that of the sampling mosaic provide a potential tool for estimating sampling frequency in peripheral retina. We used two-alternative forced-choice tasks to measure performance of three observers detecting or discriminating direction of motion of high contrast horizontal or vertical sinusoidal luminance gratings presented either 20 or 40 deg from the fovea along the horizontal meridian. A foveal target at a comfortable viewing distance aided fixation and accommodation. A Maxwellian view optometer with 3 mm artificial pupil was used to correct the refraction of the peripheral grating, which was presented in a circular patch, 1.8 deg in diameter, in a surround of similar colour and mean luminance (47.5 cd·m⁻²). The refractive correction at each eccentricity was measured by recording the aerial image of a point after a double pass through the eye. The highest frequency which can reliably be detected (7-14 c/deg at 20 deg, 5.5-7.5 c/deg at 40 deg) depends critically on refraction. Refraction differs by up to 5 D from the fovea to periphery, and by up to 6 D from horizontal to vertical. Direction discrimination performance shows no consistent reversals, and depends less on refraction. It falls to chance at frequencies as low as one-third of the highest that can be detected. Gratings which can be detected but whose direction of motion cannot be discriminated appear as irregular speckle patterns whose direction of motion varies from trial to trial. (ABSTRACT TRUNCATED AT 250 WORDS)
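
The logic of using direction reversals to probe the sampling mosaic is ordinary aliasing, stated here in textbook form rather than the paper's notation: a mosaic sampling at spatial frequency f_s represents gratings veridically only up to the Nyquist limit, and a drifting grating just above that limit aliases to a lower-frequency grating moving in the opposite direction,

    \[ f_N = \frac{f_s}{2}, \qquad f_{\mathrm{alias}} = f_s - f \quad \text{for } f_N < f < f_s, \]

so consistent direction reversals at known grating frequencies would pin down f_s.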

14.
During conversation, women tend to nod their heads more frequently and more vigorously than men. An individual speaking with a woman tends to nod his or her head more than when speaking with a man. Is this due to social expectation or due to coupled motion dynamics between the speakers? We present a novel methodology that allows us to randomly assign apparent identity during free conversation in a videoconference, thereby dissociating apparent sex from motion dynamics. The method uses motion-tracked synthesized avatars that are accepted by naive participants as being live video. We find that 1) motion dynamics affect head movements but that apparent sex does not; 2) judgments of sex are driven almost entirely by appearance; and 3) ratings of masculinity and femininity rely on a combination of both appearance and dynamics. Together, these findings are consistent with the hypothesis of separate perceptual streams for appearance and biological motion. In addition, our results are consistent with a view that head movements in conversation form a low-level perception and action system that can operate independently from top–down social expectations. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

15.
Measured the time course of visual signals arising from each eye of 4 strabismic and/or anisometropic amblyopes and 2 visually normal Ss using monoptic metacontrast masking. The amblyope Ss had 1 nonamblyopic eye, clear ocular media, and normal fundi. The method involved the brightness estimation of a high-contrast disk target whose visibility varied as a function of the stimulus onset asynchrony (SOA) of a subsequent annular mask. Results indicate that the SOA of optimal masking was delayed in the amblyopic eye compared to that of the fellow nonamblyopic eye or with normal eyes. The smaller the target, the greater was this SOA difference and the broader was the amblyopic U-shaped masking function. This finding is discussed in terms of the current model of metacontrast and represents the differential effect of the amblyopic process on human sustained and transient neurons. (French abstract) (32 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
The present study quantified nasalward/temporalward biases in monocular optokinetic nystagmus (MOKN) and perceived velocity in patients with either early onset or late onset esotropia and in normal observers. MOKN was measured with low spatial frequency, small-field gratings drifting at 9.4 degrees/s. MOKN bias was quantified as nasalward slow-phase velocity divided by the sum of temporalward and nasalward slow-phase velocities (N/(N + T)). Observers also rated the perceived velocity of gratings moving in nasalward and temporalward directions (3 or 9.4 degrees/s) using a two-interval forced-choice task. MOKN and perceived velocity biases were correlated negatively in both the early onset and late onset groups: in the perceptual task, nasalward moving targets were rated as slower than temporalward targets, whereas in the MOKN task, slow-phase gain was higher for nasalward than for temporalward targets. Oscillatory-motion visual evoked potentials (VEPs) were recorded in response to 1 c/deg gratings undergoing apparent motion at 10 Hz in a subset of the observers. VEP direction biases were quantified by calculating the ratio of first harmonic response amplitudes to the sum of first and second harmonic amplitudes. Significant correlations were found between the direction biases obtained on all three measures. Perceived velocity and MOKN bias measures were also correlated negatively. Patients with early onset esotropia (infantile esotropia) had larger biases than late onset esotropes or normals on each measure, and the biases were more frequently bilateral in the early onset patients. The pattern of results is consistent with early critical periods for the mechanism(s) underlying MOKN, perceived velocity, and cortical responsiveness. A single site model for all three asymmetries is unlikely, at least in simple form, because of the negative correlation between MOKN and perceived velocity biases and because of the differences in relative magnitude between the perceptual and MOKN biases.
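
The two bias indices described above translate directly into code; a minimal sketch (function and variable names are mine):

    def mokn_bias(nasalward_spv, temporalward_spv):
        # N / (N + T) from nasalward and temporalward slow-phase velocities.
        return nasalward_spv / (nasalward_spv + temporalward_spv)

    def vep_direction_bias(f1_amplitude, f2_amplitude):
        # First-harmonic amplitude over the sum of first- and second-harmonic amplitudes.
        return f1_amplitude / (f1_amplitude + f2_amplitude)

    # Hypothetical example: higher nasalward slow-phase velocity gives a bias > 0.5,
    # i.e. a nasalward MOKN asymmetry.
    print(mokn_bias(6.0, 3.4), vep_direction_bias(2.0, 1.5))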

17.
This research focused on the response of neurons in the inferior colliculus of the unanesthetized mustached bat, Pteronotus parnelli, to apparent auditory motion. We produced the apparent motion stimulus by broadcasting pure-tone bursts sequentially from an array of loudspeakers along horizontal, vertical, or oblique trajectories in the frontal hemifield. Motion direction had an effect on the response of 65% of the units sampled. In these cells, motion in opposite directions produced shifts in receptive field locations, differences in response magnitude, or a combination of the two effects. Receptive fields typically were shifted opposite the direction of motion (i.e., units showed a greater response to moving sounds entering the receptive field than exiting) and shifts were obtained to horizontal, vertical, and oblique motion orientations. Response latency also shifted as a function of motion direction, and stimulus locations eliciting greater spike counts also exhibited the shortest neural latency. Motion crossing the receptive field boundaries appeared to be both necessary and sufficient to produce receptive field shifts. Decreasing the silent interval between successive stimuli in the apparent motion sequence increased both the probability of obtaining a directional effect and the magnitude of receptive field shifts. We suggest that the observed directional effects might be explained by "spatial masking," where the response of auditory neurons after stimulation from particularly effective locations in space would be diminished. The shift in auditory receptive fields would be expected to shift the perceived location of a moving sound and may explain shifts in localization of moving sources observed in psychophysical studies. Shifts in perceived target location caused by auditory motion might be exploited by auditory predators such as Pteronotus in a predictive tracking strategy to capture moving insect prey.

18.
In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either by means of a blocked design or by predictive peripheral cues, and exogenously by means of nonpredictive peripheral cues. The results of 3 experiments demonstrate a reduction in the magnitude of the cross-modal dynamic capture effect on cued trials compared with uncued trials. The introduction of neutral cues (Experiments 4 and 5) confirmed the existence of both attentional costs and benefits. This attention-related reduction in cross-modal dynamic capture was larger when a peripheral cue was used compared with when attention was oriented in a purely endogenous manner. In sum, the results suggest that spatial attention reduces illusory binding by facilitating the segregation of unimodal signals, thereby modulating audiovisual interactions in information processing. Thus, the effect of spatial attention occurs prior to or at the same time as cross-modal interactions involving motion information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Many neurons in the rat anterodorsal thalamus (ADN) and postsubiculum (PoS) fire selectively when the rat points its head in a specific direction in the horizontal plane, independent of the animal's location and ongoing behavior. The lateral mammillary nuclei (LMN) are interconnected with both the ADN and PoS and, therefore, are in a pivotal position to influence ADN/PoS neurophysiology. To further understand how the head direction (HD) cell signal is generated, we recorded single neurons from the LMN of freely moving rats. The majority of cells discharged as a function of one of three types of spatial correlates: (1) directional heading, (2) head pitch, or (3) angular head velocity (AHV). LMN HD cells exhibited higher peak firing rates and greater range of directional firing than that of ADN and PoS HD cells. LMN HD cells were modulated by angular head velocity, turning direction, and anticipated the rat's future HD by a greater amount of time (approximately 95 msec) than that previously reported for ADN HD cells (approximately 25 msec). Most head pitch cells discharged when the rostrocaudal axis of the rat's head was orthogonal to the horizontal plane. Head pitch cell firing was independent of the rat's location, directional heading, and its body orientation (i.e., the cell discharged whenever the rat pointed its head up, whether standing on all four limbs or rearing). AHV cells were categorized as fast or slow AHV cells depending on whether their firing rate increased or decreased in proportion to angular head velocity. These data demonstrate that LMN neurons code direction and angular motion of the head in both horizontal and vertical planes and support the hypothesis that the LMN play an important role in processing both egocentric and allocentric spatial information.
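
One common way to quantify the anticipation figure quoted above (sketched here under assumed data formats; not necessarily the authors' exact procedure) is to shift the head-direction record relative to the spike train and take the lag that maximizes a tuning statistic, such as the mean resultant length of the head directions sampled at spike times:

    import numpy as np

    def anticipatory_interval(spike_times_s, sample_times_s, hd_deg,
                              lags_s=np.arange(-0.2, 0.205, 0.005)):
        # Positive lag = head direction sampled after each spike; the lag giving
        # the sharpest tuning estimates how far firing anticipates future HD.
        # (Wrap-around of HD during interpolation is ignored in this sketch.)
        best_lag, best_r = 0.0, -1.0
        for lag in lags_s:
            hd_at_spikes = np.interp(spike_times_s + lag, sample_times_s, hd_deg)
            r = np.abs(np.mean(np.exp(1j * np.deg2rad(hd_at_spikes))))  # resultant length
            if r > best_r:
                best_lag, best_r = lag, r
        return best_lag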

20.
Contrast thresholds were measured for discriminating left vs right motion of a vertical, 1 c/deg luminance grating lasting for one cycle of motion. This test was presented on a 1 c/deg stationary grating (pedestal) of twice-threshold contrast, flashed for the duration of the test motion. Lu and Sperling [(1995). Vision Research, 35, 2697-2722] argue that the visual system detects the underlying, first-order motion of the test and is immune to the presence of the stationary pedestal (and the 'feature wobble' which it induces). On the contrary, we observe that the stationary pedestal has large effects on motion detection at 7 and 15 Hz, and smaller effects at 0.9-3.7 Hz, evidenced by a spatial phase dependency between the stationary pedestal and the moving test. At 15 Hz the motion threshold drops as much as five-fold with the stationary pedestal in the optimal spatial phase (i.e., pedestal and test spatially in phase at the middle of the motion), and the perceived direction of the test motion reverses with the pedestal in the opposite phase. Phase dependency was also explored using a very brief (approximately 1 msec) static pedestal presented with the moving test. The pedestal of Lu and Sperling (flashed for the duration of the test) has a broad spectrum of left and right moving components which interact with the moving test. The pedestal effects can be explained by the visual system's much higher sensitivity to the difference of the contrast of right vs left moving components than to either component alone.
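
The statement that the flashed stationary pedestal "has a broad spectrum of left and right moving components" follows from a standard identity (textbook decomposition, not the authors' derivation): a stationary grating whose contrast is modulated in time splits into equal-contrast gratings drifting in opposite directions, and a brief flash contains such modulations over a broad band of temporal frequencies,

    \[ \cos(kx)\,\cos(\omega t) \;=\; \tfrac{1}{2}\cos(kx - \omega t) \;+\; \tfrac{1}{2}\cos(kx + \omega t), \]

so each temporal-frequency component of the flashed pedestal contributes a leftward- and a rightward-drifting grating that can interact with the moving test in a spatial-phase-dependent way.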
