Similar Articles
20 similar articles found
1.
Reviews studies of eye movements in reading and other information-processing tasks such as picture viewing, visual search, and problem solving. The major emphasis of the review is on reading as a specific example of the more general phenomenon of cognitive processing. Basic topics discussed are the perceptual span, eye guidance, integration across saccades, control of fixation durations, individual differences, and eye movements as they relate to dyslexia and speed reading. In addition, eye movements and the use of peripheral vision and scan paths in picture perception, visual search, and pattern recognition are discussed, as is the role of eye movements in visual illusions. The basic theme of the review is that eye movement data reflect the cognitive processes occurring in a particular task. Theoretical and practical considerations concerning the use of eye movement data are also presented. (7½ p ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Eye movements in reading and information processing: 20 years of research
Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.

3.
The posterior parietal cortex has long been considered an 'association' area that combines information from different sensory modalities to form a cognitive representation of space. However, until recently little has been known about the neural mechanisms responsible for this important cognitive process. Recent experiments from the author's laboratory indicate that visual, somatosensory, auditory and vestibular signals are combined in areas LIP and 7a of the posterior parietal cortex. The integration of these signals can represent the locations of stimuli with respect to the observer and within the environment. Area MSTd combines visual motion signals, similar to those generated during an observer's movement through the environment, with eye-movement and vestibular signals. This integration appears to play a role in specifying the path on which the observer is moving. All three cortical areas combine different modalities into common spatial frames by using a gain-field mechanism. The spatial representations in areas LIP and 7a appear to be important for specifying the locations of targets for actions such as eye movements or reaching; the spatial representation within area MSTd appears to be important for navigation and the perceptual stability of motion signals.
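The gain-field mechanism mentioned in this abstract can be illustrated with a toy model in which a neuron's Gaussian retinal tuning curve is multiplicatively scaled, not shifted, by a planar function of eye position. The function name and all parameter values below are illustrative placeholders, not quantities fitted to parietal recordings.

```python
import numpy as np

def gain_field_response(retinal_pos, eye_pos, preferred_retinal=0.0,
                        sigma=10.0, gain_slope=0.02, gain_offset=1.0):
    """Toy gain-field neuron: Gaussian retinal tuning whose amplitude
    is scaled by a linear (planar) function of eye position."""
    retinal_tuning = np.exp(-(retinal_pos - preferred_retinal) ** 2
                            / (2 * sigma ** 2))
    gain = gain_offset + gain_slope * eye_pos  # eye-position gain
    return retinal_tuning * gain

# The same retinal stimulus evokes different firing at different eye
# positions, letting a downstream population recover head-centred location.
r_left = gain_field_response(retinal_pos=5.0, eye_pos=-20.0)
r_right = gain_field_response(retinal_pos=5.0, eye_pos=+20.0)
```

Because only the amplitude changes while the retinal tuning stays fixed, a population of such units implicitly encodes both retinal and eye-position information in a common frame.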

4.
The Banff Annual Seminar in Cognitive Science (BASICS) was founded in 1982 and has met each spring since then in Banff, Alberta. BASICS was originated to provide an informal atmosphere for the in-depth discussion of a wide variety of research topics within the broadly defined domain of cognitive psychology. Topics covered in this year's seminar included visual spatial attention and target detection, attention and eye movements during reading, the integration of information across eye movements, language production and its dependency on structure, and parallel distributed processing models.

5.
Mammalian sensory systems must continuously select the events to be noticed from the ongoing stream of information, filtering out or habituating to the insignificant. This selective noticing is the province of central mechanisms of orientation and attention that are represented in structures along the neuraxis from brain stem to neocortex. One of the structures is the frontal eye field, long known to be implicated in visual attention through its involvement in horizontal movements of the eyes. The author reviews the relevant neuroanatomy and behavioral and electrophysiological research that decisively show that this small but complex region does more than regulate eye movements. Theories of frontal eye field function are considered, concluding with 2 contemporary views of attention, one based on arousal and the other on the processes of representation of stimulus input, that offer special promise for understanding the role of frontal cortex in the direction of attention. (5 p ref)

6.
7.
Studied visual masking and visual integration across saccadic eye movements in 4 experiments. In a 5th experiment, 4 randomly chosen dots from a 3 × 5 dot matrix were presented in 1 fixation, and 4 different dots from the matrix were presented in a 2nd fixation. Ss reported the location of the missing dot. When the 1st display was presented just before the saccade (as in Exps I–III), Ss accurately specified the missing dot location when the dots were presented to the same region of the retina but not when they were presented in the same place in space. When the 1st display was presented well before the saccade (as in Exp IV), Ss performed poorly regardless of retinal or spatial overlap. Results indicate the existence of a short-lived retinotopic visual persistence but provide no support for a spatiotopic visual persistence capable of fusing the contents of successive fixations. It is concluded that transsaccadic integration depends instead on an abstract memory that accumulates position and identity information about the contents of successive fixations. Results are discussed in relation to the work by M. L. Davidson et al (see record 1974-10245-001).

8.
The presence of location-dependent and location-independent benefits on object identification in an eye movement contingent preview paradigm has been taken as support for the transsaccadic integration of object types and object tokens (J. M. Henderson, 1994). A recent study, however, suggests a critical role for saccade targeting in the generation of the 2 preview effects (F. Germeys, De Graef, & Verfaillie, 2002). In the present study, eye movements were monitored in a preview paradigm, and both location-independent and location-dependent preview benefits were observed regardless of the saccade target status of the preview object. The findings support the view that type and token representational systems contribute independently to the integration of object information across eye movements.

9.
Used a "transsaccadic" partial report procedure to measure memory for position and identity information across saccades. Delaying the partial-report cue after the eye movement had little effect on report accuracy. Mask presentation hindered recall only at the shortest delay. Accuracy was much higher when the letter array contained 6 letters than when it contained 10 letters. Intra-array errors were much more frequent than extra-array errors. These results suggest that memory across eye movements decays slowly, has a limited capacity, is maskable for a brief time, and retains identity information better than position information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

10.
Conducted 2 experiments to determine the impact of visual target information, visual limb information, and a no vision target-pointing delay on manual aiming accuracy. In Exp I, 10 undergraduates made target pointing movements with a stylus from a home switch to the center of a target pad. Movement time was recorded under different lighting conditions. In Exp II, 10 undergraduates performed the same task under stable lighting conditions. Results indicate that visual target information was more important than limb information in determining movement accuracy and demonstrate that it was not necessary for target information to be physically present, since a visual representation of the movement environment persisted for a brief period after visual occlusion. Results contradict the findings of L. G. Carlton (see record 1982-02570-001). (French abstract)

11.
The authors explored the role of phonological representations in the integration of lexical information across saccadic eye movements. Study participants executed a saccade to a preview letter string that was presented extrafoveally. In Experiment 1, the preview string was replaced by a target string during the saccade, and the participants performed a lexical decision. Targets with phonologically regular initial trigrams benefited more from a preview than did targets with irregular initial trigrams. In Experiment 2, words with regularly pronounced initial trigrams were more likely to be correctly identified from the preview alone. In Experiment 3, participants were more likely to detect a change across a saccade from regular to irregular initial trigrams than from irregular to regular trigrams. The results suggest that phonological representations are activated from an extrafoveal preview and that this phonological information can be integrated with foveal information following a saccade.

12.
What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a control preview. Experiments 2 and 3 showed that this scene preview benefit was not due to the conceptual category of the scene or identification of the target object in the preview. Experiment 4 demonstrated that the scene preview benefit was unaffected by changing the size of the scene from preview to search. Taken together, the results suggest that an abstract (size invariant) visual representation is generated in an initial scene glimpse and that this representation can be retained in memory and used to guide subsequent eye movements.

13.
Eye movements are often misdirected toward a distractor when it appears abruptly, an effect known as oculomotor capture. Fundamental differences between eye movements and attention have led to questions about the relationship of oculomotor capture to the more general effect of sudden onsets on performance, known as attentional capture. This study explores that issue by examining the time course of eye movements and manual localization responses to targets in the presence of sudden-onset distractors. The results demonstrate that for both response types, the proportion of trials on which responses are erroneously directed to sudden onsets reflects the quality of information about the visual display at a given point in time. Oculomotor capture appears to be a specific instance of a more general attentional capture effect. Differences and similarities between the two types of capture can be explained by the critical idea that the quality of information about a visual display changes over time and that different response systems tend to access this information at different moments in time.

14.
The ability to judge heading during tracking eye movements has recently been examined by several investigators. To assess the use of retinal-image and extra-retinal information in this task, the previous work has compared heading judgments with executed as opposed to simulated eye movements. For eye movement velocities greater than 1 deg/sec, observers seem to require the eye-velocity information provided by extra-retinal signals that accompany tracking eye movements. When those signals are not provided, such as with simulated eye movements, observers perceive their self-motion as curvilinear translation rather than the linear translation plus eye rotation being presented. The interpretation of the previous results is complicated, however, by the fact that the simulated eye movement condition may have created a conflict between two possible estimates of the heading: one based on extra-retinal solutions and the other based on retinal-image solutions. In four experiments, we minimized this potential conflict by having observers judge heading in the presence of rotations consisting of mixtures of executed and simulated eye movements. The results showed that the heading is estimated more accurately when rotational flow is created by executed eye movements alone. In addition, the magnitude of errors in heading estimates is essentially proportional to the amount of rotational flow created by a simulated eye rotation (independent of the total magnitude of the rotational flow). The fact that error magnitude is proportional to the amount of simulated rotation suggests that the visual system attributes rotational flow unaccompanied by an eye movement to a displacement of the direction of translation in the direction of the simulated eye rotation.
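The proportionality reported in this abstract can be sketched numerically: if the total rotational flow is split into an executed part (signalled extra-retinally) and a simulated part (not signalled), the predicted heading error depends only on the simulated part. The function name and the constant k below are hypothetical; the real proportionality constant would have to be estimated from the data.

```python
def heading_error(total_rotation, simulated_fraction, k=0.8):
    """Toy model of the reported result: perceived-heading error grows in
    proportion to the simulated (extra-retinally unsignalled) component of
    rotational flow, independent of the executed component.  k is a
    placeholder constant (deg of heading error per deg/s of simulated
    rotation)."""
    simulated_rotation = total_rotation * simulated_fraction
    return k * simulated_rotation

# Same total rotational flow, different executed/simulated mixtures:
errors = [heading_error(5.0, f) for f in (0.0, 0.5, 1.0)]
```

With the mixture fully executed the predicted error is zero, and doubling the simulated fraction doubles the predicted error, matching the abstract's claim of proportionality.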

15.
16.
When human subjects are presented with visual displays consisting of random dots moving sideways at different velocities, they perceive transparent surfaces, moving in the same direction but located at different distances from themselves. They perceive depth from motion parallax, without any additional cues to depth, such as relative size, occlusion or binocular disparity. Simultaneously, large-field visual motion triggers compensatory eye movements which tend to offset such motion, in order to stabilize the visual image of the environment. In a series of experiments, we investigated how such reflexive eye movements are controlled by motion parallax displays, that is, in a situation where a complete stabilization of the visual image is never possible. Results show that optokinetic nystagmus, and not merely active visual pursuit of singular elements, is triggered by such displays. Prior to the detection of depth from motion parallax, eye tracking velocity is equal to the average velocity of the visual image. After detection, eye tracking velocity spontaneously matches the slowest velocity in the visual field, but can be controlled by attentional factors. Finally, for a visual stimulation containing more than three velocities, subjects are no longer able to perceptually dissociate between different surfaces in depth, and eye tracking velocity remains equal to the average velocity of the visual image. These data suggest that, in the presence of flow fields containing motion parallax, optokinetic eye movements are modulated by perceptual and attentional factors.
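The velocity-matching behaviour described in this abstract reduces to a simple rule: track the mean image velocity before depth is segmented, and the slowest surface velocity after segmentation, provided no more than three velocities are present. The sketch below encodes just that rule; the function name is illustrative and the three-velocity limit is taken directly from the abstract.

```python
def okn_tracking_velocity(surface_velocities, depth_detected):
    """Toy summary of the reported OKN behaviour with motion-parallax
    displays: before depth segmentation the eyes track the mean image
    velocity; once surfaces are perceptually segregated (and at most
    three velocities are present) tracking settles on the slowest."""
    if depth_detected and len(set(surface_velocities)) <= 3:
        return min(surface_velocities, key=abs)
    return sum(surface_velocities) / len(surface_velocities)

v_before = okn_tracking_velocity([2.0, 6.0, 10.0], depth_detected=False)
v_after = okn_tracking_velocity([2.0, 6.0, 10.0], depth_detected=True)
```

With more than three distinct velocities the rule falls back to the mean, mirroring the finding that perceptual segregation, and hence slowest-surface tracking, breaks down in that regime.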

17.
Task analytic theories of graph comprehension account for the perceptual and conceptual processes required to extract specific information from graphs. Comparatively, the processes underlying information integration have received less attention. We propose a new framework for information integration that highlights visual integration and cognitive integration. During visual integration, pattern recognition processes are used to form visual clusters of information; these visual clusters are then used to reason about the graph during cognitive integration. In 3 experiments, the processes required to extract specific information and to integrate information were examined by collecting verbal protocol and eye movement data. Results supported the task analytic theories for specific information extraction and the processes of visual and cognitive integration for integrative questions. Further, the integrative processes scaled up as graph complexity increased, highlighting the importance of these processes for integration in more complex graphs. Finally, based on this framework, design principles to improve both visual and cognitive integration are described.

18.
Presents a visual–spatial approach to the study of attention dysfunction. The hypotheses of broadened and narrowed attention were tested by comparing peripheral visual discrimination of 10 acute schizophrenic and 11 chronic schizophrenic inpatients and 16 normal Ss (hospital staff) within 2 regions of the functional visual field. Pairs of visual stimuli were presented at 4 display angles. Measures of response accuracy, response latency, and latency of eye movement of peripheral stimuli were obtained. Results indicate that acute schizophrenics generally discriminated peripheral signals more accurately than chronic schizophrenics or normals. Normals discriminated signals more accurately than chronic schizophrenics. Results suggest the differential use of selective strategies. Limitations in the use of peripheral information among chronic schizophrenics imply a reduction in the amount of information transmitted in a selective act and a reduction in the economy of selective activities. In contrast to normals, acute schizophrenics utilized more efficient selective strategies over a greater spatial area, implying greater transmission of information within discrete selective acts. Results also indicate that schizophrenics initiated eye movements earlier than normals and that response latency was greater for acute schizophrenics than for normals. Results are interpreted as providing partial support for P. H. Venables's (1964) theory of input dysfunction. (24 ref)

19.
Experimental psychologists have recently amassed a great deal of evidence supporting the hypothesis that the visual system can select a particular location over other locations in the visual field for further analysis without overtly orienting the eyes to the selected location. At the same time, we know that during reading and scene perception, the eyes are overtly directed to new regions of the visual field every 200 to 300 msec on average. How are covert shifts of attention and overt movements of the eyes related during complex visual-cognitive tasks? The available evidence from studies of the perceptual span in reading suggests that attention is allocated asymmetrically around the fixation point, with more information acquired in the direction that the eyes are moving. Based on this evidence and evidence from explorations of eye movement control in reading, the author presents a tentative model of the relationship between attention and eye movements, called the Sequential Attention Model.
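The core claim of the Sequential Attention Model summarized above, that covert attention shifts to the next word before the eyes leave the current one, can be sketched as a toy event timeline. The function name, the 250 ms fixation duration, and the 50 ms attentional lead are placeholder values for illustration, not parameters from the model.

```python
def sequential_attention_trace(words, fixation_ms=250, attention_lead_ms=50):
    """Toy timeline: covert attention moves to word n+1 shortly before
    the saccade that takes the eyes there.  Times are in milliseconds."""
    events = []
    t = 0
    for i, word in enumerate(words):
        events.append((t, "fixate", word))
        if i + 1 < len(words):
            # covert attention shifts ahead of the overt eye movement
            events.append((t + fixation_ms - attention_lead_ms,
                           "attend", words[i + 1]))
        t += fixation_ms
    return events

trace = sequential_attention_trace(["the", "quick", "fox"])
```

In the resulting trace every "attend" event for a word precedes the "fixate" event for that same word, which is the sequential ordering the model's name refers to.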

20.