Similar documents
20 similar documents found (search time: 31 ms)
1.
This study investigates the characteristics of eye movements during a camouflaged target search task. Camouflaged targets were randomly presented on two natural landscapes. The performance of each camouflage design was assessed by target detection hit rate, detection time, number of fixations on display, first saccade amplitude to target, number of fixations on target, fixation duration on target, and subjective ratings of search task difficulty. The results showed that the camouflage patterns could significantly affect eye-movement behavior, especially first saccade amplitude and fixation duration, and the findings could be used to increase the sensitivity of camouflage assessment. We hypothesized that the assessment could be made with regard to differences in the detectability and discriminability of the camouflage patterns, which could explain the less efficient search behavior observed in the eye movements. Overall, data obtained from eye movements can be used to significantly enhance the interpretation of the effects of different camouflage designs.

2.
Eye movement and pupillary response measures (in addition to search time and accuracy) were collected as indices of visual workload during two experiments designed to evaluate the addition of colour coding to a symbolic tactical display. Displays also varied with regard to symbol density and the type of information participants were required to abstract from the display. These variables were factorially manipulated to examine the effects of colour coding in conditions of varying difficulty. In Experiment 1 (n = 8), search time and the number of eye fixations were affected by all variables and in a similar manner; fixation dwell time and the pupillary response dissociated from the other measures. Compared to monochrome displays, colour coding facilitated search (reduced search time, but not accuracy) during exhaustive search, but had no effect during self-terminating search. Experiment 2 (n = 8) was a replication of Experiment 1 with a pseudo-search control condition added to examine further the pupillary response measures: in particular, to assess the effects of the physical parameters of the displays, and to verify the findings of Experiment 1. Pupillary response measures were sensitive to the information processing demands of the search task, not merely to the physical parameters of the display. Further, the search time, accuracy, and eye movement results from the active search condition generally replicated Experiment 1, but the fixation dwell time data did not. These between-study differences were interpreted as indicating the importance of participant search strategy.

3.
《Ergonomics》2012,55(7):789-799
Abstract

The purpose of this study was to provide a basis for improving the individual visual performance of inspectors. The relationship between the correct count rate and the eye movements of subjects as they counted dots arranged on samples presented for different lengths of time was analysed, mainly to determine individual differences. Subjects' eye movements were measured with a corneal reflectance eye camera and analysed frame by frame with a video motion analyser. It was found that accuracy of visual inspection does not depend on length of search time and that a fast search time is not incompatible with a slow search speed. Furthermore, fixation time and number of fixations were considered the main factors governing accuracy of visual inspection. When limited time is allowed for search, a search strategy of prolonging the fixation time leads to high performance and consequently shorter inspection time. Several other findings were obtained which appear important in obtaining accurate information rapidly.

4.
M Akamatsu 《Ergonomics》1992,35(5-6):647-660
Adaptation experiments in shape tracing were conducted to investigate finger and eye movements under various conditions of visual and tactile information. Maximum velocity, mean velocity, maximum acceleration and reacceleration point were calculated from finger movements. Number of eye fixations and lead time of eye fixation to finger position were calculated from eye movements. The results showed that for the finger movement the values of the indices studied were higher in the combined visual and tactile condition than in the visual-only condition. The number of eye fixations decreased as subjects repeated the tracing, and this decrease was more marked in the combined visual and tactile condition than in the visual-only condition. The results suggest that finger movements become faster and the use of vision is reduced when both visual and tactile information are given.

5.
《Ergonomics》2012,55(5-6):647-660
Abstract

Adaptation experiments in shape tracing were conducted to investigate finger and eye movements under various conditions of visual and tactile information. Maximum velocity, mean velocity, maximum acceleration and reacceleration point were calculated from finger movements. Number of eye fixations and lead time of eye fixation to finger position were calculated from eye movements. The results showed that for the finger movement the values of the indices studied were higher in the combined visual and tactile condition than in the visual-only condition. The number of eye fixations decreased as subjects repeated the tracing, and this decrease was more marked in the combined visual and tactile condition than in the visual-only condition. The results suggest that finger movements become faster and the use of vision is reduced when both visual and tactile information are given.

6.
《Ergonomics》2012,55(12):1667-1681
Abstract

This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, when working with an audience the participants made fewer and shorter fixations, larger saccades and shorter scan paths in simple search tasks, and more and longer fixations, smaller saccades and longer scan paths in complex search tasks. Saccade velocity and pupil diameter were larger in the audience-present condition than in the working-alone condition. No significant change in target fixation number was observed between the two social presence conditions.

Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. Results clarified the variation mechanism and characteristics of oculomotor scanning induced by social presence in visual search.

7.
Today, with advances in eye-tracking technology, it has become possible to follow surgeons' eye movements while they perform surgical tasks. Despite the availability of studies providing a better understanding of surgeons' eye movements, research in the particular field of endoneurosurgery is very limited. Analysing surgeons' eye-movement data can provide general insights into how to improve surgical education programmes. In this study, four simulation-based, task-oriented endoscopic surgery training scenarios were developed and implemented by 23 surgical residents using three different hand conditions: dominant, non-dominant, and both. The participants' recorded eye data comprised fixation number, fixation duration, saccade number, saccade duration, pursuit number, pursuit duration, and pupil size. This study has two main contributions. First, it reports on the eye-movement behaviours of surgical residents, demonstrating that novice residents tended to make more fixations and saccades than intermediate residents; they also had longer fixation durations and followed the objects more frequently than the intermediates. Furthermore, hand conditions significantly affected the eye movements of the participants. Based on these results, it can be concluded that eye-movement data can be used to assess the skill levels of surgical residents and would be an important measure to better guide trainees in surgical education programmes. The second contribution of this study is the eye-movement event classification of 10 different algorithms. Although the algorithms mostly provided similar results, there were a few conflicting values for some classifications, which offers a clue as to how researchers can utilise these algorithms with low-sampling-frequency eye trackers.
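As a concrete illustration of the event classification this abstract refers to, the sketch below implements a generic velocity-threshold (I-VT) classifier that labels gaze samples as fixations or saccades. The specific algorithms compared in the study are not listed here, and the sampling rate and velocity threshold in the sketch are illustrative assumptions rather than values from the paper.

```python
# Minimal I-VT (velocity-threshold) sketch for labelling gaze samples as
# fixation or saccade. The threshold and sampling rate are illustrative
# assumptions, not values taken from the study.
import numpy as np

def classify_ivt(x, y, sample_rate_hz=60.0, velocity_threshold_deg_s=30.0):
    """Label each gaze sample as 'fixation' or 'saccade'.

    x, y are gaze coordinates in degrees of visual angle.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dt = 1.0 / sample_rate_hz
    # Point-to-point angular velocity (deg/s); pad so output matches input length.
    velocity = np.hypot(np.diff(x), np.diff(y)) / dt
    velocity = np.concatenate(([0.0], velocity))
    return np.where(velocity > velocity_threshold_deg_s, "saccade", "fixation")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated 1-second trace: a fixation, a fast jump, then another fixation.
    x = np.concatenate([rng.normal(0, 0.1, 30), np.linspace(0, 8, 5), rng.normal(8, 0.1, 25)])
    y = rng.normal(0, 0.1, x.size)
    labels = classify_ivt(x, y)
    print((labels == "saccade").sum(), "samples labelled as saccade")
```

Dispersion-based (I-DT) classifiers follow the same pattern but threshold the spatial spread of a sliding window instead of sample-to-sample velocity, which matters for low-sampling-frequency trackers.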

8.
M Maltz  D Shinar 《Human factors》1999,41(1):15-25
This 2-part study focuses on eye movements to explain driving-related visual performance in younger and older persons. In the first task, participants' eye movements were monitored as they viewed a traffic scene image with a numeric overlay and visually located the numbers in their sequential order. The results showed that older participants had significantly longer search episodes than younger participants, and that the visual search of older adults was characterized by more fixations and shorter saccades, although the average fixation durations remained the same. In the second task, participants viewed pictures of traffic scenes photographed from the driver's perspective. Their task was to assume the role of the driver and regard the image accordingly. Results in the second task showed that older participants allocated a larger percentage of their visual scan time to a small subset of areas in the image, whereas younger participants scanned the images more evenly. Also, older participants revisited the same areas and younger participants did not. The results suggest how aging might affect the efficacy of visual information processing. Potential applications of this research include training older drivers for a more effective visual search, and providing older drivers with redundant information in case some information is missed.
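One common way to quantify the "evenness" of scanning described in this abstract is the normalized entropy of the dwell-time distribution over areas of interest (AOIs). The sketch below is a minimal illustration with made-up dwell times; it is not the analysis used in the study.

```python
# Sketch: quantifying how evenly scan time is spread over areas of interest (AOIs)
# with the normalized entropy of the dwell-time distribution (1.0 = perfectly even,
# values near 0 = concentrated on a few areas). Dwell times below are simulated.
import numpy as np

def scan_evenness(dwell_times_ms):
    p = np.asarray(dwell_times_ms, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                                   # ignore unvisited areas in the log
    entropy = -np.sum(p * np.log2(p))
    return entropy / np.log2(len(dwell_times_ms))  # normalize by maximum possible entropy

# Hypothetical dwell times (ms) over eight AOIs in a traffic scene.
younger = [900, 850, 780, 820, 760, 880, 810, 790]   # spread fairly evenly
older = [3200, 2500, 150, 120, 90, 300, 140, 100]    # concentrated on two areas
print(f"younger evenness: {scan_evenness(younger):.2f}")
print(f"older evenness:   {scan_evenness(older):.2f}")
```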

9.
Visual fixation on one's tool(s) takes much attention away from one's primary task. Following the belief that the best tools 'disappear' and become invisible to the user, we present a study comparing visual fixations (eye gaze within locations on a graphical display) and performance for mouse, pen, and physical slider user interfaces. Participants conducted a controlled, yet representative, color matching task that required user interaction representative of many data exploration tasks, such as parameter exploration of medical or fuel cell data. We demonstrate that users may make up to 95% fewer visual fixations on physical sliders than on standard mouse and pen tools, without any loss in performance on a generalized visual performance task.

10.
The goal of this study is to examine the effects of time pressure and feedback on learning performance, as mediated by eye movement. Time pressure is one of the main causes of human error in the workplace. Providing participants with feedback about their performance before task completion has been shown to reduce human error in diverse domains. Since both time pressure and feedback induce motivation, which is closely related to attention, we measured participants' eye movements to trace their attention and information acquisition coupled with a visual display. Time-to-deadline (long and short) and the presence of feedback were the independent factors used while measuring participants' performance and eye movements as they learned new information about the subject of project management and answered multiple-choice questions via self-paced online learning systems. Using structural equation modeling, we found a mediating effect of eye movement on the relationships among time-to-deadline, feedback, and learning performance. Insufficient time-to-deadline increased the number of fixations on the screen, which resulted in longer task completion times and increased correct rates for participants learning about project management. The models in this study suggest the possibility of predicting performance from eye movement under time-to-deadline and feedback conditions. The structural equation model in the study can be applied to online and remote learning systems, in which time management is one of the main challenges for individual learners.
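This study tested mediation with structural equation modeling. As a simpler, related illustration, the sketch below runs a regression-based mediation check (Baron and Kenny steps with a Sobel test) on simulated data; the variable names and values are hypothetical, and the approach is not the authors' SEM analysis.

```python
# A regression-based mediation sketch (Baron & Kenny steps plus a Sobel test).
# The original study used structural equation modeling; this simpler check is
# illustrative only, and all variable names and simulated data are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
time_pressure = rng.integers(0, 2, n).astype(float)              # 0 = long deadline, 1 = short
fixation_count = 50 + 15 * time_pressure + rng.normal(0, 5, n)   # hypothetical mediator
performance = 0.4 * fixation_count + rng.normal(0, 4, n)         # hypothetical outcome

# Path a: predictor -> mediator
model_a = sm.OLS(fixation_count, sm.add_constant(time_pressure)).fit()
# Paths b and c': mediator and predictor -> outcome
X = sm.add_constant(np.column_stack([time_pressure, fixation_count]))
model_b = sm.OLS(performance, X).fit()

a, se_a = model_a.params[1], model_a.bse[1]
b, se_b = model_b.params[2], model_b.bse[2]
sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(f"indirect effect a*b = {a * b:.2f}, Sobel z = {sobel_z:.2f}")
```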

11.
This pilot study explores the use of combining multiple data sources (subjective, physical, physiological, and eye tracking) in understanding user cost and behavior. Specifically, we show the efficacy of such objective measurements as heart rate variability (HRV) and pupillary response in evaluating user cost in game environments, along with subjective techniques, and investigate eye and hand behavior at various levels of user cost. In addition, a method for evaluating task performance at the micro-level is developed by combining eye and hand data. Four findings indicate the great potential value of combining multiple data sources to evaluate interaction: first, spectral analysis of HRV in the low frequency band shows significant sensitivity to changes in user cost, modulated by game difficulty; this result is consistent with subjective ratings, but pupillary response fails to accord with user cost in this game environment. Second, eye saccades seem to be more sensitive to user cost changes than eye fixation number and duration, or scanpath length. Third, a composite index based on eye and hand movements is developed, and it shows more sensitivity to user cost changes than a single eye or hand measurement. Finally, timeline analysis of the ratio of eye fixations to mouse clicks demonstrates task performance changes and learning effects over time. We conclude that combining multiple data sources has a valuable role in human–computer interaction (HCI) evaluation and design.
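For readers unfamiliar with the HRV measure mentioned in this abstract, the sketch below shows one conventional way to compute low-frequency (LF) spectral power from RR intervals: resample the interval series evenly, estimate the power spectral density with Welch's method, and integrate over the LF band. The 0.04-0.15 Hz band limits and the 4 Hz resampling rate are standard conventions assumed here, not parameters reported by the study.

```python
# Sketch: low-frequency (LF, ~0.04-0.15 Hz) spectral power of heart rate variability
# from a series of RR intervals. Band limits and the 4 Hz resampling rate are the
# conventional choices, assumed here rather than taken from the study.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_power(rr_ms, fs_resample=4.0, band=(0.04, 0.15)):
    rr_s = np.asarray(rr_ms, dtype=float) / 1000.0
    beat_times = np.cumsum(rr_s)                      # time of each beat (s)
    # Evenly resample the RR series so an FFT-based PSD estimate is applicable.
    t_even = np.arange(beat_times[0], beat_times[-1], 1.0 / fs_resample)
    rr_even = interp1d(beat_times, rr_s, kind="cubic")(t_even)
    freqs, psd = welch(rr_even - rr_even.mean(), fs=fs_resample,
                       nperseg=min(256, len(rr_even)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    df = freqs[1] - freqs[0]
    return psd[mask].sum() * df                       # integrated LF power (s^2)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Simulated RR intervals (ms) with a slow 0.1 Hz oscillation plus noise.
    rr = 800 + 50 * np.sin(2 * np.pi * 0.1 * np.arange(300) * 0.8) + rng.normal(0, 10, 300)
    print(f"LF power: {lf_power(rr):.6f} s^2")
```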

12.
This study focuses on the comparison of traditional engineering drawings with a CAD (computer-aided design) visualization in terms of user performance and eye movements in an applied context. Twenty-five students of mechanical engineering completed search tasks for measures in two distinct depictions of a car engine component (engineering drawing vs. CAD model). Besides spatial dimensionality, the display types most notably differed in terms of information layout, access and interaction options. The CAD visualization yielded better performance if users directly manipulated the object, but was inferior if employed in a conventional static manner, i.e. inspecting only predefined views. An additional eye movement analysis revealed longer fixation durations and a stronger increase of task-relevant fixations over time when interacting with the CAD visualization. This suggests a more focused extraction and filtering of information. We conclude that the three-dimensional CAD visualization can be advantageous if its capacity for direct manipulation is used.

13.
Understanding and predicting a driver's behaviors in a vehicle is a prospective function embedded in a smart car. Beyond the patterns of observable behaviors, a driver's intention can be identified from goal-driven behaviors. A computational model to classify driver intention in visual search, that is, finding a target with one's eyes while moving selective attention across a search field, could improve the level of intelligence that a smart car can demonstrate. To develop a computational cognitive model that explains the underlying cognitive process and reproduces drivers' behaviors, particular parameters of the human cognitive process must be specified. In this study, 2 issues are considered as influential factors on a driver's eye movements: a driver's visual information processing characteristics (VIPCs) and the purpose of visual search. To assess an individual's VIPC, 4 psychological experiments—Donders's reaction time, mental rotation, signal detection, and Stroop experiments—were utilized. Applying the k-means clustering method, 114 drivers were divided into 9 driver groups. To investigate the influence of task goal on a driver's eye movements, a driving simulation was conducted to collect drivers' eye-movement data under the given purpose of visual search (perceptual and cognitive tasks). The empirical data showed that there were significant differences in a driver's oculomotor behavior, such as response time, average fixation time, and average glance duration, between the driver groups and between the purposes of visual search. The effectiveness of using VIPC for grouping drivers was tested with a task-goal classification model by comparing the model's performance when drivers were grouped by typical demographic data such as gender. Results show that grouping based on VIPC improves the accuracy and stability of the model's prediction of a driver's intention underlying visual search behaviors. This study would benefit future studies focusing on personalization and adaptive interfaces in the development of smart cars.
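The driver-grouping step described in this abstract (k-means over four VIPC measures, 114 drivers, 9 groups) can be sketched as follows; the simulated feature values and the use of scikit-learn are assumptions for illustration only.

```python
# Sketch: grouping drivers by visual information processing characteristics (VIPC)
# with k-means, as described in the abstract (114 drivers, 9 clusters, 4 measures).
# The simulated feature values are placeholders, not data from the study.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_drivers = 114
# One column per VIPC experiment: reaction time, mental rotation score,
# signal-detection sensitivity, Stroop interference (all simulated).
vipc = np.column_stack([
    rng.normal(350, 40, n_drivers),    # Donders reaction time (ms)
    rng.normal(25, 6, n_drivers),      # mental rotation score
    rng.normal(1.8, 0.4, n_drivers),   # signal detection d'
    rng.normal(120, 30, n_drivers),    # Stroop interference (ms)
])

features = StandardScaler().fit_transform(vipc)     # put measures on a common scale
kmeans = KMeans(n_clusters=9, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_
print("group sizes:", np.bincount(labels, minlength=9))
```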

14.
In practice, many visual search tasks are performed under dynamic conditions. An experiment was conducted here to test the visual search strategy adopted by a person in dynamic visual search and to investigate the effects of display movement velocity on search time and detection accuracy. Thirty-five participants were randomly tested with all 10 angular velocities of 0, 2, 4, 6, 8, 10, 12, 14, 16, and 32 deg/s. The data obtained fitted the random search model well. The results revealed that observers utilized a random search strategy during the dynamic visual search process and that display movement velocity influenced search performance. In comparison with static visual search, an angular velocity faster than 4 deg/s resulted in a significant decrement in search performance. The variations in the duration of individual fixations, the probability of target detection in a single fixation and the visual lobe area are discussed. The obtained relationships between display movement velocity, search time and detection accuracy can serve as a useful guide for designing dynamic search tasks, helping to maximize their cost-effectiveness while minimizing errors and misses during the search process.
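Under the random search model mentioned in this abstract, search times are approximately exponentially distributed, so the probability of detecting the target within time t is P(t) = 1 - exp(-t / mean search time). The sketch below fits the mean to simulated search times and checks the exponential fit; it illustrates the model rather than reproducing the study's analysis.

```python
# Sketch: checking search-time data against the random search model, under which
# detection times are approximately exponential: P(t) = 1 - exp(-t / mean_time).
# The search times below are simulated, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
search_times = rng.exponential(scale=2.5, size=200)   # simulated search times (s)

mean_time = search_times.mean()                       # ML estimate of the exponential mean

# Compare the empirical distribution with the fitted exponential (Kolmogorov-Smirnov).
ks_stat, p_value = stats.kstest(search_times, "expon", args=(0, mean_time))
print(f"mean search time = {mean_time:.2f} s, KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")

# Predicted probability of detection within, e.g., 5 seconds under the model.
t = 5.0
print(f"P(detect within {t:.0f} s) = {1 - np.exp(-t / mean_time):.3f}")
```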

15.
The interactive use of visual interface tools has diversified the use of visualisations. This article reviews the relevant aspects of interaction and challenges the sufficiency of traditional evaluation criteria developed for static graphs. Traditionally, the problem for statisticians has been to maintain perceptual discriminability of details when quantities of data increase. Currently, however, even non-professional users need to integrate qualitatively different kinds of information. The review of task requirements indicates the use of a visual outline: (1) visual tools can facilitate parallel separation of individual data entities and integration of their features, and (2) more focused comparisons require visual memory due to eye movements. The article reports psychophysical experiments that measure performance accuracy and response latency conditioned by the above task requirements. The impact of shape and colour on performance interacted with display times, which were either shorter (100 ms) or longer (1 s) than the duration of a typical gaze fixation. The features of the graphs in the experiments were derived from a popular internet service. Thus, we describe methods for evaluating visual components of real services and provide general guidelines for the visual design of human–computer interaction.

16.
This study explored the relationships between eye tracking and traditional usability testing data in the context of analyzing the usability of Algebra Nation, an online system for learning mathematics used by hundreds of thousands of students. Thirty-five undergraduate students (20 females) completed seven usability tasks in the Algebra Nation online learning environment. The participants were asked to log in, select an instructor for the instructional video, post a question on the collaborative wall, search for an explanation of a mathematics concept on the wall, find information relating to Karma Points (an incentive for engagement and learning), and watch two instructional videos of varied content difficulty. Participants' eye movements (fixations and saccades) were simultaneously recorded by an eye tracker. Usability testing software was used to capture all participants' interactions with the system, task completion time, and task difficulty ratings. Upon finishing the usability tasks, participants completed the System Usability Scale. Important relationships were identified between the eye movement metrics and traditional usability testing metrics such as task difficulty rating and completion time. Eye tracking data were investigated quantitatively using aggregated fixation maps, and qualitative examination was performed on video replay of participants' fixation behavior. Augmenting the traditional usability testing methods, eye movement analysis provided additional insights regarding revisions to the interface elements associated with these usability tasks.
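An aggregated fixation map of the kind mentioned in this abstract can be built by accumulating duration-weighted fixation locations on a screen-sized grid and smoothing with a Gaussian kernel. The sketch below assumes a 1920x1080 display and an arbitrary kernel width; it is illustrative, not the tool used in the study.

```python
# Sketch: building an aggregated fixation map by accumulating duration-weighted
# fixations on a screen-sized grid and smoothing with a Gaussian kernel.
# Screen resolution and kernel width are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(fx, fy, durations_ms, width=1920, height=1080, sigma_px=40):
    """fx, fy: fixation centres in pixels; durations_ms: fixation durations."""
    grid = np.zeros((height, width), dtype=float)
    xs = np.clip(np.round(fx).astype(int), 0, width - 1)
    ys = np.clip(np.round(fy).astype(int), 0, height - 1)
    np.add.at(grid, (ys, xs), durations_ms)        # accumulate duration at each fixation
    return gaussian_filter(grid, sigma=sigma_px)   # smooth into a continuous heat map

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    fx = rng.uniform(0, 1920, 200)
    fy = rng.uniform(0, 1080, 200)
    durations = rng.normal(250, 60, 200).clip(min=50)
    heat = fixation_map(fx, fy, durations)
    print("hottest region value:", round(float(heat.max()), 2))
```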

17.
An experiment is described to measure the distracting effects of advertisements on the conspicuity of routing signs in realistic scenes. Slides of railway station scenes were shown in which subjects had to search for a target word used in a routing sign present in the scene. Eye movements were recorded to determine search time and number of fixations during search time. Both search time and number of fixations increased systematically with the number of advertisements in two of the three experimental scenes. The distribution of fixations over the scenes is discussed.

18.
《Ergonomics》2012,55(9):1831-1840
Sixteen observers participated in a visual search experiment in which colour coding, search type, and the amount of pre-search information available to the observers were varied. Observers searched simulated symbolic tactical displays to find the number of target symbols (i.e. exhaustive search) or the quadrant of the display in which a single target symbol was located (i.e. self-terminating search). Displays varied in the way in which the symbology was colour coded: colour was either relevant (i.e. redundant with symbol shape) or irrelevant (orthogonal to symbol shape), or the display was monochrome. Half of the observers were cued with regard to the coding scheme prior to display onset, while the other observers were not. There was no overall difference in search time or accuracy, number of eye fixations, or pupillary response between cued and non-cued observers, but only because cued and non-cued observers used the coding schemes differently. Redundancy gain was only evident for cued observers, who searched colour relevant displays faster and with fewer fixations than colour irrelevant or monochrome displays. Non-cued observers' search pattern did not differ across colour coding schemes, but they searched colour irrelevant and monochrome displays faster than the cued observers. Differences between cued and non-cued observers' search strategy are discussed with regard to their implications for design and evaluation of colour multipurpose displays.

19.
Previous research has demonstrated a loss of helmet-mounted display (HMD) legibility for users exposed to whole-body vibration. A pair of human factors studies was conducted to evaluate the effect of whole-body vibration on eye, head, and helmet movements for seated users of an HMD while conducting simple fixation and smooth pursuit tracking tasks. These experiments confirmed vertical eye motion consistent with the human visual system's response to the vestibulo-ocular reflex (VOR). Helmet slippage was also shown to occur, which could exacerbate loss of display legibility. The largest amplitudes in eye movements were observed during exposure to sinusoidal vibration in the 4-6 Hz range, which is consistent with the frequencies that past research has associated with whole-body resonance and the largest decrease in display legibility. Further, the measured eye movements appeared to be correlated with both the angular acceleration of the user's head and the angular slippage of the user's helmet. This research demonstrates that the loss of legibility while wearing HMDs likely results from a combination of VOR-triggered eye movements and movement of the display. Future compensation algorithms should consider adjusting the display in response to both VOR-triggered eye and HMD motion.

20.
As eye-tracking technology matures and gaze-input products for end users come to market, gaze-based interaction is becoming increasingly practical. However, because the eyes are not innately a control organ, any visual feedback in the user interface, whether dynamic or static, may interfere with the user's eye movements during gaze interaction and thereby affect gaze input (gaze-point coordinates). This study therefore systematically evaluated the effect of target colour on gaze interaction through two eye-pointing experiments, examining both the spatial distribution of gaze points and the ergonomics of gaze interaction. The results show that although static visual feedback such as target colour does not affect the stability of gaze-point coordinates while the user fixates the target, it does significantly affect the user's saccadic scanning process and thus the ergonomics of the eye-pointing task. This effect is especially pronounced when the gaze has to travel a longer distance.
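Gaze-point stability during fixation, as assessed in this abstract, is often quantified with the bivariate contour ellipse area (BCEA), the area of the ellipse covering a given proportion of gaze samples: BCEA = 2 * k * pi * sigma_x * sigma_y * sqrt(1 - rho^2), with the coverage proportion P = 1 - exp(-k). The sketch below computes BCEA on simulated gaze samples; the choice of metric and the 68% coverage level are common conventions assumed here, not necessarily what the study used.

```python
# Sketch: bivariate contour ellipse area (BCEA) as a measure of gaze-point
# stability while dwelling on a target. Gaze samples are simulated; the 68%
# coverage level is a common convention, assumed rather than taken from the study.
import numpy as np

def bcea(gx, gy, coverage=0.68):
    gx, gy = np.asarray(gx, float), np.asarray(gy, float)
    k = -np.log(1.0 - coverage)                     # ellipse scale for the coverage level
    rho = np.corrcoef(gx, gy)[0, 1]                 # correlation between x and y samples
    return 2.0 * k * np.pi * gx.std(ddof=1) * gy.std(ddof=1) * np.sqrt(1.0 - rho**2)

rng = np.random.default_rng(6)
# Simulated gaze samples (degrees of visual angle) while dwelling on a target.
gx = rng.normal(0, 0.3, 120)
gy = rng.normal(0, 0.25, 120)
print(f"BCEA = {bcea(gx, gy):.3f} deg^2")
```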
