Similar Documents
1.
《Ergonomics》2012,55(5-6):647-660
Abstract

Adaptation experiments in shape tracing were conducted to investigate finger and eye movements under various conditions of visual and tactile information. Maximum velocity, mean velocity, maximum acceleration and the re-acceleration point were calculated from the finger movements; the number of eye fixations and the lead time of eye fixation over finger position were calculated from the eye movements. The results showed that, for the finger movements, the values of the studied indices were higher in the combined visual and tactile condition than in the visual-only condition. The number of eye fixations decreased as subjects repeated the tracing, and this decrease was more marked in the combined visual and tactile condition than in the visual-only condition. The results suggest that finger movements become faster and the use of vision is reduced when both visual and tactile information are given.
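The finger-movement indices named in this abstract (maximum velocity, mean velocity, maximum acceleration and the re-acceleration point) can be derived from sampled finger positions by numerical differentiation. A minimal Python sketch, assuming 2-D position samples at a fixed sampling rate; the original processing pipeline is not specified in the abstract, so the re-acceleration criterion below is an illustrative choice:

```python
import numpy as np

def finger_kinematics(xy, fs=200.0):
    """Compute tracing indices from 2-D finger positions sampled at fs Hz.

    xy : (n, 2) array of finger positions (e.g. in mm).
    Returns max velocity, mean velocity, max acceleration and the index of
    the re-acceleration point (where speed starts rising again after the
    main velocity peak).
    """
    dt = 1.0 / fs
    vel = np.gradient(xy, dt, axis=0)        # velocity components per sample
    speed = np.linalg.norm(vel, axis=1)      # scalar speed (mm/s)
    acc = np.gradient(speed, dt)             # tangential acceleration (mm/s^2)

    peak = int(np.argmax(speed))
    reacc = None
    for i in range(peak + 1, len(speed) - 1):
        # first local minimum of speed after the peak = onset of re-acceleration
        if speed[i] <= speed[i - 1] and speed[i] < speed[i + 1]:
            reacc = i
            break

    return {
        "max_velocity": float(speed.max()),
        "mean_velocity": float(speed.mean()),
        "max_acceleration": float(acc.max()),
        "reacceleration_index": reacc,
    }
```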

2.
Human sensory inputs and motor outputs mutually affect one another. We pursue the idea that a tactile interface can influence human motor outputs by intervening in sensory–motor relationships. This study focuses on the shear deformation of a finger pad while a person traces a line or circle. During these tracing movements, the finger pads were deformed using a tactile interface. The tracing distances increased when the finger pad deformations were amplified by the tactile interface, which indicates that the intervention in the haptic sensorimotor loop affected the tracing movements. Elucidation of such interaction between the tracing movements and the shear deformations of finger pads enhances the understanding of human-assistive haptic techniques.

3.
《Ergonomics》2012,55(12):1667-1681
Abstract

This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, when working with an audience the participants made fewer and shorter fixations, larger saccades and shorter scan paths in simple search tasks, and more and longer fixations, smaller saccades and longer scan paths in complex search tasks. The saccade velocity and pupil diameter in the audience-present condition were larger than those in the working-alone condition. No significant change in target fixation number was observed between the two social presence conditions.

Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. The results clarify how social presence alters the pattern and characteristics of oculomotor scanning in visual search.

4.
《Ergonomics》2012,55(7):874-894
During laparoscopic surgery video images are used to guide the movements of the hand and instruments, and objects in the operating field often obscure these images. Thus, surgeons often rely heavily on tactile information (sense of touch) to help guide their movements. It is important to understand how tactile perception is affected when using laparoscopic instruments, since many surgical judgements are based on how a tissue ‘feels’ to the surgeon, particularly in situations where visual inputs are degraded. Twelve naïve participants used either their index finger or a laparoscopic instrument to explore sandpaper surfaces of various grits (60, 100, 150 and 220). These movements were generated with either vision or no vision. Participants were asked to estimate the roughness of the surfaces they explored. The normal and tangential forces of either the finger or instrument on the sandpaper surfaces were measured. Results showed that participants were able to judge the roughness of the sandpaper surfaces when using both the finger and the instrument. However, post hoc comparisons showed that perceptual judgements of surface texture were altered in the no vision condition compared to the vision condition. This was also the case when using the instrument, compared to the judgements provided when exploring with the finger. This highlights the importance of the completeness of the video images during laparoscopic surgery. Larger normal and tangential forces were applied when exploring the surfaces with the finger than with the instrument. This was probably an attempt to increase the contact area of the fingertip to maximize tactile input. With the instrument, texture was probably sensed through vibrations of the instrument in the hand. Applications of the findings lie in the field of laparoscopic surgery simulation techniques and tactile perception.

5.
During laparoscopic surgery video images are used to guide the movements of the hand and instruments, and objects in the operating field often obscure these images. Thus, surgeons often rely heavily on tactile information (sense of touch) to help guide their movements. It is important to understand how tactile perception is affected when using laparoscopic instruments, since many surgical judgements are based on how a tissue 'feels' to the surgeon, particularly in situations where visual inputs are degraded. Twelve naïve participants used either their index finger or a laparoscopic instrument to explore sandpaper surfaces of various grits (60, 100, 150 and 220). These movements were generated with either vision or no vision. Participants were asked to estimate the roughness of the surfaces they explored. The normal and tangential forces of either the finger or instrument on the sandpaper surfaces were measured. Results showed that participants were able to judge the roughness of the sandpaper surfaces when using both the finger and the instrument. However, post hoc comparisons showed that perceptual judgements of surface texture were altered in the no vision condition compared to the vision condition. This was also the case when using the instrument, compared to the judgements provided when exploring with the finger. This highlights the importance of the completeness of the video images during laparoscopic surgery. Larger normal and tangential forces were applied when exploring the surfaces with the finger than with the instrument. This was probably an attempt to increase the contact area of the fingertip to maximize tactile input. With the instrument, texture was probably sensed through vibrations of the instrument in the hand. Applications of the findings lie in the field of laparoscopic surgery simulation techniques and tactile perception.

6.
Shinar D 《Human factors》2008,50(3):380-384
OBJECTIVE: To describe the impact of Rockwell's early eye movement research. BACKGROUND: The advent of a new technology enabling measurements of eye movements in natural environments launched the seminal research of a Human Factors pioneer, Tom Rockwell, into how drivers process visual information. METHOD: In two seminal Human Factors articles, "Mapping Eye-Movement Pattern to the Visual Scene in Driving: An Exploratory Study" (Mourant & Rockwell, 1970) and "Strategies of Visual Search by Novice and Experienced Drivers" (Mourant & Rockwell, 1972), Rockwell and his student, Ron Mourant, examined drivers' eye movements in naturalistic driving environments. RESULTS: The analyses of the visual fixations revealed systematic relationships between the sources of information the drivers needed to drive safely and the spatial distributions of their visual fixations. In addition, they showed that as drivers gain skill and experience, their pattern of fixations changes in a systematic manner. CONCLUSIONS: The research demonstrated that fixations and saccadic eye movements provide important insights into drivers' visual search behavior, information needs, and information acquisition processes. APPLICATION: This research has been a cornerstone for a myriad of driving-related studies, by Rockwell and other researchers. Building on Rockwell's pioneering work, these studies used eye-tracking systems to describe cognitive aspects of skill acquisition, and the effects of fatigue and other impairments on the process of attention and information gathering. A novel and potentially revolutionary application of this research is to use eye movement recordings for vehicle control and activation of in-vehicle safety systems.

7.
M Maltz  D Shinar 《Human factors》1999,41(1):15-25
This 2-part study focuses on eye movements to explain driving-related visual performance in younger and older persons. In the first task, participants' eye movements were monitored as they viewed a traffic scene image with a numeric overlay and visually located the numbers in their sequential order. The results showed that older participants had significantly longer search episodes than younger participants, and that the visual search of older adults was characterized by more fixations and shorter saccades, although the average fixation durations remained the same. In the second task, participants viewed pictures of traffic scenes photographed from the driver's perspective. Their task was to assume the role of the driver and regard the image accordingly. Results in the second task showed that older participants allocated a larger percentage of their visual scan time to a small subset of areas in the image, whereas younger participants scanned the images more evenly. Also, older participants revisited the same areas and younger participants did not. The results suggest how aging might affect the efficacy of visual information processing. Potential applications of this research include training older drivers for a more effective visual search, and providing older drivers with redundant information in case some information is missed.

8.
《Ergonomics》2012,55(7):789-799
Abstract

The purpose of this study was to provide a basis for improving the individual visual performance of inspectors. The relationship between the correct count rate and the eye movements of subjects counting dots arranged on samples presented for different lengths of time was analysed, mainly to determine individual differences. Subjects' eye movements were measured with a corneal reflectance eye camera and analysed frame by frame with a video motion analyser. It was found that the accuracy of visual inspection does not depend on the length of search time and that a fast search time is not incompatible with a slow search speed. Furthermore, fixation time and number of fixations were considered the main factors governing the accuracy of visual inspection. When limited time is allowed for search, a search strategy of prolonging the fixation time leads to high performance and consequently shorter inspection time. Several other findings were obtained which appear important for obtaining accurate information rapidly.

9.
Attentional sequence-based recognition: Markovian and evidential reasoning
Biological vision systems explore their environment by allocating their visual resources to only the interesting parts of a scene. This is achieved by a selective attention mechanism that controls eye movements. The data thus generated are a sequence of subimages from different locations, and hence a sequence of features extracted from those images, referred to as an attentional sequence. In higher-level visual processing leading to scene cognition, it is hypothesized that the information contained in attentional sequences is combined and utilized by special mechanisms that are still poorly understood. However, developing models of such mechanisms proves crucial if we are to understand and mimic this behavior in robotic systems. In this paper, we consider the recognition problem and present two approaches to using attentional sequences for recognition: Markovian and evidential reasoning. Experimental results with our mobile robot APES reveal that simple shapes can be modeled and recognized by these methods using as few as ten fixations and very simple features. For more complex scenes, longer attentional sequences or more sophisticated features may be required for cognition.
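A hedged sketch of the Markovian route described above: each shape class is modelled as a first-order Markov chain over a discretised attentional-feature alphabet, and a new attentional sequence is assigned to the class whose chain gives it the highest log-likelihood. The discretisation, smoothing and class set are illustrative assumptions, not the authors' exact formulation (and the evidential-reasoning variant is not shown):

```python
import numpy as np

class MarkovShapeModel:
    """First-order Markov chain over a discrete attentional-feature alphabet."""

    def __init__(self, n_symbols, alpha=1.0):
        self.n = n_symbols
        self.alpha = alpha                              # Laplace smoothing
        self.init_counts = np.zeros(n_symbols)
        self.trans_counts = np.zeros((n_symbols, n_symbols))

    def fit(self, sequences):
        """sequences: iterable of lists of symbol indices (one list per attentional sequence)."""
        for seq in sequences:
            self.init_counts[seq[0]] += 1
            for a, b in zip(seq[:-1], seq[1:]):
                self.trans_counts[a, b] += 1
        return self

    def log_likelihood(self, seq):
        init = self.init_counts + self.alpha
        init = init / init.sum()
        trans = self.trans_counts + self.alpha
        trans = trans / trans.sum(axis=1, keepdims=True)
        ll = np.log(init[seq[0]])
        for a, b in zip(seq[:-1], seq[1:]):
            ll += np.log(trans[a, b])
        return ll


def recognise(sequence, models):
    """Assign the attentional sequence to the class whose Markov model explains it best."""
    return max(models, key=lambda name: models[name].log_likelihood(sequence))
```

With, for example, `models = {"triangle": MarkovShapeModel(8).fit(triangle_seqs), "square": MarkovShapeModel(8).fit(square_seqs)}` (hypothetical training sets), `recognise(new_seq, models)` returns the most likely shape label; with only ten fixations a sequence contains just nine transitions, which is why a small alphabet and heavy smoothing are used here.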

10.
《Advanced Robotics》2013,27(9-10):1271-1294
This study develops a method to compensate for the communication time delay in tactile transmission systems. When transmitting tactile information from remote sites, the communication time delay degrades the validity of feedback; however, time-delay compensation methods for tactile transmission have not yet been proposed. For visual or force feedback systems, local models of remote environments have been adopted to compensate for the communication delay. The local models cancel the perceived time delay in sensory feedback signals by synchronizing them with the users' operating movements. The objectives of this study are to extend the idea of the local model to tactile feedback systems and to develop a system that delivers the tactile roughness of textures from remote environments to the users of the system. The local model for tactile roughness is designed to reproduce the characteristic cutaneous deformations, including vibratory frequencies and amplitudes, similar to those that occur when a human finger scans rough textures. Physical properties in the local model are updated in real time by a tactile sensor installed on the slave-side robot. Experiments to deliver the perceived roughness of textures were performed using the developed system. The results showed that the developed system can deliver the perceived roughness of textures. When a communication time delay was simulated, it was confirmed that the developed system eliminated the time delay perceived by the operators. This study concludes that the developed local model is effective for remote tactile transmission.
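The abstract does not give the local model's equations. A common simplification when rendering texture roughness, shown here only as an illustrative sketch, is to drive the vibratory stimulus at a temporal frequency equal to the scanning speed divided by the texture's dominant spatial period, with the amplitude taken from the sensed texture; the parameter names and units below are assumptions:

```python
import numpy as np

def roughness_vibration(scan_speed, spatial_period, amplitude, duration, fs=2000.0):
    """Illustrative local-model output for remote texture roughness.

    scan_speed     : finger scanning speed over the texture (mm/s)
    spatial_period : dominant spatial period of the texture (mm), e.g. as
                     estimated by the slave-side tactile sensor
    amplitude      : vibration amplitude to render (device units)
    Returns a waveform sampled at fs Hz whose frequency follows
    f = scan_speed / spatial_period, so faster scanning produces a
    higher-frequency vibration, as when a real finger scans a rough surface.
    """
    f = scan_speed / spatial_period
    t = np.arange(0.0, duration, 1.0 / fs)
    return amplitude * np.sin(2.0 * np.pi * f * t)
```

Because the waveform is generated locally from the user's own scanning speed, it stays synchronized with the operating movements even when sensor updates from the remote site arrive with a delay, which is the essence of the local-model idea.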

11.
We present a computational cognitive model of novice and expert aviation pilot action planning called ADAPT that models performance in a dynamically changing simulated flight environment. We perform rigorous tests of ADAPT's predictive validity by comparing the performance of individual human pilots to that of their respective models. Individual pilots were asked to execute a series of flight maneuvers using a flight simulator, and their eye fixations and control movements were recorded in a time-synched database. Computational models of each of the 25 individual pilots were constructed, and the individual models simulated execution of the same flight maneuvers performed by the human pilots. The time-synched eye fixations and control movements of individual pilots and their respective models were compared, and rigorous tests of ADAPT's predictive validity were performed. The model explains and predicts a significant portion of pilot visual attention and control movements during flight as a function of piloting expertise. Implications for adaptive training systems are discussed.

12.
This study investigated the effects of non-obtrusive feedback on continuous lifted hand/finger behaviour, task performance and comfort. In an experiment with 24 participants, the effects of two visual and two tactile feedback signals were compared to a no-feedback condition in a computer task. Results from the objective measures showed that all types of feedback were equally effective in reducing lifted hand/finger behaviour (effectiveness) compared to the absence of feedback, while task performance was not affected (efficiency). In contrast to the objective measures, subjective user experience differed significantly across the four types of feedback signals. Continuous tactile feedback appeared to be the best signal: not only were its effectiveness and efficiency rated as reasonable, it also scored best on the perceived match between signal and required action. This study shows the importance of including user experiences when investigating the usability of feedback signals. Non-obtrusive feedback embedded in products and environments may successfully be used to support office workers in adopting healthy, productive and comfortable working behaviour.

13.
For black-and-white alphanumeric information, the speed of visual perception decreases with decreasing contrast. We investigated the effect of luminance contrast on the speed of visual search and reading when characters and background also differed with respect to colour. The luminance contrast between background and characters was varied, while colour contrast was held nearly constant. Stimuli with moderate (green/grey) or high colour contrast (green/red or yellow/blue), and three character sizes (0.17, 0.37, and 1.26 deg), were used. Eye movements were recorded during the visual search task. We found that visual search times, the number of eye fixations, and mean fixation durations increased strongly with decreasing luminance contrast despite the presence of colour contrast. The effects were largest for small characters (0.17 deg), but occurred also for medium (0.37 deg) and, in some cases, large (1.26 deg) characters. Similarly, reading rates decreased with decreasing luminance contrast. Thus, moderate or even high colour contrast does not guarantee quick visual perception if the luminance contrast between characters and background is small. This is probably due to the fact that visual acuity (the ability to see small details) is considerably lower for pure colour information than for luminance information. Therefore, in user interfaces, good visibility of alphanumeric information requires a clear luminance (brightness) difference between foreground and background.
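The abstract does not state which contrast definition was used; character/background luminance contrast is commonly quantified with the Michelson or Weber formulas, reproduced here only for reference:

```latex
C_{\mathrm{Michelson}} = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}},
\qquad
C_{\mathrm{Weber}} = \frac{L_{\mathrm{character}} - L_{\mathrm{background}}}{L_{\mathrm{background}}}
```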

14.
Understanding the attentional behavior of the human visual system when visualizing a rendered 3D shape is of great importance for many computer graphics applications. Eye tracking remains the only solution to explore this complex cognitive mechanism. Unfortunately, despite the large number of studies dedicated to images and videos, only a few eye tracking experiments have been conducted using 3D shapes. Thus, potential factors that may influence the human gaze in the specific setting of 3D rendering, are still to be understood. In this work, we conduct two eye‐tracking experiments involving 3D shapes, with both static and time‐varying camera positions. We propose a method for mapping eye fixations (i.e., where humans gaze) onto the 3D shapes with the aim to produce a benchmark of 3D meshes with fixation density maps, which is publicly available. First, the collected data is used to study the influence of shape, camera position, material and illumination on visual attention. We find that material and lighting have a significant influence on attention, as well as the camera path in the case of dynamic scenes. Then, we compare the performance of four representative state‐of‐the‐art mesh saliency models in predicting ground‐truth fixations using two different metrics. We show that, even combined with a center‐bias model, the performance of 3D saliency algorithms remains poor at predicting human fixations. To explain their weaknesses, we provide a qualitative analysis of the main factors that attract human attention. We finally provide a comparison of human‐eye fixations and Schelling points and show that their correlation is weak.
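One way to turn mapped fixations into a per-vertex fixation density map is sketched below, under the assumption that each fixation has already been projected to a 3-D hit point on the mesh surface; the Gaussian spread, the normalisation and the use of Euclidean rather than geodesic distance are simplifying assumptions, not the authors' published procedure:

```python
import numpy as np

def fixation_density_map(vertices, fixation_points, sigma=0.02):
    """Accumulate a Gaussian-weighted fixation density over mesh vertices.

    vertices        : (V, 3) array of mesh vertex positions
    fixation_points : (F, 3) array of fixation hit points on the surface
    sigma           : spatial spread of each fixation (same units as the mesh)
    Returns a (V,) density map normalised to [0, 1].
    """
    density = np.zeros(len(vertices))
    for p in fixation_points:
        d2 = np.sum((vertices - p) ** 2, axis=1)     # squared distance to every vertex
        density += np.exp(-d2 / (2.0 * sigma ** 2))  # spread the fixation over nearby vertices
    if density.max() > 0:
        density /= density.max()
    return density
```

A geodesic (on-surface) distance would avoid density bleeding across thin parts of the shape, at the cost of a considerably more expensive computation.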

15.
Murata A  Furukawa N 《Human factors》2005,47(3):598-612
The relative contribution of number of fixations and fixation duration to reaction time in visual search was investigated. Ten participants (age 20-24 years) took part in each of two experiments. In Experiment 1, the experimental factors were display type (icon and file name), organization (arrangements with and without grouping), and number of stimuli presented (4, 8, and 16). In Experiment 2, a search task for a target stimulus (three prespecified random letters) was conducted, and the experimental factor was the display's layout complexity. Multiple regression analysis was used to examine whether reaction time was explained by a mediational model in which reaction time is mediated by eye movements and display features are not directly related to reaction time. The mediational model was not supported, and the effects of display features on reaction time were not attributable solely to eye movements. The interaction between number of fixations and fixation duration was also explored as a function of display features. As the display feature changed and the task became more difficult, the contribution of the number of fixations to explain the variation in reaction time became dominant for both experiments. Potential applications include measurements of cognitive ability, eye muscle balance disorders, and binocular fusion ability.
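A minimal sketch of the kind of mediational test described, in the Baron-and-Kenny style with ordinary least squares: the display feature predicts the eye-movement mediator (path a), the mediator predicts reaction time controlling for the feature (path b), and the direct effect c' indicates whether the display feature still matters once eye movements are accounted for. The single-mediator simplification and the variable names are assumptions for illustration, not the authors' exact model:

```python
import numpy as np

def ols(y, X):
    """Ordinary least squares with an intercept; returns the coefficients of X's columns."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

def mediation_paths(display_feature, eye_movements, reaction_time):
    """Estimate the classic mediation paths for one display feature and one eye-movement mediator."""
    a = ols(eye_movements, display_feature.reshape(-1, 1))[0]     # feature -> mediator
    c = ols(reaction_time, display_feature.reshape(-1, 1))[0]     # total effect
    both = ols(reaction_time, np.column_stack([display_feature, eye_movements]))
    c_prime, b = both[0], both[1]                                 # direct effect, mediator -> outcome
    return {"a": a, "b": b, "c": c, "c_prime": c_prime, "indirect": a * b}
```

A fully mediational model would predict a direct effect c' close to zero; the study's finding that display-feature effects were not attributable solely to eye movements corresponds to a non-negligible c'.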

16.
This work explores how people use visual feedback when performing simple reach-to-grasp movements in a tabletop virtual environment. In particular we investigated whether visual feedback is required for the entire reach or whether minimal feedback can be effectively used. Twelve participants performed reach-to-grasp movements toward targets at two locations. Visual feedback about the index finger and thumb was provided in four conditions: vision available throughout the movement, vision available up to peak wrist velocity, vision available until movement initiation, or vision absent throughout the movement. It was hypothesized that vision available until movement onset would be an advantage over a no vision situation yet not attain the performance observed when vision was available up to peak velocity. Results indicated that movement time was longest in the no vision condition but similar for the three conditions where vision was available. However, deceleration time and peak aperture measures suggest grasping is more difficult when vision is not available for at least the first third of the movement. These results suggest that designers of virtual environments can manipulate the availability of visual feedback of one's hand without compromising interactivity. This may be applied, for example, when detailed rendering of other aspects of the environmental layout is more important, when motion lag is a problem or when hand/object concealment is an issue.

17.
《Ergonomics》2012,55(11):1871-1884
Abstract

This paper reports on two experiments in which subjects' eye movement behaviour was monitored while they searched for target information in colour coded and monochrome horizontal situation indicator (HSI) displays. The first experiment required subjects to locate and report alphanumeric information associated with the active waypoint on the displayed flightpath. Initial fixations in the display were more accurately directed to the target information when it was redundantly colour coded compared with when it was coded by shape and relative positional codes. Fewer fixations and a shorter time were required to locate the colour coded target and verbally report the relevant information. The time advantage of colour coded displays compared with monochrome displays was greatest for visually cluttered displays. In the second study there was no advantage of a coloured display when the task was to count all the displayed waypoint symbols on the flight path. The lack of any benefit for colour coding was a result of waypoint symbols having strong positional predictability due to their relationship to the displayed flightpath in both the colour and monochrome displays. The implication from these results is that colour coded information confers an advantage over a spatial code for targets at unknown spatial location but less benefit when target location can be predicted by other visual cues.

18.
《Ergonomics》2012,55(5):689-697
The hypothesis was tested that the peak velocity of saccadic eye movements in visual motor tasks varies with variables related to energy regulation. The hypothesis is based on the cognitive-energetical performance model of Sanders. An experimental paradigm was developed in which the saccadic peak velocity of task-relevant eye movements is measured while a choice reaction task is carried out. Confounding factors of saccadic amplitude and movement direction were controlled. The task was designed in such a way that in each trial subjects performed a target saccade towards an imperative stimulus and, after the manual response, a return saccade back to the centre of the screen. For both types of saccades the experimental variables were foreperiod duration (short versus long), knowledge of results (with versus without), postsaccadic demand (low versus high) and time on task (five 30-min intervals). In two experiments, the task variables show main and interaction effects on peak saccadic velocity. Return saccades are slower than target saccades, but not in the case of high postsaccadic demand. Knowledge of results increases peak saccadic velocity, but more so for return than for target saccades. Time on task leads to a decrease in peak saccadic velocity, which is much stronger for return than for target saccades; furthermore, this effect is more pronounced after short than after long foreperiods. Peak saccadic velocity changes within seconds. The results support the hypothesis: the peak saccadic velocity of task-related eye movements reflects energy regulation during task performance. The paradigm will be developed as a diagnostic tool in workload measurement.
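Peak saccadic velocity is typically obtained by differentiating the gaze signal and reading off the maximum speed within each velocity-threshold-detected saccade. A minimal sketch; the 30 deg/s criterion and the sampling rate are assumed values for illustration, not the settings used in the paper:

```python
import numpy as np

def saccade_peak_velocities(gaze_deg, fs=500.0, threshold=30.0):
    """Detect saccades with a simple velocity threshold and return their peak velocities.

    gaze_deg  : (n, 2) gaze position in degrees of visual angle
    fs        : sampling rate (Hz)
    threshold : speed criterion (deg/s) separating saccades from fixations
    """
    vel = np.gradient(gaze_deg, 1.0 / fs, axis=0)
    speed = np.linalg.norm(vel, axis=1)                 # deg/s
    in_saccade = speed > threshold

    peaks, start = [], None
    for i, flag in enumerate(in_saccade):
        if flag and start is None:
            start = i                                   # saccade onset
        elif not flag and start is not None:
            peaks.append(float(speed[start:i].max()))   # peak velocity of this saccade
            start = None
    if start is not None:                               # saccade still in progress at end of record
        peaks.append(float(speed[start:].max()))
    return peaks
```

In practice the raw velocity signal is usually low-pass filtered before thresholding, and saccade amplitude and direction would be logged alongside peak velocity so that the confounds mentioned above can be controlled.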

19.
The goal of this study is to examine the effects of time pressure and feedback on learning performance, as mediated by eye movement. Time pressure is one of the main causes of human error in the workplace. Providing participants with feedback about their performance before task completion has been shown to reduce human error in diverse domains. Since both time pressure and feedback induce motivation, which is closely related to attention, we measured participants' eye movements to trace their attention and information acquisition coupled with a visual display. Time-to-deadline (long and short) and the presence of feedback were the independent factors used while measuring participants' performance and eye movements as they learned new information about the subject of project management and answered multiple-choice questions via self-paced online learning systems. Using structural equation modeling, we found a mediating effect of eye movement on the relationships among time-to-deadline, feedback, and learning performance. Insufficient time-to-deadline increased the number of fixations on the screen, which resulted in longer task completion times and increased correct rates for participants learning about project management. The models in this study suggest the possibility of predicting performance from eye movement under time-to-deadline and feedback conditions. The structural equation model in the study can be applied to online and remote learning systems, in which time management is one of the main challenges for individual learners.

20.
This study investigates the characteristics of eye movements during a camouflaged target search task. Camouflaged targets were randomly presented on two natural landscapes. The performance of each camouflage design was assessed by target detection hit rate, detection time, number of fixations on the display, first saccade amplitude to the target, number of fixations on the target, fixation duration on the target, and subjective ratings of search task difficulty. The results showed that the camouflage patterns could significantly affect eye-movement behavior, especially first saccade amplitude and fixation duration, and the findings could be used to increase the sensitivity of camouflage assessment. We hypothesized that the assessment could be based on differences in the detectability and discriminability of the camouflage patterns, which could explain the less efficient search behavior observed in the eye movements. Overall, data obtained from eye movements can significantly enhance the interpretation of the effects of different camouflage designs.
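The performance measures listed above can be summarised per camouflage pattern from simple per-trial records. A sketch assuming each trial stores a hit flag, the detection time and the eye-movement measures under the field names shown, which are placeholders rather than the study's actual variable names:

```python
from statistics import mean

def summarise_pattern(trials):
    """Aggregate camouflage-assessment measures over all trials of one pattern.

    Each trial is a dict such as:
    {"hit": True, "detection_time": 2.31, "first_saccade_amplitude": 5.4,
     "fixations_on_target": 3, "fixation_duration_on_target": 0.42}
    """
    hits = [t for t in trials if t["hit"]]
    summary = {"hit_rate": len(hits) / len(trials)}
    for key in ("detection_time", "first_saccade_amplitude",
                "fixations_on_target", "fixation_duration_on_target"):
        summary["mean_" + key] = mean(t[key] for t in hits) if hits else None
    return summary
```

Lower hit rates and longer detection times for a pattern would then indicate more effective camouflage, with the eye-movement measures adding sensitivity to the comparison, as the abstract suggests.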
