Similar Literature

1.
In this series of experiments, we investigated whether a crude representation of the hand that was extinguished at movement onset improved performance when compared to a no-feedback situation. Subjects performed simple reach-to-grasp movements in a virtual environment in two experiments. In Experiment 1, trials were blocked so that subjects were aware that a graphical representation of the hand would either be available throughout the movement (FA), be removed at movement onset (FAB), or not be available (NF). In Experiment 2, trials were randomized so that subjects were unaware of whether feedback would be available throughout the trial or removed at movement onset. Our results indicated that when subjects were aware of the availability of graphical feedback, the FAB condition improved performance compared to the NF condition. Furthermore, movement time was similar in the two feedback-available conditions (FA, FAB). In contrast, for the randomized trial presentation, the positive influence of the FAB condition was diminished. These results suggest that visual feedback available prior to movement onset can be used to calibrate the proprioceptive system and improve performance over a no-feedback situation. These results can be applied by designers of virtual environments to solve problems related to occlusion of important environmental information by the hand as users reach to grasp and manipulate objects.

2.
Visual feedback about one's own movements can be important for effective performance in natural and computer-generated environments. Previous work suggested that, for young adults, performance in virtual environments was influenced by the presence of a crude representation of the hand in a task-specific fashion. The current study was performed to determine whether this pattern of results holds across the lifespan. Specifically, we were interested in determining whether a representation of the hand is useful for movement performance in children, middle-aged adults, and older adults when they reach to grasp an object, transfer it between their two hands, or receive it from a partner. Surprisingly, visual feedback condition had very little effect on performance in the current experiment, with the exception that participants in all three age groups altered how wide they opened their hand when grasping depending on visual condition. Patterns of reach and grasp performance changed across task depending on age, with older adults displaying different patterns than children and middle-aged adults. These results suggest that feedforward planning and the use of feedback are modulated by both age and task.

3.
The aim of this experiment was to test the influence of target context on adaptation to scale perturbations introduced by a video display. Participants performed pointing movements without direct vision of their moving hand, although they could see their movements on a video display. Their perceived movements could be reduced, enlarged, or displayed at their actual size. Three target contexts were compared: dark surround, illuminated frame, and familiar object. Movements were executed with or without vision of hand displacement. Results showed that target context enhanced an allocentric coding of the movement, which improved movement execution. However, the effect of target context changed depending on whether or not the displacement of the hand was available. Overall, the results suggest that target context allowed the extraction of dynamic information about movements, which is used to program and control movements. This suggests that target context could be used efficiently to improve spatial accuracy and speed in teleoperation learning. Potential applications include the reduction of difficulties encountered during teleoperation learning through the introduction of visual context.

4.
M Akamatsu 《Ergonomics》1992,35(5-6):647-660
Adaptation experiments in shape tracing were conducted to investigate finger and eye movements under various conditions of visual and tactile information. Maximum velocity, mean velocity, maximum acceleration and reacceleration point were calculated from finger movements. Number of eye fixations and lead time of eye fixation to finger position were calculated from eye movements. The results showed that for the finger movement the values of the indices studied were higher in the combined visual and tactile condition than in the visual-only condition. The number of eye fixations decreased when subjects repeated the tracing, and this decrease was more marked in the combined visual and tactile condition than in the visual-only condition. The results suggest that finger movements become faster and the use of vision is reduced when both visual and tactile information are given.

5.
The purpose of this experiment was to investigate the fine motor performance of young and older adults on a reach-to-grasp task in a desktop virtual environment with increasing precision requirements. Aging brings about potential loss of an individual's function due to disease, injury, or the degenerative nature of aging itself. Three-dimensional virtual environments have been identified as systems with good potential to ameliorate such problems in older individuals, and precise fine motor skills represent an important class of functional skills. Two groups of participants (Young: n=10, mean age 21.3 years, range 20–24; Senior: n=10, mean age 70.7 years, range 60–85) performed a reach-to-grasp task in a desktop virtual environment with simple, low-contrast graphics. Results indicate that visual feedback of the hand for sensory guidance of movement did not improve motor performance for either group, and that as precision requirements of the task increased, age-group differences in movement time and peak grasp aperture also increased. These findings extend the literature on age-group differences in human motor control across the lifespan and differ from previous studies, which showed that the presence of visual feedback of the hand improved motor performance in young adults. Differences in luminance contrast levels between past studies and the current one suggest that control over this feature of the visual scene is an important design consideration for all end-users and warrants additional investigation. Additional recommendations for age-specific design of three-dimensional user interfaces include usage of tangibles that are sufficiently large to limit detrimental effects for older adults.

6.
Gesture-based systems allow users to interact with a virtual reality application in a natural way. Visual feedback for a gesture-based interaction technique has an impact on performance, and hand instability makes the manipulation of objects less precise. This paper investigated two new interaction techniques in a virtual environment. It describes the influence of natural and non-natural virtual feedback in the selection process using the GITDVR-G interaction technique, which provides grasping visual feedback. The GITDVR-G was evaluated in a virtual knee surgery training system. The results showed that it was effective in terms of task completion time, and that the participants preferred the natural grasping visual feedback. In addition, precise manipulation in a newly designed interaction technique (Precise GITDVR-G) was evaluated. The Precise GITDVR-G includes a normal manipulation mode and a precise manipulation mode that can be triggered by hand gestures. During the precise manipulation mode, an inset view appears and moves with the selected object to provide a better view to users, while the movements of the virtual hand are scaled down to improve precision. Four different configurations of the precise manipulation technique were evaluated, and the results showed that the unimanual control method with an inset view performed better in terms of task completion time and subjective feedback. The findings suggest that realistic virtual grasping visual feedback can be applied in a virtual hand interaction technique, and that the inset view feature is helpful for precise manipulation.
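
The abstract does not give implementation details for the precise manipulation mode; the following minimal Python sketch only illustrates the general idea of scaling down tracked hand motion while that mode is active. The function name and the 0.25 gain are illustrative assumptions, not values from the paper.

import numpy as np

PRECISE_SCALE = 0.25  # assumed down-scaling factor for precise mode; not specified in the paper

def map_hand_to_virtual(prev_virtual_pos, prev_hand_pos, hand_pos, precise_mode):
    """Map a tracked hand displacement to a virtual-hand displacement.

    In normal mode the virtual hand follows the tracked hand 1:1.
    In precise mode the displacement is scaled down so small hand
    movements produce even smaller virtual-hand movements.
    """
    delta = np.asarray(hand_pos, dtype=float) - np.asarray(prev_hand_pos, dtype=float)
    gain = PRECISE_SCALE if precise_mode else 1.0
    return np.asarray(prev_virtual_pos, dtype=float) + gain * delta

# Example: in precise mode a 4 cm hand motion moves the virtual hand only 1 cm.
print(map_hand_to_virtual([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.04, 0.0, 0.0], precise_mode=True))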

7.
This study investigated the influence of the type of visual feedback during practice with a complex visuo-motor transformation of a sliding two-sided lever on the acquisition of an internal model of the transformation. Three groups of participants, who practised with different types of visual feedback, were compared with regard to movement accuracy, curvature and movement time. One group had continuous visual feedback during practice, and two groups were given terminal visual feedback: either only the end position of the movement, or the end position together with the trajectory of the cursor. Results showed that continuous visual feedback led to more precise movement end positions during practice than terminal visual feedback, but to less precise movements during open-loop tests. This finding indicates that terminal visual feedback supports the development of a precise internal model of a new visuo-motor transformation. However, even terminal feedback of the cursor trajectory during practice did not result in an internal model that includes appropriate curvatures of hand movements. STATEMENT OF RELEVANCE: This paper presents results on the influence of the type of visual feedback on learning the complex motor skill of controlling a sliding lever. These findings contribute to the conceptual basis of optimised training procedures for the acquisition of sensori-motor skills required for the mastery of instruments utilised in minimally invasive surgery.

8.
A new virtual-hand interaction method based on a nonlinear spring model
Virtual-hand interaction techniques play an important role in applications such as human-computer interaction and ergonomics testing. To achieve intuitive, natural, real-time and accurate interaction between a virtual hand and virtual objects that approximates the real world, and to compute the feedback force, a nonlinear spring model is first proposed for calculating the grasping force, enabling physically based interaction between the virtual hand and the virtual environment. The computed results are then fed back to the user through visual rendering, and the simulation rate is analysed quantitatively so that it can meet the requirements of the screen refresh rate and the force-feedback update rate. Experimental results show that the virtual hand can grasp three-dimensional virtual objects intuitively and naturally, interact with them in real time, and compute the feedback force at the same time.
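
The exact form of the nonlinear spring model is not reproduced in the abstract; the Python sketch below assumes a simple power-law spring on penetration depth purely to illustrate how a physically based grasp force and its feedback direction could be computed. All names and parameter values are illustrative, not the paper's.

import numpy as np

def grasp_force(penetration_depth, stiffness=200.0, exponent=1.5):
    """Nonlinear spring force for a fingertip penetrating a virtual object.

    The force grows faster than linearly with penetration depth, so deeper
    penetration is resisted disproportionately. Stiffness and exponent are
    placeholders, not the parameters used in the paper.
    """
    depth = max(penetration_depth, 0.0)          # no force when not in contact
    return stiffness * depth ** exponent

def fingertip_feedback(fingertip_pos, surface_point, surface_normal):
    """Compute a feedback force vector along the object's outward surface normal."""
    n = np.asarray(surface_normal, dtype=float)
    n /= np.linalg.norm(n)
    depth = float(np.dot(np.asarray(surface_point, dtype=float) - np.asarray(fingertip_pos, dtype=float), n))
    return grasp_force(depth) * n                 # pushes the fingertip back out

# Example: fingertip 5 mm inside a surface whose outward normal is +z.
print(fingertip_feedback([0.0, 0.0, -0.005], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]))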

9.
This study aims to determine whether an indirect touch device can be used to interact with graphical objects displayed on another screen in an air traffic control (ATC) context. The introduction of such a device likely requires an adaptation of the sensory-motor system: the operator has to perform movements on the horizontal plane while simultaneously assessing them on the vertical plane. Thirty-six right-handed participants performed movement training with either constant or variable practice and with or without visual feedback of the displacement of their actions. Participants then performed a test phase without visual feedback. Performance improved in both practice conditions, but accuracy was higher with visual feedback. During the test phase, movement time was longer for those who had practiced with feedback, suggesting an element of dependency. However, this 'cost' of feedback did not extend to movement accuracy. Finally, participants who had received variable training performed better in the test phase, but accuracy was still unsatisfactory. We conclude that continuous visual feedback on the stylus position is necessary if tablets are to be introduced in ATC.

10.
Pennel I  Coello Y  Orliaguet JP 《Ergonomics》2002,45(15):1047-1077
The present study (N=56) investigated the spatio-temporal accuracy of horizontal reaching movements controlled visually through a vertical video monitor. Direct vision of the hand was precluded, and the direction of hand trajectory, as perceived on the video screen, was varied by changing the angle of the camera. The orientation of the visual scene displayed on the fronto-parallel plane was thus congruent (0° condition) or non-congruent (directional bias of 15°, 30° or 45° counterclockwise) with respect to the horizontal working space. The goal of this study was to determine whether local learning of a directional bias can be transferred to other locations in the working space, taking into account the magnitude of the directional bias (15°, 30° or 45°) and the position of the successive objectives (targets at different distances (TDD) or different azimuths (TDA)). Analysis of the spatial accuracy of pointing movements showed that when introducing a directional bias, terminal angular error was linearly related to the amount of angular perturbation (around 30%). Seven trials were, on average, necessary to eliminate this terminal error, whatever the magnitude of the directional bias and the position of the successive targets. When changing the location of the spatial objective, transfer of adaptation was achieved in the TDD condition but remained partial in the TDA condition. Furthermore, the initial orientation of the trajectory suggested that some participants used a hand-centred frame of reference whereas others used an external one to specify the movement vector. The adaptation process differed as a function of the frame of reference used, but only in the TDA condition. Adaptation for participants using a hand-centred frame of reference was more concerned with changes in the shape of the trajectory, whereas participants using an external frame of reference adapted their movement by updating the initial direction of the hand trajectory. As a whole, these findings suggest that the processes involved in remote visual control of hand movement are complex, with the result that tasks requiring video-controlled manipulation, such as video-controlled surgery, require specific spatial abilities in actors and consequential plasticity of their visuo-motor system, in particular concerning the selection of the frame of reference for action.

11.
Hand dysfunction caused by hand injuries, strokes, or other neurological degenerative diseases such as cervical spondylosis is being increasingly reported. Currently, hand function assessments for diagnosis or rehabilitation are primarily based on qualitative scales, which are subjective and may vary considerably depending on the expertise of the attending clinician. Although wearable sensors and computer vision techniques have been proposed to obtain quantitative hand movement information, both have limitations. In this study, a multiview video tracking and recording system was set up using high-speed cameras and mapping of actual hand movements. The state-of-the-art software DeepLabCut was used to obtain precise 2D and 3D finger joint positions. Kinematic parameters, such as movement count, period, phase, and Pearson coefficient, were used to characterize hand movement based on the relative distance-time curves of finger joints. Experimental results in a clinical setting showed that this video-based image-recognition neural network method can accurately distinguish healthy from dysfunctional hand movements. The proposed system is inexpensive, easy to set up and use, and exhibits high accuracy. Thus, it can revolutionize medical hand motion analysis and spur the development of automated quantitative systems for early hand-related disorder detection.
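
As a rough illustration of the kinematic parameters mentioned above, the following Python sketch derives a relative distance-time curve from two joint trajectories and computes a movement count, a mean period, and a Pearson coefficient. The peak-detection settings, the SciPy calls, and the synthetic example data are assumptions, not the paper's pipeline.

import numpy as np
from scipy.signal import find_peaks
from scipy.stats import pearsonr

def relative_distance_curve(joint_a, joint_b):
    """Distance-time curve between two 3D joint trajectories of shape (T, 3)."""
    return np.linalg.norm(np.asarray(joint_a) - np.asarray(joint_b), axis=1)

def movement_metrics(curve, fps):
    """Movement count and mean period (s) from peaks of a distance-time curve."""
    peaks, _ = find_peaks(curve, prominence=0.1 * np.ptp(curve))
    count = len(peaks)
    period = np.mean(np.diff(peaks)) / fps if count > 1 else np.nan
    return count, period

def similarity(curve_a, curve_b):
    """Pearson coefficient between two equal-length distance-time curves."""
    r, _ = pearsonr(curve_a, curve_b)
    return r

# Example with a synthetic 2 Hz tapping-like curve sampled at 100 fps:
t = np.arange(0, 3, 0.01)
curve = 0.05 + 0.02 * np.sin(2 * np.pi * 2 * t)
print(movement_metrics(curve, fps=100))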

12.
Tanaka H  Tai M  Qian N 《Neural computation》2004,16(10):2021-2040
We investigated the differences between two well-known optimization principles for understanding movement planning: the minimum variance (MV) model of Harris and Wolpert (1998) and the minimum torque change (MTC) model of Uno, Kawato, and Suzuki (1989). Both models accurately describe the properties of human reaching movements in ordinary situations (e.g., nearly straight paths and bell-shaped velocity profiles). However, we found that the two models can make very different predictions when external forces are applied or when the movement duration is increased. We considered a second-order linear system for the motor plant that has been used previously to simulate eye movements and single-joint arm movements and were able to derive analytical solutions based on the MV and MTC assumptions. With the linear plant, the MTC model predicts that the movement velocity profile should always be symmetrical, independent of the external forces and movement duration. In contrast, the MV model strongly depends on the movement duration and the system's degree of stability; the latter in turn depends on the total forces. The MV model thus predicts a skewed velocity profile under many circumstances. For example, it predicts that the peak location should be skewed toward the end of the movement when the movement duration is increased in the absence of any elastic force. It also predicts that with appropriate viscous and elastic forces applied to increase system stability, the velocity profile should be skewed toward the beginning of the movement. The velocity profiles predicted by the MV model can even show oscillations when the plant becomes highly oscillatory. Our analytical and simulation results suggest specific experiments for testing the validity of the two models.
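
For reference, a common way to write the two criteria for a single-joint, second-order linear plant; the notation is generic and may differ from the paper's:

    I\ddot{\theta}(t) + B\dot{\theta}(t) + K\theta(t) = \tau(t)

    C_{\mathrm{MTC}} = \frac{1}{2}\int_{0}^{T}\left(\frac{d\tau}{dt}\right)^{2} dt

    C_{\mathrm{MV}} = \int_{T}^{T+T_{p}} \mathrm{Var}\big[\theta(t)\big]\, dt

The MTC criterion penalizes the rate of change of joint torque over a movement of duration T, whereas the MV criterion assumes signal-dependent noise (standard deviation proportional to the motor command) and minimizes the positional variance accumulated over a post-movement period of length T_p.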

13.
A virtual-hand simulation model is proposed for ergonomics testing, and a kinematic model of the virtual hand is established. A curve-fitting calibration method is proposed to obtain accurate data and better control the motion of the virtual hand. A spring model is built to compute the forces between the hand and the grasped object, and the result is fed back to the user through visual rendering. Experimental results show that the virtual hand can grasp three-dimensional virtual objects naturally, and that the use of visual feedback avoids the need to purchase expensive force-feedback devices.
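
The abstract does not state which curve family is fitted; the sketch below assumes a low-order polynomial per glove sensor, fitted with numpy.polyfit against reference postures, purely to illustrate curve-fitting calibration. The reference data are made up.

import numpy as np

def fit_calibration(raw_readings, measured_angles, degree=2):
    """Fit a per-sensor polynomial mapping raw glove readings to joint angles.

    raw_readings and measured_angles are 1D arrays collected while the user
    holds a set of known reference postures. Returns polynomial coefficients.
    """
    return np.polyfit(raw_readings, measured_angles, degree)

def apply_calibration(coeffs, raw_value):
    """Convert a raw sensor value to a joint angle using the fitted curve."""
    return np.polyval(coeffs, raw_value)

# Example with made-up reference data (flat hand through clenched fist):
raw = np.array([120, 300, 520, 760, 900])        # raw bend-sensor values
angle = np.array([0.0, 20.0, 45.0, 70.0, 90.0])  # measured joint angles (deg)
coeffs = fit_calibration(raw, angle)
print(apply_calibration(coeffs, 640))            # estimated angle for a new reading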

14.
Human-computer interaction techniques in a virtual operation training system
张玉祥  沙寒 《计算机工程》2008,34(19):274-276
To improve the immersiveness of human-computer interaction in virtual operation training for a missile propulsion system, a human-computer interaction environment based on a data glove and a spatial position tracker was developed. The hardware composition and working principles of the interaction environment are introduced. Based on the physiological structure and motion characteristics of the hand, a constrained geometric model of the virtual hand is established, and object-oriented techniques are used to describe the virtual hand's data structures. Transparent bounding volumes are placed to solve the collision-detection problem effectively, and the visual and force feedback after collisions is studied. The system improves the efficiency of virtual operation training for this missile propulsion system.
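
As an illustration of collision detection with placed bounding volumes, the following minimal Python sketch tests two bounding spheres for overlap; the sphere shape and all values are assumptions, since the abstract does not specify the type of bounding volume used.

import numpy as np

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Bounding-sphere overlap test: a collision is reported when the
    distance between centres is no larger than the sum of the radii."""
    distance = np.linalg.norm(np.asarray(center_a, dtype=float) - np.asarray(center_b, dtype=float))
    return distance <= radius_a + radius_b

# Example: an invisible sphere around a fingertip vs. one around a switch.
print(spheres_collide([0.02, 0.10, 0.30], 0.01, [0.03, 0.10, 0.31], 0.02))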

15.
This paper focuses on multiplayer cooperative interaction in a shared haptic environment based on a local area network. Decoupled motion control, which allows one user to manipulate a haptic interface to control only one-dimensional movement of an avatar, is presented as a new type of haptic-based cooperation among multiple users. Each user moves the avatar along one coordinate axis, so that the motion of the avatar is the synthesis of the movements along all axes. This differs from previous haptic cooperation, in which all users can apply forces on an avatar along any direction to move it and its motion depends entirely on the resultant force. A novel concept of movement feedback is put forward, whereby one user can sense other users' hand motions through his or her own haptic interface. In other words, a person who is required to move a virtual object along only one axis can also feel the motions of the virtual object along the other axes. Movement feedback, which is a feeling of motion, differs from force feedback such as gravity, collision force and resistance. A spring-damper force model is proposed for the computation of motion feedback to implement movement transmission among users through haptic devices. Experimental results validate that movement feedback is beneficial for performance enhancement in this kind of haptic-based cooperation, and the effect of movement feedback on performance improvement was also evaluated by all subjects.
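
A generic spring-damper coupling of the kind mentioned above can be sketched as follows in Python; the gains and the proxy/avatar naming are illustrative assumptions rather than the paper's actual model.

import numpy as np

K_SPRING = 150.0   # illustrative spring stiffness (N/m), not the paper's value
B_DAMPER = 5.0     # illustrative damping coefficient (N·s/m)

def movement_feedback_force(proxy_pos, proxy_vel, avatar_pos, avatar_vel):
    """Spring-damper force rendered on one user's haptic device so that the
    user feels the avatar's motion along the axes controlled by other users."""
    x_err = np.asarray(avatar_pos, dtype=float) - np.asarray(proxy_pos, dtype=float)
    v_err = np.asarray(avatar_vel, dtype=float) - np.asarray(proxy_vel, dtype=float)
    return K_SPRING * x_err + B_DAMPER * v_err

# Example: the avatar has moved 2 cm ahead of this user's proxy along x.
print(movement_feedback_force([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.02, 0.0, 0.0], [0.0, 0.0, 0.0]))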

16.
Two experiments investigated the effects of lag on discrete target-directed head movements in a virtual environment. Both the target and a head-slaved pointer (subjected to lag) were presented on a head-mounted display. Target-directed head movements in the presence of a constant time lag were shown to obey Fitts' law (R² > .93). A previously reported interaction between the effects of lag and the effects of index of difficulty on hand movement time could not be found in head movement time when the target width was kept constant. Further experiments suggested that there is a significant interaction between the effects of target width and lag, but not between target distance and lag. A model predicting head movement strategy in the presence of lag is proposed to explain the experimental findings. This model predicts, and the experiments verified, that for a target width from 1° to 8° and a target distance range of 2.5° to 30°, the effect of lag (up to 267 ms) on target-directed head movement is independent of target distance but dependent on target width. Actual or potential applications of this research include the design of virtual control panels and their layout in a virtual reality simulator.
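
For reference, the Shannon formulation of Fitts' law that such pointing data are typically fitted to (the paper's own model of the lag effect is not reproduced here):

    MT = a + b \log_{2}\!\left(\frac{D}{W} + 1\right)

where MT is movement time, D is the distance (amplitude) to the target, W is the target width, and a and b are empirically fitted constants; the logarithmic term is the index of difficulty.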

17.
Vibrotactile feedback is widely used in mobile devices because it provides a discreet and private feedback channel. Gaze-based interaction, on the other hand, is useful in various applications due to its unique capability to convey the focus of interest. Gaze input is naturally available, as people typically look at the things they operate, but feedback from eye movements is primarily visual. Gaze interaction and the use of vibrotactile feedback have been two parallel fields of human–computer interaction research with a limited connection. Our aim was to build this connection by studying the temporal and spatial mechanisms of supporting gaze input with vibrotactile feedback. The results of a series of experiments showed that the temporal distance between a gaze event and vibrotactile feedback should be less than 250 ms to ensure that the input and output are perceived as connected. The effectiveness of vibrotactile feedback was largely independent of the spatial body location of the vibrotactile actuators. In comparison to other modalities, vibrotactile feedback performed as well as auditory and visual feedback. Vibrotactile feedback can be especially beneficial when other modalities are unavailable or difficult to perceive. Based on the findings, we present design guidelines for supporting gaze interaction with vibrotactile feedback.

18.
Haptic feedback is an important component of immersive virtual reality (VR) applications that is often suggested to complement visual information through the sense of touch. This paper investigates the use of a haptic vest in navigation tasks. The haptic vest produces repulsive vibrotactile feedback from nearby static virtual obstacles that augments the user's spatial awareness. The tasks require the user to perform complex movements in a 3D cluttered virtual environment, such as avoiding obstacles while walking backwards and pulling a virtual object. The experimental setup consists of a room-scale environment. Our approach is the first study in which a haptic vest is tracked in real time using a motion capture device, so that proximity-based haptic feedback can be conveyed according to the actual movement of the upper body of the user. User study experiments were conducted with and without haptic feedback in virtual environments involving both normal and limited visibility conditions. A quantitative evaluation was carried out by measuring task completion time and error (collision) rate. Multiple haptic rendering techniques were also tested. Results show that, under limited visibility conditions, proximity-based haptic feedback generated by a wearable haptic vest can significantly reduce the number of collisions with obstacles in the virtual environment.
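
The abstract compares several haptic rendering techniques without detailing them; the Python sketch below shows one plausible proximity-based scheme that maps the distance to the nearest obstacle to a vibration amplitude. The activation radius and the linear mapping are assumptions, not the paper's parameters.

import numpy as np

ACTIVATION_RADIUS = 0.6   # metres; assumed distance at which feedback starts

def vibration_intensity(torso_pos, obstacle_points):
    """Map the distance to the nearest obstacle to a vibration amplitude in [0, 1].

    Intensity is zero beyond the activation radius and grows linearly as the
    nearest obstacle gets closer to the tracked torso position.
    """
    dists = np.linalg.norm(np.asarray(obstacle_points, dtype=float) - np.asarray(torso_pos, dtype=float), axis=1)
    d = float(dists.min())
    return float(np.clip(1.0 - d / ACTIVATION_RADIUS, 0.0, 1.0))

# Example: nearest obstacle 0.3 m away gives half-strength vibration.
print(vibration_intensity([0.0, 0.0, 1.2], [[0.3, 0.0, 1.2], [2.0, 1.0, 1.2]]))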
