Similar Documents
20 similar documents found.
1.
As eye-tracking technology matures and gaze-input products aimed at end users reach the market, gaze-based interaction is becoming increasingly practical. However, because the eye is not an innate control organ, visual feedback in the user interface, whether dynamic or static, may disturb the user's eye movements during gaze interaction and thereby affect gaze input (the gaze-point coordinates). We therefore conducted two eye-pointing experiments to systematically evaluate the effect of target color on gaze interaction, in terms of both the spatial distribution of gaze points and the ergonomics of gaze interaction. The results show that static visual feedback such as target color does not affect the stability of the gaze-point coordinates while the user fixates a target, but it does significantly affect the user's saccadic process and hence the ergonomics of eye-pointing tasks; the effect is especially pronounced when the gaze must travel a long distance.

2.
Large displays have become ubiquitous in our everyday lives, but these displays are designed for sighted people. This paper addresses the need for visually impaired people to access targets on large wall-mounted displays. We developed an assistive interface which exploits mid-air gesture input and haptic feedback, and examined its potential for pointing and steering tasks in human computer interaction (HCI). In two experiments, blind and blindfolded users performed target acquisition tasks using mid-air gestures and two different kinds of feedback (i.e., haptic feedback and audio feedback). Our results show that participants perform faster in Fitts' law pointing tasks using the haptic feedback interface rather than the audio feedback interface. Furthermore, a regression analysis between movement time (MT) and the index of difficulty (ID) demonstrates that the Fitts' law model and the steering law model are both effective for the evaluation of assistive interfaces for the blind. Our work and findings will serve as an initial step to assist visually impaired people to easily access required information on large public displays using haptic interfaces.
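The MT-versus-ID regression described above can be sketched as an ordinary least-squares fit of Fitts' law, MT = a + b * ID, with ID in its Shannon formulation. The trial data below are illustrative assumptions, not values from the study:

```python
import math

def index_of_difficulty(distance, width):
    # Shannon formulation of Fitts' index of difficulty, in bits
    return math.log2(distance / width + 1)

def fit_linear(xs, ys):
    # Ordinary least squares for MT = a + b * ID
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical trials: (target distance, target width, movement time in s)
trials = [(100, 20, 0.62), (200, 20, 0.81), (400, 20, 1.02),
          (100, 40, 0.48), (200, 40, 0.66), (400, 40, 0.85)]
ids = [index_of_difficulty(d, w) for d, w, _ in trials]
mts = [mt for _, _, mt in trials]
a, b = fit_linear(ids, mts)
print(f"MT = {a:.3f} + {b:.3f} * ID")
```

The slope b (seconds per bit) is what a Fitts'-law evaluation compares across interfaces; its reciprocal is the throughput in bits per second.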

3.
In this paper, we present a real-time 3D pointing gesture recognition algorithm for mobile robots, based on a cascade hidden Markov model (HMM) and a particle filter. Among the various human gestures, the pointing gesture is very useful for human-robot interaction (HRI). In fact, it is highly intuitive, does not involve a priori assumptions, and has no substitute in other modes of interaction. A major issue in pointing gesture recognition is accurate estimation of the pointing direction, which is hampered by unreliable hand tracking and direction estimation. The proposed method uses a stereo camera and 3D particle filters for reliable hand tracking, and a cascade of two HMMs for a robust estimate of the pointing direction. When a subject enters the field of view of the camera, his or her face and two hands are located and tracked using particle filters. The first-stage HMM takes the hand position estimate and maps it to a more accurate position by modeling the kinematic characteristics of finger pointing. The resulting 3D coordinates are used as input into the second-stage HMM, which discriminates pointing gestures from other types. Finally, the pointing direction is estimated for the pointing state. The proposed method can deal with both large and small pointing gestures. The experimental results show gesture recognition and target selection rates of better than 89% and 99%, respectively, during human-robot interaction.

4.
5.
Real-time modeling of a virtual human hand is a key technical challenge in VR/AR systems centered on human-computer interaction: traditional Leap Motion-based hand tracking provides visual feedback only as a skeleton animation, which lacks realism in application demos. We therefore propose a fast and efficient system for individualized hand modeling and real-time interaction, and describe its working principles and algorithms. Using a static hand model with an embedded skeleton as a template, and Leap Motion skeleton tracking data of the user's hand as input, we propose an algorithm that automatically individualizes the hand model from bone dimensions. We also present an improved linear blend skinning algorithm that works from incomplete skeletal motion data; it quickly computes the transformation matrices between skeletons in different hand poses and dynamically updates the pose of the virtual hand. Experimental results show that the system runs at 60 f/s on our test hardware, with error stable within 4 mm, providing reliable and realistic visual feedback for real-time human-computer interaction tasks.
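The standard linear blend skinning that the improved algorithm above builds on can be sketched as follows: each deformed vertex is a weighted blend of the bone transforms applied to its rest position. This is a generic textbook sketch, not the paper's modified method, and the toy data are assumptions:

```python
import numpy as np

def linear_blend_skinning(rest_vertices, weights, bone_transforms):
    """Deform rest-pose vertices by a weighted blend of bone transforms.

    rest_vertices: (V, 3), weights: (V, B), bone_transforms: (B, 4, 4)
    """
    # Homogeneous rest positions: (V, 4)
    vh = np.hstack([rest_vertices, np.ones((len(rest_vertices), 1))])
    # Per-vertex blend of the bone matrices: (V, 4, 4)
    blended = np.einsum("vb,bij->vij", weights, bone_transforms)
    # Apply the blended transform, then drop the homogeneous coordinate
    out = np.einsum("vij,vj->vi", blended, vh)
    return out[:, :3]

# Toy example: a vertex weighted half-and-half between a static bone
# (identity) and a bone translated by +1 along x ends up at x = 0.5
T = np.eye(4); T[0, 3] = 1.0
verts = np.array([[0.0, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])
print(linear_blend_skinning(verts, w, np.stack([np.eye(4), T])))
```

The paper's contribution, as summarized above, lies in deriving the per-bone matrices from incomplete tracked skeleton data rather than in this blending step itself.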

6.
In a self-testing vision screener, examinees use an input device to indicate the orientation of targets presented inside the screener, and they operate the device without visual feedback. In the present study, the suitability of pointing devices was evaluated under conditions like those present in a self-testing vision screener. The evaluation combined an experimental assessment of pointing accuracy with subjective ratings recorded while using the various devices. Six commercially available computer input devices (a joystick, a gamepad, a trackball, two track pads, and a PC mouse) were evaluated under visual conditions similar to those that would be present when using a self-testing vision screener. Pointing accuracy varied significantly with the type of device (F(3.2, 93.1) = 3.937, p = 0.009), and the effect of the device on pointing accuracy was substantial (partial η2 = 0.120). The most accurate pointing was achieved with the joystick: a mean of 96.8% (SD = 4.3%) of pointing trials resulted in the correct orientation, and if only diagonal orientations are considered, the correct pointing rate increased to a mean of 99.5% (SD = 1.5%). In the subjective ranking, the gamepad and the joystick achieved the best and second-best ranks, respectively, whereas the trackball was the least preferred device. Based on our findings, we recommend a joystick as the input device for pointing tasks in order to minimize the effects of suboptimal visual feedback on motor performance. For the particular case of testing visual acuity, procedures are suggested that reduce the effect of suboptimal visual feedback on the outcome of the acuity test.
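The reported effect size can be recovered from the F statistic and its degrees of freedom; a minimal sketch of that conversion, using the values quoted above:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    # partial eta^2 = (F * df_effect) / (F * df_effect + df_error)
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Values reported above; the fractional dfs indicate a sphericity
# correction was applied to the repeated-measures ANOVA
print(round(partial_eta_squared(3.937, 3.2, 93.1), 3))
```

Plugging in F(3.2, 93.1) = 3.937 yields roughly 0.119, consistent with the reported partial η2 = 0.120 up to rounding.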

7.
We use our hands to manipulate objects in our daily life. The hand is capable of accomplishing diverse tasks such as pointing, gripping, twisting and tearing. However, there is not much work that considers using the hand as input in distributed virtual environments (DVEs), in particular over the Internet. The main reasons are that the Internet suffers from high network latency, which affects interaction, and that the hand has many degrees of freedom, which complicates synchronizing the collaboration. In this paper, we propose a prediction method specifically designed for human hand motion to address the network latency problem in DVEs. Through a thorough analysis of finger motion, we have identified various finger motion constraints, and we propose a constraint-based motion prediction method for hand motion. To reduce the average prediction error under high network latency, e.g., over the Internet, we further propose a revised dead reckoning scheme. Our performance results show that the proposed prediction method produces a lower prediction error than some popular methods, while the revised dead reckoning scheme produces a lower average prediction error than the traditional dead reckoning scheme, in particular at high network latency.
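For context, the traditional dead reckoning scheme that the revised version improves on can be sketched as first-order extrapolation with threshold-triggered updates; this is the generic scheme, not the paper's hand-specific method, and the numbers are illustrative:

```python
def predict(last_pos, last_vel, dt):
    # First-order dead reckoning: extrapolate along the last known velocity
    return tuple(p + v * dt for p, v in zip(last_pos, last_vel))

def needs_update(actual, predicted, threshold):
    # The sender transmits a fresh state only when the receiver's
    # prediction would deviate from reality by more than the threshold
    err = sum((a - p) ** 2 for a, p in zip(actual, predicted)) ** 0.5
    return err > threshold

last_pos, last_vel = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
predicted = predict(last_pos, last_vel, 0.1)
print(predicted)                                         # extrapolated point
print(needs_update((0.12, 0.0, 0.0), predicted, 0.05))   # small drift: no update
```

A hand-motion variant would run such a predictor per joint while enforcing the finger-motion constraints the paper identifies, so that extrapolated poses stay anatomically plausible.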

8.
Recent research shows that interactive electronic systems such as computers, digital TV, and smartphones can improve the quality of life of many disabled and elderly people by helping them engage more fully with the world. Previously, a simulator was developed that reflects the effect of impairment on interaction with electronic devices and thus helps designers develop accessible systems. In this article, the scope of the simulator has been extended to multiple pointing devices. The way that hand strength affects the pointing performance of people with and without mobility impairment in graphical user interfaces was investigated for four different input modalities, and a set of linear equations was developed to predict pointing time and the average number of submovements for different devices. These models were used to develop an adaptation algorithm that facilitates pointing in electronic interfaces by users with motor impairment using different pointing devices. The algorithm attracts the pointer when it is near a target and thus helps to reduce random movement during homing and clicking. The algorithm was optimized using the simulator and then tested on a real-life application with multiple distractors involving three different pointing devices. The algorithm significantly reduces pointing time for different input modalities.
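A target-attraction step of the kind described above can be sketched as a simple "gravity well": when the pointer enters a capture radius around a target, it is nudged a fraction of the way toward the centre each frame, damping tremor during homing. This is a generic illustration under assumed parameters, not the article's tuned algorithm:

```python
import math

def attract(pointer, target, radius, gain=0.3):
    """Pull the pointer toward a nearby target to damp submovements.

    If the pointer is within `radius` of the target centre, move it a
    fraction `gain` of the remaining distance toward the centre;
    otherwise leave it untouched.
    """
    dx, dy = target[0] - pointer[0], target[1] - pointer[1]
    dist = math.hypot(dx, dy)
    if 0 < dist <= radius:
        return (pointer[0] + gain * dx, pointer[1] + gain * dy)
    return pointer

print(attract((96.0, 100.0), (100.0, 100.0), radius=10))  # nudged toward target
print(attract((50.0, 50.0), (100.0, 100.0), radius=10))   # out of range: unchanged
```

With multiple distractors, as in the article's evaluation, the gain and radius have to be chosen so that the pointer is not captured by the wrong target along the way.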

9.
The use of pointing devices for input to interactive computer systems is increasing. This paper considers the human factors aspects of the use of a range of these devices. A distinction is drawn between direct and indirect devices. A number of aspects of pointing responses are discussed with reference to the experimental evidence: that pointing is non-linguistic and serial, involves gross motor movement, and relies on visual feedback. The overall speed and the extent to which it is a natural form of response are also discussed. The paper ends with some conclusions about the suitability of different devices for different tasks.

10.
11.
In this paper, we demonstrate how a new interactive 3D desktop metaphor based on two-handed 3D direct manipulation registered with head-tracked stereo viewing can be applied to the task of constructing animated characters. In our configuration, a six-degree-of-freedom head tracker and CrystalEyes shutter glasses are used to produce stereo images that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space, which the user may view from different angles by moving their head. To construct 3D animated characters, the user interacts with the simulated environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used, through a virtual tool metaphor, to manipulate the objects appearing in front of the screen. In this way, both incremental and absolute interactive input techniques are provided by the system. Hand-eye coordination is made possible by registering virtual space exactly to physical space, allowing a variety of complex 3D tasks necessary for constructing 3D animated characters to be performed more easily and more rapidly than is possible using traditional interactive techniques. The system has been tested using both Polhemus Fastrak and Logitech ultrasonic input devices for tracking the head and 3D mouse.

12.
13.
Opportunistic Controls are a class of user interaction techniques that we have developed for augmented reality (AR) applications to support gesturing on, and receiving feedback from, otherwise unused affordances already present in the domain environment. By leveraging characteristics of these affordances to provide passive haptics that ease gesture input, Opportunistic Controls simplify gesture recognition, and provide tangible feedback to the user. In this approach, 3D widgets are tightly coupled with affordances to provide visual feedback and hints about the functionality of the control. For example, a set of buttons can be mapped to existing tactile features on domain objects. We describe examples of Opportunistic Controls that we have designed and implemented using optical marker tracking, combined with appearance-based gesture recognition. We present the results of two user studies. In the first, participants performed a simulated maintenance inspection of an aircraft engine using a set of virtual buttons implemented both as Opportunistic Controls and using simpler passive haptics. Opportunistic Controls allowed participants to complete their tasks significantly faster and were preferred over the baseline technique. In the second, participants proposed and demonstrated user interfaces incorporating Opportunistic Controls for two domains, allowing us to gain additional insights into how user interfaces featuring Opportunistic Controls might be designed.

14.
This paper presents a novel interactive framework for 3D content-based search and retrieval using as the query model an object that is dynamically sketched by the user. In particular, two approaches are presented for generating the query model. The first approach uses 2D sketching and symbolic representation of the resulting gestures. The second utilizes non-linear least squares minimization to model, using superquadrics, the 3D point cloud generated by 3D tracking of the user's hands. In the context of the proposed framework, three interfaces were integrated into the sketch-based 3D search system: (a) an unobtrusive interface that utilizes pointing gesture recognition to allow the user to manipulate objects in 3D, (b) a haptic-VR interface composed of 3D data gloves and a force feedback device, and (c) a simple air mouse. These interfaces were tested, and comparative results were extracted according to usability and efficiency criteria.

15.
The complexity of telemanipulation systems has increased, and thus a variety of elements and factors affect the performance of these systems. An experimental study was carried out to obtain information about the effect of several factors on the quality of tasks performed by the system, considering not only the main effects of the factors but also the interactions among them. A taxonomy of functional factors was proposed to facilitate this study, dividing the factors into two principal groups: intrinsic and extrinsic. A factorial design was conducted with five factors: operator (with vs. without training), movement control (position vs. rate control), force feedback (kinesthetic vs. visual feedback), master bandwidth (high vs. low bandwidth) and task type (insertion vs. tracking movement). An open platform for experimentation with telerobotics systems (PLATERO) was developed to perform all the proposed experiments. Analyzed variables include completion time, SOSF (sum of squared forces), insertion forces and tracking error. Results show that there is a great deal of interaction between the type of task and the other factors: for each task there is a system configuration that achieves better performance. Another important finding is that an expert operator is able to adapt to the different system configurations and obtain good results, whereas a novice operator obtains better results with some configurations than with others. Finally, in order to determine which system configuration yields the highest task quality, the main task requirement must be defined, because the best system configuration depends on it.

16.
Touch-based interaction with computing devices is becoming more and more common. In order to design for this setting, it is critical to understand the basic human factors of touch interactions such as tapping and dragging; however, there is relatively little empirical research in this area, particularly for touch-based dragging. To provide foundational knowledge in this area, and to help designers understand the human factors of touch-based interactions, we conducted an experiment using three input devices (the finger, a stylus, and a mouse as a performance baseline) and three different pointing activities: bidirectional tapping, one-dimensional dragging, and radial dragging (pointing to items arranged in a circle around the cursor). Tapping represents the elemental target selection method and is analysed as a performance baseline. Dragging is also a basic interaction method, and understanding its performance is important for touch-based interfaces because it involves relatively high contact friction. Radial dragging is likewise important for touch-based systems: the technique is claimed to be well suited to direct input, yet radial selections normally involve the relatively unstudied dragging action, and there have been few studies of the interaction mechanics of radial dragging. Performance models of tapping, dragging, and radial dragging are analysed. For tapping tasks, we confirm prior results showing finger pointing to be faster than the stylus or mouse but inaccurate, particularly with small targets. In dragging tasks, we also confirm that finger input is slower than the mouse and stylus, probably due to the relatively high surface friction; dragging errors were low in all conditions. As expected, performance conformed to Fitts' Law. Our results for radial dragging are new, showing that errors, task time, and movement distance are all linearly correlated with the number of items available.
We demonstrate that this performance is modelled by the Steering Law (where the tunnel width increases with movement distance) rather than Fitts' Law. Other radial dragging results showed that the stylus is fastest, followed by the mouse and finger, but that the stylus has the highest error rate of the three devices. Finger selections in the north-west direction were particularly slow and error prone, possibly due to a tendency for the finger to stick-slip when dragging in that direction.
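The Steering Law variant invoked above can be sketched as follows: the index of difficulty is the integral of ds/W(s) along the path, and for a tunnel whose width grows linearly with distance travelled the integral has a closed form. The numeric parameters are illustrative assumptions, not values from the study:

```python
import math

def steering_id_widening(amplitude, w0, slope):
    """Steering-law index of difficulty for a tunnel whose width grows
    linearly with distance: W(s) = w0 + slope * s.

    ID = integral from 0 to A of ds / W(s) = ln(1 + slope * A / w0) / slope
    """
    return math.log(1 + slope * amplitude / w0) / slope

def steering_id_constant(amplitude, width):
    # A constant-width tunnel reduces to the classic A / W form
    return amplitude / width

# A tunnel that widens as the drag proceeds (as in radial dragging,
# where wedges fan out from the cursor) is easier than a constant
# narrow tunnel of the same length
print(steering_id_widening(200, 10, 0.2))
print(steering_id_constant(200, 10))
```

This matches the qualitative finding: difficulty grows with the number of items (more items means a smaller slope per wedge) rather than with the Fitts distance/width ratio alone.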

17.
Haptic feedback is an important component of immersive virtual reality (VR) applications and is often suggested to complement visual information through the sense of touch. This paper investigates the use of a haptic vest in navigation tasks. The haptic vest produces repulsive vibrotactile feedback from nearby static virtual obstacles, augmenting the user's spatial awareness. The tasks require the user to perform complex movements in a cluttered 3D virtual environment, such as avoiding obstacles while walking backwards and pulling a virtual object. The experimental setup consists of a room-scale environment. Our approach is the first study in which a haptic vest is tracked in real time using a motion capture device, so that proximity-based haptic feedback can be conveyed according to the actual movement of the upper body of the user. User study experiments were conducted with and without haptic feedback in virtual environments involving both normal and limited visibility conditions. A quantitative evaluation was carried out by measuring task completion time and error (collision) rate. Multiple haptic rendering techniques were also tested. Results show that under limited visibility conditions, proximity-based haptic feedback generated by a wearable haptic vest can significantly reduce the number of collisions with obstacles in the virtual environment.

18.
In this paper, we study one of the most fundamental tasks in human-computer interaction, the pointing task. It can be described simply as reaching a target with a cursor starting from an initial position (e.g., executing a movement using a computer mouse to select an icon). In this paper, a switched dynamic model is proposed to handle cursor movements in indirect pointing tasks. The model contains a ballistic movement phase governed by a nonlinear model in Lurie form and a corrective movement phase described by a linear visual-feedback system. The stability of the model is first established, and the derived model is then validated with experimental data acquired in a pointing task with a mouse. It is established that measured data for pointing movements of different types can be fitted by the proposed model. A numerical comparison against pointing models available in the literature is also provided.
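The two-phase structure described above can be illustrated with a deliberately simplified discrete-time toy: a coarse ballistic step while far from the target, switching to gentler linear visual-feedback correction once close. This is only a caricature of the switched model (the paper's ballistic phase is a nonlinear Lurie-form system, not a fixed gain), and all parameters are assumptions:

```python
def simulate_pointing(start, target, switch_radius=10.0,
                      ballistic_gain=0.8, corrective_gain=0.4, tol=0.5):
    """Toy switched pointing model in one dimension.

    Far from the target the cursor takes large ballistic steps; inside
    `switch_radius` it switches to a slower linear feedback correction.
    Returns the cursor trajectory up to the first position within `tol`.
    """
    x, trace = start, [start]
    while abs(target - x) > tol:
        far = abs(target - x) > switch_radius
        gain = ballistic_gain if far else corrective_gain
        x += gain * (target - x)
        trace.append(x)
    return trace

trace = simulate_pointing(0.0, 100.0)
print(len(trace), round(trace[-1], 2))
```

Both phases here are contractions toward the target, so the toy trajectory converges; the paper's contribution is proving stability for the genuinely nonlinear switched system and fitting it to measured mouse trajectories.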

19.
Previous research has not fully examined the effect of additional sensory feedback, particularly feedback delivered through the haptic modality, on pointing task performance with visual distractions. This study examined the effect of haptic feedback and visual distraction on pointing task performance in a 3D virtual environment. Results indicate a strong positive effect of haptic feedback on performance in terms of task time and root mean square error of motion. The level of similarity between distractor objects and the target object significantly reduced performance, and subjective ratings indicated a sense of increased task difficulty as similarity increased. Participants produced the best performance in trials where distractor objects had a different color but the same shape as the target object and constant haptic assistive feedback was provided. Overall, this study provides insight into the effects of object features and similarity, and of haptic feedback, on pointing task performance.

20.
Intraoral target (typing) and on-screen target (pointing/tracking) selection tasks were performed by 10 participants in sessions on 3 consecutive days, using 2 different intraoral sensor layouts. The reduction of undesired sensor activations while speaking, as well as the influence of intraoral temperature variation on the signals of the intraoral interface, was investigated. Results showed that intraoral target selection tasks were performed better when the respective sensor was located in the anterior area of the palate, reaching 78 and 16 activations per minute for repetitive and "unordered" sequences, respectively. Virtual target pointing and tracking tasks, using circles of 50, 70, and 100 pixels in diameter, showed no significant difference in performance, reaching average pointing throughputs of 0.62 to 0.72 bits per second and relative time on target of 34% to 60%. Speaking tasks caused an average of 10 to 31 involuntary activations per minute in the anterior part of the palate. Intraoral temperature variation between 11.87 °C and 51.37 °C affected the sensor signal baseline in a range from -25.34% to 48.31%. Results from this study provide key design considerations to further increase the efficiency of tongue-computer interfaces for individuals with upper-limb mobility impairments.
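The throughput figures quoted above combine task difficulty and movement time into one bits-per-second measure; a minimal sketch of that computation, with a hypothetical trial (the distance and time below are assumptions, only the 70-pixel target diameter appears in the study):

```python
import math

def throughput(distance, width, movement_time):
    # Fitts throughput in bits/s: index of difficulty (bits) over time (s)
    return math.log2(distance / width + 1) / movement_time

# Hypothetical trial: 300 px movement to a 70 px circle in 3.5 s
print(round(throughput(300, 70, 3.5), 2))
```

A trial like this lands near the middle of the 0.62 to 0.72 bits-per-second range reported above, which is low compared with a mouse: a reminder of how much harder precise pointing is with the tongue.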

