Similar Literature
20 similar articles retrieved (search time: 31 ms).
1.
Sensing technologies such as inertia tracking and computer vision enable spatial interactions where users make selections by ‘air pointing’: moving a limb, finger, or device to a specific spatial region. In addition to expanding the vocabulary of possible interactions, air pointing brings the potential benefit of enabling ‘eyes-free’ interactions, where users rely on proprioception and kinaesthesia rather than vision. This paper explores the design space for air pointing interactions and presents tangible results in the form of a framework that helps designers understand input dimensions and the resulting interaction qualities. The framework provides a set of fundamental concepts that aid in thinking about the air pointing domain, in characterizing and comparing existing solutions, and in evaluating novel techniques. We carry out an initial investigation to demonstrate the concepts of the framework by designing and comparing three air pointing techniques: one based on small angular ‘raycasting’ movements, one on large movements across a 2D plane, and one on movements in a 3D volume. Results show that large movements on the 2D plane are both rapid (selection times under 1 s) and accurate, even without visual feedback. Raycasting is rapid but inaccurate, and the 3D volume is expressive but slow, inaccurate, and effortful. Many other findings emerge, such as selection point ‘drift’ in the absence of feedback. These results and the organising framework provide a foundation for innovation and understanding of air pointing interaction.
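As a rough illustration of the ‘raycasting’ style of air pointing described above, the following Python sketch maps a device orientation (yaw/pitch) to one of a small grid of angular selection regions. The grid size and angular ranges are illustrative assumptions, not values from the paper.

```python
def raycast_region(yaw_deg, pitch_deg, cols=3, rows=3,
                   yaw_range=(-15.0, 15.0), pitch_range=(-15.0, 15.0)):
    """Map a device orientation (yaw/pitch in degrees) to one of cols*rows
    angular selection regions, or None if the ray falls outside the grid.

    Grid size and angular ranges are illustrative assumptions."""
    def to_index(value, lo, hi, n):
        if not (lo <= value <= hi):
            return None
        # Normalize into [0, 1) and quantize into n bins.
        frac = (value - lo) / (hi - lo)
        return min(int(frac * n), n - 1)

    col = to_index(yaw_deg, *yaw_range, cols)
    row = to_index(pitch_deg, *pitch_range, rows)
    if col is None or row is None:
        return None
    return row * cols + col

# Example: a small rightward/upward wrist rotation selects region 5.
print(raycast_region(8.0, 2.0))
```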

2.
高飞  张宪民 《计算机工程》2007,33(18):199-201
For gaze-tracking systems, a new method is proposed that uses the eye corner as the reference point for computing the direction and position of the user's gaze. The gaze direction and position are computed from the difference between a moving point and the reference point. The iris centre serves as the moving point because it accurately reflects the movement of the eyeball. The eye corner serves as the reference point because it is a very stable facial landmark whose position is essentially unaffected by changes in facial expression. The method overcomes the drawbacks of earlier approaches that use artificial markers or the Purkinje image as the reference point: the user does not need to wear markers on the face, and small head rotations are tolerated. Experiments show that the eye corner is localized automatically, quickly, and accurately, and that the method effectively resolves the problem of measuring relative eye movement in gaze-tracking systems.
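A minimal Python sketch of the core idea: the eye-movement vector is the difference between the iris centre (moving point) and the eye corner (reference point). The affine calibration mapping to screen coordinates is an added assumption for illustration; the paper does not prescribe this particular mapping.

```python
import numpy as np

def gaze_vector(iris_center, eye_corner):
    """Eye-movement vector: difference between the moving point (iris centre)
    and the stable reference point (eye corner), both in image pixels."""
    return np.asarray(iris_center, float) - np.asarray(eye_corner, float)

def fit_screen_mapping(gaze_vectors, screen_points):
    """Fit a simple affine mapping from gaze vectors to screen coordinates by
    least squares over calibration samples (an illustrative assumption)."""
    V = np.hstack([np.asarray(gaze_vectors, float),
                   np.ones((len(gaze_vectors), 1))])   # rows: [dx, dy, 1]
    S = np.asarray(screen_points, float)                # rows: [x, y] on screen
    A, *_ = np.linalg.lstsq(V, S, rcond=None)
    return A                                            # shape (3, 2)

# Calibration: user fixates known screen points while iris/corner are recorded.
vectors = [gaze_vector(i, c) for i, c in [((210, 118), (200, 120)),
                                          ((190, 118), (200, 120)),
                                          ((210, 122), (200, 120)),
                                          ((190, 122), (200, 120))]]
targets = [(1600, 200), (300, 200), (1600, 900), (300, 900)]
A = fit_screen_mapping(vectors, targets)
print(np.append(gaze_vector((205, 119), (200, 120)), 1.0) @ A)  # estimated gaze point
```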

3.
Eye-gaze tracking is traditionally used to analyze ocular parameters for investigating visual psychology, marketing study, behavior analysis, and so on. Currently, eye-gaze trackers are also being used to control electronic interfaces in assistive technology, automobile control, and even consumer electronic products like smartphones and tablets. However, there are not many attempts to combine these two streams of research on active and passive uses of eye-gaze trackers. This article compares a few ocular parameters to estimate users’ cognitive load in eye-gaze-controlled interfaces. It was found that average velocity of a particular type of microsaccadic eye movement called Saccadic Intrusion is most indicative of users’ cognitive load compared to pupil dilation and eye-blink-based parameters. Results from the study can be used to develop new metrics of cognitive load measurement, as well as to design intelligent gaze-controlled interfaces that respond to users’ cognitive load.
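A rough sketch of the kind of ocular parameter discussed: the average velocity of fast, small excursions in a gaze trace. The velocity threshold used to isolate saccadic-intrusion-like movement here is an illustrative assumption, not the paper's detection procedure.

```python
import numpy as np

def mean_intrusion_velocity(x_deg, t_s, velocity_threshold=15.0):
    """Crude estimate of the average velocity (deg/s) of fast, small eye
    movements in a horizontal gaze trace. The threshold separating slow drift
    from saccadic-intrusion-like movement is an illustrative assumption."""
    x = np.asarray(x_deg, float)
    t = np.asarray(t_s, float)
    velocity = np.abs(np.diff(x) / np.diff(t))          # sample-to-sample velocity
    intrusions = velocity[velocity > velocity_threshold]
    return float(intrusions.mean()) if intrusions.size else 0.0

# 60 Hz gaze samples: mostly slow drift with one small, fast excursion.
t = np.arange(0, 1, 1 / 60)
x = 0.2 * np.sin(2 * np.pi * 0.5 * t)
x[30:33] += [0.5, 1.0, 0.5]
print(mean_intrusion_velocity(x, t))
```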

4.
5.
As eye-tracking technology matures and gaze-input products for end users reach the market, gaze-based interaction is becoming increasingly practical. However, because the eyes are not a natural control organ, both dynamic and static visual feedback in a user interface can disturb the user's eye movements during gaze interaction and thereby affect gaze input (the gaze-point coordinates). Two eye-pointing experiments were therefore conducted to systematically evaluate the effect of target colour on gaze interaction, in terms of both the spatial distribution of gaze points and the ergonomics of gaze interaction. The results show that although static visual feedback such as target colour does not affect the stability of the gaze-point coordinates while the user fixates on a target, it does significantly influence the saccadic phase of eye movement and hence the efficiency of eye-pointing tasks. The effect is especially pronounced when the gaze must travel a long distance.

6.
A tracking state increases the bandwidth of pen-based interfaces. However, this state is difficult to detect with the default visual feedback. This paper reports on two experiments designed to evaluate multimodal feedback for pointing tasks (both 1D and 2D) in the tracking state. In the 1D pointing experiments, results show a significant effect of input type on movement time, while feedback type and the hand used for receiving feedback (i.e., the dominant or non-preferred hand) do not affect movement time significantly. We also report a significant effect of feedback type and input device type on error rate, while the choice of hand used for detecting feedback vibrations does not affect the error rate significantly. In the 2D pointing experiment, results show no significant effect of either input type or choice of hand on movement time, while feedback type affects movement time significantly. Results for both the 1D and 2D pointing tasks show that tactile plus visual feedback can improve accuracy, and that audio is not an effective channel for feedback in the tracking state. This paper proposes several guidelines for feedback design in the tracking state. We believe these results can aid designers of pen-based interfaces.

7.
Large-aperture antenna servo systems produce large pointing errors when tracking targets under strong wind disturbance. To meet the required tracking and pointing accuracy, a disturbance observer is added on top of a PID controller, substantially improving both the pointing and the tracking accuracy of the antenna. Simulation results show that with the disturbance observer, the performance of the servo system is improved, its ability to reject wind disturbance is significantly strengthened, and the pointing accuracy when tracking fast targets is markedly higher.
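A minimal discrete-time sketch of the idea: a disturbance observer added on top of a PID loop for a simplified single-axis antenna model. The plant parameters, controller gains, and wind profile below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Simplified antenna azimuth axis: J * omega_dot = u + d, integrated with step dt.
# All plant parameters, gains, and the wind profile are illustrative assumptions.
J, dt = 50.0, 0.001
kp, ki, kd = 400.0, 200.0, 40.0
g_dob = 0.05                      # per-step low-pass gain of the disturbance observer

def simulate(steps=5000, use_dob=True):
    theta = omega = integ = prev_err = d_hat = 0.0
    abs_errors = []
    for k in range(steps):
        ref = 0.01 * k * dt                                 # slow ramp: moving target
        wind = 30.0 * np.sin(2 * np.pi * 0.5 * k * dt)      # wind torque disturbance
        err = ref - theta
        integ += err * dt
        u_pid = kp * err + ki * integ + kd * (err - prev_err) / dt
        u = u_pid - (d_hat if use_dob else 0.0)             # cancel estimated disturbance
        # Plant update (Euler integration).
        omega_prev = omega
        omega += (u + wind) / J * dt
        theta += omega * dt
        # Observer: torque implied by the measured rate change, minus the
        # commanded torque, low-pass filtered into the disturbance estimate.
        accel_meas = (omega - omega_prev) / dt
        d_hat += g_dob * (J * accel_meas - u - d_hat)
        prev_err = err
        abs_errors.append(abs(err))
    return float(np.mean(abs_errors))

print("mean |pointing error| without DOB:", simulate(use_dob=False))
print("mean |pointing error| with DOB:   ", simulate(use_dob=True))
```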

8.
It is important to consider the physiological and behavioral mechanisms that allow users to physically interact with virtual environments. Inspired by a neuroanatomical model of perception and action known as the two visual systems hypothesis, we conducted a study with two controlled experiments to compare four different kinds of spatial interaction: (1) voice-based input, (2) pointing with a visual cursor, (3) pointing without a visual cursor, and (4) pointing with a time-lagged visual cursor. Consistent with the two visual systems hypothesis, we found that voice-based input and pointing with a cursor were less robust to a display illusion known as the induced Roelofs effect than pointing without a cursor or even pointing with a lagged cursor. The implications of these findings are discussed, with an emphasis on how the two visual systems model can be used to understand the basis for voice and gestural interactions that support spatial target selection in large screen and immersive environments.

9.
In this paper, we present a real-time 3D pointing gesture recognition algorithm for mobile robots, based on a cascade hidden Markov model (HMM) and a particle filter. Among the various human gestures, the pointing gesture is very useful for human-robot interaction (HRI): it is highly intuitive, does not involve a priori assumptions, and has no substitute in other modes of interaction. A major issue in pointing gesture recognition is the difficulty of accurately estimating the pointing direction, caused by the difficulty of hand tracking and the unreliability of direction estimation. The proposed method uses a stereo camera and 3D particle filters for reliable hand tracking, and a cascade of two HMMs for a robust estimate of the pointing direction. When a subject enters the field of view of the camera, his or her face and two hands are located and tracked using particle filters. The first-stage HMM takes the hand position estimate and maps it to a more accurate position by modeling the kinematic characteristics of finger pointing. The resulting 3D coordinates are used as input to the second-stage HMM, which discriminates pointing gestures from other gesture types. Finally, the pointing direction is estimated for the pointing state. The proposed method can deal with both large and small pointing gestures. The experimental results show gesture recognition and target selection rates of better than 89% and 99%, respectively, during human-robot interaction.
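A minimal sketch of the particle-filter stage for 3D hand tracking, assuming a random-walk motion model and a Gaussian likelihood around a stereo measurement; the paper's actual observation model (colour and disparity cues) and the cascade HMM stages are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, z, motion_std=0.02, meas_std=0.05):
    """One predict/update/resample cycle of a bootstrap particle filter tracking
    a 3D hand position. The random-walk motion model and Gaussian likelihood
    are simplifying assumptions."""
    # Predict: diffuse particles with the motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight particles by how well they explain the measurement z.
    sq_dist = np.sum((particles - z) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * sq_dist / meas_std ** 2)
    weights /= weights.sum()
    # Resample (multinomial; systematic resampling is a common refinement).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights, particles.mean(axis=0)

# Track a hand moving along x; measurements are noisy 3D points from stereo.
particles = rng.normal([0.0, 0.0, 1.0], 0.1, size=(500, 3))
weights = np.full(500, 1.0 / 500)
for t in range(20):
    true_pos = np.array([0.02 * t, 0.0, 1.0])
    z = true_pos + rng.normal(0.0, 0.03, 3)
    particles, weights, estimate = particle_filter_step(particles, weights, z)
print("final estimate:", estimate)
```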

10.
As satcom-on-the-move control and antenna technology mature and demand for in-flight Internet access grows, a Ku/Ka dual-band, dual-antenna control system was designed, drawing on experience in aircraft manufacturing and satellite communication. Following commands from the onboard computer, the system can rapidly point the Ku and Ka antennas at a satellite and switch between satellites in different regions, keeping the communication link up and providing multi-level communication services to aviation users under a best-bandwidth service concept. The system adopts a compact, lightweight design and combines high-precision servo control, fast satellite switching, and high-dynamic tracking techniques to improve dynamic response, acquisition accuracy, and satellite-tracking capability. Designed strictly according to the military-product development process, the system has completed prototype development and ground tests; its technical specifications meet the original design requirements, and it has good application prospects.

11.
This study investigates the usability of various “dwell times” for selecting visual objects with eye-gaze-based input by means of eye tracking. Two experiments are described in which participants used eye-gaze-based input to select visual objects consisting of alphanumeric characters, dots, or visual icons. First, a preliminary experiment was designed to identify the range of dwell time durations suitable for eye-gaze-based object selection. Twelve participants were asked to evaluate, on a 7-point rating scale, how easily they could perform an object-selection task with a dwell time of 250, 500, 1000, or 2000 ms per object. The evaluations showed that a dwell time of 250 ms to around 1000 ms was rated as potentially useful for object selection with eye-gaze-based input. In the following main experiment, therefore, 30 participants used eye tracking to select object sequences from a display with a dwell time of 200, 400, 800, 1000 or 1200 ms per object. Object selection time, object selection success rate, the number of object selection corrections, and dwell time evaluations were obtained. The results showed that the total time necessary to select visual objects (object selection time) increased when dwell time increased, but longer dwell times resulted in a higher object-selection success rate and fewer object selection corrections. Furthermore, regardless of object type, eye-gaze-based object selection with dwell times of 200–800 ms was significantly slower for participants with glasses than for those without glasses. Most importantly, participant evaluations showed that a dwell time of 600 ms per object was easiest to use for eye-gaze-based selection of all three types of visual objects.
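A minimal sketch of dwell-time object selection over a stream of (timestamp, object id) gaze samples. The 600 ms default follows the dwell time that participants rated easiest; the sample format and selection logic are otherwise illustrative assumptions.

```python
def dwell_select(samples, dwell_ms=600):
    """Yield (time, object id) each time gaze has rested on the same object for
    at least dwell_ms. `samples` is an iterable of (time_ms, object_id), with
    object_id None when gaze is not on any object."""
    current, start = None, None
    fired = False
    for t, obj in samples:
        if obj != current:                  # gaze moved to a new object (or off)
            current, start, fired = obj, t, False
        elif obj is not None and not fired and t - start >= dwell_ms:
            fired = True                    # select once per continuous fixation
            yield t, obj

# 60 Hz samples: glance at 'A' briefly, then dwell on 'B' long enough to select.
stream = [(i * 16, 'A' if i < 20 else 'B') for i in range(80)]
print(list(dwell_select(stream)))
```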

12.
The fast aging of many western and eastern societies and their increasing reliance on information technology create a compelling need to reconsider older users' interactions with computers. Changes in perceptual and motor skill abilities that often accompany the aging process have important implications for the design of information input devices. This paper summarises the results of two comparative studies on information input with 90 subjects aged between 20 and 75 years. In the first study, three input devices – mouse, touch screen and eye-gaze control – were analysed concerning efficiency, effectiveness and subjective task difficulty with respect to the age group of the computer user. In the second study, an age-differentiated analysis of hybrid user interfaces for input confirmation was conducted combining eye-gaze control with additional input devices. Input confirmation was done with the space bar of a PC keyboard, speech input or a foot pedal. The results of the first study show that regardless of participants' age group, the best performance in terms of short execution time results from touch screen information input. This effect is even more pronounced for the elderly. Regarding the hybrid interfaces, the lowest mean execution time, error rate and task difficulty were found for the combination of eye-gaze control with the space bar. In conclusion, we recommend using direct input devices, particularly a touch screen, for the elderly. For user groups with severe motor impairments, we suggest eye-gaze information input.

13.
A control design procedure for dealing with the robust slewing and pointing manoeuvres of a class of new-generation spacecraft is proposed. The technique is based on applying the optimal interpolation technique to the sliding control approach. It is used to ensure high pointing and tracking performance of the system in the presence of induced structural disturbances and modeling uncertainties. Numerical simulations are carried out, and suggestions for further studies are outlined.

14.
Pointing gestures are our natural way of referencing distant objects and are thus widely used in HCI for controlling devices. Due to the inherent inaccuracy of current pointing models, most systems using pointing gestures so far rely on visual feedback showing users where they are pointing. However, in many environments, e.g., smart homes, it is rarely possible to display cursors since most devices do not contain a display. We therefore raise the question of how to facilitate accurate pointing-based interaction in a cursorless context. In this paper we present two user studies showing that previous cursorless techniques are rather inaccurate because they lack important considerations about users' characteristics that would help minimize inaccuracy. We show that pointing accuracy can be significantly improved by accounting for users' handedness and ocular dominance. In a first user study (n = 33), we reveal the large effect of ocular dominance and handedness on human pointing behavior. Current ray-casting techniques neglect both ocular dominance and handedness as influences on pointing behavior, precluding accurate cursorless selection. With a second user study (n = 25), we show that accounting for ocular dominance and handedness yields significantly more accurate selections compared to two previously published ray-casting techniques. This speaks for the importance of considering users' characteristics when developing selection techniques that foster more robust, accurate selections.
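A minimal sketch of eye-rooted ray-casting that takes the user's ocular dominance into account: the ray starts at the dominant eye, passes through the pointing hand's index fingertip, and is intersected with the display plane. Coordinates and plane geometry are illustrative assumptions, not the techniques evaluated in the paper.

```python
import numpy as np

def cursorless_point(dominant_eye, fingertip, plane_point, plane_normal):
    """Cast a ray from the dominant eye through the index fingertip and return
    its intersection with the target plane (e.g., a wall display), or None if
    the ray is parallel to or points away from the plane."""
    eye = np.asarray(dominant_eye, float)
    tip = np.asarray(fingertip, float)
    n = np.asarray(plane_normal, float)
    direction = tip - eye
    denom = direction @ n
    if abs(denom) < 1e-9:
        return None
    s = ((np.asarray(plane_point, float) - eye) @ n) / denom
    return None if s < 0 else eye + s * direction

# Right-eye-dominant, right-handed user pointing at a wall 2 m in front (z = 2).
print(cursorless_point(dominant_eye=[0.03, 1.65, 0.0],   # right eye
                       fingertip=[0.25, 1.40, 0.6],      # index fingertip
                       plane_point=[0.0, 0.0, 2.0],
                       plane_normal=[0.0, 0.0, 1.0]))
```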

15.
In this paper, we present an approach for recognizing pointing gestures in the context of human–robot interaction. In order to obtain input features for gesture recognition, we perform visual tracking of head, hands and head orientation. Given the images provided by a calibrated stereo camera, color and disparity information are integrated into a multi-hypothesis tracking framework in order to find the 3D-positions of the respective body parts. Based on the hands’ motion, an HMM-based classifier is trained to detect pointing gestures. We show experimentally that the gesture recognition performance can be improved significantly by using information about head orientation as an additional feature. Our system aims at applications in the field of human–robot interaction, where it is important to do run-on recognition in real-time, to allow for robot egomotion and not to rely on manual initialization.

16.
A smart space, which is embedded with networked sensors and smart devices, can provide various useful services to its users. For the success of a smart space, the problem of tracking and identifying its users is of paramount importance. We propose a system, called Optimus, for persistent tracking and identification of users in a smart space equipped with a camera network. We assume that each user carries a smartphone in the smart space. The camera network is used to track multiple users, and information from the smartphones is used to identify the tracks. For robust tracking, we first detect human subjects in images using a head detection algorithm based on histograms of oriented gradients. Human detections are then combined to form tracklets, and delayed track-level association is used to combine tracklets into longer user trajectories. Finally, accelerometers in the smartphones are used to disambiguate the identities of the trajectories. By linking identified trajectories, we show that the average track length can be extended by a factor of more than six. The performance of the proposed system is evaluated extensively in realistic scenarios.
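A rough sketch of the detection step, assuming OpenCV's built-in HOG pedestrian detector as a stand-in for the paper's HOG-based head detector; tracklet formation, track-level association, and accelerometer-based identification are not shown.

```python
import cv2

def detect_people(frame):
    """Detect people in a frame with OpenCV's built-in HOG descriptor and its
    default SVM people detector, used here as a stand-in for the paper's
    HOG-based head detector."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    return [tuple(map(int, box)) for box in boxes]

# Usage sketch: detect on each frame of the camera network; detections would
# then be linked into tracklets and associated across time into trajectories.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    for (x, y, w, h) in detect_people(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.jpg", frame)
cap.release()
```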

17.
Since hysteresis is inherent in the rubber material, the control of the micro-hand must take the hysteresis property into account. This paper therefore discusses robust tracking control for micro-hand systems in terms of the Prandtl–Ishlinskii hysteresis model, which is well suited to real applications. First, a new model is obtained by combining the dynamic model of the micro-hand with the Prandtl–Ishlinskii hysteresis property. Second, a new stability condition based on bounded-input bounded-output stability is proposed for the Prandtl–Ishlinskii hysteresis-modeled micro-hand system, covering two different cases. Third, robust controllers designed using the internal model control method improve the tracking performance by eliminating the effect of the disturbance. Finally, simulation further demonstrates the effectiveness of the proposed design scheme.
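A minimal sketch of the classical Prandtl–Ishlinskii model as a weighted superposition of play (backlash) operators. The thresholds and weights below are illustrative assumptions, not identified parameters of the micro-hand.

```python
import numpy as np

def play_operator(u, r, y0=0.0):
    """Backlash (play) operator with threshold r applied to input sequence u."""
    y = np.empty_like(u, dtype=float)
    prev = y0
    for k, uk in enumerate(u):
        prev = max(uk - r, min(uk + r, prev))   # rate-independent hysteresis
        y[k] = prev
    return y

def prandtl_ishlinskii(u, thresholds, weights):
    """Prandtl-Ishlinskii output: weighted superposition of play operators."""
    return sum(w * play_operator(u, r) for w, r in zip(weights, thresholds))

# Illustrative model: a few play operators with decreasing weights.
t = np.linspace(0, 2 * np.pi, 400)
u = np.sin(t)                                   # periodic actuation input
y = prandtl_ishlinskii(u, thresholds=[0.0, 0.1, 0.2, 0.4],
                       weights=[0.5, 0.25, 0.15, 0.10])
print(y[:5])
```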

18.
A gaze-detection method is proposed based on a single-camera system with an active infrared light source. In the eye-feature detection stage, the face is located with a projection method; candidate pupil regions are determined from prior knowledge of facial symmetry and the layout of facial features; and the eye features are then precisely segmented. In the gaze-direction modeling stage, a nonlinear polynomial is first fitted, with the head stationary, to map the planar gaze parameters to the gaze point; a generalized regression neural network (GRNN) then compensates for the gaze deviation caused by different head positions, extending the nonlinear mapping to arbitrary head positions. Experimental results and an application in an interactive graphical interface system verify the effectiveness of the method.
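A minimal sketch of the first stage (head stationary): fitting a second-order polynomial mapping from planar gaze parameters to the on-screen gaze point by least squares. The polynomial order and calibration data are illustrative assumptions, and the GRNN head-position compensation stage is not shown.

```python
import numpy as np

def poly_features(px, py):
    """Second-order polynomial terms of the planar gaze parameters (px, py)."""
    px, py = np.asarray(px, float), np.asarray(py, float)
    return np.column_stack([np.ones_like(px), px, py, px * py, px ** 2, py ** 2])

def fit_gaze_mapping(params, screen_xy):
    """Least-squares fit of screen coordinates as a polynomial in gaze params."""
    coeffs, *_ = np.linalg.lstsq(poly_features(*np.asarray(params, float).T),
                                 np.asarray(screen_xy, float), rcond=None)
    return coeffs                                    # shape (6, 2)

# Calibration with a 3x3 grid of fixation targets (illustrative numbers).
params = [(-1, -1), (0, -1), (1, -1), (-1, 0), (0, 0), (1, 0),
          (-1, 1), (0, 1), (1, 1)]
screen = [(100, 100), (960, 90), (1820, 110), (110, 540), (960, 540),
          (1810, 540), (90, 980), (960, 990), (1820, 970)]
C = fit_gaze_mapping(params, screen)
print(poly_features(0.5, 0.2) @ C)                   # predicted gaze point
```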

19.
Recent research on interactive electronic systems such as computers, digital TVs, and smartphones can improve the quality of life of many disabled and elderly people by helping them engage more fully with the world. Previously, a simulator was developed that reflects the effect of impairment on interaction with electronic devices and thus helps designers develop accessible systems. In this article, the scope of the simulator is extended to multiple pointing devices. The way hand strength affects the pointing performance of people with and without mobility impairment in graphical user interfaces was investigated for four different input modalities, and a set of linear equations was developed to predict pointing time and the average number of submovements for different devices. These models were used to develop an adaptation algorithm that facilitates pointing in electronic interfaces by users with motor impairment using different pointing devices. The algorithm attracts the pointer when it is near a target and thus helps reduce random movement during homing and clicking. The algorithm was optimized using the simulator and then tested in a real-life application with multiple distractors involving three different pointing devices. It significantly reduces pointing time for different input modalities.
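A minimal sketch of the pointer-attraction idea: the raw pointer position is pulled toward the nearest target once it comes within an activation radius, damping random movement during homing and clicking. The radius and pull strength are illustrative assumptions, not the values optimized with the simulator.

```python
import math

def attract_pointer(x, y, targets, radius=80.0, strength=0.4):
    """Pull the raw pointer position (x, y) toward the nearest target centre
    when it is within `radius` pixels. Radius and strength are illustrative,
    not the paper's optimized values."""
    nearest = min(targets, key=lambda t: math.hypot(t[0] - x, t[1] - y))
    d = math.hypot(nearest[0] - x, nearest[1] - y)
    if d == 0 or d > radius:
        return x, y
    pull = strength * (1.0 - d / radius)          # stronger pull closer to target
    return x + pull * (nearest[0] - x), y + pull * (nearest[1] - y)

targets = [(200, 150), (600, 400), (900, 120)]    # clickable element centres
print(attract_pointer(585, 392, targets))          # nudged toward (600, 400)
```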

20.
王琦  曹卫权  梁杰  李赟  吴杰 《计算机工程》2021,47(11):136-143
The Tor anonymous communication system is widely deployed and used worldwide, but its ability to resist end-to-end tracing (traffic-correlation) attacks still requires further modeling and analysis. To measure precisely the security of Tor users under end-to-end tracing attacks, a Tor security model against such adversaries is built that combines the Tor path-selection algorithm, user usage patterns, and the capabilities of the tracing adversary. Experimental validation and analysis show that the model can, in a statistical sense, accurately compute the probability and the number of times an adversary captures a communication circuit, and can thus quantify how severely different end-to-end tracing adversaries degrade user security.
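A minimal Monte Carlo sketch of the kind of quantity such a model computes: the probability that an adversary controlling given fractions of guard and exit bandwidth observes both ends of a circuit. It assumes independent bandwidth-weighted relay selection and ignores guard pinning and other path constraints, so it illustrates the statistic rather than the paper's model.

```python
import random

def estimate_compromise(guard_frac, exit_frac, circuits_per_user=100,
                        users=10000, seed=1):
    """Monte Carlo estimate of (a) the probability a single circuit has both a
    compromised guard and a compromised exit, and (b) the probability a user
    builds at least one such circuit over `circuits_per_user` circuits.
    Assumes independent bandwidth-weighted selection and no guard pinning."""
    rng = random.Random(seed)
    circuit_hits = user_hits = 0
    for _ in range(users):
        compromised_once = False
        for _ in range(circuits_per_user):
            if rng.random() < guard_frac and rng.random() < exit_frac:
                circuit_hits += 1
                compromised_once = True
        user_hits += compromised_once
    total = users * circuits_per_user
    return circuit_hits / total, user_hits / users

# Adversary controlling 5% of guard and 10% of exit bandwidth (illustrative).
per_circuit, per_user = estimate_compromise(0.05, 0.10)
print(f"per-circuit capture probability ~ {per_circuit:.4f}")
print(f"users with at least one captured circuit ~ {per_user:.3f}")
```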
