Similar Literature
20 similar articles found (search time: 31 ms)
1.
The proliferation of accelerometers on consumer electronics has brought an opportunity for interaction based on gestures. We present uWave, an efficient recognition algorithm for such interaction using a single three-axis accelerometer. uWave requires a single training sample for each gesture pattern and allows users to employ personalized gestures. We evaluate uWave using a large gesture library with over 4000 samples for eight gesture patterns collected from eight users over one month. uWave achieves 98.6% accuracy, competitive with statistical methods that require significantly more training samples. We also present applications of uWave in gesture-based user authentication and interaction with 3D mobile user interfaces. In particular, we report a series of user studies that evaluates the feasibility and usability of lightweight user authentication. Our evaluation shows both the strength and limitations of gesture-based user authentication.
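uWave's single-template matching rests on dynamic time warping (DTW) between acceleration time series. A minimal sketch of that matching step (pure Python; not the authors' implementation, which additionally quantizes and adapts templates):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of
    acceleration samples (each sample a tuple of axis values)."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j]: best cumulative cost aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1]))
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def recognize(sample, templates):
    """Return the label of the stored single-sample template
    closest to the input under DTW."""
    return min(templates, key=lambda lbl: dtw_distance(sample, templates[lbl]))
```

Because DTW tolerates local time stretching, one stored sample per gesture pattern can match fresh executions of varying speed, which is what makes single-template training viable.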

2.
3.
The “Midas Touch” problem has long been a difficult problem existing in gesture-based interaction. This paper proposes a visual attention-based method to address this problem from the perspective of cognitive psychology. There are three main contributions in this paper: (1) a visual attention-based parallel perception model is constructed by combining top-down and bottom-up attention, (2) a framework is proposed for dynamic gesture spotting and recognition simultaneously, and (3) a gesture toolkit is created to facilitate gesture design and development. Experimental results show that the proposed method has a good performance for both isolated and continuous gesture recognition tasks. Finally, we highlight the implications of this work for the design and development of gesture-based applications.

4.
With the development of smart mobile devices and the miniaturization of camera lenses, taking selfies has become increasingly popular. A key problem for selfie-camera interfaces is how to design new interaction methods that let users control the camera freely and in real time while taking a selfie. This paper proposes a vision-based motion-gesture interface with which users interact with the selfie camera simply by waving an arm. With gesture interaction, users can place the camera on any surface and freely strike various poses, enriching the selfie experience. Two interaction gestures, waving and circling, are proposed; in combination they support a rich and efficient set of selfie controls such as shutter release, white balance, and exposure. Gestures are recognized by processing the camera's live video stream with a sparse optical flow algorithm. A user evaluation shows that the proposed motion-gesture selfie interface offers good interaction efficiency and user satisfaction, with a recognition rate of about 85% for the two gestures.
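The wave and circle gestures can be separated from a track of per-frame motion vectors (such as the mean sparse-optical-flow displacement). A hedged sketch, assuming the flow extraction has already happened upstream; the decision rule below is illustrative, not the paper's:

```python
def classify_motion(vectors):
    """Classify a per-frame motion-vector track as 'wave' (direction
    reverses back and forth) or 'circle' (direction rotates steadily).
    `vectors` is a list of (dx, dy) mean-flow displacements."""
    reversals = rotations = 0
    for (x1, y1), (x2, y2) in zip(vectors, vectors[1:]):
        dot = x1 * x2 + y1 * y2
        cross = x1 * y2 - y1 * x2
        if dot < 0:
            reversals += 1      # direction flipped: waving back and forth
        elif abs(cross) > 1e-9:
            rotations += 1      # direction turned: circular motion
    return "wave" if reversals >= rotations else "circle"
```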

5.
许芬 《计算机应用研究》2021,38(12):3521-3526
This paper comprehensively surveys domestic and international research on posture-based emotion recognition, covering posture data acquisition, posture emotion feature extraction, recognition algorithms, and posture emotion databases. It analyzes the difficulties and challenges in this line of research, argues that the keys to posture-based emotion recognition are emotion feature extraction and the construction of posture emotion databases, and finally discusses future research directions.

6.
We present an intuitive, implicit, gesture-based identification system suited for applications such as user login to home multimedia services, with less strict security requirements. The term “implicit gesture” in this work refers to a natural physical hand manipulation of the control device performed by the user, who picks it up from its neutral motionless position or shakes it. For comparison with other related systems, explicit and well-defined identification gestures were also used. Gestures were acquired by an accelerometer-equipped device in the form of a Nintendo WiiMote remote controller. A dynamic time warping method is used at the core of our gesture-based identification system. To significantly increase computational efficiency and temporal stability, the “super-gesture” concept was introduced, in which acceleration features of multiple gestures are combined into a single super-gesture template per user. A user evaluation spanning 10 days and including 10 participants was conducted. The results show that our algorithm achieves nearly 100% recognition accuracy when using explicit identification signature gestures, and between 88% and 77% accuracy when the system must distinguish between 5 and 10 users, respectively, using the implicit “pick-up” gesture. Performance of the proposed system with explicit identification gestures is comparable to other related work, while showing that implicit gesture-based identification is also possible and viable.
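The "super-gesture" idea fuses the acceleration features of several recorded gestures into one template per user. One plausible reading, resampling each trace to a common length and averaging (the paper's exact fusion may differ):

```python
def resample(seq, n):
    """Linearly resample a sequence of scalar samples to length n."""
    if len(seq) == 1:
        return [float(seq[0])] * n
    out = []
    for i in range(n):
        t = i * (len(seq) - 1) / (n - 1)   # fractional source index
        lo = int(t)
        hi = min(lo + 1, len(seq) - 1)
        frac = t - lo
        out.append(seq[lo] * (1 - frac) + seq[hi] * frac)
    return out

def super_gesture(samples, n=32):
    """Fuse several acceleration traces from one user into a single
    fixed-length template by resampling and pointwise averaging."""
    resampled = [resample(s, n) for s in samples]
    return [sum(vals) / len(vals) for vals in zip(*resampled)]
```

Matching one fused template per user instead of many raw samples is what buys the computational-efficiency gain the abstract mentions.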

7.
Artificial Intelligence, 2007, 171(8-9): 568-585
Head pose and gesture offer several conversational grounding cues and are used extensively in face-to-face interaction among people. To accurately recognize visual feedback, humans often use contextual knowledge from previous and current events to anticipate when feedback is most likely to occur. In this paper we describe how contextual information can be used to predict visual feedback and improve recognition of head gestures in human–computer interfaces. Lexical, prosodic, timing, and gesture features can be used to predict a user's visual feedback during conversational dialog with a robotic or virtual agent. In non-conversational interfaces, context features based on user–interface system events can improve detection of head gestures for dialog box confirmation or document browsing. Our user study with prototype gesture-based components indicates quantitative and qualitative benefits of gesture-based confirmation over conventional alternatives. Using a discriminative approach to contextual prediction and multi-modal integration, performance of head gesture detection was improved with context features even when the topic of the test set was significantly different from the training set.

8.
Traditionally, gesture-based interaction in virtual environments is composed of either static, posture-based gesture primitives or temporally analyzed dynamic primitives. However, it would be ideal to incorporate both static and dynamic gestures to fully utilize the potential of gesture-based interaction. To that end, we propose a probabilistic framework that incorporates both static and dynamic gesture primitives. We call these primitives Gesture Words (GWords). Using a probabilistic graphical model (PGM), we integrate these heterogeneous GWords and a high-level language model in a coherent fashion. Composite gestures are represented as stochastic paths through the PGM. A gesture is analyzed by finding the path that maximizes the likelihood on the PGM with respect to the video sequence. To facilitate online computation, we propose a greedy algorithm for performing inference on the PGM. The parameters of the PGM can be learned via three different methods: supervised, unsupervised, and hybrid. We have implemented the PGM model for a gesture set of ten GWords with six composite gestures. The experimental results show that the PGM can accurately recognize composite gestures.
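Greedy inference on such a PGM repeatedly extends the current path with the GWord that maximizes transition probability times observation likelihood. An illustrative sketch only; the GWord names, transition table, and likelihoods below are invented, not from the paper:

```python
def greedy_decode(obs_likelihoods, transitions, start):
    """Greedily decode a composite gesture as a path of GWords.
    obs_likelihoods: per-time-step dicts {gword: P(observation | gword)};
    transitions: {gword: {next_gword: prob}} from the language model."""
    path = [start]
    score = obs_likelihoods[0].get(start, 0.0)
    for frame in obs_likelihoods[1:]:
        prev = path[-1]
        # pick the locally best successor instead of searching all paths
        nxt = max(transitions[prev],
                  key=lambda g: transitions[prev][g] * frame.get(g, 0.0))
        score *= transitions[prev][nxt] * frame.get(nxt, 0.0)
        path.append(nxt)
    return path, score
```

Unlike full Viterbi decoding, this commits to one successor per step, trading optimality for the online speed the paper is after.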

9.
Gesture recognition is an important component of human-computer interaction. This paper addresses command gesture recognition based on optical-flow PCA (principal component analysis) and DTW (dynamic time warping). Optical flow is computed with a block-correlation algorithm, and PCA yields dimension-reduced projection coefficients, which are combined with the centroid of the palm region into a hybrid feature vector. A new weighted distance measure is defined for this hybrid feature vector, and DTW is used to match gestures. Trained and tested on nine gestures, the method achieves a recognition rate of 92%.
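The two feature steps, PCA projection of flattened flow fields and a weighted distance over the hybrid vector, can be sketched as follows (the weights are illustrative, not the paper's measure):

```python
import numpy as np

def pca_project(flow_fields, k=2):
    """Project flattened optical-flow fields onto their top-k
    principal components (the dimension-reduction step)."""
    X = np.asarray(flow_fields, dtype=float)
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def weighted_distance(f1, f2, w):
    """Weighted distance over a hybrid feature vector (PCA
    coefficients plus palm centroid), used inside DTW matching."""
    f1, f2, w = map(np.asarray, (f1, f2, w))
    return float(np.sqrt(np.sum(w * (f1 - f2) ** 2)))
```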

10.
Leap Motion gesture recognition is unstable at the edges of the sensing region and where fingers occlude one another. This paper proposes a hierarchical correction method for Leap Motion gesture interaction. The method detects Leap Motion recognition errors by comparing measurements against thresholds in real time, and applies a hierarchical correction algorithm to correct the hand position, resolving the instability during hand interaction. In the experimental evaluation, 75% of participants were satisfied with the interaction, 80% considered the method more accurate, and the recognition accuracy of the interaction content exceeded 89%, demonstrating that the method improves Leap Motion's recognition accuracy and the user experience.
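The threshold-comparison-and-correct step can be illustrated with a single-level simplification of the hierarchical algorithm; the threshold and smoothing factor below are invented for illustration:

```python
def correct_positions(raw, jump_threshold=30.0, alpha=0.5):
    """Correct a stream of tracked hand positions: a sample that jumps
    farther than jump_threshold from the last accepted position is
    treated as a recognition error and replaced by the held position;
    accepted samples are exponentially smoothed."""
    out = [raw[0]]
    for p in raw[1:]:
        prev = out[-1]
        dist = sum((a - b) ** 2 for a, b in zip(p, prev)) ** 0.5
        if dist > jump_threshold:
            p = prev                      # reject the outlier, hold last position
        out.append(tuple(alpha * a + (1 - alpha) * b for a, b in zip(p, prev)))
    return out
```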

11.
Jiang Du, Li Gongfa, Sun Ying, Kong Jianyi, Tao Bo. Multimedia Tools and Applications, 2019, 78(21): 29953-29970

In the field of human-computer interaction, vision-based gesture recognition methods are widely studied, but their effectiveness depends largely on the performance of the recognition algorithm. Combining a skeletonization algorithm with a convolutional neural network (CNN) reduces the impact of shooting angle and environment on recognition and improves the accuracy of gesture recognition in complex environments. To handle the influence of shooting angle on recognition of the same gesture, the skeletonization algorithm is optimized using a layer-by-layer stripping approach so that key node information in the hand skeleton diagram is extracted. The gesture direction is determined from the spatial coordinate axes of the hand, and on this basis gesture segmentation is performed to overcome the influence of the environment on recognition. To further improve accuracy, the ASK gesture database is used to train the CNN model. Experimental results show that, compared with an SVM method, dictionary learning with sparse representation, a plain CNN, and other methods, the recognition rate reaches 96.01%.

12.
This paper introduces a novel framework, Gesture and Appearance Cutout Embedding (GACE), that supports real-time integration of human appearance and gesture-guided control within a game. It aims to enhance immersion by allowing players to see their own appearance in real time, and it exploits gesture-based control to let users interact with other in-game characters. To ease implementation, we address challenges across the whole pipeline of video processing, gesture recognition, and communication. The system has been successfully integrated into both entertainment and serious games. Extensive experiments show that it runs reliably on commodity hardware, and a user impression study indicates the system is favored by end users.

13.
Accurately understanding a user’s intention is often essential to the success of any interactive system. An information retrieval system, for example, should address the vocabulary problem (Furnas et al., 1987) to accommodate different query terms users may choose. A system that supports natural user interaction (e.g., full-body game and immersive virtual reality) must recognize gestures that are chosen by users for an action. This article reports an experimental study on the gesture choice for tasks in three application domains. We found that the chance for users to produce the same gesture for a given task is below 0.355 on average, and offering a set of gesture candidates can improve the agreement score. We discuss the characteristics of those tasks that exhibit the gesture disagreement problem and those tasks that do not. Based on our findings, we propose some design guidelines for free-hand gesture-based interfaces.
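The agreement score mentioned above has a standard formulation in gesture elicitation: for one task, sum the squared proportions of participants who proposed each distinct gesture. A sketch of that classic measure (the article's exact variant may differ in detail):

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one task: for each distinct proposed
    gesture, add (group size / total participants) squared.
    1.0 = everyone proposed the same gesture; 1/n = all different."""
    total = len(proposals)
    return sum((c / total) ** 2 for c in Counter(proposals).values())
```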

14.
Machine learning is a technique for analyzing data that aids the construction of mathematical models. With the growth of the Internet of Things (IoT) and wearable sensor devices, gesture interfaces are becoming a more natural and expedient human-machine interaction method. This type of artificial intelligence, which requires minimal or no direct human intervention in decision-making, is predicated on the ability of intelligent systems to self-train and detect patterns. The rise of touch-free applications and the number of deaf people have increased the significance of hand gesture recognition, whose potential applications span from online gaming to surgical robotics. The location of the hands, the alignment of the fingers, and the hand-to-body posture are the fundamental components of hierarchical emotions in gestures. Linguistic gestures may be difficult to distinguish from nonsensical motions, and segmentation uncertainty caused by accidental hand motions or trembling can be difficult to overcome. When performing the same dynamic gesture, hand shapes and speeds vary across users, and even for the same user. A Machine Learning-based Gesture Recognition Framework (ML-GRF) for recognizing the beginning and end of a gesture sequence in a continuous stream of data is proposed to distinguish meaningful dynamic gestures from scattered, unintentional motion. A similarity-matching-based gesture classification approach reduces the overall computing cost of identifying actions, and an efficient feature extraction method reduces the thousands of values describing a single gesture to four-binary-digit gesture codes.
The simulation results support the reported rates: ML-GRF achieved an accuracy rate of 98.97%, a precision rate of 97.65%, a gesture recognition rate of 98.04%, a sensitivity rate of 96.99%, and an efficiency rate of 95.12%.
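Once gestures are compressed to four-bit codes, similarity matching can be as cheap as minimum Hamming distance. A guess at what such matching looks like; the codebook entries below are illustrative, not taken from the paper:

```python
def hamming(a, b):
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def match_gesture(code, codebook):
    """Match a 4-bit gesture code against stored codes by minimum
    Hamming distance (hypothetical labels and codes)."""
    return min(codebook, key=lambda lbl: hamming(code, codebook[lbl]))
```

Comparing 4-bit codes instead of raw sensor sequences is what collapses the per-gesture matching cost to a handful of bit comparisons.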

15.
To address the low recognition rates of gesture detection algorithms in complex environments, caused by uneven illumination, near-skin-colored backgrounds, and small gesture scales, this paper proposes a gesture recognition method, HD-YOLOv5s. First, an adaptive Gamma image-enhancement preprocessing method based on Retinex theory reduces the impact of illumination changes on recognition. Second, a feature extraction network with the adaptive convolutional attention mechanism SKNet improves the network's feature extraction ability and reduces background interference in complex environments. Finally, a new bidirectional feature pyramid structure in the feature fusion network makes full use of low-level features to reduce the loss of shallow semantic information and improve detection accuracy for small-scale gestures, while cross-level concatenation further improves detection efficiency. To verify the effectiveness of the improvements, experiments were conducted on a self-built dataset with rich illumination contrast and on the public dataset NUS-II with complex backgrounds; recognition rates reached 99.5% and 98.9%, respectively, with a detection time of only 0.01 to 0.02 s per frame.
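The adaptive Gamma preprocessing step can be approximated by choosing gamma so that the frame's mean intensity maps toward mid-gray, brightening dark frames and darkening overexposed ones. A simplified stand-in for the Retinex-based formula in the paper, not its exact method:

```python
import numpy as np

def adaptive_gamma(img):
    """Adaptive gamma correction on an 8-bit grayscale image:
    gamma = log(0.5) / log(mean intensity), so mean maps to ~0.5."""
    x = np.clip(np.asarray(img, dtype=float) / 255.0, 1e-6, 1.0)
    gamma = np.log(0.5) / np.log(max(x.mean(), 1e-6))
    return (x ** gamma * 255.0).astype(np.uint8)
```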

16.
Since the invention of the laser projection keyboard in 1992, its engaging projected-light interaction has attracted many enthusiasts. Adding some simple in-plane spatial gesture recognition on top of its keyboard function would satisfy more personalized interaction needs. This paper studies how to extend a laser projection keyboard with gesture recognition, using a human-computer interaction gesture recognition algorithm based on a radial basis function (RBF) neural network. Experiments with ten predefined gestures show a recognition rate of 96.4%, realizing to some extent the personalized extension of the laser keyboard with gesture recognition.
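An RBF-network classifier of the kind described can be sketched as Gaussian basis activations plus least-squares output weights. The article does not specify its centers or training rule, so this is a textbook version under those assumptions:

```python
import numpy as np

def rbf_features(X, centers, sigma=1.0):
    """Gaussian RBF activations of inputs X against the given centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def train_rbf(X, y, centers, sigma=1.0):
    """Fit the output-layer weights of an RBF network by least squares
    (y is one-hot class labels)."""
    Phi = rbf_features(X, centers, sigma)
    W, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return W

def predict_rbf(X, centers, W, sigma=1.0):
    """Class scores for inputs X; argmax per row gives the label."""
    return rbf_features(X, centers, sigma) @ W
```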

17.
Humans use a combination of gesture and speech to interact with objects and usually do so more naturally without holding a device or pointer. We present a system that incorporates user body-pose estimation, gesture recognition and speech recognition for interaction in virtual reality environments. We describe a vision-based method for tracking the pose of a user in real time and introduce a technique that provides parameterized gesture recognition. More precisely, we train a support vector classifier to model the boundary of the space of possible gestures, and train Hidden Markov Models (HMM) on specific gestures. Given a sequence, we can find the start and end of various gestures using a support vector classifier, and find gesture likelihoods and parameters with a HMM. A multimodal recognition process is performed using rank-order fusion to merge speech and vision hypotheses. Finally we describe the use of our multimodal framework in a virtual world application that allows users to interact using gestures and speech.

18.
This paper presents an interactive multi-agent system based on a fully immersive virtual environment. A user can interact with the virtual characters in real time via an avatar by changing their moving behavior. Moreover, the user is allowed to select any character as the avatar to be controlled. A path planning algorithm is proposed to address the problem of dynamic navigation of individual and groups of characters in the multi-agent system. A natural interface is designed for the interaction between the user and the virtual characters, as well as the virtual environment, based on gesture recognition. To evaluate the efficiency of the dynamic navigation method, performance results are provided. The presented system has the potential to be used in the training and evaluation of emergency evacuation and other real-time applications of crowd simulation with interaction.

19.
Hand gestures have great potential to act as a computer interface in the entertainment environment. However, there are two major problems when implementing a hand-gesture-based interface for multiple users: the complexity problem and the personalization problem. To solve both problems and implement a multi-user data glove interface successfully, we propose an adaptive mixture-of-experts model for data-glove-based hand gesture recognition. The proposed model consists of a mixture-of-experts used to recognize the gestures of an individual user, and a teacher network trained with gesture data from multiple users. The mixture-of-experts model is trained with an expectation-maximization (EM) algorithm and an on-line learning rule, and its parameters are adjusted based on feedback from the real-time recognition of the teacher network. The model is applied to a musical performance game with a data glove (5DT Inc.) as a practical example. Comparison experiments using several representative classifiers showed both the outstanding performance and the adaptability of the proposed method, and a usability assessment completed by users while playing the game revealed the usefulness of the data glove interface system.

20.
With the continuous development of electronic technology, human-computer interaction methods are also changing, and gesture recognition, as a typical application, is attracting increasing attention. This paper implements basic gesture recognition on an embedded platform. A camera captures gesture images, and an STM32 microcontroller processes them with frame-difference segmentation and noise removal, achieving real-time localization and basic recognition of moving gestures at close range. On this basis, the game Tetris is controlled by gestures, demonstrating the application of gesture recognition in human-computer interaction and the convenience and novel user experience of gesture operation.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号