Similar Documents
20 similar documents found.
1.
In the user interfaces of modern systems, users get the impression of directly interacting with application objects. In 3D-based user interfaces, novel input devices, such as hand and force input devices, are being introduced with the aim of providing natural ways of interaction. A hand input device allows the recognition of static poses and dynamic gestures performed by a user's hand. This paper describes the use of a hand input device for interacting with a 3D graphical application. A dynamic gesture language, which allows users to teach the system hand gestures, is presented, together with a user interface that integrates recognition of these gestures and provides feedback for them. Particular attention has been paid to implementing a tool for easy specification of dynamic gestures and to strategies for providing graphical feedback on users' interactions. To demonstrate that the introduced 3D user interface features, and the way the system presents graphical feedback, are not restricted to a hand input device, a force input device has also been integrated into the user interface.

2.
The rapid development of gesture recognition and the continuous updating of motion-sensing devices have provided inspiration for three-dimensional gesture interaction. A 3D gesture interaction system is established based on Leap Motion gesture recognition and the nearest-neighbour algorithm. First, gesture design theory and the design principles of interactive gestures are studied, and on this basis gesture functions are designed and a gesture library of eight gestures is established. Second, gesture features are extracted: a key-point model of the fingers is built and the angle features of each gesture are obtained. The recognition rates of the KNN and SVM algorithms are then compared, with an improved KNN algorithm achieving the better recognition rate. Finally, a three-dimensional interaction system is designed in which the gestures are grouped into four modules, each with two gesture tasks; 1,600 groups of gesture data are collected from 20 testers and the joint-point means of the full sample are analysed. The 3D interaction system is built in Unity3D, the 1,600 groups of gesture data are imported, and a virtual hand is driven by the eight self-defined gestures to complete the interaction design process, followed by a user-experience analysis and statistics on recognition rates. The study finds that Leap Motion based gesture recognition achieves a high recognition rate and that the 3D gesture interaction system is innovative.
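A minimal sketch of the nearest-neighbour classification step described above, assuming the angle features from the finger key-point model have already been flattened into fixed-length vectors; the feature dimension, the distance weighting, and k are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row is one gesture sample: joint-angle features extracted from the
# Leap Motion key-point model (the 15-D dimension is assumed for illustration).
X_train = np.random.rand(1600, 15)       # stand-in for the 1,600 recorded samples
y_train = np.random.randint(0, 8, 1600)  # labels for the 8 gestures in the library

# Distance-weighted KNN is one simple "improved KNN" variant; the paper's
# exact improvement is not specified here.
clf = KNeighborsClassifier(n_neighbors=5, weights="distance")
clf.fit(X_train, y_train)

x_new = np.random.rand(1, 15)            # angle features of an incoming frame
print(clf.predict(x_new))                # predicted gesture id in 0..7
```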

3.
Virtual environments provide a whole new way of viewing and manipulating 3D data. Current technology moves the images out of desktop monitors and into the space immediately surrounding the user. Users can literally put their hands on the virtual objects. Unfortunately, techniques for interacting with such environments have yet to mature. Gloves and sensor-based trackers are unwieldy, constraining and uncomfortable to use. A natural, more intuitive method of interaction would be to allow the user to grasp objects with their hands and manipulate them as if they were real objects. We are investigating the use of computer vision in implementing a natural interface based on hand gestures. A framework for a gesture recognition system is introduced along with results of experiments in colour segmentation, feature extraction and template matching for finger and hand tracking, and simple hand pose recognition. An implementation of a gesture interface for navigation and object manipulation in virtual environments is presented.
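As a concrete illustration of the template-matching component of such a vision pipeline, the OpenCV sketch below locates a fingertip-sized template in a grey frame; the frame, template, and matching method are assumptions for illustration, not the authors' exact setup.

```python
import cv2
import numpy as np

frame = np.random.randint(0, 255, (240, 320), np.uint8)  # grey search image
template = frame[100:140, 150:190].copy()                # assumed fingertip patch

# Normalised cross-correlation over all positions; the peak gives the best match.
scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)
print(max_loc, max_val)  # top-left corner of the best match and its score
```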

4.
To address the input and output of concept sketches, an interactive graphic design method based on sketch input is adopted, implementing active segmentation, recognition and regularization of single-stroke sketch trajectories. Sketch editing based on pen gestures is applied, and the concept sketch is exported in DXF format, enabling integration with existing CAD systems. Following gesture design principles and the needs of sketch editing, ten gestures such as select, delete and move are defined and implemented, and a perceptron linear classifier is used to recognize the gestures. Examples show that the gesture editing mode remedies the shortcomings of traditional editing in naturalness and intelligence.
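A minimal sketch of the perceptron linear classifier mentioned above, assuming each pen gesture has been reduced to a fixed-length feature vector (the featurisation is not specified here); for the ten editing gestures, one such binary classifier per gesture (one-vs-rest) would be trained.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Binary perceptron with labels y in {-1, +1}; rows of X are assumed
    pen-gesture feature vectors (e.g. stroke direction statistics)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:             # misclassified: nudge the boundary
                w += lr * yi * xi
    return w

def predict(w, x):
    return 1 if w @ np.append(x, 1.0) > 0 else -1

# Toy usage with random two-class data.
X = np.random.rand(40, 8)
y = np.where(X[:, 0] > 0.5, 1, -1)
w = train_perceptron(X, y)
print(predict(w, X[0]))
```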

5.
Motion capture is a technique for digitally recording the movements of real entities, usually humans. It was originally developed as an analysis tool in biomechanics research, but has grown increasingly important as a source of motion data for computer animation, where it is widely used for both cinema and video games. Hand motion capture and tracking in particular has received much attention because of its critical role in the design of new human-computer interaction methods and in gesture analysis; one of the main difficulties is capturing human hand motion. This paper gives an overview of ongoing research, “HandPuppet3D”, carried out in collaboration with an animation studio, which employs computer vision techniques to develop a prototype desktop system and associated animation process that allow an animator to control 3D character animation through hand gestures. The eventual goal of the project is to support existing practice by providing a softer, more intuitive user interface that improves the productivity of the animation workflow and the quality of the resulting animations. To this end, the focus has been placed on developing a prototype camera-based desktop gesture capture system that captures hand gestures and interprets them in order to generate and control the animation of 3D character models. Methods are discussed for motion tracking and capture in 3D animation, in particular hand motion tracking and capture. HandPuppet3D aims to enable gesture capture, interpretation of the captured gestures, and control of the target 3D animation software. This involves the development and testing of a motion analysis system built from recently developed algorithms. We review current software and research methods available in this area and describe our current work.

6.
We propose a 3D interaction and autostereoscopic display system based on gesture recognition, which can manipulate virtual objects in the scene directly by hand gestures and display them in stereoscopic 3D. The system consists of a gesture recognition and manipulation part and an autostereoscopic display as the interactive display part. To manipulate the 3D virtual scene, a gesture recognition algorithm is proposed that uses spatial-temporal sequences of feature vectors to match predefined gestures. To achieve smooth 3D visualization, we utilize the programmable graphics pipeline of the graphics processing unit to accelerate data processing. We have developed a prototype system for 3D virtual exhibition. The prototype reaches frame rates of 60 fps and operates efficiently with a mean recognition accuracy of 90%.
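The abstract does not spell out how the spatial-temporal feature sequences are matched against the predefined gestures; dynamic time warping is one standard technique for comparing variable-speed sequences with templates, sketched below under that assumption.

```python
import numpy as np

def dtw_distance(seq, template):
    """Dynamic-time-warping distance between two feature sequences
    (frames x features); smaller means a better match."""
    n, m = len(seq), len(template)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq[i - 1] - template[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

gesture = np.random.rand(40, 6)      # observed sequence of 6-D feature vectors
templ = np.random.rand(30, 6)        # one predefined gesture template
print(dtw_distance(gesture, templ))
```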

7.
Humans use a combination of gesture and speech to interact with objects and usually do so more naturally without holding a device or pointer. We present a system that incorporates user body-pose estimation, gesture recognition and speech recognition for interaction in virtual reality environments. We describe a vision-based method for tracking the pose of a user in real time and introduce a technique that provides parameterized gesture recognition. More precisely, we train a support vector classifier to model the boundary of the space of possible gestures, and train Hidden Markov Models (HMM) on specific gestures. Given a sequence, we can find the start and end of various gestures using a support vector classifier, and find gesture likelihoods and parameters with an HMM. A multimodal recognition process is performed using rank-order fusion to merge speech and vision hypotheses. Finally we describe the use of our multimodal framework in a virtual world application that allows users to interact using gestures and speech.
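A minimal sketch of the two-stage scheme described above, assuming a one-class SVM as the boundary model over per-frame pose features and one Gaussian HMM per gesture (via hmmlearn); feature dimensions, gesture names, and all training data are placeholders.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from hmmlearn.hmm import GaussianHMM

# Stage 1: boundary of the space of possible gestures; frames the model
# accepts are treated as in-gesture, which yields candidate start/end points.
boundary = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
boundary.fit(np.random.rand(500, 12))      # placeholder pose features (12-D assumed)

# Stage 2: one HMM per gesture; a spotted segment goes to the best-scoring model.
models = {}
for name in ["point", "wave"]:             # illustrative gesture names
    m = GaussianHMM(n_components=4, covariance_type="diag")
    m.fit(np.random.rand(200, 12))         # placeholder training frames
    models[name] = m

segment = np.random.rand(30, 12)           # frames flagged in-gesture by stage 1
best = max(models, key=lambda k: models[k].score(segment))
print(best)                                # gesture with highest log-likelihood
```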

8.
This paper presents a gesture recognition system for visualization navigation. Scientists are interested in developing interactive settings for exploring large data sets in an intuitive environment. The input consists of registered 3-D data. A geometric method using Bezier curves is used for trajectory analysis and classification of gestures. Hand gesture speed is incorporated into the algorithm to enable correct recognition of trajectories with variations in hand speed. The method is robust and reliable: the correct hand identification rate is 99.9% (over 1641 frames), the modes of hand movement are correct 95.6% of the time, and the recognition rate (given the right mode) is 97.9%. An application to gesture-controlled visualization of 3D bioinformatics data is also presented.
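A minimal sketch of the geometric step, assuming the trajectory is summarised by least-squares cubic Bezier control points that can then be compared against per-gesture templates; the arc-length parameterisation is one way to absorb hand-speed variation, not necessarily the paper's exact scheme.

```python
import numpy as np

def bernstein_matrix(ts):
    """Cubic Bernstein basis evaluated at parameters ts in [0, 1]."""
    t = np.asarray(ts)[:, None]
    return np.hstack([(1 - t) ** 3,
                      3 * t * (1 - t) ** 2,
                      3 * t ** 2 * (1 - t),
                      t ** 3])

def fit_cubic_bezier(points):
    """Least-squares cubic Bezier control points for a hand trajectory,
    parameterised by normalised arc length so that speed changes along
    the stroke do not distort the fit."""
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)
    ts = np.concatenate([[0.0], np.cumsum(d) / d.sum()])
    ctrl, *_ = np.linalg.lstsq(bernstein_matrix(ts), points, rcond=None)
    return ctrl                            # 4 control points, one row each

trajectory = np.cumsum(np.random.randn(50, 3), axis=0)  # placeholder 3-D track
print(fit_cubic_bezier(trajectory))
```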

9.
The use of hand gestures offers an alternative to commonly used human-computer interfaces (i.e., keyboard, mouse, gamepad), providing a more intuitive way of navigating menus and multimedia applications. One of the most difficult issues when designing a hand gesture recognition system is introducing new detectable gestures without high cost; this is known as gesture scalability. Commonly, introducing new gestures requires a recording session involving real subjects. This paper presents a training framework for hand posture detection systems based on a learning scheme fed with synthetically generated range images. Different configurations of a 3D hand model yield sets of synthetic subjects, which have shown good performance in separating the gestures of several state-of-the-art dictionaries. The proposed approach allows new dictionaries to be learned with no need to record real subjects, so it is fully scalable in terms of gestures. The accuracy rates obtained for the evaluated dictionaries are comparable to, and in some cases better than, those reported for training schemes using different real subjects.

10.
This paper presents the visual recognition of static gestures (SG) and dynamic gestures (DG). Gesture is one of the most natural interface tools for human-computer interaction (HCI) as well as for communication between human beings. To implement a human-like interface, gestures should be recognized using only visual information, mirroring the visual mechanism of human beings, and SGs and DGs should be processed concurrently. This paper aims at recognizing hand gestures from visual images on a 2D image plane, without any external devices. Gestures are spotted by a task-specific state transition based on natural human articulation. SGs are recognized using image moments of the hand posture, while DGs are recognized by analyzing their moving trajectories with hidden Markov models (HMMs). We have applied our gesture recognition approach to gesture-driven editing systems operating in real time.
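A minimal sketch of a moment-based static-gesture descriptor, using OpenCV's Hu moments on a segmented hand silhouette; Hu moments are one standard choice of image moments, and the paper's exact moment set is not specified here.

```python
import cv2
import numpy as np

def posture_features(binary_hand):
    """7-D moment descriptor of a hand silhouette; log-scaling compresses
    the large dynamic range of the raw Hu moments."""
    hu = cv2.HuMoments(cv2.moments(binary_hand)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

mask = np.zeros((64, 64), np.uint8)
cv2.circle(mask, (32, 32), 20, 255, -1)   # stand-in for a segmented hand
print(posture_features(mask))             # feature vector for an SG classifier
```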

11.
This article proposes a 3-dimensional (3D) vision-based ambient user interface as an interaction metaphor that exploits a user's personal space and dynamic gestures. In human-computer interaction, to provide natural interaction with a system, a user interface should not be a bulky or complicated device. In this regard, the proposed ambient user interface replaces cumbersome devices with an invisible personal space that is virtually augmented through 3D vision techniques. For natural interaction through the user's dynamic gestures, the user of interest is extracted from the image sequences by the proposed user segmentation method, which retrieves 3D information from the segmented user image through 3D vision techniques and a multiview camera. With the retrieved 3D information, a set of 3D boxes (SpaceSensor) can be constructed and augmented around the user; the user then interacts with the system by touching the augmented SpaceSensor. In tracking the user's dynamic gestures, the computational complexity of SpaceSensor is lower than that of conventional 2-dimensional vision-based gesture tracking techniques, because only the touched positions of SpaceSensor are tracked. According to the experimental results, the proposed ambient user interface can be applied to various systems that require users' dynamic gestures in real time for interaction, in both real and virtual environments.
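A minimal sketch of the SpaceSensor idea as described: virtual axis-aligned 3D boxes are placed around the user, and a tracked 3D hand position is tested for containment, which is what keeps the tracking cheap. The box placement and coordinates are illustrative assumptions; the paper derives them from the segmented user's 3D data.

```python
import numpy as np

class SpaceSensor:
    """One axis-aligned 3D box from the virtual sensor set around the user."""
    def __init__(self, lo, hi, name):
        self.lo, self.hi, self.name = np.asarray(lo), np.asarray(hi), name

    def touched(self, p):
        # A "touch" is simply containment of the 3D hand position.
        return bool(np.all(self.lo <= p) and np.all(p <= self.hi))

sensors = [SpaceSensor([0.3, 0.0, 0.4], [0.6, 0.3, 0.7], "right"),
           SpaceSensor([-0.6, 0.0, 0.4], [-0.3, 0.3, 0.7], "left")]

hand = np.array([0.45, 0.1, 0.5])   # 3D hand position from the multiview camera
print([s.name for s in sensors if s.touched(hand)])   # e.g. ['right']
```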

12.
13.
A touch-screen-based gesture remote control system
The constraints that traditional remote controls impose on users degrade the quality of the user experience. To address this, a touch-screen-based gesture remote control system is proposed. By analysing the elemental actions of touch gestures, touch gestures are classified and modelled mathematically, and a touch gesture recognition algorithm for the remote control system is designed. The algorithm fully accounts for differences in users' cognition and behavioural habits. A gesture remote control system for smart TVs is implemented, and the operating habits of real users of the system are collected to further improve the recognition accuracy of touch gestures and their corresponding remote control operations. Experimental results show that the algorithm can effectively distinguish touch gestures that are prone to misoperation, yielding an average recognition accuracy of 99%.
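A toy sketch of classifying a touch trajectory from elemental actions, distinguishing a tap from four swipe directions; the thresholds and gesture set are illustrative assumptions, far simpler than the paper's user-adaptive modelling.

```python
import numpy as np

def classify_touch(points, tap_radius=12.0):
    """Label a touch trajectory (list of (x, y) samples) as a tap or a swipe
    along the dominant displacement axis."""
    pts = np.asarray(points, float)
    if np.linalg.norm(pts - pts[0], axis=1).max() < tap_radius:
        return "tap"                       # barely moved: elemental tap action
    dx, dy = pts[-1] - pts[0]
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

print(classify_touch([(10, 10), (12, 11), (11, 9)]))    # tap
print(classify_touch([(10, 10), (60, 12), (140, 15)]))  # swipe_right
```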

14.
A human-machine cooperative control system is designed in which an array-type surface electromyography (sEMG) sensor is worn to recognise eight hand gestures of a subject in real time and drive a self-developed six-degree-of-freedom dexterous prosthetic hand to perform synchronous actions. The gesture recognition strategy for controlling the prosthetic hand is based on a neural network; the subject only needs to repeat the eight predefined gestures (relax, wrist extension, wrist flexion, fist, open palm, gesture 2, gesture 3, and thumbs-up) during an initial training stage, after which the system can recognise, in real time, any of the eight gestures performed at random. The proposed random search over network parameters, combined with gradient descent, improves training speed and gesture prediction accuracy compared with neural networks of the same scale. The recognition algorithm learns its weights in the TensorFlow machine learning framework and is analysed with visualisation, and an optimised gesture training procedure shortens the subject's training time while improving proficiency. sEMG signals were collected from a subject with no muscle impairment for training and prediction; the overall prediction accuracy for the eight gestures reached 97%, and no retraining is needed when the sensor is worn again. When the subject actually controls the prosthetic hand, a voting algorithm further refines the real-time gesture predictions, and the final action synchronisation rate of the prosthetic hand reaches 99%.
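A minimal sketch of the voting step that stabilises the per-frame network outputs before they drive the prosthetic hand: a sliding-window majority vote. The window length and tie handling are illustrative assumptions.

```python
from collections import Counter, deque

class VoteSmoother:
    """Majority vote over the last N frame-level gesture predictions."""
    def __init__(self, window=9):
        self.buf = deque(maxlen=window)

    def update(self, pred):
        self.buf.append(pred)
        return Counter(self.buf).most_common(1)[0][0]

smoother = VoteSmoother(window=9)
for p in ["fist", "fist", "relax", "fist", "fist"]:  # noisy per-frame outputs
    print(smoother.update(p))   # stabilised command for the prosthetic hand
```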

15.
Considerable effort has been put toward the development of intelligent and natural interfaces between users and computer systems. In line with this endeavor, several modes of information (e.g., visual, audio, and pen), used either individually or in combination, have been proposed. The use of gestures to convey information is an important part of human communication. Hand gesture recognition is widely used in many applications, such as computer games, machinery control (e.g., cranes), and thorough mouse replacement. Computer recognition of hand gestures may provide a natural computer interface that allows people to point at or rotate a computer-aided design model by rotating their hands. Hand gestures can be classified into two categories: static and dynamic. The use of hand gestures as a natural interface motivates research on gesture taxonomy, representations, and recognition techniques. This paper summarizes surveys carried out in human-computer interaction (HCI) studies and focuses on the different application domains that use hand gestures for efficient interaction. This exploratory survey aims to provide a progress report on static and dynamic hand gesture recognition (i.e., gesture taxonomies, representations, and recognition techniques) in HCI and to identify future directions on this topic.

16.
A virtual performance system for bowed string instruments, taking the erhu as an example, is designed and implemented using a Kinect camera combined with augmented reality and gesture recognition. The real scene captured by the Kinect and the virtual instrument are fused and rendered into an augmented reality scene. Using the depth data from the Kinect and a Bayesian skin-colour model, the user's left hand is segmented out and redrawn onto the augmented image to form a new image, which resolves the virtual-real occlusion problem in the virtual performance scene. A 3D virtual hand-pose fitting method based on inverse kinematics and Markov models recognises the left-hand gestures during performance, which, combined with the motion state of the right hand, completes the virtual performance of the instrument.
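A minimal sketch of combining a Bayesian skin-colour model with a depth band to segment the hand, assuming histogram-based colour likelihoods over quantised CrCb values; the bin layout, prior, and depth thresholds are assumptions for illustration.

```python
import numpy as np

def skin_posterior(cr_cb, hist_skin, hist_nonskin, prior_skin=0.3):
    """Per-pixel P(skin | colour) via Bayes' rule from two colour histograms."""
    p_s = hist_skin[cr_cb[..., 0], cr_cb[..., 1]]
    p_n = hist_nonskin[cr_cb[..., 0], cr_cb[..., 1]]
    return p_s * prior_skin / (p_s * prior_skin + p_n * (1 - prior_skin) + 1e-9)

def segment_hand(depth, cr_cb, hist_skin, hist_nonskin, near=500, far=900):
    """Hand mask: skin-coloured pixels inside the expected depth band (mm)."""
    return (skin_posterior(cr_cb, hist_skin, hist_nonskin) > 0.5) \
        & (depth > near) & (depth < far)

# Toy usage with random histograms and a tiny frame.
hist_s, hist_n = np.random.rand(32, 32), np.random.rand(32, 32)
depth = np.random.randint(300, 1200, (4, 4))
crcb = np.random.randint(0, 32, (4, 4, 2))
print(segment_hand(depth, crcb, hist_s, hist_n))
```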

17.
Wearable projector and camera (PROCAM) interfaces, which provide a natural, intuitive and spatial experience, have been studied for many years. However, existing research on hand input for such systems has revolved around stable settings such as sitting or standing, which does not fully satisfy the interaction requirements of real life, especially when people are moving. Moreover, increasingly many mobile phone users use their phones while walking. As a mobile computing device, the wearable PROCAM system should allow for the fact that mobility can influence usability and user experience. This paper proposes a wearable PROCAM system with which the user can interact through finger gestures, such as the hover gesture and the pinch gesture, on projected surfaces. A lab-based evaluation was organized, comparing the two gestures (pinch and hover) in three situations (sitting, standing and walking) to find out: (1) How, and to what degree, does mobility influence different gesture inputs? Are there significant differences between gesture inputs in different settings? (2) What causes these differences? (3) What do people think about the configuration of such systems, and to what extent does manual focus impact such interactions? From qualitative and quantitative points of view, the main findings imply that mobility impacts gesture interaction to varying degrees. The pinch gesture is less influenced than the hover gesture in mobile settings. Both gestures were impacted more in the walking state than in the sitting and standing states by all four negative factors (lack of coordination, jittering hand effect, tired forearms and extra attention paid). Manual focus influenced mobile projection interaction. Based on the findings, implications for the design of a mobile projection interface with gestures are discussed.

18.
Gesture elicitation studies, a popular technique for collecting requirements and expectations by involving real users in the gesture design process, often suffer from gesture disagreement and legacy bias, and may not generate optimal gestures for a target system in practice. This paper reports a research project on user-defined gestures for interacting with immersive VR shopping applications. The main contribution of this work is a more practical method for deriving more reliable gestures than traditional gesture elicitation studies. We applied this method to a VR shopping application and obtained empirical evidence for the benefits of deriving two gestures in the a priori stage and selecting the top two gestures in the a posteriori stage of traditional elicitation studies for each referent. We hope that this research can help lay a theoretical foundation for freehand-gesture-based user interface design and can be generalised to all freehand-gesture-based applications.

19.
The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient “purposive” approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of human-computer interaction.

20.
A Qt-based human-computer interaction application is designed that decodes hand gestures from surface electromyography (sEMG) signals to control the dexterous hand of a space manipulator. The composition of the system for decoding gestures from sEMG and controlling a simulated dexterous hand is introduced, comprising two parts: the lower-level sEMG acquisition interface and the upper-level human-computer interaction software. The software's three functional modules are described in detail: receiving and displaying the 16-channel sEMG signals sent to the host by the neural interface, decoding gestures from the sEMG signals in real time, and controlling the simulated dexterous hand. Several key techniques in the software design are analysed, including the signal-slot mechanism, the combination of multithreading and multiprocessing, and UDP communication. Finally, a real-time experiment is designed that decodes three gestures from sEMG signals and controls the simulated dexterous hand; the gesture recognition rate exceeds 98% and the control latency is about 200 ms. The experimental results show that the interaction software runs stably and is fully functional, with application prospects in human-computer interaction for aerospace teleoperation.
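A minimal sketch of the signal-slot plus UDP pattern the paper describes, written with PyQt5 rather than C++ Qt: a QUdpSocket's readyRead signal drives a slot that unpacks 16-channel sEMG datagrams. The port number and float32 frame layout are assumptions.

```python
import struct
import sys

from PyQt5.QtCore import QCoreApplication
from PyQt5.QtNetwork import QUdpSocket

PORT = 9000                         # assumed acquisition-interface port
app = QCoreApplication(sys.argv)
sock = QUdpSocket()
sock.bind(PORT)

def on_ready_read():                # slot: runs whenever datagrams arrive
    while sock.hasPendingDatagrams():
        data, host, port = sock.readDatagram(sock.pendingDatagramSize())
        samples = struct.unpack(f"<{len(data) // 4}f", bytes(data))
        frame = samples[:16]        # one 16-channel sEMG sample (assumed layout)
        # ...pass `frame` to the real-time gesture decoder here...

sock.readyRead.connect(on_ready_read)   # Qt signal-slot connection
sys.exit(app.exec_())
```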
