Similar Literature
A total of 20 similar documents were found (search time: 31 ms).
1.
User interfaces of current 3D and virtual reality environments require highly interactive input/output (I/O) techniques and appropriate input devices, providing users with natural and intuitive ways of interacting. This paper presents an interaction model, several techniques, and ways of using novel input devices for 3D user interfaces. The interaction model is based on a tool-object syntax, where the interaction structure syntactically simulates an action sequence typical of everyday life: one picks up a tool and then uses it on an object. Instead of using a conventional mouse, actions are input through two novel input devices, a hand-input and a force-input device. The devices can be used simultaneously or in sequence, and the information they convey can be processed by the system either in combination or independently. The hand-input device allows the recognition of static poses and dynamic gestures performed by a user's hand. Hand gestures are used for selecting, or acting as, tools and for manipulating graphical objects. A system for teaching and recognizing dynamic gestures, and for providing graphical feedback for them, is described.
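A minimal sketch of the tool-object interaction sequence described above: one gesture first selects (or acts as) a tool, and a subsequent gesture applies it to a graphical object. The gesture names, tools, and object model are illustrative assumptions, not the paper's.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SceneObject:
    name: str
    color: str = "grey"
    scale: float = 1.0

@dataclass
class Session:
    tool: Optional[str] = None            # the tool currently "picked up"
    objects: dict = field(default_factory=dict)

    def on_gesture(self, gesture: str, target: Optional[str] = None):
        tools = {"pinch": "grab", "point": "paint", "spread": "scale_up"}
        if self.tool is None:             # phase 1: a gesture selects a tool
            self.tool = tools.get(gesture)
            return f"tool selected: {self.tool}"
        obj = self.objects.get(target)    # phase 2: use the tool on an object
        if obj is None:
            return "no target object"
        if self.tool == "paint":
            obj.color = "red"
        elif self.tool == "scale_up":
            obj.scale *= 1.2
        applied, self.tool = self.tool, None   # put the tool down afterwards
        return f"{applied} applied to {obj.name}"

s = Session(objects={"cube": SceneObject("cube")})
print(s.on_gesture("point"))              # -> tool selected: paint
print(s.on_gesture("point", "cube"))      # -> paint applied to cube
```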

2.
Haptic technologies are often used to improve access to the structural content of graphical user interfaces, thereby augmenting the interaction process for blind users. While haptic design guidelines offer valuable assistance when developing non-visual interfaces, the recommendations presented are often tailored to the feedback produced by one particular haptic input/output device. A blind user is therefore restricted to interacting with a device which may be unfamiliar to him or her, rather than selecting from the range of commercially available products. This paper reviews devices available on the first- and second-hand markets, and describes an exploratory study undertaken with 12 blindfolded sighted participants to determine the effectiveness of three devices for non-visual web interaction. The force-feedback devices chosen for the study ranged in the number of translations and rotations that the user was able to perform when interacting with them. Results indicated that the Novint Falcon could be used to target items faster in the first task presented, compared with the other devices. However, participants agreed that the force-feedback mouse was most comfortable to use when interacting with the interface. Findings have highlighted the benefits which low-cost haptic input/output devices can offer to the non-visual browsing process, and the changes which may need to be made to accommodate their deficiencies. The study has also highlighted the need for web designers to integrate appropriate haptic feedback on their web sites to cater for the strengths and weaknesses of various devices, in order to provide universally accessible sites and online applications.

3.
Pointing devices, essential input tools for the graphical user interface (GUI) of desktop computers, require precise motor control and dexterity to use. Haptic force-feedback devices provide the human operator with tactile cues, adding the sense of touch to existing visual and auditory interfaces. However, the performance enhancements, comfort, and possible musculoskeletal loading of using a force-feedback device in an office environment are unknown. Hypothesizing that the time to perform a task and the self-reported pain and discomfort of the task improve with the addition of force feedback, 26 people ranging in age from 22 to 44 years performed a point-and-click task 540 times with and without an attractive force field surrounding the desired target. The point-and-click movements were approximately 25% faster with the addition of force feedback (paired t-tests, p < 0.001). Perceived user discomfort and pain, as measured through a questionnaire, were also lower with the addition of force feedback (p < 0.001). However, this difference decreased as additional distracting force fields were added to the task environment, simulating a more realistic work situation. These results suggest that, for a given task, use of a force-feedback device improves performance and potentially reduces musculoskeletal loading during mouse use. Actual or potential applications of this research include human-computer interface design, specifically that of the pointing device extensively used for the graphical user interface.
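As a rough illustration of an attractive force field around a point-and-click target, the sketch below pulls the cursor toward the target centre with a spring-like force once it enters a capture radius. The radius, stiffness, and force cap are assumed values, not those used in the experiment.

```python
import numpy as np

def attractive_force(cursor, target, radius=60.0, stiffness=0.02, max_force=1.0):
    """Return a 2D force vector (arbitrary units) acting on the cursor."""
    cursor, target = np.asarray(cursor, float), np.asarray(target, float)
    offset = target - cursor
    distance = np.linalg.norm(offset)
    if distance == 0.0 or distance > radius:
        return np.zeros(2)                      # outside the field: no force
    force = stiffness * offset                  # Hooke-like pull toward the centre
    magnitude = np.linalg.norm(force)
    if magnitude > max_force:                   # clamp so the pull stays gentle
        force *= max_force / magnitude
    return force

print(attractive_force(cursor=(100, 100), target=(130, 120)))
```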

4.
This paper presents a novel interactive framework for 3D content-based search and retrieval using as the query model an object that is dynamically sketched by the user. In particular, two approaches are presented for generating the query model. The first approach uses 2D sketching and symbolic representation of the resulting gestures. The second utilizes non-linear least squares minimization to model, using superquadrics, the 3D point cloud generated by 3D tracking of the user's hands. In the context of the proposed framework, three interfaces were integrated into the sketch-based 3D search system: (a) an unobtrusive interface that utilizes pointing gesture recognition to allow the user to manipulate objects in 3D, (b) a haptic-VR interface composed of 3D data gloves and a force feedback device, and (c) a simple air-mouse. These interfaces were tested and comparative results were extracted according to usability and efficiency criteria.
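To make the second approach concrete, here is a rough sketch of fitting an axis-aligned superquadric to a 3D point cloud with non-linear least squares (SciPy). The parameterisation, bounds, and initial guess are assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def superquadric_residuals(params, pts):
    a1, a2, a3, e1, e2 = params
    x, y, z = np.abs(pts[:, 0]) / a1, np.abs(pts[:, 1]) / a2, np.abs(pts[:, 2]) / a3
    # Inside-outside function: F == 1 exactly on the superquadric surface.
    f = (x ** (2.0 / e2) + y ** (2.0 / e2)) ** (e2 / e1) + z ** (2.0 / e1)
    return f - 1.0

def fit_superquadric(pts):
    x0 = np.concatenate([pts.max(axis=0), [1.0, 1.0]])       # size + shape guess
    bounds = ([0.1] * 3 + [0.3, 0.3], [10.0] * 3 + [2.0, 2.0])
    return least_squares(superquadric_residuals, x0, args=(pts,), bounds=bounds).x

# Toy usage: noisy points on a unit sphere (a superquadric with e1 = e2 = 1).
rng = np.random.default_rng(0)
p = rng.normal(size=(500, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
print(fit_superquadric(p + 0.01 * rng.normal(size=p.shape)))
```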

5.
Considerable effort has been put toward the development of intelligent and natural interfaces between users and computer systems. In line with this endeavor, several modes of information (e.g., visual, audio, and pen) that are used either individually or in combination have been proposed. The use of gestures to convey information is an important part of human communication. Hand gesture recognition is widely used in many applications, such as computer games, machinery control (e.g., cranes), and as a thorough mouse replacement. Computer recognition of hand gestures may provide a natural computer interface that allows people to point at or rotate a computer-aided design model by rotating their hands. Hand gestures can be classified into two categories: static and dynamic. The use of hand gestures as a natural interface serves as a motivating force for research on gesture taxonomy, representations, and recognition techniques. This paper summarizes the surveys carried out in human-computer interaction (HCI) studies and focuses on different application domains that use hand gestures for efficient interaction. This exploratory survey aims to provide a progress report on static and dynamic hand gesture recognition (i.e., gesture taxonomies, representations, and recognition techniques) in HCI and to identify future directions on this topic.

6.
A tangible goal for 3D modeling
As we progress into applications that incorporate interactive, life-like 3D computer graphics, the mouse falls short as a user interface device, and it becomes obvious that 3D computer graphics could achieve much more with a more intuitive user interface mechanism. Haptic interfaces, or force feedback devices, promise to increase the quality of human-computer interaction by accommodating our sense of touch. The article discusses the application of touch feedback systems to 3D modelling. Achieving a high level of interactivity requires novel rendering techniques such as volume-based rendering algorithms.

7.
The article outlines the development of the Rock 'n' Scroll input method, which lets users gesture to scroll, select, and command an application without resorting to buttons, touchscreens, spoken commands, or other input methods. The Rock 'n' Scroll user interface shows how inertial sensors in handheld devices can provide functionality beyond "tilt-to-scroll". By also using them to recognize gestures, a significantly richer vocabulary for controlling the device becomes available, enough to implement an electronic photo album, pager, or other limited-function digital appliance without any additional input methods. The examples presented offer a glimpse at the freedom, for both device designers and users, inherent in devices that can be held in either hand, at any orientation, operated with mittens on, or not held in the hand at all.

8.
This paper proposes an online, bidirectionally adaptive pen-gesture interface framework. It addresses two problems of traditional pen-gesture interfaces: static gesture recognizers cannot support users' personalized input, and users face the burden of memorizing gestures. The framework adopts an online bidirectional adaptation strategy: on one hand, the system adapts to the user (it can support personalized input online); on the other hand, the user can learn from the system (the user can learn some of the pen gestures the system provides). The framework consists of five parts: (1) a bidirectionally adaptive pen-gesture input interpretation model; (2) a bidirectionally adaptive pen-gesture input interpretation process; (3) context priority definitions; (4) an error-correction and ambiguity-resolution interface; (5) an online pen-gesture query and help system. Guided by this framework, the authors developed a prototype system and carried out a comparative experimental evaluation. The results show that the framework offers a significant advantage in usability.

9.
Despite the existence of advanced functions in smartphones, most blind people are still using old-fashioned phones with familiar layouts and dependence on tactile buttons. Smartphones support accessibility features including vibration, speech and sound feedback, and screen readers. However, these features are only intended to provide feedback to user commands or input. It is still a challenge for blind people to discover functions on the screen and to input commands. Although voice commands are supported in smartphones, these commands are difficult for a system to recognize in noisy environments. At the same time, smartphones are integrated with sophisticated motion sensors, and motion gestures with device tilt have been gaining attention for eyes-free input. We believe that these motion gesture interactions offer more efficient access to smartphone functions for blind people. However, most blind people are not smartphone users and they are aware of neither the affordances available in smartphones nor the potential for interaction through motion gestures. To investigate the most usable gestures for blind people, we conducted a user-defined study with 13 blind participants. Using the gesture set and design heuristics from the user study, we implemented motion gesture-based interfaces with speech and vibration feedback for browsing phone books and making a call. We then conducted a second study to investigate the usability of the motion gesture interface and user experiences using the system. The findings indicated that motion gesture interfaces are more efficient than traditional button interfaces. Through the study results, we provided implications for designing smartphone interfaces.

10.
The context of mobility raises many issues for geospatial applications providing location-based services. Mobile device limitations, such as small user interface footprint and pen input whilst in motion, result in information overload on such devices and interfaces which are difficult to navigate and interact with. This has become a major issue as mobile GIS applications are now being used by a wide group of users, including novice users such as tourists, for whom it is essential to provide easy-to-use applications. Despite this, comparatively little research has been conducted to address the mobility problem. We are particularly concerned with the limited interaction techniques available to users of mobile GIS, which play a primary role in contributing to the complexity of using such an application whilst mobile. As such, our research focuses on multimodal interfaces as a means to present users with a wider choice of modalities for interacting with mobile GIS applications. Multimodal interaction is particularly advantageous in a mobile context, enabling users of location-based applications to choose the mode of input that best suits their current task and location. The focus of this article is a comprehensive user study which demonstrates the benefits of multimodal interfaces for mobile geospatial applications.

11.
IMMIView is an interactive system that relies on multiple modalities and multi-user interaction to support collaborative design review. It was designed to offer natural interaction in visualization setups such as large-scale displays, head-mounted displays or TabletPC computers. To support architectural design, our system provides content creation and manipulation, 3D scene navigation and annotations. Users can interact with the system using laser pointers, speech commands, body gestures and mobile devices. In this paper, we describe how we designed a system to answer architectural user requirements. In particular, our system takes advantage of multiple modalities to provide natural interaction for design review. We also propose a new graphical user interface adapted to architectural user tasks, such as navigation or annotation. The interface relies on a novel stroke-based interaction supported by simple laser pointers as input devices for large-scale displays. Furthermore, input devices such as speech and body tracking allow IMMIView to support multiple users. Moreover, they allow each user to select different modalities according to their preference and the adequacy of each modality for the task. We present a multi-modal fusion system developed to support multi-modal commands in a collaborative, co-located environment, i.e., with two or more users interacting at the same time on the same system. The multi-modal fusion system listens to inputs from all the IMMIView modules in order to model user actions and issue commands. The multiple modalities are fused by a simple rule-based sub-module developed in IMMIView and presented in this paper. A user evaluation of IMMIView is presented. The results show that users feel comfortable with the system and suggest that users prefer the multi-modal approach to more conventional interactions, such as mouse and menus, for the architectural tasks presented.
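As a toy illustration of rule-based multi-modal fusion of the kind described above, the sketch below fuses a spoken command with the most recent laser-pointer position when both arrive within a short time window. The commands, window length, and rules are assumptions, not IMMIView's actual fusion rules.

```python
import time

class RuleBasedFusion:
    def __init__(self, window=1.5):
        self.window = window               # seconds within which inputs fuse
        self.last_pointer = None           # (timestamp, (x, y))

    def on_pointer(self, x, y):
        self.last_pointer = (time.time(), (x, y))

    def on_speech(self, command):
        now = time.time()
        if command in ("annotate here", "create object") and self.last_pointer:
            t, pos = self.last_pointer
            if now - t <= self.window:     # rule: speech plus recent pointing
                return {"action": command, "position": pos}
        return {"action": command, "position": None}

fusion = RuleBasedFusion()
fusion.on_pointer(320, 240)
print(fusion.on_speech("annotate here"))   # fused with the pointed location
```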

12.
One goal of research in the area of human-machine interaction is to improve the ways a human user interacts with a computer through a multimedia interface. This interaction comprises not only text, graphical animation, stereo sound, and live video images, but also force and haptic feedback, which can give the user a more "real" feeling. The force-feedback joystick, a human interface device, is an input-output device: it not only tracks the user's physical manipulation input, but also provides realistic physical sensations of force coordinated with system output. As part of our research, we have developed a multimedia computer game that combines images, sounds, and force feedback. We focused on how to combine these media to allow the user to feel compliance, damping, and vibration effects through the force-feedback joystick. We conducted a series of human-subject experiments that incorporated different combinations of media, including a comparative study of the performance of 60 human users, aiming to answer the question: what are the effects of force feedback (and associated time delays) when used in combination with visual and auditory information as part of a multi-modal interface? It is hoped that these results can be utilized in the design of enhanced multimedia systems that incorporate force feedback.
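An illustrative sketch of the three haptic effects mentioned above rendered as a single output force for one joystick axis; the gains and the vibration frequency are assumed values, not taken from the study.

```python
import math

def joystick_force(x, v, t, k=2.0, b=0.5, amp=0.3, freq_hz=40.0):
    """x: axis displacement, v: axis velocity, t: time in seconds."""
    compliance = -k * x                              # spring pushes back to centre
    damping = -b * v                                 # resists fast motion
    vibration = amp * math.sin(2 * math.pi * freq_hz * t)
    return compliance + damping + vibration

print(joystick_force(x=0.2, v=-0.1, t=0.0))
```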

13.
We live in a society that depends on high-tech devices for assistance with everyday tasks, including everything from transportation to health care, communication, and entertainment. Tedious tactile input interfaces to these devices result in inefficient use of our time. Appropriate use of natural hand gestures will result in more efficient communication if the underlying meaning is understood. Overcoming the challenges of natural hand gesture understanding is vital to meet the needs of these increasingly pervasive devices in our everyday lives. This work presents a graph-based approach to understanding the meaning of hand gestures by associating dynamic hand gestures with known concepts and relevant knowledge. Conceptual-level processing is emphasized to robustly handle noise and ambiguity introduced during generation, data acquisition, and low-level recognition. A simple recognition stage is used to help relax scalability limitations of conventional stochastic language models. Experimental results show that this graph-based approach to hand gesture understanding is able to successfully understand the meaning of ambiguous sets of phrases consisting of three to five hand gestures. The presented approximate graph-matching technique for understanding human hand gestures supports practical and efficient communication of complex intent to the increasingly pervasive high-tech devices in our society.
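The sketch below only gestures at the idea of matching an observed gesture sequence against stored concept graphs; NetworkX's graph edit distance stands in for the paper's approximate graph-matching technique, and the gesture and concept names are invented.

```python
import networkx as nx

def gesture_graph(labels):
    """Chain graph whose nodes carry gesture labels, e.g. ['point', 'grab']."""
    g = nx.DiGraph()
    for i, lab in enumerate(labels):
        g.add_node(i, label=lab)
        if i:
            g.add_edge(i - 1, i)
    return g

def best_concept(observed, concepts):
    same = lambda a, b: a["label"] == b["label"]
    scores = {name: nx.graph_edit_distance(observed, g, node_match=same)
              for name, g in concepts.items()}
    return min(scores, key=scores.get), scores

concepts = {
    "move object": gesture_graph(["point", "grab", "drag"]),
    "delete object": gesture_graph(["point", "cross"]),
}
observed = gesture_graph(["point", "grab", "wave"])   # noisy third gesture
print(best_concept(observed, concepts))
```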

14.
The design of touchless user interfaces is gaining popularity in various contexts. Users can interact with electronic devices through such interfaces even when their hands are dirty or non-conductive. Also, users with partial physical disabilities can interact with electronic devices with the help of touchless interfaces. In this paper, we propose a Leap Motion controller-based methodology to facilitate rendering of 2D and 3D shapes on display devices. The proposed method tracks finger movements while users perform natural gestures within the field of view of the motion sensor. The trajectories are then analyzed to extract extended Npen++ features in 3D. These features capture finger movements during the gestures and are fed to a unidirectional left-to-right Hidden Markov Model (HMM) for training. A one-to-one mapping between gestures and shapes is proposed. Finally, the shapes corresponding to these gestures are rendered on the display using a typical MuPad-supported interface. We have created a dataset of 5400 samples recorded by 10 volunteers. Our dataset contains 18 geometric and 18 non-geometric shapes such as “circle”, “rectangle”, “flower”, “cone”, “sphere”, etc. The proposed method has achieved 92.87% accuracy using a 5-fold cross-validation scheme. Experiments reveal that the extended 3D features perform better than existing 3D features when applied to shape representation and classification. The method can be used for developing diverse HCI applications suitable for smart display devices.
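A minimal sketch of training one left-to-right Gaussian HMM per shape class on per-frame trajectory feature vectors, using hmmlearn as a stand-in for the paper's HMM implementation; the number of states, transition structure, and feature dimensionality are assumptions.

```python
import numpy as np
from hmmlearn import hmm

def left_to_right_hmm(n_states=5):
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=25, init_params="mc", params="mct")
    model.startprob_ = np.zeros(n_states); model.startprob_[0] = 1.0
    trans = np.zeros((n_states, n_states))
    for i in range(n_states):                 # stay in state i or move to i+1
        trans[i, i] = 0.5
        trans[i, min(i + 1, n_states - 1)] += 0.5
    model.transmat_ = trans
    return model

def train_class(sequences, n_states=5):
    """sequences: list of (T_i, d) arrays of per-frame trajectory features."""
    model = left_to_right_hmm(n_states)
    model.fit(np.vstack(sequences), lengths=[len(s) for s in sequences])
    return model

def classify(sample, models):
    return max(models, key=lambda name: models[name].score(sample))

# Toy usage with random stand-in features (real features come from the tracker).
rng = np.random.default_rng(0)
data = {"circle": [rng.normal(size=(30, 4)) for _ in range(5)],
        "square": [rng.normal(loc=2.0, size=(30, 4)) for _ in range(5)]}
models = {name: train_class(seqs) for name, seqs in data.items()}
print(classify(data["circle"][0], models))    # -> 'circle' on this toy data
```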

15.
Exertion games (exergames) pose interesting challenges in terms of user interaction techniques. Players are commonly unable to use traditional input devices such as mouse and keyboard, given the body movement requirements of this type of videogame. In this work we propose a hand gesture interface to direct actions in a target-shooting exertion game that is played while exercising on an ergo-bike. A vision-based hand gesture interface for interacting with objects in a 3D videogame is designed and implemented. The system is capable of issuing game commands to any computer game that normally responds to mouse and keyboard, without modifying the underlying source code of the game. The vision system combines bag-of-features and a Support Vector Machine (SVM) to achieve user-independent, real-time hand gesture recognition. In particular, a Finite State Machine (FSM) is used to build the grammar that generates gesture commands for the game. We carried out a user study to gather feedback from participants, and our preliminary results show a high level of interest from users in this multimedia system, which implements a natural way of interacting. Despite some concerns about comfort, users had a positive experience with our exertion game and expressed an intention to use a system like this in their daily lives.
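A compact sketch of the recognition-plus-grammar idea described above: an SVM classifies bag-of-features histograms into hand poses, and a small finite state machine turns pose sequences into game commands. The pose names, commands, FSM rules, and the random stand-in training data are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# --- recognition: SVM over bag-of-features histograms (one row per frame) ---
train_histograms = np.random.rand(200, 50)                 # stand-in BoF data
train_labels = np.random.choice(["open", "fist", "point"], 200)
classifier = SVC(kernel="rbf", gamma="scale").fit(train_histograms, train_labels)

# --- grammar: finite state machine from pose sequences to commands ----------
TRANSITIONS = {
    ("idle", "point"): ("aiming", None),
    ("aiming", "fist"): ("idle", "SHOOT"),
    ("aiming", "open"): ("idle", None),      # lower the hand: cancel aiming
}

def run_fsm(pose_stream):
    state, commands = "idle", []
    for pose in pose_stream:
        state, cmd = TRANSITIONS.get((state, pose), (state, None))
        if cmd:
            commands.append(cmd)
    return commands

frame = np.random.rand(1, 50)                              # one incoming frame
poses = [classifier.predict(frame)[0], "point", "fist"]
print(run_fsm(poses))
```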

16.
This article proposes a 3-dimensional (3D) vision-based ambient user interface as an interaction metaphor that exploits a user's personal space and the dynamic gestures performed within it. In human-computer interaction, to provide natural interaction with a system, a user interface should not be a bulky or complicated device. In this regard, the proposed ambient user interface utilizes an invisible personal space to remove the need for cumbersome devices; the invisible personal space is virtually augmented by exploiting 3D vision techniques. For natural interaction with the user's dynamic gestures, the user of interest is extracted from the image sequences by the proposed user segmentation method. This method can retrieve 3D information from the segmented user image through 3D vision techniques and a multiview camera. With the retrieved 3D information of the user, a set of 3D boxes (SpaceSensor) can be constructed and augmented around the user; the user can then interact with the system by touching the augmented SpaceSensor. When tracking the user's dynamic gestures, the computational complexity of SpaceSensor is lower than that of conventional 2-dimensional vision-based gesture tracking techniques, because the touched positions of SpaceSensor are tracked. According to the experimental results, the proposed ambient user interface can be applied to various systems that require the user's dynamic gestures in real time for interaction in both real and virtual environments.
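A minimal sketch of the SpaceSensor idea: axis-aligned 3D boxes are placed around the segmented user, and a "touch" is reported whenever the tracked hand position falls inside one of them. The box placement, sizes, and offsets here are assumptions.

```python
import numpy as np

def build_space_sensor(user_center, size=0.3, offset=0.6):
    """Six boxes (left/right/front/back/up/down) around the user's centre."""
    c = np.asarray(user_center, float)
    boxes = {}
    for name, d in {"left": (-1, 0, 0), "right": (1, 0, 0), "front": (0, 0, 1),
                    "back": (0, 0, -1), "up": (0, 1, 0), "down": (0, -1, 0)}.items():
        centre = c + offset * np.asarray(d, float)
        boxes[name] = (centre - size / 2, centre + size / 2)   # (min, max) corners
    return boxes

def touched(boxes, hand_pos):
    p = np.asarray(hand_pos, float)
    return [name for name, (lo, hi) in boxes.items()
            if np.all(p >= lo) and np.all(p <= hi)]

sensor = build_space_sensor(user_center=(0.0, 1.2, 2.0))
print(touched(sensor, hand_pos=(0.6, 1.2, 2.0)))     # -> ['right']
```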

17.
18.
The proliferation of accelerometers on consumer electronics has brought an opportunity for interaction based on gestures. We present uWave, an efficient recognition algorithm for such interaction using a single three-axis accelerometer. uWave requires a single training sample for each gesture pattern and allows users to employ personalized gestures. We evaluate uWave using a large gesture library with over 4000 samples for eight gesture patterns collected from eight users over one month. uWave achieves 98.6% accuracy, competitive with statistical methods that require significantly more training samples. We also present applications of uWave in gesture-based user authentication and interaction with 3D mobile user interfaces. In particular, we report a series of user studies that evaluates the feasibility and usability of lightweight user authentication. Our evaluation shows both the strength and limitations of gesture-based user authentication.
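The published uWave algorithm matches each incoming trace against a single stored template per gesture using dynamic time warping; the sketch below shows that single-template idea with a plain DTW distance, omitting the quantisation and windowing details of the actual algorithm.

```python
import numpy as np

def dtw_distance(a, b):
    """a, b: (T, 3) arrays of three-axis accelerometer samples."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)                 # length-normalised distance

def recognise(sample, templates):
    """templates: dict mapping gesture name -> one recorded (T, 3) trace."""
    return min(templates, key=lambda g: dtw_distance(sample, templates[g]))

rng = np.random.default_rng(1)
templates = {"circle": rng.normal(size=(40, 3)), "shake": rng.normal(size=(30, 3))}
print(recognise(templates["circle"] + 0.05 * rng.normal(size=(40, 3)), templates))
```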

19.
One of the challenges of teleoperation is the recognition of a user's intended commands, particularly when operating highly dynamic systems such as drones. In this paper, we present a solution to this problem by developing a generalized scheme relying on a Convolutional Neural Network (CNN) that is trained to recognize a user's intended commands, directed through a haptic device. Our proposed method allows the interface to be personalized for each user by pre-training the CNN on input data specific to the intended end user. Experiments were conducted using two haptic devices, and classification results demonstrate that the proposed system outperforms geometric-based approaches by nearly 12%. Furthermore, our system also lends itself to other human–machine interfaces where intention recognition is required.
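A minimal PyTorch sketch of a 1D CNN that maps a short window of haptic-device axis readings to a command class; the window length, channel count, and layer sizes are assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class IntentCNN(nn.Module):
    def __init__(self, n_channels=3, n_commands=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                  # window-length independent
        )
        self.classifier = nn.Linear(32, n_commands)

    def forward(self, x):                             # x: (batch, channels, window)
        return self.classifier(self.features(x).squeeze(-1))

model = IntentCNN()
logits = model(torch.randn(8, 3, 64))                 # a batch of 8 input windows
print(logits.shape)                                   # -> torch.Size([8, 4])
```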

20.
Virtual environments provide a whole new way of viewing and manipulating 3D data. Current technology moves the images out of desktop monitors and into the space immediately surrounding the user. Users can literally put their hands on the virtual objects. Unfortunately, techniques for interacting with such environments have yet to mature. Gloves and sensor-based trackers are unwieldy, constraining and uncomfortable to use. A natural, more intuitive method of interaction would be to allow the user to grasp objects with their hands and manipulate them as if they were real objects. We are investigating the use of computer vision in implementing a natural interface based on hand gestures. A framework for a gesture recognition system is introduced, along with results of experiments in colour segmentation, feature extraction and template matching for finger and hand tracking, and simple hand pose recognition. An implementation of a gesture interface for navigation and object manipulation in virtual environments is presented.
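A small OpenCV sketch of the colour-segmentation step for hand tracking: threshold the frame in HSV space with an assumed skin-tone range, clean the mask, and keep the largest contour as the hand candidate. The thresholds and morphology parameters are assumptions, not the paper's values.

```python
import cv2
import numpy as np

def segment_hand(frame_bgr, lower=(0, 40, 60), upper=(25, 180, 255)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)    # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    hand = max(contours, key=cv2.contourArea)                 # largest skin blob
    return mask, cv2.boundingRect(hand)                       # (x, y, w, h)

frame = np.zeros((240, 320, 3), np.uint8)     # stand-in for a camera frame
print(segment_hand(frame)[1])                  # no skin pixels -> None
```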
