Similar literature
20 similar documents found.
1.
We have developed a gesture input system that provides a common interaction technique across mobile, wearable and ubiquitous computing devices of diverse form factors. In this paper, we combine our gestural input technique with speech output and test whether or not the absence of a visual display impairs usability in this kind of multimodal interaction. This is of particular relevance to mobile, wearable and ubiquitous systems where visual displays may be restricted or unavailable. We conducted the evaluation using a prototype for a system combining gesture input and speech output to provide information to patients in a hospital Accident and Emergency Department. A group of participants was instructed to access various services using gestural inputs. The services were delivered by automated speech output. Throughout their tasks, these participants could see a visual display on which a GUI presented the available services and their corresponding gestures. Another group of participants performed the same tasks but without this visual display. It was predicted that the participants without the visual display would make more incorrect gestures and take longer to perform correct gestures than the participants with the visual display. We found no significant difference in the number of incorrect gestures made. We also found that participants with the visual display took longer than participants without it. It was suggested that for a small set of semantically distinct services with memorable and distinct gestures, the absence of a GUI visual display does not impair the usability of a system with gesture input and speech output.

2.
The emergence of small handheld devices such as tablets and smartphones, often with touch-sensitive surfaces as their only input modality, has spurred a growing interest in the subject of gestures for human–computer interaction (HCI). It has previously been shown that eye movements can be consciously controlled by humans to the extent of performing sequences of predefined movement patterns, or “gaze gestures”, that can be used for HCI purposes on desktop computers. Gaze gestures can be tracked noninvasively using a video-based eye-tracking system. We propose here that gaze gestures can also be an effective input paradigm for interacting with handheld electronic devices. We show through a pilot user study how gaze gestures can be used to interact with a smartphone, how they are easily assimilated by potential users, and how the Needleman-Wunsch algorithm can effectively discriminate intentional gaze gestures from the typical gaze activity performed during standard interaction with a small smartphone screen. Reliable gaze–smartphone interaction is thus possible, with accuracy rates above 80–90% (depending on whether dwell-based or dwell-free gaze gestures are used), negligible false positive rates, and completion times below 1–1.5 s per gesture. These encouraging results and the low-cost eye-tracking equipment used suggest the potential of this new HCI modality for interaction with small-screen handheld devices.
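The abstract above names the Needleman-Wunsch algorithm as the discriminator between intentional gaze gestures and ordinary gaze activity. Below is a minimal sketch of that idea in Python, assuming gaze movements have already been quantized into up/down/left/right stroke symbols; the scoring parameters, templates, and acceptance threshold are illustrative, not the paper's.

```python
# Needleman-Wunsch global alignment between an observed gaze-stroke sequence
# and a gesture template. Symbols, scores and threshold are illustrative.

def needleman_wunsch(observed, template, match=2, mismatch=-1, gap=-1):
    n, m = len(observed), len(template)
    # dp[i][j] = best alignment score of observed[:i] vs template[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if observed[i - 1] == template[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # align the two symbols
                           dp[i - 1][j] + gap,     # gap in the template
                           dp[i][j - 1] + gap)     # gap in the observation
    return dp[n][m]

def classify(observed, templates, threshold=0.6):
    """Return the best-matching gesture, or None for ordinary gaze activity."""
    best_name, best_score = None, float("-inf")
    for name, template in templates.items():
        # Normalize by the score a perfect match of the template would get.
        score = needleman_wunsch(observed, template) / (2 * len(template))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Example: strokes quantized to Up/Down/Left/Right symbols.
templates = {"check": "DRU", "square": "RDLU"}
print(classify("RDLU", templates))      # -> "square"
print(classify("ULDRUDLR", templates))  # -> None (typical gaze noise)
```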

3.
The design of touchless user interfaces is gaining popularity in various contexts. Users can interact with electronic devices through such interfaces even when their hands are dirty or non-conductive, and users with partial physical disabilities can also interact with electronic devices with the help of touchless interfaces. In this paper, we propose a Leap Motion controller-based methodology to facilitate rendering of 2D and 3D shapes on display devices. The proposed method tracks finger movements while users perform natural gestures within the field of view of the motion sensor. The trajectories are then analyzed to extract extended Npen++ features in 3D. These features capture finger movements during the gestures and are fed to a unidirectional left-to-right Hidden Markov Model (HMM) for training. A one-to-one mapping between gestures and shapes is proposed. Finally, the shapes corresponding to these gestures are rendered on the display using a typical MuPad-supported interface. We have created a dataset of 5400 samples recorded by 10 volunteers. Our dataset contains 18 geometric and 18 non-geometric shapes such as “circle”, “rectangle”, “flower”, “cone”, and “sphere”. The proposed method achieves 92.87% accuracy using a 5-fold cross-validation scheme. Experiments reveal that the extended 3D features outperform existing 3D features for shape representation and classification. The method can be used to develop diverse HCI applications for smart display devices.
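As a companion to the abstract above, here is a minimal sketch of left-to-right HMM-based gesture classification, assuming the hmmlearn library. The number of states, the topology details, and the omitted Npen++ feature extraction are placeholders rather than the authors' actual configuration.

```python
# One left-to-right GaussianHMM per shape class; the class whose model gives
# the highest log-likelihood for a trajectory's feature sequence wins.
# Extraction of the extended 3D Npen++ features is omitted here.
import numpy as np
from hmmlearn import hmm  # assumed dependency: pip install hmmlearn

def left_to_right_hmm(n_states=5, n_iter=20):
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=n_iter, init_params="mc", params="mc")
    # Fix a left-to-right (Bakis) topology: start in state 0, allow only
    # self-transitions and transitions to the next state.
    startprob = np.zeros(n_states)
    startprob[0] = 1.0
    transmat = np.zeros((n_states, n_states))
    for i in range(n_states):
        transmat[i, i] = 0.5
        transmat[i, min(i + 1, n_states - 1)] += 0.5
    model.startprob_, model.transmat_ = startprob, transmat
    return model

def train(models, training_data):
    # training_data: {class_name: list of (T_i x D) feature sequences}
    for name, sequences in training_data.items():
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        models[name].fit(X, lengths)

def classify(models, sequence):
    # Pick the class whose model assigns the highest log-likelihood.
    return max(models, key=lambda name: models[name].score(sequence))
```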

4.
Usability evaluation is an important step in the software and product design cycle. A number of methodologies, such as the talk-aloud protocol and cognitive walkthrough, can be employed in usability evaluations. However, many of these methods are not designed to include users with disabilities. Legislation and good design practice should provide incentives for researchers in this field to consider more inclusive methodologies. We carried out two studies to explore the viability of collecting gestural protocols from deaf sign language users using the think-aloud protocol (TAP) method. The results of our studies support the viability of gestural TAP as a usability evaluation method and provide additional evidence that the cognitive systems used to produce successful verbal protocols in hearing people appear to work similarly in people who speak with gestures. The challenges in adapting the TAP method for gestural language relate to how the data were collected, not to the data themselves or their analysis.

5.
Freehand gestural interaction, in which the user's hands move in mid-air to provide input, has long interested researchers, but freehand menu selection interfaces have so far been under-investigated. Freehand menu selection is inherently difficult, especially as menu breadth (i.e., the number of items) increases, largely because hands moving in free space cannot achieve precision as high as physical input devices such as the mouse and stylus. We have designed a novel menu selection interface called the rapMenu (Ni et al., 2008), which is controlled by wrist tilt and multiple pinch gestures and takes advantage of multiple discrete gesture inputs to reduce the required precision of the user's hand movements. In this article, we first review the visual design and behavior of the rapMenu technique, as well as related design issues and its potential advantages. In the second part, we present two studies of the rapMenu that further investigate the strengths and limitations of the design principle. In the first study, we compared the rapMenu to the extensively studied tilt menu technique (Rahman et al., 2009). Our results revealed that the rapMenu outperforms the tilt menu as menu breadth increases. In the second study, we investigated how the rapMenu affords eyes-free selection and supports users' transition from novice to expert. We found that within 10 min of practice, eyes-free selection with the rapMenu achieves speed and accuracy competitive with the visual rapMenu and the tilt menu. Finally, we discuss design variations that use other axes of wrist movement and adopt alternative auditory feedback.
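To illustrate the design principle described above (two discrete inputs replacing one precise pointing action), here is a hypothetical sketch of how a wrist-tilt group selection combined with a thumb-to-finger pinch could index a menu; the actual rapMenu layout, angular ranges, and grouping in Ni et al. (2008) may differ.

```python
# Illustrative mapping from (wrist tilt, pinch finger) to a menu item index.
# This only shows how two discrete inputs can index a menu of breadth n_items.
import math

FINGERS = ("index", "middle", "ring", "pinky")  # thumb-to-finger pinches

def rapmenu_select(tilt_deg, pinch_finger, n_items):
    """Map a wrist tilt angle and a pinch gesture to one of n_items."""
    n_groups = math.ceil(n_items / len(FINGERS))
    # Quantize the tilt range into one coarse slot per group of four items,
    # so precise hand positioning is not required.
    group = min(int((tilt_deg % 360) / (360 / n_groups)), n_groups - 1)
    item = group * len(FINGERS) + FINGERS.index(pinch_finger)
    return item if item < n_items else None

print(rapmenu_select(100.0, "ring", n_items=16))  # -> 6 (one of items 0..15)
```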

6.
In the user interfaces of modern systems, users get the impression of directly interacting with application objects. In 3D-based user interfaces, novel input devices, such as hand and force input devices, are being introduced. They aim at providing natural ways of interaction. The use of a hand input device allows the recognition of static poses and dynamic gestures performed by a user's hand. This paper describes the use of a hand input device for interacting with a 3D graphical application. A dynamic gesture language, which allows users to teach the system hand gestures, is presented. Furthermore, a user interface integrating the recognition of these gestures and providing feedback for them is introduced. Particular attention has been paid to implementing a tool for easy specification of dynamic gestures and to strategies for providing graphical feedback on users' interactions. To demonstrate that the introduced 3D user interface features, and the way the system presents graphical feedback, are not restricted to a hand input device, a force input device has also been integrated into the user interface.

7.
8.
This paper proposes an organization of presentation and control that implements a flexible audio management system we call “audio windows”. The result is a new user interface integrating an enhanced spatial sound presentation system, an audio emphasis system, and a gestural input recognition system. We have implemented these ideas in a modest prototype, also described here, designed as an audio server appropriate for a teleconferencing system. Our system combines a gestural front end (currently based on a DataGlove, but whose concepts are appropriate for other devices as well) with an enhanced spatial sound system, a digital signal processing separation of multiple sound sources, augmented with “filtears”: audio feedback cues that convey added information without distraction or loss of intelligibility. Our prototype employs a manual front end (requiring no keyboard or mouse) driving an auditory back end (requiring no CRT or visual display).

9.
User interfaces of current 3D and virtual reality environments require highly interactive input/output (I/O) techniques and appropriate input devices, providing users with natural and intuitive ways of interacting. This paper presents an interaction model, some techniques, and some ways of using novel input devices for 3D user interfaces. The interaction model is based on a tool-object syntax, where the interaction structure syntactically simulates an action sequence typical of everyday life: one picks up a tool and then uses it on an object. Instead of using a conventional mouse, actions are input through two novel input devices, a hand-input and a force-input device. The devices can be used simultaneously or in sequence, and the information they convey can be processed in a combined or an independent way by the system. The use of a hand-input device allows the recognition of static poses and dynamic gestures performed by a user's hand. Hand gestures are used for selecting, or acting as, tools and for manipulating graphical objects. A system for teaching and recognizing dynamic gestures, and for providing graphical feedback for them, is described.

10.
Multi-touch, which has been heralded as a revolution in human–computer interaction, provides features such as gestural interaction, tangible interfaces, pen-based computing, and interface customization, features embraced by an increasingly tech-savvy public. However, multi-touch platforms have not been adopted as “everyday” computer interaction devices that support important text-entry-intensive applications such as word processing and spreadsheets. In this paper, we present two studies that begin to explore user performance and experience when entering text using multi-touch input. The first study establishes a benchmark for text entry performance on a multi-touch platform across input modes, comparing uppercase-only with mixed-case text, single-touch with multi-touch input, and copy with memorization tasks. The second study adds mouse-style interaction for formatting rich text to simulate a word processing task using multi-touch input. As expected, our results show that users do not perform as well, in terms of text entry efficiency and speed, with a multi-touch interface as with a traditional keyboard. Not as expected was the result that the degradation in performance was significantly smaller for memorization than for copy tasks, and consequently willingness to use multi-touch was substantially higher (50% versus 26%) in the former case. Our results, which include participants' preferred input styles, also provide a baseline for further research into techniques for improving text entry performance on multi-touch systems.
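For context on how such text-entry benchmarks are typically quantified (the abstract does not state the paper's exact measures), here is a sketch of the conventional words-per-minute and minimum-string-distance error-rate calculations commonly used in text entry studies.

```python
# Standard text-entry metrics: words per minute and MSD error rate.

def levenshtein(a, b):
    """Minimum string distance between presented text a and transcribed text b."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[len(b)]

def wpm(transcribed, seconds):
    # Conventional definition: one "word" = 5 characters, first char untimed.
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

def msd_error_rate(presented, transcribed):
    return levenshtein(presented, transcribed) / max(len(presented), len(transcribed))

print(wpm("the quick brown fox", 30.0))          # ~7.2 WPM
print(msd_error_rate("the quick", "teh quick"))  # ~0.22
```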

11.
Full-body gestures provide an alternative input to video games that is more natural and intuitive. However, full-body game gestures designed by developers may not always be the most suitable gestures available. A key challenge in full-body game gestural interfaces lies in designing gestures that accommodate the intensive, dynamic nature of video games; for example, several gestures may need to be executed simultaneously using different body parts. This paper investigates suitable simultaneous full-body game gestures, with the aim of accommodating high interactivity during intense gameplay. Three user studies were conducted: first, to determine user preferences, a user-elicitation study was conducted in which participants were asked to define gestures for common game actions/commands; second, to identify suitable and alternative body parts, participants were asked to rate the suitability of each body part (one and two hands, one and two legs, head, eyes, and torso) for common game actions/commands; third, to explore the consensus on suitable simultaneous gestures, we proposed a novel choice-based elicitation approach in which participants were asked to mix and match gestures from a predefined list to produce their preferred simultaneous gestures. Our key findings include (i) user preferences for game gestures, (ii) a set of suitable and alternative body parts for common game actions/commands, (iii) a consensus set of simultaneous full-body game gestures that assist interaction in different interactive game situations, and (iv) generalized design guidelines for future full-body game interfaces. These results can assist designers and practitioners in developing more effective full-body game gestural interfaces or other highly interactive full-body gestural interfaces.
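Gesture-elicitation studies like the one described above usually summarize consensus with an agreement rate. The sketch below computes the commonly used Vatavu–Wobbrock agreement rate for a single referent; the paper's exact analysis is not specified in the abstract, so this is a general illustration rather than the authors' method.

```python
# Agreement rate for one referent, as commonly computed in elicitation studies.
from collections import Counter

def agreement_rate(proposals):
    """proposals: list of gesture labels elicited from participants for one referent."""
    n = len(proposals)
    if n < 2:
        return 1.0
    groups = Counter(proposals).values()  # sizes of identical-proposal groups
    return sum(k * (k - 1) for k in groups) / (n * (n - 1))

# Example: 10 participants propose gestures for a "jump" command.
print(agreement_rate(["swipe up"] * 6 + ["raise leg"] * 3 + ["nod"]))  # 0.4
```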

12.
The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient “purposive” approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of human-computer interaction.

13.
In order for robots to effectively understand natural language commands, they must be able to acquire meaning representations that can be mapped to perceptual features in the external world. Previous approaches to learning these grounded meaning representations require detailed annotations at training time. In this paper, we present an approach to grounded language acquisition that jointly learns a policy for following natural language commands such as “Pick up the tire pallet,” as well as a mapping between specific phrases in the language and aspects of the external world, for example the mapping between the words “the tire pallet” and a specific object in the environment. Our approach assumes a parametric form for the policy that the robot uses to choose actions in response to a natural language command, factored according to the structure of the language. We use a gradient method to optimize the model parameters. Our evaluation demonstrates the effectiveness of the model on a corpus of commands given to a robotic forklift by untrained users.
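The abstract describes a parametric policy factored by the structure of the language and optimized with a gradient method. The sketch below shows the general pattern with a toy log-linear policy trained by gradient ascent on demonstrated choices; the features, candidates, and factorization are illustrative and not the authors' model.

```python
# Toy log-linear policy over candidate action/grounding choices, trained by
# gradient ascent on demonstrated commands.
import numpy as np

def scores(theta, feature_matrix):
    """feature_matrix: one feature row per candidate action/grounding."""
    return feature_matrix @ theta

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gradient_step(theta, feature_matrix, demonstrated_index, lr=0.1):
    """One step of gradient ascent on log P(demonstrated choice | command)."""
    p = softmax(scores(theta, feature_matrix))
    expected = p @ feature_matrix            # model's expected feature vector
    observed = feature_matrix[demonstrated_index]
    return theta + lr * (observed - expected)

# Tiny example: 3 candidate groundings described by 2 features each.
theta = np.zeros(2)
candidates = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
for _ in range(100):
    theta = gradient_step(theta, candidates, demonstrated_index=0)
print(softmax(scores(theta, candidates)))  # mass shifts toward candidate 0
```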

14.
Modern interaction techniques like non-intrusive gestures provide means for interacting with distant displays and smart objects without touching them. We were interested in the effects of feedback modality (auditory, haptic or visual), and its combined effect with input modality, on user performance and experience in such interactions. We therefore conducted two exploratory experiments in which numbers were entered, either by gaze or by hand, using gestures composed of four stroke elements (up, down, left and right). In Experiment 1, simple feedback was given on each stroke during the motor action of gesturing: an audible click, a haptic tap or a visual flash. In Experiment 2, semantic feedback was given on the final gesture: the executed number was spoken, coded by haptic taps or shown as text. With simultaneous simple feedback in Experiment 1, hand input was slower but more accurate than gaze input. With semantic feedback in Experiment 2, however, hand input was only slower. Effects of feedback modality were of minor importance; nevertheless, semantic haptic feedback in Experiment 2 proved to be of little use, at least without extensive training. Error patterns differed between the two input modes, but again did not depend on feedback modality. Taken together, the results show that in designing gestural systems, choosing a feedback modality can be given a low priority; it can be chosen according to the task, context and user preferences.
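To make the experimental setup above concrete, here is a hypothetical sketch of a four-stroke number gesture with the two feedback schemes compared in the experiments (per-stroke "simple" feedback versus end-of-gesture "semantic" feedback); the stroke-to-digit mapping is invented for illustration.

```python
# Illustrative stroke-sequence decoder with per-stroke ("simple") and
# end-of-gesture ("semantic") feedback hooks.
STROKES = {"U", "D", "L", "R"}
GESTURE_TO_DIGIT = {            # hypothetical mapping, for illustration only
    ("U", "R", "D", "L"): 0,
    ("D", "D", "R", "U"): 1,
    ("L", "U", "R", "D"): 2,
}

def simple_feedback(stroke):
    print(f"click ({stroke})")  # stand-in for an audio/haptic/visual pulse

def semantic_feedback(digit):
    print(f"entered: {digit}" if digit is not None else "unrecognized gesture")

def enter_digit(strokes):
    assert len(strokes) == 4 and all(s in STROKES for s in strokes)
    for s in strokes:
        simple_feedback(s)      # Experiment 1: feedback on each stroke
    digit = GESTURE_TO_DIGIT.get(tuple(strokes))
    semantic_feedback(digit)    # Experiment 2: feedback on the final gesture
    return digit

enter_digit(["U", "R", "D", "L"])  # -> 0
```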

15.
Jot is a novel research interface for virtual reality modeling. The system seamlessly integrates and applies a variety of virtual and physical tools, each customized for specific tasks. The Jot interface not only moves smoothly from one tool to another but also physically and cognitively matches individual tools to the tasks they perform. In particular, we exploit the notion that gestural interaction is, in many cases, more direct than traditional widget-based interaction. We also respect the time-tested observation that some operations, even conceptually three-dimensional ones, are better performed with 1D or 2D input devices, whereas other operations are more naturally performed using stereoscopic views, higher-DOF input devices, or both. Ultimately we strive for a 3D modeling system with an interface as transparent as the interaction afforded by a pencil and a sheet of paper. For example, the system should facilitate the tasks of drawing and erasing and provide an easy transition between the two. Jot emerged from our previous work on a mouse-based system, called Sketch, for gesturally creating imprecise 3D models. Jot extends Sketch's functionality to a wider spectrum of modeling, from concept design to detailed feature-based parametric parts. Jot also extends the interaction in Sketch to better support individual modeling tasks. We extended Sketch's gestural framework to integrate interface components ranging from traditional desktop interface widgets to context-sensitive gestures to direct manipulation techniques originally designed for immersive VR.

16.
17.
This article describes an exploratory study that examined the use of Brain–Computer Interface (BCI) and gestural inputs generated from a BCI headset as a novel potential alternative to a 4-digit PIN code for authentication. Unlike other input modalities, many of these tokens (e.g., “push,” “lift,” “excitement”) can overcome some of the security vulnerabilities associated with PIN authentication (e.g., observation by third parties). Participants engaged in a controlled study in which they performed five 4-token authentication tasks on a simulated authentication screen. The percentage of completed BCI and gestural input tasks, as well as input time and accuracy, was compared to that of the 4-digit PIN task. The results showed that while authentication using a BCI headset is currently not as complete, fast, or accurate as a 4-digit PIN code, users felt that such a system would offer greater security than PIN-based authentication. In addition, mental commands, which might be perceived as the most secure from the standpoint of inconspicuous detection, were found to offer disappointing results in terms of both completion percentage and completion time.

18.
19.
Multi-touch interaction, in particular multi-touch gesture interaction, is widely believed to give a more natural interaction style. We investigated the utility of multi-touch interaction in the safety-critical domain of maritime dynamic positioning (DP) vessels. We conducted initial paper prototyping with domain experts to gain an insight into natural gestures; we then conducted observational studies aboard a DP vessel during operational duties and two rounds of formal evaluation of prototypes, the second on a motion-platform ship simulator. Despite following a careful user-centred design process, the final results show that traditional touch-screen button and menu interaction was quicker and less error-prone than gestures. Furthermore, the moving environment accentuated this difference, and we observed initial-use problems and handedness asymmetries for some multi-touch gestures. On the positive side, our results showed that users were able to suspend gestural interaction more naturally, thus improving situational awareness.

20.
Pointing gestures are our natural way of referencing distant objects and are thus widely used in HCI for controlling devices. Due to the inherent inaccuracies of current pointing models, most systems using pointing gestures so far rely on visual feedback showing users where they are pointing. However, in many environments, e.g., smart homes, it is rarely possible to display cursors since most devices do not contain a display. We therefore raise the question of how to facilitate accurate pointing-based interaction in a cursorless context. In this paper we present two user studies showing that previous cursorless techniques are rather inaccurate because they neglect user characteristics that would help minimize inaccuracy. We show that pointing accuracy can be significantly improved by taking users' handedness and ocular dominance into account. In a first user study (n = 33), we reveal the large effect of ocular dominance and handedness on human pointing behavior. Current ray-casting techniques neglect both ocular dominance and handedness as influences on pointing behavior, precluding them from accurate cursorless selection. In a second user study (n = 25), we show that accounting for ocular dominance and handedness yields significantly more accurate selections than two previously published ray-casting techniques. This underlines the importance of considering user characteristics in developing better selection techniques that foster more robust, accurate selections.
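A minimal sketch of the kind of user-aware ray-casting the study argues for: the pointing ray is rooted at the dominant eye and passes through the fingertip of the preferred hand before being intersected with the display plane. The geometry and coordinates are illustrative assumptions, not the study's implementation.

```python
# Simplified eye-rooted ray-cast: cast a ray from the dominant eye through the
# preferred hand's fingertip and intersect it with a display plane.
import numpy as np

def raycast_to_plane(eye, fingertip, plane_point, plane_normal):
    """Intersect the eye->fingertip ray with a plane; return the hit point or None."""
    direction = fingertip - eye
    denom = direction @ plane_normal
    if abs(denom) < 1e-9:            # ray parallel to the plane
        return None
    t = ((plane_point - eye) @ plane_normal) / denom
    return eye + t * direction if t > 0 else None

def select_point(user, eye_positions, fingertip_positions, plane_point, plane_normal):
    # Account for user characteristics: the dominant eye is the ray origin and
    # the preferred hand's fingertip is the ray's second point.
    eye = eye_positions[user["dominant_eye"]]      # "left" or "right"
    tip = fingertip_positions[user["handedness"]]  # "left" or "right"
    return raycast_to_plane(eye, tip, plane_point, plane_normal)

user = {"dominant_eye": "right", "handedness": "right"}
eyes = {"left": np.array([-0.03, 1.7, 2.0]), "right": np.array([0.03, 1.7, 2.0])}
tips = {"left": np.array([-0.3, 1.4, 1.5]), "right": np.array([0.3, 1.4, 1.5])}
hit = select_point(user, eyes, tips, np.array([0.0, 1.5, 0.0]), np.array([0.0, 0.0, 1.0]))
print(hit)  # intersection with the display plane at z = 0
```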
