Similar Articles
20 similar articles found
1.
The emergence of small handheld devices such as tablets and smartphones, often with touch-sensitive surfaces as their only input modality, has spurred a growing interest in gestures for human–computer interaction (HCI). Earlier work has shown that eye movements can be consciously controlled by humans to the extent of performing sequences of predefined movement patterns, or "gaze gestures," which can be used for HCI purposes on desktop computers. Gaze gestures can be tracked noninvasively using a video-based eye-tracking system. We propose here that gaze gestures can also be an effective input paradigm for interacting with handheld electronic devices. We show through a pilot user study how gaze gestures can be used to interact with a smartphone, how they are easily assimilated by potential users, and how the Needleman-Wunsch algorithm can effectively discriminate intentional gaze gestures from the typical gaze activity performed during standard interaction with a small smartphone screen. Reliable gaze–smartphone interaction is thus possible: depending on the gaze-gesture modality used (with or without dwell), accuracy exceeds 80 to 90%, false-positive rates are negligible, and completion times stay below 1 to 1.5 s per gesture. These encouraging results, obtained with low-cost eye-tracking equipment, suggest the potential of this new HCI modality for interaction with small-screen handheld devices.
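To make the role of the alignment step concrete, here is a minimal sketch of Needleman-Wunsch global alignment applied to gaze paths. It assumes gaze data has already been quantized into symbol strings (one symbol per screen region) and uses illustrative match/mismatch/gap scores; the abstract does not specify the paper's actual encoding or scoring.

```python
# Minimal Needleman-Wunsch global alignment, as one plausible way to score
# how closely an observed gaze path matches a gesture template.
# Assumes gaze paths are pre-quantized into strings (one symbol per region);
# the match/mismatch/gap scores below are illustrative, not from the paper.

def needleman_wunsch(observed, template, match=1, mismatch=-1, gap=-1):
    n, m = len(observed), len(template)
    # score[i][j] = best alignment score of observed[:i] vs template[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if observed[i-1] == template[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[n][m]

# A gesture is accepted when the best-matching template clears a threshold,
# which separates intentional gestures from ordinary gaze activity.
templates = {"L-shape": "AABBB", "zigzag": "ABAB"}
observed = "AABBB"
best = max(templates, key=lambda k: needleman_wunsch(observed, templates[k]))
print(best, needleman_wunsch(observed, templates[best]))
```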

2.
In the user interfaces of modern systems, users get the impression of directly interacting with application objects. In 3D-based user interfaces, novel input devices, such as hand and force input devices, are being introduced to provide natural ways of interacting. The use of a hand input device allows the recognition of static poses and dynamic gestures performed by a user's hand. This paper describes the use of a hand input device for interacting with a 3D graphical application. A dynamic gesture language, which allows users to teach the system new hand gestures, is presented. Furthermore, a user interface integrating the recognition of these gestures and providing feedback for them is introduced. Particular attention has been paid to implementing a tool for easy specification of dynamic gestures and to strategies for providing graphical feedback on users' interactions. To demonstrate that the introduced 3D user interface features, and the way the system presents graphical feedback, are not restricted to a hand input device, a force input device has also been integrated into the user interface.

3.
Haptic technologies and applications have received enormous attention in the last decade. Incorporating the haptic modality into multimedia applications adds excitement and enjoyment. It also lends a more natural feel to multimedia applications that would otherwise be limited to vision and audition, by engaging the user's sense of touch as well, a quality essential for ambient intelligent applications. However, how the addition of haptic feedback improves an application's Quality of Experience (QoE) is still not completely understood. The research presented in this paper focuses on the effect of haptic feedback and what it potentially adds to the user's experience compared with traditional visual and auditory feedback. In essence, it investigates issues regarding stylus-based haptic education applications and haptic-enhanced entertainment videos. To this end, we used two haptic applications: a haptic handwriting learning tool to experiment with force-feedback haptic interaction, and a tactile YouTube application for tactile haptic feedback. For both applications, our analysis shows that the addition of haptic feedback increases the QoE, in the absence of fatigue or discomfort, for this category of applications. This implies that incorporating the haptic modality (both force feedback and tactile feedback) contributes positively to the overall QoE for users.

4.
Traditionally, the main goal of teleoperation has been to successfully achieve a given task as if performing it in local space. An emerging and related requirement is to also match the subjective sensation, or user experience, of the remote environment while maintaining reasonable task performance. This concept is often called "presence" or "(experiential) telepresence," informally defined as "the sense of being in a mediated environment." In this paper, haptic feedback is considered an important element for providing improved presence and reasonable task performance in remote navigation. An approach for using haptic information to "experientially" teleoperate a mobile robot is described. Haptic feedback is computed from the range information obtained by a sonar array attached to the robot and delivered to the user's hand via a haptic probe. This haptic feedback is provided in addition to stereoscopic images from a forward-facing stereo camera mounted on the mobile robot. An experiment with a user population in a real-world environment showed that haptic feedback significantly improved both task performance and user-felt presence. For user-felt presence, no interaction among haptic feedback, image resolution, and stereoscopy was observed; that is, haptic feedback was effective regardless of the fidelity of the visual elements. Stereoscopic images also significantly improved both task performance and user-felt presence, but high-resolution images significantly improved only user-felt presence. For task performance, however, an interaction between haptic feedback and stereoscopy was found; that is, stereoscopic images were effective only when no force feedback was applied. According to the multiple regression analysis, haptic feedback contributed more to the improvement in performance and presence than image resolution and stereoscopy.
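A common way to turn sonar ranges into a force at the haptic probe is a potential-field style repulsion summed over the sensor array. The sketch below illustrates that general scheme; the gain, cutoff range, and sensor geometry are assumptions, since the abstract does not give the paper's exact mapping.

```python
import math

# Sketch: convert sonar ranges into a 2D repulsive force for a haptic probe.
# Each sonar reading pushes away from the obstacle, with a force that grows
# as range shrinks below a cutoff. Gains and cutoff are illustrative.

def repulsive_force(ranges, angles, cutoff=2.0, gain=0.5):
    """ranges: distances in meters; angles: sensor bearings in radians."""
    fx = fy = 0.0
    for r, a in zip(ranges, angles):
        if r < cutoff:
            magnitude = gain * (1.0 / r - 1.0 / cutoff)  # stronger when closer
            fx -= magnitude * math.cos(a)  # push opposite the obstacle bearing
            fy -= magnitude * math.sin(a)
    return fx, fy

# Eight sonars evenly spaced around the robot; one obstacle directly ahead
# (bearing 0) and another to the front-right (bearing -45 degrees).
angles = [i * math.pi / 4 for i in range(8)]
ranges = [0.6, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 1.2]
print(repulsive_force(ranges, angles))
```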

5.
Usually, a mouse is used for input activities only, whereas output from the computer is sent via the monitor and one or two loudspeakers. But why not use the mouse for output, too? For instance, if it were possible to predict the next interaction object the user wants to click on, a mouse with a mechanical brake could stop the cursor movement at the desired position. This kind of aid is especially attractive for small targets like the resize handles of windows or small buttons. In this paper, we present an approach for integrating haptic feedback into everyday graphical user interfaces. We use a specialized mouse that is able to apply simple haptic information to the user's hand and index finger. A multi-agent system has been designed that observes the user in order to predict the next interaction object and launch haptic feedback, thus supporting positioning actions with the mouse. Although primarily designed to provide "intelligent" haptic feedback, the system can be combined with other output modalities as well, thanks to its modular and flexible architecture.
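As a rough illustration of the kind of prediction such a system might perform, the sketch below extrapolates the cursor's recent movement direction and selects the widget best aligned with it. This heuristic, and all names in it, are hypothetical stand-ins; the paper's multi-agent observer is not described here in enough detail to reproduce.

```python
import math

# Sketch of one simple target-prediction heuristic: extrapolate the cursor's
# recent movement direction and choose the widget best aligned with it.
# The paper uses a multi-agent observer; this stand-in heuristic is assumed.

def predict_target(cursor_trail, widgets):
    """cursor_trail: list of (x, y) samples; widgets: {name: (x, y) centers}."""
    (x0, y0), (x1, y1) = cursor_trail[-2], cursor_trail[-1]
    heading = math.atan2(y1 - y0, x1 - x0)

    def angular_error(center):
        bearing = math.atan2(center[1] - y1, center[0] - x1)
        # wrap the difference into [-pi, pi] before taking its magnitude
        return abs(math.atan2(math.sin(bearing - heading),
                              math.cos(bearing - heading)))

    # Smallest deviation from the current heading wins; the mouse brake
    # would then be engaged as the cursor nears that widget.
    return min(widgets, key=lambda name: angular_error(widgets[name]))

widgets = {"ok_button": (400, 300), "close_box": (780, 20)}
trail = [(100, 100), (150, 133)]
print(predict_target(trail, widgets))
```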

6.
Vibrotactile feedback is widely used in mobile devices because it provides a discreet and private feedback channel. Gaze-based interaction, on the other hand, is useful in various applications due to its unique capability to convey the focus of interest. Gaze input is naturally available as people typically look at things they operate, but feedback from eye movements is primarily visual. Gaze interaction and the use of vibrotactile feedback have been two parallel fields of human–computer interaction research with a limited connection. Our aim was to build this connection by studying the temporal and spatial mechanisms of supporting gaze input with vibrotactile feedback. The results of a series of experiments showed that the temporal distance between a gaze event and vibrotactile feedback should be less than 250 ms to ensure that the input and output are perceived as connected. The effectiveness of vibrotactile feedback was largely independent of the spatial body location of vibrotactile actuators. In comparison to other modalities, vibrotactile feedback performed equally to auditory and visual feedback. Vibrotactile feedback can be especially beneficial when other modalities are unavailable or difficult to perceive. Based on the findings, we present design guidelines for supporting gaze interaction with vibrotactile feedback.

7.
Virtual learning environments can now be enriched not only with visual and auditory information but also with tactile and kinesthetic feedback. However, how to successfully integrate haptic feedback into a multimodal learning environment is still unclear. This study aims to provide guidelines on how visuohaptic simulations can be implemented effectively; the research question is thus: under what conditions do visual and tactile information support students' development of conceptual learning of force-related concepts? Participants were 170 undergraduate students at a Midwestern university enrolled in a physics for elementary education class. Four experiments were conducted using four configurations of multimodal learning environments: visual feedback only, haptic force feedback only, simultaneous visual and haptic force feedback, and a sequenced modality with haptic feedback first and visual feedback second. Our results suggest that haptic force feedback has the potential to enrich learning compared with visual-only environments. Also, the haptic and visual modalities interact better when sequenced one after another rather than presented simultaneously. Finally, exposure to virtual learning environments enhanced by haptic force feedback was a positive experience, but ease of use and ease of interpretation were less evident.

8.
This paper develops nonlinear multiresolution techniques for scientific visualization utilizing haptic methods. The visualization of data is critical to many areas of scientific pursuit and is generally accomplished through computer graphics techniques. Recent advances in haptic technologies allow visual techniques to be augmented with haptic methods. The kinesthetic feedback provided through haptic techniques offers a second modality for visualization and allows for active exploration. Moreover, haptic methods can be used by individuals with visual impairments. Haptic representations of large data sets, however, can be confusing to a user, especially if a visual representation is not available or cannot be used. This paper develops a multiresolution data-decomposition method based on the affine median filter. The result is a hybrid structure that can be tuned to yield a decomposition varying from a linear wavelet decomposition to that produced by the median filter. The performance of this hybrid structure is analyzed using deterministic signals and, statistically, in the frequency domain. This analysis, together with qualitative and quantitative implementation results, shows that the affine median decomposition has advantages over previously proposed methods. In addition to the development, analysis, and results of the multiresolution decomposition, haptic implementation methods are presented.
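One simple reading of such a tunable linear/median hybrid is an affine blend of a moving average and a window median, sketched below. The actual affine median filter in the paper is more elaborate; this blend only illustrates how one parameter can sweep between the linear and the median end of the spectrum.

```python
import statistics

# Sketch of a tunable linear/median hybrid smoother: alpha = 0 gives a plain
# moving average (the linear end of the spectrum), alpha = 1 a pure median.
# The paper's affine median filter is more elaborate; this blend is assumed.

def hybrid_smooth(signal, window=3, alpha=0.5):
    half = window // 2
    out = []
    for i in range(len(signal)):
        w = signal[max(0, i - half): i + half + 1]
        out.append(alpha * statistics.median(w) + (1 - alpha) * sum(w) / len(w))
    return out

# In a multiresolution pyramid, the smoothed signal would form the coarse
# level and the residual (signal - smoothed) the detail level.
signal = [0, 0, 10, 0, 0, 5, 5, 5]       # impulse plus a step
print(hybrid_smooth(signal, alpha=1.0))  # median end suppresses the impulse
print(hybrid_smooth(signal, alpha=0.0))  # linear end smears it instead
```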

9.
Predefined sequences of eye movements, or "gaze gestures," can be consciously performed by humans and monitored non-invasively using remote video oculography. Gaze gestures hold great potential in human–computer interaction (HCI), provided that they can be easily assimilated by potential users, monitored using low-cost gaze-tracking equipment, and distinguished by machine learning algorithms, via their spatio-temporal structure, from typical gaze activity performed during standard HCI. In this work, the performance of a bioinspired Bayesian pattern-recognition algorithm known as Hierarchical Temporal Memory (HTM) on the real-time recognition of gaze gestures is evaluated through a user study. To improve the performance of traditional HTM during real-time recognition, an extension of the algorithm is proposed that adapts HTM to the temporal structure of gaze gestures. The extension consists of an additional top node in the HTM topology that stores and compares sequences of input data by sequence alignment using dynamic programming. The spatio-temporal codification of a gesture as a sequence serves to handle the temporal evolution of gaze-gesture instances. The extended HTM allows reliable discrimination of intentional gaze gestures from otherwise standard human–machine gaze interaction, reaching up to 98% recognition accuracy for a data set of 10 categories of gaze gestures, acceptable completion speeds, and a low rate of false positives during standard gaze–computer interaction. These positive results, achieved despite the low-cost hardware employed, support the notion of gaze gestures as a new HCI paradigm for accessibility and for interaction with smartphones, tablets, projected displays, and traditional desktop computers.
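The codification step can be pictured as quantizing raw gaze samples into screen-grid cells and collapsing consecutive repeats, yielding a symbol sequence that the top node could align against stored gesture templates (an alignment sketch appears under entry 1 above). The grid size and the run-length collapsing here are assumptions for illustration.

```python
# Sketch of a spatio-temporal codification step: quantize raw gaze samples
# into screen-grid cells and collapse consecutive repeats, producing the
# kind of symbol sequence the top node could align with stored gestures.
# Grid size and the run-length collapsing are assumptions for illustration.

def codify_gaze(samples, screen=(1920, 1080), grid=(3, 3)):
    """samples: list of (x, y) gaze points -> string of grid-cell symbols."""
    cols, rows = grid
    symbols = []
    for x, y in samples:
        c = min(int(x * cols / screen[0]), cols - 1)
        r = min(int(y * rows / screen[1]), rows - 1)
        cell = chr(ord("A") + r * cols + c)
        if not symbols or symbols[-1] != cell:  # drop consecutive duplicates
            symbols.append(cell)
    return "".join(symbols)

# A left-to-right sweep along the top of the screen becomes "ABC".
print(codify_gaze([(100, 100), (960, 90), (1800, 110)]))
```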

10.
User interfaces for current 3D and virtual reality environments require highly interactive input/output (I/O) techniques and appropriate input devices that provide users with natural and intuitive ways of interacting. This paper presents an interaction model, interaction techniques, and ways of using novel input devices for 3D user interfaces. The interaction model is based on a tool-object syntax, where the interaction structure syntactically simulates an action sequence typical of everyday life: one picks up a tool and then uses it on an object. Instead of a conventional mouse, actions are input through two novel input devices, a hand input device and a force input device. The devices can be used simultaneously or in sequence, and the information they convey can be processed by the system either jointly or independently. The use of a hand input device allows the recognition of static poses and dynamic gestures performed by a user's hand. Hand gestures are used for selecting tools (or acting as tools) and for manipulating graphical objects. A system for teaching and recognizing dynamic gestures, and for providing graphical feedback for them, is described.

11.
This paper demonstrates a haptic device for interaction with a virtual environment. Force control is augmented by visual feedback, which makes the system more responsive and accurate. Two control methods are widely used in haptic controller design. The first is impedance control, in which the user's motion is measured and a reaction force is fed back to the operator. The alternative is admittance control, in which the forces exerted by the user are measured and motion is fed back. Impedance and admittance control are also the two basic ways of interacting with a virtual environment. In this paper, several experiments were performed to evaluate the suitability of force-impedance control for haptic-interface development. The difference between the conventional application of impedance control in robot motion control and its application in haptic-interface development is investigated. Open-loop impedance control was implemented for the static case, and a general-purpose robot under open-loop impedance control was developed as a haptic device, while closed-loop model-based impedance control was used for haptic controller design in both the static and dynamic cases. The factors that could affect the performance of a haptic interface are also investigated experimentally through parametric studies. Experimental results for a 1-DOF rotational-motion system and a 2-DOF planar translational-motion system are presented. The results show that impedance control aided by visual feedback broadens the applicability of the haptic device and makes the system more responsive and accurate.
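To make the impedance/admittance distinction concrete, here is a minimal 1-DOF impedance-control sketch: motion is measured and a spring-damper reaction force is rendered back. The gains, mass, and single-axis setup are illustrative, not the controller described in the paper.

```python
# Minimal 1-DOF impedance control sketch: measure the user's motion, feed
# back a reaction force F = K*(x_d - x) + B*(v_d - v). Admittance control
# would invert this: measure force, command motion. Gains are illustrative.

def impedance_force(x, v, x_desired=0.0, v_desired=0.0, K=50.0, B=5.0):
    """Spring-damper reaction force rendered to the operator's hand."""
    return K * (x_desired - x) + B * (v_desired - v)

# Simulate the operator pushing the handle away from a virtual wall at x=0.
dt, x, v, mass = 0.001, 0.0, 0.0, 0.2
for step in range(1000):
    f_user = 1.0 if step < 500 else 0.0   # operator pushes for 0.5 s
    f_device = impedance_force(x, v)      # controller resists the motion
    a = (f_user + f_device) / mass
    v += a * dt
    x += v * dt
print(f"final position: {x:.4f} m")       # settles back toward the wall
```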

12.
This contribution describes a method for optically simulating haptic feedback without resorting to mechanical force feedback devices. The method exploits the dominance of the visual over the haptic modality. The perception of haptic feedback, usually generated by force feedback devices, was simulated by tiny displacements of the cursor position according to the intended force. The usability of optically simulated haptic feedback (OSHF) was tested experimentally by measuring the effectiveness, efficiency, and satisfaction of its users in a Fitts-type target-acquisition task and comparing the results with the usability of mechanically simulated force feedback and normal feedback. Results show that OSHF outperforms both mechanically simulated haptic feedback and normal feedback, especially for small targets.
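The core trick, rendering an intended force purely as a small visual offset of the cursor, can be sketched as follows. The spring-like force model, attraction radius, and gain are assumptions; the abstract does not give the paper's parameters.

```python
# Sketch of optically simulated haptic feedback (OSHF): instead of a motor,
# the on-screen cursor is nudged by a small displacement proportional to the
# force the interface intends to convey. Gain and force model are assumed.

def displace_cursor(cursor, target, radius=40.0, gain=0.15):
    """Inside a target's attraction radius, pull the drawn cursor inward."""
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0 or dist > radius:
        return cursor  # outside the simulated force field: draw as-is
    # Spring-like intended force, rendered purely as a visual offset.
    pull = gain * (radius - dist) / radius
    return (cursor[0] + pull * dx, cursor[1] + pull * dy)

# The physical pointer position stays untouched; only the drawn position
# shifts, exploiting visual dominance over the (absent) haptic channel.
print(displace_cursor((105.0, 100.0), (100.0, 100.0)))
```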

13.
Large interactive displays have become ubiquitous in our everyday lives, but they are designed for the needs of sighted people. In this paper, we specifically address helping people with visual impairments aim at a target on a large wall-mounted display. We introduce a novel haptic device and explore the use of vibrotactile feedback in blind users' search strategies on such a display. Using mid-air gestures aided by vibrotactile feedback, we compared three target-aiming techniques: Random (a baseline) and two novel techniques, Cruciform and Radial. The results of our two experiments show that visually impaired participants can find a target significantly faster with the Cruciform and Radial techniques than with the Random technique. In addition, they can retrieve information on a large display about twice as fast when speech feedback is augmented with haptic feedback in the Radial technique. Although a large number of studies have examined assistive interfaces for people with visual impairments, very few have addressed large vertical-display applications for them. In a broader sense, this work is a stepping-stone for further research on interactive large public-display technologies for users who are visually impaired.

14.
Haptics is a feedback technology that takes advantage of the human sense of touch by applying forces, vibrations, and/or motions to a haptic-enabled user device such as a mobile phone. Historically, human–computer interaction has been visual: data or images on a screen. Haptic feedback can be an important modality in mobile location-based services such as knowledge discovery, pedestrian navigation, and notification systems. In this paper, we describe a methodology for implementing haptics in four distinct prototypes for pedestrian navigation. The prototypes are classified based on the user's navigation-guidance requirements, the user type (based on spatial skills), and overall system complexity. Here, haptics is used to convey location, orientation, and distance information to users of pedestrian navigation applications. Initial user trials elicited positive responses from users, who see benefit in a "heads-up" approach to mobile navigation. We also tested users' spatial ability to navigate using haptics versus landmark-image-based navigation, followed by a test of memory recall of the area. Users were able to successfully navigate from a given origin to a destination point without a visual interface such as a map. Results show that users of haptic feedback for navigation prepared better maps of the region (better memory recall) than users of landmark-image-based navigation.
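One straightforward way to convey orientation and distance haptically is to map the bearing to the next waypoint onto a left/right vibration side and the remaining distance onto the pulse rate. The sketch below assumes that scheme; the paper's four prototypes may encode these cues differently.

```python
import math

# Sketch: encode waypoint direction and distance as a vibration cue.
# Bearing error picks the motor side, distance sets the pulse interval.
# This left/right + pulse-rate scheme is an assumption for illustration.

def haptic_cue(user_pos, user_heading, waypoint):
    dx, dy = waypoint[0] - user_pos[0], waypoint[1] - user_pos[1]
    distance = math.hypot(dx, dy)
    bearing_error = math.atan2(dy, dx) - user_heading
    # wrap into [-pi, pi] so small left/right deviations stay small
    bearing_error = math.atan2(math.sin(bearing_error), math.cos(bearing_error))
    side = "left" if bearing_error > 0.2 else "right" if bearing_error < -0.2 else "both"
    interval_s = min(2.0, 0.2 + distance / 100.0)  # faster pulses when close
    return side, round(interval_s, 2)

# Waypoint about 50 m away, bearing slightly left of the walking direction.
print(haptic_cue((0.0, 0.0), math.radians(60), (10.0, 50.0)))
```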

15.
Auditory interfaces can outperform visual interfaces when a primary task, such as driving, competes for the attention of a user controlling a device such as a radio. In emerging interfaces enabled by camera tracking, auditory displays may also provide viable alternatives to visual displays. This paper presents a user study of interoperable auditory and visual menus, in which the control gestures remain the same in both the visual and auditory domains. The tested control methods included a novel free-hand gesture interaction with camera-based tracking and touch-screen interaction with a tablet. The participants' task was to select numbers from a visual or auditory menu presented in a circular layout and in a numeric-keypad layout. Results show that, even with the participants' full attention on the task, the performance and accuracy of the auditory interface are the same as or even slightly better than those of the visual interface when controlled with free-hand gestures. The auditory menu was measured to be slower in touch-screen interaction, but the questionnaire revealed that over half of the participants felt the circular auditory menu was faster than the visual menu. Furthermore, visual and auditory feedback in touch-screen interaction with the numeric layout was measured to be fastest, the touch screen with the circular menu second fastest, and the free-hand gesture interface slowest. The results suggest that auditory menus can potentially provide a fast and desirable interface for controlling devices with free-hand gestures.

16.
Touchscreen interfaces offer benefits in terms of flexibility and ease of interaction, and as such their use has increased rapidly in a range of devices, from mobile phones to in-car technology. However, traditional touchscreens impose an inevitable visual workload demand that has implications for safety, especially in automotive use. Recent developments in touchscreen technology have enabled feedback to be provided via the haptic channel. A study was conducted to investigate the effects of visual and haptic touchscreen feedback on visual workload, task performance and subjective response using a medium-fidelity driving simulator. Thirty-six experienced drivers performed touchscreen 'search and select' tasks while engaged in a motorway driving task. The study utilised a 3 × 2 within-subjects design, with three levels of visual feedback: 'immediate', 'delayed', 'none'; and two levels of haptic feedback: 'visual only', 'visual + haptic'. Results showed that visual workload was increased when visual feedback was delayed or absent; however, introducing haptic feedback counteracted this effect, with no increases observed in glance time and count. Task completion time was also reduced when haptic feedback was enabled, while driving performance showed no effect due to feedback type. Subjective responses indicated that haptic feedback improved the user experience and reduced perceived task difficulty.

17.
Eye tracking is one of the most prominent modalities for tracking user attention during interaction with computational devices. Today, most eye-tracking frameworks focus on tracking the user's gaze during website browsing or while performing other tasks and interactions with a digital device. What most frameworks have in common is that they do not exploit gaze as an input modality. In this paper, we describe the realization of a framework named viGaze. Its main goal is to provide an easy-to-use framework for exploiting eye gaze as an input modality in various contexts. It therefore provides features for exploring explicit and implicit interactions in complex virtual environments, using the user's eye gaze for various interactions. The viGaze framework is flexible and can easily be extended to incorporate other input modalities typically used in Post-WIMP interfaces, such as gesture or foot input. In this paper, we describe the key components of the viGaze framework and a user study that was conducted to test it. The user study took place in a virtual retail environment, which provides a challenging pervasive setting and contains complex interactions that can be supported by gaze. The participants performed two gaze-based interactions with products on virtual shelves and started an interaction cycle between the products and an advertisement monitor placed on the shelf. We demonstrate how gaze can be used in Post-WIMP interfaces to steer users' attention to certain components of the system. We conclude by discussing the advantages of the viGaze framework and highlighting the potential of gaze-based interaction.

18.
Previous research has not fully examined the effect of additional sensory feedback, particularly feedback delivered through the haptic modality, on pointing-task performance in the presence of visual distractions. This study examined the effect of haptic feedback and visual distraction on pointing-task performance in a 3D virtual environment. Results indicate a strong positive effect of haptic feedback on performance in terms of task time and root-mean-square error of motion. Greater similarity between the distractor objects and the target object significantly reduced performance, and subjective ratings indicated an increased sense of task difficulty as similarity increased. Participants performed best in trials where the distractor objects had a different color but the same shape as the target object and constant haptic assistive feedback was provided. Overall, this study provides insight into the effects of object features and similarity, and of haptic feedback, on pointing-task performance.

19.
Through an investigation of how the performance of people with normal visual capabilities is affected by unimodal, bimodal, and trimodal feedback, this research establishes a foundation for presenting effective feedback to enhance the performance of individuals with visual impairments. Interfaces that employ multiple feedback modalities, such as auditory, haptic, and visual, can enhance user performance for individuals with barriers limiting one or more channels of perception, such as a visual impairment. The results obtained demonstrate the effects of different feedback combinations on mental workload, accuracy, and performance time. Future similar studies focused on participants with visual impairments will be grounded in this work.

20.
Preserving older pedestrians' navigation skills in urban environments is a challenge for maintaining their quality of life. However, existing aids take into account neither older people's perceptual and cognitive declines nor their user experience, and they call upon sensory modalities that are already used during walking. The present study compared different guidance instructions using visual, auditory, and haptic feedback in order to identify the most efficient and best-accepted ones. Sixteen middle-aged (non-retired) adults, 21 younger-old (young-retired) adults, and 21 older-old (old-retired) adults performed a navigation task in a virtual environment. The task was performed with visual feedback (directional arrows superimposed on the visual scenes), auditory feedback (sounds in the left or right ear), haptic feedback (vibrotactile information delivered by a wristband), combinations of these types of sensory feedback, or a paper map. The results showed that older people benefited from the sensory guidance instructions compared with the paper map. Visual and auditory feedback were associated with better performance and user experience than haptic feedback or the paper map, and the benefits were greatest among the older-old participants, even though the familiarity of the paper map was appreciated. Several recommendations for designing pedestrian navigation aids are proposed.
