Similar Literature
20 similar documents found.
1.
Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. In its multimodal interaction with a smart environment, however, the user displays characteristics that show how he or she, not necessarily consciously, provides the environment with useful verbal and nonverbal input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between itself, smart objects (e.g., mobile robots, smart furniture), and the human participants within it. It is therefore useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, show how remote meeting participants can take part in meeting activities, and offer some observations on translating these research results to smart home environments.
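A profile of this kind can be captured in a simple data structure. The following Python sketch is illustrative only; the field names (preferences, pose stream, gaze targets, and so on) are assumptions for this example, not the authors' actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Illustrative user profile combining classic preference data
    with a physical representation from multimodal capture."""
    user_id: str
    preferences: dict = field(default_factory=dict)   # e.g. {"lighting": "dim"}
    interests: list = field(default_factory=list)     # e.g. ["robotics"]
    interaction_history: list = field(default_factory=list)  # logged events
    # Physical representation from multimodal capture (hypothetical fields):
    pose_stream: list = field(default_factory=list)   # time-stamped joint positions
    gaze_targets: list = field(default_factory=list)  # objects recently looked at

profile = UserProfile(user_id="participant-07")
profile.gaze_targets.append(("smart-lamp", 12.4))  # (object, timestamp)
```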

2.
Interactive visualizations such as virtual environments, together with their associated input and interface techniques, have traditionally focused on localized single-user interaction and have lacked co-present collaboration mechanisms that let two or more co-located users share, and actively cooperate and interact with, the visual simulation. VR facilities such as CAVEs or PowerWalls seem to promise such collaboration, but because of their special requirements for 3D input and output devices and their physical configuration and layout, they are generally designed to support one active controlling participant—the immersed user—and a passive, viewing-only audience. In this paper we explore the integration of technologies such as small handheld devices and wireless networks with VR/VEs in order to develop a technical and conceptual interaction approach that allows the creation of a more ad hoc, interaction-rich, multimodal, and multi-device environment, in which multiple users can access certain interactive capabilities of the VE, supporting co-located collaboration.
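To make the handheld-plus-wireless idea concrete, here is a minimal sketch of how a handheld device might send an interaction command to the VE host over the wireless network. The message fields, the UDP transport, and the host name are all assumptions for illustration; the paper's actual protocol is not described in the abstract.

```python
import json
import socket

# Hypothetical command a handheld device sends to the VE host over Wi-Fi.
command = {
    "device_id": "handheld-3",
    "user": "audience-member-12",
    "action": "select_object",
    "target": "molecule-42",
}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(json.dumps(command).encode("utf-8"), ("ve-host.local", 9000))
```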

3.
This study presents a 3D virtual reality (VR) keyboard system with realistic haptic feedback. The system uses two five-fingered data gloves to track finger positions and postures, micro-speakers to create simulated vibrations, and a head-mounted display (HMD) for 3D display. When users press a virtual key in the VR environment, the system provides realistic simulated key-click haptic feedback. The results show three advantages of the haptic VR keyboard: users can type while wearing an HMD (they do not need to remove it to use the keyboard); the keyboard can pop up at any location in the VR environment (users do not need to go to a specific location to reach a physical keyboard); and it provides realistic key-click haptic feedback (which other studies have shown enhances user performance). The results also show that the system can create complex vibrations that simulate vibrations measured from a real keyboard and enhance keyboard interaction in a fully immersive VR environment.
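A simulated key click of the kind described is often approximated as a short, exponentially decaying vibration burst routed to the micro-speakers. The sketch below generates such a burst; the frequency, decay, and duration values are illustrative guesses, not the measured keyboard vibrations used in the study.

```python
import numpy as np

def key_click(duration_s=0.02, freq_hz=250.0, decay=200.0, sr=44100):
    """Synthesize a short decaying sinusoid approximating a key-click
    vibration. Parameter values are invented for illustration, not
    taken from the paper's measured vibrations."""
    t = np.arange(int(duration_s * sr)) / sr
    return np.exp(-decay * t) * np.sin(2 * np.pi * freq_hz * t)

burst = key_click()  # samples to route to a micro-speaker on key press
```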

4.
The research presented in this paper investigates user interaction in immersive virtual learning environments, focusing on the role and effect of interactivity on conceptual learning. The goal has been to examine whether the learning of young users improves through interacting in (i.e., exploring, reacting to, and acting upon) an immersive virtual environment (VE) compared to non-interactive or non-immersive environments. Empirical work was carried out with more than 55 primary school students between the ages of 8 and 12, in different between-group experiments: an exploratory study, a pilot study, and a large-scale experiment. The latter was conducted in a virtual environment designed to simulate a playground. In this “Virtual Playground,” each participant was asked to complete a set of tasks designed to address arithmetical “fractions” problems. Three different conditions, two experimental virtual reality (VR) conditions and a non-VR condition, varying the levels of activity and interactivity, were designed to evaluate how children accomplished the various tasks. Pre-tests, post-tests, interviews, video, audio, and log files were collected for each participant and analysed both quantitatively and qualitatively. This paper presents a selection of case studies extracted from the qualitative analysis, which illustrate the variety of approaches taken by children in the VEs in response to visual cues and system feedback. Results suggest that the fully interactive VE aided children in problem solving but did not provide strong evidence of conceptual change as expected; rather, it was the passive VR environment, where activity was guided by a virtual robot, that seemed to support student reflection and recall, leading to indications of conceptual change.

5.
Eye tracking is one of the most prominent modalities for tracking user attention during interaction with computational devices. Most current eye-tracking frameworks focus on tracking the user's gaze during website browsing or other tasks performed on a digital device; what they have in common is that they do not exploit gaze as an input modality. In this paper we describe the realization of a framework named viGaze. Its main goal is to provide an easy-to-use framework for exploiting eye gaze as an input modality in various contexts. It therefore provides features for exploring explicit and implicit interactions in complex virtual environments through the user's eye gaze. The viGaze framework is flexible and can easily be extended to incorporate other input modalities typically used in Post-WIMP interfaces, such as gesture or foot input. We describe the key components of the viGaze framework and a user study conducted to test it. The user study took place in a virtual retail environment, which provides a challenging pervasive setting and contains complex interactions that can be supported by gaze. The participants performed two gaze-based interactions with products on virtual shelves and started an interaction cycle between the products and an advertisement monitor placed on the shelf. We demonstrate how gaze can be used in Post-WIMP interfaces to steer the attention of users to certain components of the system. We conclude by discussing the advantages provided by the viGaze framework and highlighting the potential of gaze-based interaction.
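One common way to turn gaze into an explicit input modality is dwell-time selection: a target is activated when the gaze rests on it long enough. The sketch below shows that idea in isolation; it is not the viGaze API, and the dwell threshold is an assumed value.

```python
DWELL_S = 0.8  # assumed dwell threshold; the framework's actual value is not stated

def dwell_select(gaze_samples):
    """Yield a selection event when consecutive gaze samples stay on
    the same target for DWELL_S seconds. gaze_samples yields
    (timestamp, target_id_or_None) tuples."""
    current, since = None, None
    for ts, target in gaze_samples:
        if target != current:
            current, since = target, ts
        elif target is not None and ts - since >= DWELL_S:
            yield target          # selection event
            current, since = None, None

samples = [(0.0, None), (0.1, "shelf-product-5"), (0.5, "shelf-product-5"),
           (1.0, "shelf-product-5")]
print(list(dwell_select(samples)))  # -> ["shelf-product-5"]
```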

6.
This paper presents an experimental study of an agent system with multimodal interfaces for a smart office environment. The agent system is based upon multimodal interfaces: recognition modules for both speech and pen-mouse gestures, and identification modules for both face and fingerprint. As the essential modules, speech recognition and synthesis were used for interaction between the user and the system; in this study, a real-time speech recognizer based on a Hidden Markov Network (HM-Net) was incorporated into the proposed system. In addition, identification techniques based on both face and fingerprint were adopted to provide a specific user with a user-customized and secure interaction in an office environment. In evaluation, results showed that the proposed system was easy to use and would prove useful in a smart office environment, even though the performance of the speech recognizer was not satisfactory, mainly due to noisy environments.

7.
Humans use a combination of gesture and speech to interact with objects, and usually do so more naturally without holding a device or pointer. We present a system that incorporates user body-pose estimation, gesture recognition, and speech recognition for interaction in virtual reality environments. We describe a vision-based method for tracking the pose of a user in real time and introduce a technique that provides parameterized gesture recognition. More precisely, we train a support vector classifier to model the boundary of the space of possible gestures, and train Hidden Markov Models (HMMs) on specific gestures. Given a sequence, we can find the start and end of various gestures using the support vector classifier, and find gesture likelihoods and parameters with an HMM. A multimodal recognition process then uses rank-order fusion to merge speech and vision hypotheses. Finally, we describe the use of our multimodal framework in a virtual-world application that allows users to interact using gestures and speech.
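Rank-order fusion, the fusion step named above, can be shown in a few lines: each modality ranks the candidate hypotheses, and the hypothesis with the lowest summed rank wins. This is a minimal, runnable sketch of that idea; the score values below are made-up stand-ins for HMM log-likelihoods and speech-recognizer confidences, and the paper's exact fusion details may differ.

```python
def rank_order_fusion(vision_scores, speech_scores):
    """Fuse two modalities by summing each hypothesis's rank in both
    score lists; the hypothesis with the lowest combined rank wins."""
    def ranks(scores):  # rank 0 = best (highest score)
        ordered = sorted(scores, key=scores.get, reverse=True)
        return {name: r for r, name in enumerate(ordered)}
    rv, rs = ranks(vision_scores), ranks(speech_scores)
    return min(vision_scores, key=lambda name: rv[name] + rs[name])

vision = {"wave": -310.2, "point": -295.8, "grab": -402.1}  # HMM log-likelihoods
speech = {"wave": 0.15, "point": 0.70, "grab": 0.10}        # recognizer confidences
print(rank_order_fusion(vision, speech))  # -> "point"
```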

8.
9.
The ever-growing use of virtual environments requires increasingly engaging elements for enhancing user experiences. For sounding virtual environments in particular, one promising way to achieve such realism and interactivity is the use of virtual characters interacting with sounding objects. In this paper, we focus on virtual characters playing virtual music instruments as a case study. We address more specifically the real-time motion control of virtual characters and their interaction with a sounding environment, in order to propose engaging and compelling virtual music performances. Combining physics-based simulation with motion data is a recent approach for finely representing and modulating this motion-sound interaction while keeping the realism and expressivity of the original captured motion. We propose a physically-enabled environment in which a virtual percussionist interacts with a physics-based sound-synthesis algorithm. We introduce and extensively evaluate the Hybrid Inverse Motion Control (HIMC), a motion-driven hybrid control scheme dedicated to the synthesis of upper-body percussion movements. We also propose a physics-based sound-synthesis model with which the virtual character can interact. Finally, we present an architecture offering an effective way to manage the heterogeneous data (motion and sound parameters) and feedback (visual and sound) that influence the resulting virtual percussion performances.
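The abstract does not specify the sound-synthesis model, but a standard physics-based technique for struck objects is modal synthesis: the struck surface is modeled as a bank of damped sinusoids excited by the impact. The sketch below assumes that technique; the mode frequencies and dampings are invented values, not the paper's calibrated model.

```python
import numpy as np

def modal_strike(impact_velocity, sr=44100, dur=1.0):
    """Modal-synthesis sketch: a struck membrane as a sum of damped
    sinusoids, with amplitude scaled by impact velocity. Mode values
    are illustrative, not taken from the paper."""
    modes = [(180.0, 3.0), (416.0, 5.0), (707.0, 8.0)]  # (freq Hz, damping)
    t = np.arange(int(dur * sr)) / sr
    out = sum(np.exp(-d * t) * np.sin(2 * np.pi * f * t) for f, d in modes)
    return impact_velocity * out / len(modes)

samples = modal_strike(impact_velocity=0.8)  # feed these to audio output
```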

10.
11.
In this paper, we present a human-robot teaching framework that uses “virtual” games as a means of adapting a robot to its user through natural interaction in a controlled environment. We present an experimental study in which participants instruct an AIBO pet robot while playing different games together on a computer-generated playfield. By playing the games and receiving instruction and feedback from its user, the robot learns to understand the user's typical way of giving multimodal positive and negative feedback. The games are designed in such a way that the robot can reliably predict positive or negative feedback based on the game state, and can explore its user's reward behavior by making good or bad moves. We implemented a two-stage learning method that combines Hidden Markov Models with a mathematical model of classical conditioning to learn to discriminate between positive and negative feedback. The system combines multimodal speech and touch input for reliable recognition. After training, the system was able to recognize positive and negative reward with an average accuracy of 90.33%.
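The classic mathematical model of classical conditioning is the Rescorla-Wagner rule, in which associative strengths move toward the observed reward in proportion to the prediction error. The sketch below assumes that rule; the abstract does not name the specific model used, and the stimulus names and learning-rate values here are hypothetical.

```python
def rescorla_wagner(V, stimuli_present, reward, alpha=0.1, beta=1.0):
    """One Rescorla-Wagner update: V maps stimulus -> associative
    strength; strengths of present stimuli move toward the reward in
    proportion to the prediction error. alpha/beta are assumed
    learning-rate parameters."""
    prediction = sum(V[s] for s in stimuli_present)
    error = reward - prediction
    for s in stimuli_present:
        V[s] += alpha * beta * error
    return V

V = {"rising_pitch": 0.0, "pat_touch": 0.0}
# A good robot move followed by the user's praise (reward = 1):
V = rescorla_wagner(V, ["rising_pitch", "pat_touch"], reward=1.0)
print(V)  # both stimuli gain associative strength
```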

12.
13.
We propose a framework with a flexible architecture, designed and implemented for collaborative interaction among users and intended for massive applications on the Web. We introduce the concept of interperception and use technologies such as massive virtual environments and teleoperation to create environments (mixing virtual and real ones) that promote accessibility and transparency in interaction between people, and between people and animate devices (such as robots), through the Web. Experiments with massive games, with interactive applications in digital television, and with users and robots interacting in virtual and real versions of museums and cultural centers are presented to validate our proposal.

14.
Virtual environments provide a whole new way of viewing and manipulating 3D data. Current technology moves the images out of desktop monitors and into the space immediately surrounding the user, who can literally put their hands on the virtual objects. Unfortunately, techniques for interacting with such environments have yet to mature. Gloves and sensor-based trackers are unwieldy, constraining, and uncomfortable to use. A more natural and intuitive method of interaction would allow the user to grasp objects with their hands and manipulate them as if they were real. We are investigating the use of computer vision to implement a natural interface based on hand gestures. A framework for a gesture recognition system is introduced, along with results of experiments in colour segmentation, feature extraction, and template matching for finger and hand tracking, and in simple hand-pose recognition. An implementation of a gesture interface for navigation and object manipulation in virtual environments is presented.
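The colour-segmentation-plus-template-matching pipeline named above can be sketched with OpenCV. This is a rough illustration of the general technique, not the paper's implementation; the HSV skin-tone thresholds are generic guesses.

```python
import cv2
import numpy as np

def find_hand(frame_bgr, finger_template_gray):
    """Skin-colour segmentation followed by template matching, as a
    sketch of the pipeline described above. HSV thresholds are generic
    skin-tone guesses, not the paper's calibrated values."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 255, 255]))
    segmented = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
    gray = cv2.cvtColor(segmented, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray, finger_template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)  # max response = best match
    return top_left, score  # best fingertip candidate and its match score

# Usage (with a camera frame and a small grayscale fingertip template):
# location, confidence = find_hand(frame, template)
```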

15.
The emergence of gestural interaction devices has given rise to various studies on multimodal human–computer interaction aimed at improving user experience (UX). However, there is a knowledge gap regarding the use of these devices to enhance learning. We present an exploratory study that analysed UX with a multimodal immersive videogame prototype based on a Portuguese historical/cultural episode. Evaluation tests took place in high-school environments and at public videogaming events. Two users were present simultaneously in the same virtual reality (VR) environment: one as the helmsman aboard Vasco da Gama's fifteenth-century Portuguese ship, and the other as the mythical stone giant Adamastor at the Cape of Good Hope. The helmsman player wore a VR headset to explore the environment, whereas the giant player used body motion to control the giant and observed the results on a screen, with no headset. This allowed a preliminary characterisation of UX, identifying challenges and the potential use of these devices in multi-user virtual learning contexts. We also discuss the combined use of such devices for the future development of similar systems, and its implications for improving learning through multimodal human–computer interaction.

16.
In this paper the authors discuss the modelling and design of an augmented reality platform for studies of wheeled mobility for the disabled. The design consists of a virtual environment, a two-degree-of-freedom motion platform, and integrated ground-contact force feedback. Users of differential-drive mobility aids remain in touch with reality on their own aid while interacting with virtual objects. The major application domain is the differential-drive mobility of disabled members of society, which covers manual wheelchairs, electric wheelchairs, and mobility scooters. To account for environmental and dynamic effects, the wheeled-mobility user's intended trajectory needs to be mapped into the virtual world. Motion and inertial force feedback produced by the augmented simulator give users a haptic sensory stimulus regarding spatial movement and ground-contact forces. The main objective is to model and design an augmented reality platform with real-world kinematic and dynamic properties that places a wheeled-mobility user closer to real-world encounters. The platform will be beneficial to disabled wheeled-mobility users in need of occupational-therapist training and evaluation.
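Mapping a differential-drive aid's wheel speeds into the virtual world uses the standard differential-drive kinematic model: forward speed is the mean of the wheel speeds, and yaw rate is their difference divided by the wheel base. The sketch below integrates one time step under that model; the symbols and values are generic, not taken from the paper.

```python
import math

def diff_drive_step(x, y, heading, v_left, v_right, wheel_base, dt):
    """Standard differential-drive kinematics: integrate one time step
    from the two wheel-rim speeds (m/s). wheel_base is the distance
    between the drive wheels in metres."""
    v = (v_left + v_right) / 2.0             # forward speed
    omega = (v_right - v_left) / wheel_base  # yaw rate
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += omega * dt
    return x, y, heading

pose = (0.0, 0.0, 0.0)  # x (m), y (m), heading (rad)
pose = diff_drive_step(*pose, v_left=0.8, v_right=1.0, wheel_base=0.55, dt=0.02)
```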

17.
Sketch-based techniques and adaptive techniques are combined with traditional virtual reality technology and applied to virtual teaching. By enhancing the handling of personalized requirements in the interaction between user and system, the intelligence and friendliness of the system are improved, which in turn improves the effectiveness of the virtual teaching environment. The aim is to build a sketch-based adaptive user interface in a virtual environment and apply it to virtual teaching so as to meet specific teaching needs; the sketch context-processing mechanism of a virtual teaching environment based on an adaptive sketch user interface is analysed through concrete examples. On this basis, a virtual teaching prototype system was designed and developed; experiments show that the system clearly improves the user experience.

18.
Virtual learning environments can now be enriched not only with visual and auditory information but also with tactile and kinesthetic feedback. However, how to successfully integrate haptic feedback into a multimodal learning environment is still unclear. This study aims to provide guidelines on how visuohaptic simulations can be implemented effectively; the research question is: under what conditions do visual and tactile information support students' development of conceptual learning of force-related concepts? Participants were 170 undergraduate students at a Midwestern university enrolled in a physics class for elementary education. Four experiments were conducted using four different configurations of multimodal learning environments: visual feedback only, haptic force feedback only, visual and haptic force feedback at the same time, and a sequenced modality of haptic feedback first and visual feedback second. Our results suggest that haptic force feedback has the potential to enrich learning compared with visual-only environments. Haptic and visual modalities also interact better when sequenced one after another rather than presented simultaneously. Finally, exposure to virtual learning environments enhanced by haptic force feedback was a positive experience, but ease of use and ease of interpretation were not so evident.

19.
In this article, we present a practical approach to analyzing mobile usage environments. We propose a framework for analyzing the restrictions that the characteristics of different environments place on the user's capabilities. These restrictions, together with current user interfaces, form the cost of interaction in a given environment; our framework aims to illustrate that cost and its causes. The framework presents a way to map features of the environment to the effects they have on the resources of the user and, in some cases, on the mobile device. This information can be used to guide the design of adaptive and/or multimodal user interfaces, or of devices optimized for certain usage environments. An example of using the framework is presented, along with some major findings and three examples of applying them in user interface design.
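The feature-to-resource mapping at the heart of the framework can be pictured as a small lookup structure that aggregates an interaction cost. The toy encoding below is an assumption-laden illustration; the feature names, resource names, and weights are invented, not the authors' actual taxonomy.

```python
# Toy encoding of the mapping the framework describes: environment
# features -> restricted user resources -> an aggregate interaction
# cost. All names and weights are hypothetical.
FEATURE_EFFECTS = {
    "walking":       {"hands": 0.3, "visual_attention": 0.5},
    "darkness":      {"vision": 0.8},
    "ambient_noise": {"hearing": 0.6, "speech_input": 0.7},
}

def interaction_cost(active_features):
    """Sum the restriction each active feature places on each user
    resource; high totals suggest adapting the UI, e.g. switching
    to a non-visual or non-speech modality."""
    cost = {}
    for feature in active_features:
        for resource, load in FEATURE_EFFECTS.get(feature, {}).items():
            cost[resource] = cost.get(resource, 0.0) + load
    return cost

print(interaction_cost(["walking", "ambient_noise"]))
```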

20.
Asperger's Syndrome (AS) is an autistic spectrum disorder characterised by normal to high IQ but marked impairment in social skills. Successful social skills training appears to be best achieved either in situ or in role-play situations where users can explore the different outcomes of their social behaviour. Single-user virtual environments (SVEs) give users with AS an opportunity to learn social interaction skills in a safe environment which they can visit as many times as they like. Game-like tasks can provide an incentive and can also guide the user through progressive learning stages. Collaborative virtual environments (CVEs) allow several users to interact simultaneously within the virtual environment, each taking a different perspective or role-play character. Within the AS Interactive project, a series of SVEs and CVEs have been developed in collaboration with users and professional groups, with the overall aim of supporting social skills learning. Initial evaluation studies have been carried out and used both to inform and to refine the design of these virtual environments (VEs), as well as giving an insight into how users with AS understand and interpret these technologies.
