Similar Literature
20 similar records found.
1.
Virtual Reality: How Much Immersion Is Enough? (Cited 2 times: 0 self-citations, 2 by others)
Bowman, D.A.; McMahan, R.P. Computer, 2007, 40(7):36-43
Solid evidence of virtual reality's benefits has graduated from impressive visual demonstrations to results in practical applications. Moreover, a realistic experience is no longer immersion's sole asset: empirical studies show that individual components of immersion provide distinct benefits, and full immersion is not always necessary. The goal of immersive virtual environments (VEs) is to let the user experience a computer-generated world as if it were real, producing a sense of presence, or "being there," in the user's mind.

2.
Mixed reality (MR) is a kind of virtual reality (VR) but a broader concept than augmented reality (AR), which augments the real world with synthetic electronic data. At the opposite end of the spectrum is augmented virtuality (AV), which enhances or augments the virtual environment (VE) with data from the real world. Mixed reality thus covers a continuum from AR to AV, embracing the definition of MR stated by Milgram and Kishino (1994). We describe some technical achievements made in the Mixed Reality Project in Japan.

3.
Today, VR research and development efforts often focus on the continual innovation of interaction styles and metaphors for virtual environments (VEs). New tools and interaction devices aim to increase the immersive experience rather than support seamless integration with real work scenarios. Even though users may soon perceive odours within VEs, real, task-oriented interaction within these environments will continue to lag behind. Combining these efforts can result in new user interfaces that reduce the cumbersome barriers prevalent in VEs today, finally unleashing the latent impact of this technology in everyday life. Implementing this vision requires an interdisciplinary and applied approach to integrating VR into the workplace. Mixed-reality display capabilities, useful multimodal interaction, and perceptual, intuitive interfaces are major components of such an application-oriented and human-centered approach, for which we coined the term walk-up VR. The paper discusses augmented and virtual reality as contributing technologies.

4.
In this paper, we compare four different auditory displays in a mobile audio-augmented reality environment (a sound garden). The auditory displays varied in the use of non-speech audio, Earcons, as auditory landmarks and 3D audio spatialization, and the goal was to test the user experience of discovery in a purely exploratory environment that included multiple simultaneous sound sources. We present quantitative and qualitative results from an initial user study conducted in the Municipal Gardens of Funchal, Madeira. Results show that spatial audio together with Earcons allowed users to explore multiple simultaneous sources and had the added benefit of increasing the level of immersion in the experience. In addition, spatial audio encouraged a more exploratory and playful response to the environment. An analysis of the participants' logged data suggested that the level of immersion can be related to increased instances of stopping and scanning the environment, which can be quantified in terms of walking speed and head movement.
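The stop-and-scan behaviour the study links to immersion can be quantified from the kind of position and head-orientation logs it describes. The sketch below is illustrative only; the log format, threshold values, and function name are assumptions, not taken from the study:

```python
import math

def stop_and_scan_episodes(log, speed_thresh=0.3, yaw_thresh=20.0):
    """Count episodes where the user stops walking (speed below
    speed_thresh, in m/s) while sweeping the head (yaw rate above
    yaw_thresh, in deg/s). `log` is a list of (t, x, y, yaw) samples."""
    episodes = 0
    in_episode = False
    for (t0, x0, y0, yaw0), (t1, x1, y1, yaw1) in zip(log, log[1:]):
        dt = t1 - t0
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        yaw_rate = abs(yaw1 - yaw0) / dt
        scanning = speed < speed_thresh and yaw_rate > yaw_thresh
        if scanning and not in_episode:  # a new stop-and-scan episode begins
            episodes += 1
        in_episode = scanning
    return episodes
```

A longer episode counts once: consecutive scanning samples extend the current episode rather than starting new ones.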

5.
Interactive visualizations such as virtual environments, together with their associated input and interface techniques, have traditionally focused on localized single-user interaction and have lacked co-present collaboration mechanisms through which two or more co-located users can share, and actively cooperate and interact with, the visual simulation. VR facilities such as CAVEs or PowerWalls, among many others, seem to promise such collaboration, but because of their special requirements for 3D input and output devices and their physical configuration and layout, they are generally designed to support one active controlling participant - the immersed user - and a passive, viewing-only audience. In this paper we explore the integration of technologies such as small handheld devices and wireless networks with VR/VEs in order to develop a technical and conceptual interaction approach that allows the creation of a more ad hoc, interaction-rich, multimodal and multi-device environment, in which multiple users can access interactive capabilities of the VE, supporting co-located collaboration.

6.
One of the challenges that Ambient Intelligence (AmI) faces is providing a usable interaction concept to its users, especially those with a weak technical background. In this paper, we describe a new approach to integrating interactive services provided by an AmI environment with the television set, one of the most widely used interaction clients in the home environment. The approach supports the integration of different TV set configurations, guaranteeing the possibility of developing universally accessible solutions. An implementation of this approach has been carried out as a multimodal, multi-purpose natural human-computer interface for elderly people, combining adapted graphical user interfaces and navigation menus with multimodal interaction (a simplified TV remote control and voice interaction). This user interface can also be suited to other user groups. We tested a prototype that adapts the videoconference and information services with a group of 83 users. The results from the user tests show that the group found the prototype both satisfactory and efficient to use.

7.
Virtual reality (VR) lets people act within and upon computer-generated environments, making it ideal for exposure therapy and some other forms of mental health treatment. In addition to representing stimuli with some degree of realism, a virtual environment (VE) lets users look at and interact with these things much as they would in the real world, using primarily their eyes and hands. This gives users a sense of physical as well as mental control over the things around them in the VE. SpiderWorld is one of a growing number of VEs that psychologists and VR researchers have begun to use to treat phobias and other anxiety disorders. SpiderWorld immerses the patient in a routine environment, like a home kitchen, and introduces realistic-looking spiders that the patient can observe, manipulate or even squash as part of exposure therapy. Therapists who treat phobic patients often try to reduce anxiety by exposing a patient to the stimuli or situations that provoke the phobic reaction. Generating these as part of a VE promises a new approach to treatment, not to mention sparing some spiders their exoskeletons!

8.
Head-mounted displays (HMDs) allow users to observe virtual environments (VEs) from an egocentric perspective. However, several experiments have provided evidence that egocentric distances are perceived as compressed in VEs relative to the real world. Recent experiments suggest that the virtual view frustum set for rendering the VE has an essential impact on the user's estimation of distances. In this article we analyze whether distance estimation can be improved by calibrating the view frustum for a given HMD and user. Unfortunately, in an immersive virtual reality (VR) environment, a full per-user calibration is not trivial, and manual per-user adjustment often leads to minification or magnification of the scene. Therefore, we propose a novel per-user calibration approach with optical see-through displays commonly used in augmented reality (AR). This calibration takes advantage of a geometric scheme based on 2D point to 3D line correspondences, which can be used intuitively by inexperienced users and requires less than a minute to complete. The required user interaction is based on taking aim at a distant target marker with a close marker, which ensures non-planar measurements covering a large area of the interaction space while also reducing the number of required measurements to five. We found a tendency for a calibrated view frustum to reduce the average distance underestimation of users in an immersive VR environment, but even a correctly calibrated view frustum could not entirely compensate for the distance underestimation effects.

9.
We present a virtually documented environment system providing the user with high-level interaction possibilities. The system is dedicated to applications in which the operator needs to have his hands free in order to access information, carry out measurements and/or operate on a device (e.g. maintenance, instruction). The system merges video images acquired through a head-mounted video camera with synthetic data (multimedia documents including CAD models and text) and presents these merged images to the operator. Registration techniques allow the operator to visualise information properly correlated with the real world: this is essential to achieving a feeling of presence in a real environment. We increase the sense of immersion through high-level Human-Computer Interaction (HCI) allowing hands-free access to information through vocal commands as well as multimodal interaction associating speech and gesture. In this way, the user can access information and manipulate it in a very natural manner. We discuss the construction of the documentation system and the required functionality, which led to the system architecture.

10.
Head-mounted displays (HMDs) immerse users in a virtual environment (VE) in which the user's viewpoint changes according to tracked movements in real space. Because the size of the virtual world often differs from the size of the tracked lab space, a straightforward implementation of omni-directional and unlimited walking is not generally possible. In this article we review and discuss a set of techniques that use known perceptual limitations and illusions to support seemingly natural walking through a large virtual environment in a confined lab space. The concept behind these techniques is called redirected walking. With redirected walking, users are guided unnoticeably on a physical path that differs from the path they perceive in the virtual world, by manipulating the transformations from real to virtual movements. For example, virtually rotating the view in the HMD to one side with every step causes the user to unknowingly compensate by walking a circular arc in the opposite direction, while having the illusion of walking on a straight trajectory. We describe a number of perceptual illusions that exploit limitations of motion detectors to manipulate the user's perception of the speed and direction of their motion. We describe how gains of locomotor speed, rotation, and curvature can gradually alter the physical trajectory without the user observing any discrepancy, and discuss studies that investigated perceptual thresholds for these manipulations. We discuss the potential of self-motion illusions to shift or widen the applicable ranges for gain manipulations and to compensate for over- or underestimation of speed or travel distance in VEs. Finally, we identify a number of key issues for future research on this topic.
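The curvature manipulation can be made concrete with a small geometric sketch. This is an illustrative reconstruction, not code from the article (the function name and the gain value are invented): given samples of a straight physical walk, it accumulates the injected rotation per metre walked and returns the corresponding virtual positions. A user compensating to keep the virtual path straight would instead physically walk the inverse arc, which is the redirection effect.

```python
import math

def redirect(real_path, curvature_gain=0.15):
    """Map a physical trajectory to the virtual path a renderer would
    show when a constant curvature gain (rad per metre walked) injects
    rotation with every step. `real_path` is a list of (x, y) samples."""
    vx, vy, heading = 0.0, 0.0, 0.0
    virtual = [(vx, vy)]
    for (x0, y0), (x1, y1) in zip(real_path, real_path[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        heading += curvature_gain * step  # injected virtual rotation
        vx += step * math.cos(heading)    # step length is preserved;
        vy += step * math.sin(heading)    # only its direction is bent
        virtual.append((vx, vy))
    return virtual
```

Note that each virtual step keeps the real step's length; the gain only bends the heading, which is why the manipulation can stay below perceptual thresholds.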

11.
In this paper, we present a believable interaction mechanism for manipulating multiple objects in a ubiquitous/augmented virtual environment. A believable interaction in a multimodal framework is defined as a persistent and consistent process that accords with contextual experience and common sense about feedback. We present a tabletop interface as a quasi-tangible framework for providing believable processes. An enhanced tabletop interface is designed to support a multimodal environment. As an exemplar task, we applied the concept to quickly accessing and manipulating distant objects. A set of enhanced mechanisms for remote manipulation is presented, including inertial widgets, a transformable tabletop, and proxies. The proposed method is evaluated for both performance and user acceptability in comparison with previous approaches. The proposed technique uses intuitive hand gestures and provides a higher level of believability. It can also support other types of accessing techniques such as browsing and manipulation. Copyright © 2007 John Wiley & Sons, Ltd.

12.
In this article we propose that organizational virtuality can be considered as a set of external as well as internal attributes. We focus on the attributes of internal virtuality, identify their possible dimensions, and refer to those as degrees of freedom. First we consider, in general terms, external and internal aspects of virtuality. Next we discuss organizational structures and introduce possible dimensions of virtuality that include the typology of the professional competencies. The dimensions of virtuality we introduce allow us to construct a multidimensional space of organizational virtuality. By means of this space we are able to convey the different degrees of organizational virtuality and offer some examples. In the final section of the article we test our constructs on real organizations and show a number of results concerning our measures of organizational virtuality. © 2007 Wiley Periodicals, Inc. Hum Factors Man 17: 575–586, 2007.

13.
Stereoscopic depth cues improve depth perception and increase immersion within virtual environments (VEs). However, improper display of these cues can distort perceived distances and directions. Consider a multi-user VE, where all users view identical stereoscopic images regardless of physical location. In this scenario, cues are typically customized for one "leader" equipped with a head-tracking device. This user stands at the center of projection (CoP) and all other users ("followers") view the scene from other locations and receive improper depth cues. This paper examines perceived depth distortion when viewing stereoscopic VEs from follower perspectives and the impact of these distortions on collaborative spatial judgments. Pairs of participants made collaborative depth judgments of virtual shapes viewed from the CoP or after displacement forward or backward. Forward and backward displacement caused perceived depth compression and expansion, respectively, with greater compression than expansion. Furthermore, distortion was less than predicted by a ray-intersection model of stereo geometry. Collaboration times were significantly longer when participants stood at different locations compared to the same location, and increased with greater perceived depth discrepancy between the two viewing locations. These findings advance our understanding of spatial distortions in multi-user VEs, and suggest a strategy for reducing distortion.
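The ray-intersection prediction the measured distortion is compared against can be sketched for the simplest case, a point on the viewer's midline behind the screen. This is a hypothetical reconstruction of the geometry, not the paper's code (function name, parameters, and the on-midline simplification are assumptions): half-images are rendered for an eye pair standing at the CoP, and the follower's perceived depth is where the follower's own eye rays through those image points intersect.

```python
def perceived_depth(d, cop_dist, viewer_dist, ipd=0.065):
    """Predicted perceived depth (metres behind the screen) of a
    midline point rendered `d` behind the screen for a viewer at
    `cop_dist` from the screen, when actually viewed from
    `viewer_dist`. `ipd` is the interpupillary distance."""
    # On-screen half-images of the point, as rendered for the CoP viewer.
    x_left = -0.5 * ipd * d / (cop_dist + d)
    x_right = 0.5 * ipd * d / (cop_dist + d)
    disparity = x_right - x_left
    # Similar triangles: intersect the follower's eye rays through
    # those image points; d' / (viewer_dist + d') = disparity / ipd.
    return disparity * viewer_dist / (ipd - disparity)
```

Consistent with the study's direction of effect, standing forward of the CoP (viewer_dist < cop_dist) compresses the predicted depth and standing backward expands it; the study found measured distortion to be smaller than this geometric prediction.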

14.
Recent scholarship points to the rhetorical role of the aesthetic in multimodal composition and new media contexts. In this article, I examine the aesthetic as a rhetorical concept in writing studies and imagine the ways in which this concept can be useful to teachers of multimodal composition. My treatment of the concept begins with a return to the ancient Greek aisthetikos (relating to perception by the senses) in order to discuss the aesthetic as a meaningful mode of experience. I then review European conceptions of the aesthetic and finally draw from John Dewey and Bruno Latour to help shape this concept into a pragmatic and useful approach that can complement multimodal teaching and learning. The empirical approach I construct adds to an understanding of aesthetic experience with media in order to render more transparent the ways in which an audience creates knowledge - or takes and makes meaning - via the senses. Significantly, this approach to meaning making supports learning in digital environments where students are increasingly asked to both produce and consume media-convergent texts that combine multiple modalities including sound, image, and user interaction.

15.
The multimodal self-organizing network (MMSON), an artificial neural network architecture carrying out sensory integration, is presented here. The architecture is designed using neurophysiological findings and imaging studies that pertain to sensory integration, and it consists of interconnected lattices of artificial neurons. In this artificial neural architecture, the degree of recognition of stimuli, that is, the perceived reliability of stimuli in the various subnetworks, is included in the computation. The MMSON's behavior is compared to aspects of brain function that deal with sensory integration. According to human behavioral studies, integration of signals from sensory receptors of different modalities enhances the perception of objects and events and also reduces time to detection. In neocortex, integration takes place in bimodal and multimodal association areas and results not only in feedback-mediated enhanced unimodal perception and shortened reaction time, but also in robust bimodal or multimodal percepts. Simulation data from the presented artificial neural network architecture show that it replicates these important psychological and neuroscientific characteristics of sensory integration.

16.
Training in virtual environments (VEs) has the potential to establish mental models and task mastery while providing a safe environment in which to practice. Performance feedback is known to contribute to this learning; however, the most effective ways to provide feedback in VEs have not been established. The present study examined the effects of differing feedback content, focusing on adaptive feedback. Participants learned search procedures during multiple missions in a VE. A control group received only a performance score after each mission. Two groups additionally received either detailed or general feedback after each mission, while two other groups received feedback that adapted based on their performance (either detailed-to-general, or general-to-detailed). Groups that received detailed feedback from the start of training had faster performance improvement than all other groups; however, all feedback groups showed improved performance and by the fourth mission performed at levels above the control group. Results suggest that detailed feedback early in the training cycle is the most beneficial for the fastest learning of new task skills in VEs.

17.
For some applications based on virtual reality technology, presence and task performance are important factors in validating the experience. Different approaches have been adopted to analyse the extent to which certain aspects of a computer-generated environment may enhance these factors, but mainly in 2D graphical user interfaces. This study explores the influence of different sensory modalities on performance and the sense of presence experienced within a 3D environment. In particular, we have evaluated visual, auditory and active haptic feedback for indicating selection of virtual objects. The effect of spatial alignment between proprioceptive and visual workspaces (co-location) has also been analysed. An experiment was conducted to evaluate the influence of these factors in a controlled 3D environment based on a virtual version of the Simon game. The main conclusions indicate that co-location must be considered in order to determine the sensory needs during interaction within a virtual environment. This study also provides further evidence that the haptic sensory modality influences presence to a greater extent, and that auditory cues can reduce selection times. The conclusions provide initial guidelines that will help designers devise better selection techniques for more complex environments, such as training simulators based on VR technology, by highlighting different optimal configurations of sensory feedback.

18.
In this paper, the concept of multimodal human-computer interaction is explored. It is proposed that multimodality can be defined from human or technology perspectives, which place emphasis on different attributes of the system. Furthermore, it is argued that the most effective definition of multimodality concentrates on task and goal dependencies. Not only does this permit consideration over and above the technology/human distinction, but it also allows consideration of multiple tasks. In order to explore this notion, critical path analysis is used to develop models of multimodal systems. The model describes multimodal HCI and allows consideration of the effects of modality dependency. The models allow prediction of transaction time under various conditions. Predictions arising from these models are shown to be good fits with data obtained from user trials. Thus, it is proposed that one can develop and evaluate preliminary versions of multimodal systems prior to prototype development.
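The critical-path idea behind the model can be illustrated with a small sketch. The task graph, durations, and function name below are invented for illustration, not taken from the paper: when sub-tasks on different modalities can proceed in parallel, the predicted transaction time is the longest dependency chain, not the sum of all durations.

```python
def critical_path_time(tasks):
    """Completion time of a task graph as its longest dependency chain.
    `tasks` maps a task name to (duration, [names of prerequisite tasks]).
    Tasks with no dependency ordering (e.g. speech vs. pointing on
    different modalities) are free to overlap."""
    memo = {}
    def finish(name):
        if name not in memo:
            dur, deps = tasks[name]
            # A task finishes `dur` after its latest prerequisite finishes.
            memo[name] = dur + max((finish(d) for d in deps), default=0.0)
        return memo[name]
    return max(finish(t) for t in tasks)
```

For a hypothetical transaction where a spoken command (1.2 s) overlaps a pointing gesture (0.8 s) before a confirmation step (0.3 s), the critical path gives 1.5 s, whereas serializing the same tasks would predict 2.3 s; the overlap between modalities is exactly what the critical path captures.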

19.
G. Riva, Virtual Reality, 1998, 3(4):259-266
While many virtual reality (VR) applications have emerged in the areas of entertainment, education, military training, physical rehabilitation, and medicine, only recently have some research projects begun to test the possibility of using virtual environments (VEs) for research in neuroscience, neurosurgery and for the study and rehabilitation of human cognitive and functional activities. Virtual reality technology could have a strong impact on neuroscience. The key characteristic of VEs is the high level of control of the interaction with the tool without the constraints usually found in computer systems. VEs are highly flexible and programmable. They enable the therapist to present a wide variety of controlled stimuli and to measure and monitor a wide variety of responses made by the user. However, at this stage, a number of obstacles exist which have impeded the development of active research. These obstacles include problems with acquiring funding for an almost untested new treatment modality, the lack of reference standards, the non-interoperability of the VR systems and, last but not least, the relative lack of familiarity with the technology on the part of researchers in these fields.

20.
Virtual Reality - Virtual environments (VEs) can be infinitely large, but movement of the virtual reality (VR) user is constrained by the surrounding real environment. Teleporting has become a...
