Similar Literature
Found 20 similar articles (search time: 15 ms).
1.
Investigation of augmented reality (AR) environments has become a popular research topic for engineers, computer scientists and cognitive scientists. Although application-oriented studies of audio AR environments have been published, little work has been done to rigorously study and evaluate the important research questions of how effective three-dimensional (3D) sound is in the AR context, and to what extent the addition of 3D sound contributes to the AR experience.

Thus, we have developed two AR environments and performed rigorous experiments with human subjects to study the effects of 3D sound in the AR context. The study concerns two scenarios. In the first scenario, a participant judges the relative depth of augmented virtual objects using vision alone and vision combined with 3D sound. In the second scenario, two participants cooperate to perform a joint task in a game-based AR environment.

Hence, the goals of this study are (1) to assess the impact of 3D sound on depth perception in a single-camera AR environment, (2) to study the impact of 3D sound on task performance and the feeling of ‘human presence and collaboration’, (3) to better understand the role of 3D sound in human–computer and human–human interactions, and (4) to investigate whether gender affects the impact of 3D sound in AR environments. The outcomes of this research can usefully inform the development of audio AR systems, which provide more immersive, realistic and entertaining experiences by introducing 3D sound. Our results suggest that 3D sound in an AR environment significantly improves the accuracy of depth judgment and improves task performance. Our results also suggest that 3D sound contributes significantly to the feeling of human presence and collaboration and helps subjects to ‘identify spatial objects’.


2.
In this paper, we present a new immersive multiplayer game system developed for two different environments, namely, virtual reality (VR) and augmented reality (AR). To evaluate our system, we developed three game applications: a first-person-shooter game (for the VR and AR environments, respectively) and a sword game (for the AR environment). Our immersive system provides an intuitive way for users to interact with the VR or AR world by physically moving around the real world and aiming freely with tangible objects. This encourages physical interaction between players as they compete or collaborate with other players. Evaluation of our system covers both users' subjective opinions and their objective performance. Our design principles and evaluation results can be applied to similar immersive game applications based on AR/VR.

3.
Virtual reality (VR) and augmented reality (AR) applications are widely used in a variety of fields; one of the key requirements in a VR or AR system is to understand how users perceive depth in the virtual and augmented environments. Three different graphics depth cues are designed in a shuffleboard task to explore which kinds of graphics depth cues benefit depth perception. We also conduct a depth-matching experiment to compare performance in VR and AR systems using an optical see-through head-mounted display (HMD). The results show that the absolute error increases as the distance becomes greater. Analysis based on the inverse of distance shows that the box depth cue has a significant effect on depth perception, while the point and line depth cues have no significant effect. The error in diopters in the AR experiment is lower than that in the VR experiment. Participants in the AR experiment under the medium illuminance condition have less error than those under the low and high illuminance conditions. Men have less error than women in certain display conditions, but the advantage disappears when there is a strong depth cue. In addition, completion time has no significant effect on depth perception.
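
A brief note on the "inverse of distance" analysis mentioned above: a diopter is simply the reciprocal of a distance in metres, so a depth-matching error can be reported either in metres or in diopters. The sketch below only illustrates that conversion; the function and variable names are illustrative, not taken from the paper.

```python
# Convert a depth-matching result into absolute (metric) and dioptric errors.
# Diopter = 1 / distance (in metres), so the same metric error corresponds to
# a much smaller dioptric error at far distances -- one reason absolute error
# can grow with distance while a dioptric analysis remains comparable.

def depth_errors(true_distance_m: float, matched_distance_m: float):
    abs_error_m = abs(matched_distance_m - true_distance_m)
    diopter_error = abs(1.0 / matched_distance_m - 1.0 / true_distance_m)
    return abs_error_m, diopter_error

# Example: the same 0.3 m overshoot at 2 m and at 6 m
print(depth_errors(2.0, 2.3))   # (0.30, ~0.065 D)
print(depth_errors(6.0, 6.3))   # (0.30, ~0.008 D)
```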

4.
Several studies have been carried out on augmented reality (AR)-based environments that deal with user interfaces for manipulating and interacting with virtual objects, aimed at improving immersive feeling and natural interaction. Most of these studies have utilized AR paddles or AR cubes for interaction. However, these interactions overly constrain the users' ability to directly manipulate AR objects and are limited in providing a natural feel in the user interface. This paper presents a novel approach to natural and intuitive interaction through a directly hand-touchable interface in various AR-based user experiences. It combines markerless augmented reality with a depth camera to effectively detect multiple hand touches in an AR space. Furthermore, to simplify hand touch recognition, the point cloud generated by the Kinect is analyzed and filtered. The proposed approach can easily trigger AR interactions, allows users to experience more intuitive and natural sensations, and provides greater control efficiency in diverse AR environments. Furthermore, it can easily solve the occlusion problem of the hand and arm region inherent in conventional AR approaches through analysis of the extracted point cloud. We demonstrate the effectiveness and advantages of the proposed approach through several implementation results, such as an interactive AR car design and a touchable AR pamphlet. We also present an analysis of a usability study comparing the proposed approach with other well-known AR interactions.
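
For readers unfamiliar with touch detection from a depth camera, one common way to filter a point cloud for touch candidates is to keep only points in a thin band just above a known surface. The sketch below is a minimal illustration of that idea under assumed plane parameters and thresholds; it is not the paper's algorithm.

```python
import numpy as np

# Flag hand-touch candidates by keeping only points that lie within a thin
# band just above a known augmentation surface. The plane and the height
# thresholds are illustrative assumptions, not values from the paper.

def touch_candidates(points, plane_point, plane_normal,
                     min_height=0.005, max_height=0.02):
    """points: (N, 3) array in metres; the plane is given by a point and a normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    heights = (points - plane_point) @ n          # signed distance to the surface
    mask = (heights > min_height) & (heights < max_height)
    return points[mask]                           # candidate fingertip points

cloud = np.random.rand(10000, 3)                  # stand-in for one depth-camera frame
tips = touch_candidates(cloud, np.zeros(3), np.array([0.0, 0.0, 1.0]))
```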

5.
The paper presents different issues dealing with the preservation of cultural heritage using both virtual reality (VR) and augmented reality (AR) technologies in a cultural context. While the VR/AR technologies themselves are described, the attention is paid to 3D visualization and 3D interaction modalities, illustrated through three different demonstrators: two VR demonstrators (immersive and semi-immersive) and an AR demonstrator including tangible user interfaces. To show the benefits of the VR and AR technologies for studying and preserving cultural heritage, we investigated visualisation of and interaction with reconstructed underwater archaeological sites. The basic idea behind using VR and AR techniques is to offer archaeologists and the general public new insights into the reconstructed archaeological sites, allowing archaeologists to study directly from within the virtual site and allowing the general public to immersively explore a realistic reconstruction of the sites. Both activities are based on the same VR engine, but drastically differ in the way they present information and exploit interaction modalities. The visualisation and interaction techniques developed through these demonstrators are the result of the ongoing dialogue between the archaeological requirements and the technological solutions developed.

6.
A fundamental problem in optical see-through augmented reality (AR) is characterizing how it affects the perception of spatial layout and depth. This problem is important because AR system developers need both to place graphics in arbitrary spatial relationships with real-world objects, and to know that users will perceive them in those relationships. Furthermore, AR makes possible enhanced perceptual techniques that have no real-world equivalent, such as X-ray vision, where AR users are supposed to perceive graphics as being located behind opaque surfaces. This paper reviews and discusses protocols for measuring egocentric depth judgments in both virtual and augmented environments, and discusses the well-known problem of depth underestimation in virtual environments. It then describes two experiments that measured egocentric depth judgments in AR. Experiment I used a perceptual matching protocol to measure AR depth judgments at medium- and far-field distances of 5 to 45 meters. The experiment studied the effects of upper versus lower visual field location, the X-ray vision condition, and practice on the task. The experimental findings include evidence for a switch in bias, from underestimating to overestimating the distance of AR-presented graphics, at ~23 meters, as well as a quantification of how much more difficult the X-ray vision condition makes the task. Experiment II used blind walking and verbal report protocols to measure AR depth judgments at distances of 3 to 7 meters. The experiment examined real-world objects, real-world objects seen through the AR display, virtual objects, and combined real and virtual objects. The results give evidence that the egocentric depth of AR objects is underestimated at these distances, but to a lesser degree than has previously been found for most virtual reality environments. The results are consistent with previous studies that have implicated a restricted field of view, combined with an inability for observers to sc…

7.
This paper presents an innovative 3D reconstruction of ancient fresco paintings through the real-time revival of their fauna and flora, featuring groups of virtual animated characters with artificial-life dramaturgical behaviours in an immersive, fully mobile augmented reality (AR) environment. The main goal is to push the limits of current AR and virtual storytelling technologies and to explore the processes of mixed narrative design of fictional spaces (e.g. fresco paintings) where visitors can experience a high degree of realistic immersion. Based on a captured or real-time video sequence of the real scene in a video see-through HMD set-up, these scenes are enhanced by the seamless, accurate, real-time registration and 3D rendering of realistic, complete simulations of virtual flora and fauna (virtual humans and plants) in a real-time, scenario-based storytelling environment. Thus the visitor of the ancient site is presented with an immersive and innovative multi-sensory interactive trip to the past. Copyright © 2005 John Wiley & Sons, Ltd.

8.
This paper proposes tangible interfaces and interactions for authoring 3D virtual and immersive scenes easily and intuitively in a tangible augmented reality (AR) environment. It provides tangible interfaces for manipulating virtual objects in a natural and intuitive manner and supports adaptive and accurate vision-based tracking in AR environments. In particular, RFID is used to directly integrate physical objects with virtual objects and to systematically support tangible queries of the relation between physical and virtual objects, which provides more intuitive tangibility and a new way of manipulating virtual objects. Moreover, the proposed approach offers an easy and intuitive switching mechanism between the tangible environment and the virtual environment. This paper also proposes a context-adaptive marker tracking method that removes inconsistency problems when embedding virtual objects into physical ones in tangible AR environments. The context-adaptive tracking method not only adjusts the locations of invisible markers by interpolating the locations of existing reference markers and those of previous ones, but also removes the jumping effect of movable virtual objects when their reference changes from one marker to another. Several case studies for generating tangible virtual scenes and a comparison with previous work are given to show the effectiveness and novelty of the proposed approach.
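
To make the "jumping effect" concrete: when a virtual object's reference switches from one marker to another, the pose implied by the new marker rarely matches the last rendered pose exactly, so the object appears to jump. One simple mitigation is to blend toward the new pose over a few frames rather than snapping to it. The sketch below is a rough illustration of that idea, not the paper's interpolation scheme.

```python
import numpy as np

# Blend the rendered position toward the target implied by the current
# reference marker, instead of snapping, to hide small discontinuities
# when the reference marker changes. The blend factor is illustrative.

class SmoothedAnchor:
    def __init__(self, blend=0.25):
        self.blend = blend            # fraction moved toward the target per frame
        self.position = None

    def update(self, target_position):
        target = np.asarray(target_position, dtype=float)
        if self.position is None:
            self.position = target
        else:
            self.position += self.blend * (target - self.position)
        return self.position

anchor = SmoothedAnchor()
for frame_target in ([0, 0, 0], [0, 0, 0], [0.1, 0.4, 0]):  # reference switch at frame 3
    print(anchor.update(frame_target))
```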

9.
This study presents a 3D virtual reality (VR) keyboard system with realistic haptic feedback. The system uses two five-fingered data gloves to track finger positions and postures, micro-speakers to create simulated vibrations, and a head-mounted display (HMD) for 3D display. When users press a virtual key in the VR environment, the system provides realistic simulated key-click haptic feedback. The results of this study show the advantages of the haptic VR keyboard: users can use it while wearing HMDs (they do not need to remove the HMD to use the VR keyboard), the keyboard can pop up at any location in the VR environment (users do not need to go to a specific location to use an actual physical keyboard), and it provides realistic key-click haptic feedback (which other studies have shown enhances user performance). The results also show that the haptic VR keyboard system can create complex vibrations that simulate measured vibrations from a real keyboard and enhance keyboard interaction in a fully immersive VR environment.
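
As an aside, a simple key-click-like vibration for a micro-speaker can be synthesized as a short, exponentially decaying sinusoid. The sketch below only illustrates that general form; the frequency, decay and duration are made-up values and not the paper's measured keyboard waveforms.

```python
import numpy as np

# Synthesize a short decaying sinusoid burst, the kind of signal that might
# drive a micro-speaker for a key-click sensation. All parameters below are
# illustrative placeholders, not the measured data described in the paper.

def key_click(sample_rate=44100, freq_hz=250.0, duration_s=0.03, decay=120.0):
    t = np.arange(int(sample_rate * duration_s)) / sample_rate
    return np.exp(-decay * t) * np.sin(2 * np.pi * freq_hz * t)

burst = key_click()   # write this buffer to the audio/haptics output device
```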

10.
Mobile augmented reality (AR) applications have become feasible with the evolution of mobile hardware. For example, the advent of the smartphone made real-time mobile AR practical, which triggered the release of various applications. Recently, the rapid development of display technology, especially stereoscopic displays, has encouraged researchers to implement more immersive and realistic AR. In this paper, we present a framework for binocular augmented reality based on stereo camera tracking. Our framework was implemented on a smartphone and supports an autostereoscopic display and a video see-through display in which a smartphone can be docked. We modified edge-based 3D object tracking to estimate the poses of the left and right cameras jointly; this guarantees consistent registration across the left and right views. Virtual content is then overlaid onto the camera images using the estimated poses, and the augmented stereo images are distorted for presentation through the video see-through display. The feasibility of the proposed framework is shown by experiments and demonstrations.
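
One reason joint estimation keeps the left and right views consistent is that the two cameras of a calibrated stereo rig are rigidly related, so the right camera pose follows directly from the left via the fixed stereo extrinsic. The sketch below illustrates only that rigid-composition step under an assumed 6 cm baseline; it is not the paper's tracking code.

```python
import numpy as np

# Derive the right camera pose from the left one using the fixed
# left-to-right extrinsic of a calibrated stereo rig. Matrices are 4x4
# camera-from-world transforms; the baseline value is an assumption.

def right_from_left(T_left_from_world, T_right_from_left):
    return T_right_from_left @ T_left_from_world

T_left = np.eye(4)                                  # pose from the tracker
T_right_from_left = np.eye(4)
T_right_from_left[0, 3] = -0.06                     # assumed 6 cm stereo baseline
T_right = right_from_left(T_left, T_right_from_left)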

11.
Kwak, Suhwan; Choe, Jongin; Seo, Sanghyun. 《Multimedia Tools and Applications》 2020, 79(23–24): 16141–16154

Rapid developments in augmented reality (AR) and related technologies have led to increasing interest in immersive content. AR environments are created by combining virtual 3D models with a real-world video background. It is important to merge these two worlds seamlessly if users are to enjoy AR applications, but, all too often, the illumination and shading of virtual objects does not take the real-world lighting conditions into account or does not match that of nearby real objects. In addition, visual artifacts produced when blending real and virtual objects further limit realism. In this paper, we propose a harmonic rendering technique that minimizes the visual discrepancy between the real and virtual environments to maintain visual coherence in outdoor AR. To do this, we introduce a method for estimating and approximating the Sun's position and the sunlight direction in order to estimate the real sunlight intensity, since sunlight is the most significant illumination source in outdoor AR; this provides a more realistic lighting environment for such content and reduces the mismatch between real and virtual objects.

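For context, the Sun's direction can be approximated from the date, local solar time and latitude using the standard declination / hour-angle formulas. The sketch below is only a coarse illustration of that idea (elevation only, mean solar time); a production outdoor-AR renderer would use a full solar position algorithm, and nothing here is taken from the paper's implementation.

```python
import math

# Coarse solar elevation from day-of-year, local solar hour and latitude,
# using the textbook declination and hour-angle approximation. Azimuth and
# corrections such as the equation of time are deliberately omitted.

def solar_elevation_deg(day_of_year: int, solar_hour: float, latitude_deg: float) -> float:
    decl = math.radians(23.45) * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    lat = math.radians(latitude_deg)
    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_alt))

print(solar_elevation_deg(172, 12.0, 37.5))   # near the June solstice at noon, ~76 degrees
```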

12.
Virtual Reality - Technologies such as virtual reality (VR), an immersive computer-based environment that induces a feeling of mental and physical presence, are becoming increasingly popular for...

13.
Head-mounted displays (HMDs) allow users to observe virtual environments (VEs) from an egocentric perspective. However, several experiments have provided evidence that egocentric distances are perceived as compressed in VEs relative to the real world. Recent experiments suggest that the virtual view frustum set for rendering the VE has an essential impact on the user's estimation of distances. In this article we analyze whether distance estimation can be improved by calibrating the view frustum for a given HMD and user. Unfortunately, in an immersive virtual reality (VR) environment, a full per-user calibration is not trivial, and manual per-user adjustment often leads to minification or magnification of the scene. Therefore, we propose a novel per-user calibration approach with optical see-through displays commonly used in augmented reality (AR). This calibration takes advantage of a geometric scheme based on 2D point–3D line correspondences, which can be used intuitively by inexperienced users and requires less than a minute to complete. The required user interaction is based on taking aim at a distant target marker with a close marker, which ensures non-planar measurements covering a large area of the interaction space while also reducing the number of required measurements to five. We found a tendency for a calibrated view frustum to reduce the average distance underestimation of users in an immersive VR environment, but even a correctly calibrated view frustum could not entirely compensate for the distance underestimation effects.
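
For readers who want to see what such a calibration ultimately adjusts: a per-user, possibly asymmetric view frustum is applied as an off-axis projection matrix (the same construction as OpenGL's glFrustum). The sketch below shows that standard matrix with placeholder frustum values; the calibration procedure itself is not reproduced here.

```python
import numpy as np

# Build an off-axis (asymmetric) projection matrix from frustum extents at
# the near plane, in the same form as OpenGL's glFrustum. The left/right/
# bottom/top values would come from a per-user calibration; the numbers
# used below are placeholders.

def frustum_matrix(l, r, b, t, n, f):
    return np.array([
        [2 * n / (r - l), 0.0,              (r + l) / (r - l),  0.0],
        [0.0,             2 * n / (t - b),  (t + b) / (t - b),  0.0],
        [0.0,             0.0,             -(f + n) / (f - n), -2 * f * n / (f - n)],
        [0.0,             0.0,             -1.0,                0.0],
    ])

P = frustum_matrix(-0.03, 0.035, -0.02, 0.02, 0.05, 100.0)   # placeholder extents
```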

14.
One of Industry 4.0's greatest challenges for companies is the digitization of their processes and the integration of new related technologies such as virtual reality (VR) and augmented reality (AR), which can be used for training, design, or assistance during industrial operations. Moreover, recent results and industrial proofs of concept show that these technologies offer critical advantages in industry. Nevertheless, the authoring and editing process for virtual and augmented content remains time-consuming, especially in complex industrial scenarios. While the use of interactive virtual environments through virtual and augmented reality presents new possibilities for many domains, wider adoption of VR/AR is possible only if the authoring process is simplified, allowing more rapid development and configuration without the need for advanced IT skills. To meet this goal, this study presents a new framework: INTERVALES. First, the framework architecture is proposed, along with its different modules; the study then shows that the framework can be updated not only by IT workers but also by other domain experts. A UML data model is presented to structure and simplify the authoring process for both VR and AR. This model takes into account virtual and augmented environments and the possible interactions, and eases the orchestration of operations. Finally, this paper presents the implementation of an industrial use case composed of collaborative robotic (cobotic) and manual assembly workstations in VR and AR based on INTERVALES data.

15.
Advances in display devices are facilitating the integration of stereoscopic visualisation into our daily lives. However, autostereoscopic visualisation has not been extensively exploited. In this paper, we present a system that combines augmented reality (AR) and autostereoscopic visualisation. We also present the first study that compares different aspects of using an autostereoscopic display with AR and virtual reality (VR), in which 39 children aged 8 to 10 participated. In our study, no statistically significant differences were found between AR and VR. However, the scores were very high on nearly all of the questions, and the children also scored the AR version higher in all cases. Moreover, the children explicitly preferred the AR version (81%). For the AR version, a strong and significant correlation was found between the use of the autostereoscopic screen in games and seeing the virtual object on the marker. For the VR version, two strong and significant correlations were found: the first between ease of play and the use of the rotatory controller, and the second between depth perception and the global game score. Therefore, combinations of AR and VR with autostereoscopic visualisation are promising options for developing edutainment systems for children.

16.
In this paper, we introduce the concept of Extended VR (extending the viewing space and interaction space of back-projection VR systems) by describing the use of a hand-held semi-transparent mirror to support augmented reality tasks with back-projection systems. This setup overcomes the problem of occlusion of virtual objects by real ones that is inherent in such display systems. The presented approach allows an intuitive and effective application of immersive or semi-immersive virtual reality tasks and interaction techniques to an augmented surrounding space. To this end, we use the tracked mirror as an interactive image plane that merges the reflected graphics, which are displayed on the projection plane, with the transmitted image of the real environment. In our implementation, we also address traditional augmented reality problems, such as real-object registration and virtual-object occlusion. The presentation is complemented by an outline of conceivable further setups that apply transflective surfaces to support an Extended VR environment.
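
The core geometric operation when a tracked mirror acts as an image plane is a reflection across the mirror plane (for example, reflecting the viewer position so the projected graphics line up in the mirror). The sketch below shows the standard homogeneous reflection matrix for a plane given by a unit normal and offset; the plane values are illustrative and the code is not from the paper.

```python
import numpy as np

# Homogeneous reflection across the plane n.x + d = 0 (n a unit normal).
# A point x maps to x - 2*(n.x + d)*n, which the 4x4 matrix below encodes.

def reflection_matrix(normal, d):
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    M = np.eye(4)
    M[:3, :3] -= 2.0 * np.outer(n, n)
    M[:3, 3] = -2.0 * d * n
    return M

M = reflection_matrix([0.0, 0.0, 1.0], -1.0)        # mirror plane z = 1 (illustrative)
eye_reflected = M @ np.array([0.0, 0.0, 0.0, 1.0])  # origin reflects to (0, 0, 2)
```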

17.
Wearable augmented reality (AR) smart glasses have been utilized in various applications such as training, maintenance, and collaboration. However, most previous research on wearable AR technology did not effectively support situation-aware task assistance because of AR marker-based static visualization and registration. In this study, a smart and user-centric task assistance method is proposed, which combines deep learning-based object detection and instance segmentation with wearable AR technology to provide more effective visual guidance with less cognitive load. In particular, instance segmentation using Mask R-CNN and markerless AR are combined to overlay the 3D spatial mapping of an actual object onto its surrounding real environment. In addition, 3D spatial information with instance segmentation is used to provide 3D task guidance and navigation, which helps the user more easily identify and understand physical objects while moving around in the physical environment. Furthermore, 2.5D or 3D replicas support 3D annotation and collaboration between different workers without predefined 3D models. Therefore, the user can perform more realistic manufacturing tasks in dynamic environments. To verify the usability and usefulness of the proposed method, we performed quantitative and qualitative analyses in two user studies: (1) matching a virtual object to a real object in a real environment, and (2) performing a realistic task, namely the maintenance and inspection of a 3D printer. We also implemented several viable applications supporting task assistance using the proposed deep learning-based approach in wearable AR.
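
To illustrate the instance-segmentation building block, the sketch below runs an off-the-shelf, COCO-pretrained Mask R-CNN from torchvision on a single frame and keeps the high-confidence masks. It is only a stand-in for the kind of per-object masks a markerless AR overlay needs; the paper's own model, training data and classes (e.g. 3D printer parts) are not reproduced here, the file name is hypothetical, and a torchvision version ≥ 0.13 is assumed for the weights argument.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf instance segmentation with Mask R-CNN (COCO-pretrained).
# Assumptions: torchvision >= 0.13, an input image "frame.jpg", and a 0.7
# confidence threshold chosen for illustration.

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("frame.jpg").convert("RGB"))
with torch.no_grad():
    output = model([image])[0]

keep = output["scores"] > 0.7                 # confidence threshold (assumed)
masks = output["masks"][keep, 0] > 0.5        # boolean per-instance masks
boxes = output["boxes"][keep]                 # matching bounding boxes
```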

18.
Nonimmersive virtual reality (VR), which places the user in a 3D environment that can be directly manipulated with a conventional graphics workstation using a monitor, a keyboard, and a mouse, is discussed. The scene is displayed with the same 3D depth cues used in immersive VR: perspective view, hidden-surface elimination, color, texture, lighting, shading and shadows. As in immersive VR, animation and simulation are interactively controlled in response to the user's direct manipulation. Much of the technology used to support immersive and nonimmersive VR is the same: both use the same 3D modeling and rendering and many of the same interaction techniques. The advantages and applications of nonimmersive VR systems are discussed, immersive and nonimmersive VR systems are compared, and hybrid possibilities are reviewed.

19.
This paper investigates the use of virtual reality (VR)-based methods for the verification of performance factors related to manual assembly processes. An immersive and interactive virtual environment has been created to provide functionality for realistic process experimentation. Ergonomic models and functions have been embedded into the VR environment to support verification and to constrain experimentation to ergonomically acceptable conditions. A specific assembly test case is presented, for which a semi-empirical time model is developed using statistical design of experiments in the virtual environment. The virtual experimentation results enable the quantification and prediction of the influence of a number of process parameters, and of their combinations, on the process cycle time.
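
As background, a "semi-empirical time model" built from a designed experiment typically takes the form of a linear regression over coded factor levels and their interactions. The sketch below only shows that general form with invented factors and invented cycle-time measurements; it is not the paper's model or data.

```python
import numpy as np

# Fit a two-factor, coded (-1/+1) factorial model for cycle time:
# time ≈ b0 + b1*x1 + b2*x2 + b12*x1*x2. All numbers are placeholders
# standing in for measurements taken during virtual experimentation.

X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
cycle_time = np.array([14.2, 12.8, 17.5, 15.1])    # seconds (invented)

A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, cycle_time, rcond=None)
print(coef)   # intercept, main effects and interaction, in coded units
```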

20.

The paper aims to analyse the problem of quality evaluation and personalisation of virtual reality/augmented reality/mixed reality (VR/AR/MR). First, a systematic review of the relevant scientific literature on the research topic was conducted. After that, the findings of the systematic review concerning the evaluation of quality and the personalisation of VR/AR/MR learning environments are presented. The author's quality evaluation and personalisation framework for VR/AR/MR learning systems/environments is also presented in the paper. Evaluation of the quality of VR/AR/MR platforms/environments should be based on (a) applying both expert-centred (top-down) and user-centred (bottom-up) quality evaluation methods and (b) separating ‘internal quality’ criteria and ‘quality in use’ criteria in the set of quality criteria (model). Personalisation of VR/AR/MR platforms/environments should be based on learner models/profiles using students' learning styles, intelligent technologies, and Semantic Web applications.

