Similar Documents
 20 similar documents found (search time: 968 ms)
1.
We've been exploring how augmented reality (AR) technology can create fundamentally new forms of remote collaboration for mobile devices. AR involves the overlay of virtual graphics and audio on reality. Typically, the user views the world through a handheld or head-mounted display (HMD) that's either see-through or overlays graphics on video of the surrounding environment. Unlike other computer interfaces that draw users away from the real world and onto the screen, AR interfaces enhance the real-world experience. For example, with this technology, doctors could see virtual ultrasound information superimposed on a patient's body.

2.
This article presents interactive visualizations to support the comprehension of spatial relationships between virtual and real-world objects in augmented reality (AR) applications. To enhance the clarity of such relationships, we discuss visualization techniques and their suitability for AR. We apply them to different AR applications with different goals, e.g., X-ray vision or applications that draw a user's attention to an object of interest. We demonstrate how Focus and Context (F+C) visualizations are used to affect the user's perception of hidden or nearby objects by presenting contextual information in the area of augmentation. We discuss the organization and the possible sources of data for visualizations in augmented reality and present cascaded and multi-level F+C visualizations to address the complex, cluttered scenes that are inevitable in real environments. This article also shows filters and tools to interactively control the amount of augmentation. It compares the impact of preserving real-world context against a purely virtual, uniform enhancement of these structures in augmentations of real-world imagery. Finally, this article discusses the stylization of sparse object representations in AR to improve X-ray vision.

3.
Research on an ARToolKit-Based Augmented Reality System for Underground Pipe Networks   (Cited by 4: 0 self-citations, 4 by others)
Augmented reality is a technology that superimposes computer-generated virtual images or other information onto the real world seen by the user, and it is a challenging and active topic in the field of virtual reality. ARToolKit provides a convenient and fast development tool for augmented reality applications, but its drawback is that its registration is based on computer vision, which makes it unsuitable for outdoor AR systems and calls for further improvement. This paper uses ARToolKit to implement an augmented three-dimensional visualization of a pipe network indoors, with satisfactory results. For outdoor conditions, it proposes a hybrid registration method that combines real-time kinematic GPS (GPS RTK) and an inertial navigation system (INS) for precise positioning, and an electronic compass and an inclinometer for measuring the viewing direction. The framework and functions of an outdoor underground pipe-network AR system built with ARToolKit are discussed further.
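As a rough illustration of the hybrid registration idea described above (GPS RTK/INS for position, electronic compass and inclinometer for viewing direction), the sketch below turns such sensor readings into a camera pose. The coordinate conventions, function names, and constants are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

EARTH_RADIUS = 6378137.0  # WGS-84 equatorial radius in metres

def geodetic_to_local(lat, lon, alt, lat0, lon0, alt0):
    """Approximate east-north-up offset (metres) of (lat, lon, alt)
    relative to a reference point; adequate over short outdoor ranges."""
    d_lat = np.radians(lat - lat0)
    d_lon = np.radians(lon - lon0)
    east = d_lon * EARTH_RADIUS * np.cos(np.radians(lat0))
    north = d_lat * EARTH_RADIUS
    up = alt - alt0
    return np.array([east, north, up])

def view_matrix(position_enu, heading_deg, pitch_deg):
    """Build a camera-to-world pose from a compass heading (bearing,
    clockwise from north) and an inclinometer pitch (tilt up/down)."""
    yaw = np.radians(-heading_deg)   # compass bearings run clockwise
    pitch = np.radians(pitch_deg)
    # Rotation about the up (z) axis; forward = north at 0 deg heading.
    r_yaw = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                      [np.sin(yaw),  np.cos(yaw), 0.0],
                      [0.0,          0.0,         1.0]])
    # Rotation about the camera's right (east) axis for pitch.
    r_pitch = np.array([[1.0, 0.0,            0.0],
                        [0.0, np.cos(pitch), -np.sin(pitch)],
                        [0.0, np.sin(pitch),  np.cos(pitch)]])
    pose = np.eye(4)
    pose[:3, :3] = r_yaw @ r_pitch
    pose[:3, 3] = position_enu
    return pose  # invert to obtain the view matrix for rendering

# Example: viewer 12 m east and 5 m north of the reference point,
# facing east (90 deg bearing) and tilted 10 deg downwards.
pose = view_matrix(np.array([12.0, 5.0, 1.7]), heading_deg=90.0, pitch_deg=-10.0)
```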

4.
The metaverse is a virtual world that maps onto and interacts with the real world. It is a digital living space with a new social system, created through Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), Artificial Intelligence (AI), Cloud Computing (CC) and other technologies. With the rapid development of blockchain technology, interactive technology, network and computing technology, communication network technology, digital twin technology, artificial intelligence technology, Internet of Things technology and electronic game technology, the metaverse has also entered a stage of rapid development. However, enormous difficulties and challenges remain in creating a complete metaverse system that can truly break the barriers between the real and the virtual. The quality of content, devices and interaction in the metaverse all have an important impact on the Quality of Experience (QoE). The purpose of this paper is to survey the latest research progress on metaverse QoE, which should support future research on the topic and indicate directions for further work.

5.
Does visually perceived distance differ when objects are viewed in augmented reality (AR), as opposed to the real world? What are the differences? These questions are theoretically interesting, and the answers are important for the development of many tablet- and phone-based AR applications, including mobile AR navigation systems. This article presents a thorough literature review of distance judgment experimental protocols and results from several areas of perceptual psychology. In addition to distance judgments of real and virtual objects, the review also discusses previous work on measuring the geometry of virtual picture space and considers how this work might be relevant to tablet AR. The article then presents the results of two experiments. In each experiment, observers bisected egocentric distances of 15 and 30 m in tablet-based AR and in the real world, in both indoor corridor and outdoor field environments. In AR, observers bisected the distances to virtual humans, while in the real world, they bisected the distances to real humans. This is the first reported research that directly compares distance judgments of real and virtual objects in a tablet AR system. Four key findings were: (1) In AR, observers expanded midpoint intervals at 15 m, but compressed midpoints at 30 m. (2) Observers were accurate in the real world. (3) The environmental setting (corridor or open field) had no effect. (4) The picture perception literature is important in understanding how distances are likely judged in tablet-based AR. Taken together, these findings suggest the depth distortions that AR application developers should expect with mobile and especially tablet-based AR.

6.
One of Industry 4.0's greatest challenges for companies is the digitization of their processes and the integration of new related technologies such as virtual reality (VR) and augmented reality (AR), which can be used for training purposes, design, or assistance during industrial operations. Moreover, recent results and industrial proofs of concept show that these technologies offer critical advantages to industry. Nevertheless, the authoring and editing process for virtual and augmented content remains time-consuming, especially in complex industrial scenarios. While the use of interactive virtual environments through virtual and augmented reality presents new possibilities for many domains, wider adoption of VR/AR is possible only if the authoring process is simplified, allowing for more rapid development and configuration without the need for advanced IT skills. To meet this goal, this study presents a new framework: INTERVALES. First, the framework architecture is proposed, along with its different modules; the study then shows that the framework can be updated not only by IT workers but also by other domain experts. A UML data model is presented to structure and simplify the authoring process for both VR and AR. This model takes into account virtual and augmented environments and the possible interactions, and eases the orchestration of operations. Finally, this paper presents the implementation of an industrial use case composed of collaborative robotic (cobotic) and manual assembly workstations in VR and AR based on INTERVALES data.

7.
Kang Bo. Computer Measurement & Control, 2006, 14(11): 1431-1434, 1455
Augmented reality is an emerging technology that accurately superimposes computer-generated virtual scenes or information onto the real environment, enhancing the user's ability to perceive and interact with the real world. Tracking the user's field of view and viewpoint is one of the key technologies for registering virtual and real scenes in AR. Based on a discussion of the performance requirements for tracking in AR, this paper surveys tracking techniques based on magnetic, acoustic, inertial, and optical sensing, and analyzes the performance and limitations of each approach. It argues that hybrid tracking centered on vision-based tracking will become the mainstream tracking technology for AR systems, discusses vision-based tracking and vision-inertial hybrid tracking in detail together with their open problems, and gives special attention to tracking for outdoor augmented reality.

8.
This article addresses the problem of creating interactive mixed reality applications in which virtual objects interact with images of real-world scenarios. This is relevant for creating games and architectural or space-planning applications that interact with visual elements in the images, such as walls, floors and empty spaces. These scenarios are intended to be captured by the users with regular cameras or taken from existing photographs. Introducing virtual objects into photographs presents several challenges, such as pose estimation and the creation of a visually correct interaction between virtual objects and the boundaries of the scene. The two main research questions addressed in this article are whether it is feasible to create interactive augmented reality (AR) applications in which virtual objects interact with a real-world scenario through high-level features detected in the image, and whether untrained users are capable of, and motivated enough for, performing the AR initialization steps. The proposed system detects the scene automatically from an image, with additional features obtained through basic annotations from the user. This operation is kept simple enough to accommodate the needs of non-expert users. The system analyzes one or more photos captured by the user and detects high-level features such as vanishing points, floor and scene orientation. Using these features, it is possible to create mixed and augmented reality applications in which the user interactively introduces virtual objects that blend with the picture in real time and respond to the physical environment. To validate the solution, several system tests are described and compared using available external image datasets.

9.
Most augmented reality (AR) applications are primarily concerned with letting a user browse a 3D virtual world registered with the real world. More advanced AR interfaces let the user interact with the mixed environment, but the virtual part is typically rather finite and deterministic. In contrast, autonomous behavior is often desirable in ubiquitous computing (Ubicomp), which requires the computers embedded into the environment to adapt to context and situation without explicit user intervention. We present an AR framework that is enhanced by typical Ubicomp features by dynamically and proactively exploiting previously unknown applications and hardware devices, and adapting the appearance of the user interface to persistently stored and accumulated user preferences. Our framework explores proactive computing, multi-user interface adaptation, and user interface migration. We employ mobile and autonomous agents embodied by real and virtual objects as an interface and interaction metaphor, where agent bodies are able to opportunistically migrate between multiple AR applications and computing platforms to best match the needs of the current application context. We present two pilot applications to illustrate the design concepts.

10.
Kwak, Suhwan; Choe, Jongin; Seo, Sanghyun. Multimedia Tools and Applications, 2020, 79(23-24): 16141-16154

Rapid developments in augmented reality (AR) and related technologies have led to increasing interest in immersive content. AR environments are created by combining virtual 3D models with a real-world video background. It is important to merge these two worlds seamlessly if users are to enjoy AR applications but, all too often, the illumination and shading of virtual objects does not take the real-world lighting conditions into account or does not match that of nearby real objects. In addition, visual artifacts produced when blending real and virtual objects further limit realism. In this paper, we propose a harmonic rendering technique that minimizes the visual discrepancy between the real and virtual environments to maintain visual coherence in outdoor AR. To do this, we introduce a method for estimating and approximating the Sun's position and the sunlight direction, and from these the real sunlight intensity, since sunlight is the most significant illumination source in outdoor AR; this provides a more realistic lighting environment for such content and reduces the mismatch between real and virtual objects.

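A coarse, self-contained sketch of the kind of Sun-position estimate the abstract refers to is given below. It uses a textbook declination/hour-angle approximation rather than the authors' actual estimator; the function name and conventions are illustrative.

```python
import numpy as np

def sun_direction(lat_deg, day_of_year, solar_hour):
    """Approximate the Sun's direction as a unit vector in local
    east-north-up coordinates from latitude, day of year, and local
    solar time. A coarse textbook approximation, not the paper's method."""
    lat = np.radians(lat_deg)
    # Solar declination (radians), simple cosine approximation.
    decl = np.radians(-23.44 * np.cos(np.radians(360.0 / 365.0 * (day_of_year + 10))))
    # Hour angle: 15 degrees per hour away from solar noon.
    hour_angle = np.radians(15.0 * (solar_hour - 12.0))
    # Solar elevation above the horizon.
    sin_el = (np.sin(lat) * np.sin(decl) +
              np.cos(lat) * np.cos(decl) * np.cos(hour_angle))
    elevation = np.arcsin(np.clip(sin_el, -1.0, 1.0))
    # Solar azimuth, measured clockwise from north.
    cos_az = ((np.sin(decl) - np.sin(elevation) * np.sin(lat)) /
              (np.cos(elevation) * np.cos(lat)))
    azimuth = np.arccos(np.clip(cos_az, -1.0, 1.0))
    if hour_angle > 0:            # afternoon: the Sun lies to the west
        azimuth = 2 * np.pi - azimuth
    east = np.cos(elevation) * np.sin(azimuth)
    north = np.cos(elevation) * np.cos(azimuth)
    up = np.sin(elevation)
    return np.array([east, north, up])

# Example: mid-June, 3 pm solar time, latitude 37.5 N (roughly Seoul).
print(sun_direction(37.5, 172, 15.0))
```

The resulting vector can drive a directional light in the renderer so that virtual shading and shadows roughly agree with the real sunlight.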

11.
This paper presents a novel computer entertainment system which recaptures human touch and physical interaction with the real-world environment as essential elements of the game play, whilst also maintaining the exciting fantasy features of traditional computer entertainment. Our system, called 'Touch-Space', is an embodied (ubiquitous, tangible, and social) computing based Mixed Reality (MR) game space which regains the physical and social aspects of traditional game play. In this novel game space, the real-world environment is an essential and intrinsic game element, and the human's physical context influences the game play. It also provides the full spectrum of game interaction experience, ranging from the real physical environment (human-to-human and human-to-physical-world interaction), to augmented reality, to the virtual environment. It allows tangible interactions between players and virtual objects, and collaborations between players at different levels of reality. Thus, the system re-invigorates computer entertainment systems with social human-to-human and human-to-physical touch interactions.

12.
In this paper, we present a new immersive multiplayer game system developed for two different environments, namely, virtual reality (VR) and augmented reality (AR). To evaluate our system, we developed three game applications: a first-person-shooter game (for the VR and AR environments, respectively) and a sword game (for the AR environment). Our immersive system provides an intuitive way for users to interact with the VR or AR world by physically moving around the real world and aiming freely with tangible objects. This encourages physical interaction between players as they compete or collaborate with other players. Evaluation of our system consists of users' subjective opinions and their objective performance. Our design principles and evaluation results can be applied to similar immersive game applications based on AR/VR.

13.
Real-time collision detection between virtual and real objects is essential for strengthening the immersion and realism of augmented reality. This paper therefore proposes an algorithm for estimating collisions between virtual and real objects of arbitrary shape, based on augmented reality and monocular vision. By improving existing monocular 2D virtual-real collision detection and response algorithms, and addressing their high computational complexity, a 3D collision detection algorithm is proposed that only needs to compute four feature points of the real object. Through object segmentation, feature-point extraction, collision detection, and collision response, the method produces 3D virtual-real collision responses that are consistent with real-world physical behavior. Experiments in both marker-based and markerless AR environments show that the computational cost of the collision detection is close to that of the 2D algorithm while adding depth awareness, realizing prediction and handling of 3D virtual-real collision responses in monocular-vision augmented reality.
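The abstract reports a 3D test that needs only four feature points of the real object; since the exact test is not given, the sketch below shows one simple possibility under that constraint: fitting a bounding sphere to the four tracked points and testing it against the virtual object's bounding sphere. All names and numbers are illustrative.

```python
import numpy as np

def bounding_sphere(points):
    """Fit a simple bounding sphere (centroid plus maximum radius) to the
    tracked feature points of the real object."""
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)
    radius = np.linalg.norm(points - center, axis=1).max()
    return center, radius

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Sphere-sphere test: collision when the centre distance is below
    the sum of the radii."""
    return np.linalg.norm(center_a - center_b) <= radius_a + radius_b

# Four illustrative feature points reconstructed for the real object (metres).
real_points = [(0.00, 0.00, 0.50), (0.10, 0.00, 0.52),
               (0.10, 0.08, 0.50), (0.00, 0.08, 0.49)]
real_c, real_r = bounding_sphere(real_points)

# The virtual object approximated by its own bounding sphere.
virt_c, virt_r = np.array([0.05, 0.04, 0.55]), 0.06
if spheres_collide(real_c, real_r, virt_c, virt_r):
    print("collision: trigger the collision response")
```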

14.
Augmented reality is a technology that composites computer-generated virtual objects into the real world seen by the user. This paper introduces several key technologies involved in handling virtual objects in AR, including virtual object modeling, camera calibration, skeletal animation, and virtual scene optimization, and explains how these technologies are applied in a practical system.
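Camera calibration is one of the key technologies the abstract lists. Below is a minimal, standard OpenCV chessboard-calibration sketch of the kind such a system might use to obtain the camera intrinsics needed to register virtual objects with the camera image; the file paths and board size are illustrative, and this is not the paper's code.

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)  # inner corner count of the calibration chessboard (illustrative)

# 3D reference points of the chessboard corners in its own plane (z = 0).
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calib/*.jpg"):          # illustrative folder of chessboard photos
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solve for the camera matrix and distortion coefficients.
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection error:", ret)
print("intrinsics:\n", camera_matrix)
```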

15.
A model is first built by detecting the relationship between changes in the surface irradiance of the video tracking markers and the real light sources; the intensity and direction of the light sources in the real scene are then computed iteratively during interaction and integrated into an efficient, scene-management-based framework for developing augmented reality applications. Experimental results show that the virtual light sources automatically generated by the algorithm closely approximate the illumination of the real scene, so that in AR environments with one or more real light sources, virtual and real objects exhibit approximately consistent lighting effects.
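As a simplified stand-in for the estimation step described above, the sketch below fits a single directional light plus an ambient term to irradiance samples measured on marker surfaces with known normals, using one linear least-squares solve rather than the paper's iterative, interactive computation. A Lambertian surface is assumed and all sample values are illustrative.

```python
import numpy as np

def estimate_directional_light(normals, irradiance):
    """Least-squares fit of a single directional light plus ambient term
    to irradiance samples on marker patches with known normals:
        e_i ~= ambient + n_i . l
    Returns the light direction, its intensity, and the ambient term."""
    normals = np.asarray(normals, dtype=float)
    e = np.asarray(irradiance, dtype=float)
    # Unknowns: the ambient term and the 3-vector l (direction * intensity).
    a_mat = np.hstack([np.ones((len(e), 1)), normals])
    x, *_ = np.linalg.lstsq(a_mat, e, rcond=None)
    ambient, l = x[0], x[1:]
    intensity = np.linalg.norm(l)
    direction = l / intensity if intensity > 0 else l
    return direction, intensity, ambient

# Illustrative samples: normals of tracked marker faces and their
# mean brightness measured from the video frame.
normals = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (0.577, 0.577, 0.577)]
irradiance = [0.80, 0.35, 0.30, 0.75]
print(estimate_directional_light(normals, irradiance))
```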

16.
What if we could visualize and interact with information directly in the context of our surroundings? Our research group is exploring how augmented reality (AR) could someday make this possible. AR integrates a complementary virtual world with the physical world, for example by using head-tracked see-through head-worn displays to overlay graphics on what we see. Instead of looking back and forth between the real world and a PDA, we look directly at the real world and the virtual information overlaid on it. At the heart of this approach is context-aware computing: computing systems that are sensitive to the context in which they operate, ranging from human relationships to physical location. For example, information might be tied to specific locations within a global, Earth-centered coordinate system. How can we design effective mobile AR user interfaces? We've been trying to answer this question in part by developing experimental AR research prototypes. In AR, as in work on information visualization using desktop technologies, the amount of information available can far exceed what a system can legibly display at a given time, necessitating information filtering. Julier et al. (2000) have developed information filtering techniques for AR that depend on the user's goals, object importance, and proximity. We assume that a system can accomplish information filtering of this sort and that our system is displaying everything it should.
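The filtering idea attributed to Julier et al. (goals, object importance, and proximity) can be sketched as a simple scoring function like the one below; the weights, cut-offs, and data structure are illustrative assumptions, not the published formulation.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    label: str
    importance: float      # intrinsic importance of the object, 0..1
    distance_m: float      # distance from the user in metres
    related_goals: set     # task goals this annotation supports

def filter_annotations(annotations, user_goals, max_items=5,
                       max_distance_m=200.0):
    """Rank annotations by a weighted mix of goal relevance, intrinsic
    importance, and proximity, and keep only the top few so the display
    stays legible. Weights and cut-offs are illustrative."""
    def score(a):
        goal_match = 1.0 if a.related_goals & user_goals else 0.0
        proximity = max(0.0, 1.0 - a.distance_m / max_distance_m)
        return 0.5 * goal_match + 0.3 * a.importance + 0.2 * proximity
    ranked = sorted(annotations, key=score, reverse=True)
    return ranked[:max_items]
```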

17.
Augmented reality superimposes computer-generated virtual objects onto real scenes. In traditional AR systems, there is a clearly visible difference between the virtual objects and the real scene, which falls short of seamless integration of the two. This paper combines augmented reality with non-photorealistic rendering to reduce this visual discrepancy, and designs and implements a cartoon-style augmented reality system.

18.
In traditional AR systems, there is a clearly visible difference between the virtual objects and the real scene, which falls short of seamless integration of the two. By combining augmented reality with non-photorealistic rendering (NPR), this visual discrepancy is reduced, and a watercolor-style augmented reality system is designed and implemented.

19.
With the recent growth in the development of augmented reality (AR) technologies, it is becoming important to study human perception of AR scenes. In order to determine whether users suffer more from visual and operator fatigue when watching virtual objects through optical see-through head-mounted displays (OST-HMDs), compared with watching real objects in the real world, we propose a comparative experiment consisting of a virtual magic-cube task and a real magic-cube task. The scores of the subjective questionnaires (SQ) and the values of the critical flicker frequency (CFF) were obtained from 18 participants. In our study, we use several electrooculogram (EOG) and heart rate variability (HRV) measures as objective indicators of visual and operator fatigue. Statistical analyses were performed on the subjective and objective indicators in the two tasks. Our results suggest that participants were very likely to suffer more from visual and operator fatigue when watching virtual objects presented by the OST-HMD. In addition, the present study provides hints that HRV and EOG measures could be used to explore how visual and operator fatigue are induced by AR content. Finally, three novel HRV measures are proposed as potential indicators of operator fatigue.

20.
Virtual objects can be visualized inside real objects using augmented reality (AR). This visualization is called AR X-ray because it gives the impression of seeing through the real object. In standard AR, virtual information is overlaid on top of the real world. To position a virtual object inside a real object, AR X-ray requires partially occluding the virtual object with visually important regions of the real object. In effect, the virtual object becomes less legible than when it is completely unoccluded. Legibility is an important consideration for various applications of AR X-ray. In this research, we explored legibility in two implementations of AR X-ray, namely, edge-based and saliency-based. In our first experiment, we explored the tolerable amounts of occlusion for comfortably distinguishing small virtual objects. In our second experiment, we compared edge-based and saliency-based AR X-ray methods when visualizing virtual objects inside various real objects. Moreover, we benchmarked the legibility of these two methods against alpha blending. From our experiments, we observed that users have varied preferences for the proper amounts of occlusion cues for both methods. The partial occlusions generated by the edge-based and saliency-based methods need to be adjusted depending on the lighting conditions and the texture complexity of the occluding object. In most cases, users identify objects faster with saliency-based AR X-ray than with edge-based AR X-ray. Insights from this research can be directly applied to the development of AR X-ray applications.
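To make the comparison concrete, the sketch below composites a virtual layer "inside" a real image in two of the ways discussed: a plain alpha blend, followed by an edge-based variant that re-draws strong edges of the real occluder on top as occlusion cues. The OpenCV calls are standard, but the thresholds, weights, and overall pipeline are illustrative rather than the study's implementation.

```python
import numpy as np
import cv2

def edge_based_xray(real_bgr, virtual_bgra, edge_low=50, edge_high=150):
    """AR X-ray style compositing: alpha-blend the virtual object over the
    real frame, then keep strong edges of the real occluder on top as
    partial-occlusion cues. Thresholds and weights are illustrative."""
    gray = cv2.cvtColor(real_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, edge_low, edge_high)
    edge_mask = (edges > 0)[..., None].astype(np.float32)

    virt_rgb = virtual_bgra[..., :3].astype(np.float32)
    virt_a = virtual_bgra[..., 3:4].astype(np.float32) / 255.0
    real = real_bgr.astype(np.float32)

    # Ordinary alpha blend of the virtual object over the real frame...
    blended = virt_a * virt_rgb + (1.0 - virt_a) * real
    # ...then re-draw the occluder's edges on top as occlusion cues.
    out = edge_mask * real + (1.0 - edge_mask) * blended
    return out.astype(np.uint8)
```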
