Similar Literature
20 similar documents found (search time: 31 ms)
1.
We propose a novel approach to simulating the illumination of an augmented outdoor scene based on a legacy photograph. Unlike previous works, which take only surface radiosity or only lighting-related prior information as the basis of illumination estimation, our method integrates both. By adopting spherical harmonics, we derive a linear model with only six illumination parameters. The illumination of the outdoor scene is then calculated by solving a linear least-squares problem under color constraints on the sunlight and skylight. A high-quality environment map is then constructed, leading to realistic rendering results. We also explore shadow casting between real and virtual objects when the geometry of the shadow-casting objects is unknown: an efficient method is proposed to project complex shadows (such as tree shadows) from the ground of the real scene onto the surface of the virtual object using texture mapping. Finally, we present a unified scheme for compositing a real outdoor scene with virtual objects that ensures illumination consistency and shadow consistency. Experiments demonstrate the effectiveness and flexibility of our method.
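A minimal sketch of how such a six-parameter model (sun RGB plus sky RGB) could be solved by least squares with soft color constraints. The function name, the per-pixel basis terms S and K, and the constraint encoding are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def estimate_six_param_lighting(radiance, S, K, sun_chroma, sky_chroma, w=10.0):
    """radiance: (N,3) observed pixel values; S, K: (N,) per-pixel sun/sky
    basis terms assumed to come from an earlier stage; sun_chroma, sky_chroma:
    (3,) chromaticities normalized to sum to 1, used as soft constraints;
    w: weight of the constraint rows. Returns (sun_rgb, sky_rgb)."""
    N = radiance.shape[0]
    A_rows, b_rows = [], []
    for c in range(3):               # one data block per color channel
        A = np.zeros((N, 6))
        A[:, c] = S                  # sun_c multiplies S(i)
        A[:, 3 + c] = K              # sky_c multiplies K(i)
        A_rows.append(A)
        b_rows.append(radiance[:, c])
    # Soft color constraint: encode "x_c - chroma_c * (x_r + x_g + x_b) = 0"
    # for the sun triple and the sky triple.
    for chroma, off in ((sun_chroma, 0), (sky_chroma, 3)):
        C = np.zeros((3, 6))
        for c in range(3):
            C[c, off:off + 3] = -chroma[c]
            C[c, off + c] += 1.0
        A_rows.append(w * C)
        b_rows.append(np.zeros(3))
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]
```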

2.
In augmented reality, one of the key tasks in achieving convincing visual consistency between virtual objects and video scenes is maintaining coherent illumination along the whole sequence. As outdoor illumination depends heavily on the weather, the lighting condition may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations in videos captured with moving cameras. Our key idea is to estimate the relative intensities of sunlight and skylight via a sparse set of planar feature points extracted from each frame. To address inevitable feature misalignments, a set of constraints is introduced to select the most reliable points. Exploiting the spatial and temporal coherence of illumination, the relative intensities of sunlight and skylight are finally estimated through an optimization process. We validate our technique on a set of real-life videos and show that the results obtained with our estimates are visually coherent along the video sequences.
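One way to realize such per-frame estimation with a temporal-coherence term is sketched below. The per-feature sun and sky shading terms are assumed to be available from earlier stages, and the quadratic penalty toward the previous estimate is an illustrative choice, not the paper's optimization:

```python
import numpy as np

def track_sun_sky(frames, lam=0.5):
    """frames: iterable of (L, s, k), where L is (M,) observed feature
    luminance and s, k are (M,) per-feature sun/sky shading terms.
    lam weights temporal coherence toward the previous estimate.
    Returns a (T,2) array of per-frame (sun, sky) relative intensities."""
    prev, out = None, []
    for L, s, k in frames:
        A = np.column_stack([s, k])
        b = L.copy()
        if prev is not None:
            # quadratic penalty lam * ||x - prev||^2 as extra lstsq rows
            A = np.vstack([A, np.sqrt(lam) * np.eye(2)])
            b = np.concatenate([b, np.sqrt(lam) * prev])
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        x = np.maximum(x, 0.0)   # intensities are nonnegative
        out.append(x)
        prev = x
    return np.array(out)
```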

3.
In augmented reality, it is essential that rendered virtual objects are embedded harmoniously into the view of the background scene, and their appearance should reflect the changing lighting of the real scene to ensure illumination consistency. In this paper, we propose a novel method for solving for the sunlight and skylight basis images of a static outdoor scene from a time-lapse image sequence. We prove that the resulting basis images encapsulate the geometry and material reflectivity of the scene and correspond to the global illumination of the outdoor scene under unit-intensity sunlight and skylight, respectively. Our method is fully automatic. Unlike previous methods, it does not require that all objects in the scene be ideally diffuse or that the weather be overcast or sunny. During decomposition, we first detect shadowed pixels by analyzing the time-lapse curve of each pixel with k-means clustering; the basis images of sunlight and skylight are then solved by an iterative procedure built on the decomposition equation, and further optimized by exploiting their constraints and priors. Experimental results demonstrate the effectiveness and flexibility of the proposed method, which can also be applied to image understanding and compression.
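A minimal sketch of the shadow-detection step as described: two-means clustering on one pixel's time-lapse curve, labeling the darker cluster as shadowed. The initialization and iteration count are illustrative assumptions:

```python
import numpy as np

def shadow_times(curve, iters=30):
    """1-D two-means on one pixel's time-lapse intensity curve; returns a
    boolean array marking the frames in which the pixel is likely shadowed."""
    c = np.array([curve.min(), curve.max()], dtype=float)  # init centers
    for _ in range(iters):
        labels = np.abs(curve[:, None] - c[None, :]).argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                c[j] = curve[labels == j].mean()
    return labels == np.argmin(c)  # darker cluster = shadowed
```

Applying `shadow_times` to every pixel of a (T, H, W) luminance stack yields per-frame shadow masks for the subsequent decomposition.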

4.
Achieving convincing visual consistency between virtual objects and a real scene relies mainly on the lighting effects of the virtual-real composite scene. The problem becomes more challenging when lighting virtual objects in a single real image. Recently, scene understanding from a single image has made great progress, but the estimated geometry, semantic labels, and intrinsic components it provides are mostly coarse and not accurate enough to re-render the whole scene. However, carefully integrating this coarse information can yield an estimate of the illumination parameters of the real scene. We present a novel method that uses the coarse information estimated by current scene-understanding techniques to estimate the parameters of a ray-based illumination model for lighting virtual objects in a real scene. Our key idea is to estimate the illumination via a sparse set of small 3D surfaces selected using normal and semantic constraints; the coarse shading image obtained by intrinsic image decomposition is treated as the irradiance of the selected surfaces. The virtual objects are then illuminated with the estimated parameters. Experimental results show that our method can convincingly light virtual objects in a single real image, without any pre-recorded 3D geometry, reflectance data, illumination-acquisition equipment, or imaging information.
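A sketch of how a ray-based model could be fitted under these assumptions: sample light directions on the sky hemisphere and solve for nonnegative per-direction intensities so that Lambertian irradiance at the selected surfaces matches the coarse shading. The direction sampling, the Lambertian assumption, and all names are illustrative, not the paper's formulation:

```python
import numpy as np
from scipy.optimize import nnls

def estimate_ray_lights(normals, shading, light_dirs):
    """normals: (M,3) unit normals of the selected small surfaces;
    shading: (M,) coarse shading values treated as irradiance;
    light_dirs: (D,3) unit directions sampled on the sky hemisphere.
    Solves shading ~= A @ intensities with A[m,d] = max(n_m . l_d, 0),
    under nonnegative light intensities."""
    A = np.clip(normals @ light_dirs.T, 0.0, None)  # cosine foreshortening
    intensities, _ = nnls(A, shading)
    return intensities
```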

5.
The StOMP algorithm is well suited to large-scale underdetermined sparse vector estimation. It reduces computational complexity and has attractive asymptotic statistical properties; however, its estimation speed comes at the cost of accuracy. This paper suggests an improvement on the StOMP algorithm that is more efficient at finding sparse solutions to large-scale underdetermined problems. Compared with StOMP, the modified algorithm not only estimates the parameters of the distribution of the matched-filter coefficients more accurately, but also improves estimation accuracy for the sparse vector itself. A theoretical success boundary, based on a large-system limit, is provided for approximate recovery of the sparse vector by the modified algorithm, validating that it is more efficient than StOMP. Computations on simulated data show that, without a significant increase in computation time, the proposed algorithm greatly improves estimation accuracy.
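For reference, a sketch of the baseline StOMP stage structure that the paper modifies (not the authors' improved variant): at each stage, threshold the matched-filter output at t·σ with σ = ||r||/√n, merge the survivors into the support, and refit by least squares:

```python
import numpy as np

def stomp(A, y, t=2.5, stages=10):
    """Stagewise OMP sketch for y ~= A @ x with sparse x.
    t is the threshold multiplier; stages bounds the number of passes."""
    n, p = A.shape
    support = np.zeros(p, dtype=bool)
    x = np.zeros(p)
    r = y.copy()
    for _ in range(stages):
        c = A.T @ r                          # matched-filter coefficients
        sigma = np.linalg.norm(r) / np.sqrt(n)
        new = (np.abs(c) > t * sigma) & ~support   # hard threshold
        if not new.any():
            break
        support |= new
        # least-squares refit on the merged support
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(p)
        x[support] = x_s
        r = y - A @ x
    return x
```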

6.
Virtual objects can be visualized inside real objects using augmented reality (AR). This visualization is called AR X-ray because it gives the impression of seeing through the real object. In standard AR, virtual information is overlaid on top of the real world; to position a virtual object inside a real object, AR X-ray instead partially occludes the virtual object with visually important regions of the real object. In effect, the virtual object becomes less legible than when it is completely unoccluded. Legibility is an important consideration for various applications of AR X-ray. In this research, we explored legibility in two implementations of AR X-ray: edge-based and saliency-based. In our first experiment, we explored the tolerable amount of occlusion under which small virtual objects can still be comfortably distinguished. In our second experiment, we compared the edge-based and saliency-based methods when visualizing virtual objects inside various real objects, and benchmarked the legibility of both against alpha blending. We observed that users have varied preferences for the proper amount of occlusion cues in both methods. The partial occlusions generated by the edge-based and saliency-based methods need to be adjusted depending on the lighting condition and the texture complexity of the occluding object. In most cases, users identify objects faster with saliency-based AR X-ray than with edge-based AR X-ray. Insights from this research can be directly applied to the development of AR X-ray applications.
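A minimal sketch of the edge-based flavor of AR X-ray: draw the virtual object, then re-draw edges of the real occluder on top as occlusion cues. The specific operators, thresholds, and blending weight are assumptions, not the paper's implementation:

```python
import cv2
import numpy as np

def edge_xray_composite(real, virtual, virt_mask, edge_alpha=0.8):
    """real, virtual: HxWx3 uint8 BGR images; virt_mask: HxW bool marking
    where the virtual object is drawn. Returns the X-ray composite."""
    gray = cv2.cvtColor(real, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)                 # occlusion cues
    out = real.copy()
    out[virt_mask] = virtual[virt_mask]              # naive overlay first
    keep = (edges > 0) & virt_mask                   # occluder edges on top
    out[keep] = (edge_alpha * real[keep] +
                 (1 - edge_alpha) * out[keep]).astype(np.uint8)
    return out
```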

7.
Does visually perceived distance differ when objects are viewed in augmented reality (AR), as opposed to the real world? What are the differences? These questions are theoretically interesting, and the answers are important for the development of many tablet- and phone-based AR applications, including mobile AR navigation systems. This article presents a thorough literature review of distance-judgment experimental protocols and results from several areas of perceptual psychology. In addition to distance judgments of real and virtual objects, the review discusses previous work on measuring the geometry of virtual picture space and considers how that work might be relevant to tablet AR. The article then presents the results of two experiments. In each, observers bisected egocentric distances of 15 and 30 m in tablet-based AR and in the real world, in both an indoor corridor and an outdoor field environment. In AR, observers bisected the distances to virtual humans; in the real world, to real humans. This is the first reported research to directly compare distance judgments of real and virtual objects in a tablet AR system. Four key findings emerged: (1) in AR, observers expanded midpoint intervals at 15 m but compressed them at 30 m; (2) observers were accurate in the real world; (3) the environmental setting (corridor or open field) had no effect; (4) the picture-perception literature is important for understanding how distances are likely judged in tablet-based AR. Taken together, these findings suggest the depth distortions that developers should expect with mobile, and especially tablet-based, AR.

8.
Interactive virtual relighting of real scenes
Computer augmented reality (CAR) is a rapidly emerging field that enables users to mix real and virtual worlds. Our goal is to provide interactive tools for common illumination, i.e., light interactions between real and virtual objects, including shadows and relighting (modification of real and virtual light sources). In particular, we concentrate on virtually modifying the intensities of real light sources and on inserting virtual lights and objects into a real scene; such changes can be very useful for virtual lighting design and prototyping. To achieve this, we present a three-step method. We first reconstruct a simplified representation of the real scene geometry using semiautomatic vision-based techniques. With the simplified geometry, and by adapting recent hierarchical radiosity algorithms, we construct an approximation of the light exchanges in the real scene. We next perform a preprocessing step, based on the radiosity system, to create unoccluded illumination textures; these replace the original scene textures, which contained real lighting effects such as shadows cast by real lights. Each texture is then modulated by the ratio of its radiosity (which can be changed) to a display factor corresponding to the radiosity with occlusion ignored. Since our goal is a convincing relighting effect rather than an accurate solution, we add a heuristic correction process that yields visually plausible renderings. Finally, an interactive process computes the new illumination under modified real and virtual light intensities.
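The modulation step described above reduces to a per-texel multiply; a minimal sketch, with array shapes and the clamping policy as assumptions:

```python
import numpy as np

def relight_texture(unoccluded_tex, radiosity, display_factor, eps=1e-6):
    """Multiply the unoccluded illumination texture by B / D, where B is the
    (possibly modified) radiosity and D is the display factor (radiosity with
    occlusion ignored). Inputs are HxW or HxWx3 float arrays in linear units."""
    ratio = radiosity / np.maximum(display_factor, eps)  # avoid divide-by-zero
    return np.clip(unoccluded_tex * ratio, 0.0, None)
```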

9.

In this paper, we propose an approach for supporting the design and implementation of interactive and realistic Augmented Reality (AR). Despite advances in AR technology, most software applications still fail to deliver AR experiences in which virtual objects appear truly merged into the real setting. To alleviate this situation, we propose combining model-based AR techniques with the capabilities of current game engines to develop AR scenes in which virtual objects collide, are occluded, cast shadows and, in general, are integrated into the augmented environment more realistically. To evaluate the feasibility of the proposed approach, we extended an existing game platform named GREP with AR capabilities. The realism of the AR experiences produced with the software was assessed at an event in which more than 100 people played two AR games simultaneously.


10.
With the recent growth of augmented reality (AR) technologies, it is becoming important to study human perception of AR scenes. To determine whether users suffer more visual and operator fatigue when viewing virtual objects through optical see-through head-mounted displays (OST-HMDs) than when viewing real objects in the real world, we propose a comparative experiment comprising a virtual magic-cube task and a real magic-cube task. Subjective questionnaire (SQ) scores and critical flicker frequency (CFF) values were obtained from 18 participants. We use several electrooculogram (EOG) and heart rate variability (HRV) measures as objective indicators of visual and operator fatigue, and apply statistical analyses to the subjective and objective indicators in the two tasks. Our results suggest that participants were very likely to suffer more visual and operator fatigue when viewing virtual objects presented by the OST-HMD. In addition, the study provides hints that HRV and EOG measures could be used to explore how AR content induces visual and operator fatigue. Finally, three novel HRV measures are proposed as potential indicators of operator fatigue.
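For context, two standard time-domain HRV measures computed from RR intervals are sketched below; these are the conventional baselines (SDNN, RMSSD), not the paper's three novel measures:

```python
import numpy as np

def hrv_measures(rr_ms):
    """rr_ms: sequence of RR intervals in milliseconds (at least 2 values).
    SDNN: standard deviation of the intervals;
    RMSSD: root mean square of successive interval differences."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd
```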

11.
Augmented Reality (AR) provides new ways for situated visualization and human-computer interaction in physical environments. Current evaluation procedures for AR applications rely primarily on questionnaires and interviews, providing qualitative means to assess usability and task-solution strategies. Eye tracking extends these methodologies by providing indicators of visual attention to virtual and real elements in the environment. However, the analysis of viewing behavior, especially the comparison of multiple participants, is difficult to achieve in AR. Specifically, the definition of areas of interest (AOIs), often a prerequisite for such analysis, is cumbersome and tedious with existing approaches. To address this issue, we present a new visualization approach to define AOIs, label fixations, and investigate the resulting annotated scanpaths. Our approach combines automatic annotation of gaze on virtual objects with an image-based approach, which also considers spatial context, for the manual annotation of objects in the real world. Our results show that, with our approach, eye-tracking data from AR scenes can be annotated and analyzed flexibly with respect to data aspects and annotation strategies.
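A toy sketch of the automatic part of such annotation: assigning each fixation to an AOI by a hit test against the screen-space bounds of virtual objects. The data representation is an assumption, and real-world objects would still need the image-based manual pass the paper describes:

```python
def label_fixations(fixations, aois):
    """fixations: list of (x, y) gaze points in screen space;
    aois: dict mapping an AOI label to a bounding box (x0, y0, x1, y1),
    e.g., the projected bounds of a virtual object. Returns one label
    (or None) per fixation."""
    labels = []
    for x, y in fixations:
        hit = next((name for name, (x0, y0, x1, y1) in aois.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        labels.append(hit)
    return labels
```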

12.
A fundamental problem in optical see-through augmented reality (AR) is characterizing how it affects the perception of spatial layout and depth. This problem is important because AR system developers need both to place graphics in arbitrary spatial relationships with real-world objects and to know that users will perceive them in those relationships. Furthermore, AR makes possible enhanced perceptual techniques with no real-world equivalent, such as X-ray vision, where AR users are supposed to perceive graphics as located behind opaque surfaces. This paper reviews and discusses protocols for measuring egocentric depth judgments in both virtual and augmented environments, and discusses the well-known problem of depth underestimation in virtual environments. It then describes two experiments that measured egocentric depth judgments in AR. Experiment I used a perceptual matching protocol to measure AR depth judgments at medium- and far-field distances of 5 to 45 meters, studying the effects of upper versus lower visual-field location, the X-ray vision condition, and practice on the task. The findings include evidence for a switch in bias, from underestimating to overestimating the distance of AR-presented graphics, at ~23 meters, as well as a quantification of how much harder the X-ray vision condition makes the task. Experiment II used blind-walking and verbal-report protocols to measure AR depth judgments at distances of 3 to 7 meters, examining real-world objects, real-world objects seen through the AR display, virtual objects, and combined real and virtual objects. The results give evidence that the egocentric depth of AR objects is underestimated at these distances, but to a lesser degree than previously found for most virtual reality environments. The results are consistent with previous studies that have implicated a restricted field of view, combined with an inability for observers to scan the ground plane in a natural manner.

13.
This paper focuses on how virtual objects' shadows, as well as differences in alignment between virtual and real lighting, influence distance perception in optical see-through (OST) augmented reality (AR). Four hypotheses are proposed: (H1) participants underestimate distances in OST AR; (H2) virtual objects' shadows improve distance-judgment accuracy in OST AR; (H3) shadows with different realism levels influence distance perception differently; (H4) different levels of lighting misalignment between real and virtual lights influence distance perception differently. Two experiments were designed with an OST head-mounted display (HMD), the Microsoft HoloLens, in which participants had to match the position of a virtual object displayed in the OST-HMD with a real target; distance-judgment accuracy was recorded under the different shadow and lighting conditions. The results validate hypotheses H2 and H4 but, surprisingly, show no impact of the shape of virtual shadows on distance-judgment accuracy, rejecting hypothesis H3. Regarding hypothesis H1, we detected a trend toward underestimation; given the high variance of the data, more experiments are needed to confirm this result. The study also reveals that perceived distance errors and trial completion times increase with target distance.

14.
Handheld devices like smartphones and tablets have emerged as one of the most promising platforms for Augmented Reality (AR). The increased usage of these portable devices has enabled handheld AR applications to reach end users, so it is timely and important to consider the user experience of such applications seriously. AR visualizations for occluded objects enable an observer to look through objects. Such visualizations have predominantly been evaluated using head-worn displays (HWDs); handheld devices have rarely been used. Yet unless we gain a better understanding of the perceptual and cognitive effects of handheld AR systems, effective interfaces for handheld devices cannot be designed. Similarly, human perception of AR systems in outdoor environments, which exhibit a higher degree of variation than indoor environments, has been insufficiently explored. In this paper, we present insights acquired from five experiments we performed with handheld devices in outdoor locations, and we provide design recommendations for handheld AR systems equipped with visualizations for occluded objects. Our key conclusions are: (1) visualizations for occluded objects bring the depth perception of occluded objects close to that of non-occluded objects; (2) to support different scenarios, handheld AR systems should provide multiple, complementary visualizations for occluded objects; (3) visual clutter in AR visualizations reduces the visibility of occluded objects and degrades depth judgment, which can be improved by providing clear visibility of the occluded objects; (4) as in virtual reality interfaces, both egocentric and exocentric distances are underestimated in handheld AR; (5) depth perception will improve if handheld AR systems dynamically adapt their geometric field of view (GFOV) to match the display field of view (DFOV), as sketched below; (6) large handheld displays are hard to carry and use, but they enable users to better grasp the depth of multiple graphical objects presented simultaneously.
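A small illustration of recommendation (5): compute the display's physical field of view from its width and the user's viewing distance, and use it as the rendering camera's GFOV so rendered angles match perceived angles. The function and parameter names are assumptions:

```python
import math

def matched_gfov(display_width_m, viewing_distance_m):
    """Returns the horizontal DFOV in degrees for a display of the given
    physical width held at the given distance; assigning this value to the
    virtual camera's horizontal FOV matches GFOV to DFOV."""
    dfov = 2.0 * math.atan(display_width_m / (2.0 * viewing_distance_m))
    return math.degrees(dfov)

# Example: a 0.15 m wide phone held at 0.4 m gives roughly a 21-degree DFOV.
print(matched_gfov(0.15, 0.4))
```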

15.
Realistic images can now be computed at interactive frame rates for computer graphics applications. Meanwhile, High Dynamic Range (HDR) rendering is enjoying growing success in video games and virtual reality applications, as it improves image quality and the player's feeling of immersion. In this paper, we propose a new method, based on a physical lighting model, to compute HDR illumination in virtual environments in real time. Our method allows existing virtual environments to be reused as input and computes HDR images in photometric units. From these HDR images, displayable 8-bit images are then rendered with a tone-mapping operator and shown on a standard display device. The HDR computation and the tone mapping are implemented in OpenSceneGraph with pixel shaders. The lighting model, together with a perceptual tone mapping, improves the perceptual realism of the rendered images at low cost. The method is illustrated with a practical application in which the dynamic range of the virtual environment is a key rendering issue: night-time driving simulation.
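The abstract does not specify the tone-mapping operator; the global Reinhard operator is a common stand-in for mapping HDR photometric values to a displayable 8-bit image, sketched here under that assumption:

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """hdr: HxWx3 linear RGB in photometric units. Maps luminance to [0,1)
    with the global Reinhard operator, then quantizes to 8 bits."""
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    lw = np.exp(np.mean(np.log(lum + eps)))   # log-average scene luminance
    l = key * lum / lw                        # scale to the chosen key value
    ld = l / (1.0 + l)                        # compress to displayable range
    scale = ld / (lum + eps)
    return np.clip(hdr * scale[..., None] * 255.0, 0, 255).astype(np.uint8)
```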

16.
A survey of vision-based augmented reality technology
Augmented reality is one of the active research topics in computer vision. This paper systematically surveys the state of research on vision-based augmented reality technology. First, it gives an overview of current research in the field; it then elaborates on the emerging techniques, including 3D registration, camera calibration, tracking, virtual-real illumination consistency, and real-space modeling for AR; next, it discusses the problems in current research and the issues that remain to be solved; finally, it offers an outlook on further research in augmented reality.

17.
The irradiance volume
A major goal in computer graphics is realistic image synthesis. To this end, illumination methods have evolved from simple local shading models to physically based global illumination algorithms. Local illumination methods consider only the light-energy transfer between an emitter and a surface (direct lighting), while global methods account for light-energy interactions between all surfaces in an environment, considering both direct and indirect lighting. Even though the realistic effects that global illumination algorithms provide are frequently desirable, their computational expense is too great for many applications, and dynamic environments and scenes containing very large numbers of surfaces often pose problems for them. This article presents a different approach to calculating the global illumination of objects. Instead of striving for accuracy at the expense of performance, we rephrase the goal: to achieve a reasonable approximation with high performance. This places global illumination effects within reach of many applications in which visual appearance is more important than absolute numerical accuracy.
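A minimal sketch of the query side of an irradiance volume: irradiance precomputed at the nodes of a regular grid is interpolated trilinearly at a continuous position. The directional variation that the full method also stores is omitted here, and the data layout is an assumption:

```python
import numpy as np

def sample_irradiance(volume, p):
    """volume: (X,Y,Z,3) array of precomputed RGB irradiance at grid nodes;
    p: continuous position in grid coordinates. Returns interpolated RGB."""
    p = np.asarray(p, dtype=float)
    i0 = np.clip(np.floor(p).astype(int), 0, np.array(volume.shape[:3]) - 2)
    f = p - i0                                   # fractional offsets
    out = np.zeros(3)
    for dx in (0, 1):                            # trilinear interpolation
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * volume[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out
```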

18.
Image-based lighting has allowed the creation of photo-realistic computer-generated content, but it requires accurate capture of the illumination conditions, a task neither easy nor intuitive, especially for the average digital-photography enthusiast. This paper presents an approach to directly estimate an HDR light probe from a single LDR photograph, shot outdoors with a consumer camera, without specialized calibration targets or equipment. Our insight is to use a person's face as an outdoor light probe. To estimate HDR light probes from LDR faces, we use an inverse-rendering approach that employs data-driven priors to guide the estimation of realistic HDR lighting. We build compact, realistic representations of outdoor lighting both parametrically and in a data-driven way, by training a deep convolutional autoencoder on a large dataset of HDR sky environment maps. Our approach can recover high-frequency, extremely high-dynamic-range lighting environments. For quantitative evaluation of lighting-estimation and relighting accuracy, we also contribute a new database of face photographs with corresponding HDR light probes. We show that relighting objects with HDR light probes estimated by our method yields realistic results in a wide variety of settings.

19.
In traditional AR systems there is a clearly visible difference between virtual objects and the real scene, falling short of the requirement that the two blend seamlessly. We combine augmented reality with non-photorealistic rendering (NPR) to reduce this visual difference, and we study and implement a watercolor-style augmented reality system.
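A rough sketch of the underlying idea (not the paper's watercolor pipeline): apply one painterly NPR filter to the whole virtual-real composite so both layers share a rendering style, which masks photometric mismatch. OpenCV's stock stylization filters stand in for a watercolor shader, and the parameter values are assumptions:

```python
import cv2

def stylize_composite(frame):
    """frame: already-composited BGR image containing both the real scene
    and the overlaid virtual objects. Returns a painterly-stylized frame."""
    smoothed = cv2.edgePreservingFilter(frame, flags=1, sigma_s=60, sigma_r=0.4)
    return cv2.stylization(smoothed, sigma_s=60, sigma_r=0.45)
```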

20.
In interactive graphics applications with demanding real-time requirements, such as video games and augmented reality, complex environment lighting is widely used to illuminate virtual objects so that their lighting is consistent with the real scene and the virtual and real content blend together. We propose using the Cook-Torrance lighting model for virtual-real lighting computation. Using spherical harmonic basis functions, the lighting coefficients of a high-dynamic-range environment map are computed in real time, yielding a quadratic-polynomial expression of the environment map; a shader evaluates this expression to obtain the diffuse component, while the specular reflection is simulated with environment mapping, and all lighting computation is performed on the GPU. Experimental results show that, under dynamically changing complex environment lighting, the method computes the lighting of virtual objects in real time, renders at more than 30 frames per second, and produces strongly realistic results.
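The quadratic-polynomial form referred to here is presumably the Ramamoorthi-Hanrahan irradiance representation E(n) = ñᵀMñ with ñ = (x, y, z, 1); a sketch of building M from nine SH lighting coefficients and evaluating the diffuse term follows (in Python rather than shader code; the nested-dict layout of L is an assumption):

```python
import numpy as np

def irradiance_matrix(L):
    """L: SH lighting coefficients for one color channel, indexable as
    L[l][m] for l in 0..2, m in -l..l (e.g., a dict of dicts). Returns the
    4x4 matrix M of the quadratic irradiance form E(n) = nh^T M nh."""
    c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708
    return np.array([
        [c1 * L[2][2],  c1 * L[2][-2], c1 * L[2][1],  c2 * L[1][1]],
        [c1 * L[2][-2], -c1 * L[2][2], c1 * L[2][-1], c2 * L[1][-1]],
        [c1 * L[2][1],  c1 * L[2][-1], c3 * L[2][0],  c2 * L[1][0]],
        [c2 * L[1][1],  c2 * L[1][-1], c2 * L[1][0],
         c4 * L[0][0] - c5 * L[2][0]],
    ])

def diffuse_irradiance(M, n):
    """Evaluate E(n) for a unit normal n; this is the expression a fragment
    shader would compute per pixel for the diffuse term."""
    nh = np.append(n, 1.0)
    return nh @ M @ nh
```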
