Similar Literature
20 similar documents found (search time: 31 ms)
1.
Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques including methods for advanced image based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state-of-the-art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.

2.
Acquired 3D point clouds make possible quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, a standard relighting environment exploits global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene that may cover only part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings, or covered with low-quality dynamic range, by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess, against a perfect ground truth, the error introduced at every step. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction.
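The HDR expansion and GPU point-based global illumination pipeline summarized above is easiest to picture through its core gathering step: every HDR point sample acts as a small diffuse emitter whose contribution is accumulated at a shading point. The following minimal sketch (Python/NumPy) illustrates that idea only; the disk-emitter approximation, the function names, and the omission of visibility are assumptions for illustration, not the authors' GPU implementation.

```python
import numpy as np

def gather_irradiance(shade_pos, shade_nrm, pts, pt_nrm, pt_rad, pt_area):
    """Diffuse irradiance at one shading point, gathered from an HDR point cloud.

    pts     : (N, 3) point positions
    pt_nrm  : (N, 3) unit normals of the points
    pt_rad  : (N, 3) HDR radiance (RGB) stored per point
    pt_area : (N,)  surface area represented by each point sample
    Visibility is ignored in this sketch (no occlusion test).
    """
    d = pts - shade_pos                                  # vectors toward each emitter
    dist2 = np.maximum(np.einsum('ij,ij->i', d, d), 1e-8)
    w = d / np.sqrt(dist2)[:, None]                      # unit directions
    cos_r = np.clip(w @ shade_nrm, 0.0, None)            # receiver cosine
    cos_e = np.clip(-np.einsum('ij,ij->i', w, pt_nrm), 0.0, None)  # emitter cosine
    # Disk-emitter approximation: L * cos_r * cos_e * A / (pi * d^2)
    weight = cos_r * cos_e * pt_area / (np.pi * dist2)
    return (pt_rad * weight[:, None]).sum(axis=0)

# Example: irradiance at the origin from a synthetic HDR point cloud above it.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (1000, 3)) + np.array([0.0, 0.0, 2.0])
nrm = np.tile([0.0, 0.0, -1.0], (1000, 1))
rad = rng.uniform(0, 5, (1000, 3))                       # HDR radiance values
area = np.full(1000, 0.01)
E = gather_irradiance(np.zeros(3), np.array([0.0, 0.0, 1.0]), pts, nrm, rad, area)
```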

3.
This paper presents methods for photo-realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range (HDR) video camera system with position tracking. Light samples are rearranged into 4D incident light fields (ILF) suitable for direct use as illumination in renderings. Analysis of the captured data allows for estimation of the shape, position, and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.
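To make the notion of "rearranging light samples into a 4D incident light field" concrete, the sketch below bins captured radiance samples over a 2D capture plane and a 2D direction parameterization, and answers lookups from the nearest bin. The parameterization, bin resolutions, and class layout are illustrative assumptions, not the authors' data structure.

```python
import numpy as np

class IncidentLightField:
    """4D table: (u, v) position on a capture plane x (theta, phi) direction."""

    def __init__(self, nu=32, nv=32, nt=16, np_=32):
        self.shape = (nu, nv, nt, np_)
        self.sum = np.zeros(self.shape + (3,))   # accumulated RGB radiance per bin
        self.cnt = np.zeros(self.shape)          # number of samples per bin

    def _bin(self, u, v, dirn):
        """Map a normalized plane position (u, v in [0, 1)) and unit direction to a bin."""
        nu, nv, nt, np_ = self.shape
        theta = np.arccos(np.clip(dirn[2], -1.0, 1.0))       # polar angle in [0, pi]
        phi = np.arctan2(dirn[1], dirn[0]) % (2 * np.pi)      # azimuth in [0, 2*pi)
        iu = min(int(u * nu), nu - 1)
        iv = min(int(v * nv), nv - 1)
        it = min(int(theta / np.pi * nt), nt - 1)
        ip = min(int(phi / (2 * np.pi) * np_), np_ - 1)
        return iu, iv, it, ip

    def add_sample(self, u, v, dirn, radiance):
        b = self._bin(u, v, dirn)
        self.sum[b] += radiance
        self.cnt[b] += 1

    def lookup(self, u, v, dirn):
        b = self._bin(u, v, dirn)
        if self.cnt[b] == 0:
            return np.zeros(3)    # empty bin; a real system would interpolate neighbors
        return self.sum[b] / self.cnt[b]
```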

4.
俞益洲 《计算机学报》2000,23(9):898-898
Image-based modeling and rendering techniques greatly advanced the level of photorealism in computer graphics. They were originally proposed to accelerate rendering with the ability to vary viewpoint only. My work in this area focused on capturing and modeling real scenes for novel visual interactions such as varying lighting condition and scene configuration in addition to viewpoint. This work can lead to applications such as virtual navigation of a real scene, interaction with the scene, novel scene c…

5.
Image-based crowd rendering   (Cited by: 4; self-citations: 0; by others: 4)
Populated virtual urban environments are important in many applications, from urban planning to entertainment. At the current stage of technology, users can interactively navigate through complex, polygon-based scenes rendered with sophisticated lighting effects and high-quality antialiasing techniques. As a result, animated characters (or agents) that users can interact with are also becoming increasingly common. However, rendering crowded scenes with thousands of different animated virtual people in real time is still challenging. To address this, we developed an image-based rendering approach for displaying multiple avatars. We take advantage of the properties of the urban environment and the way a viewer and the avatars move within it to produce fast rendering, based on positional and directional discretization. To display many different individual people at interactive frame rates, we combined texture compression with multipass rendering.
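The "positional and directional discretization" mentioned above can be pictured as selecting, each frame, one of a small set of pre-rendered impostor images of a character according to the direction it is seen from. The sketch below shows only that selection step; the number of sampled directions and the function interface are assumed for illustration.

```python
import numpy as np

N_DIRECTIONS = 16          # pre-rendered views around the character (assumed)

def impostor_index(avatar_pos, avatar_heading, camera_pos):
    """Pick which pre-rendered view of an avatar to display.

    avatar_heading : avatar facing angle in radians (world space, 2D ground plane)
    Returns an index into an atlas of N_DIRECTIONS pre-rendered images.
    """
    to_cam = np.asarray(camera_pos, dtype=float) - np.asarray(avatar_pos, dtype=float)
    view_angle = np.arctan2(to_cam[1], to_cam[0])        # direction the avatar is seen from
    rel = (view_angle - avatar_heading) % (2 * np.pi)     # relative to the avatar's facing
    return int(np.round(rel / (2 * np.pi) * N_DIRECTIONS)) % N_DIRECTIONS

# Example: a camera due east of an avatar facing north selects a side view.
idx = impostor_index([0.0, 0.0], np.pi / 2, [10.0, 0.0])
```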

6.
Light field display (LFD) is considered a promising technology for reconstructing the distribution of light rays of a real 3D scene; it approximates the original light field of the displayed objects with all the depth cues of human vision, including binocular disparity, motion parallax, color cues, and correct occlusion relationships. Currently, computer-generated content is widely used in LFD systems, so rich 3D content can be provided. This paper first introduces applications of light field technologies in display systems. It then surveys virtual stereo content rendering techniques and their application scenarios, pointing out their pros and cons. Moreover, according to the different characteristics of light field systems, the coding and correction algorithms used in virtual stereo content rendering are reviewed. The discussion shows that many problems remain in existing rendering techniques for LFD; new rendering algorithms are needed to solve the real-time light-field rendering problem for large-scale virtual scenes.

7.
VECW: A System for Constructing and Walking Through Virtual Environments   (Cited by: 7; self-citations: 0; by others: 7)
Representing the real world faithfully in a computer is an important research direction in computer graphics, and hybrid approaches that combine geometric modeling with image-based rendering are a promising way to do it. Building on an implementation of previously proposed algorithms for extracting geometric models of buildings, together with the corresponding texture maps, from photographs of the buildings, this paper presents a system for constructing and walking through virtual environments, called VECW for short.

8.
We present an application of interactive global illumination and spatially augmented reality to architectural daylight modeling that allows designers to explore alternative designs and new technologies for improving the sustainability of their buildings. Images of a model in the real world, captured by a camera above the scene, are processed to construct a virtual 3D model. To achieve interactive rendering rates, we use a hybrid rendering technique, leveraging radiosity to simulate the interreflectance between diffuse patches and shadow volumes to generate per-pixel direct illumination. The rendered images are then projected on the real model by four calibrated projectors to help users study the daylighting illumination. The virtual heliodon is a physical design environment in which multiple designers, a designer and a client, or a teacher and students can gather to experience animated visualizations of the natural illumination within a proposed design by controlling the time of day, season, and climate. Furthermore, participants may interactively redesign the geometry and materials of the space by manipulating physical design elements and see the updated lighting simulation.
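For the diffuse interreflection part of the hybrid renderer described above, the classical radiosity formulation B = E + ρ(F·B) applies, iterated until convergence. The sketch below is a generic Jacobi-style gathering solver over a precomputed form-factor matrix, given only to illustrate the idea; it is not the paper's GPU implementation, and the form factors are assumed to be supplied.

```python
import numpy as np

def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Jacobi iteration of the radiosity equation B = E + rho * (F @ B).

    emission     : (P, 3) emitted radiosity per patch (RGB)
    reflectance  : (P, 3) diffuse reflectance per patch
    form_factors : (P, P) form factor F[i, j] from patch i to patch j (assumed given)
    """
    B = emission.copy()
    for _ in range(iterations):
        B = emission + reflectance * (form_factors @ B)   # gather light from all patches
    return B
```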

9.
The shading on curved surfaces is a cue to shape. Current computer vision methods for analyzing shading use physically unrealistic models, have serious mathematical problems, cannot exploit geometric information if it is available, and are not reliable in practice. We introduce a novel method of accounting for variations in irradiance resulting from interreflections, complex sources and the like. Our approach uses a spatially varying source model with a local shading model. Fast spatial variation in the source is penalised, consistent with the rendering community's insight that interreflections are spatially slow. This yields a physically plausible shading model. Because modern cameras can make accurate reports of observed radiance, our method compels the reconstructed surface to have shading exactly consistent with that of the image. For inference, we use a variational formulation, with a selection of regularization terms which guarantee that a solution exists. Our method is evaluated on physically accurate renderings of virtual objects, and on images of real scenes, for a variety of different kinds of boundary condition. Reconstructions for single sources compare well with photometric stereo reconstructions and with ground truth.

10.
Real-time modeling of moving vehicles and simulation of their dynamic behavior are important parts of a racing-car virtual environment. This paper designs and implements a real-time racing game engine. For vehicle modeling in the engine, it proposes a real-time, realistic racing-car modeling method based on multibody dynamics theory: the actual forces acting on the car are simplified, and the car's various dynamic behaviors are simulated together with realistic rendering of its motion. The engine also implements collision detection, sound-effect processing, and real-time rendering of the virtual environment, enhancing the user's sense of immersion while navigating the virtual environment.

11.
A Panorama-Based Virtual Environment Walkthrough System   (Cited by: 5; self-citations: 0; by others: 5)
There are two approaches to building virtual reality systems. Traditionally, modeling and rendering are done with 3D graphics methods, which require tedious modeling work and expensive dedicated graphics hardware, and 3D models have difficulty representing natural scenery convincingly. Image-based rendering is a newer way to realize virtual reality systems; it overcomes these shortcomings of the 3D graphics approach and has seen increasingly wide use in recent years. This paper discusses the implementation of an image-based virtual environment walkthrough system, analyzes the model behind such systems, and describes the key techniques in the implementation, including camera calibration, image stitching, and real-time image warping.
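The "real-time image warping" step of such a panorama walkthrough amounts to re-projecting a portion of the cylindrical panorama onto a planar view for the current heading and field of view. The sketch below performs that warp with nearest-neighbor sampling; the vertical coverage of the panorama and the image dimensions are assumptions for illustration.

```python
import numpy as np

def view_from_cylindrical_panorama(pano, heading, fov_x, out_w=640, out_h=480):
    """Extract a planar view from a cylindrical panorama (nearest-neighbor sampling).

    pano    : (H, W, 3) cylindrical panorama, columns span an azimuth of 2*pi
    heading : viewing azimuth in radians
    fov_x   : horizontal field of view in radians
    """
    H, W, _ = pano.shape
    f = (out_w / 2) / np.tan(fov_x / 2)                   # focal length in pixels
    xs = np.arange(out_w) - out_w / 2
    ys = np.arange(out_h) - out_h / 2
    X, Y = np.meshgrid(xs, ys)
    azimuth = (heading + np.arctan2(X, f)) % (2 * np.pi)  # per-pixel viewing azimuth
    h = Y / np.hypot(X, f)                                # height on the unit cylinder
    u = (azimuth / (2 * np.pi) * W).astype(int) % W
    # Assume the panorama's rows cover cylinder heights in [-1, 1] (about +/-45 degrees).
    v = np.clip(((h + 1) / 2 * (H - 1)).astype(int), 0, H - 1)
    return pano[v, u]

# Example: a 60-degree view looking "east" out of a synthetic panorama.
pano = np.random.default_rng(0).uniform(0, 1, (256, 1024, 3))
view = view_from_cylindrical_panorama(pano, heading=np.pi / 2, fov_x=np.radians(60))
```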

12.
Monte-Carlo path tracing techniques can generate stunning visualizations of medical volumetric data. In a clinical context, such renderings have turned out to be valuable for communication, education, and diagnosis. Because a large number of computationally expensive lighting samples is required to converge to a smooth result, progressive rendering is the only option for interactive settings: low-sampled, noisy images are shown while the user explores the data, and as soon as the camera is at rest the view is progressively refined. During interaction, the visual quality is low, which strongly impedes the user's experience. Even worse, when a data set is explored in virtual reality, the camera is never at rest, leading to constantly low image quality and strong flickering. In this work we present an approach to bring volumetric Monte-Carlo path tracing to the interactive domain by reusing samples over time. To this end, we transfer the idea of temporal antialiasing from surface rendering to volume rendering. We show how to reproject volumetric ray samples even though they cannot be pinned to a particular 3D position, present an improved weighting scheme that makes longer history trails possible, and define an error accumulation method that downweights less appropriate older samples. Furthermore, we exploit reprojection information to adaptively determine the number of newly generated path tracing samples for each individual pixel. Our approach is designed for static, medical data with both volumetric and surface-like structures. It achieves good-quality volumetric Monte-Carlo renderings with little noise, and is also usable in a VR context.
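Per pixel, the temporal reuse described above reduces to an exponential moving average between the reprojected history and the new path-traced sample, with the blend weight shrinking as the history grows and recovering when the reprojection looks unreliable. The sketch below shows only that update rule, with an illustrative confidence-based down-weighting; the constants and names are assumptions, not the paper's exact weighting scheme.

```python
import numpy as np

def temporal_accumulate(history, history_len, sample, reproj_error,
                        max_history=64.0, error_scale=4.0):
    """Blend a reprojected history color with a new Monte-Carlo sample for one pixel.

    history      : (3,) reprojected color from the previous frame
    history_len  : effective number of samples carried by the history
    sample       : (3,) new (noisy) path-traced color for this pixel
    reproj_error : scalar mismatch estimate for the reprojection (0 = perfect)
    """
    # Older samples are down-weighted when the reprojection looks unreliable.
    confidence = np.exp(-error_scale * reproj_error)
    eff_len = min(history_len * confidence, max_history)
    alpha = 1.0 / (eff_len + 1.0)                 # weight of the new sample
    out = (1.0 - alpha) * history + alpha * sample
    return out, eff_len + 1.0                     # color and updated history length

# Example: a reliable reprojection keeps most of the history.
color, length = temporal_accumulate(np.array([0.5, 0.5, 0.5]), 32.0,
                                    np.array([0.9, 0.1, 0.1]), reproj_error=0.01)
```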

13.
杨兵  李凤霞  战守义 《计算机应用》2005,25(10):2362-2364
To address the lack of real-time shadow rendering in visual simulation systems built on high-level platforms such as Vega, an improved method is proposed that adds shadow rendering to the scene. Real-time shadow generation algorithms for virtual environments and the extension mechanisms provided by Vega are studied; the shadow mapping algorithm is adopted, shadows are rendered with OpenGL, and the shadow rendering is integrated into the scene through the callback mechanism provided by the Vega platform, making the generated scene look more realistic.
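The shadow mapping algorithm adopted above boils down to rendering a depth map from the light and then comparing each visible point's light-space depth against it. The sketch below shows only the per-point comparison, assuming the light's view-projection matrix and depth map are already available; the OpenGL rendering passes and the Vega callback integration are not reproduced.

```python
import numpy as np

def in_shadow(world_pos, light_viewproj, shadow_map, bias=1e-3):
    """Classic shadow-map test for one world-space point.

    light_viewproj : (4, 4) light view-projection matrix (clip space in [-1, 1])
    shadow_map     : (H, W) depths rendered from the light, remapped to [0, 1]
    """
    p = light_viewproj @ np.append(np.asarray(world_pos, dtype=float), 1.0)
    ndc = p[:3] / p[3]                       # normalized device coordinates
    uvz = ndc * 0.5 + 0.5                    # remap x, y, depth to [0, 1]
    if np.any(uvz < 0.0) or np.any(uvz > 1.0):
        return False                         # outside the light frustum: treat as lit
    H, W = shadow_map.shape
    x = min(int(uvz[0] * W), W - 1)
    y = min(int(uvz[1] * H), H - 1)
    # The point is shadowed if something closer to the light was stored in the map.
    return uvz[2] - bias > shadow_map[y, x]
```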

14.
Kwak Suhwan, Choe Jongin, Seo Sanghyun. Multimedia Tools and Applications, 2020, 79(23-24): 16141-16154

Rapid developments in augmented reality (AR) and related technologies have led to increasing interest in immersive content. AR environments are created by combining virtual 3D models with a real-world video background. It is important to merge these two worlds seamlessly if users are to enjoy AR applications but, all too often, the illumination and shading of virtual objects does not take the real-world lighting conditions into account or does not match that of nearby real objects. In addition, visual artifacts produced when blending real and virtual objects further limit realism. In this paper, we propose a harmonic rendering technique that minimizes the visual discrepancy between the real and virtual environments to maintain visual coherence in outdoor AR. To do this, we introduce a method of estimating and approximating the Sun's position and the sunlight direction to estimate the real sunlight intensity, as this is the most significant illumination source in outdoor AR; this provides a more realistic lighting environment for such content and reduces the mismatch between real and virtual objects.

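A common way to approximate the Sun's direction for outdoor AR, in the spirit of the estimation step described above, is to derive solar elevation and azimuth from the date, local solar time, and latitude using standard astronomical approximations. The sketch below uses the simple declination and hour-angle formulas; it is an illustrative approximation rather than the authors' estimator, and it ignores the equation of time and atmospheric refraction.

```python
import numpy as np

def sun_direction(day_of_year, solar_hour, latitude_deg):
    """Approximate solar elevation/azimuth and a unit light direction.

    day_of_year : 1..365
    solar_hour  : local solar time in hours (12 = solar noon)
    Returns (elevation_rad, azimuth_rad, direction) with direction in an
    East-North-Up frame, pointing from the ground toward the Sun.
    """
    lat = np.radians(latitude_deg)
    # Solar declination (Cooper's approximation).
    decl = np.radians(23.45) * np.sin(2 * np.pi * (284 + day_of_year) / 365.0)
    hour_angle = np.radians(15.0 * (solar_hour - 12.0))
    sin_elev = (np.sin(lat) * np.sin(decl)
                + np.cos(lat) * np.cos(decl) * np.cos(hour_angle))
    elev = np.arcsin(np.clip(sin_elev, -1.0, 1.0))
    # Azimuth measured clockwise from north (0 = north, pi/2 = east).
    azim = np.arctan2(np.sin(hour_angle),
                      np.cos(hour_angle) * np.sin(lat) - np.tan(decl) * np.cos(lat))
    azim = (azim + np.pi) % (2 * np.pi)
    direction = np.array([np.sin(azim) * np.cos(elev),    # east component
                          np.cos(azim) * np.cos(elev),    # north component
                          np.sin(elev)])                  # up component
    return elev, azim, direction

# Example: mid-afternoon in late June at 37 degrees north latitude.
elev, azim, d = sun_direction(172, 15.0, 37.0)
```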

15.
Image-Based Lighting   (Cited by: 1; self-citations: 0; by others: 1)
Image-based lighting is a method of illuminating real or computer-generated scenes and objects with images of real-world illumination. This paper introduces the basic methods and steps of image-based lighting: a 3D object model is built in the LightWave 3D graphics software, a high dynamic range image is added as the lighting environment of the scene, and by combining luminous flux with the high dynamic range image, realistic lighting and environment-mapped images are obtained.
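Behind such an image-based lighting workflow, the basic numerical step is integrating the HDR environment image against a cosine lobe to obtain diffuse irradiance for a given surface normal. The sketch below does that integration by brute force over a latitude-longitude HDR map; the map layout and solid-angle weighting are standard, but the code is an illustrative sketch, not any particular software's implementation.

```python
import numpy as np

def diffuse_irradiance(env, normal):
    """Cosine-weighted integral of a latitude-longitude HDR environment map.

    env    : (H, W, 3) HDR radiance map; rows span polar angle theta in [0, pi],
             columns span azimuth phi in [0, 2*pi)
    normal : (3,) unit surface normal
    """
    H, W, _ = env.shape
    theta = (np.arange(H) + 0.5) / H * np.pi
    phi = (np.arange(W) + 0.5) / W * 2 * np.pi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(T) * np.cos(P),
                     np.sin(T) * np.sin(P),
                     np.cos(T)], axis=-1)                     # (H, W, 3) unit directions
    cosine = np.clip(dirs @ normal, 0.0, None)                # clamp to upper hemisphere
    solid_angle = np.sin(T) * (np.pi / H) * (2 * np.pi / W)   # per-texel solid angle
    weight = (cosine * solid_angle)[..., None]
    return (env * weight).sum(axis=(0, 1))                    # RGB irradiance

# Example: a constant white environment of radiance 1 gives irradiance close to pi.
env = np.ones((64, 128, 3))
E = diffuse_irradiance(env, np.array([0.0, 0.0, 1.0]))
```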

16.
This paper proposes a new methodology for measuring the error of unbiased physically based rendering algorithms. The current state of the art includes mean squared error (MSE) based metrics and visual comparisons of equal-time renderings of competing algorithms. Neither is satisfying as MSE does not describe behavior and can exhibit significant variance, and visual comparisons are inherently subjective. Our contribution is two-fold: First, we propose to compute many short renderings instead of a single long run and use the short renderings to estimate MSE expectation and variance as well as per-pixel standard deviation. An algorithm that achieves good results in most runs, but with occasional outliers is essentially unreliable, which we wish to quantify numerically. We use per-pixel standard deviation to identify problematic lighting effects of rendering algorithms. The second contribution is the error spectrum ensemble (ESE), a tool for measuring the distribution of error over frequencies. The ESE serves two purposes: It reveals correlation between pixels and can be used to detect outliers, which offset the amount of error substantially.
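The first contribution above, estimating the expectation and variance of MSE from many short runs instead of one long run, is straightforward to state in code. The sketch below assumes a stack of independently rendered short images and a converged reference; it computes per-run MSE, its mean and variance across runs, and a per-pixel standard deviation map of the kind used to flag unreliable lighting effects. Variable names are illustrative.

```python
import numpy as np

def error_statistics(short_runs, reference):
    """Error statistics for an unbiased renderer from many short, independent runs.

    short_runs : (R, H, W, 3) stack of R short renderings with identical sample budgets
    reference  : (H, W, 3) converged reference image
    """
    diff = short_runs - reference[None, ...]
    per_run_mse = (diff ** 2).mean(axis=(1, 2, 3))        # one MSE value per run
    mse_expectation = per_run_mse.mean()
    mse_variance = per_run_mse.var(ddof=1)                # spread across runs => reliability
    per_pixel_std = short_runs.std(axis=0, ddof=1).mean(axis=-1)   # (H, W) map
    return mse_expectation, mse_variance, per_pixel_std

# Example with synthetic data: 16 noisy runs around a flat grey reference.
rng = np.random.default_rng(1)
ref = np.full((32, 32, 3), 0.5)
runs = ref[None] + rng.normal(0.0, 0.1, (16, 32, 32, 3))
mse_mean, mse_var, std_map = error_statistics(runs, ref)
```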

17.
Acquisition, Synthesis, and Rendering of Bidirectional Texture Functions   (Cited by: 1; self-citations: 1; by others: 0)
One of the main challenges in computer graphics is still the realistic rendering of complex materials such as fabric or skin. The difficulty arises from the complex meso-structure and reflectance behavior that define the unique look and feel of a material. A wide class of such realistic materials can be described as a 2D texture under varying light and view directions, namely the Bidirectional Texture Function (BTF). Since an easy and general method for modeling BTFs is not available, current research concentrates on image-based methods, which rely on measured BTFs (acquired real-world data) in combination with appropriate synthesis methods. Recent results have shown that this approach greatly improves the visual quality of rendered surfaces and therefore the quality of applications such as virtual prototyping. This state-of-the-art report (STAR) presents in detail the techniques for the main tasks involved in producing photo-realistic renderings using measured BTFs.

18.
A Survey of Interactive Rendering Techniques for Large-Scale Complex Scenes   (Cited by: 2; self-citations: 0; by others: 2)
Fast rendering of large-scale complex scenes is an underlying enabling technology for many important applications such as virtual reality, real-time simulation, and 3D interactive design, and it is also a fundamental problem faced by many research fields. With the rapid development of 3D scanning and modeling technology in recent years, the scale and complexity of 3D scenes keep growing; interactive rendering of large-scale complex scenes has therefore received more and more attention from researchers at home and abroad and has produced a series of research results. This paper first briefly reviews the research progress on interactive rendering of large-scale complex scenes. It then summarizes and analyzes the main key techniques involved, compares and classifies representative rendering systems at home and abroad, describes the main research content of interactive rendering of large-scale complex scenes, and gives the basic components and a general framework that an interactive rendering system for large-scale complex scenes should contain. Finally, it discusses directions for future development.

19.
While many methods exist for simulating diffuse light inter-reflections, relatively few of them are adapted to dynamic scenes. Despite approximations made to the formal rendering equation, managing dynamic environments at interactive or real-time frame rates still remains one of the most challenging problems. This paper presents a lighting simulation system based on photon streaming, performed continuously on the central processing unit. The power corresponding to each photon impact is accumulated onto predefined points, called virtual light accumulators (or VLA). VLA are used during the rendering phase as virtual light sources. We also introduce a priority management system that automatically adapts to abrupt changes during lighting simulation (for instance due to visibility changes or fast object motion). Our system naturally benefits from multi-core architectures. The rendering process is performed in real time on a graphics processing unit, independently from the lighting simulation process. As shown in the results, our method provides high frame rates for dynamic scenes, with moving viewpoint, objects, and light sources.
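The central idea above, streaming photons and accumulating their power onto a fixed set of virtual light accumulators that later act as virtual point lights, can be sketched in a few lines. The code below assigns each photon impact to its nearest accumulator and then shades a point by treating the accumulators as diffuse virtual lights; the nearest-neighbor assignment and the shading model are illustrative assumptions (occlusion is ignored), and the paper's priority management is not reproduced.

```python
import numpy as np

def accumulate_photons(vla_pos, photon_pos, photon_power):
    """Accumulate photon power onto the nearest virtual light accumulator (VLA)."""
    vla_power = np.zeros((len(vla_pos), 3))
    for p, power in zip(photon_pos, photon_power):
        nearest = np.argmin(((vla_pos - p) ** 2).sum(axis=1))
        vla_power[nearest] += power
    return vla_power

def shade_from_vlas(x, n, albedo, vla_pos, vla_power):
    """Diffuse shading of point x (normal n), treating each VLA as a virtual point light."""
    d = vla_pos - x
    dist2 = np.maximum((d ** 2).sum(axis=1), 1e-6)
    cos_r = np.clip((d / np.sqrt(dist2)[:, None]) @ n, 0.0, None)
    return albedo / np.pi * (vla_power * (cos_r / dist2)[:, None]).sum(axis=0)

# Example: a handful of photons collapsed onto 4 accumulators, then used for shading.
rng = np.random.default_rng(2)
vlas = rng.uniform(-1, 1, (4, 3))
power = accumulate_photons(vlas, rng.uniform(-1, 1, (100, 3)), np.full((100, 3), 0.01))
color = shade_from_vlas(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                        np.array([0.8, 0.8, 0.8]), vlas, power)
```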

20.
We present a novel technique for capturing spatially or temporally resolved light probe sequences and using them for image-based lighting. For this purpose we have designed and built a real-time light probe, a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates while being moved through a scene. The real-time light probe uses a digital imaging system which we have programmed to capture high-quality, photometrically accurate color images of 512×512 pixels with a dynamic range of 10,000,000:1 at 25 frames per second. By tracking the position and orientation of the light probe, it is possible to transform each light probe into a common frame of reference in world coordinates, and to map each point and direction in space along the path of motion to a particular frame and pixel in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real-world lighting, first using traditional image-based lighting methods with temporally varying light probe illumination, and second using an extension that handles spatially varying lighting conditions across large objects and object motion along an extended path.
