Similar Documents
1.
Stereo Light Probe
In this paper we present a practical, simple and robust method to acquire the spatially‐varying illumination of a real‐world scene. The basic idea of the proposed method is to acquire the radiance distribution of the scene using high‐dynamic range images of two reflective balls. The use of two light probes instead of a single one allows us to estimate not only the direction and intensity of the light sources, but also their actual position in space. To robustly achieve this goal we first rectify the two input spherical images; then, using a region‐based stereo matching algorithm, we establish correspondences and compute the position of each light. The radiance distribution so obtained can be used for augmented reality applications, photo‐realistic rendering and accurate estimation of reflectance properties. The accuracy and effectiveness of the method have been tested by measuring the computed light positions and by rendering a synthetic version of a real object in the same scene. A comparison with a standard method that uses a simple spherical lighting environment is also shown.
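The position-estimation step above amounts to triangulating each light from the two probe viewpoints. As an illustration only (the paper's spherical rectification and region-based stereo matching are not reproduced here), the sketch below recovers a light's 3D position as the midpoint of the common perpendicular between the two viewing rays:

```python
def closest_point_between_rays(o1, d1, o2, d2):
    """Midpoint of the common perpendicular between rays o1 + t*d1 and o2 + s*d2.

    Solves for the t, s minimizing |(o1 + t*d1) - (o2 + s*d2)|^2 via the
    normal equations. Assumes the rays are not parallel."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, k): return [x * k for x in a]

    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero when the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, t))     # closest point on ray 1
    p2 = add(o2, scale(d2, s))     # closest point on ray 2
    return scale(add(p1, p2), 0.5)
```

When the two direction estimates are exact, the rays intersect and the midpoint is the light position itself; with noisy stereo correspondences, the midpoint is the least-squares compromise.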

2.
We propose a clustered instant radiosity method to meet the demand, in augmented reality and related fields, for a highly realistic global illumination algorithm that renders at interactive rates. To this end, the traditional instant radiosity method is improved: the large number of virtual point lights used to represent indirect illumination are clustered into a small number of virtual area lights, and a real-time soft shadow algorithm is used to evaluate visibility quickly. The GPU is also used to accelerate scene rendering. Experimental results show that the algorithm supports fully dynamic scenes in augmented reality environments and achieves real-time frame rates while maintaining good visual quality.
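The clustering step can be illustrated with a plain k-means pass over VPL positions; this is only a sketch of the general idea (the paper clusters VPLs into virtual *area* lights and evaluates visibility with real-time soft shadows, neither of which is reproduced here). Each representative light keeps the summed power of its members, so total emitted energy is conserved:

```python
import random

def cluster_vpls(vpls, k, iters=20, seed=0):
    """Cluster virtual point lights, given as (position, power) pairs, into k
    representative lights via k-means on position. Each cluster keeps the
    summed power of its members so energy is conserved."""
    rng = random.Random(seed)
    centers = [list(p) for p, _ in rng.sample(vpls, k)]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p, power in vpls:
            # Assign each VPL to its nearest cluster center.
            j = min(range(k),
                    key=lambda m: sum((a - b) ** 2 for a, b in zip(p, centers[m])))
            groups[j].append((p, power))
        for j, g in enumerate(groups):
            if g:   # move each center to the mean of its members
                centers[j] = [sum(p[d] for p, _ in g) / len(g) for d in range(3)]
    return [(tuple(centers[j]), sum(w for _, w in g))
            for j, g in enumerate(groups) if g]
```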

3.
Research on Texture Mapping Techniques in Virtual Real-Scene Spaces
Building on traditional texture mapping, this paper analyzes the main characteristics of texture mapping in virtual real-scene spaces. By combining graphics and images, and taking photographs as input, a freely navigable virtual environment is built, and some explorations into improving image-based rendering are made.

4.
We propose a fast global illumination method for glossy surfaces based on Monte Carlo integration and hemispherical harmonics. The method samples radiance at points on the glossy surface, stores the samples in a cache, and interpolates the remaining points from them. To speed up computation, the incident radiance at a surface is projected onto hemispherical harmonics, and the surface's bidirectional reflectance distribution function is expressed as a Cartesian product over two hemispheres. Interpolation is gradient-based, and a simple method is used to compute the gradient at a point. The method greatly accelerates global illumination computation, with broad application prospects in lighting engineering, high-quality animation, and virtual reality.
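The caching scheme above rests on Monte Carlo estimation of hemispherical integrals. A minimal sketch of that underlying estimator (not the paper's hemispherical-harmonics projection or gradient interpolation): with cosine-weighted sampling of the hemisphere (pdf = cos θ / π), each incident-radiance sample contributes π·L to the irradiance estimate:

```python
import math
import random

def irradiance_mc(radiance, n, seed=0):
    """Monte Carlo estimate of irradiance E = ∫ L(ω) cosθ dω over the upper
    hemisphere. Uses cosine-weighted sampling (pdf = cosθ/π), so the cosθ
    factor cancels and each sample contributes π * L(ω)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u1, u2 = rng.random(), rng.random()
        # Cosine-weighted hemisphere sample (Malley's method: uniform disk,
        # projected up onto the hemisphere).
        r, phi = math.sqrt(u1), 2.0 * math.pi * u2
        x, y = r * math.cos(phi), r * math.sin(phi)
        z = math.sqrt(max(0.0, 1.0 - u1))   # z = cosθ
        total += math.pi * radiance((x, y, z))
    return total / n
```

For a constant radiance L over the hemisphere the estimator returns π·L with zero variance, which is a convenient sanity check.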

5.
A huge number of outdoor scene photographs are currently available on the Internet. They cover a wide range of content, sample the same scene from many viewpoints and under many lighting conditions, and are very cheap to obtain. How to exploit the rich scene information contained in these photographs to construct all kinds of realistic virtual scenes quickly and conveniently is one of the new research topics that the rapid development of the Internet has brought to virtual reality, computer graphics, and computer vision. This paper analyzes and summarizes recent progress, at home and abroad, in geometric modeling of real scenes, natural illumination modeling, and material reflectance modeling from Internet images, and offers some views on future trends.

6.
In this paper, we present a simple and robust mixed reality (MR) framework that allows for real-time interaction with virtual humans in mixed reality environments under consistent illumination. We look at three crucial parts of this system: interaction, animation and global illumination of virtual humans for an integrated and enhanced presence. The interaction system comprises a dialogue module, which is interfaced with a speech recognition and synthesis system. Next to speech output, the dialogue system generates face and body motions, which are in turn managed by the virtual human animation layer. Our fast animation engine can handle various types of motions, such as normal key-frame animations, or motions that are generated on-the-fly by adapting previously recorded clips. Real-time idle motions are an example of the latter category. All these different motions are generated and blended on-line, resulting in flexible and realistic animation. Our robust rendering method operates in accordance with the animation layer and is based on a precomputed radiance transfer (PRT) illumination model extended for virtual humans, resulting in a realistic rendition of such interactive virtual characters in mixed reality environments. Finally, we present a scenario that illustrates the interplay and application of our methods, unified under a single framework for presence and interaction in MR.

7.
Constructing Virtual Cities by Using Panoramic Images
Simultaneously acquired omni-directional images contain rays covering 360 degrees of viewing directions. To take advantage of this unique characteristic, we have been developing several methods for constructing virtual cities. In this paper, we first describe a system to generate the appearance of a virtual city; the system, which is based on image-based rendering (IBR) techniques, utilizes the characteristics of omni-directional images to reduce the number of samplings required to construct such IBR images. We then describe a method to add geometric information to the IBR images; this method is based on the analysis of a sequence of omni-directional images. Then, we describe a method to seamlessly superimpose a new building model onto a previously created virtual city image; the method enables us to estimate illumination distributions by using an omni-directional camera. Finally, to demonstrate the methods' effectiveness, we describe how we implemented and applied them to urban scenes.

8.
Image-Based Dynamic Reproduction of 3D Lighting Effects
韩慧健  徐琳 《计算机应用》2005,25(9):2123-2125
Traditional image textures add rich detail to virtual objects, but because a texture is a photograph captured under specific lighting conditions, its details do not change realistically when the virtual lighting changes. Bump mapping can produce basic shading variation by perturbing normal vectors, but generating a bump texture from a real photograph is difficult. This paper introduces the concepts and theory of the BRDF and the BTF, and proposes an image-based method for dynamically reproducing surface detail on diffuse objects, so that the mapped texture exhibits dynamically changing effects under different virtual lighting conditions.

9.
Recovering the Material of an Object in an Environment from a Single High Dynamic Range Image
孙其民  吴恩华 《软件学报》2002,13(9):1852-1857
We propose a method for recovering the material of an object in a general environment from a single high dynamic range image. It applies to objects of a single material and places no special requirements on object shape or lighting conditions. In a general lighting environment, a high dynamic range image of the object is captured, along with one or several high dynamic range environment maps that approximate the object's illumination; simulated annealing is then used to solve the inverse rendering problem. Image-based lighting and ray tracing are used during the solution, and the object's self-interreflection is fully accounted for. The result is the optimal parameters of the object's surface reflection model. Combined with image-based modeling techniques, realistic models can be built from photographs of real objects.
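The inverse-rendering solver described above is a simulated annealing search over reflectance parameters. The generic loop below is only a sketch of that kind of optimizer; the cost function here is a stand-in, whereas the paper's cost compares ray-traced renderings under image-based lighting against the captured HDR image:

```python
import math
import random

def anneal(cost, x0, step=0.1, t0=1.0, cooling=0.95, iters=2000, seed=0):
    """Minimal simulated annealing: perturb the parameter vector, always accept
    improvements, accept worsenings with probability exp(-delta/T), cool T
    geometrically, and remember the best point seen."""
    rng = random.Random(seed)
    x, c = list(x0), cost(x0)
    best_x, best_c = list(x), c
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]   # random perturbation
        cc = cost(cand)
        if cc < c or rng.random() < math.exp((c - cc) / t):  # Metropolis rule
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = list(x), c
        t *= cooling
    return best_x, best_c
```

In an inverse-rendering setting, `x` would hold the reflection-model parameters and each `cost` evaluation would re-render the object, which is why the paper's per-iteration cost is dominated by ray tracing.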

10.
Image-based modeling and rendering has been demonstrated to be a cost-effective and efficient approach to virtual reality applications. The computational model that most image-based techniques are based on is the plenoptic function. Since the original formulation of the plenoptic function does not include illumination, most previous image-based virtual reality applications simply assume that the illumination is fixed. We propose a formulation of the plenoptic function, called the plenoptic illumination function, which explicitly specifies the illumination component. Techniques based on this new formulation can be extended to support relighting as well as view interpolation. To relight images under various illumination configurations, we also propose a local illumination model, which utilizes the rules of image superposition. We demonstrate how this new formulation can be applied to extend two existing image-based representations, panorama representations such as QuickTime VR and the two-plane parameterization, to support relighting with trivial modifications. The core of this framework is compression, and we therefore show how to exploit two types of data correlation, the intra-pixel and the inter-pixel correlations, in order to achieve a manageable storage size.
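The superposition rule mentioned above follows from the linearity of light transport: the image under any combination of light sources is the weighted per-pixel sum of images captured under each source alone. A minimal sketch of that relighting rule (grayscale images as nested lists; the function name is illustrative, not from the paper):

```python
def relight(basis_images, weights):
    """Relight by superposition: light transport is linear in the light
    sources, so the image under any mixture of the captured basis lights is
    the weighted per-pixel sum of the basis images."""
    h, w = len(basis_images[0]), len(basis_images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for img, wt in zip(basis_images, weights):
        for y in range(h):
            for x in range(w):
                out[y][x] += wt * img[y][x]
    return out
```

Relighting then reduces to choosing new weights; the storage cost of keeping one image per basis light is exactly why the paper treats compression as the core of the framework.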

11.
Rendering realistic organic materials is a challenging issue. The human eye is an important part of nonverbal communication which, consequently, requires specific modeling and rendering techniques to enhance the realism of virtual characters. We propose an image-based method for estimating both iris morphology and scattering features in order to generate convincing images of virtual eyes. In this regard, we develop a technique to unrefract iris photographs. We model the morphology of the human iris as an irregular multilayered tissue. We then approximate the scattering features of the captured iris. Finally, we propose a real-time rendering technique based on the subsurface texture mapping representation and introduce a precomputed refraction function as well as a caustic function, which accounts for the light interactions at the corneal interface.

12.
This paper proposes a method for efficiently rendering indirect highlights. Indirect highlights are caused by the primary light source reflecting off two or more glossy surfaces. Accurately simulating such highlights is important to convey the realistic appearance of materials such as chrome and shiny metal. Our method models the glossy BRDF at a surface point as a directional distribution, using a spherical von Mises‐Fisher (vMF) distribution. As our main contribution, we merge multiple vMFs into a combined multimodal distribution. This effectively creates a filtered radiance response function, allowing us to efficiently estimate indirect highlights. We demonstrate our method in a near‐interactive application for rendering scenes with highly glossy objects. Our results produce realistic reflections under both local and environment lighting.
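Merging vMF lobes can be illustrated by moment matching: sum the lobes' mean resultant vectors and invert the resultant length back to a concentration. The sketch below uses the standard 3D quantities (A₃(κ) = coth κ − 1/κ and the Banerjee et al. inversion κ ≈ r(3 − r²)/(1 − r²)); it shows the kind of merge the paper performs, not its exact filtering pipeline:

```python
import math

def merge_vmfs(lobes):
    """Moment-match a weighted set of 3D von Mises-Fisher lobes into a single
    lobe. Each lobe is (weight, mu, kappa) with mu a unit 3-vector."""
    def mean_resultant(kappa):                 # A3(k) = coth(k) - 1/k
        return 1.0 / math.tanh(kappa) - 1.0 / kappa

    wsum = sum(w for w, _, _ in lobes)
    # Weighted average of the lobes' mean resultant vectors A3(k) * mu.
    rx = sum(w * mean_resultant(k) * m[0] for w, m, k in lobes) / wsum
    ry = sum(w * mean_resultant(k) * m[1] for w, m, k in lobes) / wsum
    rz = sum(w * mean_resultant(k) * m[2] for w, m, k in lobes) / wsum
    r = math.sqrt(rx * rx + ry * ry + rz * rz)
    mu = (rx / r, ry / r, rz / r)              # new mean direction
    kappa = r * (3.0 - r * r) / (1.0 - r * r)  # Banerjee et al. approximation
    return wsum, mu, kappa
```

Merging a lobe with itself recovers (approximately) the same lobe, while merging lobes pointing in different directions yields a lower concentration, i.e. a blurrier filtered response.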

13.
In mixed reality (MR) design review, the aesthetics of a virtual prototype is assessed by integrating a virtual model into a real-world environment and inspecting the interaction between the model and the environment (lighting, shadows and reflections) from different points of view. The visualization of the virtual model has to be as realistic as possible to provide a solid basis for this assessment, and interactive rendering speed is mandatory to allow the designer to examine the scene from arbitrary positions. In this article we present a real-time rendering engine specifically tailored to the needs of MR visualization. The renderer utilizes pre-computed radiance transfer to calculate dynamic soft shadows, high dynamic range images and image-based lighting to capture incident real-world lighting, approximate bidirectional texture functions to render materials with self-shadowing, and frame post-processing filters (a bloom filter and an adaptive tone mapping operator). The proposed combination of rendering techniques provides a trade-off between rendering quality and required computing resources which enables high-quality rendering in mobile MR scenarios. The resulting image fidelity is superior to radiosity-based techniques because glossy materials and dynamic environment lighting with soft shadows are supported. Ray tracing-based techniques provide higher quality images than the proposed system, but they require a cluster of computers to achieve interactive frame rates, which prevents these techniques from being used in mobile MR (especially outdoor) scenarios. The renderer was developed in the European research project IMPROVE (FP6-IST-2-004785) and is currently extended in the MAXIMUS project (FP7-ICT-1-217039), where hybrid rendering techniques which fuse PRT and ray tracing are developed.

14.
By extending the progressive photon mapping algorithm, we propose a progressive photon mapping algorithm based on adaptive photon emission. Progressive photon mapping is a multi-pass global illumination algorithm that converges to an unbiased result by continually emitting photons and progressively updating the radiance estimate at each point of the scene. Because progressive photon mapping computes radiance entirely by density estimation, its convergence speed is strongly affected by the photon distribution. Exploiting the scene statistics inherent in progressive photon mapping and its multi-pass nature, we design an adaptive photon emission strategy that concentrates the emitted photons in regions that matter for the final image, improving the rendering efficiency of the original algorithm.
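The progressive update that drives this convergence is the standard radius-and-flux shrinkage of progressive photon mapping (Hachisuka et al.); the paper's contribution, the adaptive emission strategy, decides *where* photons are shot, while each measurement point still updates its statistics per pass as sketched below:

```python
def ppm_update(N, R2, tau, M, tau_M, alpha=0.7):
    """One progressive photon mapping pass update at a measurement point.
    N: accumulated photon count, R2: squared gather radius, tau: accumulated
    (unnormalized) flux; M, tau_M: photons and flux gathered this pass.
    Keeping only a fraction alpha of the new photons shrinks the radius each
    pass, which is what makes the density estimate converge."""
    if M == 0:
        return N, R2, tau
    ratio = (N + alpha * M) / (N + M)     # < 1 whenever M > 0 and alpha < 1
    return N + alpha * M, R2 * ratio, (tau + tau_M) * ratio
```

The final radiance estimate at the point is proportional to tau / (pi * R2 * photons_emitted), so shrinking R2 while accumulating tau trades variance for vanishing bias over the passes.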

15.
Images and geometric models are combined: target scenery that must be manipulated and whose outline shape must change is modeled with geometry-based methods, while the environment scenery is modeled with image-based rendering (IBR); the two models are then composited into images. The resulting scene is highly realistic while remaining real-time and manipulable. A virtual scene system combining images and geometric models was implemented, and the paper describes its key techniques: the system architecture, 3D model generation for target scenery, panorama stitching for environment scenery, and virtual scene image compositing and display.

16.
Differentiable rendering is a current research hotspot in virtual reality, computer graphics, and computer vision. Its goal is to rework the photorealistic rendering pipeline of computer graphics, dominated by rasterization and ray tracing algorithms, so that it supports back-propagation of gradients: changes in input geometry and material properties can be computed from changes in the output image. Combined with optimization and deep learning techniques, this supports learning rendering models from data and inverse inference; it is a concrete application of differentiable learning techniques to rendering in computer graphics, with broad application prospects in AR/VR content generation, 3D reconstruction, appearance capture and modeling, and inverse optical design. This paper surveys the current state of differentiable rendering, focusing on its research and applications in photorealistic rendering, 3D reconstruction, and appearance capture and modeling, and offers an outlook on future trends, in the hope of promoting the further development of differentiable techniques in academia and industry.

17.
Despite great efforts in recent years to accelerate global illumination computation, the real-time ray tracing of fully dynamic scenes to support photorealistic indirect illumination effects has yet to be achieved in computer graphics. In this paper, we propose an extended ray tracing model that can be readily implemented on a GPU to facilitate the interactive generation of diffuse indirect illumination, the quality of which is comparable to that generated by the traditional, time-consuming photon mapping method and final gathering. Our method employs three types of (multilevel) grids to represent the indirect light in a scene using a form that facilitates the efficient estimation of the reflected radiance caused by diffuse interreflection. This method includes the mathematical tool of spherical harmonics and a rendering scheme that performs the final gathering step with a minimal cost during ray tracing, which guarantees interactive frame rates. We evaluated our technique using several dynamic scenes with nontrivial complexity, which demonstrated its effectiveness.

18.
Interactive Global Illumination Rendering with Spatially Varying Dynamic Materials
孙鑫  周昆  石教英 《软件学报》2008,19(7):1783-1793
We propose an interactive global illumination rendering algorithm for spatially varying dynamic materials. If the user may modify an object's material during rendering, and may modify different parts of an object differently, we call the materials spatially varying and dynamic. Because the final outgoing radiance depends nonlinearly on the materials, many existing interactive global illumination algorithms do not allow the user to modify materials. When the materials of different parts of an object can differ, their effect on the final outgoing radiance is even more complex, and no existing interactive global illumination algorithm supports applying different modifications to different parts of an object during rendering. We approximate a spatially varying material region by partitioning it into many subregions, each with a uniform material. As light propagates through the scene it may be reflected by different subregions in turn, and we use this to partition the final outgoing radiance into many components. All materials are represented linearly in a set of basis materials; the basis materials are assigned to all subregions of the scene, yielding different basis-material distributions. The outgoing radiance components under all of these basis-material distributions are precomputed. At render time, the precomputed data are combined according to the coefficients of each subregion's material in the basis, enabling interactive rendering of global illumination effects.
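The runtime combination can be sketched as a weighted sum of precomputed components. The example below is hypothetical in its setup: it assumes two subregions A and B and light paths that bounce first off A and then off B, with one precomputed radiance component per pair of basis materials; editing a material then only changes the coefficient vectors, never the precomputed data:

```python
def combine_radiance(L_basis, coeffs_per_region):
    """Combine precomputed basis-material radiance components into the radiance
    for the edited materials. L_basis[(i, j)] is the precomputed component for
    light that reflects off subregion A with basis material i, then off
    subregion B with basis material j. cA and cB hold each subregion's current
    material expressed as coefficients in the basis."""
    cA, cB = coeffs_per_region
    return sum(cA[i] * cB[j] * L for (i, j), L in L_basis.items())
```

Because the precomputation covers every basis-material assignment, editing a subregion's material at render time costs only this small weighted sum rather than a new light transport simulation.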

19.
Acquired 3D point clouds make possible quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per‐sample color response to relight virtual objects in visual effects (VFX) look‐dev or augmented reality (AR) scenarios. Typically, standard relighting environments exploit global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real‐time Point‐Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene that may cover only part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings, or covered with low‐quality dynamic range, by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G‐buffers, to achieve real‐time performance.
As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically‐based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step against a perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction.

20.
This paper describes the basic idea of constructing virtual battlefield scenes by combining virtual and real imagery. Target scenery that must be manipulated and whose outline shape must change is modeled virtually, i.e., with geometry-based methods, while the environment scenery is modeled from real imagery, i.e., with image-based rendering (IBR); the two models are then composited into images. The resulting scene is highly realistic while remaining real-time and manipulable. A virtual battlefield scene system combining images and geometric models was implemented, and the paper describes its key techniques: the system architecture, 3D model generation for target scenery, environment scenery generation, and virtual scene image generation and display.
