11 similar documents found; search took 31 ms.
1.
2.
As an important area of computer graphics, IBR (Image Based Rendering) has produced a large number of techniques for reproducing previously rendered or real images, for example interpolating between images by warping the input images; these techniques rely on the depth information of the scene or on correspondences between multiple images. Light field rendering requires neither depth information nor image correspondences: a camera captures the appearance of the scene from every position and every viewing angle, and during rendering a new image is produced for each viewpoint simply by compositing the relevant captured images. In this paper we study a software implementation solution for applying light field rendering techniques to virtual exhibition.
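The pure light field approach described above synthesizes new views without any depth or correspondence data. A minimal sketch of the idea, assuming a regularly sampled camera-plane grid (the array layout and function name are illustrative, not from the paper):

```python
import numpy as np

def render_light_field(lf, u, v):
    """Synthesize a view by bilinearly blending the four nearest camera
    images in a regularly sampled (u, v) camera-plane grid.
    `lf` is a hypothetical array of shape (U, V, H, W, 3); no depth or
    correspondence information is used, as in pure light field rendering."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    fu, fv = u - u0, v - v0
    u1 = min(u0 + 1, lf.shape[0] - 1)   # clamp at the grid border
    v1 = min(v0 + 1, lf.shape[1] - 1)
    return ((1 - fu) * (1 - fv) * lf[u0, v0] + fu * (1 - fv) * lf[u1, v0]
            + (1 - fu) * fv * lf[u0, v1] + fu * fv * lf[u1, v1])
```

Real systems also interpolate over the image plane (the full 4D light field); this sketch shows only the camera-plane blend.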
3.
4.
Image-based rendering techniques are a powerful alternative to traditional polygon-based computer graphics. This paper presents a novel light field rendering technique which performs per-pixel depth correction of rays for high-quality reconstruction. Our technique stores combined RGB and depth values in a parabolic 2D texture for every light field sample acquired at discrete positions on a uniform spherical setup. Image synthesis is implemented on the GPU as a fragment program which extracts the correct image information from adjacent cameras for each fragment by applying per-pixel depth correction of rays. We show that the presented image-based rendering technique provides a significant improvement over previous approaches. We explain two different rendering implementations which use a uniform parametrisation to minimise disparity problems and ensure full six degrees of freedom for virtual view synthesis. While one rendering algorithm implements an iterative refinement approach for rendering light fields with per-pixel depth correction, the other employs a raycaster, which provides superior rendering quality at moderate frame rates. GPU-based per-fragment depth correction of rays, used in both implementations, reduces ghosting artifacts to an imperceptible level and yields a rendering technique that requires neither exhaustive pre-processing for 3D object reconstruction nor real-time ray-object intersection calculations at rendering time.
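The per-pixel depth correction described above can be sketched on the CPU as follows; the function name, matrix conventions, and arguments are all hypothetical (the paper's actual implementation is a GPU fragment program), but the core idea is the same: use the stored depth to recover the 3D surface point, then project it into an adjacent camera.

```python
import numpy as np

def depth_corrected_lookup(ray_o, ray_d, depth, cam_K, cam_pose_inv):
    """Illustrative per-pixel depth correction: instead of intersecting the
    viewing ray with a fixed proxy geometry, use the depth stored with the
    light field sample to recover the 3D surface point, then project that
    point into an adjacent camera to fetch the correct pixel."""
    p = ray_o + depth * ray_d                  # 3D point on the surface
    p_cam = cam_pose_inv @ np.append(p, 1.0)   # into the adjacent camera frame
    uvw = cam_K @ p_cam[:3]                    # pinhole projection
    return uvw[:2] / uvw[2]                    # pixel coordinates to sample
```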
5.
Data Intermixing and Multi-volume Rendering
The main difference between multi-volume rendering and mono-volume rendering is data intermixing. In this paper, we present three levels of data intermixing and their rendering pipelines in direct multi-volume rendering, distinguishing image-level intensity intermixing, accumulation-level opacity intermixing, and illumination-model-level parameter intermixing. In the context of radiotherapy treatment planning, the different data intermixing methods are applied to three volumes (a CT volume, a dose volume, and a segmentation volume) to compare their characteristics.
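The first two levels can be contrasted in a few lines. This is a scalar, single-ray sketch under simplifying assumptions (equal sample positions in both volumes, opacity-weighted colour mixing); the paper's pipelines are more general.

```python
def image_level_mix(img_a, img_b, w=0.5):
    """Image-level intensity intermixing: each volume is rendered separately
    and the finished images are blended afterwards."""
    return w * img_a + (1 - w) * img_b

def accumulation_level_mix(samples_a, samples_b):
    """Accumulation-level opacity intermixing: colours and opacities from
    both volumes are combined per sample *inside* one front-to-back
    compositing loop.  Samples are (colour, alpha) pairs along one ray."""
    colour, trans = 0.0, 1.0        # accumulated colour, remaining transparency
    for (ca, aa), (cb, ab) in zip(samples_a, samples_b):
        a = 1 - (1 - aa) * (1 - ab)                    # combined slab opacity
        c = (ca * aa + cb * ab) / max(aa + ab, 1e-6)   # opacity-weighted colour
        colour += trans * a * c
        trans *= 1 - a
    return colour
```

Image-level mixing is cheapest but discards depth relations between the volumes; accumulation-level mixing preserves them at the cost of a joint traversal.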
6.
Rendering Natural Waters
Creating and rendering realistic water is one of the most daunting tasks in computer graphics. Realistic rendering of water requires that the sunlight and skylight illumination are correct, that the water surface is modeled accurately, and that the light transport within the water body is properly handled. This paper describes a method for wave generation on a water surface using a physically-based approach. The wave generation uses data from oceanographic observations and is controlled by intuitive parameters such as wind speed and wind direction. The optical behavior of water surfaces is complex but well described in the ocean science literature. We present a simple and intuitive light transport approach that is easy to use for many different water types, such as deep ocean water, muddy coastal water, and fresh water bodies. We demonstrate our model for a number of water and atmospheric conditions.
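The flavour of wind-controlled, physically-based wave generation can be conveyed with a toy sum of directional sinusoids obeying the deep-water dispersion relation w^2 = g*k. The spectrum shape below is a crude stand-in, not the oceanographic model the paper uses; every constant and name is illustrative.

```python
import numpy as np

def water_height(x, y, t, wind_dir=(1.0, 0.0), wind_speed=10.0,
                 n_waves=16, seed=0):
    """Toy directional wave synthesis: a sum of sinusoids whose amplitudes
    favour the wind direction and whose temporal frequencies follow the
    deep-water dispersion relation w = sqrt(g * k)."""
    rng = np.random.default_rng(seed)
    g, h = 9.81, 0.0
    wind = np.array(wind_dir, dtype=float)
    wind /= np.linalg.norm(wind)
    for _ in range(n_waves):
        theta = rng.uniform(-np.pi, np.pi)
        d = np.array([np.cos(theta), np.sin(theta)])   # travel direction
        k = rng.uniform(0.05, 0.5)                     # wavenumber
        w = np.sqrt(g * k)                             # dispersion relation
        align = max(np.dot(d, wind), 0.0) ** 2         # favour downwind waves
        amp = align * wind_speed / (k * 400.0)         # crude spectral falloff
        phase = rng.uniform(0.0, 2.0 * np.pi)
        h += amp * np.sin(k * (d[0] * x + d[1] * y) - w * t + phase)
    return float(h)
```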
7.
Recently, many image-based modeling and rendering techniques have been successfully designed to render photo-realistic images without the need for explicit 3D geometry. However, these techniques (e.g., light field rendering (Levoy, M. and Hanrahan, P., 1996. In SIGGRAPH 1996 Conference Proceedings, Annual Conference Series, Aug. 1996, pp. 31–42) and the Lumigraph (Gortler, S.J., Grzeszczuk, R., Szeliski, R., and Cohen, M.F., 1996. In SIGGRAPH 1996 Conference Proceedings, Annual Conference Series, Aug. 1996, pp. 43–54)) may require a substantial number of images. In this paper, we adopt a geometric approach to investigate the minimum sampling problem for light field rendering, with and without geometry information of the scene. Our key observation is that anti-aliased light field rendering is equivalent to eliminating the double-image artifacts caused by view interpolation. Specifically, we present a closed-form solution for the minimum sampling rate for light field rendering. The minimum sampling rate is determined by the resolution of the camera and the depth variation of the scene. This rate is achieved if the optimal constant depth for rendering is chosen as the harmonic mean of the maximum and minimum depths of the scene. Moreover, we construct the minimum sampling curve in the joint geometry and image space, taking depth discontinuity into account. The minimum sampling curve quantitatively indicates how reduced geometry information can be compensated by increasing the number of images, and vice versa. Experimental results demonstrate the effectiveness of our theoretical analysis.
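The optimal constant depth quoted in the abstract, the harmonic mean of the scene's minimum and maximum depths, is a one-line formula (the function name is ours):

```python
def optimal_rendering_depth(z_min, z_max):
    """Optimal constant focal depth for anti-aliased light field rendering:
    the harmonic mean of the minimum and maximum scene depths,
    z_opt = 2 * z_min * z_max / (z_min + z_max)."""
    return 2.0 * z_min * z_max / (z_min + z_max)
```

Note that the harmonic mean is always closer to `z_min` than the arithmetic mean, reflecting that near geometry produces larger parallax errors than far geometry.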
8.
Se Baek Oh, Sriram Kashyap, Rohit Garg, Sharat Chandran, Ramesh Raskar. Computer Graphics Forum, 2010, 29(2): 507–516
Ray-based representations can model complex light transport but are limited in modeling diffraction effects, which require the simulation of wavefront propagation. This paper provides a new paradigm that has the simplicity of light path tracing and yet provides an accurate characterization of both Fresnel and Fraunhofer diffraction. We introduce the concept of a light field transformer at the interface of transmissive occluders. This generates mathematically sound, virtual, and possibly negative-valued light sources after the occluder. From a rendering perspective the only simple change is that radiance can be temporarily negative. We demonstrate the correctness of our approach both analytically and by comparing values with standard experiments in physics, such as Young's double slit. Our implementation is a shader program in OpenGL that can generate wave effects on arbitrary surfaces.
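Young's double-slit experiment, mentioned above as a validation case, has a standard closed-form Fraunhofer intensity that a renderer's output can be checked against. This is the textbook formula, not the paper's algorithm:

```python
import numpy as np

def double_slit_intensity(theta, wavelength, slit_sep, slit_width):
    """Fraunhofer intensity of Young's double slit: a two-source
    interference term modulated by the single-slit diffraction envelope,
    normalised so that I(0) = 1."""
    s = np.sin(theta)
    beta = np.pi * slit_width * s / wavelength    # envelope phase
    delta = np.pi * slit_sep * s / wavelength     # interference phase
    envelope = np.sinc(beta / np.pi) ** 2         # np.sinc(x) = sin(pi x)/(pi x)
    return envelope * np.cos(delta) ** 2
```

Dark interference fringes fall at sin(theta) = (m + 1/2) * wavelength / slit_sep.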
9.
An M-Set Rendering Method Based on the Escape-Time Algorithm
When the classical fractal sets, the Mandelbrot set (M-set) and the Julia set (J-set), are drawn with the escape-time algorithm, colour is usually controlled by the iteration count, which typically yields black-and-white images or images lacking smooth colour gradients. This paper proposes an M-set rendering method, based on the escape-time algorithm, that controls colour variation by a "distance" value; the method can render both the exterior and the interior structure of the M-set. With a suitable choice of the distance function controlling the colour variation, fractal images with a 3D appearance can also be obtained.
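Distance-controlled colouring of the M-set can be sketched with the standard exterior distance estimate, d = |z| * ln|z| / |dz/dc| at escape, computed alongside the escape-time iteration. This is the well-known distance-estimator technique; the paper's specific distance functions may differ.

```python
import math

def mandelbrot_distance(c, max_iter=256, bailout=1e6):
    """Exterior distance estimate for the Mandelbrot set: iterate
    z -> z^2 + c together with the derivative recurrence dz -> 2*z*dz + 1,
    and on escape return |z| * ln|z| / |dz|.  Mapping this distance to a
    colour ramp gives smooth gradients instead of discrete iteration bands."""
    z, dz = 0j, 0j
    for _ in range(max_iter):
        dz = 2 * z * dz + 1
        z = z * z + c
        if abs(z) > bailout:
            r = abs(z)
            return r * math.log(r) / abs(dz)
    return 0.0   # did not escape: treated as inside (or very near) the set
```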
10.
This paper introduces a data distribution scheme and an alignment algorithm for parallel volume rendering. The algorithm performs a single wrap-around shear transformation which requires only a regular inter-processor communication pattern. The alignment can be implemented incrementally as a sequence of short-distance shifts, thus significantly reducing the communication overhead. The alignment process is a non-destructive transformation, consisting of a single non-scaling shear operation; this is a unique feature which provides the basis for the incremental algorithm.
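The non-destructive, wrap-around character of the shear can be illustrated serially: each slice is circularly shifted by an offset proportional to its index, so no data leaves the volume and the operation is exactly invertible. This is a sketch of the alignment idea only (names and parameters are ours), not the paper's parallel implementation.

```python
import numpy as np

def wraparound_shear(volume, factor, axis=1, along=0):
    """Wrap-around shear: each slice taken along `along` is circularly
    shifted in direction `axis` by an offset growing linearly with the
    slice index.  Because every shift wraps, the shear is non-destructive
    and inverted exactly by applying -factor."""
    out = np.empty_like(volume)
    for i in range(volume.shape[along]):
        idx = [slice(None)] * volume.ndim
        idx[along] = i
        # the slice drops one axis, so renumber `axis` inside the slice
        out[tuple(idx)] = np.roll(volume[tuple(idx)],
                                  int(round(factor * i)),
                                  axis=axis - (axis > along))
    return out
```

In the parallel setting each circular shift becomes a regular nearest-neighbour exchange, which is what keeps the communication pattern uniform.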