Similar Documents
 20 similar documents found (search time: 218 ms)
1.
Light field reconstruction algorithms can substantially decrease the noise in stochastically rendered images. Recent algorithms for defocus blur alone are both fast and accurate. However, motion blur is a considerably more complex type of camera effect, and as a consequence, current algorithms are either slow or too imprecise to use in high quality rendering. We extend previous work on real‐time light field reconstruction for defocus blur to handle the case of simultaneous defocus and motion blur. By carefully introducing a few approximations, we derive a very efficient sheared reconstruction filter, which produces high quality images even for a low number of input samples. Our algorithm is temporally robust, and is about two orders of magnitude faster than previous work, making it suitable for both real‐time rendering and as a post‐processing pass for offline rendering.

2.
3.
This paper presents a GPU‐based rendering algorithm for real‐time defocus blur effects, which significantly improves on accumulation buffering. The algorithm combines three distinctive techniques: (1) adaptive discrete geometric level of detail (LOD), made popping‐free by blending visibility samples across the two adjacent geometric levels; (2) adaptive visibility/shading sampling via sample reuse; (3) visibility supersampling via height‐field ray casting. All three techniques are seamlessly integrated to lower the rendering cost of smooth defocus blur with high visibility sampling rates, while maintaining most of the quality of brute‐force accumulation buffering.

4.
Defocus Magnification   (total citations: 1; self: 0; other: 1)
A blurry background due to shallow depth of field is often desired for photographs such as portraits, but, unfortunately, small point-and-shoot cameras do not permit enough defocus because of the small diameter of their lenses. We present an image-processing technique that increases the defocus in an image to simulate the shallow depth of field of a lens with a larger aperture. Our technique estimates the spatially-varying amount of blur over the image, and then uses a simple image-based technique to increase defocus. We first estimate the size of the blur kernel at edges and then propagate this defocus measure over the image. Using our defocus map, we magnify the existing blurriness, which means that we blur blurry regions and keep sharp regions sharp. In contrast to more difficult problems such as depth from defocus, we do not require precise depth estimation and do not need to disambiguate textureless regions.
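The magnification step can be sketched on a 1D scanline, a toy stand-in for the paper's 2D pipeline. The edge-based blur estimation and propagation are omitted; `box_blur_1d`, `extra_radius`, and the normalized blur map are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def box_blur_1d(signal, radius):
    """Simple box blur, a stand-in for an arbitrary blur kernel."""
    if radius <= 0:
        return signal.astype(float)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(signal, kernel, mode="same")

def magnify_defocus(signal, blur_map, extra_radius=3):
    """Blend each sample between the input and an extra-blurred copy.

    blur_map is the (already estimated and propagated) per-sample defocus
    measure, normalized to [0, 1]: 0 = sharp, keep as-is; 1 = blurry,
    apply the full extra blur. Blurring blurry regions while keeping
    sharp regions sharp is exactly the "magnification" idea.
    """
    extra = box_blur_1d(signal, extra_radius)
    w = np.clip(blur_map, 0.0, 1.0)
    return (1.0 - w) * signal + w * extra
```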

5.
We present an efficient ray‐tracing technique to render bokeh effects produced by parametric aspheric lenses. Contrary to conventional spherical lenses, aspheric lenses generally do not permit a simple closed‐form solution of ray–surface intersections. We propose a numerical root‐finding approach, which uses tight proxy surfaces to ensure good initialization and convergence behavior. Additionally, we simulate mechanical imperfections resulting from the lens fabrication via a texture‐based approach. A fractional Fourier transform and spectral dispersion add further realism to the synthesized bokeh effect. Our approach is well‐suited for execution on graphics processing units (GPUs), and we demonstrate complex defocus‐blur and lens‐flare effects.
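The root-finding idea can be sketched for a conic aspheric surface with Newton iteration along the ray. This is a minimal illustration, not the paper's method: higher-order even-asphere polynomial terms and the tight proxy-surface construction are omitted, and the function names are made up:

```python
import math

def asphere_sag(r2, c, k):
    """Sag z(r) of a conic aspheric surface with curvature c = 1/R and
    conic constant k:  z = c r^2 / (1 + sqrt(1 - (1 + k) c^2 r^2))."""
    return c * r2 / (1.0 + math.sqrt(max(0.0, 1.0 - (1.0 + k) * c * c * r2)))

def intersect_asphere(o, d, c, k, t0, iters=32, eps=1e-12):
    """Newton iteration on f(t) = p_z(t) - sag(p_x(t), p_y(t)) along the
    ray p(t) = o + t d, started from a proxy-surface guess t0."""
    t = t0
    h = 1e-7  # step for the numerical derivative
    for _ in range(iters):
        px, py, pz = (o[i] + t * d[i] for i in range(3))
        f = pz - asphere_sag(px * px + py * py, c, k)
        qx, qy, qz = (o[i] + (t + h) * d[i] for i in range(3))
        g = qz - asphere_sag(qx * qx + qy * qy, c, k)
        df = (g - f) / h
        if abs(df) < 1e-14:
            break
        t_next = t - f / df
        if abs(t_next - t) < eps:
            return t_next
        t = t_next
    return t
```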

6.
To address the low rendering efficiency, high computational cost, and poor visual quality of current 3D cloud simulation, this paper proposes a ray tracing algorithm based on octree neighborhood analysis and applies it to the 3D simulation of WRF-model cloud data. An octree structure optimizes the data storage of the traditional ray tracing algorithm; the neighborhood analysis algorithm is improved by storing node codes and subdivision levels; the Whitted illumination model is optimized by simplifying the ray refraction formula; and 3D visualization of the cloud data is implemented with OpenGL and the Vapor tool. Experimental results show that the method reduces rendering time, improves rendering efficiency, and better reproduces the true physical characteristics of clouds.

7.
Virtual Eye: retinal image visualization of the human eye   (total citations: 3; self: 0; other: 3)
In computer graphics, ray tracing produces realistic images of 3D scenes. Most work in the field has focused on modeling reflection to account for the interaction of light with different materials, deriving illumination algorithms to simulate light transport throughout the environment, and designing new optimization techniques. Additional work by Cook et al. (1984) targeted simulation of depth of field and motion blur. Kolb and Hanrahan (1995) presented a realistic camera model that simulates aberration and radiometry and produces images showing a variety of optical effects. Our challenge is to combine these two well-developed scientific fields to simulate human vision. We have begun to do so with the development of the Virtual Eye, a method to visualize retinal images. This article describes how the Virtual Eye simulates retinal perception and discusses its potential value in clinical applications such as planning and evaluating surgical techniques or new lens combinations.

8.
Efficient intersection queries are important for ray tracing. However, building and maintaining the acceleration structures is demanding, especially for fully dynamic scenes. In this paper, we propose a quantized intersection framework based on compact voxels to quantize the intersection as an approximation. With high‐resolution voxels, the scene geometry can be well represented, which enables more accurate simulation of global illumination, such as detailed glossy reflections. To reduce memory usage in our graphics processing unit implementation, voxels are binarized and compactly encoded in a few 2D textures. We evaluate the rendering quality at various voxel resolutions. Empirically, high‐fidelity rendering can be achieved at a voxel resolution of 1K³ or above, which produces images very similar to those of ray tracing. Moreover, we demonstrate the feasibility of our framework for various illumination effects with several applications, including first‐bounce indirect illumination, glossy refraction, path tracing, direct illumination, and ambient occlusion.
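The "binarized and compactly encoded" voxel storage can be illustrated with a small bit-packing sketch: one 32-bit word holds 32 depth slices, mimicking a 2D texture whose texels each cover a column of the grid. This is a NumPy stand-in for the GPU texture encoding; all names are illustrative:

```python
import numpy as np

def pack_voxels(occ):
    """Pack a boolean voxel grid of shape (X, Y, Z) into uint32 words
    along Z: the result has shape (X, Y, Z // 32), one bit per voxel."""
    x, y, z = occ.shape
    assert z % 32 == 0, "Z must be a multiple of the word size"
    bits = occ.astype(np.uint32).reshape(x, y, z // 32, 32)
    shifts = np.arange(32, dtype=np.uint32)
    return (bits << shifts).sum(axis=-1, dtype=np.uint32)

def voxel_occupied(packed, i, j, kz):
    """Test one voxel by selecting its word and bit."""
    word = packed[i, j, kz // 32]
    return bool((word >> np.uint32(kz % 32)) & np.uint32(1))
```

A 1024³ binary grid packed this way needs only 128 MB, which is what makes the high resolutions the abstract reports practical.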

9.
We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create a sampled representation of all parts of the scene geometry that are potentially visible at any point in time for the duration of a frame in an initial rasterization step. We store the resulting temporally‐varying fragments (t‐fragments) in a bounding volume hierarchy which is rebuilt every frame using a fast spatial median construction algorithm. This makes our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t‐fragments that intersect with a specific viewing ray at any point in time. Viewing rays are sampled according to the lens uv‐sampling for depth‐of‐field effects. In a final temporal sampling step, we evaluate the predetermined viewing ray/t‐fragment intersections for one or multiple points in time. This allows us to incorporate all standard shading effects including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance.

10.
Photorealistic Rendering of Bokeh Effects   (total citations: 3; self: 1; other: 2)
To address the limited realism of bokeh effects produced by existing methods, this paper proposes a photorealistic bokeh rendering method based on geometric optics. Building on the law of refraction, the method uses sequential ray tracing to accurately model the optical imaging characteristics of a camera lens. It precisely simulates the internal structure of the lens, including the aperture stop and vignetting stops, to render bokeh shaped jointly by aperture shape and vignetting. It also uses geometric optics and sequential ray tracing to compute the exact position and size of the exit pupil, which guides ray sampling and improves ray tracing efficiency. Rendering results show that the method produces convincing bokeh, correctly simulates the influence of aperture shape and vignetting, and achieves high ray tracing efficiency.
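The refraction applied at each surface in a sequential lens trace follows the vector form of Snell's law. A minimal sketch of that one step, not the paper's implementation; stop handling and the exit-pupil computation are omitted:

```python
import numpy as np

def refract(d, n, eta):
    """Vector Snell's law at one lens surface.

    d:   unit incident direction
    n:   unit surface normal, oriented against d
    eta: ratio n_incident / n_transmitted
    Returns the unit refracted direction, or None on total internal
    reflection (the ray would be discarded by the tracer).
    """
    cos_i = -float(np.dot(d, n))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n
```

A sequential tracer applies this at every surface in lens-prescription order, which is what lets the aperture and vignetting stops clip rays between surfaces.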

11.
Combining optical modeling of the human eye with photorealistic rendering techniques from computer graphics, this paper proposes a realistic human-vision rendering method based on the Navarro schematic eye model. Because the Navarro model simulates the characteristics of the human eye more accurately than the traditional single-lens model, it is introduced into realistic vision rendering; ray tracing, extended with computations for aspheric refracting surfaces, accurately simulates the imaging characteristics of the human eye. Experimental results show that this method can more…

12.
We present user‐controllable and plausible defocus blur for a stochastic rasterizer. We modify circle of confusion coefficients per vertex to express more general defocus blur, and show how the method can be applied to limit the foreground blur, extend the in‐focus range, simulate tilt‐shift photography, and specify per‐object defocus blur. Furthermore, with two simplifying assumptions, we show that existing triangle coverage tests and tile culling tests can be used with very modest modifications. Our solution is temporally stable and handles simultaneous motion blur and depth of field.
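The per-vertex coefficients being modified start from the standard thin-lens circle of confusion. A baseline evaluation of that quantity, with the usual thin-lens symbols; the paper's per-vertex overrides are not reproduced here:

```python
def coc_diameter(z, z_focus, f, aperture):
    """Thin-lens circle-of-confusion diameter on the sensor for an object
    at depth z, focus depth z_focus, focal length f, and aperture
    diameter `aperture` (all in the same units, with z, z_focus > f):

        c(z) = aperture * f * |z - z_focus| / (z * (z_focus - f))
    """
    return aperture * f * abs(z - z_focus) / (z * (z_focus - f))
```

Clamping the returned diameter per vertex, e.g. `min(c, c_max)` for near geometry, corresponds to the "limit the foreground blur" application the abstract mentions.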

13.
A computer graphics system called PERIS is described. It offers two levels of user interfaces, serving both as a testbed to facilitate further research and as a pragmatic system for CAD/CAM applications. Flexibility is achieved by providing tools for solid modeling, surface modeling, and procedural model representations for physical-environment modeling, and by including interfaces for immediate display, scan-line rendering, ray tracing, and two-way ray tracing for realistic image synthesis. A linear octree data structure supports the modeling and rendering processes, greatly reducing the computation involved. We also introduce a new illumination model that unifies most of the existing models on a theoretical basis; an improved Cook-Torrance model is then derived. By examining the merits and limitations of classic ray tracing methods, we propose a new rendering technique of two-way ray tracing that allows more accurate simulation of light propagation in the environment.

14.
In this paper we propose a novel technique for real-time rendering of translucent inhomogeneous materials, one of the best-known problems in computer graphics. The technique is based on adaptive volumetric point sampling, performed in a preprocessing stage, which associates with each sample the optical depth for a predefined set of directions. This information is then used by a rendering algorithm that combines rasterization of the object's surface with a ray tracing algorithm, implemented on the graphics processor, to compose the final image. This approach allows us to simulate light-scattering phenomena for inhomogeneous isotropic materials in real time with an arbitrary number of light sources. We tested our algorithm by comparing the produced images with the results of ray tracing and showed that the technique is effective.

15.
We present a novel framework for real-time multi-perspective rendering. While most existing approaches are based on ray-tracing, we present an alternative approach by emulating multi-perspective rasterization on the classical perspective graphics pipeline. To render a general multi-perspective camera, we first decompose the camera into piecewise linear primitive cameras called the general linear cameras or GLCs. We derive the closed-form projection equations for GLCs and show how to rasterize triangles onto GLCs via a two-pass rendering algorithm. In the first pass, we compute the GLC projection coefficients of each scene triangle using a vertex shader. The linear raster on the graphics hardware then interpolates these coefficients at each pixel. Finally, we use these interpolated coefficients to compute the projected pixel coordinates using a fragment shader. In the second pass, we move the pixels to their actual projected positions. To avoid holes, we treat neighboring pixels as triangles and re-render them onto the GLC image plane. We demonstrate our real-time multi-perspective rendering framework in a wide range of applications including synthesizing panoramic and omnidirectional views, rendering reflections on curved mirrors, and creating multi-perspective faux animations. Compared with the GPU-based ray tracing methods, our rasterization approach scales better with scene complexity and it can render scenes with a large number of triangles at interactive frame rates.

16.
Creating bokeh in synthesized images can improve photorealism and emphasize interesting subjects. We therefore present a novel method for rendering realistic bokeh effects, in particular chromatic effects, which existing methods lack. The method relies on two key techniques: an accurate dispersive lens model and an efficient spectral rendering scheme. The lens model is built from optical data of real lenses and accounts for the wavelength dependency of physical lenses through a sequential dispersive ray tracing algorithm inside the model. The spectral rendering scheme supports rendering of lens dispersion and integrates the new model with bidirectional ray tracing. Rendering experiments demonstrate that our method simulates realistic spectral bokeh effects caused by lens stops and aberrations, especially chromatic aberration, and achieves high rendering efficiency.

17.
Progressive addition lenses are a relatively new approach to compensate for defects of the human visual system. While traditional spectacles use rotationally symmetric lenses, progressive lenses require the specification of free-form surfaces. This poses difficult problems for the optimal design and its visual evaluation.
This paper presents two new techniques for the visualization of optical systems and the optimization of progressive lenses. Both are based on the same wavefront tracing approach to accurately evaluate the refraction properties of complex optical systems.
We use the results of wavefront tracing for continuously re-focusing the eye during rendering. Together with distribution ray tracing, this yields high-quality images that accurately simulate the visual quality of an optical system. The design of progressive lenses is difficult due to the trade-off between the desired properties of the lens and unavoidable optical errors, such as astigmatism and distortions. We use wavefront tracing to derive an accurate error functional describing the desired properties and the optical error across a lens. Minimizing this error yields optimal free-form lens surfaces.
While the basic approach is much more general, in this paper, we describe its application to the particular problem of designing and evaluating progressive lenses and demonstrate the benefits of the new approach with several example images.

18.
Fast, realistic lighting for video games   (total citations: 3; self: 0; other: 3)
A novel, view-independent technology produces natural-looking lighting effects faster than radiosity and ray tracing. The approach is suited for 3D real-time interactive applications and production rendering.

19.
We present a practical real‐time approach for rendering lens‐flare effects. While previous work employed costly ray tracing or complex polynomial expressions, we present a coarser, but also significantly faster solution. Our method is based on a first‐order approximation of the ray transfer in an optical system, which allows us to derive a matrix that maps lens flare‐producing light rays directly to the sensor. The resulting approach is easy to implement and produces physically‐plausible images at high framerates on standard off‐the‐shelf graphics hardware.
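The "first-order approximation of the ray transfer" is standard paraxial (ABCD) matrix optics, where a ray is the pair (height, angle) and each optical element is a 2×2 matrix. A sketch under that convention; the flare-specific matrix derivation is omitted and the element spacings/focal lengths below are made up:

```python
import numpy as np

def propagate(dist):
    """Free-space propagation over distance `dist` acting on a paraxial
    ray state [height, angle]: height += dist * angle."""
    return np.array([[1.0, dist], [0.0, 1.0]])

def thin_lens(f):
    """Refraction by a thin lens of focal length f: angle -= height / f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A whole element chain collapses into a single 2x2 matrix by
# composition (rightmost element is hit first), which is what allows
# mapping flare-producing rays directly to the sensor in one multiply:
system = propagate(30.0) @ thin_lens(80.0) @ propagate(5.0) @ thin_lens(50.0)
```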

20.
This paper examines the effectiveness of load balancing strategies for ray tracing on large parallel computer systems and cluster computers. Popular static load balancing strategies are shown to be inadequate for rendering complex images with contemporary ray tracing algorithms, and for rendering NTSC resolution images on 128 or more computers. Strategies based on image tiling are shown to be ineffective except on very small numbers of computers. A dynamic load balancing strategy, based on a diffusion model, is applied to a parallel Monte Carlo rendering system. The diffusive strategy is shown to remedy the defects of the static strategies. A hybrid strategy that combines static and dynamic approaches produces nearly optimal performance on a variety of images and computer systems. The theoretical results should be relevant to other rendering and image processing applications.
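The diffusion idea can be sketched as each worker repeatedly exchanging a fraction of its load imbalance with its neighbors, so work spreads like heat until the distribution is flat. The ring topology and step size `alpha` below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def diffusion_step(loads, alpha=0.25):
    """One diffusion iteration on a ring of workers.

    Each worker moves a fraction `alpha` of its imbalance toward each of
    its two neighbors. Total load is conserved exactly; repeated steps
    drive the distribution toward the uniform average.
    """
    left = np.roll(loads, 1)
    right = np.roll(loads, -1)
    return loads + alpha * (left - loads) + alpha * (right - loads)
```

In a real renderer the "load" would be pending ray or tile work, and only the imbalance (not the task data) needs to be communicated each step, which is what makes the strategy cheap on large machines.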


Copyright©北京勤云科技发展有限公司  京ICP备09084417号