Similar Documents (20 results)
1.
This paper presents a novel framework for elliptical weighted average (EWA) surface splatting with time-varying scenes. We extend the theoretical basis of the original framework by replacing the 2D surface reconstruction filters with 3D kernels which unify the spatial and temporal components of moving objects. Based on the newly derived mathematical framework, we introduce a rendering algorithm that supports the generation of high-quality motion blur for point-based objects using a piecewise linear approximation of the motion. The rendering algorithm uses ellipsoids as rendering primitives, constructed by extending planar EWA surface splats into the temporal dimension along the instantaneous motion vector. Finally, we present an implementation of the proposed rendering algorithm with approximated occlusion handling using advanced features of modern GPUs and show its capability of producing motion-blurred result images at interactive frame rates.
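To make the splat-stretching step concrete, here is a minimal NumPy sketch of one plausible way to build such an ellipsoid kernel from a planar splat and its instantaneous motion vector. The function name, arguments, and the simple stretch term are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

def temporal_ellipsoid(cov2d, tangent_u, tangent_v, velocity, dt):
    """Sketch: extend a planar EWA splat into a 3D ellipsoid by
    stretching it along the instantaneous motion vector.
    cov2d:      2x2 covariance of the splat in its tangent frame.
    tangent_u/v: 3D tangent axes of the splat plane.
    velocity:   3D instantaneous motion vector; dt: exposure time."""
    T = np.column_stack([tangent_u, tangent_v])      # 3x2 tangent frame
    cov3d = T @ cov2d @ T.T                          # embed splat in 3D
    sweep = np.outer(velocity * dt, velocity * dt)   # simple stretch term
    return cov3d + sweep                             # 3x3 ellipsoid matrix
```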

2.
We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create a sampled representation of all parts of the scene geometry that are potentially visible at any point in time for the duration of a frame in an initial rasterization step. We store the resulting temporally-varying fragments (t-fragments) in a bounding volume hierarchy which is rebuilt every frame using a fast spatial median construction algorithm. This makes our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t-fragments that intersect with a specific viewing ray at any point in time. Viewing rays are sampled according to the lens uv-sampling for depth-of-field effects. In a final temporal sampling step, we evaluate the predetermined viewing ray/t-fragment intersections for one or multiple points in time. This allows us to incorporate all standard shading effects including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance.
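As a rough illustration of the t-fragment hierarchy described above, the following Python sketch builds a BVH with spatial-median splits; the TFragment fields and node layout are assumptions for illustration, not the paper's data structures.

```python
import numpy as np

class TFragment:
    """Hypothetical time-varying fragment: an AABB covering its motion
    over the whole frame interval [t0, t1]."""
    def __init__(self, pos_t0, pos_t1):
        self.lo = np.minimum(pos_t0, pos_t1)
        self.hi = np.maximum(pos_t0, pos_t1)

def build_bvh(frags, leaf_size=4):
    """Recursive spatial-median build: split at the midpoint of the
    widest axis of the node's bounding box."""
    if len(frags) <= leaf_size:
        return {"leaf": frags}
    lo = np.min([f.lo for f in frags], axis=0)
    hi = np.max([f.hi for f in frags], axis=0)
    axis = int(np.argmax(hi - lo))                 # widest axis
    split = 0.5 * (lo[axis] + hi[axis])            # spatial median plane
    left  = [f for f in frags if 0.5 * (f.lo[axis] + f.hi[axis]) <  split]
    right = [f for f in frags if 0.5 * (f.lo[axis] + f.hi[axis]) >= split]
    if not left or not right:                      # degenerate split: halve
        mid = len(frags) // 2
        left, right = frags[:mid], frags[mid:]
    return {"bounds": (lo, hi),
            "left": build_bvh(left, leaf_size),
            "right": build_bvh(right, leaf_size)}
```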

3.
Interactive real-time motion blur
Motion blurring of fast-moving objects is highly desirable for virtual environments and 3D user interfaces. However, all currently known algorithms for generating motion blur are too slow for inclusion in interactive 3D applications. We introduce a new motion-blur algorithm that works in three dimensions on a per-object basis. The algorithm operates in real time even for complex objects consisting of several thousand polygons. While it only approximates true motion blur, the generated results are smooth and visually consistent. We achieve this performance breakthrough by taking advantage of hardware-assisted rendering of semitransparent polygons, a feature commonly available in today's workstations.
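The core trick, composited in software here purely for illustration, looks roughly like this; render_at and the uniform time weights are assumptions (the paper itself uses hardware-rendered semitransparent polygons).

```python
import numpy as np

def blur_by_transparent_copies(render_at, t0, t1, samples=8):
    """Approximate motion blur by averaging several semitransparent
    renderings of the object along its motion path.
    render_at(t) -> HxWx3 float image of the object at time t (assumed)."""
    acc = None
    alpha = 1.0 / samples                 # equal weight per time sample
    for i in range(samples):
        t = t0 + (t1 - t0) * (i + 0.5) / samples
        frame = render_at(t)
        acc = frame * alpha if acc is None else acc + frame * alpha
    return acc
```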

4.
We present a novel approach for rendering volumetric data including the Doppler effect of light. Similar to the acoustic Doppler effect, which is caused by relative motion between a sound emitter and an observer, light waves also experience compression or expansion when emitter and observer exhibit relative motion. We account for this by employing spectral volume rendering in an emission-absorption model, with the volumetric matter moving according to an accompanying vector field, and emitting and attenuating light at wavelengths subject to the Doppler effect. By introducing a novel piecewise linear representation of the involved light spectra, we achieve accurate volume rendering at interactive frame rates. We compare our technique to rendering with a traditional point-based spectral representation, and demonstrate its utility using a simulation of galaxy formation.
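For reference, the underlying non-relativistic Doppler relation is easy to state in code, and a piecewise-linear spectrum stays piecewise linear under the shift (only its breakpoint wavelengths move), which is what makes such a representation convenient. The function names are illustrative.

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(wavelength_nm, v_los):
    """Non-relativistic Doppler shift of a wavelength. v_los is the
    line-of-sight velocity in m/s, positive when the emitter recedes
    from the observer (redshift)."""
    return wavelength_nm * (1.0 + v_los / C)

def shift_spectrum(breakpoints_nm, values, v_los):
    """A piecewise-linear spectrum under a Doppler shift: the sample
    values are unchanged; only the breakpoint wavelengths move."""
    return [doppler_shift(w, v_los) for w in breakpoints_nm], values
```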

5.
To reduce perceived motion blur on liquid crystal displays, various techniques such as overdrive, scanning backlight, black-data insertion, black-field insertion, and frame-rate up-conversion are widely employed by the liquid crystal display industry. These techniques aim to steepen the edge transitions by improving the dynamic behavior of the light modulation. However, depending on the implementation, this may result in the perception of irregularly shaped motion-induced edge-blur profiles. It is not yet fully understood how these irregularities in the steepened edge-blur profiles contribute to the perceived sharpness of moving objects. To better understand the consequences of several motion-blur reduction techniques, a perception experiment is designed to evaluate the perceived sharpness of typical motion-induced edge-blur profiles at several contrast levels. Relevant characteristics of these profiles are determined on the basis of the perception results by means of regression analysis. As a result, a sharpness metric with two parameters is established, where one parameter relates to the edge slope and the other to the overshoot/undershoot part of the motion-induced edge-blur profile.
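A hedged sketch of how the two profile characteristics named above might be extracted from a measured edge profile; the paper's actual regression coefficients and metric form are not reproduced here.

```python
import numpy as np

def edge_profile_features(profile):
    """Extract the two characteristics the abstract relates to perceived
    sharpness from a 1D luminance profile across a moving edge:
    the steepest slope, and the overshoot/undershoot excursion."""
    lo, hi = profile[0], profile[-1]                  # target levels
    slope = np.max(np.abs(np.gradient(profile)))      # steepest transition
    overshoot = max(np.max(profile) - max(lo, hi), 0.0)
    undershoot = max(min(lo, hi) - np.min(profile), 0.0)
    return slope, overshoot + undershoot
```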

6.
LCD motion blur is a well-known phenomenon, and much research has been devoted to characterizing and improving it. Until recently, most studies focused on explaining the effects visible in black-and-white patterns, and hence color effects were ignored. However, when a colored pattern is moving over a colored background, an additional motion-induced artifact becomes visible, which is referred to as chromatic aberration. To describe this phenomenon, our model for characterizing the appearance of moving achromatic patterns is extended so that it now calculates the apparent image from the temporal step response of the individual primary colors. The results of a perception experiment indicate that there is a good correspondence between the apparent image predicted by the model and the actual image perceived during motion.

7.
Patrol-type surveillance is performed everywhere, from police city patrols to railway inspection. Unlike static cameras or sensors distributed in a space, such surveillance offers low cost, long range, and efficiency in detecting infrequent changes. The challenges, however, are how to archive daily recorded videos in limited storage space and how to build a visual representation for quick and convenient access to the archived videos. We tackle these problems by acquiring and visualizing route panoramas of rail scenes. We analyze the relation between train motion and video sampling, along with constraints such as resolution, motion blur, and stationary blur, to obtain a desirable panoramic image. The generated route panorama is a continuous image with complete, non-redundant scene coverage and compact data size, which can easily be streamed over the network for fast access, maneuvering, and automatic retrieval in railway environment monitoring. We then visualize the railway scene based on route panorama rendering for interactive navigation, inspection, and scene indexing.
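The basic acquisition step behind a route panorama is slit scanning: one fixed pixel column is taken from each video frame and the columns are concatenated. A minimal sketch, assuming frames are already sampled at a rate matched to the train speed (the constraint the abstract analyzes):

```python
import numpy as np

def route_panorama(frames, slit_x=None):
    """Stack one fixed column from each frame into a continuous,
    non-redundant panorama. frames: list of HxW(x3) arrays."""
    h, w = frames[0].shape[:2]
    x = w // 2 if slit_x is None else slit_x
    return np.stack([f[:, x] for f in frames], axis=1)  # H x N panorama
```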

8.
Molecular visualization is an important tool for analysing the results of biochemical simulations. With modern GPU ray-casting approaches, it is only possible to render several million atoms interactively unless advanced acceleration methods are employed. Whole-cell simulations consist of at least several billion atoms even for simplified cell models. However, many instances of only a few different proteins occur in the intracellular environment, which can be exploited to fit the data into graphics memory: for each protein species, one model is stored and rendered once per instance. The proposed method exploits recent algorithmic advances for particle rendering and the repetitive nature of intracellular proteins to visualize dynamic results from mesoscopic simulations of cellular transport processes. We present two out-of-core optimizations for the interactive visualization of data sets composed of billions of atoms, as well as details on the data preparation and the employed rendering techniques. Furthermore, we apply advanced shading methods to improve image quality, including methods that enhance depth and shape perception, as well as non-photorealistic rendering methods. We also show that the method can be used to render scenes composed of triangulated instances, not only implicit surfaces.

9.
In this paper, we present a novel visualization technique, kinetic visualization, that uses motion along a surface to aid the perception of 3D shape and structure of static objects. The method uses particle systems, with rules such that particles flow over the surface of an object to bring out, and attract attention to, information on a shape that might not be readily visible with a conventional rendering method using lighting and view changes. Replacing still images with animations in this fashion, we demonstrate with both surface and volumetric models in the accompanying videos that, in many cases, the resulting visualizations effectively enhance the perception of three-dimensional shape and structure. We also describe how, for both types of data, a texture-based representation of this motion can be used for interactive visualization using PC graphics hardware. Finally, we present the results of a user study we conducted, which show evidence that the supplemental motion cues can be helpful.
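A minimal sketch of the particle-advection step, using a unit sphere to stand in for the general surfaces the paper handles; flow is an assumed user-supplied vector field, and the projection step is what keeps particles flowing over rather than off the surface.

```python
import numpy as np

def advect_on_sphere(points, flow, dt=0.05):
    """Advect particles along a vector field while constraining them to
    the surface (here a unit sphere).
    points: (N, 3) unit vectors; flow: callable (N,3) -> (N,3)."""
    v = flow(points)
    # project velocity onto the tangent plane so motion stays on-surface
    v -= points * np.sum(v * points, axis=1, keepdims=True)
    points = points + dt * v
    return points / np.linalg.norm(points, axis=1, keepdims=True)
```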

10.
We present a novel approach for interactive rendering of massive 3D models. Our approach integrates adaptive sampling-based simplification, visibility culling, out-of-core data management, and level-of-detail, using a unified scene-graph representation for the acceleration techniques. In preprocessing, we subdivide large objects and build a BVH clustering hierarchy. We use a novel adaptive sampling method, AdaptiveVoxels, to generate LOD models; it reduces the preprocessing cost, while our out-of-core rendering algorithm improves rendering efficiency. We have implemented our algorithm on a desktop PC and can interactively render massive CAD and isosurface models consisting of hundreds of millions of triangles with little loss in image quality.

11.
This paper proposes an algorithm which uses image registration to estimate a non-uniform motion-blur point spread function (PSF) caused by camera shake. Our study is based on a motion-blur model which models the blur effects of camera shake using a set of planar perspective projections (i.e., homographies). This representation can fully describe the motions of camera shake in 3D which cause non-uniform motion blur. We transform the non-uniform PSF estimation problem into a set of image registration problems which estimate the homographies of the motion-blur model one by one through the Lucas-Kanade algorithm. We demonstrate the performance of our algorithm using both synthetic and real-world examples. We also discuss the effectiveness and limitations of our algorithm for non-uniform deblurring.
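For illustration, OpenCV's ECC registration (a relative of the Lucas-Kanade algorithm) can estimate a single homography between a template and an observed image; this stands in for, and is not, the paper's PSF estimation pipeline.

```python
import cv2
import numpy as np

def register_homography(template, image, iters=200, eps=1e-6):
    """Estimate a 3x3 homography mapping template -> image via ECC.
    Both images are assumed single-channel uint8 or float32."""
    warp = np.eye(3, dtype=np.float32)                 # initial homography
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, iters, eps)
    _, warp = cv2.findTransformECC(template, image, warp,
                                   cv2.MOTION_HOMOGRAPHY, criteria, None, 5)
    return warp
```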

12.
We present a real-time framework which allows interactive visualization of relativistic effects for time-resolved light transport. We leverage data from two different sources: real-world data acquired with an effective exposure time of less than 2 picoseconds, using an ultra-fast imaging technique termed femto-photography, and a transient renderer based on ray tracing. We explore the effects of time dilation, light aberration, frequency shift, and radiance accumulation by modifying existing models of these relativistic effects to take into account the time-resolved nature of light propagation. Unlike previous works, we do not impose limiting constraints in the visualization, allowing the virtual camera to freely explore a reconstructed 3D scene depicting dynamic illumination. Moreover, we consider not only linear motion, but also acceleration and rotation of the camera. We further introduce, for the first time, a pinhole camera model into our relativistic rendering framework, and account for the subsequent changes in focal length and field of view as the camera moves through the scene.
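Two of the relativistic effects mentioned, light aberration and frequency shift, have compact closed forms; here is a sketch with the usual textbook conventions, not the paper's time-resolved extensions.

```python
import numpy as np

def aberrate(cos_theta, beta):
    """Relativistic aberration: a scene point at angle theta from the
    camera's motion direction (measured in the scene's rest frame) is
    seen by a camera moving at speed beta = v/c at angle theta', with
    cos(theta') given below; points bunch up ahead of the camera."""
    return (cos_theta + beta) / (1.0 + beta * cos_theta)

def doppler_factor(cos_theta, beta):
    """Observed/emitted frequency ratio for light from that same point:
    > 1 (blueshift) ahead of the moving camera, < 1 behind it."""
    gamma = 1.0 / np.sqrt(1.0 - beta * beta)
    return gamma * (1.0 + beta * cos_theta)
```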

13.
Light field reconstruction algorithms can substantially decrease the noise in stochastically rendered images. Recent algorithms for defocus blur alone are both fast and accurate. However, motion blur is a considerably more complex type of camera effect, and as a consequence, current algorithms are either slow or too imprecise to use in high-quality rendering. We extend previous work on real-time light field reconstruction for defocus blur to handle the case of simultaneous defocus and motion blur. By carefully introducing a few approximations, we derive a very efficient sheared reconstruction filter, which produces high-quality images even for a low number of input samples. Our algorithm is temporally robust, and is about two orders of magnitude faster than previous work, making it suitable both for real-time rendering and as a post-processing pass for offline rendering.

14.
Reflections, refractions, and caustics are very important for rendering global-illumination images. Although many methods can be applied to generate these effects, their rendering performance is not satisfactory for interactive applications. In this paper, complex ray-object intersections are simplified so that the intersections can be computed on a GPU, and an iterative computing scheme based on depth buffers is used to correct the approximate results caused by the simplification. As a result, reflections and refractions of environment maps and nearby geometry can be rendered on a GPU interactively without preprocessing. We can even achieve interactive recursive reflections and refractions by using an object-impostor technique. Moreover, caustic effects caused by reflections and refractions can be rendered by placing the eye at the light. Rendered results show that our method is efficient enough to render plausible images interactively for many applications.

15.
The scanning-backlight technique to improve the motion performance of LCDs is introduced. This technique, however, has some drawbacks, such as double edges and color aberration, which may become visible in moving patterns. A method combining accurate measurements of temporal luminance transitions with a simulation of human-eye tracking and spatiotemporal integration is used to model the motion-induced profile of an edge moving on a scanning-backlight LCD-TV panel that exhibits the two drawbacks mentioned above. The model results are validated with a perception experiment including different refresh rates, and a high correspondence is found between the simulated apparent edge and the one perceived during actual motion. Apart from motion-induced edge blur, the perception of a moving line or square-wave grating can also be predicted by the same method, starting from the temporal impulse and frame-sequential response curves, respectively. Motion-induced image degradation is evaluated for both a scanning- and a continuous-backlight mode based on three different characteristics: edge blur, line spreading, and modulation depth of a square-wave grating. The results indicate that the scanning-backlight mode yields better motion performance.

16.
It is difficult to render caustic patterns at interactive frame rates. This paper introduces new rendering techniques that relax current constraints, allowing scenes with moving, non-rigid scene objects, rigid caustic objects, and rotating directional light sources to be rendered in real time with GPU hardware acceleration. Because our algorithm estimates the intensity and the direction of caustic light, rendering of non-Lambertian surfaces is supported. Previous caustics algorithms have separated the problem into pre-rendering and rendering phases, storing intermediate results in data structures such as photon maps or radiance transfer functions. Our central idea is to use specially parameterized spot lights, called caustic spot lights (CSLs), as the intermediate representation of a two-phase algorithm. CSLs are flexible enough that a small number can approximate the light leaving a caustic object, yet simple enough that they can be efficiently evaluated by a pixel shader program during accelerated rendering. We extend our approach to support changing lighting direction by further dividing the pre-rendering phase into per-scene and per-frame components: the per-frame phase computes frame-specific CSLs by interpolating between CSLs that were pre-computed with differing light directions.
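The per-frame interpolation step might look like the following sketch; the CSL fields used here are hypothetical placeholders for the paper's richer parameterization.

```python
import numpy as np

def lerp_csl(csl_a, csl_b, t):
    """Interpolate between two caustic spot lights precomputed for
    nearby light directions to approximate the current direction.
    csl_a, csl_b: dicts with hypothetical fields; t in [0, 1]."""
    out = {}
    for key in ("position", "direction", "intensity", "falloff"):
        out[key] = ((1.0 - t) * np.asarray(csl_a[key])
                    + t * np.asarray(csl_b[key]))
    out["direction"] /= np.linalg.norm(out["direction"])  # renormalize
    return out
```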

17.
We propose a method for rendering volumetric data sets at interactive frame rates while supporting dynamic ambient occlusion as well as an approximation to color bleeding. In contrast to ambient occlusion approaches for polygonal data, techniques for volumetric data sets face additional challenges, since by changing rendering parameters, such as the transfer function or the thresholding, the structure of the data set and thus the light interactions may vary drastically. Therefore, during a preprocessing step which is independent of the rendering parameters, we capture light interactions for all combinations of structures extractable from a volumetric data set. To compute the light interactions between the different structures, we combine this preprocessed information during rendering based on the rendering parameters defined interactively by the user. Thus our method supports interactive exploration of a volumetric data set while still giving the user control over the most important rendering parameters. For instance, if the user alters the transfer function to extract different structures, the light interactions between the extracted structures are captured in the rendering while still allowing interactive frame rates. Compared to known local illumination models for volume rendering, our method does not introduce any substantial rendering overhead and can be integrated easily into existing volume rendering applications. In this paper we explain our approach, discuss the implications for interactive volume rendering, and present the achieved results.

18.
Direct volume rendering has become a popular method for visualizing volumetric datasets. Even though computers are continually getting faster, it remains a challenge to incorporate sophisticated illumination models into direct volume rendering while maintaining interactive frame rates. In this paper, we present a novel approach for advanced illumination in direct volume rendering based on GPU ray-casting. Our approach features directional soft shadows that take scattering into account, ambient occlusion, and color-bleeding effects, while achieving very competitive frame rates. In particular, multiple dynamic lights and interactive transfer-function changes are fully supported. Commonly, direct volume rendering is based on a very simplified discrete version of the original volume rendering integral, including the development of the original exponential extinction into α-blending. In contrast to α-blending, which forms a product when sampling along a ray, the original exponential extinction coefficient is an integral, and its discretization is a Riemann sum. The fact that it is a sum can cleverly be exploited to implement volume lighting effects, i.e., soft directional shadows, ambient occlusion, and color bleeding. We show how this can be achieved and how it can be implemented on the GPU.
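The identity being exploited is that α-blending's product of per-sample transparencies equals the exponential of a Riemann sum, and sums, unlike products, can be accumulated additively (e.g., for shadow or occlusion terms). A few lines make this concrete:

```python
import numpy as np

def transmittance_forms(sigma, ds):
    """sigma: extinction coefficients sampled along a ray; ds: step size.
    The two discretizations of the volume rendering integral agree."""
    alpha = 1.0 - np.exp(-sigma * ds)          # per-sample opacity
    product = np.prod(1.0 - alpha)             # classic alpha blending
    riemann = np.exp(-np.sum(sigma * ds))      # exp of a Riemann SUM
    assert np.isclose(product, riemann)        # exp(a+b) = exp(a)exp(b)
    return riemann
```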

19.
The quality of stereoscopic 3D cinematic content is a major determinant of user experience in immersive cinema, in both traditional theatres and cinematic virtual reality. One of the most important parameters is the frame rate of the content, which has historically been 24 frames per second for movies; higher frame rates are being considered for cinema and are standard for virtual reality. A typical behavioural response to immersive stereoscopic 3D content is vection, the visually induced perception of self-motion elicited by moving scenes. In this work we investigated how participants' vection varied with simulated virtual camera speed, frame rate, and the motion blur produced by the virtual camera's exposure, while they viewed depictions of movement through a realistic virtual environment. We also investigated how their postural sway varied with these parameters and how sway covaried with levels of perceived self-motion. Results show that while average perceived vection significantly increased with 3D content frame rate and motion speed, motion blur had no significant effect on perceived vection. We also found that levels of postural sway induced by vection correlated positively with subjective ratings.

20.
Depth-of-Field Rendering by Pyramidal Image Processing
We present an image-based algorithm for interactively rendering depth-of-field effects in images with depth maps. While previously published methods for interactive depth-of-field rendering suffer from various rendering artifacts such as color bleeding and sharpened or darkened silhouettes, our algorithm achieves significantly improved image quality by employing recently proposed GPU-based pyramid methods for image blurring and pixel disocclusion. For the same reason, our algorithm offers interactive rendering performance on modern GPUs and is suitable for real-time rendering for small circles of confusion. We validate the image quality provided by our algorithm through side-by-side comparisons with results obtained by distributed ray tracing.
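A toy CPU version of pyramid-based depth of field, selecting a pyramid level per pixel from the circle of confusion; the paper's GPU pyramid operations and disocclusion handling are omitted, and the per-pixel loop is for clarity rather than speed.

```python
import numpy as np

def box_down(img):
    """Halve resolution by 2x2 box averaging (one pyramid level)."""
    h, w = img.shape[0] & ~1, img.shape[1] & ~1
    return (img[:h:2, :w:2] + img[1:h:2, :w:2]
            + img[:h:2, 1:w:2] + img[1:h:2, 1:w:2]) * 0.25

def pyramidal_dof(img, coc, max_level=4):
    """Blur each pixel by reading the pyramid level whose footprint
    matches its circle of confusion (coc, in pixels, HxW array)."""
    levels = [img]
    for _ in range(max_level):
        levels.append(box_down(levels[-1]))
    lvl = np.clip(np.log2(np.maximum(coc, 1.0)), 0, max_level).astype(int)
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            k = lvl[y, x]
            yy = min(y >> k, levels[k].shape[0] - 1)  # clamp at borders
            xx = min(x >> k, levels[k].shape[1] - 1)
            out[y, x] = levels[k][yy, xx]
    return out
```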
