Similar Literature
20 similar documents found
1.
Empty-space skipping is an essential acceleration technique for volume rendering. Image-order empty-space skipping is not well suited to GPU implementation, since it must perform checks on, essentially, a per-sample basis, as in kd-tree traversal; this leads to a great deal of divergent branching at runtime, which is very expensive in a modern GPU pipeline. In contrast, object-order empty-space skipping is extremely fast on a GPU and has negligible overhead compared with approaches without empty-space skipping, since it employs the hardware rasterisation unit. However, previous object-order algorithms have been able to skip only exterior empty space, not the interior empty space that lies inside or between volume objects. In this paper, we address these issues by proposing a multi-layer depth-peeling approach that obtains all of the depth layers of the tight-fitting bounding geometry of the isosurface in a single rasterisation pass. Our approach can peel up to thousands of layers while maintaining 32-bit floating-point accuracy, which was not possible previously. By raytracing only the valid ray segments between each consecutive pair of depth layers, we skip both interior and exterior empty space efficiently. In comparisons with three state-of-the-art GPU isosurface rendering algorithms, this technique achieved much faster rendering across a variety of data sets.
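The core of the ray-segment idea above can be sketched in a few lines: given a pixel's sorted depth layers, consecutive (entry, exit) pairs bound the occupied intervals, and samples are taken only inside them. This is a minimal CPU-side Python illustration of the skipping logic, not the GPU depth-peeling pass itself; the function name and the fixed step size are illustrative.

```python
def skip_empty_space(depth_layers, step):
    """Given a pixel's sorted depth layers (entry/exit pairs of the
    tight-fitting bounding geometry), return sample positions lying
    only inside the valid ray segments, skipping empty space."""
    samples = []
    # Consecutive (even, odd) layers bound one occupied interval.
    for entry, exit_ in zip(depth_layers[0::2], depth_layers[1::2]):
        t = entry
        while t < exit_:
            samples.append(t)
            t += step
    return samples

# A pixel whose ray crosses two disjoint occupied intervals:
layers = [1.0, 1.5, 4.0, 4.25]
pts = skip_empty_space(layers, 0.25)
```

All empty space between 1.5 and 4.0 is skipped without any per-sample occupancy test.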

2.
Variable bit rate compression can achieve better quality and compression rates than fixed bit rate methods. Nonetheless, GPU texturing uses lossy fixed bit rate methods like DXT to allow random access and on-the-fly decompression during rendering. Changes in games and GPUs since DXT was developed make its compression artifacts less acceptable, and texture bandwidth less of an issue, but texture size is a serious and growing problem. Games use a large total volume of texture data, but have a much smaller active set. We present a new paradigm that separates GPU decompression from rendering. Rendering is from uncompressed data, avoiding the need for random-access decompression. We demonstrate this paradigm with a new variable bit rate lossy texture compression algorithm that is well suited to the GPU, including a new GPU-friendly formulation of range decoding, and a new texture compression scheme averaging a 12.4:1 lossy compression ratio on 471 real game textures with a quality level similar to traditional DXT compression. The total game texture set is stored on the GPU in compressed form and decompressed for use in a fraction of a second per scene.

3.
Dart-throwing can generate ideal Poisson-disk distributions with excellent blue noise properties, but is very computationally expensive if a maximal point set is desired. In this paper, we observe that the Poisson-disk sampling problem can be posed in terms of importance sampling by representing the available space to be sampled as a probability density function (pdf). This allows us to develop an efficient algorithm for the generation of maximal Poisson-disk distributions with quality similar to naïve dart-throwing but without rejection of samples. In our algorithm, we first position samples in one dimension based on the marginal cumulative distribution function (cdf). We then throw samples in the other dimension only in the regions that remain available for sampling. After each 2D sample is placed, we update the cdf and data structures to keep track of the available regions. In addition to uniform sampling, our method can perform variable-density sampling with small modifications. Finally, we also propose a new min-conflict metric for variable-density sampling which results in better adaptation of samples to the underlying importance field.
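For reference, the naive dart-throwing baseline that the paper avoids can be sketched as follows; a uniform grid with cell size r/√2 (so each cell holds at most one sample) keeps every rejection test local. This is a hedged illustration of the classical algorithm, not the authors' cdf-based method; the function name and parameters are ours.

```python
import math
import random

def dart_throwing(radius, n_attempts, seed=0):
    """Naive Poisson-disk sampling in the unit square: accept a random
    candidate only if no previously accepted sample lies within
    `radius`. A grid with cell size radius/sqrt(2) limits the distance
    check to a 5x5 neighborhood of cells."""
    rng = random.Random(seed)
    cell = radius / math.sqrt(2)
    grid = {}
    samples = []
    for _ in range(n_attempts):
        x, y = rng.random(), rng.random()
        gx, gy = int(x / cell), int(y / cell)
        ok = True
        for i in range(gx - 2, gx + 3):
            for j in range(gy - 2, gy + 3):
                s = grid.get((i, j))
                if s is not None and (s[0] - x) ** 2 + (s[1] - y) ** 2 < radius ** 2:
                    ok = False
        if ok:
            grid[(gx, gy)] = (x, y)
            samples.append((x, y))
    return samples

pts = dart_throwing(0.1, 2000)
```

Note how late in the run almost every dart is rejected, which is exactly the cost that rejection-free sampling removes.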

4.
In many cases, only the combination of geometric and volumetric data sets is able to describe a single phenomenon under observation when visualizing large and complex data. When semi-transparent geometry is present, correct rendering results require sorting of transparent structures. Additional complexity is introduced as the contributions from volumetric data have to be partitioned according to the geometric objects in the scene. The A-buffer, an enhanced framebuffer with additional per-pixel information, has previously been introduced to deal with the complexity caused by transparent objects. In this paper, we present an optimized rendering algorithm for hybrid volume-geometry data based on the A-buffer concept. We propose two novel components for modern GPUs that tailor memory utilization to the depth complexity of individual pixels. The proposed components are compatible with modern A-buffer implementations and yield performance gains of up to eight times compared to existing approaches through reduced allocation and reuse of fast cache memory. We demonstrate the applicability of our approach and its performance with several examples from molecular biology, space weather and medical visualization containing both volumetric data and geometric structures.
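The A-buffer concept underlying this work can be illustrated with a per-pixel fragment list that is depth-sorted and blended front-to-back, with early termination once the pixel is effectively opaque. This is a minimal sketch of the compositing step only; the fragment layout and threshold are illustrative, not the paper's GPU data structure.

```python
def composite_abuffer(fragments):
    """Sort one pixel's fragment list by depth and blend front-to-back.
    Each fragment is (depth, (r, g, b), alpha)."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for depth, rgb, alpha in sorted(fragments):
        for c in range(3):
            color[c] += transmittance * alpha * rgb[c]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:   # early termination: pixel is opaque
            break
    return color, transmittance

frags = [(2.0, (0.0, 1.0, 0.0), 0.5),   # green, behind
         (1.0, (1.0, 0.0, 0.0), 0.5)]   # red, in front
rgb, t = composite_abuffer(frags)
```

The red fragment is composited first despite being listed second, because the A-buffer sorts by stored depth rather than submission order.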

5.
Poisson-disk sampling is a popular sampling method because of its blue noise power spectrum, but generation of these samples is computationally very expensive. In this paper, we propose an efficient method for fast generation of a large number of blue noise samples using a small initial patch of Poisson-disk samples that can be generated with any existing approach. Our main idea is to convolve this set of samples with another to generate our final set of samples. We use the convolution theorem from signal processing to show that the spectrum of the resulting sample set preserves the blue noise properties. Since our method is approximate, it incurs error with respect to true Poisson-disk samples, but we show both mathematically and practically that this error is only a function of the number of samples in the small initial patch and is therefore bounded. Our method is parallelizable, and we demonstrate a GPU implementation that runs more than 10 times faster than any previous method, generating more than 49 million 2D samples per second. The proposed approach can also generate multidimensional blue noise samples.
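Convolving two point sets amounts to a Minkowski sum: every sample of the small patch is offset by every sample of the second set, and by the convolution theorem the resulting power spectrum is the product of the two sets' spectra. The toy sketch below only shows this combinatorial structure; the paper's actual construction is more careful, and the names are ours.

```python
def convolve_point_sets(patch, coarse):
    """'Convolve' a small sample patch with a second sample set: the
    result contains every patch point offset by every coarse point
    (a Minkowski sum of the two point sets)."""
    return [(px + cx, py + cy)
            for (cx, cy) in coarse
            for (px, py) in patch]

# Replicate a tiny 2-sample patch over a 2x2 grid of translations:
patch = [(0.25, 0.25), (0.75, 0.5)]
grid = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
out = convolve_point_sets(patch, grid)
```

With |patch| = 2 and |coarse| = 4, the output has 8 samples; the cost of generating a huge set is thus dominated by the cheap coarse set, not by regenerating Poisson-disk patches.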

6.
We present a non-photorealistic rendering technique to transform color images and videos into painterly abstractions. It is based on a generalization of the Kuwahara filter that is adapted to the local shape of features, derived from the smoothed structure tensor. Contrary to conventional edge-preserving filters, our filter generates a painting-like flattening effect along the local feature directions while preserving shape boundaries. As opposed to conventional painting algorithms, it produces temporally coherent video abstraction without extra processing. The GPU implementation of our method processes video in real time. The results have the clarity of cartoon illustrations but also exhibit directional information as found in oil paintings.
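The classic, isotropic Kuwahara filter that this work generalizes can be sketched directly: each pixel takes the mean of whichever of its four corner quadrants has the smallest variance, which flattens regions while keeping edges. Below is a minimal grayscale Python sketch; the paper's anisotropic, structure-tensor-guided variant is substantially more involved.

```python
def kuwahara(img, k=1):
    """Classic Kuwahara filter on a grayscale image (list of rows):
    each pixel becomes the mean of the (k+1)x(k+1) corner quadrant
    with the smallest variance, flattening regions but keeping edges."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = None
            for dy, dx in ((-k, -k), (-k, 0), (0, -k), (0, 0)):
                vals = []
                for j in range(k + 1):
                    for i in range(k + 1):
                        yy = min(max(y + dy + j, 0), h - 1)  # clamp at borders
                        xx = min(max(x + dx + i, 0), w - 1)
                        vals.append(img[yy][xx])
                m = sum(vals) / len(vals)
                var = sum((v - m) ** 2 for v in vals)
                if best is None or var < best[0]:
                    best = (var, m)
            out[y][x] = best[1]
    return out

# A hard vertical edge survives the filter unblurred:
step = [[0.0, 0.0, 1.0, 1.0]] * 4
flt = kuwahara(step)
```

Unlike a box or Gaussian blur, the step edge is preserved exactly because some quadrant on the correct side of the edge always has zero variance.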

7.
Diorama artists produce a spectacular 3D effect in a confined space by generating depth illusions that are faithful to the ordering of the objects in a large real or imaginary scene. Indeed, cognitive scientists have discovered that depth perception is mostly affected by depth order and precedence among objects. Motivated by these findings, we employ ordinal cues to construct a model from a single image that, similar to dioramas, intensifies the depth perception. We demonstrate that such models are sufficient for the creation of realistic 3D visual experiences. The initial step of our technique extracts several relative depth cues that are well known to exist in the human visual system. Next, we integrate the resulting cues to create a coherent surface. We introduce wide slits in the surface, thus generalizing the concept of cardboard cutout layers. Lastly, the surface geometry and texture are extended alongside the slits to allow small changes in the viewpoint, which enriches the depth illusion.

8.
SecondSkin estimates an appearance model for an object visible in a video sequence, without the need for complex interaction or any calibration apparatus. This model can then be transferred to other objects, allowing a non-expert user to insert a synthetic object into a real video sequence so that its appearance matches that of an existing object, and changes appropriately throughout the sequence. As the method does not require any prior knowledge about the scene, the lighting conditions, or the camera, it is applicable to video which was not captured with this purpose in mind. However, this lack of prior knowledge precludes the recovery of separate lighting and surface reflectance information. The SecondSkin appearance model therefore combines these factors. The appearance model does require a dominant light-source direction, which we estimate via a novel process involving a small amount of user interaction. The resulting model estimate provides exactly the information required to transfer the appearance of the original object to new geometry composited into the same video sequence.

9.
Edge-preserving image filtering is a valuable tool for a variety of applications in image processing and computer vision. Motivated by a new, simple but effective local Laplacian filter, we propose a scalable and efficient image filtering framework that extends this edge-preserving filter and constructs a uniform implementation in O(N) time. The proposed framework is built upon a practical global-to-local strategy. The input image is first remapped globally by a series of tentative remapping functions to generate a virtual candidate image sequence (Virtual Image Pyramid Sequence, VIPS). This sequence is then recombined locally into a single output image by a flexible edge-aware pixel-level fusion rule. To avoid halo artifacts, both the output image and the virtual candidate image sequence are transformed into multi-resolution pyramid representations. Four examples, single image dehazing, multi-exposure fusion, fast edge-preserving filtering and tone mapping, are presented as concrete applications of the proposed framework. Experiments on filtering effect and computational efficiency indicate that the proposed framework can build a wide range of fast image filters that yield visually compelling results.
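The multi-resolution pyramid machinery such frameworks rest on can be illustrated in 1D: a Laplacian pyramid stores, per level, the detail lost by one blur/downsample step, and collapsing the pyramid reconstructs the signal. This is a minimal sketch with an illustrative [1,2,1]/4 kernel, not the paper's implementation.

```python
def downsample(s):
    # Blur with a [1, 2, 1]/4 kernel (edge-replicated), keep every 2nd sample.
    padded = [s[0]] + list(s) + [s[-1]]
    blurred = [(padded[i - 1] + 2 * padded[i] + padded[i + 1]) / 4
               for i in range(1, len(s) + 1)]
    return blurred[::2]

def upsample(s, n):
    # Linear interpolation back to length n.
    out = []
    for i in range(n):
        pos = i / 2
        lo = min(int(pos), len(s) - 1)
        hi = min(lo + 1, len(s) - 1)
        out.append(s[lo] + (pos - lo) * (s[hi] - s[lo]))
    return out

def laplacian_pyramid(signal, levels):
    """Each level stores the detail lost by one blur/downsample step;
    the final entry is the coarse residual."""
    pyr, cur = [], list(signal)
    for _ in range(levels):
        small = downsample(cur)
        up = upsample(small, len(cur))
        pyr.append([a - b for a, b in zip(cur, up)])
        cur = small
    pyr.append(cur)
    return pyr

def collapse(pyr):
    """Invert the pyramid: upsample the coarse level, add back details."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = [a + b for a, b in zip(upsample(cur, len(detail)), detail)]
    return cur

sig = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0, 49.0]
rec = collapse(laplacian_pyramid(sig, 2))
```

Because the detail levels record exactly what each downsample discarded, collapsing recovers the input, which is why per-level fusion rules (as in the framework above) can edit detail without halos.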

10.
Many works focus on multi-spectral capture and analysis, but multi-spectral display still remains a challenge. Most prior works on multi-primary displays use ad-hoc narrow band primaries that assure a larger color gamut, but cannot assure good spectral reproduction. Content-dependent spectral analysis is the only way to produce good spectral reproduction, but cannot be applied to general data sets. Wide band primaries are better suited for assuring good spectral reproduction due to greater coverage of the spectral range, but have not been explored much. In this paper we explore the use of wide band primaries for accurate spectral reproduction for the first time and present the first content-independent multi-spectral display, achieved using superimposed projections with modified wide band primaries. We present a content-independent primary selection method that selects a small set of n primaries from a large set of m candidate primaries, where m > n. Our primary selection method chooses primaries with complete coverage of the visible wavelength range (for good spectral reproduction accuracy), low interdependency (to limit the primaries to a small number) and high light throughput (for high light efficiency). Once the primaries are selected, the input values of the different primary channels needed to generate a desired spectrum are computed using an optimization method that minimizes spectral mismatch while maximizing visual quality. We implement a real 9-primary multi-spectral display prototype using three modified conventional 3-primary projectors, and compare it with a conventional display to demonstrate its superior performance. Experiments show our display provides a large gamut with good visual appearance while displaying multi-spectral images at high spectral accuracy.

11.
We present a new algorithm for efficient rendering of high-quality depth-of-field (DoF) effects. We start with a single rasterized view (reference view) of the scene, and sample the light field by warping the reference view to nearby views. We implement the algorithm using NVIDIA's CUDA to achieve parallel processing, and exploit the atomic operations to resolve visibility when multiple pixels warp to the same image location. We then directly synthesize DoF effects from the sampled light field. To reduce aliasing artifacts, we propose an image-space filtering technique that compensates for spatial undersampling using MIP mapping. The main advantages of our algorithm are its simplicity and generality. We demonstrate interactive rendering of DoF effects in several complex scenes. Compared to existing methods, ours does not require ray tracing and hence scales well with scene complexity.
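The amount of defocus blur such methods reproduce follows from the thin-lens model: a point away from the focus depth projects to a circle of confusion whose radius grows with defocus. Below is a small sketch of that relation; the paper samples the light field rather than evaluating this formula directly, and the names and units are illustrative.

```python
def coc_radius(depth, focus_depth, focal_len, aperture):
    """Thin-lens circle-of-confusion radius on the sensor for a point
    at `depth`, with the lens focused at `focus_depth`. Focal length,
    depths, and aperture diameter share the same units."""
    # Image distances from the thin-lens equation: 1/f = 1/d + 1/i
    i_focus = 1.0 / (1.0 / focal_len - 1.0 / focus_depth)
    i_point = 1.0 / (1.0 / focal_len - 1.0 / depth)
    # Similar triangles: the cone from the aperture converging at
    # i_point has diameter aperture * |i_point - i_focus| / i_point
    # at the sensor plane i_focus.
    return abs(aperture / 2.0 * (i_point - i_focus) / i_point)

r_sharp = coc_radius(2000.0, 2000.0, 50.0, 25.0)  # in focus
r_near = coc_radius(1000.0, 2000.0, 50.0, 25.0)   # defocused
```

A point at the focus depth maps to radius zero; anything nearer or farther spreads over a disk, which is what the warped light-field samples integrate over.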

12.
We introduce a novel method for enabling stereoscopic viewing of a scene from a single pre-segmented image. Rather than attempting full 3D reconstruction or accurate depth map recovery, we hallucinate a rough approximation of the scene's 3D model using a number of simple depth and occlusion cues and shape priors. We begin by depth-sorting the segments, each of which is assumed to represent a separate object in the scene, resulting in a collection of depth layers. The shapes and textures of the partially occluded segments are then completed using symmetry and convexity priors. Next, each completed segment is converted to a union of generalized cylinders yielding a rough 3D model for each object. Finally, the object depths are refined using an iterative ground fitting process. The hallucinated 3D model of the scene may then be used to generate a stereoscopic image pair, or to produce images from novel viewpoints within a small neighborhood of the original view. Despite the simplicity of our approach, we show that it compares favorably with state-of-the-art depth ordering methods. A user study was conducted showing that our method produces more convincing stereoscopic images than existing semi-interactive and automatic single image depth recovery methods.

13.
In this paper, we propose an interactive technique for constructing a 3D scene via sparse user inputs. We represent a 3D scene in the form of a Layered Depth Image (LDI), which is composed of a foreground layer and a background layer, each with a corresponding texture and depth map. Given user-specified sparse depth inputs, depth maps are computed based on superpixels using interpolation with geodesic-distance weighting and an optimization framework. This computation is done immediately, which allows the user to edit the LDI interactively. Additionally, our technique automatically estimates depth and texture in occluded regions using the depth discontinuity. In our interface, the user paints strokes on the 3D model directly. The drawn strokes serve as 3D handles with which the user can pull out or push in the 3D surface easily and intuitively with real-time feedback. We show that our technique enables efficient modeling of LDIs that produce convincing 3D effects.

14.
We describe a fast sampling algorithm for generating uniformly-distributed point patterns with good blue noise characteristics. The method, based on constrained farthest point optimization, is provably optimal and may be easily parallelized, resulting in an algorithm whose performance/quality tradeoff is superior to other state-of-the-art approaches.
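The unconstrained greedy core of farthest-point optimization is easy to sketch: repeatedly pick the candidate farthest from everything chosen so far. This is only the greedy baseline, not the paper's provably optimal constrained formulation; the names are ours.

```python
import math

def farthest_point_sampling(points, k):
    """Greedy farthest-point sampling: starting from the first point,
    repeatedly add the candidate whose distance to the chosen set is
    largest, maintaining per-point nearest-chosen distances."""
    chosen = [points[0]]
    dist = [math.dist(p, chosen[0]) for p in points]
    for _ in range(k - 1):
        idx = max(range(len(points)), key=dist.__getitem__)
        chosen.append(points[idx])
        for i, p in enumerate(points):
            dist[i] = min(dist[i], math.dist(p, points[idx]))
    return chosen

# The four corners dominate the center point for k = 4:
candidates = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]
picked = farthest_point_sampling(candidates, 4)
```

Greedy selection spreads points apart, which is the intuition behind the blue-noise quality of the optimized variant.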

15.
Shaky motion is one of the most significant problems in home videos, since hand shake is unavoidable when shooting with a hand-held camcorder. Video stabilization is an important technique for solving this problem, but the stabilized videos produced by some current methods have reduced resolution and are still not very stable. In this paper, we propose a robust and practical method for full-frame video stabilization that considers the user's capturing intention, removing not only high-frequency shaky motions but also low-frequency unexpected movements. To infer the user's capturing intention, we first consider the regions of interest in the video to estimate which regions or objects the user wants to capture, and then use a polyline to estimate a new, stable camcorder motion path while avoiding cutting out the regions or objects of interest. We then fill the dynamic and static missing areas caused by frame alignment from other frames to keep the same resolution and quality as the original video. Furthermore, we smooth the discontinuous regions using a three-dimensional Poisson-based method. After these automatic operations, a full-frame stabilized video is achieved with the important regions and objects preserved.
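A common building block for removing high-frequency shake is low-pass filtering of the estimated camera path; the per-frame correction is then the difference between the smoothed and original positions. Below is a minimal 1D sketch; the paper instead fits a polyline informed by regions of interest, so this moving average is only an illustrative stand-in.

```python
def smooth_path(path, window):
    """Smooth a 1D camera trajectory with a centered moving average
    (window shrinks near the ends); the high-frequency residual
    path[i] - smoothed[i] is the shake to warp away."""
    half = window // 2
    smoothed = []
    for i in range(len(path)):
        lo, hi = max(0, i - half), min(len(path), i + half + 1)
        smoothed.append(sum(path[lo:hi]) / (hi - lo))
    return smoothed

shaky = [0.0, 2.0, 0.0, 2.0, 0.0, 2.0, 0.0]   # alternating jitter
stable = smooth_path(shaky, 3)
```

The smoothed path keeps the overall camera motion while the frame-to-frame jitter amplitude shrinks.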

16.
Recently, the problem of intrinsic shape matching has received a lot of attention. A number of algorithms have been proposed, among which random-sampling-based techniques have been particularly successful due to their generality and efficiency. We introduce a new sampling-based shape matching algorithm that uses a planning step to find optimized “landmark” points. These points are matched first in order to maximize the information gained and thus minimize the sampling costs. Our approach makes three main contributions: First, the new technique leads to a significant improvement in performance, which we demonstrate on a number of benchmark scenarios. Second, our technique does not require any keypoint detection, which is often a significant limitation for models that do not show sufficient surface features. Third, we examine the actual numerical degrees of freedom of the matching problem for a given piece of geometry. In contrast to previous results, our estimates take into account imprecise geodesics and potentially numerically unfavorable geometry of general topology, giving a more realistic complexity estimate.

17.
Repeated scene elements are copious and ubiquitous in natural images. Cutting out such repeated elements usually involves tedious and laborious user interaction with previous image segmentation methods. In this paper, we present RepSnapping, a novel method oriented to cutout of repeated scene elements with much less user interaction. By exploiting the inherent similarity between repeated elements, a new optimization model is introduced to thread correlated elements into the segmentation procedure. The proposed model enables efficient solution using max-flow/min-cut on an extended graph. Experiments indicate that RepSnapping facilitates cutout of repeated elements better than state-of-the-art interactive image segmentation and repetition detection methods.
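The max-flow/min-cut machinery such segmentation models rely on can be sketched with the Edmonds-Karp algorithm: by the max-flow/min-cut theorem, the value of the maximum flow equals the cost of the minimum cut separating source from sink. The compact dictionary-graph sketch below is illustrative only; practical graph-cut segmentation uses specialized solvers.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow on an adjacency-dict graph; the returned
    flow value equals the minimum s-t cut cost."""
    # Build residual capacities, adding zero-capacity reverse edges.
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow
        # Find the bottleneck along the path, then update residuals.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# Tiny graph: s -> a -> t and s -> b -> t, with a cross edge a -> b.
g = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}}
f = max_flow(g, 's', 't')
```

In a segmentation graph, terminal edges carry per-pixel foreground/background costs and neighbor edges carry similarity costs; the minimum cut is the cheapest label boundary.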

18.
This paper proposes a novel system that “rephotographs” a historical photograph using a collection of images. Rather than finding the exact viewpoint of the historical photo, users only need to take a number of photographs around the target scene. We adopt the structure-from-motion technique to estimate the spatial relationships among these photographs and construct a 3D point cloud. Based on user-specified correspondences between the projected 3D point cloud and the historical photograph, the camera parameters of the historical photograph are estimated. We then combine forward- and backward-warped images to render the result. Finally, inpainting and content-preserving warping are used to refine it, producing a photograph at the same viewpoint as the historical one from the photo collection.

19.
We present a novel stereo-to-multiview video conversion method for glasses-free multiview displays. Different from previous stereo-to-multiview approaches, our mapping algorithm utilizes the limited depth range of autostereoscopic displays optimally and strives to preserve the scene's artistic composition and perceived depth even under strong depth compression. We first present an investigation of how perceived image quality relates to spatial frequency and disparity. The outcome of this study is utilized in a two-step mapping algorithm, where we (i) compress the scene depth using a non-linear global function to the depth range of an autostereoscopic display and (ii) enhance the depth gradients of salient objects to restore the perceived depth and salient scene structure. Finally, an adapted image domain warping algorithm is proposed to generate the multiview output, which enables overall disparity range extension.
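Step (i), the nonlinear global compression of scene depth into a display's disparity budget, can be illustrated with a saturating curve that is roughly linear near zero (preserving small depth differences) and clamps smoothly at the display limit. The tanh shape below is our illustrative stand-in; the paper derives its mapping from a perceptual study.

```python
import math

def compress_disparity(d, d_scale_in, d_max_display):
    """Map a scene disparity d into the display's comfort range
    [-d_max_display, d_max_display] with a smooth saturating curve:
    nearly linear for small d, asymptotic at the display limit."""
    return d_max_display * math.tanh(d / d_scale_in)

mapped = [compress_disparity(d, 20.0, 8.0)
          for d in (-60.0, -5.0, 0.0, 5.0, 60.0)]
```

Large input disparities are squeezed toward the limit while near-zero disparities pass through almost unchanged, which is why a salient-gradient enhancement step (ii) is still needed afterwards.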

20.
High-quality video editing usually requires accurate layer separation in order to resolve occlusions. However, most existing bilayer segmentation algorithms require either considerable user intervention or a simple stationary camera configuration with a known background, which is difficult to meet for many real-world online applications. This paper demonstrates that various visually appealing montage effects can be created online from live video captured by a rotating camera, by accurately retrieving the camera state and segmenting out the dynamic foreground. The key contribution is a novel fast bilayer segmentation method that can effectively extract the dynamic foreground under a rotational camera configuration, and is robust to imperfect background estimation and complex background colors. Our system can create a variety of live visual effects, including but not limited to realistic virtual object insertion, background substitution and blurring, non-photorealistic rendering and camouflage effects. A variety of challenging examples demonstrate the effectiveness of our method.
