Similar Literature
20 similar documents found.
1.
In this paper, we describe a novel approach for the reconstruction of animated meshes from a series of time-deforming point clouds. Given a set of unordered point clouds that have been captured by a fast 3-D scanner, our algorithm is able to compute coherent meshes which approximate the input data at arbitrary time instances. Our method is based on the computation of an implicit function in ℝ⁴ that approximates the time-space surface of the time-varying point cloud. We then use the four-dimensional implicit function to reconstruct a polygonal model for the first time-step. By sliding this template mesh along the time-space surface in an as-rigid-as-possible manner, we obtain reconstructions for further time-steps which have the same connectivity as the previously extracted mesh while recovering rigid motion exactly. The resulting animated meshes allow accurate motion tracking of arbitrary points and are well suited for animation compression. We demonstrate the qualities of the proposed method by applying it to several data sets acquired by real-time 3-D scanners.

2.
We present an automatic image-recoloring technique for enhancing color contrast for dichromats, with a computational cost that varies linearly with the number of input pixels. Our approach can be efficiently implemented on GPUs, and we show that for typical image sizes it is up to two orders of magnitude faster than the current state-of-the-art technique. Unlike previous approaches, ours preserves temporal coherence and is therefore suitable for video recoloring. We demonstrate the effectiveness of our technique by integrating it into a visualization system and showing, for the first time, real-time high-quality recolored visualizations for dichromats.

3.
We present a real-time method for rendering a depth-of-field effect based on per-pixel layered splatting, where source pixels are scattered onto one of three layers of a destination pixel. In addition, the missing information behind foreground objects is filled with an additional image of the areas occluded by nearer objects. The method creates high-quality depth-of-field results even in the presence of partial occlusion, without the major artifacts often present in previous real-time methods. The method can also be applied to simulating defocused highlights. The entire framework is accelerated on the GPU, enabling real-time post-processing for both off-line and interactive applications.

4.
We present a simple and effective algorithm to transfer deformation between surface meshes with multiple components. The algorithm automatically computes spatial relationships between components of the target object, builds correspondences between source and target, and finally transfers deformation of the source onto the target while preserving cohesion between the target's components. We demonstrate the versatility of our approach on various complex models.

5.
We propose an algorithm to compute interactive indirect illumination in dynamic scenes containing millions of triangles. It makes use of virtual point lights (VPLs) to compute bounced illumination and a point-based scene representation to query indirect visibility, similar to Imperfect Shadow Maps (ISM). To ensure a high fidelity of indirect light and shadows, our solution is made view-adaptive by means of two orthogonal improvements: First, the VPL distribution is chosen to provide more detail, that is, denser VPL sampling, where the VPLs contribute most to the current view. Second, the scene representation for indirect visibility is adapted to ensure geometric detail where it affects indirect shadows in the current view.
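One simple way to obtain such a view-adaptive VPL distribution is to importance-resample a larger candidate set with probabilities proportional to an estimated per-view contribution, reweighting the survivors to keep the lighting estimate unbiased. The minimal Python sketch below shows only that resampling step; the contribution estimate, the point-based visibility queries, and all array shapes are assumptions rather than the paper's actual implementation.

```python
import numpy as np

def resample_vpls(candidates, contributions, n_keep, rng=None):
    """Pick a view-adaptive subset of VPLs by importance resampling.

    candidates:    (M, 3) candidate VPL positions (e.g. from light tracing).
    contributions: (M,) estimated contribution of each candidate to the
                   current view (any non-negative importance estimate).
    VPLs are drawn with probability proportional to their contribution, so
    sampling is densest where it matters most for the current frame.
    """
    if rng is None:
        rng = np.random.default_rng()
    p = contributions / contributions.sum()
    idx = rng.choice(len(candidates), size=n_keep, replace=True, p=p)
    # Standard importance-sampling weights so the estimator stays unbiased.
    weights = 1.0 / (n_keep * p[idx])
    return candidates[idx], weights
```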

6.
This paper proposes a novel framework that allows for flexible and efficient retrieval of motion capture data from huge databases. The method first converts an action sequence into a novel representation, the Self-Similarity Matrix (SSM), which is based on the notion of self-similarity. This conversion of the motion sequences into compact and low-rank subspace representations greatly reduces the spatiotemporal dimensionality of the sequences. The SSMs are then used to construct order-3 tensors, and we propose a low-rank decomposition scheme that allows for converting the motion sequence volumes into compact lower-dimensional representations without losing the nonlinear dynamics of the motion manifold. Thus, unlike existing linear dimensionality reduction methods that distort the motion manifold and lose very critical and discriminative components, the proposed method performs well even when inter-class differences are small or intra-class differences are large. In addition, the method allows for efficient retrieval and does not require time-alignment of the motion sequences. We evaluate the performance of our retrieval framework on the CMU mocap dataset under two experimental settings, both demonstrating promising retrieval rates.
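For illustration, a minimal Python sketch of the self-similarity idea (not the paper's implementation): each entry of the SSM is a pairwise distance between two frames' pose vectors, so the representation depends only on the sequence's internal dynamics. The Euclidean distance and the toy pose dimensionality are assumptions, and the subsequent tensor construction and low-rank decomposition are not reproduced.

```python
import numpy as np

def self_similarity_matrix(poses):
    """Pairwise distances between all frames of a motion sequence.

    poses: (T, D) array with one D-dimensional pose vector per frame.
    Returns the (T, T) self-similarity matrix, which captures the
    sequence's dynamics independently of absolute pose values.
    """
    diff = poses[:, None, :] - poses[None, :, :]   # (T, T, D)
    return np.linalg.norm(diff, axis=-1)           # (T, T)

# Toy usage: a 100-frame sequence with a 60-dimensional pose vector.
rng = np.random.default_rng(0)
poses = rng.standard_normal((100, 60))
ssm = self_similarity_matrix(poses)
print(ssm.shape)  # (100, 100)
```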

7.
We propose a novel approach to simulate the illumination of an augmented outdoor scene based on a legacy photograph. Unlike previous works, which take only surface radiosity or lighting-related prior information as the basis of illumination estimation, our method integrates both. By adopting spherical harmonics, we derive a linear model with only six illumination parameters. The illumination of an outdoor scene is then calculated by solving a linear least-squares problem with color constraints on the sunlight and the skylight. A high-quality environment map is then set up, leading to realistic rendering results. We also explore the problem of shadow casting between real and virtual objects without knowing the geometry of the objects which cast the shadows. An efficient method is proposed to project complex shadows (such as tree shadows) on the ground of the real scene onto the surface of the virtual object with texture mapping. Finally, we present a unified scheme for image composition of a real outdoor scene with virtual objects, ensuring their illumination consistency and shadow consistency. Experiments demonstrate the effectiveness and flexibility of our method.
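As a rough illustration of the least-squares step, the sketch below fits low-order spherical-harmonic lighting coefficients to the observed intensities of diffuse surface points. It uses only the first two SH bands (four coefficients per channel) and omits the sunlight/skylight color constraints, so it should be read as a generic sketch of the idea rather than the paper's six-parameter model; all names and data shapes are hypothetical.

```python
import numpy as np

def sh_basis_l01(normals):
    """First two real SH bands (4 functions) evaluated at unit normals (N, 3)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    c0, c1 = 0.282095, 0.488603            # Y_0^0 and Y_1^{-1,0,1} constants
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def fit_illumination(normals, observed, albedo=1.0):
    """Least-squares fit of low-order lighting coefficients.

    Solves  min_c || albedo * B c - observed ||^2  per color channel,
    where B holds the SH basis evaluated at each pixel's surface normal.
    observed: (N, 3) intensities of diffuse pixels; returns c of shape (4, 3).
    """
    B = albedo * sh_basis_l01(normals)                 # (N, 4)
    c, *_ = np.linalg.lstsq(B, observed, rcond=None)
    return c
```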

8.
Higher-order finite element methods have emerged as an important discretization scheme for simulation. They are increasingly used in contemporary numerical solvers, generating a new class of data that must be analyzed by scientists and engineers. Currently available visualization tools for this type of data are either batch oriented or limited to certain cell types and polynomial degrees. Other approaches approximate higher-order data by resampling, resulting in trade-offs in interactivity and quality. To overcome these limitations, we have developed a distributed visualization system which allows for interactive exploration of non-conforming unstructured grids, resulting from space-time discontinuous Galerkin simulations, in which each cell has its own higher-order polynomial solution. Our system employs GPU-based raycasting for direct volume rendering of complex grids which feature non-convex, curvilinear cells with varying polynomial degree. Frequency-based adaptive sampling accounts for the high variations along rays. For distribution across a GPU cluster, the initial object-space partitioning is determined by cell characteristics such as the polynomial degree and is adapted at runtime by a load-balancing mechanism. The performance and utility of our system are evaluated for different aeroacoustic simulations involving the propagation of shock fronts.

9.
This paper proposes an adaptive rendering technique for ray-bundle tracing. Ray-bundle tracing can be performed by per-pixel linked-list construction on a GPU rasterization pipeline. This rasterization-based approach offers significant benefits for the efficient generation of light maps (e.g., hardware acceleration, tessellation, and recycling of shaders used in real-time graphics). However, it is inapplicable to large and complex scenes due to the limited capacity of GPU memory, because it requires a high-resolution frame buffer and a high-capacity node buffer for the linked lists. In addition, memory overflow can potentially occur in the per-pixel linked lists, since the memory usage of the lists is usually unknown before the rendering process. We introduce an adaptive tiling technique with memory usage prediction. Our method uses an appropriately tiled frame buffer, eliminating almost all overflow risk thanks to our adaptive tile subdivision scheme. Using this technique, we are able to render high-quality light maps of large and complex scenes which cannot be computed using previous ray-bundle based methods.

10.
We present a practical real-time approach for rendering lens-flare effects. While previous work employed costly ray tracing or complex polynomial expressions, we present a coarser, but also significantly faster, solution. Our method is based on a first-order approximation of the ray transfer in an optical system, which allows us to derive a matrix that maps lens flare-producing light rays directly to the sensor. The resulting approach is easy to implement and produces physically plausible images at high frame rates on standard off-the-shelf graphics hardware.
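The first-order (paraxial) view of ray transfer can be illustrated with standard 2x2 ABCD matrices: composing the matrices of the individual optical elements yields a single matrix that maps an incoming ray directly to the sensor plane. The element spacings and focal lengths below are made-up values, and the sketch covers only the matrix composition itself, not the flare rendering built on top of it.

```python
import numpy as np

def translate(d):
    """Free-space propagation over distance d (paraxial ray transfer matrix)."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Refraction by a thin lens with focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A paraxial ray is (height r, angle u).  Matrices are composed in the order
# the ray traverses the elements (the rightmost element is hit first).
system = translate(5.0) @ thin_lens(50.0) @ translate(20.0) @ thin_lens(35.0)

ray_in = np.array([1.0, 0.02])      # enters 1 mm off-axis at 0.02 rad
ray_at_sensor = system @ ray_in     # [height, angle] at the sensor plane
print(ray_at_sensor)
```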

11.
Interactive computation of global illumination is a major challenge in current computer graphics research. Global illumination heavily affects the visual quality of generated images and is therefore a key attribute for the perception of photo-realistic images. Path tracing is able to simulate the physical behaviour of light using Monte Carlo techniques. However, the computational burden of this technique prohibits high-quality rendering at interactive rates on standard commodity hardware. Solving the Monte Carlo integration with fewer samples results in characteristic noisy images. Global illumination filtering methods take advantage of the fact that the integral for neighbouring pixels may be very similar. Averaging samples with similar characteristics in screen space may approximate the correct integral, but may result in visible outliers. In this paper, we present a novel path tracing pipeline based on an edge-aware filtering method for the indirect illumination which produces visually more pleasing results without noticeable outliers. The key idea is not to filter the noisy path-traced image but to use it as guidance for filtering a second image composed from characteristic scene attributes that are noise-free by default. We show that our approach better approximates the Monte Carlo integral compared to previous methods. Since the computation is carried out entirely in screen space, the approach is applicable to fully dynamic scenes and arbitrary lighting, and it allows for high-quality path tracing at interactive frame rates on commodity hardware.
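A common way to realise this kind of edge-aware filtering is a cross-bilateral filter whose edge-stopping weights come from noise-free per-pixel attributes such as normals and depth. The naive CPU sketch below shows that generic filter, not the paper's GPU pipeline (which, as described above, uses the noisy path-traced image itself as guidance); all parameter values are arbitrary.

```python
import numpy as np

def cross_bilateral(noisy, normals, depth, radius=4,
                    sigma_s=3.0, sigma_n=0.3, sigma_d=0.05):
    """Edge-aware smoothing of noisy per-pixel indirect illumination.

    Weights combine spatial distance with differences in noise-free
    G-buffer attributes (normals, depth), so smoothing stops at geometric
    edges.  noisy: (H, W, 3), normals: (H, W, 3), depth: (H, W).
    """
    H, W, _ = noisy.shape
    out = np.zeros_like(noisy)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            w_n = np.exp(-np.sum((normals[y0:y1, x0:x1] - normals[y, x]) ** 2,
                                 axis=-1) / (2 * sigma_n ** 2))
            w_d = np.exp(-(depth[y0:y1, x0:x1] - depth[y, x]) ** 2
                         / (2 * sigma_d ** 2))
            w = w_s * w_n * w_d
            out[y, x] = np.sum(noisy[y0:y1, x0:x1] * w[..., None],
                               axis=(0, 1)) / np.sum(w)
    return out
```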

12.
We propose a novel rendering method which supports interactive BRDF editing as well as relighting of a 3D scene. For interactive BRDF editing, we linearize an analytic BRDF model with basis BRDFs obtained from a principal component analysis. For each basis BRDF, the radiance transfer is precomputed and stored in vector form. At rendering time, the illumination of a point is computed by multiplying the radiance transfer vectors of the basis BRDFs by the incoming radiance from gather samples and then linearly combining the results, weighted by user-controlled parameters. To improve the level of accuracy, a set of sub-area samples associated with a gather sample refines the glossy reflection of the geometric details without increasing the precomputation time. We demonstrate the approach with a number of examples to verify the real-time performance of relighting and BRDF editing on 3D scenes with complex lighting and geometry.
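The rendering-time step then reduces to per-basis dot products and one user-weighted sum, which the toy sketch below illustrates with random data. The number of basis BRDFs, the gather-sample count, and the weights are invented for illustration, and the precomputation of the transfer vectors is not shown.

```python
import numpy as np

# Hypothetical sizes: K basis BRDFs (from PCA of an analytic BRDF family),
# S gather samples around the shaded point.
K, S = 5, 256
rng = np.random.default_rng(1)

transfer = rng.random((K, S))        # precomputed radiance transfer vectors
incoming = rng.random(S)             # incoming radiance at the gather samples
weights = np.array([0.6, 0.2, 0.1, 0.05, 0.05])   # user-edited BRDF weights

# Per-basis response is a dot product with the incoming radiance;
# the final shading is the user-weighted linear combination of the responses.
basis_response = transfer @ incoming           # (K,)
radiance = float(weights @ basis_response)
print(radiance)
```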

13.
In this paper we show how to use two-colored pixels as a generic tool for image processing. We apply two-colored pixels as a basic operator as well as a supporting data structure for several image processing applications. Traditionally, images are represented by a regular grid of square pixels with one constant color each. In the two-colored pixel representation, we reduce the image resolution and replace blocks of N × N pixels by one square that is split by a (feature) line into two regions with constant colors. We show how the conversion of standard mono-colored pixel images into two-colored pixel images can be computed efficiently by applying a hierarchical algorithm along with a CUDA-based implementation. Two-colored pixels overcome some of the limitations that classical pixel representations have, and their feature lines provide minimal geometric information about the underlying image region that can be effectively exploited for a number of applications. We show how to use two-colored pixels as an interactive brush tool, achieving real-time performance for image abstraction and non-photorealistic filtering. Additionally, we propose a real-time solution for image retargeting, defined as a linear minimization problem on a regular or even adaptive two-colored pixel image. The concept of two-colored pixels can be easily extended to a video volume, and we demonstrate this for the example of video retargeting.
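A brute-force way to convert one N × N block into a two-colored pixel is to search over a small set of candidate split lines and keep the one that minimises the two-mean color error. The naive sketch below illustrates only that fitting step, whereas the paper computes the conversion hierarchically with a CUDA-based implementation; the candidate counts and line parameterisation are arbitrary choices.

```python
import numpy as np

def fit_two_colored_pixel(block, n_angles=16, n_offsets=9):
    """Fit one split line and two constant colors to an N x N pixel block.

    Brute-force search over candidate lines (angle + offset through the
    block); each candidate partitions the pixels into two sides whose mean
    colors and squared error are evaluated.  block: (N, N, 3).
    Returns (mask, color_a, color_b) for the best candidate.
    """
    N = block.shape[0]
    ys, xs = np.mgrid[0:N, 0:N]
    u = (xs - (N - 1) / 2) / (N / 2)       # centred coordinates in [-1, 1]
    v = (ys - (N - 1) / 2) / (N / 2)
    best = (None, None, None, np.inf)
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        nx, ny = np.cos(theta), np.sin(theta)
        for c in np.linspace(-1.0, 1.0, n_offsets):
            mask = (u * nx + v * ny) > c   # which side of the line
            if mask.all() or (~mask).all():
                continue
            col_a = block[mask].mean(axis=0)
            col_b = block[~mask].mean(axis=0)
            err = ((block[mask] - col_a) ** 2).sum() + \
                  ((block[~mask] - col_b) ** 2).sum()
            if err < best[3]:
                best = (mask, col_a, col_b, err)
    return best[:3]
```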

14.
A vast number of applications require distance field computation over triangular meshes. State-of-the-art algorithms have quadratic or sub-quadratic worst-case complexity, making them impractical for interactive applications. While most research on this subject has focused on reducing the computational complexity of the algorithms, in this work we propose an approximate algorithm that achieves similar results by working on lower-resolution versions of the input meshes. The creation of lower-resolution meshes is the essence of our proposal. The idea is to identify regions on the input mesh that can be unfolded into planar regions with minimal area distortion (i.e. quasi-developable charts). Once charts are computed, their interior is re-triangulated to reduce the number of triangles, which results in a collection of simplified charts that we call a base mesh. Due to the properties of quasi-developable regions, we are able to compute distance fields over the base mesh instead of over the input mesh. This reduces the memory footprint and the amount of data processed for distance computations, which is the bottleneck of these algorithms. We present results that are one order of magnitude faster than current exact solutions, with low approximation errors.

15.
In this work we present a point classification algorithm for multi-variate data. Our method is based on the concept of attribute subspaces, which are derived from a set of user-specified attribute target values. Our classification approach enables users to visually distinguish regions of saliency through concurrent viewing of these subspaces in single images. We also allow a user to threshold the data according to a specified distance from the attribute target values. Based on the degree of thresholding, the remaining data points are assigned radii of influence that are used for the final coloring. This limits the view to only those points that are most relevant, while maintaining a similar visual context.
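As a minimal illustration of the thresholding step, the sketch below measures each point's normalised distance to the user-specified attribute targets, discards points beyond the threshold, and gives the remaining points a radius of influence that shrinks with distance. The specific distance-to-radius mapping is a hypothetical choice, not taken from the paper.

```python
import numpy as np

def classify_points(data, targets, scales, threshold):
    """Threshold multi-variate points by distance to attribute target values.

    data:    (N, A) data points with A attributes.
    targets: (A,) user-specified target value per attribute.
    scales:  (A,) per-attribute normalisation, e.g. the attribute ranges.
    Points within `threshold` of the targets are kept; each kept point gets
    a radius of influence that shrinks with its distance, so the closest
    matches dominate the final coloring.
    """
    dist = np.linalg.norm((data - targets) / scales, axis=1)
    keep = dist <= threshold
    radius = np.zeros(len(data))
    radius[keep] = 1.0 - dist[keep] / threshold   # 1 at the target, 0 at cutoff
    return keep, radius
```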

16.
Empty-space skipping is an essential acceleration technique for volume rendering. Image-order empty-space skipping is not well suited to GPU implementation, since it must perform checks on essentially a per-sample basis, as in kd-tree traversal; this can lead to a great deal of divergent branching at runtime, which is very expensive in a modern GPU pipeline. In contrast, object-order empty-space skipping is extremely fast on a GPU and has negligible overhead compared with approaches without empty-space skipping, since it employs the hardware rasterisation unit. However, previous object-order algorithms have been able to skip only exterior empty space, not the interior empty space that lies inside or between volume objects. In this paper, we address these issues by proposing a multi-layer depth-peeling approach that can obtain all of the depth layers of the tight-fitting bounding geometry of the isosurface in a single rasterisation pass. The number of layers peeled by our approach can reach into the thousands while maintaining 32-bit floating-point accuracy, which was not possible previously. By raytracing only the valid ray segments between each consecutive pair of depth layers, we can skip both the interior and exterior empty space efficiently. In comparisons with three state-of-the-art GPU isosurface rendering algorithms, this technique achieved much faster rendering across a variety of data sets.

17.
We propose a noise-adaptive shape reconstruction method specialized to smooth, closed shapes. Our algorithm takes as input a defect-laden point set with variable noise and outliers, and comprises three main steps. First, we compute a novel noise-adaptive distance function to the inferred shape, which relies on the assumption that the inferred shape is a smooth submanifold of known dimension. Second, we estimate the sign and confidence of the function at a set of seed points by minimizing a quadratic energy expressed on the edges of a uniform random graph. Third, we compute a signed implicit function through a random walker approach with soft constraints chosen as the most confident seed points computed in the previous step.

18.
This paper presents a new method for estimating normals on unorganized point clouds that preserves sharp features. It is based on a robust version of the Randomized Hough Transform (RHT). We consider the filled Hough transform accumulator as an image of the discrete probability distribution of possible normals. The normal we estimate corresponds to the maximum of this distribution. We use a fixed-size accumulator for speed, statistical exploration bounds for robustness, and randomized accumulators to prevent discretization effects. We also propose various sampling strategies to deal with anisotropy, as produced by laser scans due to differences in incidence angle. Our experiments show that our approach offers an ideal compromise between precision, speed, and robustness: it is at least as precise and noise-resistant as state-of-the-art methods that preserve sharp features, while being almost an order of magnitude faster. Moreover, it can handle anisotropy with only minor losses in speed and precision.
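The voting idea behind the estimator can be sketched in a few lines: random triplets of neighbouring points propose plane normals, the proposals are accumulated in a coarse spherical histogram, and the most-voted bin yields the estimate. The bin and triplet counts below are arbitrary, and the paper's fixed-size accumulator, statistical exploration bounds, and randomized accumulator rotations are omitted.

```python
import numpy as np

def rht_normal(neighbors, n_triplets=200, n_bins=16, rng=None):
    """Estimate a point normal with a Randomized Hough Transform (sketch).

    neighbors: (K, 3) points around the query point.  Random triplets vote
    for their plane normal in a coarse accumulator over the hemisphere; the
    bin with the most votes defines the estimate, which makes the result
    robust to outliers near sharp features.
    """
    if rng is None:
        rng = np.random.default_rng()
    votes = {}
    for _ in range(n_triplets):
        i, j, k = rng.choice(len(neighbors), size=3, replace=False)
        n = np.cross(neighbors[j] - neighbors[i], neighbors[k] - neighbors[i])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                       # degenerate (collinear) triplet
        n /= norm
        if n[2] < 0.0:                     # fold all normals onto one hemisphere
            n = -n
        theta = np.arccos(np.clip(n[2], -1.0, 1.0))
        phi = np.arctan2(n[1], n[0]) % (2.0 * np.pi)
        key = (int(theta / (np.pi / 2) * n_bins),
               int(phi / (2.0 * np.pi) * n_bins))
        votes.setdefault(key, []).append(n)
    winner = max(votes.values(), key=len)  # bin with the most votes
    n = np.mean(winner, axis=0)
    return n / np.linalg.norm(n)
```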

19.
We present a flexible and highly efficient hardware-assisted volume renderer grounded in the original Projected Tetrahedra (PT) algorithm. Unlike recent similar approaches, our method is exclusively based on the rasterization of simple geometric primitives and takes full advantage of graphics hardware. Both vertex and geometry shaders are used to compute the tetrahedral projection, while the volume ray integral is evaluated in a fragment shader; hence, volume rendering is performed entirely on the GPU within a single pass through the pipeline. We apply a CUDA-based visibility ordering, achieving combined rendering and sorting performance of over 6 M Tet/s for unstructured datasets. Furthermore, as each tetrahedron is processed independently, we employ a data-parallel solution which is neither bound by GPU memory size nor reliant on auxiliary volume information. In addition, iso-surfaces can be readily extracted during the rendering process, and time-varying data are handled without extra burden.

20.
This paper proposes two variants of a simple but efficient algorithm for structure-preserving halftoning. Our algorithm extends Floyd-Steinberg error diffusion; the goal of our extension is not only to produce good tone similarity but also to preserve structure and especially contrast, motivated by our intuition that human perception is sensitive to contrast. By enhancing contrast, we attempt to preserve and enhance structure as well. Our basic algorithm employs an adaptive, contrast-aware mask. To enhance contrast, darker pixels should be more likely to be chosen as black pixels while lighter pixels should be more likely to be set as white. Therefore, when positive error is diffused to nearby pixels in a mask, the dark pixels absorb less error and the light pixels absorb more. Conversely, negative error is distributed preferentially to dark pixels. We also propose using a mask with values that drop off steeply from the centre, intended to promote a good spatial distribution. This is a very fast method whose speed mainly depends on the size of the mask, but it suffers from distracting patterns. We then propose a variant on the basic idea which overcomes the first algorithm's shortcomings while maintaining its advantages through a priority-aware scheme. Rather than proceeding in random or raster order, we sort the image first; each pixel is assigned a priority based on its up-to-date distance to black or to white, and pixels with extreme intensities are processed earlier. Since we use the same mask strategy as before, we promote a good spatial distribution and high contrast. We use tone similarity, structure similarity, and contrast similarity to validate our algorithm. Comparisons with recent structure-aware algorithms show that our method gives better results without sacrificing speed.
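A minimal raster-order sketch of the contrast-aware diffusion idea: the standard Floyd-Steinberg weights are modulated by how willing each neighbour is to absorb error of the current sign, so positive error flows preferentially to lighter pixels and negative error to darker ones. The modulation formula and the plain raster traversal are simplifications; the adaptive mask shape, the steep fall-off, and the priority ordering of the second variant are not reproduced.

```python
import numpy as np

# Standard Floyd-Steinberg neighbour offsets (dy, dx) and base weights.
FS = [((0, 1), 7 / 16), ((1, -1), 3 / 16), ((1, 0), 5 / 16), ((1, 1), 1 / 16)]

def contrast_aware_halftone(img):
    """Error diffusion with intensity-modulated weights (illustrative sketch).

    img: (H, W) grayscale in [0, 1].  Positive quantisation error flows
    preferentially to lighter neighbours, negative error to darker ones,
    so dark pixels tend towards black and light pixels towards white.
    """
    work = img.astype(float).copy()
    out = np.zeros_like(work)
    H, W = work.shape
    for y in range(H):
        for x in range(W):
            old = work[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            offs, weights = [], []
            for (dy, dx), w in FS:
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W:
                    v = float(np.clip(work[ny, nx], 0.0, 1.0))
                    # Light pixels absorb positive error, dark pixels negative.
                    absorb = v if err > 0 else 1.0 - v
                    offs.append((ny, nx))
                    weights.append(w * (absorb + 1e-3))
            total = sum(weights)
            for (ny, nx), w in zip(offs, weights):
                work[ny, nx] += err * w / total
    return out
```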
