Similar Documents
 20 similar documents found (search time: 78 ms)
1.
In this paper, we present a novel exemplar‐based technique for the interpolation between two textures that combines patch‐based and statistical approaches. Motivated by the notion of texture as a largely local phenomenon, we warp and blend small image neighborhoods prior to patch‐based texture synthesis. In addition, interpolating and enforcing characteristic image statistics faithfully handles high frequency detail. We are able to create both intermediate textures and continuous transitions. In contrast to previous techniques that compute a global morphing transformation over the entire input exemplars, our localized and patch‐based approach allows us to successfully interpolate between textures with considerable differences in feature topology for which no smooth global warping field exists.
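A minimal sketch of the "interpolate and enforce image statistics" step, in Python: it assumes the enforced statistic is simply the per-pixel intensity distribution and matches a blended patch to a histogram interpolated between the two exemplars. The function name and the quantile-based formulation are illustrative choices, not the paper's actual implementation.

import numpy as np

def match_to_interpolated_stats(blend, tex_a, tex_b, alpha, n_quantiles=256):
    # Quantile functions (inverse CDFs) of both exemplars on a common grid.
    q = np.linspace(0.0, 1.0, n_quantiles)
    quant_a = np.quantile(tex_a.ravel(), q)
    quant_b = np.quantile(tex_b.ravel(), q)
    # Interpolated target statistics: blend the two quantile functions.
    quant_t = (1.0 - alpha) * quant_a + alpha * quant_b
    # Empirical CDF value (rank) of every pixel of the blended patch.
    flat = blend.ravel()
    ranks = np.argsort(np.argsort(flat)) / max(flat.size - 1, 1)
    # Enforce the interpolated statistics by mapping ranks through the target quantiles.
    matched = np.interp(ranks, q, quant_t)
    return matched.reshape(blend.shape)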

2.
Significant progress has been made in high-quality hair rendering, but it remains difficult to choose parameter values that reproduce a given real hair appearance. In particular, for applications such as games where naive users want to create their own avatars, tuning complex parameters is not practical. Our approach analyses a single flash photograph and estimates model parameters that reproduce the visual likeness of the observed hair. The estimated parameters include color absorptions, three reflectance lobe parameters of a multiple-scattering rendering model, and a geometric noise parameter. We use a novel melanin-based model to capture the natural subspace of hair absorption parameters. At its core, the method assumes that images of hair with similar color distributions are also similar in appearance. This allows us to recast the issue as an image retrieval problem where the photo is matched with a dataset of rendered images; we thus also match the model parameters used to generate these images. An earth-mover's distance is used between luminance-weighted color distributions to gauge similarity. We conduct a perceptual experiment to evaluate this metric in the context of hair appearance and demonstrate the method on 64 photographs, showing that it can achieve a visual likeness for a large variety of input photos.
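As a rough illustration of the retrieval metric, the sketch below compares two images by a luminance-weighted distance between their color distributions. It simplifies the paper's earth-mover's distance to per-channel 1D Wasserstein distances (via scipy.stats.wasserstein_distance); the function names and weighting details are assumptions.

import numpy as np
from scipy.stats import wasserstein_distance

def luminance(img):
    # Rec. 709 luma of a float RGB image in [0, 1].
    return 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]

def luminance_weighted_color_distance(img_a, img_b):
    # Weight each pixel's color by its luminance, then sum per-channel 1D
    # Wasserstein distances as a stand-in for the full earth-mover's distance.
    w_a = luminance(img_a).ravel() + 1e-6
    w_b = luminance(img_b).ravel() + 1e-6
    return sum(
        wasserstein_distance(img_a[..., c].ravel(), img_b[..., c].ravel(),
                             u_weights=w_a, v_weights=w_b)
        for c in range(3))

# Retrieval: the rendered image closest to the photo also supplies the model parameters.
# best = min(dataset, key=lambda entry: luminance_weighted_color_distance(photo, entry.image))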

3.
Texture Splicing     
We propose a new texture editing operation called texture splicing. For this operation, we regard a texture as having repetitive elements (textons) seamlessly distributed in a particular pattern. Taking two textures as input, texture splicing generates a new texture by selecting the texton appearance from one texture and the distribution from the other. Texture splicing involves a self‐similarity search to extract the distribution, distribution warping, context‐dependent warping, and finally texture refinement to preserve overall appearance. We show a variety of results to illustrate this operation.
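As an aside, the self‐similarity search step can be approximated by looking for the strongest peaks of the texture's autocorrelation; the sketch below (plain NumPy, hypothetical function name) returns the dominant repetition offsets of a grayscale exemplar and is only a simplified stand-in for the paper's distribution extraction.

import numpy as np

def dominant_offsets(gray, max_shift=64, top_k=4):
    # Circular autocorrelation via the FFT (Wiener-Khinchin).
    g = gray.astype(np.float64) - gray.mean()
    spectrum = np.fft.rfft2(g)
    auto = np.fft.irfft2(spectrum * np.conj(spectrum), s=g.shape)
    auto[0, 0] = 0.0                      # ignore the trivial zero offset
    h, w = g.shape
    window = auto[:min(max_shift, h // 2), :min(max_shift, w // 2)]
    best = np.argsort(window.ravel())[::-1][:top_k]
    # (dy, dx) offsets at which the texture best matches a shifted copy of itself.
    return [np.unravel_index(i, window.shape) for i in best]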

4.
Variable bit rate compression can achieve better quality and compression rates than fixed bit rate methods. Nonetheless, GPU texturing uses lossy fixed bit rate methods like DXT to allow random access and on‐the‐fly decompression during rendering. Changes in games and GPUs since DXT was developed make its compression artifacts less acceptable and texture bandwidth less of an issue, but texture size is a serious and growing problem. Games use a large total volume of texture data but have a much smaller active set. We present a new paradigm that separates GPU decompression from rendering. Rendering is performed from uncompressed data, avoiding the need for random-access decompression. We demonstrate this paradigm with a new variable bit rate lossy texture compression algorithm that is well suited to the GPU, including a new GPU‐friendly formulation of range decoding and a new texture compression scheme averaging a 12.4:1 lossy compression ratio on 471 real game textures at a quality level similar to traditional DXT compression. The total game texture set is stored on the GPU in compressed form and decompressed for use in a fraction of a second per scene.

5.
We investigate semi‐stochastic tilings based on Wang or corner tiles for the real‐time synthesis of example‐based textures. In particular, we propose two new tiling approaches: (1) replacing stochastic tilings with pseudo‐random tilings based on the Halton low‐discrepancy sequence, and (2) allowing the controllable generation of tilings based on a user‐provided probability distribution. Our first method prevents local repetition of texture content, which is common with stochastic approaches, and yields better results with smaller sets of utilized tiles. Our second method allows the user to directly influence the synthesis result; in combination with an enhanced tile construction method that merges multiple source textures, this extends synthesis to globally‐varying textures. We show that both methods can be implemented very efficiently in connection with tile‐based texture mapping and also present a general rule that allows resulting tile sets to be significantly reduced.
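The pseudo‐random tiling idea can be illustrated with a few lines of Python: tile indices are drawn from the Halton low-discrepancy sequence instead of a pseudo-random number generator, so nearby cells are unlikely to repeat the same tile. This sketch ignores the edge/corner matching constraints of real Wang or corner tiles; names and parameters are illustrative.

def halton(index, base):
    # Radical inverse of a 1-based index in the given base.
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def pseudo_random_tiling(rows, cols, num_tiles):
    tiling = []
    for r in range(rows):
        row = []
        for c in range(cols):
            i = r * cols + c + 1          # 1-based sample index
            u = halton(i, 2)              # well distributed in [0, 1)
            row.append(int(u * num_tiles) % num_tiles)
        tiling.append(row)
    return tiling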

6.
In this paper, we introduce a new texture metamorphosis approach for interpolating texture samples from a source texture into a target texture. We use a new energy optimization scheme derived from optimal control principles that exploits the structure of the metamorphosis optimality conditions. Our approach considers the change in pixel position and pixel appearance in a single framework. In contrast to previous techniques that compute a global warping based on feature masks of textures, our approach allows us to transform one texture into another by considering both intensity values and structural features of the textures simultaneously. We demonstrate the usefulness of our approach on different textures, such as stochastic, semi‐structural and regular textures, with different levels of complexity. Our method produces visually appealing transformation sequences with no user interaction.

7.
In this paper we introduce a new fixed‐rate texture compression scheme based on the energy compaction properties of a modified Haar transform. The coefficients of this transform are quantized and stored using standard block compression methods, such as DXTC and BC7, ensuring simple implementation and very fast decoding speeds. Furthermore, coefficients with the highest contribution to the final image are quantized with higher accuracy, improving the overall compression quality. The proposed modifications to the standard Haar transform, along with a number of additional optimizations, improve the coefficient quantization and reduce the compression error. The resulting method offers more flexibility than the currently available texture compression formats, providing a variety of additional low bitrate encoding modes for the compression of grayscale and color textures.
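For reference, one level of the standard 2D Haar transform already shows the energy compaction the scheme relies on: most of a block's energy ends up in the low-frequency quadrant, which can then be quantized more accurately. The paper's transform is a modified Haar with further optimizations, so the sketch below is only the baseline idea.

import numpy as np

def haar2d_level(block):
    # One level of a (non-normalized) 2D Haar transform of an even-sized block.
    b = block.astype(np.float64)
    lo = (b[:, 0::2] + b[:, 1::2]) / 2.0        # row averages
    hi = (b[:, 0::2] - b[:, 1::2]) / 2.0        # row differences
    rows = np.hstack([lo, hi])
    lo = (rows[0::2, :] + rows[1::2, :]) / 2.0  # column averages
    hi = (rows[0::2, :] - rows[1::2, :]) / 2.0  # column differences
    return np.vstack([lo, hi])                  # top-left quadrant holds most of the energy

# The low-frequency quadrant would be stored with the higher-accuracy block format
# (e.g. BC7) and the detail quadrants with a coarser one (e.g. DXTC), as described above.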

8.
Recent color transfer methods use local information to learn the transformation from a source to an exemplar image, and then transfer this appearance change to a target image. These solutions achieve very successful results for general mood changes, e.g., changing the appearance of an image from “sunny” to “overcast”. However, such methods have a hard time creating new image content, such as leaves on a bare tree. Texture transfer, on the other hand, can synthesize such content but tends to destroy image structure. We propose the first algorithm that unifies color and texture transfer, outperforming both by leveraging their respective strengths. A key novelty in our approach resides in teasing apart appearance changes that can be modeled simply as changes in color versus those that require new image content to be generated. Our method starts with an analysis phase which evaluates the success of color transfer by comparing the exemplar with the source. This analysis then drives a selective, iterative texture transfer algorithm that simultaneously predicts the success of color transfer on the target and synthesizes new content where needed. We demonstrate our unified algorithm by transferring large temporal changes between photographs, such as change of season – e.g., leaves on bare trees or piles of snow on a street – and flooding.
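To make the analysis phase concrete, here is a deliberately simple global color transfer (per-channel mean/variance matching in the style of Reinhard et al.) together with a residual that could flag regions where color transfer alone fails; the paper uses local color transfer and a more elaborate analysis, so both functions are only illustrative assumptions.

import numpy as np

def global_color_transfer(source, exemplar):
    # Match per-channel mean and standard deviation; inputs are float RGB in [0, 1].
    out = np.empty_like(source, dtype=np.float64)
    for c in range(3):
        s, e = source[..., c], exemplar[..., c]
        scale = e.std() / (s.std() + 1e-6)
        out[..., c] = np.clip((s - s.mean()) * scale + e.mean(), 0.0, 1.0)
    return out

def color_transfer_residual(transferred, exemplar):
    # Crude proxy for where color transfer fails and new content would be needed.
    return np.linalg.norm(transferred - exemplar, axis=-1)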

9.
We present a novel multi‐view, projective texture mapping technique. While previous multi‐view texturing approaches lead to blurring and ghosting artefacts if 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps (“floats”) projected textures during run‐time to preserve crisp, detailed texture appearance. Our GPU implementation achieves interactive to real‐time frame rates. The method is very generally applicable and can be used in combination with many image‐based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free‐viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies.

10.
Many interesting real‐world textures are inhomogeneous and/or anisotropic. An inhomogeneous texture is one where various visual properties exhibit significant changes across the texture's spatial domain. Examples include perceptible changes in surface color, lighting, local texture pattern and/or its apparent scale, and weathering effects, which may vary abruptly, or in a continuous fashion. An anisotropic texture is one where the local patterns exhibit a preferred orientation, which also may vary across the spatial domain. While many example‐based texture synthesis methods can be highly effective when synthesizing uniform (stationary) isotropic textures, synthesizing highly non‐uniform textures, or ones with spatially varying orientation, is a considerably more challenging task, which so far has remained underexplored. In this paper, we propose a new method for automatic analysis and controlled synthesis of such textures. Given an input texture exemplar, our method generates a source guidance map comprising: (i) a scalar progression channel that attempts to capture the low frequency spatial changes in color, lighting, and local pattern combined, and (ii) a direction field that captures the local dominant orientation of the texture. Having augmented the texture exemplar with this guidance map, users can exercise better control over the synthesized result by providing easily specified target guidance maps, which are used to constrain the synthesis process.
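A plausible way to build such a guidance map is sketched below: the direction field comes from the smoothed structure tensor, and the progression channel is approximated by a heavy low-pass of the luminance. Both are standard image-analysis stand-ins, not necessarily the paper's exact analysis; function names and filter widths are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def dominant_orientation_field(gray, sigma=5.0):
    # Local dominant texture orientation (radians) from the smoothed structure tensor.
    gy, gx = np.gradient(gray.astype(np.float64))
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)   # dominant gradient direction
    return theta + np.pi / 2.0                       # texture flows orthogonally to it

def progression_channel(gray, sigma=25.0):
    # Scalar progression channel approximated by low-frequency luminance changes.
    return gaussian_filter(gray.astype(np.float64), sigma)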

11.
A procedural pattern generation process, called multi‐scale “assemblage”, is introduced. An assemblage is defined as a multi‐scale composition of “multi‐variate” statistical figures, which can be kernel functions defining noise‐like texture basis functions or patterns defining structured procedural textures. This paper presents two main contributions: (1) a new procedural random point distribution function that, unlike point jittering, allows us to take into account spatial dependencies among figures, and (2) a “multi‐variate” approach that, instead of defining finite sets of constant figures, allows us to generate nearly infinite variations of figures on‐the‐fly. For both, we use a “statistical shape model”, a representation of shape variations. Thanks to a direct GPU implementation, assemblage textures can be used to generate new classes of procedural textures for real‐time rendering while preserving all the characteristics of usual procedural textures, namely infinity, definition independence (provided the figures are also definition independent), and extreme compactness.

12.
13.
Modeling of realistic garments is essential for online shopping and many other applications, including virtual characters. Most existing methods require either a multi‐camera capture setup or a restricted mannequin pose. We address the garment modeling problem from a single input image. We design an all‐pose garment outline interpretation and a shading‐based detail modeling algorithm. Our method first estimates the mannequin pose and body shape from the input image. It then interprets the garment outline with oriented facets determined by the mannequin pose to generate an initial 3D garment model. Shape details such as folds and wrinkles are modeled with shape‐from‐shading techniques to improve the realism of the garment model. Our method achieves result quality similar to prior methods from just a single image, significantly improving the flexibility of garment modeling.

14.
This work presents a new representation used as a rendering primitive for surfaces. Our representation is defined by an arbitrary cubic cell complex: a projection‐based parameterization domain for surfaces where geometry and appearance information are stored as tile textures. This representation is used by our ray casting rendering algorithm, called projection mapping, which renders the geometry and appearance details of surfaces from arbitrary viewpoints. The projection mapping algorithm uses a fragment shader based on the linear and binary searches of the relief mapping algorithm. Instead of rendering the surface in the traditional way, only the front faces of our rendering primitive (the arbitrary cubic cell complex) are drawn, and the geometry and appearance details of the surface are recovered by projection mapping. Alternatively, another method is proposed for mapping appearance information onto complex surfaces using our arbitrary cubic cell complexes. In this case, instead of reconstructing the geometry as in projection mapping, the original mesh of the surface is passed directly to the rendering algorithm. This algorithm is applied to the texture mapping of cultural heritage sculptures.
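The linear-plus-binary search at the heart of relief/projection mapping is easy to prototype on the CPU; the sketch below intersects a ray with a height (depth) field the way the fragment shader would, with names, step counts and conventions (depth grows with z) chosen purely for illustration.

import numpy as np

def heightfield_intersect(height, start, direction, linear_steps=32, binary_steps=8):
    # `height` maps (x, y) in [0, 1]^2 to a depth in [0, 1]; start/direction are 3D,
    # with the z component holding the ray's current depth.
    def depth_at(p):
        h, w = height.shape
        x = int(np.clip(p[0], 0.0, 1.0) * (w - 1))
        y = int(np.clip(p[1], 0.0, 1.0) * (h - 1))
        return height[y, x]

    p = np.asarray(start, dtype=np.float64)
    step = np.asarray(direction, dtype=np.float64) / linear_steps
    prev = p.copy()
    # Linear search: march until the ray dips below the surface.
    for _ in range(linear_steps):
        if p[2] >= depth_at(p):
            break
        prev, p = p, p + step
    # Binary search: refine between the last point above and the first point below.
    lo, hi = prev, p
    for _ in range(binary_steps):
        mid = 0.5 * (lo + hi)
        if mid[2] >= depth_at(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)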

15.
We present a method of generating mipmaps that takes into account the distortions due to the parameterization of a surface. Existing algorithms for generating mipmaps assume that the texture is isometrically mapped to the surface and ignore the actual surface parameterization. Our method correctly downsamples warped textures by assigning texels weights proportional to their area on a surface. We also provide a least‐squares approach to filtering over these warped domains that takes into account the postfilter used by the GPU. Our method improves texture filtering for most models but only modifies mipmap generation, requires no modification of art assets or rasterization algorithms, and does not affect run‐time performance.
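The core downsampling rule can be stated in a few lines: when halving a mipmap level, weight every texel by the surface area its footprint covers instead of averaging texels uniformly. The sketch below assumes power-of-two dimensions and a per-texel area map derived from the parameterization, and it omits the paper's least-squares treatment of the GPU post-filter.

import numpy as np

def area_weighted_downsample(texels, texel_area):
    # `texels` is (H, W) or (H, W, C); `texel_area` is (H, W) surface areas.
    t = texels.astype(np.float64)
    a = texel_area.astype(np.float64)

    def block_sum(x):
        return x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]

    weighted = t * (a[..., None] if t.ndim == 3 else a)
    num = block_sum(weighted)
    den = block_sum(a)
    den_safe = np.where(den > 0.0, den, 1.0)
    coarse = num / (den_safe[..., None] if t.ndim == 3 else den_safe)
    return coarse, den      # coarser level plus accumulated areas for the next level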

16.
We propose a novel approach to simulating the illumination of an augmented outdoor scene based on a legacy photograph. Unlike previous works, which take only surface radiosity or lighting‐related prior information as the basis of illumination estimation, our method integrates both. By adopting spherical harmonics, we derive a linear model with only six illumination parameters. The illumination of an outdoor scene is then calculated by solving a linear least squares problem with color constraints on the sunlight and the skylight. A high‐quality environment map is then set up, leading to realistic rendering results. We also explore the problem of shadow casting between real and virtual objects without knowing the geometry of the objects that cast the shadows. An efficient method is proposed to project complex shadows (such as tree shadows) on the ground of the real scene onto the surface of the virtual object with texture mapping. Finally, we present a unified scheme for image composition of a real outdoor scene with virtual objects that ensures their illumination consistency and shadow consistency. Experiments demonstrate the effectiveness and flexibility of our method.
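The least-squares step can be illustrated with a generic low-order spherical-harmonic fit: given surface normals and observed intensities, the lighting coefficients fall out of a linear system. The paper's actual model has six illumination parameters and sun/sky color constraints, which this sketch (with assumed names and a 4-term SH basis) does not reproduce.

import numpy as np

def sh_basis(normals):
    # First two SH bands (constant + linear terms), up to constant factors.
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([np.ones_like(x), x, y, z], axis=1)

def fit_sh_lighting(normals, intensities):
    # Solve the linear least-squares problem  basis @ coeffs ~= intensities.
    coeffs, *_ = np.linalg.lstsq(sh_basis(normals), intensities, rcond=None)
    return coeffs

def shade(normals, coeffs):
    # Evaluate the fitted illumination for new normals (e.g. on a virtual object).
    return sh_basis(normals) @ coeffs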

17.
Creating variations of an image object is an important task that usually requires manipulating the skeletal structure of the object. However, most existing methods (such as image deformation) only allow stretching the skeletal structure of an object; modifying skeletal topology remains a challenge. This paper presents a technique for synthesizing image objects with different skeletal structures while remaining faithful to an input image object. To apply this technique, a user first annotates the skeletal structure of the input object by specifying a number of strokes in the input image, and draws corresponding strokes in an output domain to define new skeletal structures. A number of example texture pieces are then sampled along the strokes in the input image and pasted along the strokes in the output domain, following their orientations. The result is obtained by optimizing the texture sampling and seam computation. The proposed method successfully synthesizes challenging skeletal structures, such as branching skeletons, and a wide range of image objects with various skeletal structures, demonstrating its effectiveness.

18.
This paper presents a novel filtering‐based method for decomposing an image into structure and texture. Unlike previous filtering algorithms, our method adaptively smooths image gradients to filter textures out of images. A new gradient operator, the interval gradient, is proposed for adaptive gradient smoothing. Using interval gradients, textures can be distinguished from structure edges and smoothly varying shading. We also propose an effective gradient‐guided algorithm to produce high‐quality image filtering results from the filtered gradients. Our method avoids gradient reversal in the filtering results and preserves sharp features better than existing filtering approaches, while retaining simplicity and a highly parallel implementation. The proposed method can be utilized for various applications that require accurate structure‐texture decomposition of images.
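A rough 1D reading of the interval gradient is sketched below: rather than a single forward difference, it compares weighted averages of forward differences over the right and left neighborhoods of each sample, so oscillating texture largely cancels while genuine structure edges do not. The exact operator, weights and 2D extension in the paper may differ from this assumption.

import numpy as np

def interval_gradient_1d(signal, k=8, sigma=3.0):
    s = np.asarray(signal, dtype=np.float64)
    n = s.size
    fwd = np.diff(s, append=s[-1])                    # forward differences
    w = np.exp(-np.arange(1, k + 1) ** 2 / (2.0 * sigma ** 2))
    w /= w.sum()
    out = np.zeros(n)
    for i in range(n):
        right = sum(w[j] * fwd[min(i + j, n - 1)] for j in range(k))
        left = sum(w[j] * fwd[max(i - 1 - j, 0)] for j in range(k))
        out[i] = right - left   # small inside oscillatory texture, large at structure edges
    return out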

19.
High‐quality texture minification techniques, including trilinear and anisotropic filtering, require texture data to be arranged into a collection of pre‐filtered texture maps called mipmaps. In this paper, we present a compression scheme for mipmapped textures which achieves much higher quality than current native schemes by exploiting image coherence across mipmap levels. The basic idea is to use a high‐quality native compressed format for the upper levels of the mipmap pyramid (to retain efficient minification filtering) together with a novel compact representation of the detail provided by the highest‐resolution mipmap. Key elements of our approach include delta‐encoding of the luminance signal, efficient encoding of coherent regions through texel runs following a Hilbert scan, a scheme for run encoding supporting fast random‐access, and a predictive approach for encoding indices of variable‐length blocks. We show that our scheme clearly outperforms native 6:1 compressed texture formats in terms of image quality while still providing real‐time rendering of trilinearly filtered textures.
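Two of those ingredients, the Hilbert scan order and run-length coding of luminance deltas, can be prototyped directly; the sketch below uses the standard Hilbert index-to-coordinate conversion and a naive run encoder, and leaves out the paper's bitstream layout, random-access index and predictive coding.

def hilbert_d2xy(order, d):
    # Convert a distance along the Hilbert curve to (x, y) on a 2**order grid.
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                        # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def delta_run_encode(luma, order):
    # Delta-encode luminance along the Hilbert scan, then run-length encode the deltas.
    n = 1 << order
    runs, prev, current, count = [], 0, None, 0
    for d in range(n * n):
        x, y = hilbert_d2xy(order, d)
        value = int(luma[y][x])
        delta, prev = value - prev, value
        if delta == current:
            count += 1
        else:
            if current is not None:
                runs.append((current, count))
            current, count = delta, 1
    runs.append((current, count))
    return runs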

20.
We present an automatic method to recover high‐resolution texture over an object by mapping detailed photographs onto its surface. Such high‐resolution detail often reveals inaccuracies in geometry and registration, as well as lighting variations and surface reflections. Simple image projection results in visible seams on the surface. We minimize such seams using a global optimization that assigns compatible texture to adjacent triangles. The key idea is to search not only combinatorially over the source images, but also over a set of local image transformations that compensate for geometric misalignment. This broad search space is traversed using a discrete labeling algorithm, aided by a coarse‐to‐fine strategy. Our approach significantly improves resilience to acquisition errors, thereby allowing simple and easy creation of textured models for use in computer graphics.
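As a toy version of the discrete labeling step, the sketch below assigns one source image per triangle by greedy iterated conditional modes over a data term plus a Potts smoothness term; the paper's optimization is stronger (coarse-to-fine, and it also searches over local image transformations), so the cost definitions and names here are assumptions.

import numpy as np

def icm_labeling(data_cost, adjacency, smooth_weight=1.0, iterations=10):
    # data_cost[t, l]: cost of texturing triangle t from source image l.
    # adjacency: list of (t1, t2) pairs of triangles sharing an edge.
    num_tris, num_labels = data_cost.shape
    labels = np.argmin(data_cost, axis=1)          # start from the best data term
    neighbours = [[] for _ in range(num_tris)]
    for a, b in adjacency:
        neighbours[a].append(b)
        neighbours[b].append(a)
    for _ in range(iterations):
        changed = False
        for t in range(num_tris):
            costs = data_cost[t].astype(np.float64)
            for n in neighbours[t]:
                # Potts term: penalize labels that differ from this neighbour's label.
                costs = costs + smooth_weight * (np.arange(num_labels) != labels[n])
            best = int(np.argmin(costs))
            if best != labels[t]:
                labels[t] = best
                changed = True
        if not changed:
            break
    return labels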
