Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
Cutting and Pasting Irregularly Shaped Patches for Texture Synthesis   (Total citations: 1; self-citations: 0; citations by others: 1)
This paper proposes a patch‐based texture synthesis approach that cuts and stitches irregularly shaped texture patches to generate new texture images with minimized visual discontinuity. It works well on a wide range of textures. A semiautomatic algorithm is developed to obtain the irregularly shaped patches. To synthesize strictly structured textures, a regular pasting method is proposed to identify the texture structures and subsequently position the irregularly shaped patches according to the identified structures. The results and comparisons with related work are given.
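
As a rough illustration of how overlapping patches can be stitched with minimal visual discontinuity, the sketch below computes a classic minimum-error boundary cut through the overlap of two patches by dynamic programming, in the spirit of image quilting. It is a generic baseline rather than the paper's semiautomatic irregularly shaped patch algorithm; the grayscale overlap strips, the numpy layout and the function name min_error_vertical_seam are illustrative assumptions.

import numpy as np

def min_error_vertical_seam(patch_a, patch_b):
    """Dynamic-programming seam through the vertical overlap of two patches.

    patch_a, patch_b: (H, W) grayscale overlap regions of equal shape.
    Returns, for every row, the column where the cut passes, so that pixels
    left of the seam come from patch_a and pixels right of it from patch_b."""
    err = (patch_a.astype(float) - patch_b.astype(float)) ** 2
    h, w = err.shape
    cost = err.copy()
    # Accumulate minimal cumulative error from top to bottom.
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    # Backtrack the cheapest path.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam

# Example: stitch two random 16x4 overlap strips along the computed seam.
rng = np.random.default_rng(0)
a, b = rng.random((16, 4)), rng.random((16, 4))
seam = min_error_vertical_seam(a, b)
stitched = np.where(np.arange(4)[None, :] < seam[:, None], a, b)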

2.
In this paper, we introduce a new texture metamorphosis approach for interpolating texture samples from a source texture into a target texture. We use a new energy optimization scheme derived from optimal control principles which exploits the structure of the metamorphosis optimality conditions. Our approach considers the change in pixel position and pixel appearance in a single framework. In contrast to previous techniques that compute a global warping based on feature masks of textures, our approach allows us to transform one texture into another by considering both intensity values and structural features of the textures simultaneously. We demonstrate the usefulness of our approach for different textures, such as stochastic, semi‐structural and regular textures, with different levels of complexity. Our method produces visually appealing transformation sequences with no user interaction.

3.
We investigate semi‐stochastic tilings based on Wang or corner tiles for the real‐time synthesis of example‐based textures. In particular, we propose two new tiling approaches: (1) replacing stochastic tilings with pseudo‐random tilings based on the Halton low‐discrepancy sequence, and (2) allowing the controllable generation of tilings based on a user‐provided probability distribution. Our first method prevents the local repetition of texture content that is common with stochastic approaches and yields better results with smaller sets of utilized tiles. Our second method allows the synthesis result to be influenced directly, which—in combination with an enhanced tile construction method that merges multiple source textures—extends synthesis tasks to globally‐varying textures. We show that both methods can be implemented very efficiently in connection with tile‐based texture mapping and also present a general rule that allows the resulting tile sets to be significantly reduced.
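
To make the Halton-based idea concrete, here is a minimal sketch that replaces white-noise tile selection with the base-2 Halton (radical inverse) sequence over a grid. The toy mapping from the Halton value to a tile index is an assumption and ignores the edge/corner matching constraints of real Wang or corner tiles.

def halton(index, base):
    """Radical-inverse (Halton) value of `index` in the given base, in [0, 1)."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def pseudo_random_tiling(rows, cols, num_tiles):
    """Assign a tile index to every grid cell from the base-2 Halton sequence
    instead of white noise, which spreads the tile choices evenly over the grid
    and avoids the local repetition typical of purely stochastic tilings."""
    tiling = []
    for r in range(rows):
        row = []
        for c in range(cols):
            i = r * cols + c + 1        # skip index 0, which always maps to 0.0
            row.append(int(halton(i, 2) * num_tiles))
        tiling.append(row)
    return tiling

print(pseudo_random_tiling(4, 6, num_tiles=8))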

4.
In this paper, we present a novel exemplar‐based technique for the interpolation between two textures that combines patch‐based and statistical approaches. Motivated by the notion of texture as a largely local phenomenon, we warp and blend small image neighborhoods prior to patch‐based texture synthesis. In addition, interpolating and enforcing characteristic image statistics faithfully handles high‐frequency detail. We are able to create both intermediate textures and continuous transitions. In contrast to previous techniques that compute a global morphing transformation on the entire input exemplar images, our localized and patch‐based approach allows us to successfully interpolate between textures with considerable differences in feature topology for which no smooth global warping field exists.
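
The statistical part of the description, interpolating and enforcing characteristic image statistics, can be illustrated with generic histogram matching: the sketch below remaps a blended image so its intensity distribution follows a histogram interpolated between the two exemplars. This is only a plausible stand-in for the statistics step, not the paper's method, and the blending weight and image sizes are assumptions.

import numpy as np

def match_histogram(image, reference):
    """Remap `image` intensities so its empirical CDF matches that of `reference`."""
    src = image.ravel()
    ref = np.sort(reference.ravel())
    # Rank of every source pixel, normalized to [0, 1].
    order = np.argsort(src)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.linspace(0.0, 1.0, src.size)
    # Look up the reference intensity at the same quantile.
    matched = np.interp(ranks, np.linspace(0.0, 1.0, ref.size), ref)
    return matched.reshape(image.shape)

# Interpolated statistics for a transition between two exemplars A and B:
rng = np.random.default_rng(1)
A, B = rng.random((64, 64)), rng.random((64, 64)) ** 2
t = 0.5
target = (1.0 - t) * np.sort(A.ravel()) + t * np.sort(B.ravel())  # blended sorted histogram
blended = match_histogram((1.0 - t) * A + t * B, target)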

5.
The goal of texture synthesis is to generate an arbitrarily large high‐quality texture from a small input sample. Generally, it is assumed that the input image is given as a flat, square piece of texture; thus it has to be carefully prepared from a picture taken under ideal conditions. Instead, we would like to extract the input texture from any surface within an arbitrary photograph. This introduces several challenges: only parts of the photograph are covered with the texture of interest, perspective and scene geometry introduce distortions, and the texture is non‐uniformly sampled during the capture process. This breaks many of the assumptions used for synthesis. In this paper we combine a simple novel user interface with a generic per‐pixel synthesis algorithm to achieve high‐quality synthesis from a photograph. Our interface lets the user locally describe the geometry supporting the textures by combining rational Bézier patches. These are particularly well suited to describing curved surfaces under projection. Further, we extend per‐pixel synthesis to account for arbitrary texture sparsity and distortion, both in the input image and in the synthesis output. Applications range from synthesizing textures directly from photographs to high‐quality texture completion.
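
For readers unfamiliar with rational Bézier patches, the sketch below evaluates one at a parameter pair (u, v) from 2D control points and weights; heavier weights pull the surface toward the corresponding control point, which is what makes these patches convenient for describing curved, projected geometry. The biquadratic degree and the numpy layout are illustrative assumptions; the paper's user interface itself is not reproduced.

import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * (t ** i) * ((1.0 - t) ** (n - i))

def rational_bezier_patch(control, weights, u, v):
    """Evaluate a rational Bezier patch at parameters (u, v) in [0, 1]^2.

    control: (n+1, m+1, 2) array of 2D control points (e.g. image coordinates).
    weights: (n+1, m+1) positive weights."""
    n, m = control.shape[0] - 1, control.shape[1] - 1
    num = np.zeros(control.shape[2])
    den = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            b = bernstein(n, i, u) * bernstein(m, j, v) * weights[i, j]
            num += b * control[i, j]
            den += b
    return num / den

# A toy biquadratic patch over image space.
ctrl = np.array([[[0, 0], [50, -10], [100, 0]],
                 [[0, 50], [50, 50], [100, 50]],
                 [[0, 100], [50, 110], [100, 100]]], dtype=float)
w = np.ones((3, 3)); w[1, 1] = 2.0   # a heavier center weight bends the patch
print(rational_bezier_patch(ctrl, w, 0.5, 0.5))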

6.
In this paper we investigate low-bitrate compression of scalar textures such as alpha maps, down to one or two bits per pixel. We present two new techniques for 4 × 4 blocks, based on the idea from ETC of using index tables. We demonstrate that although the visual quality of the alpha maps is greatly reduced at these low bit rates, the quality of the final rendered images appears to be sufficient for a wide range of applications, thus allowing bandwidth savings of up to 75%. The 2 bpp version improves PSNR by over 2 dB compared to BTC at the same bit rate. The 1 bpp version is, to the best of our knowledge, the first public 1 bpp texture compression algorithm, which makes comparison hard. However, compared to simply DXT5-compressing a subsampled texture, our 1 bpp technique improves PSNR by over 2 dB. Finally, we show that some aspects of the presented algorithms are also useful for the more common bit rate of four bits per pixel, achieving PSNR scores around 1 dB better than DXT5 over a set of test images.
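
As a point of reference for index-table compression of scalar textures, the sketch below implements a simplified BTC-style codec for a 4 × 4 alpha block (two representative levels plus one index bit per pixel) together with a PSNR helper. This is the comparison baseline mentioned above, not the paper's 1 or 2 bpp ETC-style schemes, and the use of per-group means (absolute-moment BTC) is a simplifying assumption.

import numpy as np

def btc_compress_block(block):
    """Block Truncation Coding of one 4x4 scalar block (values in [0, 255]).

    Returns (low_level, high_level, bitmask) with one index bit per pixel."""
    mean = block.mean()
    high_pixels = block >= mean
    low = block[~high_pixels].mean() if (~high_pixels).any() else mean
    high = block[high_pixels].mean() if high_pixels.any() else mean
    bits = high_pixels.astype(np.uint8)
    return low, high, bits

def btc_decompress_block(low, high, bits):
    return np.where(bits == 1, high, low)

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)

# Compress and reconstruct a random 4x4 alpha block, then report quality.
rng = np.random.default_rng(2)
block = rng.integers(0, 256, size=(4, 4)).astype(float)
low, high, bits = btc_compress_block(block)
print(psnr(block, btc_decompress_block(low, high, bits)))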

7.
The goal of example‐based texture synthesis methods is to generate arbitrarily large textures from limited exemplars in order to fit the exact dimensions and resolution required for a specific modeling task. The challenge is to faithfully capture all of the visual characteristics of the exemplar texture, without introducing obvious repetitions or unnatural looking visual elements. While existing non‐parametric synthesis methods have made remarkable progress towards this goal, most such methods have been demonstrated only on relatively low‐resolution exemplars. Real‐world high resolution textures often contain texture details at multiple scales, which these methods have difficulty reproducing faithfully. In this work, we present a new general‐purpose and fully automatic self‐tuning non‐parametric texture synthesis method that extends Texture Optimization by introducing several key improvements that result in superior synthesis ability. Our method is able to self‐tune its various parameters and weights and focuses on addressing three challenging aspects of texture synthesis: (i) irregular large scale structures are faithfully reproduced through the use of automatically generated and weighted guidance channels; (ii) repetition and smoothing of texture patches is avoided by new spatial uniformity constraints; (iii) a smart initialization strategy is used in order to improve the synthesis of regular and near‐regular textures, without affecting textures that do not exhibit regularities. We demonstrate the versatility and robustness of our completely automatic approach on a variety of challenging high‐resolution texture exemplars.

8.
In this paper we present an elegant pixel‐based texture synthesis technique that is able to generate visually pleasing results from source textures of both stochastic and structured nature. Inspired by the observation that the most common artifacts that occur when synthesizing textures are high‐frequency discontinuities, our technique tries to avoid these artifacts by forcing at least one of the direct neighboring pixels in each causal neighborhood to match within a predetermined threshold. This not only avoids deterioration of the visual quality, but also results in faster synthesis times. We demonstrate our technique on a variety of stochastic and structured textures.
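
A heavily simplified sketch of the idea is given below: scan-order, pixel-based synthesis that searches the exemplar for the best causal neighborhood while rejecting candidates for which neither direct neighbor (left or top) matches the already-synthesized output within a threshold. The neighborhood size, threshold, brute-force search and grayscale exemplar are assumptions; the actual technique is considerably more refined and faster.

import numpy as np

def synthesize(exemplar, out_h, out_w, half=2, threshold=0.1, seed=0):
    """Toy scan-order pixel synthesis from a grayscale exemplar in [0, 1]."""
    rng = np.random.default_rng(seed)
    eh, ew = exemplar.shape
    out = np.zeros((out_h, out_w))
    # Seed the first rows/columns with random exemplar pixels.
    out[:half, :] = exemplar[rng.integers(0, eh, (half, out_w)),
                             rng.integers(0, ew, (half, out_w))]
    out[:, :half] = exemplar[rng.integers(0, eh, (out_h, half)),
                             rng.integers(0, ew, (out_h, half))]
    # Candidate source positions that leave room for a full causal neighborhood.
    ys, xs = np.meshgrid(np.arange(half, eh), np.arange(half, ew - half), indexing="ij")
    cand = np.stack([ys.ravel(), xs.ravel()], axis=1)

    def causal_patch(img, y, x):
        """L-shaped neighborhood: rows above plus pixels to the left in the same row."""
        above = img[y - half:y, x - half:x + half + 1].ravel()
        left = img[y, x - half:x].ravel()
        return np.concatenate([above, left])

    # The rightmost `half` columns are left unsynthesized to keep neighborhoods in bounds.
    for y in range(half, out_h):
        for x in range(half, out_w - half):
            target = causal_patch(out, y, x)
            best, best_err = None, np.inf
            for cy, cx in cand:
                # Accept only if at least one direct neighbor matches within the threshold.
                if abs(exemplar[cy, cx - 1] - out[y, x - 1]) > threshold and \
                   abs(exemplar[cy - 1, cx] - out[y - 1, x]) > threshold:
                    continue
                err = np.sum((causal_patch(exemplar, cy, cx) - target) ** 2)
                if err < best_err:
                    best, best_err = (cy, cx), err
            if best is None:                       # fall back if no candidate passes
                best = cand[rng.integers(len(cand))]
            out[y, x] = exemplar[best[0], best[1]]
    return out

texture = synthesize(np.random.default_rng(3).random((16, 16)), 24, 24)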

9.
We present a novel multi‐view, projective texture mapping technique. While previous multi‐view texturing approaches lead to blurring and ghosting artefacts if 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps (“floats”) projected textures during run‐time to preserve crisp, detailed texture appearance. Our GPU implementation achieves interactive to real‐time frame rates. The method is very generally applicable and can be used in combination with many image‐based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free‐viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies.

10.
Dominant Texture and Diffusion Distance Manifolds   (Total citations: 1; self-citations: 0; citations by others: 1)
Texture synthesis techniques require nearly uniform texture samples; however, identifying suitable texture samples in an image requires significant data preprocessing. To eliminate this work, we introduce a fully automatic pipeline to detect dominant texture samples based on a manifold generated using the diffusion distance. We define the characteristics of dominant texture and three different types of outliers that allow us to efficiently identify dominant texture in feature space. We demonstrate how this method enables the analysis/synthesis of a wide range of natural textures. We compare textures synthesized from a sample image with and without dominant texture detection. We also compare our approach to that of using a texture segmentation technique alone, and to using Euclidean, rather than diffusion, distances between texture features.
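
The diffusion distance itself can be illustrated with a short numpy sketch: build a Gaussian affinity over texture feature vectors, row-normalize it into a Markov transition matrix, and compare points through their t-step transition probabilities. The feature extraction, outlier handling and kernel bandwidth are not part of this sketch and are assumptions.

import numpy as np

def diffusion_distances(features, sigma=1.0, t=2):
    """Pairwise diffusion distances between feature vectors (rows of `features`)."""
    # Gaussian affinity and its row-normalized Markov transition matrix P.
    sq = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    d = K.sum(axis=1)
    P = K / d[:, None]
    Pt = np.linalg.matrix_power(P, t)      # t-step transition probabilities
    pi = d / d.sum()                       # stationary distribution of the chain
    # D_t(i, j)^2 = sum_k (Pt[i, k] - Pt[j, k])^2 / pi[k]
    diff = Pt[:, None, :] - Pt[None, :, :]
    return np.sqrt(np.sum(diff ** 2 / pi[None, None, :], axis=-1))

# Two well-separated clusters of 8-dimensional texture features.
rng = np.random.default_rng(4)
feats = np.vstack([rng.normal(0, 0.1, (5, 8)), rng.normal(3, 0.1, (5, 8))])
D = diffusion_distances(feats, sigma=1.0, t=2)
print(D[0, 1], D[0, 7])   # within-cluster distance vs. across-cluster distance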

11.
Shadow removal is a challenging problem and previous approaches often produce de‐shadowed regions that are visually inconsistent with the rest of the image. We propose an automatic shadow region harmonization approach that makes the appearance of a de‐shadowed region (produced using any previous technique) compatible with the rest of the image. We use a shadow‐guided patch‐based image synthesis approach that reconstructs the shadow region using patches sampled from non‐shadowed regions. This result is then refined based on the reconstruction confidence to handle unique textures. Qualitative comparisons over a wide range of images, and a quantitative evaluation on a benchmark dataset show that our technique significantly improves upon the state‐of‐the‐art.

12.
In this paper, we present an efficient approach for the interactive rendering of large‐scale urban models, which can be integrated seamlessly with virtual globe applications. Our scheme fills the gap between standard approaches for distant views of digital terrains and the polygonal models required for close‐up views. Our work is oriented towards city models with real photographic textures of the building facades. At the heart of our approach is a multi‐resolution tree of the scene defining multi‐level relief impostors. Key ingredients of our approach include the pre‐computation of a small set of zenithal and oblique relief maps that capture the geometry and appearance of the buildings inside each node, a rendering algorithm combining relief mapping with projective texture mapping that uses only a small subset of the pre‐computed relief maps, and the use of wavelet compression to simulate two additional levels of the tree. Our scheme runs considerably faster than polygon‐based approaches while producing images with higher quality than competing relief‐mapping techniques. We show both analytically and empirically that multi‐level relief impostors are suitable for interactive navigation through large urban models.

13.
The incident indirect light over a range of image pixels is often coherent. Two common approaches to exploit this inter‐pixel coherence to improve rendering performance are Irradiance Caching and Radiance Caching. Both compute incident indirect light only for a small subset of pixels (the cache), and later interpolate between pixels. Irradiance Caching uses scalar values that can be interpolated efficiently, but cannot account for shading variations caused by normal and reflectance variation between cache items. Radiance Caching maintains directional information, e.g., to allow highlights between cache items, but at the cost of storing and evaluating a Spherical Harmonics (SH) function per pixel. The arithmetic and bandwidth cost of this evaluation is linear in the number of coefficients and can be substantial. In this paper, we propose a method to replace it with efficient per‐cache‐item pre‐filtering based on MIP maps — as previously done for environment maps — leading to a single constant‐time lookup per pixel. Additionally, per‐cache‐item geometry statistics stored in distance‐MIP maps are used to improve the quality of each pixel's lookup. Our approximate interactive global illumination approach is an order of magnitude faster than Radiance Caching with Phong BRDFs and can be combined with Monte Carlo ray tracing, Point‐based Global Illumination, or Instant Radiosity.
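
The central replacement described above, swapping a per-pixel SH evaluation for a single lookup into MIP-prefiltered cache data, can be sketched generically as follows: build a prefiltered pyramid by repeated averaging and select the MIP level from a glossiness parameter. The 1D signal, the power-of-two size and the gloss-to-level mapping are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def build_mip_pyramid(radiance):
    """Prefilter a 1D radiance signal (power-of-two length assumed) by repeated 2:1 averaging."""
    levels = [np.asarray(radiance, dtype=float)]
    while levels[-1].size > 1:
        cur = levels[-1]
        levels.append(0.5 * (cur[0::2] + cur[1::2]))
    return levels

def prefiltered_lookup(pyramid, u, gloss):
    """Constant-time lookup: glossier surfaces read sharper (lower) MIP levels.

    u in [0, 1) is the lookup coordinate, gloss in [0, 1] (1 = mirror-like).
    The gloss-to-level mapping below is an illustrative assumption."""
    max_level = len(pyramid) - 1
    level = (1.0 - gloss) * max_level
    lo = int(level)
    hi = min(lo + 1, max_level)
    frac = level - lo

    def fetch(lv):
        data = pyramid[lv]
        return data[min(int(u * data.size), data.size - 1)]

    return (1.0 - frac) * fetch(lo) + frac * fetch(hi)   # blend adjacent levels

env = np.abs(np.sin(np.linspace(0, 12, 256))) ** 8       # a spiky incident-light signal
pyr = build_mip_pyramid(env)
print(prefiltered_lookup(pyr, 0.3, gloss=0.9), prefiltered_lookup(pyr, 0.3, gloss=0.1))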

14.
Solving aliasing artifacts is an essential problem in shadow mapping approaches. Many approaches have been proposed; however, most of them focus on removing the texel‐level aliasing that results from the limited resolution of shadow maps. Little work has been done to solve the pixel‐level shadow aliasing that is produced by rasterization on the screen plane. In this paper, we propose a fast, sub‐pixel antialiased shadowing algorithm to solve the pixel aliasing problem. Our work is based on alias‐free shadow maps, which are capable of computing accurate per‐pixel shadows and incur only a small extra cost when extended to sub‐pixel accuracy. Instead of directly supersampling the screen space, we take facets to approximate pixels in shadow testing. The shadowed area of one facet is rapidly evaluated by projecting blocker geometry onto a supersampled 2D occlusion mask with bitmask fusion. This provides sub‐pixel occlusion sampling so as to capture fine shadow details and features. Furthermore, we introduce the silhouette mask map, which limits visibility evaluation to pixels on the silhouette and greatly reduces the computation cost. Our algorithm runs entirely on the GPU, achieves real‐time performance, and is an order of magnitude faster than the brute‐force supersampling method at producing comparable 32× antialiased shadows.
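
The facet test described above amounts to projecting blocker geometry onto a small supersampled occlusion mask and fusing the per-blocker coverage with bitwise operations. The sketch below rasterizes projected blocker triangles onto an 8 × 8 bitmask for one pixel footprint and returns the shadowed fraction; treating the footprint as the unit square and using a simple half-space coverage test are simplifying assumptions.

def edge(ax, ay, bx, by, px, py):
    """Signed area test: > 0 when (px, py) lies to the left of edge a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def triangle_bitmask(tri, n=8):
    """64-bit coverage mask of a 2D triangle over the unit-square pixel footprint,
    sampled on an n x n grid of sub-pixel sample positions."""
    (ax, ay), (bx, by), (cx, cy) = tri
    # Make the winding consistent so all three edge tests share a sign.
    if edge(ax, ay, bx, by, cx, cy) < 0:
        (bx, by), (cx, cy) = (cx, cy), (bx, by)
    mask = 0
    for j in range(n):
        for i in range(n):
            px, py = (i + 0.5) / n, (j + 0.5) / n      # sub-pixel sample position
            inside = (edge(ax, ay, bx, by, px, py) >= 0 and
                      edge(bx, by, cx, cy, px, py) >= 0 and
                      edge(cx, cy, ax, ay, px, py) >= 0)
            if inside:
                mask |= 1 << (j * n + i)
    return mask

def shadowed_fraction(blocker_triangles, n=8):
    """Fuse per-triangle coverage with bitwise OR and count the occluded samples."""
    fused = 0
    for tri in blocker_triangles:
        fused |= triangle_bitmask(tri, n)
    return bin(fused).count("1") / float(n * n)

blockers = [[(-0.2, -0.2), (0.6, -0.1), (0.1, 0.7)],
            [(0.4, 0.4), (1.2, 0.5), (0.6, 1.2)]]
print(shadowed_fraction(blockers))   # fraction of the pixel footprint in shadow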

15.
This paper presents an efficient approach for generating weathering effects with detailed appearance variations in a single image. Previous approaches merely change the chroma or reflectance of weathered objects, which is not sufficient for materials with detailed shading and texture variations, such as growing moss and peeling plaster. Our method propagates such detailed features via seamless patch‐based synthesis driven by a weathering degree distribution. Unlike previous methods, the weathering degrees are calculated efficiently using Radial Basis Functions, even for materials with wide color variations. We use graph cut‐based optimization to identify the most weathered region as a “weathering exemplar”, from which we sample weathering patches. We demonstrate that our method enables the interactive generation of various types of detailed weathering effects.
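
A generic version of the Radial Basis Function step, interpolating a dense weathering-degree map from a few user-marked degrees, might look like the sketch below. The Gaussian kernel, its bandwidth and the absence of a polynomial term are assumptions made for brevity; the paper's graph-cut exemplar selection and patch synthesis are not shown.

import numpy as np

def fit_rbf(points, values, sigma=0.2):
    """Solve for Gaussian-RBF weights so the interpolant passes through (points, values)."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    A = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.linalg.solve(A, values)

def evaluate_rbf(points, weights, query, sigma=0.2):
    """Evaluate the RBF interpolant at query positions."""
    d2 = np.sum((query[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ weights

# Sparse user annotations: (x, y) in normalized image coordinates, degree in [0, 1].
marks = np.array([[0.1, 0.1], [0.9, 0.2], [0.5, 0.8], [0.2, 0.9]])
degrees = np.array([0.9, 0.1, 0.6, 0.8])
w = fit_rbf(marks, degrees)

# Dense weathering-degree map over a 64x64 image grid.
ys, xs = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64), indexing="ij")
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
degree_map = evaluate_rbf(marks, w, grid).reshape(64, 64).clip(0.0, 1.0)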

16.
Curvilinear features extracted from a 2D user‐sketched feature map have been used successfully to constrain a patch‐based texture synthesis of real landscapes. This map‐based user interface does not give fine control over the height profile of the generated terrain. We propose a new texture‐based terrain synthesis framework controllable by a terrain sketching interface. We enhance the realism of the generated landscapes by using a novel patch merging method that reduces boundary artefacts caused by overlapping terrain patches. A more constrained synthesis process is used to produce landscapes that better match user requirements. The high computational cost of texture synthesis is reduced with a parallel implementation on graphics hardware. Our GPU‐accelerated solution provides a significant speedup depending on the size of the example terrain. We show experimentally that our framework is more successful in generating realistic landscapes than current example‐based terrain synthesis methods. We conclude that texture‐based terrain synthesis combined with sketching provides an excellent solution to the user control and realism challenges of virtual landscape generation.

17.
Many previous approaches to detecting urban change from LIDAR point clouds interpolate the points into rasters, perform pixel‐based image processing to detect changes, and produce 2D images as output. We present a method of LIDAR change detection that maintains accuracy by only using the raw, irregularly spaced LIDAR points, and extracts relevant changes as individual 3D models. We then utilize these models, alongside existing GIS data, within an interactive application that allows the chronological exploration of the changes to an urban environment. A three‐tiered level‐of‐detail system maintains a scale‐appropriate, legible visual representation across the entire range of view scales, from individual changes such as buildings and trees, to groups of changes such as new residential developments, deforestation, and construction sites, and finally to larger regions such as neighborhoods and districts of a city that are emerging or undergoing revitalization. Tools are provided to assist the visual analysis by urban planners and historians through semantic categorization and filtering of the changes presented.

18.
Large 3D asset databases are critical for designing virtual worlds, and using them effectively requires techniques for efficient querying and navigation. One important form of query is search by style compatibility: given a query object, find others that would be visually compatible if used in the same scene. In this paper, we present a scalable, learning‐based approach for solving this problem which is designed for use with real‐world 3D asset databases; we conduct experiments on 121 3D asset packages containing around 4000 3D objects from the Unity Asset Store. By leveraging the structure of the object packages, we introduce a technique to synthesize training labels for metric learning that work as well as human labels. These labels can grow exponentially with the number of objects, allowing our approach to scale to large real‐world 3D asset databases without the need for expensive human training labels. We use these synthetic training labels in a metric learning model that analyzes the in‐engine rendered appearance of an object—combining geometry, material, and texture—whereas prior work considers only object geometry, or disjoint geometry and texture features. Through an ablation experiment, we find that using this representation yields better results than using renders which lack texture, materiality, or both.
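
The label-synthesis idea, treating objects from the same asset package as style-compatible and objects from different packages as incompatible, can be sketched as a triplet generator for metric learning. The data layout and the uniform sampling below are assumptions rather than the paper's exact scheme.

import random

def synthesize_triplets(packages, num_triplets, seed=0):
    """Build (anchor, positive, negative) training triplets from package membership.

    `packages` maps a package name to the list of object ids it contains.
    Objects from the same package act as compatible (positive) pairs, objects
    from different packages as incompatible (negative) examples."""
    rng = random.Random(seed)
    names = [p for p, objs in packages.items() if len(objs) >= 2]
    triplets = []
    while len(triplets) < num_triplets:
        pkg = rng.choice(names)
        anchor, positive = rng.sample(packages[pkg], 2)
        other = rng.choice([p for p in packages if p != pkg])
        negative = rng.choice(packages[other])
        triplets.append((anchor, positive, negative))
    return triplets

# Hypothetical asset packages standing in for store data.
assets = {
    "medieval_village": ["house_a", "house_b", "well", "cart"],
    "scifi_lab": ["console", "crate", "door"],
    "forest_pack": ["pine_01", "pine_02", "rock"],
}
for t in synthesize_triplets(assets, 5):
    print(t)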

19.
Image‐based rendering techniques are a powerful alternative to traditional polygon‐based computer graphics. This paper presents a novel light field rendering technique which performs per‐pixel depth correction of rays for high‐quality reconstruction. Our technique stores combined RGB and depth values in a parabolic 2D texture for every light field sample acquired at discrete positions on a uniform spherical setup. Image synthesis is implemented on the GPU as a fragment program which extracts the correct image information from adjacent cameras for each fragment by applying per‐pixel depth correction of rays. We show that the presented image‐based rendering technique provides a significant improvement compared to previous approaches. We explain two different rendering implementations which make use of a uniform parametrisation to minimise disparity problems and ensure full six degrees of freedom for virtual view synthesis. While one rendering algorithm implements an iterative refinement approach for rendering light fields with per‐pixel depth correction, the other approach employs a raycaster, which provides superior rendering quality at moderate frame rates. GPU‐based per‐fragment depth correction of rays, used in both implementations, helps reduce ghosting artifacts to an unnoticeable level and provides a rendering technique that performs without exhaustive pre‐processing for 3D object reconstruction and without real‐time ray‐object intersection calculations at rendering time.

20.
Existing solid texture synthesis algorithms generate a full volume of color content from a set of 2D example images. We introduce a new algorithm with the unique ability to restrict synthesis to a subset of the voxels, while enforcing spatial determinism. This is especially useful when texturing objects, since only a thick layer around the surface needs to be synthesized. A major difficulty lies in reducing the dependency chain of neighborhood matching, so that each voxel only depends on a small number of other voxels. Our key idea is to synthesize a volume from a set of pre‐computed 3D candidates, each being a triple of interleaved 2D neighborhoods. We present an efficient algorithm to carefully select in a pre‐process only those candidates forming consistent triples. This significantly reduces the search space during subsequent synthesis. The result is a new parallel, spatially deterministic solid texture synthesis algorithm which runs efficiently on the GPU. Our approach generates high resolution solid textures on surfaces within seconds. Memory usage and synthesis time only depend on the output textured surface area. The GPU implementation of our method rapidly synthesizes new textures for the surfaces appearing when interactively breaking or cutting objects.
