Similar Documents
20 similar documents found (search time: 15 ms)
1.
In this paper, we present a practically robust method for computing foldover‐free volumetric mappings with hard linear constraints. Central to this approach is a projection algorithm that monotonically and efficiently decreases the distance from the mapping to the bounded conformal distortion mapping space. After projection, the conformal distortion of the updated mapping tends to be below the given bound, thereby significantly reducing foldovers. Since it is non‐trivial to define an optimal bound, we introduce a practical conformal distortion bound generation scheme to facilitate subsequent projections. By iteratively generating conformal distortion bounds and trying to project mappings into bounded conformal distortion spaces monotonically, our algorithm achieves high‐quality foldover‐free volumetric mappings with strong practical robustness and high efficiency. Compared with existing methods, our method computes mesh‐based and meshless volumetric mappings with no prescribed conformal distortion bounds. We demonstrate the efficacy and efficiency of our method through a variety of geometric processing tasks.
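A minimal sketch of the kind of per‐element check such a projection loop relies on: measuring the conformal distortion of each tetrahedron's mapping Jacobian and flagging elements that are inverted or exceed the current bound. The sigma_max/sigma_min ratio used here is one common distortion measure, not necessarily the paper's exact definition, and the function names are illustrative.

```python
import numpy as np

def tet_jacobian(rest, deformed):
    """Jacobian of the affine map taking a rest tetrahedron to its deformed image.
    rest, deformed: (4, 3) arrays of vertex positions."""
    E_rest = (rest[1:] - rest[0]).T       # 3x3 edge matrix of the rest tet
    E_def = (deformed[1:] - deformed[0]).T
    return E_def @ np.linalg.inv(E_rest)

def conformal_distortion(J):
    """sigma_max / sigma_min of the Jacobian, one common conformal distortion measure."""
    s = np.linalg.svd(J, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]

def violating_tets(rest_tets, def_tets, bound):
    """Indices of tets that are inverted (foldover) or exceed the distortion bound."""
    bad = []
    for i, (r, d) in enumerate(zip(rest_tets, def_tets)):
        J = tet_jacobian(r, d)
        if np.linalg.det(J) <= 0.0 or conformal_distortion(J) > bound:
            bad.append(i)
    return bad
```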

2.
We introduce an interactive tool for novice users to design mechanical objects made of 2.5D linkages. Users simply draw the shape of the object and a few key poses of its multiple moving parts. Our approach automatically generates a one‐degree‐of‐freedom linkage that connects the fixed and moving parts, such that the moving parts traverse all input poses in order without any collision with the fixed and other moving parts. In addition, our approach avoids common linkage defects and favors compact linkages and smooth motion trajectories. Finally, our system automatically generates the 3D geometry of the object and its links, allowing the rapid creation of a physical mockup of the designed object.

3.
We present a novel method to compute bijective PolyCube‐maps with low isometric distortion. Given a surface and its pre‐axis‐aligned shape that is not an exact PolyCube shape, our algorithm proceeds in two steps: (i) construct a PolyCube shape to approximate the pre‐axis‐aligned shape; and (ii) generate a bijective, low isometric distortion mapping between the constructed PolyCube shape and the input surface. The PolyCube construction is formulated as a constrained optimization problem, where the objective is the number of corners in the constructed PolyCube, and the constraint is to bound the approximation error between the constructed PolyCube and the input pre‐axis‐aligned shape while ensuring topological validity. A novel erasing‐and‐filling solver is proposed to solve this challenging problem. Central to the algorithm for computing bijective PolyCube‐maps is a quad mesh optimization process that projects the constructed PolyCube onto the input surface with high‐quality quads. We demonstrate the efficacy of our algorithm on a data set containing 300 closed meshes. Compared to state‐of‐the‐art methods, our method achieves higher practical robustness and lower mapping distortion.

4.
Virtual cities are in demand for computer games, movies, and urban planning, but it takes a lot of time to create the numerous 3D building models they require. Procedural modeling has become popular in recent years to overcome this issue, but creating a grammar to get a desired output is difficult and time consuming even for expert users. In this paper, we present an interactive tool that allows users to automatically generate such a grammar from a single image of a building. The user selects a photograph and highlights the silhouette of the target building as input to our method. Our pipeline automatically generates the building components, from large‐scale building mass to fine‐scale window and door geometry. Each stage of our pipeline combines convolutional neural networks (CNNs) and optimization to select and parameterize procedural grammars that reproduce the building elements of the picture. In the first stage, our method jointly estimates camera parameters and building mass shape. Once known, the building mass enables the rectification of the façades, which are given as input to the second stage that recovers the façade layout. This layout allows us to extract individual windows and doors that are subsequently fed to the last stage of the pipeline that selects procedural grammars for windows and doors. Finally, the grammars are combined to generate a complete procedural building as output. We devise a common methodology to make each stage of this pipeline tractable. This methodology consists of simplifying the input image to match the visual appearance of synthetic training data, and using optimization to refine the parameters estimated by the CNNs. We used our method to generate a variety of procedural models of buildings from existing photographs.

5.
We propose an approach for temporally coherent patch‐based texture synthesis on the free surface of fluids. Our approach is applied as a post‐process, using the surface and velocity field from any fluid simulator. We apply the texture from the exemplar through multiple local mesh patches fitted to the surface and mapped to the exemplar. Our patches are constructed from the fluid free surface by taking a subsection of the free surface mesh. As such, they are initially very well adapted to the fluid's surface, and can later deform according to the free surface velocity field, allowing a greater ability to represent surface motion than rigid or 2D grid‐based patches. From one frame to the next, the patch centers and surrounding patch vertices are advected according to the velocity field. We seek to maintain a Poisson disk distribution of patches, and following advection, the Poisson disk criterion determines where to add new patches and which patches should be flagged for removal. The removal considers the local number of patches: in regions containing too many patches, we accelerate the temporal removal. This reduces the number of patches while still meeting the Poisson disk criterion. Reducing areas with too many patches speeds up the computation and avoids patch‐blending artifacts. The final step of our approach creates the overall texture in an atlas where each texel is computed from the patches using a contrast‐preserving blending function. Our tests show that the approach works well on free surfaces undergoing significant deformation and topological changes. Furthermore, we show that our approach provides good results for many fluid simulation scenarios, and with many texture exemplars. We also confirm that the optical flow from the resulting texture matches the fluid velocity field. Overall, our approach compares favorably against recent work in this area.
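As an illustration of the Poisson disk maintenance step described above, the sketch below (a simplified assumption‐laden version, not the authors' implementation) flags crowded patch centers for removal after advection and seeds new centers on uncovered surface samples; radius stands for the Poisson disk radius and the acceleration of temporal removal is omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def maintain_poisson_disk(centers, surface_pts, radius):
    """Keep advected patch centers roughly a Poisson-disk radius apart,
    then seed new centers wherever the surface is no longer covered."""
    # 1) Flag patches that crowd each other (closer than the Poisson-disk radius).
    keep = np.ones(len(centers), dtype=bool)
    if len(centers) > 1:
        for i, j in cKDTree(centers).query_pairs(radius):
            if keep[i] and keep[j]:
                keep[j] = False                  # greedily drop one offender
    kept = centers[keep]

    # 2) Insert new patches on surface samples that no existing patch covers.
    kept_tree = cKDTree(kept) if len(kept) else None
    new = []
    for p in surface_pts:
        covered = kept_tree is not None and kept_tree.query(p)[0] < radius
        if not covered and all(np.linalg.norm(p - q) >= radius for q in new):
            new.append(p)
    return kept, np.asarray(new).reshape(-1, 3)
```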

6.
Image composition extracts the content of interest (COI) from a source image and blends it into a target image to generate a new image. In the majority of existing works, the COI is manually extracted and then overlaid on top of the target image. However, in practice, it is often necessary to deal with situations in which the COI is partially occluded by the target image content. In this regard, both tasks of extracting the COI and cropping its occluded part require intensive user interactions, which are laborious and seriously reduce the composition efficiency. This paper addresses the aforementioned challenges by proposing an efficient image composition method. First, we extract the semantic contents of the images by using state‐of‐the‐art deep learning methods. Therefore, the COI can be selected with clicks only, which greatly reduces the required user interactions. Second, according to the user's operations (such as translation or scaling) on the COI, we can effectively infer the occlusion relationships between the COI and the contents of the target image. Thus, the COI can be adaptively embedded into the target image without concern about cropping its occluded part. Therefore, the procedures of content extraction and occlusion handling can be significantly simplified, and the working efficiency is remarkably improved. Experimental results show that, compared to existing works, our method can reduce the number of user interactions to approximately one‐tenth and increase the speed of image composition by more than ten times.

7.
Displacement mapping is routinely used to add geometric details in a fast and easy‐to‐control way, both in offline rendering as well as recently in interactive applications such as games. However, it went largely unnoticed (with the exception of McGuire and Whitson [MW08]) that, when applying displacement mapping to a surface with a low‐distortion parametrization, this parametrization becomes distorted because the displacement changes the geometry. Typical resulting artifacts are “rubber band”‐like distortion patterns in areas of strong displacement change, where a small isotropic area in texture space is mapped to a large anisotropic area in world space. We describe a fast, fully GPU‐based two‐step procedure to resolve this problem. First, a correction deformation is computed from the displacement map. Second, two variants to apply this correction when computing displacement mapping are proposed. The first variant is backward‐compatible and can resolve the artifact in any rendering pipeline without modifying it and without requiring additional computation at render time, but only works for bijective parametrizations. The second variant works for more general parametrizations, but requires modifying the rendering code and incurs a very small computational overhead.

8.
We propose an efficient method for topology‐preserving simplification of medial axes of 3D models. Existing methods either cannot preserve the topology during medial axis simplification or are geometrically inaccurate or computationally expensive. To tackle these issues, we restrict our topology‐checking to the areas around the topological holes to avoid unnecessary checks in other areas. Our algorithm maintains high precision even when the medial axis is simplified down to very few vertices. Furthermore, we parallelize the medial axis simplification procedure to enhance the performance significantly. Experimental results show that our method preserves the topology with high efficiency and is much superior to existing methods in terms of topology preservation, accuracy and performance.

9.
Semantic surface decomposition (SSD) facilitates various geometry processing and product re‐design tasks. Filter‐based techniques are widely used to achieve SSD, but they often lead to either under‐fitting or over‐fitting of the surface. In this paper, we propose a reliable rolling‐guided point normal filtering method to decompose textures from a captured point cloud surface. Our method is built on the geometric assumption that 3D surfaces consist of an underlying shape (US) and a variety of bump ups and downs (BUDs) on the US. We have three core contributions. First, by considering the BUDs as surface textures, we present a RANSAC‐based sub‐neighborhood detection scheme to distinguish the US and the textures. Second, to better preserve the US (especially the prominent structures), we introduce a patch shift scheme to estimate the guidance normal for feeding the rolling‐guided filter. Third, we formulate a new position updating scheme to alleviate the common uneven distribution of points. Both visual and numerical experiments demonstrate that our method is comparable to state‐of‐the‐art methods in terms of the robustness of texture removal and the effectiveness of underlying shape preservation.
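For readers unfamiliar with rolling‐guided filtering, the sketch below shows the basic iteration on point normals using plain joint bilateral weights; it omits the paper's RANSAC‐based sub‐neighborhood detection, patch shift guidance and position update, so it should be read only as a baseline illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def rolling_guidance_normals(points, normals, sigma_s, sigma_r, iters=4):
    """Rolling-guidance filtering of point-cloud normals: each pass filters the
    ORIGINAL normals, weighted by the previous pass's result (the guidance)."""
    tree = cKDTree(points)
    guidance = np.zeros_like(normals)            # first pass acts as pure Gaussian smoothing
    for _ in range(iters):
        filtered = np.empty_like(normals)
        for i, p in enumerate(points):
            idx = tree.query_ball_point(p, 3.0 * sigma_s)
            d = np.linalg.norm(points[idx] - p, axis=1)
            ws = np.exp(-d**2 / (2.0 * sigma_s**2))          # spatial weight
            dn = np.linalg.norm(guidance[idx] - guidance[i], axis=1)
            wr = np.exp(-dn**2 / (2.0 * sigma_r**2))         # guidance (range) weight
            n = ((ws * wr)[:, None] * normals[idx]).sum(axis=0)
            filtered[i] = n / (np.linalg.norm(n) + 1e-12)
        guidance = filtered                      # result guides the next iteration
    return guidance
```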

10.
This paper proposes a scale‐adaptive filtering method to improve the performance of structure‐preserving texture filtering for image smoothing. With classical texture filters, it is usually challenging to smooth texture at multiple scales while preserving salient structures in an image. We address this issue within the concept of adaptive bilateral filtering, where the scales of the Gaussian range kernels are allowed to vary from pixel to pixel. Based on direction‐wise statistics, our method distinguishes texture from structure effectively, identifies the appropriate scope around a pixel to be smoothed, and thus infers an optimal smoothing scale for it. By filtering an image with varying‐scale kernels, the image is smoothed adaptively according to the distribution of texture. Experimental results show that our scheme needs fewer iterations and boosts texture filtering performance in terms of preserving geometric structures at multiple scales, even after aggressive smoothing of the original image.
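A hedged sketch of the underlying idea, bilateral filtering with a range‐kernel scale that varies per pixel, is given below; the per‐pixel sigma_r_map is assumed to come from some texture/structure analysis, which is not reproduced here, and the brute‐force loops are for clarity rather than speed.

```python
import numpy as np

def adaptive_bilateral(img, sigma_s, sigma_r_map, radius):
    """Bilateral filter with a per-pixel range-kernel scale.
    img: 2D float array; sigma_r_map: per-pixel range sigma (same shape)."""
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))   # fixed spatial kernel
    pad = np.pad(img, radius, mode='edge')
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(patch - img[y, x])**2 / (2.0 * sigma_r_map[y, x]**2))
            wts = spatial * rng
            out[y, x] = (wts * patch).sum() / wts.sum()
    return out
```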

11.
Robust and efficient rendering of complex lighting effects, such as caustics, remains a challenging task. While algorithms like vertex connection and merging can render such effects robustly, their significant overhead over a simple path tracer is not always justified and, as we show in this paper, not necessary either. In current rendering solutions, caustics often require the user to enable a specialized algorithm, usually a photon mapper, and hand‐tune its parameters. But even with carefully chosen parameters, photon mapping may still trace many photons that the path tracer could sample well enough, or, even worse, that are not visible at all. Our goal is robust, yet lightweight, caustics rendering. To that end, we propose a technique to identify and focus computation on the photon paths that offer significant variance reduction over samples from a path tracer. We apply this technique in a rendering solution combining path tracing and photon mapping. The photon emission is automatically guided towards regions where the photons are useful, i.e., provide substantial variance reduction for the currently rendered image. Our method achieves better photon densities with fewer light paths (and thus photons) than emission guiding approaches based on visual importance. In addition, we automatically determine an appropriate number of photons for a given scene, and the algorithm gracefully degenerates to pure path tracing for scenes that do not benefit from photon mapping.
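The guiding idea can be illustrated with a tiny sketch that splits a photon budget across emission cells in proportion to an estimated per‐cell usefulness (variance reduction); the estimation of that usefulness, which is the paper's contribution, is assumed to be given.

```python
import numpy as np

def sample_guided_emission(usefulness, n_photons, rng=None):
    """Distribute a photon budget over emission cells proportionally to an
    estimated per-cell variance reduction ('usefulness'). Returns per-cell counts."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.maximum(np.asarray(usefulness, float), 0.0)
    if p.sum() == 0.0:
        # No region benefits from photons: emit none (pure path tracing).
        return np.zeros_like(p, dtype=int)
    return rng.multinomial(n_photons, p / p.sum())
```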

12.
Feature curves on 3D shapes provide important hints about significant parts of the geometry and reveal their underlying structure. However, when we process real‐world data, automatically detected feature curves are affected by measurement uncertainty, missing data, and sampling resolution, leading to noisy, fragmented, and incomplete feature curve networks. These artifacts make further processing unreliable. In this paper we analyze the global co‐occurrence information in noisy feature curve networks to fill in missing data and suppress weakly supported feature curves. For this we propose an unsupervised approach to find meaningful structure within the incomplete data by detecting multiple occurrences of feature curve configurations (co‐occurrence analysis). We cluster and merge these into feature curve templates, which we leverage to identify strongly supported feature curve segments as well as to complete missing data in the feature curve network. In the presence of significant noise, previous approaches had to resort to user input, while our method performs fully automatic feature curve co‐completion. Finding feature re‐occurrences, however, is challenging, since naïve feature curve comparison fails in this setting due to fragmentation and partial overlaps of curve segments. To tackle this problem we propose a robust method for partial curve matching. This provides us with the means to apply symmetry detection methods to identify co‐occurring configurations. Finally, Bayesian model selection enables us to detect and group re‐occurrences that describe the data well and with low redundancy.

13.
We introduce a bidirectional reflectance distribution function (BRDF) model for the rendering of materials that exhibit hazy reflections, whereby the specular reflections appear to be flanked by a surrounding halo. The focus of this work is on artistic control and ease of implementation for real‐time and off‐line rendering. We propose relying on a composite material based on a pair of arbitrary BRDF models; however, instead of controlling their physical parameters, we expose perceptual parameters inspired by visual experiments [VBF17]. Our main contribution then consists of a mapping from perceptual to physical parameters that ensures the resulting composite BRDF is valid in terms of reciprocity, positivity and energy conservation. The immediate benefit of our approach is to provide direct artistic control over both the intensity and extent of the haze effect, which is not only necessary for editing purposes, but also essential to vary haziness spatially over an object's surface. Our solution is also simple to implement, as it requires no new importance sampling strategy and relies on existing BRDF models. Such simplicity is key to approximating the method for editing hazy gloss in real time and for compositing.

14.
An appearance model for materials covered with massive collections of special‐effect pigments has to take into account both high‐frequency spatial details (e.g., glints) and wave‐optical effects (e.g., iridescence) due to thin‐film interference. However, either phenomenon is challenging to characterize and simulate in a physically accurate way. Capturing these fascinating effects in a unified framework is even harder, as the normal distribution function and the reflectance term are highly correlated and cannot be treated separately. In this paper, we propose a multi‐scale BRDF model for reproducing the main visual effects generated by the discrete assembly of special‐effect pigments, enabling a smooth transition from fine‐scale surface details to large‐scale iridescent patterns. We demonstrate that the wavelength‐dependent reflectance inside the pixel's footprint follows a Gaussian distribution according to the central limit theorem, and is closely related to the distribution of the thin‐film's thickness. We efficiently determine the mean and the variance of this Gaussian distribution for each pixel; their closed‐form expressions can be derived by assuming that the thin‐film's thickness is uniformly distributed. To validate its effectiveness, the proposed model is compared against previous methods and photographs of actual materials. Furthermore, since our method does not require any scene‐dependent precomputation, the distribution of thickness is allowed to be spatially varying.
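To make the thickness‐to‐reflectance statistics concrete, the sketch below uses a simplified two‐beam interference model (the reflection amplitudes r1, r2, film index and incidence cosine are illustrative parameters) and derives the closed‐form mean and variance of the reflectance when the thickness is uniform over an interval; this mirrors the kind of derivation described above but is not the paper's full model.

```python
import numpy as np

def thin_film_R(d, lam, n_film=1.5, cos_t=1.0, r1=0.2, r2=0.2):
    """Two-beam thin-film reflectance for thickness d and wavelength lam (same units)."""
    phase = 4.0 * np.pi * n_film * cos_t * d / lam
    return r1**2 + r2**2 + 2.0 * r1 * r2 * np.cos(phase)

def footprint_stats(lam, d_min, d_max, n_film=1.5, cos_t=1.0, r1=0.2, r2=0.2):
    """Closed-form mean and variance of R when thickness is uniform on [d_min, d_max]."""
    k = 4.0 * np.pi * n_film * cos_t / lam
    span = d_max - d_min
    mean_cos = (np.sin(k * d_max) - np.sin(k * d_min)) / (k * span)
    mean_cos2 = 0.5 + (np.sin(2 * k * d_max) - np.sin(2 * k * d_min)) / (4 * k * span)
    var_cos = mean_cos2 - mean_cos**2
    mean_R = r1**2 + r2**2 + 2.0 * r1 * r2 * mean_cos
    var_R = (2.0 * r1 * r2)**2 * var_cos
    return mean_R, var_R
```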

15.
16.
Despite recent advances in surveying techniques, publicly available Digital Elevation Models (DEMs) of terrains are low‐resolution except for selected places on Earth. In this paper we present a new method to turn low‐resolution DEMs into plausible and faithful high‐resolution terrains. Unlike other approaches for terrain synthesis/amplification (fractal noise, hydraulic and thermal erosion, multi‐resolution dictionaries), we benefit from high‐resolution aerial images to produce highly‐detailed DEMs mimicking the features of the real terrain. We explore different architectures for Fully Convolutional Neural Networks to learn upsampling patterns for DEMs from detailed training sets (high‐resolution DEMs and orthophotos), yielding up to one order of magnitude more resolution. Our comparative results show that our method outperforms competing data amplification approaches in terms of elevation accuracy and terrain plausibility.
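A minimal sketch of the general approach, a fully convolutional network that refines a bilinearly upsampled DEM using the co‐registered orthophoto as guidance, is shown below; the specific architecture, width and scale factor are illustrative assumptions, not the configurations explored in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DemUpsampler(nn.Module):
    """Upsample a low-res DEM and refine it with a small residual FCN that also
    sees the high-res RGB orthophoto."""
    def __init__(self, scale=8, width=64):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(1 + 3, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, dem_lr, ortho_hr):
        dem_up = F.interpolate(dem_lr, scale_factor=self.scale,
                               mode='bilinear', align_corners=False)
        residual = self.net(torch.cat([dem_up, ortho_hr], dim=1))
        return dem_up + residual       # predict high-frequency detail as a residual

# Example: amplify a 32x32 DEM tile to 256x256 guided by a 256x256 orthophoto.
model = DemUpsampler(scale=8)
dem_hr = model(torch.randn(1, 1, 32, 32), torch.randn(1, 3, 256, 256))
```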

17.
We present a versatile technique to convert textures with tristimulus colors into the spectral domain, allowing such content to be used in modern rendering systems. Our method is based on the observation that suitable reflectance spectra can be represented using a low‐dimensional parametric model that is intrinsically smooth and energy‐conserving, which leads to significant simplifications compared to prior work. The resulting spectral textures are compact and efficient: storage requirements are identical to standard RGB textures, and as few as six floating point instructions are required to evaluate them at any wavelength. Our model is the first spectral upsampling method to achieve zero error on the full sRGB gamut. The technique also supports large‐gamut color spaces, and can be vectorized effectively for use in rendering systems that handle many wavelengths at once.
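One low‐dimensional parametric family matching this description (three coefficients per texel, intrinsically smooth, bounded to [0, 1], a handful of arithmetic operations per wavelength) is a sigmoid applied to a quadratic in wavelength; the sketch below assumes that form and should be read as an illustration rather than the paper's exact definition.

```python
import numpy as np

def eval_spectrum(c0, c1, c2, lam_nm):
    """Evaluate a reflectance spectrum stored as three coefficients per texel.
    The sigmoid of a quadratic keeps the result smooth and inside [0, 1]."""
    x = (c0 * lam_nm + c1) * lam_nm + c2           # quadratic in wavelength (two fused multiply-adds)
    return 0.5 * x / np.sqrt(1.0 + x * x) + 0.5    # smooth sigmoid onto [0, 1]
```

Because the evaluation is a few arithmetic operations per wavelength and the storage is three scalars per color channel, such a representation can be dropped into a texture pipeline with the same memory footprint as RGB, which is consistent with the claims above.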

18.
We consider the problem of non‐rigid shape matching using the functional map framework. Specifically, we analyze a commonly used approach for regularizing functional maps, which consists in penalizing the failure of the unknown map to commute with the Laplace‐Beltrami operators on the source and target shapes. We show that this approach has certain undesirable fundamental theoretical limitations, and can be undefined even for trivial maps in the smooth setting. Instead, we propose a novel, theoretically well‐justified approach for regularizing functional maps, by using the notion of the resolvent of the Laplacian operator. In addition, we provide a natural one‐parameter family of regularizers that can be easily tuned depending on the expected approximate isometry of the input shape pair. We show on a wide range of shape correspondence scenarios that our novel regularization leads to an improvement in the quality of the estimated functional maps, and ultimately of the pointwise correspondences, both before and after commonly used refinement techniques.
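For orientation, the sketch below evaluates the standard Laplacian‐commutativity energy in the reduced spectral basis and, optionally, a bounded resolvent‐style reweighting of the eigenvalues; the reweighting g(l) = l / (l + gamma) is an illustrative stand‐in for the paper's resolvent construction, not its exact definition.

```python
import numpy as np

def commutativity_penalty(C, evals_src, evals_tgt, gamma=None):
    """|| C diag(a) - diag(b) C ||_F^2 for a functional map C (k_tgt x k_src) in the
    reduced Laplace-Beltrami eigenbasis. With gamma set, eigenvalues are passed
    through a bounded reweighting g(l) = l / (l + gamma) before the comparison."""
    a = np.asarray(evals_src, float)   # source eigenvalues (length k_src)
    b = np.asarray(evals_tgt, float)   # target eigenvalues (length k_tgt)
    if gamma is not None:
        a, b = a / (a + gamma), b / (b + gamma)
    R = C * a[None, :] - b[:, None] * C   # C diag(a) - diag(b) C
    return float(np.sum(R**2))
```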

19.
20.
We present a procedural method for authoring synthetic tectonic planets. Instead of relying on computationally demanding physically‐based simulations, we capture the fundamental phenomena in a procedural method that faithfully reproduces large‐scale planetary features generated by the movement and collision of the tectonic plates. We approximate complex phenomena such as plate subduction or collisions to deform the lithosphere, including the continental and oceanic crusts. The user can control the movement of the plates, which dynamically evolve and generate a variety of landforms such as continents, oceanic ridges, large‐scale mountain ranges or island arcs. Finally, we amplify the large‐scale planet model with either procedurally‐defined or real‐world elevation data to synthesize coherent detailed reliefs. Our method allows the user to control the evolution of an entire planet interactively, and to trigger specific events such as catastrophic plate rifting.
