Similar Literature
20 similar documents found (search time: 156 ms)
1.
Symmetry is a common characteristic of natural and man‐made objects. Its ubiquitous nature can be exploited to facilitate the analysis and processing of computational representations of real objects. In particular, in computer graphics, the detection of symmetries in 3D geometry has enabled a number of applications in modeling and reconstruction. However, symmetry detection in incomplete geometry remains a challenging task. In this paper, we propose a vote‐based approach to detect symmetry in 3D shapes, with special interest in models with large missing parts. Our algorithm generates a set of candidate symmetries by matching local maxima of a surface function based on heat diffusion in local domains, which guarantees robustness to missing data. To deal with local perturbations, we propose a multi‐scale surface function that is useful for selecting a set of distinctive points over which the approximate symmetries are defined. In addition, we introduce a vote‐based scheme that is aware of the partiality and therefore reduces the number of false‐positive votes for the candidate symmetries. We show the effectiveness of our method on a varied set of 3D shapes with different levels of partiality. Furthermore, we show the applicability of our algorithm to the repair and completion of challenging reassembled objects in the context of cultural heritage.
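The voting idea can be pictured with a much simplified sketch (not the authors' implementation): pairs of points with similar local descriptors each vote for the reflection plane that maps one point onto the other, and dominant planes accumulate in a Hough-style histogram. The descriptor below is a toy stand-in (local covariance eigenvalues) for the paper's multi-scale heat-diffusion surface function, and all parameter values are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def toy_descriptor(points, k=16):
    """Toy per-point descriptor: sorted, normalized eigenvalues of the local
    covariance (a stand-in for the paper's heat-diffusion surface function)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    desc = np.empty((len(points), 3))
    for i, nb in enumerate(idx):
        desc[i] = np.sort(np.linalg.eigvalsh(np.cov(points[nb].T)))[::-1]
    return desc / (desc.sum(axis=1, keepdims=True) + 1e-12)

def vote_reflection_plane(points, desc, n_pairs=5000, desc_tol=0.02, bin_size=0.05):
    """Hough-style voting: each pair of similar points votes for the plane that
    reflects one onto the other; the most-voted bin is the candidate symmetry."""
    votes = {}
    rng = np.random.default_rng(0)
    for i, j in rng.choice(len(points), size=(n_pairs, 2)):
        if i == j or np.linalg.norm(desc[i] - desc[j]) > desc_tol:
            continue
        n = points[i] - points[j]
        length = np.linalg.norm(n)
        if length < 1e-9:
            continue
        n /= length
        if n[np.argmax(np.abs(n))] < 0:                # canonical normal orientation
            n = -n
        d = float(n @ (points[i] + points[j]) / 2.0)   # plane offset at the midpoint
        key = tuple(np.round(np.append(n, d) / bin_size).astype(int))
        votes[key] = votes.get(key, 0) + 1
    best = max(votes, key=votes.get)
    normal = np.array(best[:3], dtype=float) * bin_size
    return normal / np.linalg.norm(normal), best[3] * bin_size, votes[best]
```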

2.
With the rapid advancement of 3D scanning devices, large and complicated 3D shapes are becoming ubiquitous and require a large amount of resources to store and transmit efficiently. This makes shape compression a much‐needed technique for reducing data‐transmission latency. Existing shape compression methods can achieve very low bit‐rates by sacrificing shape quality, but none of them guarantees the preservation of the salient feature lines that users care about. In addition, many 3D shapes come with parametric information for texture mapping purposes. In this paper we describe a spectral method to compress geometric shapes equipped with arbitrary valid parametric information. It guarantees to preserve user‐specified feature lines while achieving a high compression ratio. By applying spectral shape analysis – Dirichlet Manifold Harmonics – in the 2D parametric domain, this method provides a progressive compression mechanism to trade off between bit‐rate and shape quality. Experiments show that this method provides very low bit‐rates with high shape quality and still guarantees the preservation of user‐specified feature lines.
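To give a rough sense of the spectral idea, here is a generic graph-Laplacian sketch (not the paper's Dirichlet Manifold Harmonics in the parametric domain, and without the feature-line constraints): project the vertex coordinates onto the lowest-frequency eigenvectors of a mesh Laplacian and keep only those coefficients; transmitting more coefficients progressively refines the reconstruction.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def uniform_laplacian(n_vertices, edges):
    """Combinatorial (uniform) graph Laplacian built from the mesh edge list."""
    i, j = np.asarray(edges).T
    w = np.ones(len(i))
    A = sp.coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])),
                      shape=(n_vertices, n_vertices)).tocsr()
    return sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

def spectral_compress(vertices, edges, k=64):
    """Keep only the k lowest-frequency spectral coefficients of the geometry."""
    L = uniform_laplacian(len(vertices), edges)
    # shift-invert around a tiny negative sigma returns the smallest eigenpairs
    _, basis = eigsh(L, k=k, sigma=-1e-6, which='LM')
    coeffs = basis.T @ vertices            # k x 3 spectral coefficients to transmit
    return coeffs, basis

def spectral_decompress(coeffs, basis):
    """Approximate vertex positions from the transmitted coefficients."""
    return basis @ coeffs
```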

3.
Noisy volumetric details such as clouds, ground, plaster, bark, and roughcast are frequently encountered in nature and contribute significantly to the realism of outdoor scenes. We introduce a new interactive approach that eases the creation of procedural representations of “stochastic” volumetric details from a single example photograph. Instead of attempting to reconstruct an accurate geometric representation from the photograph, we use a stochastic multi‐scale approach that fits the parameters of a multi‐layered noise‐based 3D deformation model, using a multi‐resolution filter‐bank error metric. Once computed, visually similar details can be applied to arbitrary objects with a high degree of visual realism, since lighting and parallax effects are naturally taken into account. Our approach is inspired by image‐based techniques. In practice, the user supplies a photograph of an object covered by noisy details, a corresponding coarse approximation of the shape of this object, and an estimated lighting condition (generally a light source direction). Our system then determines the corresponding noise‐based representation as well as diffuse, ambient, specular and semi‐transparency reflectance parameters. The resulting details are fully procedural and, as such, have the advantage of extreme compactness, while they can be infinitely extended without repetition in order to cover huge surfaces.

4.
We introduce a generative model of part‐segmented 3D objects: the shape variational auto‐encoder (ShapeVAE). The ShapeVAE describes a joint distribution over the existence of object parts, the locations of a dense set of surface points, and the surface normals associated with these points. Our model makes use of a deep encoder‐decoder architecture that leverages the part‐decomposability of 3D objects to embed high‐dimensional shape representations and sample novel instances. Given an input collection of part‐segmented objects with dense point correspondences, the ShapeVAE is capable of synthesizing novel, realistic shapes and, by performing conditional inference, enables imputation of missing parts or surface normals. In addition, by generating both points and surface normals, our model allows for the use of powerful surface‐reconstruction methods for mesh synthesis. We provide a quantitative evaluation of the ShapeVAE on shape‐completion and test‐set log‐likelihood tasks and demonstrate that the model performs favourably against strong baselines. We demonstrate qualitatively that the ShapeVAE produces plausible shape samples and that it captures a semantically meaningful shape embedding. In addition, we show that the ShapeVAE facilitates mesh reconstruction by sampling consistent surface normals.
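For readers unfamiliar with the underlying machinery, a minimal PyTorch sketch of a plain VAE over flattened point-plus-normal vectors follows; it illustrates only the generic encoder/decoder, reparameterization, and loss, not the ShapeVAE's part-aware architecture or conditional-inference scheme, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyShapeVAE(nn.Module):
    """Plain VAE over flattened (points + normals) vectors; a sketch only."""
    def __init__(self, in_dim, latent_dim=64, hidden=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    """Reconstruction term plus KL divergence to the standard-normal prior."""
    rec = F.mse_loss(recon, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# usage sketch: x holds a batch of flattened shapes (points concatenated with normals)
# model = TinyShapeVAE(in_dim=x.shape[1])
# recon, mu, logvar = model(x); loss = vae_loss(recon, x, mu, logvar)
```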

5.
Large 3D asset databases are critical for designing virtual worlds, and using them effectively requires techniques for efficient querying and navigation. One important form of query is search by style compatibility: given a query object, find others that would be visually compatible if used in the same scene. In this paper, we present a scalable, learning‐based approach for solving this problem that is designed for use with real‐world 3D asset databases; we conduct experiments on 121 3D asset packages containing around 4000 3D objects from the Unity Asset Store. By leveraging the structure of the object packages, we introduce a technique to synthesize training labels for metric learning that work as well as human labels. These labels can grow exponentially with the number of objects, allowing our approach to scale to large real‐world 3D asset databases without the need for expensive human training labels. We use these synthetic training labels in a metric learning model that analyzes the in‐engine rendered appearance of an object—combining geometry, material, and texture—whereas prior work considers only object geometry, or disjoint geometry and texture features. Through an ablation experiment, we find that using this representation yields better results than using renders that lack texture, material, or both.
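The label-synthesis idea might be sketched roughly as follows (an assumption-laden toy, not the paper's pipeline): objects from the same asset package are treated as style-compatible, so triplets of (anchor, same-package positive, other-package negative) can be generated without human annotation and fed to a standard triplet-margin loss over embeddings of rendered appearance; `embedding_net` and the feature tensor are placeholders.

```python
import random
import torch
import torch.nn as nn

def synthesize_triplets(package_of, n_triplets=10000, seed=0):
    """package_of: list mapping object index -> package id.  Same-package pairs
    serve as 'compatible' positives, cross-package objects as negatives."""
    rng = random.Random(seed)
    by_pkg = {}
    for idx, pkg in enumerate(package_of):
        by_pkg.setdefault(pkg, []).append(idx)
    pkgs = [p for p, members in by_pkg.items() if len(members) >= 2]
    triplets = []
    for _ in range(n_triplets):
        p = rng.choice(pkgs)
        anchor, positive = rng.sample(by_pkg[p], 2)
        q = rng.choice([x for x in by_pkg if x != p])
        negative = rng.choice(by_pkg[q])
        triplets.append((anchor, positive, negative))
    return triplets

def triplet_step(embedding_net, rendered_features, triplets, margin=0.2):
    """One metric-learning loss evaluation over the synthesized triplets."""
    a, p, n = (torch.tensor([t[i] for t in triplets]) for i in range(3))
    emb = embedding_net(rendered_features)          # (num_objects, embed_dim)
    return nn.TripletMarginLoss(margin=margin)(emb[a], emb[p], emb[n])
```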

6.
Computational manufacturing technologies such as 3D printing hold the potential for creating objects with previously undreamed‐of combinations of functionality and physical properties. Human designers, however, typically cannot exploit the full geometric (and often material) complexity of which these devices are capable. This STAR examines recent systems developed by the computer graphics community in which designers specify higher‐level goals ranging from structural integrity and deformation to appearance and aesthetics, with the final detailed shape and manufacturing instructions emerging as the result of computation. It summarizes frameworks for interaction, simulation, and optimization, and documents the range of general objectives and domain‐specific goals that have been considered. An important unifying thread in this analysis is that different underlying geometric and physical representations are necessary for different tasks: we document over a dozen classes of representations that have been used for fabrication‐aware design in the literature. We analyze how these classes possess obvious advantages for some needs but have also been used in creative ways to enable unexpected problem solutions.

7.
8.
We present the design of an interactive image‐based modeling tool that enables a user to quickly generate detailed, textured 3D models from a set of calibrated input images. Our main contribution is an intuitive user interface that is based entirely on simple 2D painting operations and does not require any technical expertise from the user or difficult pre‐processing of the input images. One central component of our tool is a GPU‐based multi‐view stereo reconstruction scheme, implemented as an incremental algorithm that runs in the background during user interaction so that the user does not notice any significant response delay.
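The interaction pattern described in the last sentence — heavy reconstruction running incrementally on a worker thread while the painting interface stays responsive — can be sketched generically (this is only the threading pattern, not the GPU multi-view stereo itself; `refine_step` is a placeholder for one incremental reconstruction pass).

```python
import threading
import time

class BackgroundReconstructor:
    """Run an incremental reconstruction loop on a worker thread and publish the
    latest intermediate result, so the interactive UI thread never blocks on it."""
    def __init__(self, refine_step):
        self.refine_step = refine_step          # callable: state -> improved state
        self.latest = None                      # most recent reconstruction result
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self, initial_state):
        self.latest = initial_state
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            self.latest = self.refine_step(self.latest)   # one incremental pass
            time.sleep(0)                                  # yield to other threads

    def stop(self):
        self._stop.set()
        self._thread.join()

# usage sketch: the UI reads reconstructor.latest whenever it needs to redraw
```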

9.
A recent trend in interactive modeling of 3D shapes from a single image is designing minimal interfaces, and accompanying algorithms, for modeling a specific class of objects. Expanding the range of shapes that existing minimal interfaces can model, we present an interactive image‐guided tool for modeling shapes made up of extruded parts. An extruded part is represented by extruding a closed planar curve, called the base, in the direction orthogonal to the base. To model each extruded part, the user only needs to sketch the projected base shape in the image. The main technical contribution is a novel optimization‐based approach for recovering the 3D normal of the base of an extruded object by exploiting both the geometric regularity of the sketched curve and the image content. We also develop a convenient interface for modeling multi‐part shapes and a method for optimizing the relative placement of the parts. Our tool is validated using synthetic data and tested on real‐world images.
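The base-normal recovery can be imagined roughly as follows (a simplified sketch using only a regularity term and assumed pinhole intrinsics K; the paper's optimization also exploits image content): back-project the sketched base polygon onto a candidate plane and choose the normal for which the lifted polygon has the most regular corner angles, here taken to mean angles closest to 90°.

```python
import numpy as np
from scipy.optimize import minimize

def lift_to_plane(pts2d, K, normal):
    """Back-project image points onto the plane n.X = 1 (depth fixed up to scale)."""
    rays = (np.linalg.inv(K) @ np.c_[pts2d, np.ones(len(pts2d))].T).T
    t = 1.0 / (rays @ normal)                 # ray parameter where the plane is hit
    return rays * t[:, None]

def regularity_energy(angles, pts2d, K):
    """Penalize lifted polygon corners that deviate from right angles."""
    theta, phi = angles                       # spherical parametrization of the normal
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    P = lift_to_plane(pts2d, K, n)
    m, energy = len(P), 0.0
    for i in range(m):
        a = P[(i - 1) % m] - P[i]
        b = P[(i + 1) % m] - P[i]
        c = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        energy += c ** 2                      # cos(corner angle)^2, zero at 90 degrees
    return energy

def recover_base_normal(pts2d, K):
    """Multi-start Nelder-Mead over the two spherical angles of the base normal."""
    starts = [(0.5, 0.0), (1.0, 1.5), (2.0, 3.0)]
    best = min((minimize(regularity_energy, x0, args=(pts2d, K), method='Nelder-Mead')
                for x0 in starts), key=lambda r: r.fun)
    theta, phi = best.x
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi), np.cos(theta)])
```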

10.
Deep neural networks provide a promising tool for incorporating semantic information in geometry processing applications. Unlike image and video processing, however, geometry processing requires handling unstructured geometric data, and thus data representation becomes an important challenge in this framework. Existing approaches tackle this challenge by converting point clouds, meshes, or polygon soups into regular representations using, e.g., multi‐view images, volumetric grids, or planar parameterizations. In each of these cases, geometric data representation is treated as a fixed pre‐process that is largely disconnected from the machine learning tool. In contrast, we propose to optimize for the geometric representation during the network learning process using a novel metric alignment layer. Our approach maps unstructured geometric data to a regular domain by minimizing the metric distortion of the map using the regularized Gromov–Wasserstein objective. This objective is parameterized by the metric of the target domain and is differentiable; thus, it can be easily incorporated into a deep network framework. Furthermore, the objective aims to align the metrics of the input and output domains, promoting consistent output for similar shapes. We show the effectiveness of our layer within a deep network trained for shape classification, demonstrating state‐of‐the‐art performance for nonrigid shapes.
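To give a feel for the objective outside a network, here is a sketch using the POT optimal-transport library (an assumption about its API; this is a standalone Gromov-Wasserstein coupling, not the paper's differentiable in-network layer): the coupling aligns the pairwise-distance metric of an unstructured point set with that of a regular 2D grid, yielding a map onto the regular domain.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def metric_align_to_grid(points, grid_res=16):
    """Couple an unstructured point set to a regular 2D grid by matching their
    intra-domain distance matrices with a Gromov-Wasserstein coupling."""
    # source metric: pairwise distances between the input points
    C1 = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    # target metric: pairwise distances on a regular grid in [0, 1]^2
    g = np.linspace(0.0, 1.0, grid_res)
    grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    C2 = np.linalg.norm(grid[:, None] - grid[None, :], axis=-1)
    p = np.full(len(points), 1.0 / len(points))     # uniform source weights
    q = np.full(len(grid), 1.0 / len(grid))         # uniform target weights
    # T[i, j] says how strongly point i is coupled to grid cell j
    T = ot.gromov.gromov_wasserstein(C1 / C1.max(), C2 / C2.max(), p, q, 'square_loss')
    # hard assignment: send each point to its most strongly coupled grid cell
    return grid[T.argmax(axis=1)], T
```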

11.
We introduce co‐variation analysis as a tool for modeling the way part geometries and configurations co‐vary across a family of man‐made 3D shapes. While man‐made 3D objects exhibit large geometric and structural variations, the geometry, structure, and configuration of their individual components usually do not vary independently from each other but in a correlated fashion. The size of the body of an airplane, for example, constrains the range of deformations its wings can undergo to ensure that the entire object remains a functionally valid airplane. These co‐variation constraints, which are often non‐linear, can be either physical, in which case they can be explicitly enumerated, or implicit in the design and style of the shape family. In this article, we propose a data‐driven approach that takes pre‐segmented 3D shapes with known component‐wise correspondences and learns how various geometric and structural properties of their components co‐vary across the set. We demonstrate, using a variety of 3D shape families, the utility of the proposed co‐variation analysis in various applications, including the exploration of 3D shape repositories and shape editing, where the propagation of deformations is guided by the co‐variation analysis. We also show that the framework can be used for context‐guided orientation of objects in 3D scenes.
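One simple way to picture the co-variation idea (a linear-PCA toy under strong assumptions; the paper handles non-linear correlations and richer structural properties): concatenate per-part parameters of every shape in the family into a vector, learn the principal co-variation subspace, and propagate a user edit of one part by projecting the edited vector back onto that subspace so the other parts follow the family's correlations.

```python
import numpy as np

class CoVariationModel:
    """Linear co-variation of concatenated per-part parameter vectors."""
    def __init__(self, n_components=4):
        self.k = n_components

    def fit(self, X):
        """X: (num_shapes, num_part_params) matrix for one shape family."""
        self.mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = Vt[:self.k]              # principal co-variation directions
        return self

    def propagate_edit(self, x_edited):
        """Project an edited parameter vector onto the learned subspace so that
        the remaining parts co-vary plausibly with the edit."""
        coeffs = (x_edited - self.mean) @ self.basis.T
        return self.mean + coeffs @ self.basis

# usage sketch: model = CoVariationModel().fit(X)
#               repaired = model.propagate_edit(x_with_stretched_wing)
```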

12.
Modeling realistic garments is essential for online shopping and many other applications, including virtual characters. Most existing methods require either a multi‐camera capture setup or a restricted mannequin pose. We address the garment modeling problem from a single input image. We design an all‐pose garment‐outline interpretation and a shading‐based detail modeling algorithm. Our method first estimates the mannequin pose and body shape from the input image. It then interprets the garment outline with an oriented facet determined by the mannequin pose to generate the initial 3D garment model. Shape details such as folds and wrinkles are modeled by shape‐from‐shading techniques to improve the realism of the garment model. Our method achieves result quality similar to prior methods from just a single image, significantly improving the flexibility of garment modeling.

13.
This paper presents a method that generates natural and intuitive deformations via direct manipulation and smooth interpolation for multi‐element 2D shapes. Observing that the structural relationships between different parts of a multi‐element 2D shape are important for capturing its feature semantics, we introduce a simple structure called a feature frame to represent such relationships. A constrained optimization is solved for shape manipulation to find optimal deformed shapes under user‐specified handle constraints. Based on the feature frame, local feature preservation and structural relationship maintenance are directly encoded into the objective function. Beyond deforming a given multi‐element 2D shape into a new one at each key frame, our method can automatically generate a sequence of natural intermediate deformations by interpolating the shapes between the key frames. The method is computationally efficient, allowing real‐time manipulation and interpolation while generating natural and visually plausible results.
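The handle-constrained optimization can be illustrated with a generic least-squares sketch (this uses plain differential coordinates on a uniform graph Laplacian with soft handle constraints, not the paper's feature-frame energy): preserve the rest shape's differential coordinates while pulling the handle vertices toward their user-specified targets.

```python
import numpy as np

def deform_with_handles(verts, edges, handles, targets, weight=100.0):
    """Least-squares 2D deformation: keep Laplacian (differential) coordinates
    close to the rest shape while softly satisfying handle constraints.

    verts   : (n, 2) rest vertex positions
    edges   : list of (i, j) index pairs
    handles : list of constrained vertex indices
    targets : (len(handles), 2) target positions"""
    n = len(verts)
    L = np.zeros((n, n))
    for i, j in edges:                        # uniform graph Laplacian
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    delta = L @ verts                         # rest differential coordinates
    C = np.zeros((len(handles), n))           # weighted handle constraint rows
    C[np.arange(len(handles)), handles] = weight
    A = np.vstack([L, C])
    b = np.vstack([delta, weight * np.asarray(targets, dtype=float)])
    new_verts, *_ = np.linalg.lstsq(A, b, rcond=None)
    return new_verts
```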

14.
A major challenge in generating high‐fidelity virtual environments (VEs) is providing realism at interactive rates. The high‐fidelity simulation of light and sound is still unachievable in real time, as such physical accuracy is computationally very demanding. Only recently has visual perception been used in high‐fidelity rendering to improve performance through a series of novel exploitations, for example rendering parts of the scene that the viewer is not currently attending to at a much lower quality without the difference being perceived. This paper investigates the effect spatialized directional sound has on the visual attention of a user towards rendered images. These perceptual effects are utilized in selective rendering pipelines via multi‐modal maps. The multi‐modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms with a series of fixed‐cost rendering functions, and are found to perform significantly better than image saliency maps naively applied to multi‐modal VEs.
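A multi-modal map of this kind can be pictured as a simple weighted blend (an illustrative sketch, not the paper's psychophysically validated maps; the screen-space position of the sound source is assumed to be projected elsewhere): boost an image saliency map around the location the directional sound draws attention to, and use the blended map as a per-pixel rendering-quality budget.

```python
import numpy as np

def multimodal_saliency(image_saliency, sound_px, sigma=60.0, audio_weight=0.5):
    """Blend an image saliency map with a Gaussian 'audio attention' map centred
    on the screen-space position of the directional sound.

    image_saliency : (H, W) array with values in [0, 1]
    sound_px       : (x, y) pixel position of the projected sound direction"""
    h, w = image_saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    audio = np.exp(-((xs - sound_px[0]) ** 2 + (ys - sound_px[1]) ** 2)
                   / (2.0 * sigma ** 2))
    blended = (1.0 - audio_weight) * image_saliency + audio_weight * audio
    return blended / blended.max()            # per-pixel rendering-quality weight
```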

15.
Feature detection in geometric datasets is a fundamental tool for solving shape matching problems such as partial symmetry detection. Traditional techniques usually employ a priori models, such as crease lines, that are not specific to the actual application. Our paper examines the idea of learning geometric features. We introduce a formal model for a class of linear feature constellations based on a Markov chain model and propose a novel, efficient algorithm for detecting a large number of features simultaneously. After a short user‐guided training stage, in which one or a few example lines are sketched directly onto the input data, our algorithm automatically finds all pieces of geometry similar to the marked areas. In particular, the algorithm is able to recognize larger classes of semantically similar but geometrically varying features, which is very difficult using unsupervised techniques. In a number of experiments, we apply our technique to point cloud data from 3D scanners. The algorithm is able to detect features with very low rates of false positives and negatives and to recognize broader classes of similar geometry (such as “windows” in a building scan) even from few training examples, thereby significantly improving over previous unsupervised techniques.

16.
We present the first 3D algorithm capable of answering the question: what would a Mandelbrot‐like set in the shape of a bunny look like? More concretely, can we find an iterated quaternion rational map whose potential field contains an isocontour with a desired shape? We show that it is possible to answer this question by casting it as a shape optimization that discovers novel, highly complex shapes. The problem can be written as an energy minimization, the optimization can be made practical by using an efficient method for gradient evaluation, and convergence can be accelerated by using a variety of multi‐resolution strategies. The resulting shapes are not invariant under common operations such as translation, and instead undergo intricate, non‐linear transformations.
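To make the objects the paper optimizes over more concrete, here is a plain escape-time sketch of an iterated quaternion map, q ← q² + c, for a fixed constant c (the paper searches over rational maps and works with the potential field rather than a boolean escape test; grid resolution and bounds are illustrative).

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternion arrays shaped (..., 4) as (w, x, y, z)."""
    w1, x1, y1, z1 = np.moveaxis(a, -1, 0)
    w2, x2, y2, z2 = np.moveaxis(b, -1, 0)
    return np.stack([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2], axis=-1)

def julia_quaternion(c, res=64, bound=2.0, iters=20):
    """Escape-time sampling of the quaternion Julia set q <- q^2 + c on a 3D
    grid (the fourth quaternion component is fixed to zero to take a 3D slice)."""
    g = np.linspace(-1.5, 1.5, res)
    w, x, y = np.meshgrid(g, g, g, indexing='ij')
    q = np.stack([w, x, y, np.zeros_like(w)], axis=-1)
    inside = np.ones(q.shape[:-1], dtype=bool)
    for _ in range(iters):
        q = np.where(inside[..., None], quat_mul(q, q) + c, q)   # freeze escaped cells
        inside &= np.linalg.norm(q, axis=-1) < bound
    return inside                               # boolean occupancy volume

# usage sketch: vol = julia_quaternion(np.array([-0.2, 0.6, 0.2, 0.2]))
# a surface mesh can then be extracted with marching cubes
```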

17.
We propose a new approach to automatically semantize complex objects in a 3D scene. For this, we define an expressive formalism combining the power of both attribute grammars and constraints. It offers a practical conceptual interface, which is crucial for writing large, maintainable specifications. As recursion is inadequate to express large collections of items, we introduce maximal operators, which are essential for reducing the parsing search space. Given a grammar in this formalism and a 3D scene, we show how to automatically compute a shared parse forest of all interpretations — in practice only a few, thanks to relevant constraints. We evaluate this technique for building-model semantization using CAD model examples as well as photogrammetric and simulated LiDAR data.

18.
19.
20.
Skeletons are powerful geometric abstractions that provide useful representations for a number of geometric operations. The straight skeleton has a lower combinatorial complexity than the medial axis. Moreover, while the medial axis of a polyhedron is composed of quadric surfaces, the straight skeleton consists only of planar faces. Although there exist several methods to compute the straight skeleton of a polygon, the straight skeleton of polyhedra has received much less attention. We need to compute the skeleton of very large datasets of orthogonal polyhedra. Furthermore, we need to handle the geometric degeneracies that usually arise when dealing with orthogonal polyhedra. We present a new approach to robustly compute the straight skeleton of orthogonal polyhedra. We follow a geometric technique that works directly on the boundary of an orthogonal polyhedron. Our approach is output‐sensitive with respect to the number of vertices of the skeleton and resolves geometric degeneracies. Unlike existing straight skeleton algorithms, which shrink the object boundary to obtain the skeleton, our algorithm relies on the plane‐sweep paradigm. The resulting skeleton is composed only of axis‐aligned and 45°‐rotated planar faces and edges.
