Similar Articles
20 similar articles found.
1.
Feature learning for 3D shapes is challenging due to the lack of a natural parameterization for 3D surface models. We adopt the multi‐view depth image representation and propose the Multi‐View Deep Extreme Learning Machine (MVD‐ELM) to achieve fast, high‐quality projective feature learning for 3D shapes. In contrast to existing multi‐view learning approaches, our method ensures that the feature maps learned for different views are mutually dependent via shared weights, and that in each layer their unprojections together form a valid 3D reconstruction of the input shape, by using normalized convolution kernels. This leads to more accurate 3D feature learning, as shown by the encouraging results in several applications. Moreover, the 3D reconstruction property enables clear visualization of the learned features, which further demonstrates their meaningfulness.
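As a rough illustration of the shared‐weight multi‐view idea described above (not the authors' implementation; the depth views, kernel, and ELM‐specific training are hypothetical placeholders), a single normalized kernel applied to every depth view could look like this in Python:

```python
import numpy as np
from scipy.signal import convolve2d

def multiview_features(depth_views, kernel):
    """Apply one shared, normalized kernel to every depth view.

    depth_views : list of (H, W) depth images rendered from different viewpoints
    kernel      : (k, k) convolution kernel shared across all views
    """
    # Normalize the kernel so responses stay comparable across views,
    # loosely following the normalized-convolution idea in the abstract.
    kernel = kernel / (np.abs(kernel).sum() + 1e-8)
    # Because the weights are shared, the per-view feature maps are
    # mutually dependent rather than learned independently per view.
    return [convolve2d(view, kernel, mode='same') for view in depth_views]

# Hypothetical usage with random data standing in for rendered depth maps.
views = [np.random.rand(64, 64) for _ in range(6)]
feats = multiview_features(views, np.random.randn(5, 5))
```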

2.
We present a data‐driven method for synthesizing 3D indoor scenes by inserting objects progressively into an initial, possibly empty, scene. Instead of relying on a few hundred hand‐crafted 3D scenes, we take advantage of existing large‐scale annotated RGB‐D datasets, in particular the SUN RGB‐D database consisting of 10,000+ depth images of real scenes, to form the prior knowledge for our synthesis task. Our object insertion scheme follows a co‐occurrence model and an arrangement model, both learned from the SUN dataset. The former selects a highly probable combination of object categories along with the number of instances per category, while a plausible placement is defined by the latter model. Compared to previous works on probabilistic learning for object placement, we make two contributions. First, we learn various classes of higher‐order object‐object relations, including symmetry, distinct orientation, and proximity, from the database. These relations effectively enable considering objects in semantically formed groups rather than individually. Second, while our algorithm inserts objects one at a time, it attains holistic plausibility of the whole current scene while offering controllability through progressive synthesis. We conducted several user studies to compare our scene synthesis performance to results obtained by manual synthesis, state‐of‐the‐art object placement schemes, and variations of the parameter settings for the arrangement model.
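For intuition only, the co‐occurrence step can be pictured as sampling the next object category conditioned on what is already in the scene. The sketch below assumes a pre‐computed pairwise co‐occurrence matrix estimated from an annotated dataset such as SUN RGB‐D; the category names and matrix values are made up:

```python
import numpy as np

def sample_next_category(current_categories, cooccurrence, rng=None):
    """Pick a plausible category to insert next, given the current scene.

    current_categories : list of category indices already placed in the scene
    cooccurrence       : (C, C) matrix of co-occurrence counts between categories
    """
    rng = rng or np.random.default_rng()
    if not current_categories:
        # Empty scene: fall back to the marginal frequency of each category.
        scores = cooccurrence.sum(axis=1)
    else:
        # Score each candidate by how often it co-occurs with the scene so far.
        scores = cooccurrence[current_categories].sum(axis=0)
    probs = scores / scores.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical usage: 4 categories (bed, nightstand, lamp, desk).
C = np.array([[0, 9, 5, 1],
              [9, 0, 6, 1],
              [5, 6, 0, 2],
              [1, 1, 2, 0]], dtype=float)
scene = [0]                       # a bed is already in the scene
print(sample_next_category(scene, C))
```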

3.
Non‐rigid registration of 3D shapes is an essential task of increasing importance as commodity depth sensors become more widely available for scanning dynamic scenes. Non‐rigid registration is much more challenging than rigid registration, as it estimates a set of local transformations instead of a single global transformation and is hence prone to overfitting due to underdetermination. The common wisdom in previous methods is to impose an ℓ2‐norm regularization on the local transformation differences. However, ℓ2‐norm regularization tends to bias the solution towards outliers and noise, which have a heavy‐tailed distribution, as verified by the poor goodness‐of‐fit of the Gaussian distribution over transformation differences. On the contrary, the Laplacian distribution fits the transformation differences well, suggesting the use of a sparsity prior. We propose a sparse non‐rigid registration (SNR) method with an ℓ1‐norm regularized model for transformation estimation, which is effectively solved by an alternating direction method (ADM) under the augmented Lagrangian framework. We also devise a multi‐resolution scheme for robust and progressive registration. Results on both public datasets and our scanned datasets show the superiority of our method, particularly in handling large‐scale deformations as well as outliers and noise.
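The ℓ1 term is what distinguishes this model from the usual ℓ2 smoothness prior; in an alternating‐direction solver it typically appears as a soft‐thresholding (shrinkage) step on the transformation differences. A minimal sketch of that proximal step, with hypothetical variable names rather than the paper's full SNR solver:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm: shrink each entry toward zero by tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# In an ADM / augmented-Lagrangian iteration, an auxiliary variable Z that
# models differences of neighbouring local transformations would be updated as
#   Z = soft_threshold(D @ X + U, lam / rho)
# where D takes differences over the deformation graph, U is the scaled dual
# variable, lam the sparsity weight and rho the penalty parameter.
diffs = np.random.randn(100, 12)          # hypothetical transformation differences
Z = soft_threshold(diffs, 0.1)
```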

4.
Since indoor scenes change frequently in daily life, for example when furniture is rearranged, their 3D reconstructions should be flexible and easy to update. We present an automatic 3D scene update algorithm for indoor scenes that captures scene variation with RGBD cameras. We assume an initial scene has been reconstructed in advance, manually or in a semi‐automatic way, before the change, and we automatically update the reconstruction according to newly captured RGBD images of the changed real scene. The method starts with an automatic segmentation process that requires no manual interaction and benefits from accurate labeling learned from the initial 3D scene. After segmentation, objects captured by the RGBD camera are extracted to form a locally updated scene. We formulate an optimization problem that compares this local scene to the initial scene in order to locate moved objects. The moved objects are then integrated with the static objects in the initial scene to generate a new 3D scene. We demonstrate the efficiency and robustness of our approach by updating the 3D reconstructions of several real‐world scenes.

5.
In this paper, we introduce an interactive method suitable for retargeting both 3D objects and scenes. Initially, the input object or scene is decomposed into a collection of constituent components enclosed by corresponding control bounding volumes, which capture the internal structure of the object or the semantic grouping of objects in the 3D scene. The overall retargeting is accomplished through a constrained optimization that manipulates the control bounding volumes. Without inferring the intricate dependencies between the components, we define a minimal set of constraints that maintain the spatial arrangement and connectivity between the components, regularizing the valid retargeting results. The default retargeting behavior can then be easily altered by additional semantic constraints imposed by users. This strategy makes the proposed method highly flexible, allowing it to process a wide variety of 3D objects and scenes under a unified framework. In addition, the proposed method achieves more general structure‐preserving pattern synthesis at both the object and scene levels. We demonstrate the effectiveness of our method by applying it to several complicated 3D objects and scenes.

6.
7.
8.
We present a sparse optimization framework for extracting sparse shape priors from a collection of 3D models. Shape priors are defined as point‐set neighborhoods sampled from shape surfaces, which convey important information encompassing normals and local shape characteristics. A 3D shape model can be considered to be formed from a set of local shape priors, most of which are likely to have similar geometry. Our key observation is that the local priors extracted from a family of 3D shapes lie in a very low‐dimensional manifold. Consequently, a compact and informative subset of priors can be learned to efficiently encode all shapes of the same family. A comprehensive library of local shape priors is first built from the given collection of 3D models of the same family. We then formulate a global, sparse optimization problem that enforces selecting representative priors while minimizing the reconstruction error. To solve the optimization problem, we design an efficient solver based on the augmented Lagrange multiplier (ALM) method. Extensive experiments exhibit the power of our data‐driven sparse priors in elegantly solving several high‐level shape analysis applications and geometry processing tasks, such as shape retrieval, style analysis and symmetry detection.
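For intuition, the selection step can be approximated by a greedy stand‐in for the ALM‐based sparse optimization described above: repeatedly pick the library prior that explains the most remaining energy of the sampled patches. This is only a sketch under that simplification, with hypothetical array shapes, not the paper's solver:

```python
import numpy as np

def greedy_prior_selection(patches, library, k):
    """Greedy stand-in for sparse prior selection.

    patches : (N, d) local neighbourhoods sampled from the shape collection
    library : (M, d) candidate priors
    k       : number of representative priors to keep
    """
    selected = []
    residual = patches.copy()
    for _ in range(k):
        # Score each prior by the total correlation it has with what is left
        # unexplained, excluding priors that were already selected.
        scores = np.abs(library @ residual.T).sum(axis=1)
        scores[selected] = -np.inf
        best = int(np.argmax(scores))
        selected.append(best)
        # Remove the component along the chosen prior from every patch.
        p = library[best] / (np.linalg.norm(library[best]) + 1e-8)
        residual -= np.outer(residual @ p, p)
    return selected

priors = greedy_prior_selection(np.random.randn(500, 32),
                                np.random.randn(200, 32), k=10)
```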

9.
We consider the problem of stable region detection and segmentation of deformable shapes. We pursue this goal by determining a consensus segmentation from a heterogeneous ensemble of putative segmentations, which are generated by a clustering process on an intrinsic embedding of the shape. The intuition is that the consensus segmentation, which relies on aggregate statistics gathered from the segmentations in the ensemble, can reveal components of the shape that are more stable to deformations than the individual baseline segmentations. Compared to existing approaches, our solution exhibits higher robustness and repeatability throughout a wide spectrum of non‐rigid transformations. It is computationally efficient, naturally extensible to point clouds, and remains semantically stable even across different object classes. A quantitative evaluation on standard datasets confirms the potential of our method as a valid tool for deformable shape analysis.
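A common way to aggregate such an ensemble, consistent with the consensus idea above though not necessarily the authors' exact formulation, is a co‐association matrix: count how often two points share a segment across the ensemble and cluster that matrix. A minimal sketch with SciPy:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus_segmentation(ensemble, n_segments):
    """ensemble : (R, N) array; row r gives the segment labels of N points in run r."""
    R, N = ensemble.shape
    # Co-association: fraction of runs in which points i and j share a segment.
    coassoc = np.zeros((N, N))
    for labels in ensemble:
        coassoc += (labels[:, None] == labels[None, :])
    coassoc /= R
    # Turn agreement into a distance and cluster it hierarchically.
    dist = squareform(1.0 - coassoc, checks=False)
    Z = linkage(dist, method='average')
    return fcluster(Z, t=n_segments, criterion='maxclust')

# Hypothetical ensemble of 20 runs over 300 points, each with 5 segments.
labels = consensus_segmentation(np.random.randint(0, 5, size=(20, 300)), 5)
```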

10.
We propose a novel method for the automatic generation of structured hexahedral meshes of articulated 3D shapes. We recast the complex problem of generating the connectivity of a hexahedral mesh of a general shape into the simpler problem of generating the connectivity of a tubular structure derived from its curve‐skeleton. We also provide volumetric subdivision schemes to nicely adapt the topology of the mesh to the local thickness of the tubes, while regularizing per‐element size. Our method is fast, one‐click and easy to reproduce, and it generates structured meshes that align better with the branching structure of the input shape compared to previous methods for hexahedral mesh generation.

11.
Understanding how an animal can deform and articulate is essential for a realistic modification of its 3D model. In this paper, we show that such information can be learned from user‐clicked 2D images and a template 3D model of the target animal. We present a volumetric deformation framework that produces a set of new 3D models by deforming a template 3D model according to a set of user‐clicked images. Our framework is based on a novel locally‐bounded deformation energy, where every local region has its own stiffness value that bounds how much distortion is allowed at that location. We jointly learn the local stiffness bounds as we deform the template 3D mesh to match each user‐clicked image. We show that this seemingly complex task can be solved as a sequence of convex optimization problems. We demonstrate the effectiveness of our approach on cats and horses, which are highly deformable and articulated animals. Our framework produces new 3D models of animals that are significantly more plausible than those produced by methods without learned stiffness.

12.
Deep neural networks provide a promising tool for incorporating semantic information in geometry processing applications. Unlike image and video processing, however, geometry processing requires handling unstructured geometric data, and thus data representation becomes an important challenge in this framework. Existing approaches tackle this challenge by converting point clouds, meshes, or polygon soups into regular representations using, e.g., multi‐view images, volumetric grids or planar parameterizations. In each of these cases, geometric data representation is treated as a fixed pre‐process that is largely disconnected from the machine learning tool. In contrast, we propose to optimize for the geometric representation during the network learning process using a novel metric alignment layer. Our approach maps unstructured geometric data to a regular domain by minimizing the metric distortion of the map using the regularized Gromov–Wasserstein objective. This objective is parameterized by the metric of the target domain and is differentiable; thus, it can be easily incorporated into a deep network framework. Furthermore, the objective aims to align the metrics of the input and output domains, promoting consistent output for similar shapes. We show the effectiveness of our layer within a deep network trained for shape classification, demonstrating state‐of‐the‐art performance for nonrigid shapes.
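Entropy‐regularized transport objectives of this kind are typically handled with Sinkhorn‐style scaling iterations. As a hedged simplification, the sketch below shows only that inner entropic‐OT step for a given cost matrix; it omits the Gromov–Wasserstein tensor cost and the differentiable‐layer machinery of the paper:

```python
import numpy as np

def sinkhorn(C, p, q, eps=0.05, iters=200):
    """Entropy-regularized optimal transport between histograms p and q
    with cost matrix C; returns the transport plan."""
    K = np.exp(-C / eps)
    u = np.ones_like(p)
    for _ in range(iters):
        v = q / (K.T @ u)    # scale columns to match the target marginal q
        u = p / (K @ v)      # scale rows to match the source marginal p
    return u[:, None] * K * v[None, :]

# Hypothetical usage: transport between two 1D point sets with uniform weights.
n, m = 30, 40
C = np.abs(np.subtract.outer(np.linspace(0, 1, n), np.linspace(0, 1, m)))
T = sinkhorn(C, np.full(n, 1.0 / n), np.full(m, 1.0 / m))
```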

13.
We study the difficult problem of deciding if parts of a freeform surface can be generated, or approximately generated, by the motion of a planar profile through space. While this task is basic for understanding the geometry of shapes as well as highly relevant for manufacturing and building construction, previous approaches were confined to special cases like kinematic surfaces or “moulding” surfaces. The general case remained unsolved so far. We approach this problem by a combination of local and global methods: curve analysis with regard to “movability”, curve comparison by common substring search in curvature plots, an exhaustive search through all planar cuts enhanced by quick rejection procedures, the ordering of candidate profiles and finally, global optimization. The main applications of our method are digital reconstruction of CAD models exhibiting sweep patches, and aiding in manufacturing freeform surfaces by pointing out those parts which can be approximated by sweeps.
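The curve‐comparison step looks for long common pieces of two curvature plots. Assuming the curvature has been sampled and quantized into integer bins (a simplification of the paper's matching), a classic longest‐common‐substring dynamic program is one way to prototype it:

```python
def longest_common_run(a, b):
    """Length and end positions of the longest common contiguous run
    of two quantized curvature sequences a and b."""
    best, end_a, end_b = 0, -1, -1
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1          # extend the matching run
                if cur[j] > best:
                    best, end_a, end_b = cur[j], i, j
        prev = cur
    return best, end_a, end_b

# Hypothetical quantized curvature signatures of two planar cuts.
print(longest_common_run([1, 2, 2, 3, 4, 4], [0, 2, 2, 3, 4, 1]))  # (4, 5, 5)
```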

14.
This paper presents a method that can convert a given 3D mesh into a flat‐foldable model consisting of rigid panels. A previous work proposed a method to assist the manual design of a single component of such a flat‐foldable model, consisting of vertically‐connected side panels as well as horizontal top and bottom panels. Our method semi‐automatically generates a more complicated model that approximates the input mesh with multiple convex components. The user specifies the folding direction of each convex component and the fidelity of the shape approximation. Given these user inputs, our method optimizes the shapes and positions of the panels of each convex component in order to make the whole model flat‐foldable. The user can check a folding animation of the output model. We demonstrate the effectiveness of our method by fabricating physical paper prototypes of flat‐foldable models.

15.
We introduce a generative model of part‐segmented 3D objects: the shape variational auto‐encoder (ShapeVAE). The ShapeVAE describes a joint distribution over the existence of object parts, the locations of a dense set of surface points, and the surface normals associated with these points. Our model makes use of a deep encoder‐decoder architecture that leverages the part‐decomposability of 3D objects to embed high‐dimensional shape representations and sample novel instances. Given an input collection of part‐segmented objects with dense point correspondences, the ShapeVAE is capable of synthesizing novel, realistic shapes, and by performing conditional inference it enables imputation of missing parts or surface normals. In addition, by generating both points and surface normals, our model allows for the use of powerful surface‐reconstruction methods for mesh synthesis. We provide a quantitative evaluation of the ShapeVAE on shape‐completion and test‐set log‐likelihood tasks and demonstrate that the model performs favourably against strong baselines. We demonstrate qualitatively that the ShapeVAE produces plausible shape samples and that it captures a semantically meaningful shape embedding. In addition, we show that the ShapeVAE facilitates mesh reconstruction by sampling consistent surface normals.
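For orientation only, the encoder‐decoder core of a VAE over flattened point‐plus‐normal vectors looks roughly like the PyTorch sketch below; the layer sizes are hypothetical, and the part‐existence variables and conditional imputation of the actual ShapeVAE are omitted:

```python
import torch
import torch.nn as nn

class TinyShapeVAE(nn.Module):
    def __init__(self, dim_in=6 * 1024, dim_z=64):   # 1024 points x (xyz + normal)
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_in, 512), nn.ReLU())
        self.mu, self.logvar = nn.Linear(512, dim_z), nn.Linear(512, dim_z)
        self.dec = nn.Sequential(nn.Linear(dim_z, 512), nn.ReLU(),
                                 nn.Linear(512, dim_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z from the approximate posterior.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.dec(z)
        # Standard VAE loss: reconstruction error plus KL divergence to the prior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
        loss = ((recon - x).pow(2).sum(dim=1) + kl).mean()
        return recon, loss

model = TinyShapeVAE()
recon, loss = model(torch.randn(8, 6 * 1024))   # batch of 8 flattened shapes
```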

16.
Various applications of global surface parametrization benefit from the alignment of parametrization isolines with principal curvature directions. This is particularly true for recent parametrization‐based meshing approaches, where this directly translates into a shape‐aware edge flow, better approximation quality, and reduced meshing artifacts. Existing methods to influence a parametrization based on principal curvature directions suffer from scale‐dependence, which implies the necessity of parameter variation, or try to capture complex directional shape features using simple 1D curves. Especially for non‐sharp features, such as chamfers, fillets, blends, and even more for organic variants thereof, these abstractions can be unfit. We present a novel approach which respects and exploits the 2D nature of such directional feature regions, detects them based on coherence and homogeneity properties, and controls the parametrization process accordingly. This approach enables us to provide an intuitive, scale‐invariant control parameter to the user. It also allows us to consider non‐local aspects like the topology of a feature, enabling further improvements. We demonstrate that, compared to previous approaches, global parametrizations of higher quality can be generated without user intervention.

17.
Direction fields and vector fields play an increasingly important role in computer graphics and geometry processing. The synthesis of directional fields on surfaces, or other spatial domains, is a fundamental step in numerous applications, such as mesh generation, deformation, texture mapping, and many more. The wide range of applications has resulted in definitions for many types of directional fields: from vector and tensor fields, through line and cross fields, to frame and vector‐set fields. Depending on the application at hand, researchers have used various notions of objectives and constraints to synthesize such fields. These notions are defined in terms of fairness, feature alignment, symmetry, or field topology, to mention just a few. To facilitate these objectives, various representations, discretizations, and optimization strategies have been developed. These choices come with varying strengths and weaknesses. This report provides a systematic overview of directional field synthesis for graphics applications, the challenges it poses, and the methods developed in recent years to address these challenges.

18.
We propose a novel method to synthesize geometric models from a given class of context‐aware structured shapes such as buildings and other man‐made objects. The central idea is to leverage powerful machine learning methods from the area of natural language processing for this task. To this end, we propose a technique that maps shapes to strings and vice versa, through an intermediate shape graph representation. We then convert procedurally generated shape repositories into text databases that, in turn, can be used to train a variational autoencoder. The autoencoder enables higher‐level shape manipulation and synthesis, such as interpolation and sampling via its continuous latent space. We provide project code and pre‐trained models.
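The shape‐to‐string mapping can be prototyped as a deterministic traversal of the intermediate shape graph. The sketch below uses a hypothetical bracketed serialization, not the paper's actual grammar, so that the resulting strings can be fed to standard sequence models:

```python
def graph_to_string(graph, node, visited=None):
    """Serialize a shape graph (adjacency dict: node id -> list of node ids)
    into a bracketed string via depth-first traversal."""
    visited = visited if visited is not None else set()
    visited.add(node)
    children = [c for c in graph.get(node, []) if c not in visited]
    if not children:
        return node
    return node + '(' + ' '.join(graph_to_string(graph, c, visited)
                                 for c in children) + ')'

# Hypothetical building graph: a facade with two floors carrying openings.
building = {'facade': ['floor1', 'floor2'],
            'floor1': ['window1', 'door'],
            'floor2': ['window2']}
print(graph_to_string(building, 'facade'))
# facade(floor1(window1 door) floor2(window2))
```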

19.
20.
Splines are part of the standard toolbox for the approximation of functions and curves in ℝ^d. Still, the problem of finding the spline that best approximates an input function or curve is ill‐posed, since in general this yields a “spline” with an infinite number of segments. The problem can be regularized by adding a penalty term for the number of spline segments. We show how this idea can be formulated as an ℓ0‐regularized quadratic problem. This gives us a notion of optimal approximating splines that depend on one parameter, which weights the approximation error against the number of segments. We detail this concept for different types of splines, including B‐splines and composite Bézier curves. Based on the latest developments in the field of sparse approximation, we devise a solver for the resulting minimization problems and show applications to spline approximation of planar and space curves and to spline conversion of motion capture data.
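The trade‐off between approximation error and the number of segments can be explored with a simple dynamic program over breakpoints that fits one polynomial per segment. This is a stand‐in to illustrate the ℓ0‐style penalty, not the paper's sparse‐approximation solver for B‐splines or Bézier curves:

```python
import numpy as np

def penalized_segmentation(x, y, lam, degree=3, min_len=5):
    """Choose breakpoints minimizing  sum of per-segment fit errors + lam * (#segments)."""
    n = len(x)

    def seg_cost(i, j):
        # Squared error of a single polynomial fit over samples i..j-1.
        if j - i < min_len:
            return np.inf
        coeff = np.polyfit(x[i:j], y[i:j], degree)
        return float(np.sum((np.polyval(coeff, x[i:j]) - y[i:j]) ** 2))

    best = np.full(n + 1, np.inf)          # best[j]: optimal cost for samples 0..j-1
    best[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + seg_cost(i, j) + lam   # lam penalizes each extra segment
            if c < best[j]:
                best[j], back[j] = c, i
    # Recover breakpoints by walking back through the table.
    cuts, j = [], n
    while j > 0:
        cuts.append(back[j])
        j = back[j]
    return sorted(cuts)

t = np.linspace(0, 1, 120)
print(penalized_segmentation(t, np.sin(6 * t) + 0.01 * np.random.randn(120), lam=0.5))
```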
