1.
We consider the problem of non-rigid shape matching using the functional map framework. Specifically, we analyze a commonly used approach for regularizing functional maps, which consists in penalizing the failure of the unknown map to commute with the Laplace-Beltrami operators on the source and target shapes. We show that this approach has certain undesirable fundamental theoretical limitations and can be undefined even for trivial maps in the smooth setting. Instead, we propose a novel, theoretically well-justified approach for regularizing functional maps by using the notion of the resolvent of the Laplacian operator. In addition, we provide a natural one-parameter family of regularizers that can be easily tuned depending on the expected approximate isometry of the input shape pair. We show on a wide range of shape correspondence scenarios that our novel regularization leads to an improvement in the quality of the estimated functional, and ultimately pointwise, correspondences, both before and after commonly used refinement techniques.
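For context, a minimal LaTeX sketch of the Laplacian-commutativity regularizer that the abstract discusses, in standard functional-map notation (C for the map, Λ₁ and Λ₂ for diagonal matrices of Laplace-Beltrami eigenvalues); the resolvent-based variant is only indicated schematically, not in the paper's exact form, and the symbol choices are ours.

```latex
% Standard Laplacian-commutativity regularizer for a functional map C,
% where \Lambda_1, \Lambda_2 are diagonal matrices of Laplace-Beltrami eigenvalues:
E_{\mathrm{comm}}(C) = \left\| C\,\Lambda_1 - \Lambda_2\,C \right\|_F^2 .
% Resolvent-style variant (schematic only): replace the unbounded eigenvalue
% matrices by a bounded function g of the Laplacian, e.g. a resolvent,
E_{\mathrm{res}}(C) = \left\| C\,g(\Lambda_1) - g(\Lambda_2)\,C \right\|_F^2 ,
\qquad g(\Lambda) = (\Lambda - \gamma I)^{-1},
% with the exact choice of g and of the shift \gamma as defined in the paper.
```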
2.
3.
We propose a new scalable version of the functional map pipeline that allows us to efficiently compute correspondences between potentially very dense meshes. Unlike existing approaches that process dense meshes by relying on ad-hoc mesh simplification, we establish an integrated end-to-end pipeline with theoretical approximation analysis. In particular, our method overcomes the computational burden of computing the basis as well as the functional and pointwise correspondences by approximating the functional spaces and the functional map itself. Errors in the approximations are controlled by theoretical upper bounds assessing the range of applicability of our pipeline. With this construction in hand, we propose a scalable practical algorithm and demonstrate results on dense meshes, which approximate those obtained by standard functional map algorithms at a fraction of the computation time. Moreover, our approach outperforms the standard acceleration procedures by a large margin, leading to accurate results even in challenging cases.
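As background for the pipeline being accelerated, a minimal Python sketch of the standard (non-scalable) functional-map least-squares step in truncated Laplacian eigenbases; the array names, the descriptor-preservation-only objective, and the ridge regularizer are illustrative assumptions, not the paper's scalable algorithm.

```python
import numpy as np

def functional_map_least_squares(phi_src, phi_tgt, area_src, area_tgt,
                                 desc_src, desc_tgt, reg=1e-3):
    """Baseline functional map estimation in truncated eigenbases.

    phi_*  : (n_vertices, k) Laplace-Beltrami eigenbases
    area_* : (n_vertices,) lumped vertex areas (mass-matrix diagonal)
    desc_* : (n_vertices, d) corresponding descriptor functions
    Returns the (k, k) map C minimizing ||C A - B||^2 + reg ||C||^2.
    """
    # Project descriptors into the spectral bases (area-weighted projection).
    A = phi_src.T @ (area_src[:, None] * desc_src)   # (k, d)
    B = phi_tgt.T @ (area_tgt[:, None] * desc_tgt)   # (k, d)
    k = phi_src.shape[1]
    # Ridge-regularized least squares: C (A A^T + reg I) = B A^T.
    C = np.linalg.solve(A @ A.T + reg * np.eye(k), A @ B.T).T
    return C
```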
4.
A mandatory component for many point set algorithms is the availability of consistently oriented vertex normals (e.g. for surface reconstruction, feature detection, visualization). Previous orientation methods on meshes or raw point clouds do not consider a global context, are often based on unrealistic assumptions, or have extremely long computation times, making them unusable on real-world data. We present a novel massively parallelized method to compute globally consistent oriented point normals for raw and unsorted point clouds. Building on the idea of graph-based energy optimization, we create a complete kNN graph over the entire point cloud. A new weighted similarity criterion encodes the graph energy. To orient normals in a globally consistent way we perform a highly parallel greedy edge collapse, which merges similar parts of the graph and orients them consistently. We compare our method to current state-of-the-art approaches and achieve speedups of up to two orders of magnitude. The achieved quality of normal orientation is on par with or better than existing solutions, especially for real-world noisy 3D scanned data.
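To make the graph-based idea concrete, here is a simplified sequential sketch of orienting normals by propagating sign flips over a kNN graph, processing the most similar edges first; this is an illustrative baseline assuming a connected graph and a plain |nᵢ·nⱼ| similarity, not the paper's massively parallel greedy edge collapse.

```python
import heapq
import numpy as np
from scipy.spatial import cKDTree

def orient_normals(points, normals, k=10):
    """Flip normal signs so neighbouring normals agree, by greedy
    propagation over a kNN graph (largest |n_i . n_j| edges first).
    Assumes the kNN graph is connected."""
    n = len(points)
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k + 1)        # first neighbour is the point itself
    normals = normals.copy()
    visited = np.zeros(n, dtype=bool)
    visited[0] = True                            # arbitrary seed keeps its orientation
    heap = [(-abs(normals[0] @ normals[j]), 0, j) for j in nbrs[0][1:]]
    heapq.heapify(heap)
    while heap:
        _, i, j = heapq.heappop(heap)
        if visited[j]:
            continue
        # Flip j so it agrees with its already-oriented neighbour i.
        if normals[i] @ normals[j] < 0:
            normals[j] = -normals[j]
        visited[j] = True
        for m in nbrs[j][1:]:
            if not visited[m]:
                heapq.heappush(heap, (-abs(normals[j] @ normals[m]), j, m))
    return normals
```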
5.
The fast development of novel approaches derived from the Transformer architecture has led to outstanding performance in different scenarios, from Natural Language Processing to Computer Vision. Recently, they have achieved impressive results even in the challenging task of non-rigid shape matching. However, little is known about the capability of the Transformer-encoder architecture for the shape matching task, and its performance remains largely unexplored. In this paper, we step back and investigate the contribution made by the Transformer-encoder architecture compared to its more recent alternatives, focusing on why and how it works on this specific task. Thanks to the versatility of our implementation, we can harness the bi-directional structure of the correspondence problem, making it more interpretable. Furthermore, we prove that positional encodings are essential for processing unordered point clouds. Through a comprehensive set of experiments, we find that attention and positional encoding are (almost) all you need for shape matching. The simple Transformer-encoder architecture, coupled with relative position encoding in the attention mechanism, is able to obtain strong improvements, reaching the current state of the art.
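A minimal numpy sketch of single-head scaled dot-product attention with an additive relative-position bias computed from pairwise point distances, to illustrate how positional information can enter the attention scores for an unordered point cloud; the Gaussian distance bias, parameter names, and single-head setup are our own simplifying assumptions, not the architecture evaluated in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_relative_positions(feats, points, Wq, Wk, Wv, sigma=0.2):
    """Single-head attention over an unordered point cloud.

    feats  : (n, d) per-point features
    points : (n, 3) point coordinates, used only for the relative bias
    Wq/Wk/Wv : (d, d) projection matrices
    The additive bias depends on pairwise distances, so the layer uses
    geometric position even though the input set has no ordering.
    """
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    d = q.shape[1]
    scores = q @ k.T / np.sqrt(d)                                # content term
    dists = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    scores = scores + np.exp(-(dists ** 2) / (2 * sigma ** 2))   # relative-position bias
    return softmax(scores, axis=-1) @ v
```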
6.
Sascha Brandt, Claudius Jähn, Matthias Fischer, Friedhelm Meyer auf der Heide. Computer Graphics Forum, 2019, 38(7): 413-424
In this paper, we present the first algorithm for progressive sampling of 3D surfaces with blue noise characteristics that runs entirely on the GPU. The performance of our algorithm is comparable to state-of-the-art GPU Poisson-disk sampling methods, while additionally producing ordered sequences of samples where every prefix exhibits good blue noise properties. The basic idea is to reduce the 3D sampling domain to a set of 2.5D images which we sample in parallel, utilizing the rasterization hardware of current GPUs. This allows for simple visibility-aware sampling that only captures the surface as seen from outside the sampled object, which is especially useful for point-based level-of-detail rendering methods. However, our method can be easily extended for sampling the entire surface without changing the basic algorithm. We provide a statistical analysis of our algorithm, show that it produces good blue noise characteristics for every prefix of the resulting sample sequence, and compare the performance of our method to related state-of-the-art sampling methods.
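A tiny 2D CPU illustration of the "good prefix" property mentioned above: if Poisson-disk (dart-throwing) samples are recorded in acceptance order, every prefix of the sequence also respects the minimum-distance criterion. This is a toy in the unit square for intuition only, not the paper's GPU method on 2.5D surface images.

```python
import numpy as np

def progressive_poisson_disk(n_target, radius, max_tries=100000, seed=0):
    """Dart throwing in the unit square; returns samples in acceptance order.
    Any prefix of the returned array is itself a valid Poisson-disk set."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(max_tries):
        p = rng.random(2)
        if all(np.linalg.norm(p - q) >= radius for q in samples):
            samples.append(p)
            if len(samples) == n_target:
                break
    return np.array(samples)
```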
7.
We propose an efficient method for topology-preserving simplification of medial axes of 3D models. Existing methods either cannot preserve the topology during medial axis simplification or are geometrically inaccurate or computationally expensive. To tackle these issues, we restrict our topology checking to the areas around the topological holes to avoid unnecessary checks in other areas. Our algorithm can keep high precision even when the medial axis is simplified down to very few vertices. Furthermore, we parallelize the medial axis simplification procedure to enhance the performance significantly. Experimental results show that our method preserves the topology with highly efficient performance, and is much superior to existing methods in terms of topology preservation, accuracy and performance.
8.
We introduce an interactive tool for novice users to design mechanical objects made of 2.5D linkages. Users simply draw the shape of the object and a few key poses of its multiple moving parts. Our approach automatically generates a one-degree-of-freedom linkage that connects the fixed and moving parts, such that the moving parts traverse all input poses in order without any collision with the fixed and other moving parts. In addition, our approach avoids common linkage defects and favors compact linkages and smooth motion trajectories. Finally, our system automatically generates the 3D geometry of the object and its links, allowing the rapid creation of a physical mockup of the designed object.
9.
We propose a novel approach for computing correspondences between subdivision surfaces with different control polygons. Our main observation is that the multi-resolution spectral basis functions that are often used for computing a functional correspondence can be compactly represented on subdivision surfaces, and therefore can be efficiently computed. Furthermore, the reconstruction of a pointwise map from a functional correspondence also greatly benefits from the subdivision structure. Leveraging these observations, we suggest a hierarchical pipeline for functional map inference, allowing us to compute correspondences between surfaces at fine subdivision levels, with hundreds of thousands of polygons, an order of magnitude faster than existing correspondence methods. We demonstrate the applicability of our results by transferring high-resolution sculpting displacement maps and textures between subdivision models.
10.
We propose a novel framework to generate a global texture atlas for a deforming geometry. Our approach differs from prior art in two aspects. First, instead of generating a texture map for each timestamp to color a dynamic scene, our framework reconstructs a global texture atlas that can be consistently mapped to a deforming object. Second, our approach is based on a single RGB-D camera, without the need for a multi-camera setup surrounding a scene. In our framework, the input is a 3D template model with an RGB-D image sequence, and geometric warping fields are found using a state-of-the-art non-rigid registration method [GXW*15] to align the template mesh to noisy and incomplete input depth images. With these warping fields, our multi-scale approach for texture coordinate optimization generates a sharp and clear texture atlas that is consistent with multiple color observations over time. Our approach is accelerated by graphics hardware and provides a convenient configuration for capturing a dynamic geometry along with a clean texture atlas. We demonstrate our approach with practical scenarios, particularly human performance capture. We also show that our approach is resilient to misalignment issues caused by imperfect estimation of warping fields and inaccurate camera parameters.
11.
Feature curves on 3D shapes provide important hints about significant parts of the geometry and reveal their underlying structure. However, when we process real-world data, automatically detected feature curves are affected by measurement uncertainty, missing data, and sampling resolution, leading to noisy, fragmented, and incomplete feature curve networks. These artifacts make further processing unreliable. In this paper we analyze the global co-occurrence information in noisy feature curve networks to fill in missing data and suppress weakly supported feature curves. For this we propose an unsupervised approach to find meaningful structure within the incomplete data by detecting multiple occurrences of feature curve configurations (co-occurrence analysis). We cluster and merge these into feature curve templates, which we leverage to identify strongly supported feature curve segments as well as to complete missing data in the feature curve network. In the presence of significant noise, previous approaches had to resort to user input, while our method performs fully automatic feature curve co-completion. Finding feature reoccurrences, however, is challenging, since naïve feature curve comparison fails in this setting due to fragmentation and partial overlaps of curve segments. To tackle this problem we propose a robust method for partial curve matching. This provides us with the means to apply symmetry detection methods to identify co-occurring configurations. Finally, Bayesian model selection enables us to detect and group reoccurrences that describe the data well and with low redundancy.
12.
Pierre Ecormier-Nocca, Pooran Memari, James Gain, Marie-Paule Cani. Computer Graphics Forum, 2019, 38(2): 157-168
While the analysis and synthesis of 2D point distributions have been applied both to the generation of textures with discrete elements and to populating virtual worlds with 3D objects, the results are often inaccurate since the spatial extent of objects cannot be expressed. We introduce three improvements enabling the synthesis of more general distributions of elements. First, we extend continuous pair correlation function (PCF) algorithms to multi-class distributions using a dependency graph, thereby capturing interrelationships between distinct categories of objects. Second, we introduce a new normalised metric for disks, which makes the method applicable to both point and possibly overlapping disk distributions. The metric is specifically designed to distinguish perceptually salient features, such as disjoint, tangent, overlapping, or nested disks. Finally, we pay particular attention to convergence of the mean PCF as well as the validity of individual PCFs, by taking into consideration the variance of the input. Our results demonstrate that this framework can capture and reproduce real-life distributions of elements representing a variety of complex semi-structured patterns, from the interaction between trees and the understorey in a forest to droplets of water. More generally, it applies to any category of 2D object whose shape is better represented by bounding circles than by points.
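For readers unfamiliar with pair correlation functions, a minimal kernel-density PCF estimator for a 2D point set in the unit square (ignoring boundary correction); the paper's normalised disk metric and multi-class dependency graph are not reproduced here, and the bandwidth is an illustrative choice.

```python
import numpy as np

def pair_correlation(points, radii, bandwidth=0.01):
    """Estimate the PCF of 2D points in the unit square with a Gaussian kernel.

    points : (n, 2) sample positions
    radii  : (m,) distances r at which to evaluate the PCF
    Returns g(r); g is roughly 1 for an uncorrelated (Poisson) distribution.
    """
    n = len(points)
    density = n / 1.0                                   # unit-square area
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    d = d[~np.eye(n, dtype=bool)]                       # drop self-distances
    g = np.empty(len(radii))
    for i, r in enumerate(radii):
        # Gaussian kernel that integrates to 1, centered at radius r.
        kernel = np.exp(-((d - r) ** 2) / bandwidth ** 2) / (np.sqrt(np.pi) * bandwidth)
        # Normalise by the expected pair count of an uncorrelated process at r.
        g[i] = kernel.sum() / (n * 2 * np.pi * r * density)
    return g
```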
13.
Estimating the correspondence between images using optical flow is the key component of image fusion; however, computing optical flow between a pair of facial images including backgrounds is challenging due to large differences in illumination, texture, color and background between the images. To improve optical flow results for image fusion, we propose a novel flow estimation method, wavelet flow, which can handle both the face and the background in the input images. The key idea is that instead of computing flow directly between the input image pair, we estimate the image flow by incorporating multi-scale image transfer and optical flow guided wavelet fusion. Multi-scale image transfer helps to preserve the background and lighting detail of the input, while optical flow guided wavelet fusion produces a series of intermediate images for further optimizing fusion quality. Our approach can significantly improve the performance of the optical flow algorithm and provide more natural fusion results for both faces and backgrounds in the images. We evaluate our method on a variety of datasets and show that it substantially outperforms prior approaches.
14.
We present a novel method to compute bijective PolyCube-maps with low isometric distortion. Given a surface and its pre-axis-aligned shape that is not an exact PolyCube shape, the algorithm contains two steps: (i) construct a PolyCube shape to approximate the pre-axis-aligned shape; and (ii) generate a bijective, low isometric distortion mapping between the constructed PolyCube shape and the input surface. The PolyCube construction is formulated as a constrained optimization problem, where the objective is the number of corners in the constructed PolyCube, and the constraint is to bound the approximation error between the constructed PolyCube and the input pre-axis-aligned shape while ensuring topological validity. A novel erasing-and-filling solver is proposed to solve this challenging problem. Central to the algorithm for computing bijective PolyCube-maps is a quad mesh optimization process that projects the constructed PolyCube onto the input surface with high-quality quads. We demonstrate the efficacy of our algorithm on a data set containing 300 closed meshes. Compared to state-of-the-art methods, our method achieves higher practical robustness and lower mapping distortion.
15.
In this paper, we present a practically robust method for computing foldover-free volumetric mappings with hard linear constraints. Central to this approach is a projection algorithm that monotonically and efficiently decreases the distance from the mapping to the bounded conformal distortion mapping space. After projection, the conformal distortion of the updated mapping tends to be below the given bound, thereby significantly reducing foldovers. Since it is non-trivial to define an optimal bound, we introduce a practical conformal distortion bound generation scheme to facilitate subsequent projections. By iteratively generating conformal distortion bounds and trying to project mappings into bounded conformal distortion spaces monotonically, our algorithm achieves high-quality foldover-free volumetric mappings with strong practical robustness and high efficiency. Compared with existing methods, our method computes mesh-based and meshless volumetric mappings with no prescribed conformal distortion bounds. We demonstrate the efficacy and efficiency of our method through a variety of geometric processing tasks.
16.
Virtual cities are in demand for computer games, movies, and urban planning, but creating the numerous 3D building models they require takes a great deal of time. Procedural modeling has become popular in recent years to overcome this issue, but creating a grammar that produces a desired output is difficult and time consuming even for expert users. In this paper, we present an interactive tool that allows users to automatically generate such a grammar from a single image of a building. The user selects a photograph and highlights the silhouette of the target building as input to our method. Our pipeline automatically generates the building components, from large-scale building mass to fine-scale window and door geometry. Each stage of our pipeline combines convolutional neural networks (CNNs) and optimization to select and parameterize procedural grammars that reproduce the building elements of the picture. In the first stage, our method jointly estimates camera parameters and building mass shape. Once known, the building mass enables the rectification of the façades, which are given as input to the second stage that recovers the façade layout. This layout allows us to extract individual windows and doors that are subsequently fed to the last stage of the pipeline, which selects procedural grammars for windows and doors. Finally, the grammars are combined to generate a complete procedural building as output. We devise a common methodology to make each stage of this pipeline tractable. This methodology consists in simplifying the input image to match the visual appearance of synthetic training data, and in using optimization to refine the parameters estimated by CNNs. We used our method to generate a variety of procedural models of buildings from existing photographs.
17.
A practical way to generate a high dynamic range (HDR) video using off-the-shelf cameras is to capture a sequence with alternating exposures and reconstruct the missing content at each frame. Unfortunately, existing approaches are typically slow and are not able to handle challenging cases. In this paper, we propose a learning-based approach to address this difficult problem. To do this, we use two sequential convolutional neural networks (CNNs) to model the entire HDR video reconstruction process. In the first step, we align the neighboring frames to the current frame by estimating the flows between them using a network specifically designed for this application. We then combine the aligned and current images using another CNN to produce the final HDR frame. We perform end-to-end training by minimizing the error between the reconstructed and ground truth HDR images on a set of training scenes. We produce our training data synthetically from existing HDR video datasets and simulate the imperfections of standard digital cameras using a simple approach. Experimental results demonstrate that our approach produces high-quality HDR videos and is an order of magnitude faster than state-of-the-art techniques for sequences with two and three alternating exposures.
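For context on what the final merge step must produce, a classical weighted merge of already-aligned LDR frames into a linear HDR radiance image (hat weighting, known exposure times, linear response assumed); this is a conventional baseline for illustration only, not the CNN-based merge described in the paper.

```python
import numpy as np

def merge_ldr_to_hdr(frames, exposure_times, eps=1e-6):
    """Merge aligned LDR frames (values in [0, 1]) into an HDR radiance image.

    frames         : list of (h, w, 3) arrays, already aligned to the reference
    exposure_times : list of exposure times in seconds
    Uses a simple hat weight that down-weights under/over-exposed pixels and
    assumes a linear camera response for simplicity.
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)       # hat weight, peaks at mid-gray
        num += w * img / t                       # linear response: radiance ~ value / t
        den += w
    return num / (den + eps)
```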
18.
Gang Fu, Qing Zhang, Chengfang Song, Qifeng Lin, Chunxia Xiao. Computer Graphics Forum, 2019, 38(7): 253-263
Removing specular highlights in an image is a fundamental research problem in computer vision and computer graphics. While various methods have been proposed, they typically do not work well for real-world images due to the presence of rich textures, complex materials, hard shadows, occlusions, colored illumination, etc. In this paper, we present a novel specular highlight removal method for real-world images. Our approach is based on two observations about real-world images: (i) the specular highlight is often small in size and sparse in distribution; (ii) the remaining diffuse image can be represented by a linear combination of a small number of basis colors with sparse encoding coefficients. Based on these two observations, we design an optimization framework for simultaneously estimating the diffuse and specular highlight images from a single image. Specifically, we recover the diffuse components of regions with specular highlights by encouraging sparsity of the encoding coefficients using the L0 norm. Moreover, the encoding coefficients and the specular highlight are also subject to non-negativity constraints, according to the additive color mixing theory and the definition of illumination, respectively. Extensive experiments have been performed on a variety of images to validate the effectiveness of the proposed method and its superiority over previous methods.
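A hedged LaTeX sketch of an optimization objective consistent with the two observations above (diffuse layer as a non-negative sparse combination of a few basis colors; small, sparse, non-negative specular layer); the second sparsity term, the weights, and the solver are illustrative assumptions rather than the paper's exact formulation.

```latex
% I: input image, W: basis (diffuse) colors, H: encoding coefficients,
% S: specular highlight layer; \lambda_1, \lambda_2: illustrative weights.
\min_{W,\; H \ge 0,\; S \ge 0} \;
  \left\| I - W H - S \right\|_F^2
  \;+\; \lambda_1 \left\| H \right\|_0
  \;+\; \lambda_2 \left\| S \right\|_0
```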
19.
Lintao Zheng, Chenyang Zhu, Jiazhao Zhang, Hang Zhao, Hui Huang, Matthias Niessner, Kai Xu. Computer Graphics Forum, 2019, 38(7): 103-114
We propose a novel approach to robot-operated active understanding of unknown indoor scenes, based on online RGBD reconstruction with semantic segmentation. In our method, the exploratory robot scanning is both driven by and targeted at the recognition and segmentation of semantic objects in the scene. Our algorithm is built on top of a volumetric depth fusion framework and performs real-time voxel-based semantic labeling over the online reconstructed volume. The robot is guided by an online estimated discrete viewing score field (VSF) parameterized over the 3D space of 2D location and azimuth rotation. The VSF stores, for each grid cell, the score of the corresponding view, which measures how much it reduces the uncertainty (entropy) of both geometric reconstruction and semantic labeling. Based on the VSF, we select the next best view (NBV) as the target for each time step. We then jointly optimize the traverse path and camera trajectory between two adjacent NBVs by maximizing the integral viewing score (information gain) along the path and trajectory. Through extensive evaluation, we show that our method achieves efficient and accurate online scene parsing during exploratory scanning.
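A small sketch of the next-best-view selection idea: score each candidate (x, y, azimuth) grid cell by the expected entropy reduction of reconstruction and labeling, then pick the argmax. The linear combination, the weight alpha, and the array layout are illustrative assumptions, not the paper's exact VSF definition.

```python
import numpy as np

def next_best_view(recon_entropy_gain, label_entropy_gain, alpha=0.5):
    """Pick the next best view from a discrete viewing score field.

    recon_entropy_gain, label_entropy_gain : (nx, ny, n_azimuth) arrays giving
    the expected entropy *reduction* in geometric reconstruction and semantic
    labeling for the view parameterized by each (x, y, azimuth) grid cell.
    Returns the grid index of the highest-scoring view.
    """
    vsf = alpha * recon_entropy_gain + (1.0 - alpha) * label_entropy_gain
    return np.unravel_index(np.argmax(vsf), vsf.shape)
```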
20.
This paper proposes a scale-adaptive filtering method to improve the performance of structure-preserving texture filtering for image smoothing. With classical texture filters, it is usually challenging to smooth texture at multiple scales while preserving salient structures in an image. We address this issue with the concept of adaptive bilateral filtering, where the scales of Gaussian range kernels are allowed to vary from pixel to pixel. Based on direction-wise statistics, our method distinguishes texture from structure effectively, identifies the appropriate scope around a pixel to be smoothed, and thus infers an optimal smoothing scale for it. By filtering an image with varying-scale kernels, the image is smoothed adaptively according to the distribution of texture. Through experiments, we show that, while needing fewer iterations, our proposed scheme boosts texture filtering performance in terms of preserving geometric structures at multiple scales even after aggressive smoothing of the original image.
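A naive, windowed sketch of a bilateral filter whose range-kernel scale varies per pixel, which is the mechanism described above; the per-pixel sigma map is assumed to be given, since the paper's direction-wise statistics for inferring it are not reproduced here.

```python
import numpy as np

def adaptive_bilateral_filter(img, sigma_range_map, sigma_space=2.0, radius=5):
    """Bilateral filter of a grayscale image with a per-pixel range sigma.

    img             : (h, w) float image
    sigma_range_map : (h, w) per-pixel range-kernel scales (larger = more smoothing)
    """
    h, w = img.shape
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_space ** 2))
    padded = np.pad(img, radius, mode='reflect')
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel uses this pixel's own sigma, so smoothing strength adapts.
            rng = np.exp(-((window - img[i, j]) ** 2) /
                         (2 * sigma_range_map[i, j] ** 2))
            wgt = spatial * rng
            out[i, j] = (wgt * window).sum() / wgt.sum()
    return out
```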