Similar Documents
20 similar documents found (search time: 31 ms)
1.
Most surfaces, whether of a fine-art artifact or a mechanical object, are characterized by strong self-similarity. This property originates in the natural structure of objects but also in their fabrication processes: the regularity of the sculpting technique or of the machine tool. In this paper, we propose to exploit the self-similarity of the underlying shapes to compress point cloud surfaces, which can contain millions of points at very high precision. Our approach locally resamples the point cloud in order to highlight the self-similarity of the shape, while remaining consistent with the original shape and the scanner precision. It then uses this self-similarity to create an ad hoc dictionary on which the local neighborhoods are sparsely represented, thus allowing for a light-weight representation of the total surface. We demonstrate the validity of our approach on several point clouds from fine-art and mechanical objects, as well as an urban scene. In addition, we show that our approach also filters out noise whose magnitude is smaller than the scanner precision.
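As a rough illustration of the dictionary idea described above (not the authors' pipeline), the sketch below describes each point by its flattened k-nearest-neighbour offsets, learns a small dictionary over these descriptors, and sparse-codes every neighbourhood on it; the descriptor, dictionary size and sparsity settings are all illustrative assumptions.

```python
# A minimal sketch of dictionary-based sparse coding of local point neighbourhoods,
# in the spirit of the self-similarity compression above. Descriptor and all
# parameter values are illustrative assumptions, not the paper's choices.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
points = rng.random((2000, 3))           # stand-in point cloud
k = 16                                   # neighbourhood size (assumption)

# Build a fixed-length descriptor per point: its k nearest neighbours,
# expressed relative to the point itself and flattened.
nbrs = NearestNeighbors(n_neighbors=k + 1).fit(points)
_, idx = nbrs.kneighbors(points)
patches = (points[idx[:, 1:]] - points[:, None, :]).reshape(len(points), -1)

# Learn a small dictionary and sparse-code each neighbourhood on it.
dico = DictionaryLearning(n_components=32, transform_algorithm="lasso_lars",
                          transform_alpha=0.01, max_iter=20, random_state=0)
codes = dico.fit_transform(patches)      # sparse coefficients: the light-weight representation

print("average non-zeros per patch:", np.count_nonzero(codes, axis=1).mean())
```

The sparse coefficient matrix plays the role of the light-weight representation; an actual codec would also have to store the dictionary and the resampling information.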

2.
We propose a parameter-free method to recover manifold connectivity in unstructured 2D point clouds with high noise in terms of the local feature size. This enables us to capture the features which emerge out of the noise. To achieve this, we extend the reconstruction algorithm HNN-Crust, which connects samples to two (noise-free) neighbours and has been proven to output a manifold under a relaxed sampling condition. Applying this condition to noisy samples by projecting their k-nearest neighbourhoods onto local circular fits leads to multiple candidate neighbour pairs and thus makes connecting them consistently an NP-hard problem. To solve this efficiently, we design an algorithm that searches this solution space iteratively on different scales of k. It achieves linear time complexity in the point count plus quadratic time in the size of the noise clusters. Our algorithm FitConnect extends HNN-Crust seamlessly to connect samples both with and without noise, operates as locally as the recovered features, and can output multiple open or closed piecewise curves. Incidentally, our method simplifies the output geometry by eliminating all but a representative point from noisy clusters. Since local neighbourhood fits overlap consistently, the resulting connectivity represents an ordering of the samples along a manifold. This permits us to simply blend the local fits for denoising with the locally estimated noise extent. Aside from applications like reconstructing silhouettes of noisy sensed data, this lays important groundwork for improving surface reconstruction in 3D. Our open-source algorithm is available online.
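The sketch below shows the kind of local circular fit such a method can project noisy neighbourhoods onto, using a plain algebraic (Kåsa) least-squares fit in 2D; the arc, noise level and neighbourhood size are illustrative stand-ins rather than the paper's setup.

```python
# A minimal sketch of a least-squares circular fit to a noisy 2D neighbourhood,
# followed by projection of the samples onto the fitted circle. Not FitConnect
# itself; parameters are illustrative.
import numpy as np

def fit_circle(pts):
    """Algebraic (Kåsa) least-squares circle: returns centre (cx, cy) and radius."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x**2 + y**2                                  # x^2 + y^2 = 2*cx*x + 2*cy*y + c
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)                   # c = r^2 - cx^2 - cy^2
    return np.array([cx, cy]), r

# Noisy samples of a quarter arc, as a stand-in for a k-nearest neighbourhood.
rng = np.random.default_rng(1)
t = np.linspace(0.0, np.pi / 2, 20)
arc = np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(scale=0.03, size=(20, 2))

centre, radius = fit_circle(arc)
offsets = arc - centre
projected = centre + offsets / np.linalg.norm(offsets, axis=1, keepdims=True) * radius
print("fitted radius:", radius)          # projected samples lie exactly on the fitted circle
```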

3.
4.
In this paper, we propose PCPNET, a deep-learning-based approach for estimating local 3D shape properties in point clouds. In contrast to the majority of prior techniques that concentrate on global or mid-level attributes, e.g., for shape classification or semantic labeling, we suggest a patch-based learning method, in which a series of local patches at multiple scales around each point is encoded in a structured manner. Our approach is especially well adapted for estimating local shape properties such as normals (both unoriented and oriented) and curvature from raw point clouds in the presence of strong noise and multi-scale features. Our main contributions include a novel multi-scale variant of the recently proposed PointNet architecture with emphasis on local shape information, and a series of novel applications in which we demonstrate how learning from training data arising from well-structured triangle meshes, and applying the trained model to noisy point clouds, can produce superior results compared to specialized state-of-the-art techniques. Finally, we demonstrate the utility of our approach in the context of shape reconstruction by showing how it can be used to extract normal orientation information from point clouds.
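For intuition only, the toy sketch below shows a single-scale, PointNet-style patch encoder that regresses an unoriented normal from a centred local patch; the layer sizes and overall architecture are simplified assumptions and do not reproduce the published multi-scale PCPNET network.

```python
# A toy PointNet-style patch encoder for normal regression, illustrating the
# patch-based learning idea only. Layer sizes are made up; this is not PCPNET.
import torch
import torch.nn as nn

class PatchNormalNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Shared per-point MLP, applied to every point of the patch independently.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Head mapping the pooled patch feature to an (unoriented) normal.
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, patch):                  # patch: (batch, n_points, 3), centred on the query point
        feat = self.point_mlp(patch)           # (batch, n_points, hidden)
        pooled = feat.max(dim=1).values        # order-invariant aggregation over the patch
        normal = self.head(pooled)
        return nn.functional.normalize(normal, dim=-1)

net = PatchNormalNet()
dummy_patches = torch.randn(8, 128, 3)         # 8 patches of 128 points each
print(net(dummy_patches).shape)                # torch.Size([8, 3])
```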

5.
We present a robust method to find region-level correspondences between shapes, which are invariant to changes in geometry and applicable across multiple shape representations. We generate simplified shape graphs by jointly decomposing the shapes, and devise an adapted graph-matching technique, from which we infer correspondences between shape regions. The simplified shape graphs are designed to primarily capture the overall structure of the shapes, without reflecting precise information about the geometry of each region, which enables us to find correspondences between shapes that might have significant geometric differences. Moreover, due to the special care we take to ensure the robustness of each part of our pipeline, our method can find correspondences between shapes with different representations, such as triangular meshes and point clouds. We demonstrate that the region-wise matching that we obtain can be used to find correspondences between feature points, reveal the intrinsic self-similarities of each shape and even construct point-to-point maps across shapes. Our method is both time and space efficient, leading to a pipeline that is significantly faster than comparable approaches. We demonstrate the performance of our approach through an extensive quantitative and qualitative evaluation on several benchmarks where we achieve comparable or superior performance to existing methods.

6.
Various applications of global surface parametrization benefit from the alignment of parametrization isolines with principal curvature directions. This is particularly true for recent parametrization-based meshing approaches, where this directly translates into a shape-aware edge flow, better approximation quality, and reduced meshing artifacts. Existing methods to influence a parametrization based on principal curvature directions suffer from scale-dependence, which implies the necessity of parameter variation, or try to capture complex directional shape features using simple 1D curves. Especially for non-sharp features such as chamfers, fillets, blends, and even more so for organic variants thereof, these abstractions can be ill-suited. We present a novel approach which respects and exploits the 2D nature of such directional feature regions, detects them based on coherence and homogeneity properties, and controls the parametrization process accordingly. This approach enables us to provide an intuitive, scale-invariant control parameter to the user. It also allows us to consider non-local aspects like the topology of a feature, enabling further improvements. We demonstrate that, compared to previous approaches, global parametrizations of higher quality can be generated without user intervention.

7.
Segmenting a moving foreground (fg) from its background (bg) is a fundamental step in many Machine Vision and Computer Graphics applications. Nevertheless, hardly any attempts have been made to tackle this problem in dynamic 3D scanned scenes. Scanned dynamic scenes are typically challenging due to noise and large missing parts. Here, we present a novel approach for motion segmentation in dynamic point-cloud scenes designed to cater to the unique properties of such data. Our key idea is to augment fg/bg classification with an active learning framework by refining the segmentation process in an adaptive manner. Our method initially classifies the scene points as either fg or bg in an unsupervised manner, by training discriminative RBF-SVM classifiers on automatically labeled, high-certainty fg/bg points. Next, we adaptively detect unreliable classification regions (i.e. where fg/bg separation is uncertain), locally add more training examples to better capture the motion in these areas, and re-train the classifiers to fine-tune the segmentation. This not only improves segmentation accuracy, but also allows our method to operate in a coarse-to-fine manner and thereby efficiently process high-density point clouds. Additionally, we present a unique interactive paradigm for enhancing this learning process using a manual editing tool: the user explicitly edits the RBF-SVM decision borders in unreliable regions in order to refine and correct the classification. We provide extensive qualitative and quantitative experiments on both real (scanned) and synthetic dynamic scenes.
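A minimal sketch of the core supervised step is given below: an RBF-SVM is trained on conservatively thresholded, high-certainty fg/bg seed points and then applied to the whole cloud, with low-confidence predictions marking regions where more training examples (or user edits) would be requested. The features and thresholds are illustrative assumptions, not the paper's.

```python
# A minimal sketch of RBF-SVM fg/bg classification seeded by high-certainty
# points. Feature choice (position + per-point displacement) and thresholds
# are illustrative assumptions; this is not the paper's full pipeline.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 5000
positions = rng.random((n, 3))
displacement = rng.normal(scale=0.002, size=(n, 3))         # per-point motion estimate
displacement[:500] += 0.05                                  # a moving blob = foreground

features = np.hstack([positions, displacement])
speed = np.linalg.norm(displacement, axis=1)

# "High-certainty" seed labels from conservative thresholds; ambiguous points are left out.
fg_seed = speed > 0.04
bg_seed = speed < 0.005
seeds = fg_seed | bg_seed

clf = SVC(kernel="rbf", gamma="scale", probability=True)
clf.fit(features[seeds], fg_seed[seeds].astype(int))

proba = clf.predict_proba(features)[:, 1]
uncertain = np.abs(proba - 0.5) < 0.2        # candidate regions for more training data / user edits
print("foreground points:", int((proba > 0.5).sum()), "uncertain points:", int(uncertain.sum()))
```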

8.
Feature learning for 3D shapes is challenging due to the lack of a natural parametrization for 3D surface models. We adopt the multi-view depth image representation and propose the Multi-View Deep Extreme Learning Machine (MVD-ELM) to achieve fast, high-quality projective feature learning for 3D shapes. In contrast to existing multi-view learning approaches, our method ensures that the feature maps learned for different views are mutually dependent via shared weights, and that in each layer their unprojections together form a valid 3D reconstruction of the input 3D shape, through the use of normalized convolution kernels. This leads to more accurate 3D feature learning, as shown by the encouraging results in several applications. Moreover, the 3D reconstruction property enables clear visualization of the learned features, which further demonstrates the meaningfulness of our feature learning.

9.
We present the design of an interactive image-based modeling tool that enables a user to quickly generate detailed 3D models with texture from a set of calibrated input images. Our main contribution is an intuitive user interface that is entirely based on simple 2D painting operations and does not require any technical expertise by the user or difficult pre-processing of the input images. One central component of our tool is a GPU-based multi-view stereo reconstruction scheme, implemented as an incremental algorithm that runs in the background during user interaction, so that the user does not notice any significant response delay.

10.
Sharp edges are important shape features and their extraction has been extensively studied both on point clouds and surfaces. We consider the problem of extracting sharp edges from a sparse set of colour-and-depth (RGB-D) images. The noise-ridden depth measurements are challenging for existing feature extraction methods that work solely in the geometric domain (e.g. points or meshes). By utilizing both colour and depth information, we propose a novel feature extraction method that produces much cleaner and more coherent feature lines. We make two technical contributions. First, we show that intensity edges can augment the depth map to improve normal estimation and feature localization from a single RGB-D image. Second, we design a novel algorithm for consolidating feature points obtained from multiple RGB-D images. By utilizing the normals and ridge/valley types associated with the feature points, our algorithm is effective in suppressing noise without smearing nearby features.
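The sketch below loosely illustrates the first contribution: intensity edges are detected on the colour image and used to restrict depth smoothing before normals are taken from depth gradients, so the depth discontinuity is not smeared across the boundary. The synthetic images, filter sizes and thresholds are assumptions for illustration only.

```python
# A loose illustration of intensity-edge-guided depth smoothing before normal
# estimation. Synthetic data and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(8)
h, w = 128, 128
xx = np.broadcast_to(np.arange(w), (h, w))
depth = np.where(xx < 64, 1.0, 1.2) + rng.normal(scale=0.01, size=(h, w))  # depth step + noise
intensity = np.where(xx < 64, 0.2, 0.8)                                    # colour edge at the same place

gy, gx = np.gradient(intensity)
edge = np.hypot(gx, gy) > 0.05                       # intensity-edge mask

# Box-blur the depth, but keep the original values near intensity edges so the
# depth discontinuity is not blurred away.
k = 5
pad_d = np.pad(depth, k // 2, mode="edge")
pad_e = np.pad(edge, k // 2, mode="edge")
blurred = np.mean([pad_d[i:i + h, j:j + w] for i in range(k) for j in range(k)], axis=0)
near_edge = np.max([pad_e[i:i + h, j:j + w] for i in range(k) for j in range(k)], axis=0)
smoothed = np.where(near_edge, depth, blurred)

# Normals from depth gradients: n ~ (-dz/dx, -dz/dy, 1), normalised per pixel.
dzy, dzx = np.gradient(smoothed)
normals = np.dstack([-dzx, -dzy, np.ones_like(smoothed)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)
print("mean |n_z| away from the edge:", float(np.abs(normals[..., 2][~near_edge]).mean()))
```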

11.
We present a new framework for point cloud denoising by patch-collaborative spectral analysis. A collaborative generalization of each surface patch is defined, combining similar patches from the denoised surface. The Laplace–Beltrami operator of the collaborative patch is then used to selectively smooth the surface in a robust manner that can gracefully handle high levels of noise, yet preserves sharp surface features. The resulting denoising algorithm competes favourably with state-of-the-art approaches, and extends patch-based algorithms from the image processing domain to point clouds of arbitrary sampling. We demonstrate the accuracy and noise-robustness of the proposed algorithm on standard benchmark models as well as range scans, and compare it to existing methods for point cloud denoising.
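As a much-simplified stand-in for the Laplace-Beltrami smoothing described above, the sketch below runs uniform graph-Laplacian (umbrella) smoothing on a k-NN graph of a noisy point cloud; it omits the patch collaboration and feature preservation entirely, and k, the step size and the iteration count are illustrative.

```python
# A minimal sketch of Laplacian-based point-cloud smoothing, using a k-NN graph
# Laplacian as a crude stand-in for the Laplace-Beltrami operator of a
# collaborative patch. Parameters are illustrative assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def laplacian_smooth(points, k=12, lam=0.5, iterations=10):
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(points)
    _, idx = nbrs.kneighbors(points)
    smoothed = points.copy()
    for _ in range(iterations):
        neighbour_mean = smoothed[idx[:, 1:]].mean(axis=1)    # uniform (umbrella) weights
        smoothed += lam * (neighbour_mean - smoothed)         # move towards the local average
    return smoothed

rng = np.random.default_rng(3)
t = rng.uniform(0, 2 * np.pi, 2000)
clean = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
noisy = clean + rng.normal(scale=0.05, size=clean.shape)
print("RMS error before:", np.sqrt(((noisy - clean) ** 2).sum(1)).mean())
print("RMS error after: ", np.sqrt(((laplacian_smooth(noisy) - clean) ** 2).sum(1)).mean())
```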

12.
In this paper, we present a novel method for the direct volume rendering of large smoothed-particle hydrodynamics (SPH) simulation data without transforming the unstructured data to an intermediate representation. By directly visualizing the unstructured particle data, we avoid long preprocessing times and large storage requirements. This enables the visualization of large, time-dependent, and multivariate data both as a post-process and in situ. To address the computational complexity, we introduce stochastic volume rendering that considers only a subset of particles at each step during ray marching. The sample probabilities for selecting this subset at each step are thereby determined both in a view-dependent manner and based on the spatial complexity of the data. Our stochastic volume rendering enables us to scale continuously from a fast, interactive preview to a more accurate volume rendering at higher cost. Lastly, we discuss the visualization of free-surface and multi-phase flows by including a multi-material model with volumetric and surface shading into the stochastic volume rendering.
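The sketch below illustrates the stochastic principle on a single ray: at every ray-marching step the density is estimated from a random subset of the nearby particles and rescaled by the sampling ratio, then accumulated with a standard emission-absorption model. Kernel, step size, subset size and coefficients are illustrative assumptions, not the paper's renderer.

```python
# A minimal sketch of stochastic ray marching through a particle set: density at
# each step comes from a random subset of nearby particles. All parameters and
# the kernel are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
particles = rng.random((50000, 3))           # SPH-like particle positions in a unit cube
h = 0.05                                     # smoothing length (assumption)

def density_estimate(x, candidates, n_samples=64):
    """Monte-Carlo density estimate at x from a random subset of candidate particles."""
    pick = rng.choice(len(candidates), size=min(n_samples, len(candidates)), replace=False)
    d = np.linalg.norm(candidates[pick] - x, axis=1)
    w = np.clip(1.0 - d / h, 0.0, None) ** 2                  # cheap compact kernel
    # The subset stands in for all candidates, so rescale by the sampling ratio.
    return w.sum() * len(candidates) / len(pick)

# March one ray through the cube, accumulating opacity front to back.
origin, direction = np.array([0.0, 0.5, 0.5]), np.array([1.0, 0.0, 0.0])
step, transmittance, radiance = 0.01, 1.0, 0.0
for s in np.arange(0.0, 1.0, step):
    x = origin + s * direction
    near = particles[np.abs(particles - x).max(axis=1) < h]   # crude spatial pruning
    if len(near) == 0:
        continue
    sigma = 0.02 * density_estimate(x, near)                  # extinction coefficient (assumption)
    alpha = 1.0 - np.exp(-sigma * step)
    radiance += transmittance * alpha * 1.0                   # constant emission colour
    transmittance *= 1.0 - alpha
print("accumulated radiance:", radiance, "final transmittance:", transmittance)
```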

13.
Hardware tessellation is de facto the preferred mechanism to adaptively control mesh resolution with maximal performance. However, owing to its fixed and uniform pattern, leveraging tessellation for feature-aware LOD rendering remains a challenging problem. We relax this fundamental constraint by introducing a new spatial and temporal blending mechanism of tessellation levels, which is built on top of a novel hierarchical representation of multi-resolution meshes. This mechanism allows topological changes to be finely controlled so that vertices can be removed or added at the most appropriate location to preserve geometric features in a continuous and artifact-free manner. We then show how to extend edge-collapse based decimation methods to build feature-aware multi-resolution meshes that match the tessellation patterns. Our approach is fully compatible with current hardware tessellators and only adds a small overhead on memory consumption and tessellation cost.

14.
15.
We present an integrated, fully GPU-based processing pipeline to interactively render new views of arbitrary scenes from calibrated but otherwise unstructured input views. In a two-step procedure, our method first generates for each input view a dense proxy of the scene using a new multi-view stereo formulation. Each scene proxy consists of a structured cloud of feature-aware particles whose image-space footprints are automatically aligned to depth discontinuities of the scene geometry and hence effectively handle sharp object boundaries and occlusions. We propose a particle optimization routine combined with a special parameterization of the view space that enables efficient proxy generation as well as robust and intuitive filter operators for noise and outlier removal. Moreover, our generic proxy generation allows us to flexibly handle scene complexities ranging from small objects up to complete outdoor scenes. The second phase of the algorithm combines these particle clouds in real-time into a view-dependent proxy for the desired output view and performs a pixel-accurate accumulation of the colour contributions from each available input view. This makes it possible to reconstruct even fine-scale view-dependent illumination effects. We demonstrate how all the processing stages of the pipeline can be implemented entirely on the GPU with memory-efficient, scalable data structures for maximum performance. This allows us to generate new output renderings of high visual quality from input images in real-time.

16.
Reconstructing a surface mesh from a set of discrete point samples is a fundamental problem in geometric modeling. It becomes challenging in the presence of 'singularities' such as boundaries, sharp features, and non-manifolds. Some current research on reconstruction has addressed handling some of these singularities, but a unified approach to handle them all is missing. In this paper we allow the presence of various singularities by requiring that the sampled object is a collection of smooth surface patches with boundaries that can meet or intersect. Our algorithm first identifies and reconstructs the features where singularities occur. Next, it reconstructs the surface patches containing these feature curves. The identification and reconstruction of feature curves are achieved by a novel combination of the Gaussian-weighted graph Laplacian and Reeb graphs. The global reconstruction is achieved by a method akin to the well-known Cocone reconstruction, but with a weighted Delaunay triangulation that allows the feature samples to be protected with balls. We provide various experimental results to demonstrate the effectiveness of our feature-preserving singular surface reconstruction algorithm.

17.
In this paper, we describe a novel approach for the reconstruction of animated meshes from a series of time-deforming point clouds. Given a set of unordered point clouds that have been captured by a fast 3-D scanner, our algorithm is able to compute coherent meshes which approximate the input data at arbitrary time instances. Our method is based on the computation of an implicit function in ℝ⁴ that approximates the time-space surface of the time-varying point cloud. We then use the four-dimensional implicit function to reconstruct a polygonal model for the first time-step. By sliding this template mesh along the time-space surface in an as-rigid-as-possible manner, we obtain reconstructions for further time-steps which have the same connectivity as the previously extracted mesh while recovering rigid motion exactly. The resulting animated meshes allow accurate motion tracking of arbitrary points and are well suited for animation compression. We demonstrate the qualities of the proposed method by applying it to several data sets acquired by real-time 3-D scanners.

18.
The emergence of laser/LiDAR sensors, reliable multi-view stereo techniques and, more recently, consumer depth cameras have brought point clouds to the forefront as a data format useful for a number of applications. Unfortunately, the point data from these channels is often imperfect, frequently contaminated with severe outliers and noise. This paper presents a robust consolidation algorithm for low-quality point data from outdoor scenes, which essentially consists of two steps: 1) outlier filtering and 2) noise smoothing. We first design a connectivity-based scheme to evaluate outlierness and thereby detect sparse outliers. Meanwhile, a clustering method is used to further remove small dense outliers. Both outlier removal methods are insensitive to the choice of the neighborhood size and the level of outliers. Subsequently, we propose a novel approach to estimate normals for noisy points based on robust partial rankings, which is the basis of the noise smoothing. Accordingly, a fast approach is exploited to smooth noise while preserving sharp features. We evaluate the effectiveness of the proposed method on point clouds from a variety of outdoor scenes.
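For the first stage only, the sketch below applies the common statistical outlier-removal baseline, discarding points whose mean k-NN distance is far above the global average; the paper's connectivity- and clustering-based tests are more elaborate, and k and the cut-off used here are assumptions.

```python
# A minimal sketch of statistical outlier removal on a point cloud: a common
# baseline, not the paper's connectivity-based scheme. k and the cut-off are
# illustrative assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def remove_sparse_outliers(points, k=16, std_ratio=2.0):
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(points)
    dists, _ = nbrs.kneighbors(points)
    score = dists[:, 1:].mean(axis=1)                 # mean distance to the k nearest neighbours
    keep = score < score.mean() + std_ratio * score.std()
    return points[keep], keep

rng = np.random.default_rng(5)
surface = rng.random((10000, 3)) * [1.0, 1.0, 0.01]   # a thin noisy slab as a stand-in scan
outliers = rng.random((300, 3))                       # sparse outliers scattered in the volume
cloud = np.vstack([surface, outliers])

cleaned, keep = remove_sparse_outliers(cloud)
print("removed", int((~keep).sum()), "of", len(cloud), "points")
```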

19.
We introduce novel multi-scale kernels using the random walk framework and derive corresponding embeddings and pairwise distances. The fractional moments of the rate of a continuous-time random walk (equivalently, the diffusion rate) are used to discover higher-order kernels (or similarities) between pairs of points. The formulated kernels are isometry, scale and tessellation invariant, can be made globally or locally shape aware, and are insensitive to partial objects and noise, based on the moment and influence parameters. In addition, the corresponding kernel distances and embeddings are convergent and efficiently computable. We introduce dual Green's mean signatures based on the kernels and discuss the applicability of the multi-scale distance and embedding. Collectively, we present a unified view of popular embeddings and distance metrics while recovering intuitive probabilistic interpretations on discrete surface meshes.
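As a point of reference, the sketch below computes the familiar multi-scale heat (diffusion) kernels from the eigendecomposition of a k-NN graph Laplacian; the paper's kernels, built from fractional moments of the random-walk rate, generalize this special case, and all parameters here are illustrative.

```python
# A minimal sketch of multi-scale heat kernels k_t(i, j) = sum_l exp(-t*lambda_l)
# * phi_l(i) * phi_l(j) on a k-NN graph Laplacian. Only the standard heat-kernel
# special case; parameters are illustrative assumptions.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import laplacian

rng = np.random.default_rng(6)
t_param = rng.uniform(0, 2 * np.pi, 300)
points = np.column_stack([np.cos(t_param), np.sin(t_param), 0.2 * np.sin(3 * t_param)])

W = kneighbors_graph(points, n_neighbors=8, mode="connectivity")
W = 0.5 * (W + W.T)                                   # symmetrise the adjacency
L = laplacian(W, normed=True).toarray()
evals, evecs = np.linalg.eigh(L)                      # columns of evecs are eigenvectors phi_l

def heat_kernel(i, j, t):
    """Diffusion similarity between nodes i and j at scale t."""
    return float(np.sum(np.exp(-t * evals) * evecs[i] * evecs[j]))

for t in (0.1, 1.0, 10.0):                            # small t = local, large t = global similarity
    print(f"t={t:5.1f}  k_t(0, 1)={heat_kernel(0, 1, t):.4f}  k_t(0, 150)={heat_kernel(0, 150, t):.4f}")
```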

20.
Multi-scale Feature Extraction on Point-Sampled Surfaces
We present a new technique for extracting line-type features on point-sampled geometry. Given an unstructured point cloud as input, our method first applies principal component analysis on local neighborhoods to classify points according to the likelihood that they belong to a feature. Using hysteresis thresholding, we then compute a minimum spanning graph as an initial approximation of the feature lines. To smooth out the features while maintaining a close connection to the underlying surface, we use an adaptation of active contour models. Central to our method is a multi-scale classification operator that allows feature analysis at multiple scales, using the size of the local neighborhoods as a discrete scale parameter. This significantly improves the reliability of the detection phase and makes our method more robust in the presence of noise. To illustrate the usefulness of our method, we have implemented a non-photorealistic point renderer to visualize point-sampled surfaces as line drawings of their extracted feature curves.
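A compact sketch of the classification step is given below: per-point PCA over neighbourhoods of several sizes, with the surface variation λ₀/(λ₀+λ₁+λ₂) as feature likelihood and the maximum over scales as the multi-scale response; the hysteresis thresholding, minimum spanning graph and active-contour stages are omitted, and the scales and threshold are assumptions.

```python
# A minimal sketch of multi-scale PCA classification of feature candidates via
# surface variation. Scales and threshold are illustrative assumptions; later
# stages of the pipeline are omitted.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def surface_variation(points, k):
    nbrs = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nbrs.kneighbors(points)
    variation = np.empty(len(points))
    for i, nbhd in enumerate(points[idx]):
        cov = np.cov(nbhd.T)
        evals = np.sort(np.linalg.eigvalsh(cov))      # ascending: evals[0] is the smallest
        variation[i] = evals[0] / max(evals.sum(), 1e-12)
    return variation

# Two perpendicular faces of a cube: their shared edge is a line-type feature.
rng = np.random.default_rng(7)
face_a = np.column_stack([rng.random(3000), rng.random(3000), np.zeros(3000)])
face_b = np.column_stack([rng.random(3000), np.zeros(3000), rng.random(3000)])
cloud = np.vstack([face_a, face_b]) + rng.normal(scale=0.003, size=(6000, 3))

scores = np.max([surface_variation(cloud, k) for k in (10, 20, 40)], axis=0)   # multi-scale response
print("points classified as feature candidates:", int((scores > 0.08).sum()))
```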

