相似文献 (Similar Documents)
20 similar documents found (search time: 15 ms)
1.
In this work we present the first algorithm for restoring consistency between curve networks on non‐parallel cross‐sections. Our method addresses a critical but overlooked challenge in the reconstruction process from cross‐sections that stems from the fact that cross‐sectional slices are often generated independently of one another, such as in interactive volume segmentation. As a result, the curve networks on two non‐parallel slices may disagree where the slices intersect, which makes these cross‐sections an invalid input for surfacing. We propose a method that takes as input an arbitrary number of non‐parallel slices, each partitioned into two or more labels by a curve network, and outputs a modified set of curve networks on these slices that are guaranteed to be consistent. We formulate the task of restoring consistency while preserving the shape of input curves as a constrained optimization problem, and we propose an effective solution framework. We demonstrate our method on a data‐set of complex multi‐labeled input cross‐sections. Our technique efficiently produces consistent curve networks even in the presence of large errors.

2.
Feature curves on surface meshes are usually defined solely based on local shape properties such as dihedral angles and principal curvatures. From the application perspective, however, the meaningfulness of a network of feature curves also depends on a global scale parameter that takes the distance between feature curves into account, i.e., on a coarse scale, nearby feature curves should be merged or suppressed if the surface region between them is not representable at the given scale/resolution. In this paper, we propose a computational approach to the intuitive notion of scale‐conforming feature curve networks, where the density of feature curves on the surface adapts to a global scale parameter. We present a constrained global optimization algorithm that computes scale‐conforming feature curve networks by eliminating curve segments that represent surface features which are not compatible with the prescribed scale. To demonstrate the usefulness of our approach, we apply isotropic and anisotropic remeshing schemes that take our feature curve networks as input. For a number of example meshes, we thus generate high‐quality shape approximations at various levels of detail.

3.
With the widespread use of 3D acquisition devices, there is an increasing need of consolidating captured noisy and sparse point cloud data for accurate representation of the underlying structures. There are numerous algorithms that rely on a variety of assumptions such as local smoothness to tackle this ill‐posed problem. However, such priors lead to loss of important features and geometric detail. Instead, we propose a novel data‐driven approach for point cloud consolidation via a convolutional neural network based technique. Our method takes a sparse and noisy point cloud as input, and produces a dense point cloud accurately representing the underlying surface by resolving ambiguities in geometry. The resulting point set can then be used to reconstruct accurate manifold surfaces and estimate surface properties. To achieve this, we propose a generative neural network architecture that can input and output point clouds, unlocking a powerful set of tools from the deep learning literature. We use this architecture to apply convolutional neural networks to local patches of geometry for high quality and efficient point cloud consolidation. This results in significantly more accurate surfaces, as we illustrate with a diversity of examples and comparisons to the state‐of‐the‐art.

4.
We propose a parameter‐free method to recover manifold connectivity in unstructured 2D point clouds with high noise in terms of the local feature size. This enables us to capture the features which emerge out of the noise. To achieve this, we extend the reconstruction algorithm HNN‐Crust, which connects samples to two (noise‐free) neighbours and has been proven to output a manifold for a relaxed sampling condition. Applying this condition to noisy samples by projecting their k‐nearest neighbourhoods onto local circular fits leads to multiple candidate neighbour pairs and thus makes connecting them consistently an NP‐hard problem. To solve this efficiently, we design an algorithm that searches this solution space iteratively on different scales of k. It achieves linear time complexity in terms of point count plus quadratic time in the size of noise clusters. Our algorithm FitConnect extends HNN‐Crust seamlessly to connect both noisy and noise‐free samples, operates as locally as the recovered features, and can output multiple open or closed piecewise curves. Incidentally, our method simplifies the output geometry by eliminating all but a representative point from noisy clusters. Since local neighbourhood fits overlap consistently, the resulting connectivity represents an ordering of the samples along a manifold. This permits us to simply blend the local fits for denoising with the locally estimated noise extent. Aside from applications like reconstructing silhouettes of noisy sensed data, this lays important groundwork to improve surface reconstruction in 3D. Our open‐source algorithm is available online.
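The abstract above projects k‐nearest neighbourhoods onto local circular fits. As a minimal sketch of what such a fit can look like, here is the classic algebraic (Kåsa) least‐squares circle fit; the function name and the use of Cramer's rule are illustrative choices, not taken from the paper.

```python
def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: returns (cx, cy, r).

    Solves the linear normal equations for D, E, F in
    x^2 + y^2 + D*x + E*y + F = 0, then reads off centre and radius.
    """
    # Accumulate the 3x3 normal equations A^T A m = A^T b.
    Sxx = Sxy = Syy = Sx = Sy = n = 0.0
    bx = by = b1 = 0.0
    for x, y in points:
        z = x * x + y * y
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y; n += 1.0
        bx += -z * x; by += -z * y; b1 += -z
    A = [[Sxx, Sxy, Sx],
         [Sxy, Syy, Sy],
         [Sx,  Sy,  n ]]
    b = [bx, by, b1]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    sol = []
    for col in range(3):  # Cramer's rule on the 3x3 system
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = b[r]
        sol.append(det3(M) / d)
    D, E, F = sol
    cx, cy = -D / 2.0, -E / 2.0
    r = (cx * cx + cy * cy - F) ** 0.5
    return cx, cy, r
```

For noise‐free samples on a circle the fit is exact; FitConnect additionally has to arbitrate between the candidate neighbour pairs such fits produce, which is the NP‐hard part the abstract refers to.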

5.
Robust feature extraction is an integral part of scientific visualization. In unsteady vector field analysis, researchers recently directed their attention towards the computation of near‐steady reference frames for vortex extraction, which is a numerically challenging endeavor. In this paper, we utilize a convolutional neural network to combine two steps of the visualization pipeline in an end‐to‐end manner: the filtering and the feature extraction. We use neural networks for the extraction of a steady reference frame for a given unsteady 2D vector field. By conditioning the neural network to noisy inputs and resampling artifacts, we obtain numerically more stable results than existing optimization‐based approaches. Supervised deep learning typically requires a large amount of training data. Thus, our second contribution is the creation of a vector field benchmark data set, which is generally useful for any local deep learning‐based feature extraction. Based on the Vatistas velocity profile, we formulate a parametric vector field mixture model that we parameterize based on numerically‐computed example vector fields in near‐steady reference frames. Given the parametric model, we can efficiently synthesize thousands of vector fields that serve as input to our deep learning architecture. The proposed network is evaluated on an unseen numerical fluid flow simulation.
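For context, the Vatistas vortex model mentioned above is commonly written as the tangential velocity profile (this is the standard form from the literature, not quoted from the paper itself):

$$
v_\theta(r) \;=\; \frac{\Gamma}{2\pi}\,\frac{r}{\left(r_c^{2n} + r^{2n}\right)^{1/n}},
$$

where $\Gamma$ is the circulation, $r_c$ the core radius, and $n$ a shape parameter; $n = 2$ is the usual choice, and $n \to \infty$ recovers the Rankine vortex. A mixture of such profiles with varying parameters is what allows thousands of training fields to be synthesized.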

6.
Identifying incomplete or partial fingerprints from a large fingerprint database remains a difficult challenge today. Existing studies on partial fingerprints focus on one-to-one matching using local ridge details. In this paper, we investigate the problem of retrieving candidate lists for matching partial fingerprints by exploiting global topological features. Specifically, we propose an analytical approach for reconstructing the global topology representation from a partial fingerprint. First, we present an inverse orientation model for describing the reconstruction problem. Then, we provide a general expression for all valid solutions to the inverse model. This allows us to preserve data fidelity in the existing segments while exploring missing structures in the unknown parts. We have further developed algorithms for estimating the missing orientation structures based on a priori knowledge of ridge topology features. Our statistical experiments show that our proposed model-based approach can effectively reduce the number of candidates for pairwise fingerprint matching, and thus significantly improve the system retrieval performance for partial fingerprint identification.

7.
8.
A practical way to generate a high dynamic range (HDR) video using off‐the‐shelf cameras is to capture a sequence with alternating exposures and reconstruct the missing content at each frame. Unfortunately, existing approaches are typically slow and are not able to handle challenging cases. In this paper, we propose a learning‐based approach to address this difficult problem. To do this, we use two sequential convolutional neural networks (CNN) to model the entire HDR video reconstruction process. In the first step, we align the neighboring frames to the current frame by estimating the flows between them using a network, which is specifically designed for this application. We then combine the aligned and current images using another CNN to produce the final HDR frame. We perform an end‐to‐end training by minimizing the error between the reconstructed and ground truth HDR images on a set of training scenes. We produce our training data synthetically from existing HDR video datasets and simulate the imperfections of standard digital cameras using a simple approach. Experimental results demonstrate that our approach produces high‐quality HDR videos and is an order of magnitude faster than the state‐of‐the‐art techniques for sequences with two and three alternating exposures.
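To make the alternating‐exposure setup concrete, here is a minimal classical (Debevec‐style weighted) merge of two already‐aligned LDR frames into relative radiance. This is a textbook baseline shown only for illustration, not the paper's CNN; the function and parameter names are our own.

```python
def merge_exposures(ldr_a, exposure_a, ldr_b, exposure_b):
    """Merge two aligned LDR frames (pixel values in [0, 1], flat lists)
    into a relative-radiance estimate.

    Each pixel is first linearized by dividing by its exposure time,
    then the two estimates are averaged with a hat weight so that
    over- and under-exposed values contribute less.
    """
    def weight(v):
        # Hat function: peaks at mid-grey, near zero at 0 and 1.
        return max(1e-4, 1.0 - abs(2.0 * v - 1.0))

    out = []
    for pa, pb in zip(ldr_a, ldr_b):
        wa, wb = weight(pa), weight(pb)
        ra, rb = pa / exposure_a, pb / exposure_b  # linear radiance estimates
        out.append((wa * ra + wb * rb) / (wa + wb))
    return out
```

A learning‐based method replaces both the flow‐based alignment and this fixed weighting with trained networks, which is where the robustness on challenging cases comes from.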

9.
Splines are part of the standard toolbox for the approximation of functions and curves in ℝ^d. Still, the problem of finding the spline that best approximates an input function or curve is ill‐posed, since in general this yields a “spline” with an infinite number of segments. The problem can be regularized by adding a penalty term for the number of spline segments. We show how this idea can be formulated as an ℓ0‐regularized quadratic problem. This gives us a notion of optimal approximating splines that depend on one parameter, which weights the approximation error against the number of segments. We detail this concept for different types of splines including B‐splines and composite Bézier curves. Based on the latest developments in the field of sparse approximation, we devise a solver for the resulting minimization problems and show applications to spline approximation of planar and space curves and to spline conversion of motion capture data.
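The trade‐off described above, approximation error plus a per‐segment penalty, can be illustrated with a simplified 1D stand‐in: an exact dynamic program for the optimal piecewise‐constant fit under an ℓ0 segment penalty. Real splines would replace the per‐segment mean with a least‐squares polynomial; this sketch and its names are ours, not the paper's solver.

```python
def l0_segment(values, penalty):
    """Optimal piecewise-constant approximation of a 1D sequence,
    minimizing  sum of squared errors + penalty * (number of segments).

    O(n^2) dynamic program; the best constant on a segment is its mean,
    so prefix sums give each segment's cost in O(1).
    Returns (segment boundaries as (start, end) pairs, total cost).
    """
    n = len(values)
    s = [0.0] * (n + 1)   # prefix sums of values
    s2 = [0.0] * (n + 1)  # prefix sums of squared values
    for i, v in enumerate(values):
        s[i + 1] = s[i] + v
        s2[i + 1] = s2[i] + v * v

    def seg_cost(i, j):
        # Squared error of fitting the mean on values[i:j].
        m = (s[j] - s[i]) / (j - i)
        return (s2[j] - s2[i]) - m * (s[j] - s[i])

    best = [0.0] + [float("inf")] * n
    cut = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + seg_cost(i, j) + penalty
            if c < best[j]:
                best[j], cut[j] = c, i

    bounds, j = [], n
    while j > 0:
        bounds.append((cut[j], j))
        j = cut[j]
    return bounds[::-1], best[n]
```

Raising `penalty` trades fidelity for fewer segments, exactly the single parameter the abstract describes.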

10.
In this article, we propose a novel approach for measuring word association based on the joint occurrences distribution in a text. Our approach relies on computing a sum of distances between neighboring occurrences of a given word pair and comparing it with a vector of randomly generated occurrences. The idea behind this approach is that if the distribution of co‐occurrences is close to random, or if the words tend to appear together less frequently than by chance, they are not semantically related. We devise a distance function S that evaluates the word association rate. Using S, we build a concept tree, which provides a visual and comprehensive representation of keyword associations in a text. In order to illustrate the effectiveness of our algorithm, we apply it to three different texts, showing the consistency and significance of the obtained results with respect to the semantics of the documents. Finally, we compare the results obtained by applying our proposed algorithm with the ones achieved by both human experts and the co‐occurrence correlation method. We show that our method is consistent with the experts' evaluation and outperforms the co‐occurrence correlation method.
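A minimal sketch of the idea above: compare the observed gap sum of a word pair's occurrences with gap sums of randomly placed occurrences. This is a simplified stand‐in for the paper's distance function S, not its exact definition, and all names here are ours.

```python
import random

def distance_sum(positions):
    """Sum of gaps between consecutive (sorted) occurrence positions."""
    return sum(b - a for a, b in zip(positions, positions[1:]))

def association_score(tokens, w1, w2, trials=300, seed=0):
    """Z-like score: observed gap sum of the merged occurrences of
    w1 and w2, standardized against random placements of the same
    number of occurrences.  Strongly negative values mean the two
    words sit closer together than chance would predict."""
    pos = sorted(i for i, t in enumerate(tokens) if t in (w1, w2))
    observed = distance_sum(pos)
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        rand_pos = sorted(rng.sample(range(len(tokens)), len(pos)))
        samples.append(distance_sum(rand_pos))
    mean = sum(samples) / trials
    var = sum((x - mean) ** 2 for x in samples) / trials
    sd = var ** 0.5 or 1.0
    return (observed - mean) / sd
```

On a text where the two words always appear adjacently, the score is clearly negative; near zero indicates no association, matching the abstract's "close to random" criterion.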

11.
A new representation of digital curves is introduced. It has the property of being unique and canonical when computed on closed curves. The representation is based on the discrete notion of tangents and is complete in the sense that it contains all discrete segments and all polygonalizations which can be constructed with connected subsets of the original curve. This representation is extended for dealing with noisy curves, and we also propose a multi-scale extension. An application is given to curve decomposition into concave–convex parts, with further applications in syntax-based methods.

12.
In this paper, a feature-preserving mesh hole-filling algorithm is realized by the polynomial blending technique. We first search for feature points in the neighborhood of the hole. These feature points allow us to define the feature curves with missing parts in the hole. A polynomial blending curve is constructed to complete the missing parts of the feature curves. These feature curves divide the original complex hole into small simple sub-holes. We use the Bézier-Lagrange hybrid patch to fill each sub-hole. The experimental results show that our mesh hole-filling algorithm can effectively restore the original shape of the hole.
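As a minimal illustration of completing a feature curve's missing part with a polynomial blend, here is a cubic Hermite curve that bridges two endpoints while matching their tangents. The exact blending polynomial used by the paper may differ; this sketch and its names are our own assumption.

```python
def hermite_blend(p0, t0, p1, t1, steps=10):
    """Cubic Hermite curve from point p0 (tangent t0) to p1 (tangent t1).

    Points and tangents are tuples of equal dimension; returns
    steps + 1 sample points along the blend.  Matching endpoint
    positions and tangents is what keeps the completed feature
    curve smooth where it meets the existing geometry.
    """
    pts = []
    for k in range(steps + 1):
        u = k / steps
        # Standard cubic Hermite basis functions.
        h00 = 2 * u**3 - 3 * u**2 + 1
        h10 = u**3 - 2 * u**2 + u
        h01 = -2 * u**3 + 3 * u**2
        h11 = u**3 - u**2
        pts.append(tuple(h00 * a + h10 * b + h01 * c + h11 * d
                         for a, b, c, d in zip(p0, t0, p1, t1)))
    return pts
```

Once such blends close the feature curves across the hole, each resulting sub‐hole is simple enough to be filled by a single surface patch, as the abstract describes.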

13.
We propose a method for the data‐driven inference of temporal evolutions of physical functions with deep learning. More specifically, we target fluid flow problems, and we propose a novel LSTM‐based approach to predict the changes of the pressure field over time. The central challenge in this context is the high dimensionality of Eulerian space‐time data sets. We demonstrate for the first time that dense 3D+time functions of a physical system can be predicted within the latent spaces of neural networks, and we arrive at a neural‐network based simulation algorithm with significant practical speed‐ups. We highlight the capabilities of our method with a series of complex liquid simulations, and with a set of single‐phase buoyancy simulations. With a set of trained networks, our method is more than two orders of magnitude faster than a traditional pressure solver. Additionally, we present and discuss a series of detailed evaluations for the different components of our algorithm.

14.
In this paper, we propose a new method for reconstructing 3D models from a noisy and incomplete 3D scan and a coarse template model. The main idea is to maintain characteristic high‐level features of the template that remain unchanged for different variants of the same type of object. As invariants, we chose the partial symmetry structure of the template model under Euclidean transformations, i.e. we maintain the algebraic structure of all reflections, rotations and translations that map the object partially to itself. We propose an optimization scheme that maintains continuous and discrete symmetry properties of this kind while registering a template against scan data using a deformable iterative closest points (ICP) framework with thin‐plate‐spline regularization. We apply our new deformation approach to a large number of example data sets and demonstrate that symmetry‐guided template matching often yields much more plausible reconstructions than previous variants of ICP.

15.
We present an incremental Voronoi vertex labelling algorithm for approximating contours, medial axes and dominant points (high curvature points) from 2D point sets. Though many algorithms exist for reconstructing curves, medial axes or dominant points, a unified framework capable of approximating all three from points is missing in the literature. Our algorithm estimates the normals at each sample point through poles (farthest Voronoi vertices of a sample point) and uses the estimated normals and the corresponding tangents to determine the spatial locations (inner or outer) of the Voronoi vertices with respect to the original curve. The vertex classification helps to construct a piece‐wise linear approximation to the object boundary. We provide a theoretical analysis of the algorithm for points sampled non‐uniformly (ε‐sampling) from simple, closed, concave and smooth curves. The proposed framework has been thoroughly evaluated for its usefulness using various test data. Results indicate that even sparsely and non‐uniformly sampled curves with outliers, or collections of curves, are faithfully reconstructed by the proposed algorithm.

16.
A global-to-local partial matching algorithm for planar curves
In retrieval systems based on curve matching, improving the speed and accuracy of curve matching is of great importance. We propose a partial matching algorithm for planar curves that proceeds in two stages: global search and local matching. A global search first identifies candidate matching regions, and precise matching and verification are then performed locally. For curves with few feature points, the curve is partitioned into multiple segments at curvature extrema, and a local linear search is used to achieve partial matching. Experimental results demonstrate the effectiveness of the algorithm.

17.
A major bottleneck in developing knowledge-based systems is the acquisition of knowledge. Machine learning is an area concerned with the automation of this process of knowledge acquisition. Neural networks generally represent their knowledge at the lower level, while knowledge-based systems use higher-level knowledge representations. The method we propose here provides a technique that allows us to automatically extract conjunctive rules from the lower-level representation used by neural networks. The strength of neural networks in dealing with noise has enabled us to produce correct rules in a noisy domain. Thus we propose a method that uses neural networks as the basis for the automation of knowledge acquisition and can be applied to noisy, real-world domains. © 1993 John Wiley & Sons, Inc.
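A toy sketch of extracting conjunctive rules from a trained unit: for a thresholded linear unit with boolean inputs, a set S of inputs forms a sound rule "IF all of S THEN fire" exactly when the unit fires even in the worst case over the remaining inputs. This illustrates the general idea only; it is not the paper's specific algorithm, and the function name is ours.

```python
from itertools import combinations

def extract_rules(weights, theta, max_size=2):
    """weights: dict mapping input name -> weight of a boolean input;
    theta: firing threshold.  Returns minimal conjunctions S such that
    activation >= theta whenever every input in S is 1, regardless of
    the other inputs (worst case: negative-weight inputs outside S on,
    positive-weight inputs outside S off)."""
    names = sorted(weights)
    neg_total = sum(w for w in weights.values() if w < 0)
    rules = []
    for size in range(1, max_size + 1):
        for S in combinations(names, size):
            if any(set(r) <= set(S) for r in rules):
                continue  # a smaller rule already covers this conjunction
            act = sum(weights[i] for i in S)
            # add worst-case contribution of inputs outside S
            act += neg_total - sum(weights[i] for i in S if weights[i] < 0)
            if act >= theta:
                rules.append(S)
    return rules
```

Because the check is a worst‐case bound rather than an enumeration of all input vectors, every extracted rule is guaranteed correct with respect to the unit, which mirrors the soundness concern behind rule extraction from networks.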

18.
Keyword spotting refers to the detection of all occurrences of any given keyword in input speech utterances. In this paper, we define a keyword spotter as a binary classifier that separates a class of sentences containing a target keyword from a class of sentences which do not include the target keyword. In order to discriminate the mentioned classes, an efficient classification method and a suitable feature set are to be studied. For the classification method, we propose an evolutionary algorithm to train the separating hyperplane between the two classes. As our discriminative feature set, we propose two confidence measure functions. The first confidence measure function estimates the probability of phoneme presence in the speech frames, and the second one determines the duration of each phoneme. We define these functions based on the acoustic, spectral and statistical features of speech. The results on TIMIT indicate that the proposed evolutionary-based discriminative keyword spotter has lower computational complexity and higher speed in both test and train phases, in comparison to the SVM-based discriminative keyword spotter. Additionally, the proposed system is robust in noisy conditions.

19.
An implicit curve construction method based on BP neural networks
Implicit curves and surfaces are a current research focus in computer graphics. By combining BP neural networks with the principles of implicit curve construction, we propose a new method for constructing implicit curves. First, the network's inputs and outputs are built from the constraint points, converting the implicit function that describes an object's boundary curve into an explicit function; a BP neural network then approximates this explicit function; finally, the fitted boundary curve is obtained from the simulated surface. The new method differs from traditional approximation of explicit functions, which cannot describe closed curves, and from optimization-based implicit curve fitting, since it requires no assumptions about the form of the function or the degree of the polynomial. Experiments show that the method has strong capabilities for describing object boundaries and repairing defects, suggesting applications in areas such as object boundary reconstruction and restoration of damaged images.

20.
Image steganography is the technique of hiding secret information within images. It is an important research direction in the security field. Benefiting from the rapid development of deep neural networks, many steganographic algorithms based on deep learning have been proposed. However, two problems remain: most existing methods are limited to small image sizes and low information capacity. In this paper, to address these problems, we propose a high capacity image steganographic model named HidingGAN. The proposed model utilizes a new secret information preprocessing method and an Inception‐ResNet block to promote better integration of secret information and image features. Meanwhile, we introduce generative adversarial networks and perceptual loss to maintain the same statistical characteristics of cover images and stego images in the high‐dimensional feature space, thereby improving undetectability. Through these means, our model reaches higher imperceptibility, security, and capacity. Experiment results show that our HidingGAN achieves a capacity of 4 bits‐per‐pixel (bpp) at 256 × 256 pixels, improving over the previous best result of 0.4 bpp at 32 × 32 pixels.
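The capacity figures above translate directly into payload size: bits‐per‐pixel times pixel count. A quick worked check of the reported numbers (the helper name is ours):

```python
def payload_bytes(width, height, bpp):
    """Steganographic payload in bytes: bpp * pixel count / 8."""
    return width * height * bpp // 8

# 4 bpp at 256 x 256 pixels, as reported for HidingGAN:
# 256 * 256 * 4 = 262144 bits = 32768 bytes (32 KiB) per image.
print(payload_bytes(256, 256, 4))
```

By comparison, 0.4 bpp at 32 × 32 pixels is only about 51 bytes per image, which makes the scale of the improvement concrete.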
