Similar Documents
20 similar documents found.
2.
In this paper, a robust algorithm is proposed for reconstructing a 2D curve from unorganized point data with a high level of noise and outliers. By constructing the quadtree of the input point data, we extract the "grid-like" boundaries of the quadtree and smooth them using a modified Laplacian method. The skeleton of the smoothed boundaries is computed, and the initial curve is then generated by circular neighboring projection. Subsequently, a normal-based processing method is applied to the initial curve to smooth jagged features in low-curvature areas and recover sharp features in high-curvature areas. As a result, the curve is reconstructed accurately, with small details and sharp features well preserved. A variety of experimental results demonstrate the effectiveness and robustness of our method.
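To make the smoothing step concrete, below is a minimal Python sketch of Laplacian smoothing of a closed polyline. The abstract does not specify the "modified" Laplacian variant, so a plain umbrella-operator update with a damping factor `lam` is assumed here for illustration.

```python
# Minimal sketch: Laplacian smoothing of a closed 2D polyline.
# The paper's exact "modified Laplacian" is not specified in the abstract;
# a plain damped umbrella-operator update is assumed.
import numpy as np

def laplacian_smooth(points: np.ndarray, lam: float = 0.5, iters: int = 10) -> np.ndarray:
    """Smooth a closed polyline given as an (n, 2) array of vertices."""
    pts = points.astype(float).copy()
    for _ in range(iters):
        prev_v = np.roll(pts, 1, axis=0)   # previous neighbor on the loop
        next_v = np.roll(pts, -1, axis=0)  # next neighbor on the loop
        # Move each vertex toward the midpoint of its two neighbors.
        pts += lam * (0.5 * (prev_v + next_v) - pts)
    return pts

# Example: smooth a noisy circle.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
noisy = np.c_[np.cos(t), np.sin(t)] + np.random.normal(0, 0.05, (200, 2))
smoothed = laplacian_smooth(noisy)
```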

3.
Automated three-dimensional surface reconstruction is a very large and still fast-growing area of applied computer vision, and a huge number of heuristic algorithms exist. Nevertheless, the number of algorithms that give formal guarantees about the correctness of the reconstructed surface is quite limited. Moreover, such theoretical approaches are proven correct only for objects with smooth surfaces and extremely dense samplings with little or no noise. We define an alternative surface reconstruction method and prove that it preserves the topological structure of multi-region objects under much weaker constraints, and thus under much more realistic conditions. We derive the necessary error bounds for some digitization methods often used in discrete geometry, i.e., supercover and m-cell intersection sampling. We also give a detailed analysis of the behavior of our algorithm and compare it with other approaches.

4.
Extracting skeletal curves from 3D scattered data (cited 8 times: 0 self-citations, 8 by others)

5.
Accurate home location is increasingly important for urban computing. Existing methods either rely on continuous (and expensive) Global Positioning System (GPS) data or suffer from poor accuracy. In particular, the sparse and noisy nature of social media data poses serious challenges in pinpointing where people live at scale. We revisit this research topic and infer home location within 100 m × 100 m squares at 70% accuracy for 76% and 71% of active users in New York City and the Bay Area, respectively. To the best of our knowledge, this is the first time home location has been detected at such a fine granularity using sparse and noisy data. Since people spend a large portion of their time at home, our model enables novel applications. As an example, we focus on modeling people's health at scale by linking their home locations with publicly available statistics, such as education disparity. Results in multiple geographic regions demonstrate both the effectiveness and added value of our home localization method and reveal insights that eluded earlier studies. In addition, we are able to discover the real buzz in the communities where people live.
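As a rough illustration of the grid idea, the sketch below bins a user's geotagged posts into 100 m cells and returns the most frequent nighttime cell. The nighttime window, the equirectangular meter-to-degree conversion, and the `posts` format are all assumptions; the paper's actual model is not described in the abstract.

```python
# Hedged illustration only: modal nighttime 100 m x 100 m cell as a home
# estimate. Not the paper's model; window and conversion are assumptions.
import math
from collections import Counter

def home_cell(posts, cell_m=100.0):
    """posts: iterable of (lat, lon, hour) tuples; returns the modal nighttime cell index."""
    deg_lat = cell_m / 111_320.0              # meters per degree of latitude (approx.)
    counts = Counter()
    for lat, lon, hour in posts:
        if hour >= 20 or hour < 8:            # crude "nighttime at home" window (assumed)
            deg_lon = cell_m / (111_320.0 * math.cos(math.radians(lat)))
            counts[(int(lat / deg_lat), int(lon / deg_lon))] += 1
    return counts.most_common(1)[0][0] if counts else None

# Example: three late-night posts from roughly the same block.
posts = [(40.7484, -73.9857, 23), (40.7485, -73.9856, 2), (40.7484, -73.9858, 1)]
print(home_cell(posts))
```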

6.
Nguyen Ha, Pham Hoang, Nguyen Son, Ngo Van Linh, Than Khoat. Machine Learning, 2022, 111(8): 3025-3060
The ability to analyze data streams, which arrive sequentially and possibly infinitely, is increasingly vital in various online applications. However, data streams pose various...

7.
Viewpoint invariant recovery of visual surfaces from sparse data (cited 1 time: 0 self-citations, 1 by others)
An algorithm for the reconstruction of visual surfaces from sparse data is proposed. An important aspect of this algorithm is that the surface estimated from the sparse data is approximately invariant with respect to rigid transformation of the surface in 3D space. The algorithm is based on casting the problem as an ill-posed inverse problem that must be stabilized using a priori information related to the image and constraint formation. To form a surface estimate that is approximately invariant with respect to viewpoint, the stabilizing information is based on invariant surface characteristics. With appropriate approximations, this results in a convex functional to minimize, which is then solved using finite element analysis. The relationship of this algorithm to several previously proposed reconstruction algorithms is discussed, and several examples that demonstrate its effectiveness in reconstructing viewpoint-invariant surface estimates are given.

8.
In this paper, we present a method for the numerical differentiation of bivariate functions when a set of noisy data is given. We suppose we have a sample coming from an independent process with unknown covariance matrix. We construct the gradient estimator using a multiresolution analysis and the usual difference operators. The asymptotic properties of the estimator are studied and convergence results are provided. The method is suitable for any data configuration.
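A minimal sketch of the overall idea, with Gaussian pre-smoothing substituted (as an assumption) for the paper's multiresolution analysis before the usual central-difference operators are applied:

```python
# Gradient estimation from noisy gridded samples: denoise, then difference.
# Gaussian pre-smoothing is assumed in place of the paper's multiresolution step.
import numpy as np
from scipy.ndimage import gaussian_filter

def noisy_gradient(z: np.ndarray, h: float, sigma: float = 2.0):
    """z: noisy samples f(x_i, y_j) on a uniform grid with spacing h (rows = y-axis)."""
    z_smooth = gaussian_filter(z, sigma=sigma)   # suppress noise before differencing
    gy, gx = np.gradient(z_smooth, h)            # central differences inside, one-sided at edges
    return gx, gy
```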

9.
Computing an offset curve to a 3D curve involves the basic problem of normal orientation in space. The same problem is encountered when interpolating surfaces with 3D sweep or skinning methods. Different methods for normal orientation are discussed in this article. A new method is presented for normal orientation on general parametric curves.
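The core difficulty, choosing a consistently oriented normal along a 3D curve, is often handled with a parallel-transport (rotation-minimizing) frame, which avoids the sign flips of the Frenet normal at inflection points. The sketch below shows that standard construction; it is not the article's new method, which the abstract does not detail.

```python
# Standard parallel-transport normals along a sampled 3D curve (not the
# article's method): project the previous normal into each new normal plane.
import numpy as np

def transported_normals(pts: np.ndarray) -> np.ndarray:
    """pts: (n, 3) polyline samples of the curve; returns one unit normal per point."""
    tangents = np.gradient(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.empty_like(pts)
    seed = np.array([0.0, 0.0, 1.0])           # any vector not parallel to the first tangent
    if abs(seed @ tangents[0]) > 0.9:
        seed = np.array([1.0, 0.0, 0.0])
    n = seed - (seed @ tangents[0]) * tangents[0]
    normals[0] = n / np.linalg.norm(n)
    for i in range(1, len(pts)):
        # Parallel transport: remove the tangential component of the previous normal.
        n = normals[i - 1] - (normals[i - 1] @ tangents[i]) * tangents[i]
        normals[i] = n / np.linalg.norm(n)
    return normals

# An offset curve at distance d then follows as pts + d * transported_normals(pts).
```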

10.
Stable fitting of 2D curves and 3D surfaces by implicit polynomials (cited 1 time: 0 self-citations, 1 by others)
This work deals with fitting 2D and 3D implicit polynomials (IPs) to 2D curves and 3D surfaces, respectively. The zero-set of the polynomial is determined by the IP coefficients and describes the data. The polynomial fitting algorithms proposed in this paper aim at reducing the sensitivity of the polynomial to coefficient errors. Errors in coefficient values may be the result of numerical calculations when solving the fitting problem, or due to coefficient quantization. It is demonstrated that reducing this sensitivity also improves the fitting tightness and stability of the two proposed algorithms in fitting noisy data, as compared to existing algorithms such as the well-known 3L and gradient-one algorithms. The development of the proposed algorithms is based on an analysis of the sensitivity of the zero-set to small coefficient changes, and on minimizing a bound on the maximal error for one algorithm and minimizing the error variance for the second. Simulation results show that the proposed algorithms provide a significant reduction in fitting errors, particularly when fitting noisy data of complex shapes with high-order polynomials, as compared to the performance obtained by the aforementioned existing algorithms.

11.
3D mapping is very challenging in the underwater domain, especially due to the lack of high-resolution, low-noise sensors. A new spectral registration method is presented that can determine the spatial 6-DOF transformation between pairs of very noisy 3D scans with only partial overlap. The approach is hence suited to cope with sonar as the predominant underwater sensor. The spectral registration method is based on Phase Only Matched Filtering (POMF) on non-trivially resampled spectra of the 3D data.
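The POMF core reduces, in the simplest translation-only 2D case, to classical phase correlation: whiten the cross-power spectrum to keep phase only, then locate the correlation peak. The sketch below shows only that mechanism, not the paper's resampling scheme or the full 6-DOF estimation.

```python
# Phase-only matched filtering reduced to 2D translation recovery.
import numpy as np

def pomf_shift(a: np.ndarray, b: np.ndarray):
    """Estimate the integer circular shift that maps image b onto image a."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                  # keep phase only, drop magnitude
    corr = np.fft.ifft2(F).real             # sharp peak at the true shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above the midpoint to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Example: b is a circularly shifted copy of a.
rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = np.roll(a, (5, -3), axis=(0, 1))
print(pomf_shift(a, b))                     # ~ (5, -3)
```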

13.
A linear least squares method is described for fitting noisy unimodal functions, such as indicator-dilution curves, with piecewise stretched exponential functions. Stretched exponential functions have the form z(t) = α·t^β·e^(γt), where α, β, and γ are constants. These functions are particularly useful for fitting experimental data that spans several orders of magnitude and is non-Gaussian, highly skewed, and long-tailed. In addition, the method allows for specifying external restrictions on the smooth curve that might be required by physical constraints on the data. These constraints can take the form of restrictions on the value of the fitting function at certain points or on the value of its derivatives in certain regions. To determine the necessary constants in the fitting functions, a linear least squares problem with linear equality and inequality constraints is solved.
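The reason a linear method suffices is worth spelling out: taking logarithms of z(t) = α·t^β·e^(γt) gives ln z = ln α + β·ln t + γ·t, which is linear in the unknowns (ln α, β, γ). The sketch below solves this unconstrained core by ordinary least squares; the piecewise construction and the equality/inequality constraints are omitted, and strictly positive data is assumed.

```python
# Unconstrained core of the fit: log-linearize and solve by least squares.
import numpy as np

def fit_stretched_exponential(t: np.ndarray, z: np.ndarray):
    """Return (alpha, beta, gamma) fitted to strictly positive samples z(t), t > 0."""
    A = np.column_stack([np.ones_like(t), np.log(t), t])
    coef, *_ = np.linalg.lstsq(A, np.log(z), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]

# Example: recover known constants from multiplicatively noisy samples.
t = np.linspace(0.1, 10, 200)
z = 2.0 * t**1.5 * np.exp(-0.8 * t) * np.exp(np.random.normal(0, 0.02, t.size))
alpha, beta, gamma = fit_stretched_exponential(t, z)   # ~ (2.0, 1.5, -0.8)
```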

15.
We propose a novel semi-parametric modeling strategy for classifying noisy curves. This strategy uses a family of non-linear parametric models to describe known aspects of the signal and its propagation, with a non-parametric component incorporating unmodeled characteristics. We propose a novel multi-record model building strategy and assess its scope in classifying acoustic and radar signals. Our experiments suggest that the semi-parametric approach generally outperforms the parametric approach, and in certain circumstances gives better performance than the non-parametric approach. In all cases, it is close to the best approach considered, with the added advantage of interpretable coefficients in the parametric component.

16.
Range images often suffer from issues such as low resolution (LR) (for low-cost scanners) and the presence of missing regions due to poor reflectivity and occlusions. Another common problem (with high-quality scanners) is that of long acquisition times. In this work, we propose two approaches to counter these shortcomings. Our first proposal, which addresses the issues of low resolution as well as missing regions, is an integrated super-resolution (SR) and inpainting approach. We use multiple relatively-shifted LR range images, where the motion between the LR images serves as a cue for super-resolution. Our imaging model also accounts for missing regions to enable inpainting. Our framework models the high-resolution (HR) range as a Markov random field (MRF), and uses inhomogeneous MRF priors to constrain the solution differently for inpainting and super-resolution. Our super-resolved and inpainted outputs show significant improvements over their LR/interpolated counterparts. Our second proposal addresses the issue of long acquisition times by facilitating reconstruction of range data from very sparse measurements. Our technique exploits a cue from segmentation of an optical image of the same scene, which constrains pixels in the same color segment to have similar range values. Our approach is able to reconstruct range images with as little as 10% of the data. We also study the performance of both proposed approaches in a noisy scenario as well as in the presence of alignment errors.
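The segmentation cue of the second proposal can be caricatured in a few lines: pixels sharing a color segment are assumed to share similar range, so each segment is filled from its sparse samples. The per-segment median below is a deliberately crude stand-in for the paper's reconstruction, shown only to make the cue concrete.

```python
# Crude stand-in for the segmentation-cue reconstruction: constant
# (median) range per color segment. Not the paper's actual method.
import numpy as np

def fill_from_segments(sparse_range, mask, segments):
    """sparse_range: measured values; mask: True where measured; segments: label image."""
    out = np.zeros_like(sparse_range, dtype=float)
    for label in np.unique(segments):
        in_seg = segments == label
        samples = sparse_range[in_seg & mask]
        if samples.size:
            out[in_seg] = np.median(samples)   # one range value per segment
    return out
```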

17.
A simple and efficient method is presented in this paper to reliably reconstruct 2D polygonal curves and 3D triangular surfaces from discrete points, based on the respective clustering of Delaunay circles and spheres. A Delaunay circle is the circumcircle of a Delaunay triangle in 2D space, and a Delaunay sphere is the circumsphere of a Delaunay tetrahedron in 3D space. The basic concept of the presented method is that, given satisfactory point density, all the incident Delaunay circles/spheres of a point cluster into two groups along the original curve/surface. The required point density is considered equivalent to that of meeting the well-documented r-sampling condition. With the clustering of Delaunay circles/spheres at each point, an initial partial mesh can be generated. An extrapolation heuristic is then applied to reconstruct the remainder of the mesh, often around sharp corners. This leads to a unique benefit of the presented method: the point density around sharp corners does not have to be infinite. Implementation results have shown that the presented method can correctly reconstruct 2D curves and 3D surfaces for known point cloud data sets employed in the literature.
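The sketch below constructs the basic objects the method clusters: the Delaunay triangulation of a 2D point set and the circumcircle (center and radius) of each triangle, via scipy. The two-group clustering of each point's incident circles and the corner extrapolation heuristic are not reproduced.

```python
# Delaunay circles of a 2D point set: circumcenter and radius per triangle.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_circles(points: np.ndarray):
    """points: (n, 2); yields (center, radius) for each Delaunay triangle."""
    tri = Delaunay(points)
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        # Circumcenter p solves 2(b-a).p = |b|^2-|a|^2 and 2(c-a).p = |c|^2-|a|^2.
        A = 2.0 * np.array([b - a, c - a])
        rhs = np.array([b @ b - a @ a, c @ c - a @ a])
        center = np.linalg.solve(A, rhs)
        yield center, np.linalg.norm(center - a)
```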

18.
The proliferation of the Internet has not only led to the generation of huge volumes of unstructured information in the form of web documents; a large amount of text is also generated in the form of emails, blogs, feedbacks, etc. The data generated from online communication is a potential gold mine for discovering knowledge, particularly for market researchers. Text analytics has matured and is being successfully employed to mine important information from unstructured text documents. The chief bottleneck in designing text mining systems for handling blogs arises from the fact that online communication text data are often noisy. These texts are informally written; they suffer from spelling mistakes, grammatical errors, improper punctuation, and irrational capitalization. This paper focuses on opinion extraction from noisy text data. It is aimed at extracting and consolidating opinions of customers from blogs and feedbacks, at multiple levels of granularity. We have proposed a framework in which these texts are first cleaned using domain knowledge and then subjected to mining. Ours is a semi-automated approach, in which the system aids in the process of knowledge assimilation for knowledge-base building and also performs the analytics. Domain experts ratify the knowledge base and also provide training samples for the system to automatically gather more instances for ratification. The system identifies opinion expressions as phrases containing opinion words, opinionated features, and opinion modifiers. These expressions are categorized as positive or negative, with membership values varying from zero to one. Opinion expressions are identified and categorized using localized linguistic techniques. Opinions can be aggregated at any desired level of specificity, i.e., feature level or product level, user level or site level, etc. We have developed a system based on this approach, which provides the user with a platform to analyze opinion expressions crawled from a set of pre-defined blogs.
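A deliberately small sketch of the scoring idea: opinion words carry a base polarity in [-1, 1] and nearby modifiers scale or flip it. The tiny lexicons below are invented placeholders, not the paper's ratified knowledge base, and no cleaning of noisy text is attempted.

```python
# Toy lexicon-plus-modifier polarity scoring; lexicons are placeholders.
OPINION = {"good": 0.7, "great": 0.9, "bad": -0.7, "terrible": -0.9}
MODIFIER = {"very": 1.3, "slightly": 0.6, "not": -1.0}

def score_phrase(tokens):
    """Return an aggregate polarity in [-1, 1] for a tokenized phrase."""
    score, weight = 0.0, 1.0
    for tok in tokens:
        tok = tok.lower()
        if tok in MODIFIER:
            weight *= MODIFIER[tok]          # stack modifiers before the opinion word
        elif tok in OPINION:
            score += max(-1.0, min(1.0, weight * OPINION[tok]))
            weight = 1.0
    return max(-1.0, min(1.0, score))

print(score_phrase("the battery life is not very good".split()))  # negative
```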

19.
The construction of freeform models has always been a challenging task. A popular approach is to edit a primitive object such that its projections conform to a set of given planar curves. This process is tedious and relies very much on the skill and experience of the designer in editing 3D shapes. This paper describes an intuitive approach for the modeling of freeform objects based on planar profile curves. A freeform surface defined by a set of orthogonal planar curves is created by blending a corresponding set of sweep surfaces. Each of the sweep surfaces is obtained by sweeping a planar curve about a computed axis. A Catmull-Clark subdivision surface interpolating a set of data points on the object surface is then constructed. Since the curve points lying on the computed axis of the sweep will become extraordinary vertices of the subdivision surface, a mesh refinement process is applied to adjust the mesh topology of the surface around the axis points. In order to maintain characteristic features of the surface defined with the planar curves, sharp features on the surface are located and are retained in the mesh refinement process. This provides an intuitive approach for constructing freeform objects with regular mesh topology using planar profile curves.
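A minimal sketch of the sweep step: revolving a planar profile about an axis yields one of the sweep surfaces that are later blended. The vase profile and the fixed z-axis are illustrative assumptions; the paper computes the sweep axis from the input curves and interpolates with a Catmull-Clark subdivision surface, neither of which is shown.

```python
# Sweep (revolution) of a planar profile about the z-axis; axis choice
# and profile are illustrative assumptions, not the paper's pipeline.
import numpy as np

def revolve(profile_rz: np.ndarray, n_steps: int = 64) -> np.ndarray:
    """profile_rz: (n, 2) of (radius, z) samples; returns an (n_steps, n, 3) vertex grid."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_steps, endpoint=False)
    r, z = profile_rz[:, 0], profile_rz[:, 1]
    return np.stack([
        np.stack([r * np.cos(a), r * np.sin(a), z], axis=1) for a in angles
    ])

# Example: a vase-like profile swept into a quad-mesh vertex grid.
z = np.linspace(0, 1, 20)
profile = np.c_[0.3 + 0.1 * np.sin(3 * z), z]
grid = revolve(profile)
```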

