Similar Literature
20 similar documents retrieved (search time: 62 ms)
1.
We consider the application of the Fictitious Domain approach combined with Least Squares Spectral Elements for the numerical solution of partial differential equations. Fictitious Domain methods allow problems formulated on a domain Ω with a complicated shape to be solved on a simpler domain Π containing Ω. The Least Squares Spectral Element Method has been used to develop the discrete model, as this scheme combines the generality of finite element methods with the accuracy of spectral methods. Moreover, least squares methods have theoretical and computational advantages in algorithmic design and implementation. This paper presents the formulation and validation of the Fictitious Domain/Least Squares Spectral Element approach. The convergence of the relative energy norm η is verified by computing smooth solutions to two-dimensional first- and second-order differential equations, demonstrating the predictive capability of the proposed formulation.
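As a rough illustration of the least-squares idea behind such formulations (a one-dimensional sketch, not the authors' two-dimensional spectral-element solver; the basis, collocation points and test problem are invented for the example), a differential equation can be discretized by minimizing the residual of the equation and of the boundary condition in a least-squares sense:

```python
import numpy as np

# Hedged 1-D sketch of a least-squares formulation: solve u'(x) = cos(x),
# u(0) = 0 on [0, 1], whose exact solution is u(x) = sin(x), by minimizing
# the residuals of the equation and the boundary condition over a polynomial basis.
def monomials(x, n):
    return np.vander(x, n, increasing=True)            # columns 1, x, x^2, ...

def monomial_derivatives(x, n):
    V = np.zeros((len(x), n))
    for k in range(1, n):
        V[:, k] = k * x ** (k - 1)
    return V

n_basis, n_points = 8, 40
x = np.linspace(0.0, 1.0, n_points)

# Stack the equation residual rows (u' - cos(x) at the collocation points)
# and one boundary-condition row (u(0) = 0), then solve in the least-squares sense.
A = np.vstack([monomial_derivatives(x, n_basis), monomials(np.array([0.0]), n_basis)])
b = np.concatenate([np.cos(x), [0.0]])
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

u_h = monomials(x, n_basis) @ coeffs
eta = np.linalg.norm(u_h - np.sin(x)) / np.linalg.norm(np.sin(x))
print(f"relative error: {eta:.2e}")                    # small for smooth solutions
```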

2.
3.
Consistent segmentation is central to many applications based on dynamic geometric data. Directly segmenting a raw 3D point cloud sequence is a challenging task due to the low data quality and large inter‐frame variation across the whole sequence. We propose a local‐to‐global approach to co‐segment point cloud sequences of articulated objects into near‐rigid moving parts. Our method starts from a per‐frame point clustering, derived from a robust voting‐based trajectory analysis. The local segments are then progressively propagated to the neighboring frames with a cut propagation operation, and further merged through all frames using a novel space‐time segment grouping technique, leading to a globally consistent and compact segmentation of the entire articulated point cloud sequence. Such progressive propagation and merging, in both the space and time dimensions, makes our co‐segmentation algorithm especially robust in handling noise, occlusions and pose/view variations that are usually associated with raw scan data.

4.
In this paper, we propose PCPNET, a deep‐learning based approach for estimating local 3D shape properties in point clouds. In contrast to the majority of prior techniques that concentrate on global or mid‐level attributes, e.g., for shape classification or semantic labeling, we suggest a patch‐based learning method, in which a series of local patches at multiple scales around each point is encoded in a structured manner. Our approach is especially well‐adapted for estimating local shape properties such as normals (both unoriented and oriented) and curvature from raw point clouds in the presence of strong noise and multi‐scale features. Our main contributions include both a novel multi‐scale variant of the recently proposed PointNet architecture with emphasis on local shape information, and a series of novel applications in which we demonstrate how a model trained on data arising from well‐structured triangle meshes can, when applied to noisy point clouds, produce superior results compared to specialized state‐of‐the‐art techniques. Finally, we demonstrate the utility of our approach in the context of shape reconstruction, by showing how it can be used to extract normal orientation information from point clouds.
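For context only, the following is a minimal sketch of a classical PCA normal estimator, the kind of non-learned baseline that learned methods such as PCPNET are typically compared against (this is not the network itself; the neighborhood size and toy data are invented):

```python
import numpy as np

# Hedged baseline sketch: estimate an (unoriented) normal per point from the
# covariance of its k nearest neighbours.
def pca_normals(points, k=16):
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        eigval, eigvec = np.linalg.eigh(cov)
        normals[i] = eigvec[:, 0]          # eigenvector of the smallest eigenvalue
    return normals

# Toy test: noisy samples of the plane z = 0 should give normals near (0, 0, +/-1).
pts = np.random.rand(200, 3) * [1, 1, 0] + np.random.normal(0, 0.01, (200, 3))
print(np.abs(pca_normals(pts)[:3, 2]))    # third components close to 1
```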

5.
Late binding and subtyping create run‐time overhead for object‐oriented languages, especially in the context of both multiple inheritance and dynamic loading, for instance for JAVA interfaces. In a previous article, we proposed a novel approach based on perfect hashing and truly constant‐time hashtables for implementing subtype testing and method invocation in a dynamic loading setting. In that first study, we based our efficiency assessment on Driesen's abstract computational model for the time aspect, and on large‐scale benchmarks for the space aspect. The conclusions were that the technique was promising but required further research in order to assess its scalability. This article presents some new results on perfect class hashing that enhance its interest. We propose and test both new hashing functions and an inverse problem that amounts to selecting the best class identifiers in order to minimize the overall hashtable size. This optimizing approach is proven to be optimal for single‐inheritance hierarchies. Experiments within an extended testbed with random class loading, and under cautious assumptions about what a sensible class‐loading order should be, show that perfect class hashing scales up gracefully, especially on JAVA‐like multiple‐subtyping hierarchies. Furthermore, perfect class hashing is implemented in the PRM compiler testbed and compared here with the coloring technique, which amounts to maintaining the single‐inheritance implementation in multiple inheritance. The overall conclusion is that the approach is efficient from both time and space standpoints with the bit‐wise AND hashing function. In contrast, the poor time efficiency of the modulus hashing function on most processors is confirmed. Copyright © 2010 John Wiley & Sons, Ltd.  
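A toy sketch of the perfect-class-hashing idea (class identifiers, the table-sizing strategy and the API are invented for illustration and are not taken from the paper or the PRM compiler): each class stores a small collision-free table of its superclass identifiers, addressed with a bit-wise AND hashing function, so a subtype test is one AND, one load and one compare.

```python
# Hedged illustration of subtype testing via per-class perfect hashing.
class HashedClass:
    def __init__(self, name, class_id, supers):
        self.name = name
        self.class_id = class_id
        ids = [c.class_id for c in supers] + [class_id]
        # Find the smallest power-of-two table in which all superclass ids
        # hash without collision ("perfect" hashing for this class).
        size = 1
        while True:
            mask = size - 1
            if len({i & mask for i in ids}) == len(ids):
                break
            size *= 2
        self.mask = mask
        self.table = [None] * size
        for i in ids:
            self.table[i & mask] = i

    def is_subtype_of(self, other):
        # Constant-time test: one AND, one load, one compare.
        return self.table[other.class_id & self.mask] == other.class_id

A = HashedClass("A", 1, [])
B = HashedClass("B", 2, [A])
C = HashedClass("C", 5, [A, B])
print(C.is_subtype_of(A), A.is_subtype_of(C))   # True False
```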

6.
Turbulent flows are multi‐scale with vortices spanning a wide range of scales continuously. Due to such complexities, turbulence scales are particularly difficult to analyse and visualize. In this work, we present a novel and efficient optimization‐based method for continuous‐scale turbulence structure visualization with scale decomposition directly in the Kolmogorov energy spectrum. To achieve this, we first derive a new analytical objective function based on integration approximation. Using this new formulation, we can significantly improve the efficiency of the underlying optimization process and obtain the desired filter in the Kolmogorov energy spectrum for scale decomposition. More importantly, such a decomposition allows a ‘continuous‐scale visualization’ that enables us to efficiently explore the decomposed turbulence scales and further analyse the turbulence structures in a continuous manner. With our approach, we can present scale visualizations of direct numerical simulation data sets continuously over the scale domain for both isotropic and boundary layer turbulent flows. Compared with previous works on multi‐scale turbulence analysis and visualization, our method is highly flexible and efficient in generating scale decomposition and visualization results. The application of the proposed technique to both isotropic and boundary layer turbulence data sets verifies the capability of our technique to produce desirable scale visualization results.
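As background only (a standard spectrum estimate, not the paper's optimization or filter), the energy spectrum E(k) in which the scale decomposition is performed can be computed from a velocity field by binning FFT energy over wavenumber shells; the toy field below is invented:

```python
import numpy as np

# Hedged sketch: radially binned energy spectrum E(k) of a 2-D velocity field.
def energy_spectrum(u, v):
    n = u.shape[0]
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    energy = 0.5 * (np.abs(uh) ** 2 + np.abs(vh) ** 2) / n ** 4
    kx = np.fft.fftfreq(n, d=1.0 / n)
    kmag = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
    kbins = np.arange(0.5, n // 2)
    E = np.array([energy[(kmag >= k - 0.5) & (kmag < k + 0.5)].sum() for k in kbins])
    return kbins, E

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
u = np.sin(4 * X) * np.cos(4 * Y) + 0.1 * np.random.randn(n, n)
v = -np.cos(4 * X) * np.sin(4 * Y) + 0.1 * np.random.randn(n, n)
k, E = energy_spectrum(u, v)
print(k[np.argmax(E)])        # dominant shell near |k| = 4*sqrt(2) ~ 5.7
```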

7.
The usual approach to designing subdivision schemes for curves and surfaces basically consists in combining proper rules for regular configurations with some specific heuristics to handle extraordinary vertices. In this paper, we introduce an alternative approach, called Least Squares Subdivision Surfaces (LS3), where the key idea is to iteratively project each vertex onto a local approximation of the current polygonal mesh. While the resulting procedure has the same complexity as simpler subdivision schemes, our method offers much higher visual quality, especially in the vicinity of extraordinary vertices. Moreover, we show it can be easily generalized to support boundaries and creases. The fitting procedure allows for local control of the surface from the normals, making LS3 very well suited for interactive freeform modeling applications. We demonstrate our approach on dyadic triangular and quadrangular refinement schemes, though it can be applied to any splitting strategy.

8.
This paper investigates the commonly overlooked “sensitivity” of sensitivity analysis (SA) to what we refer to as the parameter “perturbation scale”, which can be defined as a prescribed size of the sensitivity-related neighbourhood around any point in the parameter space (analogous to the step size Δx for numerical estimation of derivatives). We argue that perturbation scale is inherent to any (local or global) SA approach, and explain how derivative-based SA approaches (e.g., the method of Morris) focus on small-scale perturbations, while variance-based approaches (e.g., the method of Sobol) focus on large-scale perturbations. We employ a novel variogram-based approach, called Variogram Analysis of Response Surfaces (VARS), which bridges derivative- and variance-based approaches. Our analyses with different real-world environmental models demonstrate significant implications of subjectivity in the perturbation-scale choice and the need for strategies to address these implications. It is further shown how VARS can uniquely characterize the perturbation-scale dependency and generate sensitivity measures that encompass all sensitivity-related information across the full spectrum of perturbation scales.
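A toy sketch of the underlying idea (not the authors' VARS implementation; the model and sampling scheme are invented): the directional variogram of the model response, γ(h) = 0.5·E[(f(x+h) − f(x))²], is evaluated over several perturbation scales h, so that its small-h behaviour relates to derivative-like sensitivity and its large-h behaviour to variance-like sensitivity.

```python
import numpy as np

# Hedged illustration of scale-dependent sensitivity via a directional variogram.
def directional_variogram(f, x_samples, dim, scales):
    gammas = []
    for h in scales:
        step = np.zeros(x_samples.shape[1])
        step[dim] = h
        diffs = f(x_samples + step) - f(x_samples)
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(gammas)

# Toy model: x0 acts at small scales (high frequency), x1 at large scales.
model = lambda x: np.sin(20 * x[:, 0]) + 5 * x[:, 1]
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5000, 2))
scales = [0.01, 0.05, 0.1, 0.3]
print("x0:", directional_variogram(model, X, 0, scales))
print("x1:", directional_variogram(model, X, 1, scales))
```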

9.
We present a novel approach for optimizing real‐valued functions based on a wide range of topological criteria. In particular, we show how to modify a given function in order to remove topological noise and to exhibit prescribed topological features. Our method is based on using the previously‐proposed persistence diagrams associated with real‐valued functions, and on the analysis of the derivatives of these diagrams with respect to changes in the function values. This analysis allows us to use continuous optimization techniques to modify a given function, while optimizing an energy based purely on the values in the persistence diagrams. We also present a procedure for aligning persistence diagrams of functions on different domains, without requiring a mapping between them. Finally, we demonstrate the utility of these constructions in the context of the functional map framework, by first giving a characterization of functional maps that are associated with continuous point‐to‐point correspondences, directly in the functional domain, and then by presenting an optimization scheme that helps to promote the continuity of functional maps, when expressed in the reduced basis, without imposing any restrictions on metric distortion. We demonstrate that our approach is efficient and can lead to improvement in the accuracy of maps computed in practice.

10.
In this paper, we present a novel method for detecting partial symmetries in very large point clouds of 3D city scans. Unlike previous work, which has only been demonstrated on data sets of a few hundred megabytes maximum, our method scales to very large scenes: We map the detection problem to a nearest‐neighbour problem in a low‐dimensional feature space, and follow this with a cascade of tests for geometric clustering of potential matches. Our algorithm robustly handles noisy real‐world scanner data, obtaining a recognition performance comparable to that of state‐of‐the‐art methods. In practice, it scales linearly with scene size and achieves a high absolute throughput, processing half a terabyte of scanner data overnight on a dual socket commodity PC.

11.
This paper presents the application of a higher-order finite volume method based on Moving Least Squares approximations (FV-MLS) to the resolution of non-wall-bounded compressible turbulent flows. Our approach is based on Monotonically Integrated Large Eddy Simulation (MILES). The main idea of the MILES methodology is the absence of any explicit subgrid-scale (SGS) model in the numerical algorithm used to solve turbulent flows. In the case of the FV-MLS method, we take advantage of the multiresolution properties of Moving Least Squares approximations, and we show that they can be used as an implicit SGS model. The numerical results are encouraging. The third-order FV-MLS method is able to reproduce the inertial subrange, and it obtains better results than other numerical schemes commonly used in LES computations, such as the MUSCL scheme. We note that in the present state of this research, the numerical method is not yet suited for wall-bounded flows. This paper is the first step in the application of the FV-MLS method to general turbulent flows.
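To make the Moving Least Squares building block concrete, here is a one-dimensional MLS approximation sketch (the kernel, bandwidth and polynomial degree are arbitrary choices for the example, and this is far simpler than the finite-volume reconstruction used in FV-MLS):

```python
import numpy as np

# Hedged 1-D Moving Least Squares sketch: at each evaluation point, fit a
# locally weighted polynomial to scattered data and take its value there.
def mls_eval(x_eval, x_data, f_data, h=0.15, degree=2):
    result = np.zeros_like(x_eval)
    for i, x0 in enumerate(x_eval):
        w = np.exp(-((x_data - x0) / h) ** 2)              # Gaussian kernel weights
        V = np.vander(x_data - x0, degree + 1, increasing=True)
        W = np.diag(w)
        # Weighted least-squares fit of a local polynomial centred at x0.
        coeffs = np.linalg.solve(V.T @ W @ V, V.T @ W @ f_data)
        result[i] = coeffs[0]                              # value of the fit at x0
    return result

x_data = np.linspace(0, 1, 60)
f_data = np.sin(2 * np.pi * x_data) + 0.05 * np.random.randn(60)
x_eval = np.linspace(0, 1, 20)
print(np.max(np.abs(mls_eval(x_eval, x_data, f_data) - np.sin(2 * np.pi * x_eval))))
```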

12.
Multi-scale Feature Extraction on Point-Sampled Surfaces
We present a new technique for extracting line‐type features on point‐sampled geometry. Given an unstructured point cloud as input, our method first applies principal component analysis on local neighborhoods to classify points according to the likelihood that they belong to a feature. Using hysteresis thresholding, we then compute a minimum spanning graph as an initial approximation of the feature lines. To smooth out the features while maintaining a close connection to the underlying surface, we use an adaptation of active contour models. Central to our method is a multi‐scale classification operator that allows feature analysis at multiple scales, using the size of the local neighborhoods as a discrete scale parameter. This significantly improves the reliability of the detection phase and makes our method more robust in the presence of noise. To illustrate the usefulness of our method, we have implemented a non‐photorealistic point renderer to visualize point‐sampled surfaces as line drawings of their extracted feature curves.
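A hedged sketch of the classification stage described above (the neighborhood sizes, the max-over-scales combination and the crease toy example are invented choices, not the paper's exact operator): per-point PCA of local neighborhoods yields a surface-variation score that is high near sharp features, with the neighborhood size k acting as a discrete scale parameter.

```python
import numpy as np
from scipy.spatial import cKDTree

# Surface variation lambda_0 / (lambda_0 + lambda_1 + lambda_2) from local PCA.
def surface_variation(points, k):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    var = np.zeros(len(points))
    for i, nbrs in enumerate(idx):
        P = points[nbrs] - points[nbrs].mean(axis=0)
        eigvals = np.linalg.eigvalsh(P.T @ P / k)      # ascending eigenvalues
        var[i] = eigvals[0] / eigvals.sum()
    return var

def multiscale_feature_likelihood(points, scales=(10, 20, 40)):
    # One simple way to combine scales: take the maximum response.
    return np.max([surface_variation(points, k) for k in scales], axis=0)

# Toy example: points on two planes meeting at a crease along x = 0.
x = np.random.uniform(-1, 1, 2000)
y = np.random.uniform(-1, 1, 2000)
pts = np.column_stack([x, y, np.abs(x)])
score = multiscale_feature_likelihood(pts)
print(np.mean(score[np.abs(x) < 0.05]) > np.mean(score[np.abs(x) > 0.5]))  # True
```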

13.
Geodesic Polar Coordinates (GPCs) on a smooth surface S are local surface coordinates that relate a surface point to a planar parameter point by the length and direction of a corresponding geodesic curve on S. They are intrinsic to the surface and represent a natural local parameterization with useful properties. We present a simple and efficient algorithm to approximate GPCs on both triangle and general polygonal meshes. Our approach, named DGPC, is based on extending an existing algorithm for computing geodesic distance. We compare our approach with previous methods with respect to efficiency, accuracy and visual quality when used for local mesh texturing. As a further application we show how the resulting coordinates can be used for vector space methods like local remeshing at interactive frame‐rates even for large meshes.
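As a much cruder stand-in for the geodesic-distance computation that DGPC extends (DGPC also propagates the polar angle and works directly with the mesh geometry, both omitted here; the grid mesh below is invented), geodesic distances from a source vertex can be approximated by Dijkstra's algorithm over mesh edge lengths:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

# Hedged sketch: graph-based geodesic distance approximation on a mesh.
def approx_geodesic_distances(vertices, edges, source):
    i, j = edges[:, 0], edges[:, 1]
    w = np.linalg.norm(vertices[i] - vertices[j], axis=1)
    n = len(vertices)
    graph = csr_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])), shape=(n, n))
    return dijkstra(graph, indices=source)

# Toy mesh: a regular grid on the unit square (axis-aligned edges only).
n = 20
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
verts = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(n * n)])
edges = np.array([(k, k + 1) for k in range(n * n - 1) if (k + 1) % n != 0] +
                 [(k, k + n) for k in range(n * (n - 1))])
d = approx_geodesic_distances(verts, edges, source=0)
print(d[-1])   # graph distance to the opposite corner (>= true geodesic sqrt(2))
```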

14.
Multi‐view reconstruction aims at computing the geometry of a scene observed by a set of cameras. Accurate 3D reconstruction of dynamic scenes is a key component for a large variety of applications, ranging from special effects to telepresence and medical imaging. In this paper we propose a method based on Moving Least Squares surfaces which robustly and efficiently reconstructs dynamic scenes captured by a calibrated set of hybrid color+depth cameras. Our reconstruction provides spatio‐temporal consistency and seamlessly fuses color and geometric information. We illustrate our approach on a variety of real sequences and demonstrate that it favorably compares to state‐of‐the‐art methods.

15.
We use an “optimal” switching approach to design the adaptive control, because designing over multiple models is intuitively more practically feasible than traditional adaptive control for improving performance. We prove that for a typical class of nonlinear systems disturbed by random noise, the multiple model adaptive switching control based on WLS (Weighted Least Squares) or projected-LS (Least Squares) is stable and convergent.
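A toy illustration of switching among multiple models (a linear scalar plant with an invented candidate set, not the nonlinear systems or the WLS/projected-LS estimators analyzed in the paper): at each step the controller switches to the candidate model with the smallest accumulated squared prediction error and applies certainty-equivalence control.

```python
import numpy as np

# Hedged sketch of multiple-model switching control on y+ = a*y + u + w.
rng = np.random.default_rng(1)
a_true = 1.8                      # unstable open-loop plant
candidates = [0.5, 1.8]           # candidate model parameters
errors = np.zeros(len(candidates))

y = 5.0
for t in range(50):
    best = int(np.argmin(errors))
    u = -candidates[best] * y     # certainty-equivalence regulation toward 0
    y_next = a_true * y + u + 0.05 * rng.standard_normal()
    # Update each model's accumulated prediction error (the switching index).
    for k, a_k in enumerate(candidates):
        errors[k] += (y_next - (a_k * y + u)) ** 2
    y = y_next

print(abs(y) < 1.0, int(np.argmin(errors)) == 1)   # regulated, correct model chosen
```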

16.
The generalized Sylvester matrix equation AX + YB = C is encountered in many systems and control applications, and also arises in image restoration and in the numerical solution of implicit ordinary differential equations. In this paper, we construct a symmetry-preserving iterative method, based on the classic Conjugate Gradient Least Squares (CGLS) method, for AX + YB = C with the unknown matrices X, Y having symmetric structures. With this method, for any arbitrary initial symmetric matrix pair, a desired solution can be obtained within finitely many iteration steps. The unique optimal (least-norm) solution can also be obtained by choosing a special kind of initial matrix. We also consider the matrix nearness problem. Some numerical results confirm the efficiency of these algorithms. More importantly, a numerical stability analysis of the matrix nearness problem is given together with numerical examples, which was not provided in earlier papers. Copyright © 2010 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society  
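To make the equation itself concrete, here is a sketch that solves AX + YB = C by Kronecker-product vectorisation and a dense least-squares solve; this is not the paper's CGLS-type iteration and it ignores the symmetry structure their method preserves (all matrices are random toy data):

```python
import numpy as np

# Hedged sketch: vec(A X) = (I_p kron A) vec(X), vec(Y B) = (B^T kron I_m) vec(Y),
# so AX + YB = C becomes one (underdetermined) linear least-squares problem.
rng = np.random.default_rng(0)
m, n, q, p = 4, 3, 5, 4
A, B = rng.standard_normal((m, n)), rng.standard_normal((q, p))
X_true, Y_true = rng.standard_normal((n, p)), rng.standard_normal((m, q))
C = A @ X_true + Y_true @ B

M = np.hstack([np.kron(np.eye(p), A), np.kron(B.T, np.eye(m))])
sol, *_ = np.linalg.lstsq(M, C.flatten(order="F"), rcond=None)
X = sol[: n * p].reshape((n, p), order="F")
Y = sol[n * p :].reshape((m, q), order="F")
print(np.linalg.norm(A @ X + Y @ B - C))    # ~ 0 (the solution is not unique)
```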

17.
A New Interpretation of the Levenberg-Marquardt Algorithm
The Levenberg-Marquardt (LM) algorithm is closely related to the Least Squares (LS) method. Scaled Total Least Squares (STLS) unifies and generalizes Least Squares, Data Least Squares (DLS) and Total Least Squares (TLS), but its relationship to the LM algorithm has remained unclear. This paper gives an algorithm for computing the STLS solution together with its subspace and topological interpretations, and uses matrix factorization to reveal the close relationship between the LM algorithm and STLS. The results show that the damping factor turns the LS solution into an STLS solution; that the elimination of the noise subspace and the control of the condition number of the coefficient matrix guarantee the robustness and convergence speed of the LM algorithm; and that the robustness of STLS accounts for the LM algorithm's ability to handle over-parameterized problems.
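A minimal numerical illustration of the damping factor's role (this is standard Levenberg-Marquardt machinery, not the paper's STLS derivation; the ill-conditioned Jacobian is an invented example): adding the damping term to the least-squares normal equations controls the condition number of the system and the size of the step.

```python
import numpy as np

# LS normal equations: J^T J d = J^T r.  LM damping: (J^T J + lambda I) d = J^T r.
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 4)) @ np.diag([1.0, 1.0, 1.0, 1e-6])  # ill-conditioned
r = rng.standard_normal(20)

for lam in (0.0, 1e-3, 1e-1):
    H = J.T @ J + lam * np.eye(4)
    step = np.linalg.solve(H, J.T @ r)
    print(f"lambda={lam:<6} cond={np.linalg.cond(H):.2e}  |step|={np.linalg.norm(step):.2e}")
```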

18.
Point cloud data is one of the most common types of input for geometric processing applications. In this paper, we study the point cloud density adaptation problem that underlies many pre‐processing tasks on point data. Specifically, given a (sparse) set of points Q sampling an unknown surface and a target density function, the goal is to adapt Q to match the target distribution. We propose a simple and robust framework that is effective at achieving both local uniformity and precise global density distribution control. Our approach relies on the Gaussian‐weighted graph Laplacian and works purely in the point setting. While it is well known that the graph Laplacian is related to mean‐curvature flow and thus has denoising ability, our algorithm uses certain information encoded in the graph Laplacian that is orthogonal to the mean‐curvature flow. Furthermore, by leveraging the natural scale parameter contained in the Gaussian kernel and combining it with a simulated annealing idea, our algorithm moves points in a multi‐scale manner. The resulting algorithm relies much less than many previous refinement‐based methods on the input points having a good initial distribution (the input need be neither uniform nor close to the target density). We demonstrate the simplicity and effectiveness of our algorithm with point clouds sampled from different underlying surfaces with various geometric and topological properties.
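A sketch of the basic operator used above (the Gaussian bandwidth, the dense construction and the single smoothing step are invented illustration choices; the paper's density-adaptation and annealing logic is not shown):

```python
import numpy as np
from scipy.spatial.distance import cdist

# Hedged sketch: Gaussian-weighted graph Laplacian L = D - W on a point set.
def gaussian_graph_laplacian(points, sigma):
    W = np.exp(-cdist(points, points) ** 2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    return D - W

pts = np.random.rand(300, 3)
L = gaussian_graph_laplacian(pts, sigma=0.1)
# Applying L to the coordinates gives a mean-curvature-flow-like displacement;
# one explicit smoothing step (degree-normalized) looks like:
pts_smoothed = pts - 0.5 * (L @ pts) / np.maximum(L.diagonal()[:, None], 1e-12)
print(L.shape, np.allclose(L.sum(axis=1), 0.0))   # rows of a Laplacian sum to 0
```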

19.
We propose a novel pose-invariant face recognition approach which we call the Discriminant Multiple Coupled Latent Subspace framework. It finds the sets of projection directions for different poses such that the projected images of the same subject in different poses are maximally correlated in the latent space. Discriminant analysis with artificially simulated pose errors in the latent space makes it robust to small pose errors caused by incorrect estimation of a subject's pose. We carry out a comparative analysis of three popular latent space learning approaches in the proposed coupled latent subspace framework: Partial Least Squares (PLS), Bilinear Model (BLM) and Canonical Correlation Analysis (CCA). We experimentally demonstrate that using more than two poses simultaneously with CCA results in better performance. We report state-of-the-art results for pose-invariant face recognition on CMU PIE and FERET, and comparable results on MultiPIE, while using only four fiducial points for alignment and intensity features.
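A two-view toy sketch of the coupled-latent-subspace idea (the paper couples more than two poses and adds discriminant analysis; the synthetic data and dimensions below are invented): CCA finds projections under which the two views of the same subjects are maximally correlated.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hedged sketch: two "poses" generated from a shared latent identity code.
rng = np.random.default_rng(0)
n_subjects, latent_dim, feat_dim = 200, 5, 40
Z = rng.standard_normal((n_subjects, latent_dim))            # subject identity
A1 = rng.standard_normal((latent_dim, feat_dim))
A2 = rng.standard_normal((latent_dim, feat_dim))
pose1 = Z @ A1 + 0.1 * rng.standard_normal((n_subjects, feat_dim))
pose2 = Z @ A2 + 0.1 * rng.standard_normal((n_subjects, feat_dim))

cca = CCA(n_components=latent_dim)
u, v = cca.fit_transform(pose1, pose2)
# The coupled projections of the same subject should be highly correlated.
corr = [np.corrcoef(u[:, k], v[:, k])[0, 1] for k in range(latent_dim)]
print(np.round(corr, 3))
```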

20.
This paper presents a novel approach to compute high quality and noise‐free soft shadows using exact visibility computations. This work relies on a theoretical framework for grouping lines according to the geometry they intersect. From this study, we derive a new algorithm that lazily encodes the visibility from a polygon. Contrary to previous works on from‐polygon visibility, our approach is very robust and straightforward to implement. We apply this algorithm to solve exactly and efficiently the visibility of an area light source from any point in a scene. As a consequence, results are not sensitive to noise, contrary to soft shadow methods based on area light source sampling. We demonstrate the reliability of our approach on different scenes and configurations.
