Similar Documents
20 similar documents retrieved (search time: 940 ms)
1.
Soft maps taking points on one surface to probability distributions on another are attractive for representing surface mappings in the presence of symmetry, ambiguity, and combinatorial complexity. Few techniques, however, are available to measure their continuity and other properties. To this end, we introduce a novel Dirichlet energy for soft maps generalizing the classical map Dirichlet energy, which measures distortion by computing how soft maps transport probabilistic mass from one distribution to another. We formulate the computation of the Dirichlet energy in terms of a differential equation and provide a finite element discretization that enables all of the quantities introduced to be computed. We demonstrate the effectiveness of our framework for understanding soft maps arising from various sources. Furthermore, we suggest how these energies can be applied to generate continuous soft or point-to-point maps.

2.
Hsu  Chun-Nan  Huang  Hung-Ju  Wong  Tzu-Tsung 《Machine Learning》2003,53(3):235-263
In a naive Bayesian classifier, discrete variables as well as discretized continuous variables are assumed to have Dirichlet priors. This paper describes the implications and applications of this model selection choice. We start by reviewing key properties of Dirichlet distributions. Among these properties, the most important one is perfect aggregation, which allows us to explain why discretization works for a naive Bayesian classifier. Since perfect aggregation holds for Dirichlet distributions, we can explain why, in general, discretization can outperform parameter estimation under a normal distribution assumption. In addition, we can explain why a wide variety of well-known discretization methods, such as entropy-based, ten-bin, and bin-log l, can perform well with insignificant differences. We designed experiments to verify our explanation using synthesized and real data sets and showed that, in addition to well-known methods, a wide variety of discretization methods all perform similarly. Our analysis leads to a lazy discretization method, which discretizes continuous variables according to test data. The Dirichlet assumption implies that lazy methods can perform as well as eager discretization methods. We empirically confirmed this implication and extended the lazy method to classify set-valued and multi-interval data with a naive Bayesian classifier.
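The lazy discretization idea can be sketched in a few lines. The following is an illustrative reconstruction, not the paper's exact procedure; the bin width, the smoothing parameter `alpha`, and the function names are assumptions:

```python
import numpy as np

def lazy_nb_predict(X_train, y_train, x_test, width=2.0, alpha=1.0):
    """Naive Bayes with lazy (test-time) discretization: each continuous
    attribute is binned into a single interval centred on the test value,
    and class-conditional probabilities are estimated from the training
    points falling inside that interval, with Dirichlet-style smoothing."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        log_p = np.log(len(Xc) / len(X_train))          # class prior
        for j, v in enumerate(x_test):
            in_bin = np.abs(Xc[:, j] - v) <= width / 2  # test-centred bin
            log_p += np.log((in_bin.sum() + alpha) / (len(Xc) + 2 * alpha))
        scores.append(log_p)
    return classes[int(np.argmax(scores))]
```

Because the bin is formed only when a test instance arrives, no global cut points need to be fixed in advance, which is what makes the method "lazy".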

3.
We propose a linear finite-element discretization of Dirichlet problems for static Hamilton–Jacobi equations on unstructured triangulations. The discretization is based on simplified localized Dirichlet problems that are solved by a local variational principle. It generalizes several approaches known in the literature and allows for a simple and transparent convergence theory. In this paper the resulting system of nonlinear equations is solved by an adaptive Gauss–Seidel iteration that is easily implemented and quite effective, as a couple of numerical experiments show.

Dedicated to Peter Deuflhard on the occasion of his 60th birthday.
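The paper works on unstructured triangulations; as a simplified illustration of the Gauss–Seidel idea, here is the analogous sweeping iteration for the eikonal equation |∇u| = 1 on a regular grid (grid size, source location, and sweep count are arbitrary choices, not the paper's setup):

```python
import numpy as np

def eikonal_sweep(n=21, h=1.0, src=(10, 10), n_cycles=8):
    """Gauss-Seidel sweeps for |grad u| = 1 with u(src) = 0.
    Each cycle visits the grid in all four sweep orderings and applies
    the upwind local solve, keeping the smaller of old and new values."""
    BIG = 1e10
    u = np.full((n, n), BIG)
    u[src] = 0.0
    for _ in range(n_cycles):
        for di in (1, -1):
            for dj in (1, -1):
                for i in (range(n) if di > 0 else range(n - 1, -1, -1)):
                    for j in (range(n) if dj > 0 else range(n - 1, -1, -1)):
                        if (i, j) == src:
                            continue
                        a = min(u[i - 1, j] if i > 0 else BIG,
                                u[i + 1, j] if i < n - 1 else BIG)
                        b = min(u[i, j - 1] if j > 0 else BIG,
                                u[i, j + 1] if j < n - 1 else BIG)
                        if abs(a - b) >= h:       # one-sided (upwind) update
                            cand = min(a, b) + h
                        else:                     # two-sided (quadratic) update
                            cand = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                        u[i, j] = min(u[i, j], cand)
    return u
```

For a constant right-hand side a handful of sweep cycles suffices, since each of the four orderings resolves the characteristics pointing in one quadrant.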

4.
This paper presents a procedure for the discretization of 2D domains using a Delaunay triangulation. Improvements over existing similar methods are introduced, proposing in particular a multi-constraint insertion algorithm, very effective in the presence of highly irregular domains, and the topological structure used together with its primitives. The method obtained requires limited input and can be applied to a wide class of domains. Quadrilateral subdivisions with control over the aspect ratio of the generated elements can also be obtained. Further, it is suitable for evolutionary problems, which require continuous updating of the discretization. Presented applications and comparisons with other discretization methods demonstrate the effectiveness of the procedure.

5.
Vector order statistics operators as color edge detectors
Color edge detection is approached in this paper using vector order statistics. Based on the R-ordering method, a class of color edge detectors is defined. These detectors function as vector operators as opposed to component-wise operators. Specific edge detectors can be obtained as special cases of this class. Various such detectors are defined and analyzed. Experimental results show the noise robustness of the vector order statistics operators. A quantitative evaluation and comparison to other color edge detectors favors our approach. Edge detection results obtained from real color images demonstrate the effectiveness of the proposed approach in real applications.
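One member of this class, often called the vector range detector, can be sketched as follows; the 3×3 window and the aggregate-L2 R-ordering are illustrative assumptions rather than the paper's exact definitions:

```python
import numpy as np

def vector_range_edges(img):
    """Vector range detector: in each 3x3 window, R-order the colour
    vectors by their aggregate L2 distance to the other window vectors;
    the edge magnitude is the distance between the highest- and
    lowest-ranked vectors."""
    H, W, _ = img.shape
    out = np.zeros((H, W))
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            win = img[i - 1:i + 2, j - 1:j + 2].reshape(9, 3).astype(float)
            # aggregate distance of each vector to all others (R-ordering)
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2).sum(axis=1)
            order = np.argsort(d)
            out[i, j] = np.linalg.norm(win[order[-1]] - win[order[0]])
    return out
```

Because the ordering is performed on whole colour vectors rather than per channel, a boundary between two saturated colours of equal intensity still produces a strong response.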

6.
A new and considerably simplified solution technique for geometrically nonlinear problems is introduced. In contrast to the existing numerical methods, the present approach obtains an approximate large deflection pattern from the linear displacement vector by successively employing updated correction factors. Conservation of energy principle yields a general expression for these subsequent corrections. While the linear portion of the strain energy can be computed using finite element approach, evaluations of its nonlinear counterparts often require mathematical discretization techniques. The simple, self-correcting iterative procedure is unconditionally stable and its fast oscillatory convergence offers further computational efficiency. To illustrate the application of the proposed method and to assess its accuracy, moderately large deflections of beam, plate and flexible cable structures have been computed and compared with known analytical solutions. If required, the obtained results, which are acceptable for most design purposes, can be further improved.

7.
A new approach to obtain a volumetric discretization from a T-spline surface representation is presented. A T-spline boundary zone is created beneath the surface, while the core of the model is discretized with Lagrangian elements. T-spline enriched elements are used as an interface between isogeometric and Lagrangian finite elements. The thickness of the T-spline zone and thereby the isogeometric volume fraction can be chosen arbitrarily large such that pure Lagrangian and pure isogeometric discretizations are included. The presented approach combines the advantages of isogeometric elements (accuracy and smoothness) and classical finite elements (simplicity and efficiency). Different heat transfer problems are solved with the finite element method using the presented discretization approach with different isogeometric volume fractions. For suitable applications, the approach leads to a substantial accuracy gain.

8.
In many applications of genetic algorithms, there is a tradeoff between speed and accuracy in fitness evaluations when evaluations use numerical methods with varying discretization. In these applications, cost and accuracy are governed by the discretization error of the implicit or explicit quadrature used to estimate the function evaluations. This paper examines discretization scheduling, or how to vary the discretization within the genetic algorithm in order to use the least amount of computation time for a solution of a desired quality. The effectiveness of discretization scheduling can be determined by comparing its computation time to the computation time of a GA using a constant discretization. There are three ingredients for discretization scheduling: population sizing, estimated time for each function evaluation, and predicted convergence time analysis. Idealized one- and two-dimensional experiments and an inverse groundwater application illustrate the computational savings to be achieved from using discretization scheduling.

9.
Latent topic models such as Latent Dirichlet Allocation (LDA) were designed for text processing and have also demonstrated success in audio-related tasks. The main idea behind LDA is that the words of each document arise from a mixture of topics, each of which is a multinomial distribution over the vocabulary. When applying the original LDA to continuous data, word-like units must first be generated by vector quantization (VQ), and this discretization usually results in information loss. To overcome this shortcoming, this paper introduces a new topic model named Gaussian-LDA for audio retrieval. In the proposed model, we consider a continuous emission probability: each topic is modeled directly as a Gaussian distribution over audio features instead of a multinomial over discrete words. The model thus skips vector quantization, avoiding discretization and integrating the clustering step into the model itself. Audio retrieval experiments demonstrate that Gaussian-LDA achieves better performance than the compared methods.

10.
In this work we propose a new discretization method for the Laplace–Beltrami operator defined on point-based surfaces. In contrast to existing point-based discretization techniques, our approach does not rely on any triangle mesh structure, making it truly mesh-free. Based on a combination of Smoothed Particle Hydrodynamics and an optimization procedure to estimate area elements, our discretization method results in accurate solutions while remaining robust to abrupt changes in the density of points. Moreover, the proposed scheme results in numerically stable discrete operators. The effectiveness of the proposed technique is brought to bear in many practical applications. In particular, we use the eigenstructure of the discrete operator for filtering and shape segmentation. Point-based surface deformation is another application that can easily be carried out from the proposed discretization method.

11.
We present a method for animating deformable objects using a novel finite element discretization on convex polyhedra. Our finite element approach draws upon recently introduced 3D mean value coordinates to define smooth interpolants within the elements. The mathematical properties of our basis functions guarantee convergence. Our method is a natural extension of linear interpolants on tetrahedra: for tetrahedral elements, the methods are identical. For fast and robust computations, we use an elasticity model based on Cauchy strain and stiffness warping. This more flexible discretization is particularly useful for simulations that involve topological changes, such as cutting or fracture. Since splitting convex elements along a plane produces convex elements, the remeshing or subdivision schemes used in simulations based on tetrahedra are not necessary, leading to fewer elements after such operations. We propose various operators for cutting the polyhedral discretization. Our method can handle arbitrary cut trajectories, and there is no limit on how often elements can be split.

12.
《Computers & Structures》1987,26(3):499-512
A general analysis of the static and dynamic responses of suspension bridges is presented. The Cullmann 'elastic weight' theory is revisited, and it is shown to be a powerful discretization tool. According to this method, the structure is reduced to a system of lumped masses and elastic springs, so that it can be considered a classic holonomic system of rigid elements (a Lagrange system). The elasticity is reduced to reactive conservative forces in the springs, and the second-order components of the potential energy of the applied forces can be detected by means of the work of these reactive forces. This is an advantage, at least from the theoretical point of view, over any finite element method. Another advantage is the substantial reduction in the number of redundants, which allowed us to use a portable personal computer. Finally, this discretization method is quite general, but it seems particularly suitable for suspension bridges, in which the springs are only on the bridge and must be considered as simple bending cells.

Some numerical results are shown, in which we examine one of the possible suspension bridge designs to be built across the Strait of Messina.

13.
Wu  Cheng-jin  Cen  Song  Shang  Yan 《Engineering with Computers》2021,37(3):1975-1998

A high-performance shape-free polygonal hybrid displacement-function finite-element method is proposed for the analysis of Mindlin–Reissner plates. Analytical solutions of the displacement functions are employed to construct the element resultant fields, and the three-node Timoshenko beam formulae are adopted to simulate the boundary displacements. The element stiffness matrix is then obtained by the modified principle of minimum complementary energy. With a simple division, the integration of all the necessary matrices can be performed within the polygonal element region. Five new polygonal plate elements containing a mid-side node on each element edge are developed: element HDF-PE is for the general case, while the other four, HDF-PE-SS1, HDF-PE-Free, IHDF-PE-SS1, and IHDF-PE-Free, are for edge effects at different boundary types. Furthermore, the shapes of these new elements are quite free, i.e., there is almost no limitation on the element shape or the number of element sides. Numerical examples show that the new elements are insensitive to mesh distortion and exhibit excellent performance and flexibility in dealing with challenging problems involving edge effects, complicated loading, and material distributions.


14.
Dr. M. Neher 《Computing》1994,53(3-4):379-395
This paper is concerned with the reconstruction of an unknown potential q(x) in the Sturm–Liouville problem with Dirichlet boundary conditions, when only a finite number of eigenvalues are known. The problem is transformed into a system of nonlinear equations. A solution of this system is enclosed in an interval vector by an interval Newton's method. From the interval vector, an interval function [q](x) is constructed that encloses a potential q(x) corresponding to the prescribed eigenvalues. To make this numerical existence proof rigorous, of course, all discretization and rounding errors have to be taken into account in the computation.

15.
We describe approaches for positive data modeling and classification using both finite inverted Dirichlet mixture models and support vector machines (SVMs). Inverted Dirichlet mixture models are used to tackle an outstanding challenge in SVMs, namely the generation of accurate kernels. The kernel generation approaches we consider, grounded in ideas from information theory, allow the incorporation of data structure and its structural constraints. Inverted Dirichlet mixture models are learned within a principled Bayesian framework, using the Gibbs sampler and Metropolis–Hastings for parameter estimation and the Bayes factor for model selection (i.e., determining the number of mixture components). Our Bayesian learning approach derives priors over the model parameters by showing that the inverted Dirichlet distribution belongs to the family of exponential distributions, and then combines these priors with information from the data to build posterior distributions. We illustrate the merits and effectiveness of the proposed method with two challenging real-world applications, namely object detection and visual scene analysis and classification.

16.
The prior distribution of an attribute in a naïve Bayesian classifier is typically assumed to be a Dirichlet distribution, and this is called the Dirichlet assumption. The variables in a Dirichlet random vector can never be positively correlated and must have the same confidence level as measured by normalized variance. Both the generalized Dirichlet and the Liouville distributions include the Dirichlet distribution as a special case. These two multivariate distributions, also defined on the unit simplex, are employed to investigate the impact of the Dirichlet assumption in naïve Bayesian classifiers. We propose methods to construct appropriate generalized Dirichlet and Liouville priors for naïve Bayesian classifiers. Our experimental results on 18 data sets reveal that the generalized Dirichlet distribution has the best performance among the three distribution families. Not only is the Dirichlet assumption inappropriate, but also forcing the variables in a prior to be all positively correlated can deteriorate the performance of the naïve Bayesian classifier.

17.
The generalized Dirichlet distribution has been shown to be a more appropriate prior than the Dirichlet distribution for naïve Bayesian classifiers. When the dimension of a generalized Dirichlet random vector is large, the computational effort for calculating the expected value of a random variable can be high. In document classification, the number of distinct words, which is the dimension of a prior for naïve Bayesian classifiers, is generally more than ten thousand. Generalized Dirichlet priors can therefore be inapplicable for document classification from the viewpoint of computational efficiency. In this paper, some properties of the generalized Dirichlet distribution are established to accelerate the calculation of the expected values of random variables. Those properties are then used to construct noninformative generalized Dirichlet priors for naïve Bayesian classifiers with multinomial models. Our experimental results on two document sets show that generalized Dirichlet priors can achieve significantly higher prediction accuracy while the computational efficiency of naïve Bayesian classifiers is preserved.
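One such closed-form property, in the Connor–Mosimann parameterization, is that the mean of a generalized Dirichlet vector needs only a single pass over the parameter pairs rather than any integration; a minimal sketch (parameter names are assumptions):

```python
def gd_mean(a, b):
    """Mean of a generalized Dirichlet random vector with parameter
    pairs (a_i, b_i): E[X_i] = a_i/(a_i+b_i) * prod_{j<i} b_j/(a_j+b_j),
    so all k component means cost O(k) in total."""
    means, carry = [], 1.0
    for ai, bi in zip(a, b):
        means.append(carry * ai / (ai + bi))
        carry *= bi / (ai + bi)   # running product of b_j/(a_j+b_j)
    return means
```

As a sanity check, the generalized Dirichlet reduces to an ordinary Dirichlet when b_i = a_{i+1} + b_{i+1}; e.g. Dirichlet(2, 3, 5) corresponds to a = (2, 3), b = (8, 5) and has mean (0.2, 0.3) on the first two components.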

18.
A hybrid edge element approach for the computation of waveguide modes is presented. The electric field is decomposed into its transverse and longitudinal components, which are modeled in terms of two-dimensional edge elements and scalar nodal elements, respectively, thereby satisfying the Dirichlet boundary condition at the perfect electric conductor boundaries and dielectric interfaces. Failure to do so results in the generation of spurious modes. This approach allows for the modeling of a three-dimensional field quantity over a two-dimensional boundary, namely the waveguide cross section. Another approach, the method of moments, serves as an excellent means of verifying the results obtained through use of hybrid edge elements. A comparison of the results obtained from both techniques is presented, along with the associated field plots obtained from the hybrid edge element approach for several geometries.

19.
The gradient vector flow model (GVF Snake) has achieved good results in image processing, but its simple iterative solution method converges slowly, which limits its application. For the computation of the gradient vector field, a method based on the BFGS algorithm is proposed for solving the force field; the solution procedure is given in detail and solved numerically through computer simulation, and the improved GVF Snake model is then applied to image processing. The results show that the gradient vector field built by BFGS-GVF performs well. Compared with the Newton geometric contour algorithm, the CV active contour algorithm, and the IALM-GVF Snake algorithm, the BFGS-GVF Snake algorithm obtains clear, smooth image contours.

20.
Automatically extracting buildings from remote sensing imagery is a key technology for constructing maps from such imagery. To address the shortcomings of current automatic building extraction algorithms, an automatic building extraction algorithm based on a digital elevation model is proposed. The algorithm combines grayscale mathematical morphology, vectorization transforms, discrete noise removal, and edge simplification, and takes full account of the computational cost of subsequent building reconstruction. Experimental results show that the building edges extracted by this algorithm are clear and correct, while the amount of redundant data is greatly reduced.
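As a rough illustration of the grayscale-morphology step only (not the paper's full pipeline; the window size and height threshold are assumed values), a morphological top-hat of the elevation model isolates above-ground objects such as buildings:

```python
import numpy as np

def _min_filter(a, k):
    """Grey erosion with a k x k square window (edge-padded)."""
    p = k // 2
    pad = np.pad(a, p, mode='edge')
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].min()
    return out

def building_mask(dsm, win=7, height_thresh=3.0):
    """Top-hat baseline: a grey opening with a window larger than the
    building footprint estimates the bare ground surface; subtracting it
    leaves above-ground objects, which are thresholded by height."""
    erosion = _min_filter(dsm, win)
    ground = -_min_filter(-erosion, win)   # grey dilation = -erosion(-x)
    ndsm = dsm - ground                    # normalized surface model
    return ndsm > height_thresh
```

The window must exceed the largest building footprint for the opening to remove buildings while preserving the (slowly varying) terrain.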

