Similar Documents
20 similar documents found.
1.
In this paper we develop direct and iterative algorithms for the solution of finite difference approximations of the Poisson and biharmonic equations on a square, using a number of arithmetic units in parallel. Assuming an n×n grid of mesh points, we show that direct algorithms for the Poisson and biharmonic equations require O(log n) and O(n) time steps, respectively. The corresponding speedups over the sequential algorithms are O(n²) and O(n² log n). We also compare the efficiency of these direct algorithms with parallel SOR and ADI algorithms for the Poisson equation, and with a parallel semi-direct method for the biharmonic equation treated as a coupled pair of Poisson equations.
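For context (our addition; the paper's particular discretization may differ), the standard five-point finite-difference approximation of the Poisson equation $-\Delta u = f$ on a uniform grid with spacing $h$ reads

$$4u_{i,j} - u_{i+1,j} - u_{i-1,j} - u_{i,j+1} - u_{i,j-1} = h^2 f_{i,j},$$

which yields the sparse linear system such direct parallel solvers operate on.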

2.
Latent Semantic Kernels
Kernel methods like support vector machines have successfully been used for text categorization. A standard choice of kernel function has been the inner product between the vector-space representations of two documents, in analogy with classical information retrieval (IR) approaches. Latent semantic indexing (LSI) has been used successfully in IR as a technique for capturing semantic relations between terms and inserting them into the similarity measure between two documents. One of its main drawbacks, in IR, is its computational cost. In this paper we describe how the LSI approach can be implemented in a kernel-defined feature space. We provide experimental results demonstrating that the approach can significantly improve performance, and that it does not impair it.
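As a rough illustration of the idea (a minimal numpy sketch of our own, not the paper's algorithm; the function name and the eigenvalue-truncation route are our choices), an LSI-style kernel can be obtained by restricting a document Gram matrix to its leading latent directions:

```python
import numpy as np

def latent_semantic_kernel(K, k):
    """Project a document Gram matrix K onto its top-k latent directions.

    K : (n_docs, n_docs) symmetric PSD Gram matrix, e.g. inner products
        of vector-space document representations.
    k : number of latent dimensions to retain.
    """
    vals, vecs = np.linalg.eigh(K)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]      # indices of the k largest
    V, lam = vecs[:, idx], vals[idx]
    return V @ np.diag(lam) @ V.T         # rank-k "semantic" Gram matrix
```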

3.
Multiscale Active Contours
We propose a new multiscale image segmentation model based on the active contour/snake model and the Polyakov action. The concept of scale, a general issue in physics and signal processing, is introduced into the active contour model, a well-known image segmentation model that evolves a contour in an image toward the boundaries of objects. The Polyakov action, introduced into image processing by Sochen, Kimmel, and Malladi (Sochen et al., 1998), provides an efficient mathematical framework for defining a multiscale segmentation model because it generalizes the concept of harmonic maps embedded in higher-dimensional Riemannian manifolds such as multiscale images. Unlike classical multiscale segmentation methods, which work scale by scale to speed up the segmentation process, our model uses all scales simultaneously, i.e. the whole scale space, to introduce the geometry of multiscale images into the segmentation process. The extracted multiscale structures are useful for improving the robustness and performance of standard shape analysis techniques such as shape recognition and shape registration. Another advantage of our method is that it can use not only the Gaussian scale space but also many other multiscale spaces, such as the Perona-Malik scale space, the curvature scale space, or the Beltrami scale space. Finally, this multiscale segmentation technique is coupled with a multiscale edge-detecting function based on the gradient vector flow model, which can extract convex and concave object boundaries independently of the initial condition. We apply our multiscale segmentation model to a synthetic image and a medical image.
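For reference, the Polyakov action as used in the Beltrami framework (our transcription of the standard form; notation is ours, and the paper's exact variant may differ) weighs a map $X$ from a two-dimensional parameter manifold with metric $g_{\mu\nu}$ into an embedding manifold with metric $h_{ij}$:

$$S[X] = \int \sqrt{g}\; g^{\mu\nu}\, \partial_\mu X^i\, \partial_\nu X^j\, h_{ij}(X)\; d\sigma^1 d\sigma^2 .$$

Minimizing $S$ with respect to $X$ yields the harmonic-map flows that drive such geometric segmentation models.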

4.
Among the major developments in mathematical morphology in the last two decades are the interrelated subjects of connectivity classes and connected operators. Braga-Neto and Goutsias have proposed an extension of the theory of connectivity classes to a multiscale setting, whereby one can assign connectivity to an object observed at different scales. In this paper, we study connected operators in the context of multiscale connectivity. We propose the notion of a σ-connected operator, that is, an operator connected at scale σ. We devote some attention to the study of binary σ-grain operators. In particular, we show that families of σ-grain openings and σ-grain closings, indexed by the connectivity scale parameter, are granulometries and anti-granulometries, respectively. We demonstrate the use of multiscale connected operators with image analysis applications. The first is the scale-space representation of grayscale images using multiscale levelings, where the role of scale is played by the connectivity scale. Then we discuss the application of multiscale connected openings in granulometric analysis, where both size and connectivity information are summarized. Finally, we describe an application of multiscale connected operators to an automatic target recognition problem.

Ulisses Braga-Neto received the Baccalaureate degree in Electrical Engineering from the Universidade Federal de Pernambuco (UFPE), Brazil, in 1992, the Masters degree in Electrical Engineering from the Universidade Estadual de Campinas, Brazil, in 1994, the M.S.E. degree in Electrical and Computer Engineering and the M.S.E. degree in Mathematical Sciences, both from The Johns Hopkins University, in 1998, and the Ph.D. degree in Electrical and Computer Engineering from The Johns Hopkins University, in 2001. He was a Post-Doctoral Fellow at the University of Texas MD Anderson Cancer Center and a Visiting Scholar at Texas A&M University from 2002 to 2004. He is currently an Associate Researcher at the Aggeu Magalhães Research Center of the Osvaldo Cruz Foundation, Brazilian Ministry of Health. His research interests include bioinformatics, pattern recognition, image analysis, and mathematical morphology.
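For readers unfamiliar with the term, a granulometry in Matheron's sense is a family of openings $(\gamma_\lambda)_{\lambda>0}$ satisfying the absorption law (standard definition, added here for context):

$$\gamma_\lambda \gamma_\mu = \gamma_\mu \gamma_\lambda = \gamma_{\max(\lambda,\mu)}, \qquad \lambda,\mu > 0,$$

so filtering at a larger scale removes everything a smaller scale removes; anti-granulometries are the dual families of closings.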

5.
Gunn, S.R., Kandola, J.S. Machine Learning (2002) 48(1-3):137-163
A widely acknowledged drawback of many statistical modelling techniques, commonly used in machine learning, is that the resulting model is extremely difficult to interpret. A number of new concepts and algorithms have been introduced by researchers to address this problem. They focus primarily on determining which inputs are relevant in predicting the output. This work describes a transparent, advanced non-linear modelling approach that enables the constructed predictive models to be visualised, allowing model validation and assisting in interpretation. The technique combines the representational advantage of a sparse ANOVA decomposition with the good generalisation ability of a kernel machine. It achieves this by employing two forms of regularisation: a 1-norm based structural regulariser to enforce transparency, and a 2-norm based regulariser to control smoothness. The resulting model structure can be visualised, showing the overall effects of different inputs, their interactions, and the strength of the interactions. The robustness of the technique is illustrated using a range of both artificial and real-world datasets. The performance is compared to that of other modelling techniques, and it is shown to exhibit competitive generalisation performance together with improved interpretability.
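Schematically (our paraphrase of the two-regulariser idea, not the paper's exact objective), with an ANOVA expansion $f(x) = \sum_k f_k(x)$ over univariate and interaction components $f_k$, such a model trades off fit, structural sparsity, and smoothness:

$$\min_{f_1,\dots,f_K}\; \sum_{i=1}^{n} L\Big(y_i,\, \sum_k f_k(x_i)\Big) \;+\; \lambda_1 \sum_k \|f_k\|_{\mathcal{H}_k} \;+\; \lambda_2 \sum_k \|f_k\|_{\mathcal{H}_k}^2,$$

where the block 1-norm (first penalty) drives whole components $f_k$ to zero, yielding a transparent structure, and the squared-norm term controls the smoothness of the surviving components.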

6.
The multiple-instance learning (MIL) model has been successful in numerous application areas. Recently, a generalization of this model and an algorithm for it were introduced, showing significant advantages over the conventional MIL model in certain application areas. Unfortunately, that algorithm does not scale to high dimensions. We adapt it to one using a support vector machine with our new kernel k∧, reducing the time complexity from exponential in the dimension to polynomial. Computing our new kernel is equivalent to counting the number of boxes in a discrete, bounded space that contain at least one point from each of two multisets. We show that this problem is #P-complete, but then give a fully polynomial randomized approximation scheme (FPRAS) for it. We then extend k∧ by enriching its representation into a new kernel k_min, and also consider a normalized version of k∧ that we call k∧/∨ (which may or may not be a kernel, but whose approximation yielded positive semidefinite Gram matrices in practice). We then empirically evaluate all three measures on data from content-based image retrieval, biological sequence analysis, and the musk data sets. We found that our kernels performed well on all data sets relative to algorithms in the conventional MIL model.
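To make the counting problem concrete, here is a brute-force enumeration (our illustration only; it is exactly the exponential-time computation the paper's FPRAS avoids, and all names are ours):

```python
import itertools

def k_wedge_bruteforce(A, B, lo, hi):
    """Count axis-aligned boxes in the discrete space
    prod_k {lo[k], ..., hi[k]} that contain at least one point of
    multiset A and at least one point of multiset B.

    Exponential in the dimension: for illustration only.
    """
    d = len(lo)
    # Every box is a choice of an interval [l, u] per dimension.
    intervals = [[(l, u)
                  for l in range(lo[k], hi[k] + 1)
                  for u in range(l, hi[k] + 1)] for k in range(d)]
    count = 0
    for box in itertools.product(*intervals):
        inside = lambda p: all(box[k][0] <= p[k] <= box[k][1]
                               for k in range(d))
        if any(inside(a) for a in A) and any(inside(b) for b in B):
            count += 1
    return count

# Tiny example in the 2-D space {0,1,2}^2: only the full box [0,2]x[0,2]
# contains both (0,0) and (2,2), so this prints 1.
print(k_wedge_bruteforce([(0, 0)], [(2, 2)], lo=(0, 0), hi=(2, 2)))
```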

7.
Many machine learning problems in natural language processing, transaction-log analysis, or computational biology require the analysis of variable-length sequences or, more generally, distributions over variable-length sequences. Kernel methods introduced for fixed-size vectors have proven very successful in a variety of machine learning tasks. We recently introduced a new and general kernel framework, rational kernels, to extend these methods to the analysis of variable-length sequences or, more generally, distributions given by weighted automata. These kernels are efficient to compute and have been used successfully in applications such as spoken-dialog classification with support vector machines. However, the rational kernels previously used in these applications do not fully encompass distributions over alternate sequences. They are based only on the counts of co-occurring subsequences averaged over the alternate paths, without taking into account information about the higher-order moments of the distributions of these counts. In this paper, we introduce a new family of rational kernels, moment kernels, that precisely exploits this additional information. These kernels are distribution kernels based on moments of counts of strings. We describe efficient algorithms to compute moment kernels and apply them to several difficult spoken-dialog classification tasks. Our experiments show that using the second moment of the counts of n-gram sequences consistently improves the classification accuracy in these tasks.

Editors: Dan Roth and Pascale Fung
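Schematically (our paraphrase; the paper's definitions are stated in terms of weighted transducers and may differ in detail), where a count-based rational kernel compares expected n-gram counts, a second-moment kernel compares the second moments of those counts under the path distributions $p_A$, $p_B$ defined by two weighted automata:

$$k_1(A,B)=\sum_{x}\mathbb{E}_{p_A}[c_x]\,\mathbb{E}_{p_B}[c_x], \qquad k_2(A,B)=\sum_{x}\mathbb{E}_{p_A}[c_x^2]\,\mathbb{E}_{p_B}[c_x^2],$$

with $c_x$ the number of occurrences of the n-gram $x$ along a path.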

8.
A Combined-Dimension Kernel Method for Graph Classification
Learning from data with internal structure, such as graphs, is an important problem in machine learning, and kernel methods are an effective technique for such problems. Focusing on the molecular graph classification problem and building on the work of Swamidass et al., this paper proposes a combined-dimension kernel method for graph classification. The method first constructs a two-dimensional kernel that fuses one-dimensional information to characterize the chemical features of molecules; then, drawing on molecular mechanics, it builds a three-dimensional kernel from geometric information to characterize the physical properties of molecules. On this basis, the kernels of different dimensions are integrated, and the optimal kernel combination is obtained by solving a quadratically constrained quadratic programming (QCQP) problem. Experimental results show that the proposed method outperforms existing techniques.
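The paper obtains the kernel weights by solving a QCQP, which we do not reproduce here; as a lightweight stand-in, the following sketch (ours; all names are illustrative) weights candidate Gram matrices by their alignment with the labels, a standard heuristic for kernel combination:

```python
import numpy as np

def alignment_weights(kernels, y):
    """Weight each Gram matrix by its alignment with the target yy^T.

    kernels : list of (n, n) Gram matrices (e.g. a 2-D chemical kernel
              and a 3-D geometric kernel).
    y       : (n,) labels in {-1, +1}.

    A simple stand-in for the QCQP-based combination in the paper.
    """
    Y = np.outer(y, y)
    scores = []
    for K in kernels:
        # Frobenius inner product, normalized (kernel-target alignment).
        a = np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))
        scores.append(max(a, 0.0))
    w = np.array(scores)
    w = w / max(w.sum(), 1e-12)
    # Combined kernel: convex combination of the candidates.
    return sum(wi * K for wi, K in zip(w, kernels)), w
```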

9.
We present a novel integral representation for the biharmonic Dirichlet problem. To obtain the representation, the Dirichlet problem is first converted into a related Stokes problem, for which the Sherman–Lauricella integral representation can be used. Not every potential for the Dirichlet problem corresponds to a potential for Stokes flow, and vice versa, but we show that the integral representation can be augmented and modified to handle both simply and multiply connected domains. The resulting integral representation has a kernel that behaves better on domains with high curvature than existing representations. This representation therefore yields more robust computational methods for the solution of the Dirichlet problem of the biharmonic equation, and we demonstrate this with several numerical examples.
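For reference, the biharmonic Dirichlet problem in question has the classical form (our transcription of the standard statement) on a domain $D$ with boundary data $f$, $g$:

$$\Delta^2 u = 0 \ \text{in } D, \qquad u = f, \quad \frac{\partial u}{\partial n} = g \ \text{on } \partial D.$$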

10.
One of the biggest challenges in constructing empirical models is the presence of measurement errors in the data. These errors (or noise) can have a drastic effect on the accuracy of estimated models and their predictions, and thus need to be removed to improve model accuracy. Multiscale representation of data has shown great noise-removal ability when used in data filtering. In this paper, this advantage of multiscale representation is exploited to improve the accuracy of nonlinear Takagi–Sugeno (TS) fuzzy models by developing a multiscale fuzzy (MSF) system identification algorithm. The developed algorithm constructs multiple TS fuzzy models at multiple scales using the scaled signal approximations of the input–output data, and then selects the optimum multiscale model as the one that maximizes the signal-to-noise ratio of the model prediction. In a simulated shell-and-tube heat exchanger modeling example, the developed algorithm is shown to outperform the time-domain fuzzy model, the NARMAX model, and a fuzzy model estimated from data pre-filtered with an exponentially weighted moving average (EWMA) filter. The reason for this improvement is that the MSF modeling algorithm integrates modeling and data filtering using a filter bank, from which the optimum filter (for modeling purposes) is selected.
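A minimal sketch of the "scaled signal approximations" ingredient (ours; a plain Haar low-pass pyramid, whereas the paper may use a different wavelet filter bank): each level halves the bandwidth, and a TS fuzzy model would be fitted to each approximation before selecting the scale with the best prediction SNR.

```python
import numpy as np

def haar_approximations(x, levels):
    """Successively smoother approximations of a 1-D signal via a Haar
    low-pass filter bank. Each approximation is upsampled back to the
    original length so models fitted at different scales are comparable.
    """
    x = np.asarray(x, dtype=float)
    out, cur = [x], x
    for _ in range(levels):
        if len(cur) < 2:
            break                                  # cannot halve further
        n = (len(cur) // 2) * 2
        cur = 0.5 * (cur[0:n:2] + cur[1:n:2])      # average adjacent pairs
        up = np.repeat(cur, len(x) // len(cur) + 1)[:len(x)]
        out.append(up)
    return out  # out[j] is the scale-j approximation of x
```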

11.
A class of multiscale stochastic models based on scale-recursive dynamics on trees has recently been introduced. These models are interesting because they can represent a broad class of physical phenomena and because they lead to efficient algorithms for estimation and likelihood calculation. In this paper, we provide a complete statistical characterization of the error associated with smoothed estimates of the multiscale stochastic processes described by these models. In particular, we show that the smoothing error is itself a multiscale stochastic process whose parameters can be explicitly calculated.
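For context, scale-recursive dynamics on a tree have the standard state-space form (our transcription of the usual formulation; notation varies across papers), where $s$ indexes a node and $s\bar\gamma$ its parent at the next coarser scale:

$$x(s) = A(s)\,x(s\bar\gamma) + B(s)\,w(s), \qquad y(s) = C(s)\,x(s) + v(s),$$

with $w$ and $v$ white noise processes; estimation then proceeds by fine-to-coarse and coarse-to-fine sweeps analogous to Kalman filtering and smoothing.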

12.
A multiscale algorithm is described, mathematically justified, and demonstrated, which enables the extraction of texture elements of various sizes and shapes from natural images without the need for parameter tuning.

13.
In this paper, a mesoscale model of concrete is presented, which considers particles, matrix material, and the interfacial transition zone (ITZ) as separate constituents. Particles are represented as ellipsoids, generated according to a prescribed grading curve and placed randomly into the specimen. Algorithms are proposed to generate realistic particle configurations efficiently. The nonlinear behavior is simulated with a cohesive interface model for the ITZ. For the matrix material, different damage/plasticity models are investigated. The simulation of localization requires regularization of the solution, which is performed using integral-type nonlocal models with strain or displacement averaging. Owing to the complexity of a mesoscale model for a realistic structure, a multiscale method is proposed that couples the homogeneous macroscale with the heterogeneous mesoscale model in a concurrent embedded approach. This allows an adaptive transition from a full macroscale model to a multiscale model in which only the relevant parts are resolved on a finer scale. Special emphasis is placed on the investigation of different coupling schemes between the scales, such as the mortar method and the Arlequin method, and on a discussion of their advantages and disadvantages in the current context. The applicability of the proposed methodology is illustrated for a variety of examples in tension and compression.
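As a toy illustration of the "generate and place" step (entirely ours: it uses spheres rather than the paper's ellipsoids, and a naive rejection loop rather than the paper's efficient algorithms):

```python
import numpy as np

def place_particles(radii, box, max_tries=1000, seed=0):
    """Largest-first random sequential placement of non-overlapping
    spheres inside an axis-aligned box (side lengths given by `box`).
    Returns a list of (center, radius) pairs.
    """
    rng = np.random.default_rng(seed)
    box = np.asarray(box, dtype=float)
    placed = []
    for r in sorted(radii, reverse=True):
        for _ in range(max_tries):
            c = rng.uniform(r, box - r)           # keep sphere inside box
            if all(np.linalg.norm(c - c2) >= r + r2 for c2, r2 in placed):
                placed.append((c, r))
                break
        else:
            raise RuntimeError("box too crowded; could not place particle")
    return placed

# e.g. particles with radii drawn from a grading curve, in a unit cube:
print(len(place_particles([0.12, 0.10, 0.08, 0.05, 0.05], box=(1, 1, 1))))
```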

14.
Graph-based representations have proved powerful in computer vision. The challenge that arises with large amounts of graph data is the computationally burdensome edit distance computation. Graph kernels can be used to formulate efficient algorithms for dealing with high-dimensional data, and they have proved an elegant way to overcome this computational bottleneck. In this paper, we investigate whether the Jensen-Shannon divergence can be used as a means of establishing a graph kernel. The Jensen-Shannon kernel is a nonextensive information-theoretic kernel, defined using the entropy and mutual information computed from probability distributions over the structures being compared. To establish a Jensen-Shannon graph kernel, we explore two different approaches. The first is based on the von Neumann entropy associated with a graph. The second uses the Shannon entropy associated with the probability state vector for a steady-state random walk on a graph. We compare the two resulting graph kernels on the problem of graph clustering. We use kernel principal component analysis (kPCA) to embed graphs into a feature space. Experimental results reveal that the method gives good classification results on graphs extracted both from an object recognition database and from an application in bioinformatics.
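The two entropies the kernel is built from can be sketched as follows (our numpy illustration; the von Neumann entropy here uses one common convention, normalized-Laplacian eigenvalues rescaled to a probability vector, which may differ from the paper's exact definition):

```python
import numpy as np

def von_neumann_entropy(A):
    """Entropy of the normalized Laplacian spectrum of an undirected
    graph with adjacency matrix A (eigenvalues rescaled to sum to 1)."""
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - Dinv @ A @ Dinv
    lam = np.linalg.eigvalsh(L)
    p = lam / lam.sum()
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

def random_walk_entropy(A):
    """Shannon entropy of the steady-state distribution of a random
    walk on an undirected graph: p(v) is proportional to deg(v)."""
    d = A.sum(axis=1)
    p = d / d.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```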

15.
16.
Kernels and Distances for Structured Data
Gärtner, T., Lloyd, J.W., Flach, P.A. Machine Learning (2004) 57(3):205-232
This paper brings together two strands of machine learning of increasing importance: kernel methods and highly structured data. We propose a general method for constructing a kernel that follows the syntactic structure of the data, as defined by its type signature in a higher-order logic. Our main theoretical result is the positive definiteness of any kernel thus defined. We report encouraging experimental results on a range of real-world data sets. By converting our kernel to a distance pseudo-metric for 1-nearest neighbour, we were able to improve the best accuracy reported in the literature on the Diterpene data set by more than 10%.
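A toy analogue of type-driven kernel construction (entirely our illustration in plain Python; the paper defines kernels over higher-order-logic terms, not Python values): the kernel recurses on structure, combining a base kernel at the leaves.

```python
def k_struct(x, y):
    """Recursive kernel following the syntactic structure of nested
    Python data: tuples ~ product types (sum of component kernels),
    frozensets ~ set types (cross sum over elements), atoms ~ base
    types. Sums and cross sums of PSD kernels stay PSD."""
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        return sum(k_struct(a, b) for a, b in zip(x, y))   # product type
    if isinstance(x, frozenset) and isinstance(y, frozenset):
        return sum(k_struct(a, b) for a in x for b in y)   # set type
    if isinstance(x, (int, float)) and isinstance(y, (int, float)):
        return float(x) * float(y)                         # numeric base
    return 1.0 if x == y else 0.0                          # symbolic base

# 1.0*2.0 from the numeric slot + one matching set element -> 3.0
print(k_struct((1.0, frozenset({"a", "b"})), (2.0, frozenset({"a"}))))
```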

17.
Many common machine learning methods, such as support vector machines or Gaussian process inference, make use of positive definite kernels, reproducing kernel Hilbert spaces, Gaussian processes, and regularization operators. In this work these objects are presented in a general, unifying framework and their interrelations are highlighted. With this in mind we then show how linear stochastic differential equation models can be incorporated naturally into the kernel framework, and, vice versa, how many kernel machines can be interpreted in terms of differential equations. We focus especially on ordinary differential equations, also known as dynamical systems, and show that standard kernel inference algorithms are equivalent to Kalman filter methods based on such models. In order not to cloud qualitative insights with heavy mathematical machinery, we restrict ourselves to finite domains, implying that differential equations are treated via their corresponding finite difference equations.
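A classic instance of this correspondence (a standard textbook fact we add for illustration; the paper's treatment is via finite difference equations): the linear SDE of the Ornstein–Uhlenbeck process,

$$dx(t) = -\lambda\, x(t)\, dt + \sigma\, dW(t), \qquad \lambda > 0,$$

has stationary covariance function $k(t,t') = \frac{\sigma^2}{2\lambda} e^{-\lambda |t-t'|}$, so Gaussian process regression with the exponential kernel is equivalent to inference under this dynamical model.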

18.
The paper proposes a technique for composing two-dimensional interpolation kernels that possess approximately isotropic spectral characteristics. Applying these kernels makes it possible to suppress many of the artifacts that arise in interpolation procedures implemented by traditional techniques. The paper presents mathematical simulation results that confirm the advantages of the proposed technique.
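For context (our addition, a standard observation rather than the paper's derivation): a separable 2-D kernel $K(x,y) = k_1(x)\,k_1(y)$ has Fourier transform $\hat K(\omega_x,\omega_y) = \hat k_1(\omega_x)\,\hat k_1(\omega_y)$, which is generally not radially symmetric. An isotropic composition instead aims for

$$\hat K(\omega_x, \omega_y) \approx g\!\left(\sqrt{\omega_x^2 + \omega_y^2}\,\right)$$

for some radial profile $g$, so the interpolation treats all orientations alike.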

19.
Discriminative Common Vector Method With Kernels
In some pattern recognition tasks, the dimension of the sample space is larger than the number of samples in the training set. This is known as the "small sample size problem". Linear discriminant analysis (LDA) techniques cannot be applied directly in the small sample size case. The small sample size problem is also encountered when kernel approaches are used for recognition. In this paper, we attempt to answer the question of how one should choose the optimal projection vectors for feature extraction in the small sample size case. Based on our findings, we propose a new method called the kernel discriminative common vector method. In this method, we first nonlinearly map the original input space to an implicit higher-dimensional feature space, in which the data are hoped to be linearly separable. Then, the optimal projection vectors are computed in this transformed space. The proposed method yields an optimal solution for maximizing a modified Fisher's linear discriminant criterion, discussed in the paper. Thus, under certain conditions, a 100% recognition rate is guaranteed for the training set samples. Experiments on test data also show that, in many situations, the generalization performance of the proposed method compares favorably with that of other kernel approaches.
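For intuition, the linear version of the "common vector" idea can be sketched as follows (our numpy illustration of the linear discriminative common vector construction; the paper's contribution is the kernelized analogue, which is not shown here):

```python
import numpy as np

def common_vectors(X, y, tol=1e-10):
    """Per-class common vectors: project each class mean onto the null
    space of the within-class scatter matrix S_w. In the small sample
    size case S_w is singular, and every sample of a class projects to
    the same 'common' vector."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    Sw = np.zeros((X.shape[1], X.shape[1]))
    for x, c in zip(X, y):
        d = (x - means[c]).reshape(-1, 1)
        Sw += d @ d.T
    lam, V = np.linalg.eigh(Sw)
    N = V[:, lam < tol * max(lam.max(), 1.0)]   # null-space basis
    P = N @ N.T                                 # orthogonal projector
    return {c: P @ means[c] for c in classes}
```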

20.