Similar Documents
20 similar documents found (search time: 812 ms)
1.
Objective: To improve the clustering performance of neutrosophic fuzzy C-means clustering on non-convex, irregular data and its segmentation of noise-corrupted images, a kernel-space neutrosophic fuzzy C-means clustering algorithm is proposed. Methods: The kernel function concept is introduced. Using a nonlinear kernel satisfying Mercer's condition, a nonlinear transformation maps input patterns that are linearly inseparable in the low-dimensional space into a linearly separable high-dimensional feature space, where neutrosophic fuzzy clustering segmentation is performed. Results: Segmentation tests on a large number of images corrupted with various additive and multiplicative noises show that the kernel-space neutrosophic fuzzy clustering algorithm improves the noise robustness and classification performance of existing algorithms; the peak signal-to-noise ratio increases by at least 0.8 dB. Conclusion: The proposed algorithm achieves markedly better segmentation with good robustness and is suitable for medical and remote-sensing image processing.
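The kernel trick the abstract relies on can be illustrated with a minimal, generic kernel fuzzy C-means sketch (not the authors' neutrosophic variant). For a Gaussian kernel, the squared feature-space distance between a point and a prototype reduces to 2·(1 − K(x, v)); the kernel width `sigma`, fuzzifier `m`, and spread-out initialization below are illustrative assumptions.

```python
import numpy as np

def kernel_fcm(X, c=2, m=2.0, sigma=1.0, n_iter=50):
    """Fuzzy C-means with a Gaussian-kernel-induced distance (sketch)."""
    n = len(X)
    # deterministic, spread-out prototype initialization (an assumption)
    V = X[np.linspace(0, n - 1, c).astype(int)].astype(float)
    for _ in range(n_iter):
        # Gaussian kernel between prototypes and points
        K = np.exp(-((V[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)   # kernel-induced squared distance
        U = d2 ** (-1.0 / (m - 1))                # standard FCM membership update
        U /= U.sum(axis=0)                        # memberships sum to 1 per point
        W = (U ** m) * K                          # kernel-weighted membership
        V = (W @ X) / W.sum(axis=1, keepdims=True)
    return U, V
```

On two well-separated 1-D groups, the two prototypes settle near the group centres while each column of `U` remains a valid membership distribution.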

2.
A New Classification Method for Massive Data   Total citations: 5 (self: 1, others: 5)
The basic idea of classifying nonlinearly separable data with support vector machines is to map the sample set into a high-dimensional linear space in which it becomes linearly separable. Based on the Jordan curve theorem, this paper instead proposes a general classification method based on separating hypersurfaces: a separating hypersurface is constructed directly, and samples are classified by the parity of the winding number of each sample point about the hypersurface. The method requires no dimension-raising transformation and no choice of kernel function, and thus solves the nonlinear classification problem directly. Results on data classification applications show that the hypersurface-based method can effectively solve nonlinear classification problems while improving classification efficiency and accuracy.
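The parity idea can be illustrated in 2-D, where the "hypersurface" is a closed polyline and a point's class is decided by the even-odd crossing rule, a standard consequence of the Jordan curve theorem. This is a generic sketch of the parity test, not the authors' full algorithm.

```python
def winding_parity(point, polygon):
    """Classify `point` by the parity of crossings between a rightward ray
    from the point and the closed polyline `polygon` (even-odd rule).
    Returns 1 if the point is enclosed by the separating curve, else 0."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge spans the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                            # crossing lies to the right
                inside = not inside
    return int(inside)
```

An odd number of crossings means the point lies inside the curve; no kernel or dimension-raising map is involved.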

3.
To improve the efficiency of outlier mining in high-dimensional data sets, this paper analyses traditional outlier mining algorithms and proposes a new outlier detection algorithm. The algorithm converts the nonlinear problem into a linear one in a high-dimensional feature space, uses kernel principal component analysis for dimensionality reduction, and scans the projected components of each data object one by one to decide whether it is an outlier. It applies to outlier detection in both linearly separable and linearly inseparable data sets. Experiments demonstrate the advantages of the algorithm.
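The "project with kernel PCA, then scan the projected components" idea can be sketched as follows. This is a generic sketch under assumed choices (RBF kernel, a simple z-score threshold on each component), not the paper's exact rule.

```python
import numpy as np

def kpca_outliers(X, gamma=0.5, n_comp=2, z_thresh=2.5):
    """Project data onto the top kernel principal components and flag a
    point as an outlier if any of its projected components is extreme."""
    n = len(X)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                     # RBF kernel matrix
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                              # double-centred kernel
    vals, vecs = np.linalg.eigh(Kc)
    top = np.argsort(vals)[::-1][:n_comp]       # leading components
    alphas = vecs[:, top] / np.sqrt(np.maximum(vals[top], 1e-12))
    proj = Kc @ alphas                          # projections, one column per component
    z = np.abs(proj) / (proj.std(axis=0) + 1e-12)
    return np.where((z > z_thresh).any(axis=1))[0]
```

A point far from a tight cluster dominates one of the leading components, so its standardized projection stands out.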

4.
In this paper, we present a novel semi-supervised dimensionality reduction technique to address the problems of inefficient learning and costly computation in coping with high-dimensional data. Our method, named dual subspace projections (DSP), embeds high-dimensional data in an optimal low-dimensional space, which is learned from a few user-supplied constraints and the structure of the input data. The method projects data into two different subspaces, the kernel space and the original input space. Each projection is designed to enforce one type of constraint, and the projections in the two subspaces interact with each other to satisfy the constraints maximally while preserving the intrinsic data structure. Compared to existing techniques, our method has the following advantages: (1) it benefits from constraints even when only a few are available; (2) it is robust and free from overfitting; and (3) it handles nonlinearly separable data while learning a linear data transformation. Consequently, our method can be easily generalized to new data points and is efficient in dealing with large datasets. An empirical study using real data validates our claims: significant improvements in learning accuracy are obtained after DSP-based dimensionality reduction is applied to high-dimensional data.

5.
Profile curve reconstruction is crucial to surface reconstruction in reverse engineering. In this paper, we present a new constrained fitting method involving lines, circular arcs and B-spline curves for profile curve reconstruction. By using a similarity transformation, we reduce the condition number of the Hessian matrix involved in the optimization process, and the numerical stability is therefore significantly improved. Several industrial examples are presented to demonstrate the efficiency of our method. This paper describes a 2D constrained fitting method for profile curve reconstruction in reverse engineering. The method extends previously published methods for 2D constrained fitting; furthermore, the numerical problems associated with constrained fitting are tackled. The described method has been implemented in RE-SOFT, a feature-based reverse engineering software package developed by the CAD/CAE/CAM Lab of Zhejiang University.

6.
In this paper, we consider the problem of fitting B-spline curves to a set of ordered points by finding the control points and the location parameters. The presented method takes two main steps: specifying an initial B-spline curve and optimization. The method determines the number and positions of the control points such that the initial B-spline curve is very close to the target curve. The proposed method introduces a length parameter that allows us to adjust the number of control points and increases the precision of the initial B-spline curve. Afterwards, the scaled BFGS algorithm is used to optimize the control points and the foot points simultaneously and generates the final curve. Furthermore, we present a new procedure that inserts a new control point and repeats the optimization when the fitting accuracy of the generated B-spline curve needs to be improved. Examples are offered to show that the proposed approach performs accurately for complex shapes with a large number of data points and is able to generate a precise fitting curve with a high degree of approximation.

7.
A Curve-Fitting Method for Sensor Nonlinearity   Total citations: 2 (self: 0, others: 2)
A curve-fitting method for nonlinear sensors is introduced. Analysis of the polynomial fit to the displacement-voltage data of a Hall displacement sensor shows that the sensor curve is parabolic in character, so a polynomial containing a square-root term is used for the fit. Finally, the parameters of the resulting curve are further optimized by an approximation method, yielding a fitted curve that is more practical and has smaller deviation than an ordinary polynomial fit. This data-processing approach offers a useful reference for fitting sensors with square-root characteristics.
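A square-root-augmented model of this kind is still linear in its coefficients, so it can be fitted by ordinary least squares. The specific model form y ≈ a·√x + b·x + c below is an illustrative assumption; the paper does not state the exact polynomial degree.

```python
import numpy as np

def fit_sqrt_poly(x, y):
    """Least-squares fit of y ≈ a*sqrt(x) + b*x + c, a polynomial model
    augmented with a square-root term (x >= 0 assumed)."""
    A = np.column_stack([np.sqrt(x), x, np.ones_like(x)])  # design matrix
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # (a, b, c)
```

Because the model is linear in (a, b, c), no iterative optimization is needed for this first fitting stage.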

8.
《Knowledge》2002,15(3):169-175
Clustered linear regression (CLR) is a new machine learning algorithm that improves the accuracy of classical linear regression by partitioning the training space into subspaces. CLR makes some assumptions about the domain and the data set. First, the target value is assumed to be a function of the feature values. Second, there are linear approximations of this function in each subspace. Finally, there are enough training instances to determine the subspaces and their linear approximations successfully. Tests indicate that if these assumptions hold, CLR outperforms all other well-known machine-learning algorithms. Partitioning may continue until the linear approximation fits all instances in the training set, which generally occurs when the number of instances in the subspace is less than or equal to the number of features plus one. Otherwise, each new subspace will have a better-fitting linear approximation. However, this causes overfitting and gives less accurate results for the test instances. The stopping point can be determined as no significant decrease, or an increase, in relative error. CLR uses a small portion of the training instances to determine the number of subspaces. The need for a high number of training instances makes this algorithm suitable for data mining applications.
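The core idea of fitting a separate linear model per subspace can be sketched in one dimension. The equal-width partition below is an illustrative assumption; CLR itself determines the partition from the data.

```python
import numpy as np

def clustered_linear_regression(x, y, n_parts=2):
    """Partition the 1-D training space into equal-width intervals and fit
    an ordinary linear regression in each part (a minimal CLR-style sketch)."""
    edges = np.linspace(x.min(), x.max(), n_parts + 1)
    models = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        models.append((lo, hi, slope, intercept))
    return models

def predict_clr(models, x0):
    """Predict with the local linear model whose interval contains x0."""
    for lo, hi, slope, intercept in models:
        if lo <= x0 <= hi:
            return slope * x0 + intercept
    # outside the training range: fall back to the nearest boundary model
    lo, hi, slope, intercept = models[0] if x0 < models[0][0] else models[-1]
    return slope * x0 + intercept
```

On a target like y = |x|, which a single linear model cannot fit, two local linear models recover the function exactly.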

9.
International Journal of Computer Mathematics, 2012, 89(8): 1768-1784
Boussinesq-type nonlinear wave equations with dispersive terms are solved via split-step Fourier methods. We decompose the equations into linear and nonlinear parts and solve them in sequence. The linear part is projected into phase space by a Fourier transform, resulting in a separable ordinary differential system that can be integrated exactly. Then, after an inverse Fourier transform, the classical explicit fourth-order Runge-Kutta method is used to solve the nonlinear subproblem. To examine the numerical accuracy and efficiency of the method, we compare the numerical solutions with exact solitary wave solutions. Additionally, various initial-value problems for all the listed Boussinesq-type systems are studied numerically. In the study, we observe that sech²-type waves for the KdV-BBM system split into several solitons, which is a very interesting physical phenomenon. The interaction between solitons, including overtaking and head-on collisions, is also simulated.
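The split-step structure (dispersive part exact in Fourier space, nonlinear part by RK4) can be sketched on the simpler KdV equation u_t + u·u_x + u_xxx = 0, used here as a stand-in for the Boussinesq-type systems of the paper; the grid, step size, and Strang splitting below are illustrative assumptions.

```python
import numpy as np

def split_step_kdv(u0, dt, n_steps, L=2 * np.pi):
    """Strang-type split-step sketch for u_t + u u_x + u_xxx = 0:
    the dispersive part is integrated exactly in Fourier space,
    the nonlinear part with classical RK4 in physical space."""
    n = len(u0)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # integer wavenumbers for L = 2*pi
    lin = np.exp(1j * k**3 * dt / 2)              # exact half-step of u_t = -u_xxx

    def nonlinear_rhs(u):
        # u_t = -u u_x = -(u^2/2)_x, with a spectral derivative
        return -np.real(np.fft.ifft(1j * k * np.fft.fft(0.5 * u**2)))

    u = u0.astype(float)
    for _ in range(n_steps):
        u = np.real(np.fft.ifft(lin * np.fft.fft(u)))   # linear half-step
        k1 = nonlinear_rhs(u)                           # RK4 on the nonlinear part
        k2 = nonlinear_rhs(u + dt / 2 * k1)
        k3 = nonlinear_rhs(u + dt / 2 * k2)
        k4 = nonlinear_rhs(u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        u = np.real(np.fft.ifft(lin * np.fft.fft(u)))   # linear half-step
    return u
```

The linear factor has modulus one, so the dispersive step is exact and unconditionally stable; the scheme also preserves the mean of u, mirroring the conservation of mass in KdV.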

10.
It is widely recognized that whether the selected kernel matches the data determines the performance of kernel-based methods. Ideally, the data would be linearly separable in the kernel-induced feature space, in which case the Fisher linear discriminant criterion can be used as a cost function to optimize the kernel function. In many applications, however, the data may not be linearly separable even after the kernel transformation; for example, it may have a multimodally distributed structure. In such cases a nonlinear classifier is preferred, and the Fisher criterion is clearly not a suitable kernel optimization rule. Motivated by this issue, we propose a localized kernel Fisher criterion, instead of the traditional Fisher criterion, as the kernel optimization rule, increasing the local margins between embedded classes in the kernel-induced feature space. Experimental results on benchmark data and on measured radar high-resolution range profile (HRRP) data show that classification performance can be improved by the proposed method.

11.
Objective: Implicit curves can describe complex geometric shapes and topologies, but the control grid of a traditional implicit B-spline curve requires many redundant control points to satisfy topological constraints. Moreover, acquired data points sometimes carry not only coordinates but also normal constraints. To address this, an implicit T-spline curve reconstruction algorithm with normal constraints is proposed. Methods: Sampling density is adapted to curvature, and a 2D T-mesh is constructed from the scattered data points using a binary tree and its subdivision process. An effective curve-fitting model based on implicit T-spline functions is proposed: offset data points and a smoothing term are added to eliminate extra zero level sets, and a normal term is added to reduce the normal error of the curve. Following the optimality principle, the problem is converted into a linear system whose solution gives the control coefficients, thereby reconstructing the implicit curve. Local refinement of the T-mesh in regions with large error improves the accuracy of the reconstructed implicit curve. Results: Experiments on three data sets compare the algorithm with two other methods. The normal error is reduced markedly: the average normal error drops from the order of 10⁻³ to 10⁻⁴, and the maximum normal error from 10⁻² to 10⁻³. In terms of reconstructed curve quality, extra zero level sets are eliminated. Compared with the implicit B-spline control grid, the T-meshes for the three data sets use only 55.88%, 39.80% and 47.06% as many control points. Conclusion: The proposed algorithm effectively reduces the normal error and eliminates extra zero level sets while maintaining data-point accuracy. Compared with implicit B-spline curves, it uses fewer control coefficients and runs faster.

12.
In multi-nozzle full-color inkjet printers, storing and transmitting the parameters of the ink temperature-voltage curve (T-V curve) consumes a large amount of MCU memory, and processing the parameter data raises further problems. Based on actual design requirements, an improved scheme is proposed: least-squares straight-line fitting is used to compress the stored parameter data. This not only reduces the amount of parameter data to be loaded but also improves the overall working efficiency of the printer.
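The compression idea is that a sampled T-V table can be replaced by the two parameters of its least-squares line. A minimal sketch using the closed-form normal equations (pure Python, as might run on a host tool preparing MCU data; the function name and data are illustrative):

```python
def compress_tv_curve(temps, volts):
    """Replace a sampled temperature-voltage table with the (slope, intercept)
    of its least-squares straight line, computed from the normal equations."""
    n = len(temps)
    st, sv = sum(temps), sum(volts)
    stt = sum(t * t for t in temps)
    stv = sum(t * v for t, v in zip(temps, volts))
    slope = (n * stv - st * sv) / (n * stt - st * st)
    intercept = (sv - slope * st) / n
    return slope, intercept
```

Instead of storing 2n table values, the firmware only needs two coefficients per curve and can reconstruct V(T) = slope·T + intercept on demand.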

13.
沈健, 蒋芸, 张亚男, 胡学伟. Computer Science, 2016, 43(12): 139-145
Multiple kernel learning is a new focus in machine learning. Kernel methods increase the computational power of linear classifiers by mapping data into a high-dimensional space and are currently an effective approach to nonlinear pattern analysis and classification. In complex situations, however, a learning method built on a single kernel function cannot fully meet practical needs such as heterogeneous or irregular data, large sample sizes, and uneven sample distributions, so combining multiple kernel functions to obtain better results is a natural trend. This paper therefore proposes a sample-weighted multi-scale kernel support vector machine: the fitting abilities of kernels at different scales are weighted per sample, yielding a sample-weighted multi-scale kernel SVM decision function. Experiments on several data sets show that the proposed method achieves high classification accuracy on all of them.
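The simplest multi-scale combination is a convex sum of Gaussian kernels at several widths, which remains a valid (positive semi-definite) kernel. This sketch omits the paper's per-sample weighting; the scales and uniform weights are illustrative assumptions.

```python
import numpy as np

def multiscale_kernel(X1, X2, sigmas=(0.5, 1.0, 2.0), weights=None):
    """Weighted sum of Gaussian kernels at several scales — the basic
    multi-kernel combination behind multi-scale kernel SVMs."""
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    # each term is PSD, so the convex combination is a valid kernel
    return sum(w * np.exp(-sq / (2 * s**2)) for w, s in zip(weights, sigmas))
```

The resulting Gram matrix can be passed to any SVM solver that accepts precomputed kernels; narrow scales capture local structure while wide scales capture global structure.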

14.
施艳容, 侯涛. Computer Engineering, 2011, 37(11): 237-239
To address the low recognition efficiency of traditional vectorization algorithms on scanned drawings, a tracking-and-fitting algorithm based on sparse pixel traversal is used to recognize digital curves in scanned drawings. The algorithm obtains key-point data of primitive regions through an improved orthogonal pixel-tracking scheme and extracts digital curves such as line segments, circles and circular arcs through the corresponding fitting formulas. Experimental results show that, compared with traditional vectorization algorithms, this algorithm has clear advantages in both time and space complexity.

15.
A nonlinear feature extraction method is presented which can reduce the data dimension down to the number of classes, providing dramatic savings in computational costs. The dimension-reducing nonlinear transformation is obtained by implicitly mapping the input data into a feature space using a kernel function, and then finding a linear mapping, based on an orthonormal basis of the class centroids in the feature space, that maximizes between-class separation. The experimental results demonstrate that our method extracts nonlinear features effectively, so that competitive classification performance can be obtained with linear classifiers in the reduced space.

16.
In this work the kernel Adaline algorithm is presented. The new algorithm is a generalisation of Widrow and Hoff's linear Adaline and makes it possible to approximate nonlinear functional relationships. Like the linear Adaline, the proposed neural network algorithm minimises the least-mean-squares (LMS) cost function. The kernel Adaline's cost function is guaranteed to be convex, so the method does not suffer from the local optima known from conventional neural networks. The algorithm uses the potential function operators of Aizerman and colleagues to map the training points in a first stage into a very high-dimensional nonlinear "feature" space. In the second stage the algorithm determines the LMS solution in this space. Weight-decay regularisation avoids overfitting and can be performed efficiently. The kernel Adaline algorithm works in a sequential fashion, is conceptually simple, and is numerically robust. The method shows high performance in tasks like one-dimensional curve fitting, system identification, and speech processing.
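Sequential LMS learning in a kernel feature space can be sketched as follows. This is a simplified single-pass variant (essentially kernel LMS) rather than the kernel Adaline's full batch iteration, and it omits weight decay; the Gaussian kernel, step size, and data are illustrative assumptions.

```python
import math

def kernel_lms_train(xs, ys, eta=0.5, sigma=0.5):
    """Sequential LMS in a Gaussian-kernel feature space: each incoming
    sample adds one expansion coefficient proportional to its error."""
    centers, alphas = [], []
    for x, y in zip(xs, ys):
        pred = sum(a * math.exp(-(x - c) ** 2 / (2 * sigma**2))
                   for a, c in zip(alphas, centers))
        err = y - pred                  # instantaneous LMS error
        centers.append(x)
        alphas.append(eta * err)        # new kernel expansion term
    return centers, alphas

def kernel_lms_predict(centers, alphas, x, sigma=0.5):
    return sum(a * math.exp(-(x - c) ** 2 / (2 * sigma**2))
               for a, c in zip(alphas, centers))
```

The model is a growing kernel expansion f(x) = Σᵢ αᵢ k(xᵢ, x), so a nonlinear curve like a sine can be tracked by what is structurally still an LMS update.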

17.
Discriminative Common Vector Method With Kernels   Total citations: 3 (self: 0, others: 3)
In some pattern recognition tasks, the dimension of the sample space is larger than the number of samples in the training set. This is known as the "small sample size problem". Linear discriminant analysis (LDA) techniques cannot be applied directly to the small sample size case. The small sample size problem is also encountered when kernel approaches are used for recognition. In this paper, we attempt to answer the question of "How should one choose the optimal projection vectors for feature extraction in the small sample size case?" Based on our findings, we propose a new method called the kernel discriminative common vector method. In this method, we first nonlinearly map the original input space to an implicit higher dimensional feature space, in which the data are hoped to be linearly separable. Then, the optimal projection vectors are computed in this transformed space. The proposed method yields an optimal solution for maximizing a modified Fisher's linear discriminant criterion, discussed in the paper. Thus, under certain conditions, a 100% recognition rate is guaranteed for the training set samples. Experiments on test data also show that, in many situations, the generalization performance of the proposed method compares favorably with other kernel approaches.

18.
Data may be afflicted with uncertainty. Uncertain data may be represented by an interval value or, more generally, by a fuzzy set. A number of classification methods have considered uncertainty in the features of samples. Some of these methods are extended versions of the support vector machine (SVM), such as the Interval-SVM (ISVM), Holder-ISVM and Distance-ISVM, which are used to obtain a classifier for separating samples whose features are interval values. In this paper, we extend the SVM for robust classification of linearly/nonlinearly separable data whose features are fuzzy numbers. The support of such a training sample is a hypercube. Our proposed method tries to obtain a hyperplane (in the input space or in a high-dimensional feature space) such that the point of each training sample's hypercube nearest to the hyperplane is separated with the widest symmetric margin. This strategy can reduce the misclassification probability of our proposed method. Experimental results on six real data sets show that the classification rate of our method is better than or equal to that of the well-known SVM, ISVM, Holder-ISVM and Distance-ISVM on all of these data sets.

19.
Implicit simplicial models for adaptive curve reconstruction   Total citations: 6 (self: 0, others: 6)
Parametric deformable models have been used extensively and very successfully for reconstructing free-form curves and surfaces and for tracking nonrigid deformations, but they require previous knowledge of the topological type of the data and good initial curve or surface estimates. With deformable models, it is also computationally expensive to check for and prevent self-intersections while tracking deformations. The implicit simplicial models that we introduce in this paper are implicit curves and surfaces defined by piecewise linear functions. This representation allows for local deformations, control of the topological type, and prevention of self-intersections during deformations. As a first application, we also describe an algorithm for 2D curve reconstruction from unorganized sets of data points. The topology, the number of connected components, and the geometry of the data are all estimated using an adaptive space-subdivision approach. The four main components of the algorithm are topology estimation, curve fitting, adaptive space subdivision, and mesh relaxation.

20.
Piecewise Straight-Line Fitting by the Least Squares Method   Total citations: 14 (self: 2, others: 12)
田垅, 刘宗田. Computer Science, 2012, 39(103): 482-484
Curve fitting is a very important descriptor in image analysis. The most common curve-fitting method is least squares, but ordinary least squares has certain limitations, and a number of researchers have proposed improvements. This paper further improves the least squares method and proposes a new piecewise straight-line fitting algorithm to replace polynomial curve fitting, simplifying the construction of the mathematical model and reducing computation so that point sequences can be fitted better.
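A minimal form of piecewise straight-line least squares is fitting two segments and choosing the breakpoint that minimises the total squared error. The two-segment restriction and exhaustive breakpoint search below are illustrative simplifications of the general idea, not the authors' exact algorithm.

```python
import numpy as np

def piecewise_two_lines(x, y):
    """Fit two straight-line segments by least squares, searching for the
    breakpoint index that minimises the total squared error."""
    best = None
    for b in range(2, len(x) - 2):          # keep at least 2 points per segment
        left = np.polyfit(x[:b], y[:b], 1)
        right = np.polyfit(x[b:], y[b:], 1)
        sse = (np.sum((np.polyval(left, x[:b]) - y[:b]) ** 2)
               + np.sum((np.polyval(right, x[b:]) - y[b:]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, b, left, right)
    return best  # (sse, break_index, left_coeffs, right_coeffs)
```

On a tent-shaped point sequence, which a single line or low-degree polynomial fits poorly, the two-segment model recovers both slopes exactly.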
