Similar Documents
20 similar documents found (search time: 218 ms)
1.
Least Squares Hidden Space Support Vector Machines   Cited by: 9 (self-citations: 0, others: 9)
王玲  薄列峰  刘芳  焦李成 《计算机学报》2005,28(8):1302-1307
Using a least-squares loss function in the hidden space, least squares hidden space support vector machines (LSHSSVMs) are proposed. Like hidden space support vector machines (HSSVMs), LSHSSVMs do not require the kernel function to satisfy the positive-definiteness condition, which widens the range of admissible SVM kernels. Because of the least-squares loss, the optimization problem produced by LSHSSVMs is an unconstrained convex quadratic program, which is easier to solve than the constrained convex quadratic program produced by HSSVMs. Simulation results show that the proposed algorithm has certain advantages over hidden space support vector machines in both computation time and generalization ability.
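The least-squares loss replaces the inequality-constrained QP of a standard SVM with one linear system. The snippet below is a minimal sketch of a generic least-squares SVM trained in the dual (not the hidden-space variant of this paper); the RBF kernel, the regularization parameter `gamma`, and the toy data in the usage are illustrative assumptions.

```python
import numpy as np

def lssvm_train(X, y, kernel, gamma=1.0):
    """Train a least-squares SVM: the LS loss turns the constrained QP
    into a single linear (KKT) system in the bias b and multipliers alpha."""
    n = len(y)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                       # bias-related constraint row
    A[1:, 0] = 1.0                       # bias column
    A[1:, 1:] = K + np.eye(n) / gamma    # regularized kernel block
    rhs = np.concatenate(([0.0], np.asarray(y, dtype=float)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]               # (alpha, b)

def lssvm_predict(X_train, alpha, b, kernel, x):
    s = sum(a * kernel(xt, x) for a, xt in zip(alpha, X_train)) + b
    return 1.0 if s >= 0 else -1.0
```

Because the kernel block is only regularized (not required to be positive definite by the solver), the same linear-system view carries over to the indefinite kernels that hidden space variants allow.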

2.
The Effect of Separable-Variable Kernel Functions on Nonlinear Support Vector Classifiers   Cited by: 2 (self-citations: 0, others: 2)
It is proved that separable-variable functions satisfy the conditions of Mercer's theorem in Hilbert space, which provides a new way to select kernel functions when constructing new nonlinear support vector classifiers. Comparing nonlinear support vector classifiers built with kernels constructed by the new method against those built with other kernels yields favorable results.

3.
Wavelet support vector machine   Cited by: 28 (self-citations: 0, others: 28)
An admissible support vector (SV) kernel (the wavelet kernel), by which we can construct a wavelet support vector machine (SVM), is presented. The wavelet kernel is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. The existence of wavelet kernels is proven by results of theoretic analysis. Computer simulations show the feasibility and validity of wavelet support vector machines (WSVMs) in regression and pattern recognition.
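As a concrete illustration, a translation-invariant wavelet kernel is a product of one-dimensional mother wavelets. The sketch below assumes the commonly cited Morlet-style mother wavelet h(u) = cos(1.75u)·exp(−u²/2) and a single dilation parameter `a`; both choices are assumptions, not necessarily the exact construction of the paper.

```python
import math

def wavelet_kernel(x, xp, a=1.0):
    """Translation-invariant wavelet kernel: a product over dimensions of
    the Morlet-style mother wavelet h(u) = cos(1.75 u) * exp(-u^2 / 2)."""
    k = 1.0
    for xi, xpi in zip(x, xp):
        u = (xi - xpi) / a           # dilated difference in one dimension
        k *= math.cos(1.75 * u) * math.exp(-u * u / 2.0)
    return k
```

At zero distance every factor is h(0) = 1, so the kernel evaluates to 1 on the diagonal, as expected of a normalized translation-invariant kernel.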

4.
Fuzzy Regression Analysis by Support Vector Learning Approach   Cited by: 1 (self-citations: 0, others: 1)
Support vector machines (SVMs) have been very successful in pattern classification and function approximation problems for crisp data. In this paper, we incorporate the concept of fuzzy set theory into the support vector regression machine. The parameters to be estimated in SVM regression, such as the components of the weight vector and the bias term, are set to be fuzzy numbers. This integration preserves the benefits of the SVM regression model and the fuzzy regression model, and is applied to fuzzy nonlinear regression analysis. In contrast to previous fuzzy nonlinear regression models, the proposed algorithm is a model-free method in the sense that the underlying model function need not be assumed. By using different kernel functions, we can construct different learning machines with arbitrary types of nonlinear regression functions. Moreover, the proposed method achieves automatic accuracy control in the fuzzy regression analysis task: the upper bound on the number of errors is controlled by user-predefined parameters. Experimental results indicating the performance of the proposed approach are then presented.

5.
The kernel function is a key component of the SVM, and its choice affects both the learning ability and the generalization ability of the learning machine. Different kernel functions determine different nonlinear transformations and feature spaces, so training an SVM with different kernels yields different classification results. This paper proposes a mixed kernel function [1], Kmix = λKpoly + (1−λ)Krbf, which combines the advantages of the polynomial kernel and the radial basis function kernel. Experiments comparing an SVM using the mixed kernel against SVMs built with ordinary kernels show that the mixed-kernel SVM achieves higher classification accuracy.
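The mixed kernel Kmix = λKpoly + (1−λ)Krbf can be sketched directly; a convex combination of Mercer kernels is again a Mercer kernel, so the combination is admissible. The degree, offset, and width parameters below are illustrative assumptions, not values fixed by the paper.

```python
import numpy as np

def k_poly(x, z, degree=2, c=1.0):
    # Polynomial kernel: (x . z + c) ** degree
    return (np.dot(x, z) + c) ** degree

def k_rbf(x, z, sigma=1.0):
    # Gaussian RBF kernel: exp(-||x - z||^2 / (2 sigma^2))
    d = np.asarray(x) - np.asarray(z)
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

def k_mix(x, z, lam=0.5):
    # Kmix = lam * Kpoly + (1 - lam) * Krbf; a convex combination of
    # Mercer kernels is itself a Mercer kernel.
    return lam * k_poly(x, z) + (1.0 - lam) * k_rbf(x, z)
```

Setting λ = 1 or λ = 0 recovers the pure polynomial or pure RBF kernel, so the mixing weight interpolates between the two behaviors.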

6.
Based on the conditions for SVM kernel functions, the reproducing kernel of the Sobolev Hilbert space is improved to give a new SVM kernel function, and an improved least squares reproducing-kernel SVM regression model with fewer parameters is proposed. Simulation results show that using the improved reproducing kernel in a least squares SVM is feasible: the improved reproducing kernel retains the nonlinear mapping property of a kernel function while inheriting the reproducing kernel's capability of progressively refined approximation of nonlinear functions, and it yields finer regression results than ordinary kernel functions.

7.
A binary classification problem is reduced to the minimization of convex regularized empirical risk functionals in a reproducing kernel Hilbert space. The solution is searched for in the form of a finite linear combination of kernel support functions (Vapnik’s support vector machines). Risk estimates for a misclassification as a function of the training sample size and other model parameters are obtained.

8.
Generalized discriminant analysis using a kernel approach   Cited by: 100 (self-citations: 0, others: 100)
Baudat G  Anouar F 《Neural computation》2000,12(10):2385-2404
We present a new method that we call generalized discriminant analysis (GDA) to deal with nonlinear discriminant analysis using kernel function operator. The underlying theory is close to the support vector machines (SVM) insofar as the GDA method provides a mapping of the input vectors into high-dimensional feature space. In the transformed space, linear properties make it easy to extend and generalize the classical linear discriminant analysis (LDA) to nonlinear discriminant analysis. The formulation is expressed as an eigenvalue problem resolution. Using a different kernel, one can cover a wide class of nonlinearities. For both simulated data and alternate kernels, we give classification results, as well as the shape of the decision function. The results are confirmed using real data to perform seed classification.

9.
In some nonlinear dynamic systems, the function of the state variables can usually be separated from the function of the control variables, which complicates the identification of such systems. To solve this problem, an improved least squares support vector regression (LSSVR) model with multiple kernels is proposed and applied to the identification of nonlinear separable systems. The method exploits the excellent nonlinear mapping ability of the Morlet wavelet kernel function and combines the state- and control-variable information into one kernel matrix. Using the composite wavelet kernel, the LSSVR contains two nonlinear functions, of the state variables and the control variables respectively; in this way the regression function gains better nonlinear mapping ability and can approximate almost any curve in the space of square-integrable functions. The two functions are then used to identify the corresponding components of the separable nonlinear dynamic system. Simulation results show that the multiple-kernel LSSVR method achieves much higher identification accuracy than the single-kernel method, and that the Morlet wavelet kernel is more efficient than the other kernels.

10.
Kernel functions are used in support vector machines (SVMs) to compute the inner product in a higher-dimensional feature space, and SVM classification performance depends on the chosen kernel. The radial basis function (RBF) kernel is a distance-based kernel that has been applied successfully in many tasks. This paper focuses on improving the accuracy of SVMs by proposing a nonlinear combination of multiple RBF kernels to obtain more flexible kernel functions: multi-scale RBF kernels are weighted and combined. The proposed kernel allows better discrimination in the feature space and is proved to be a Mercer kernel. Furthermore, evolution strategies (ESs) are used to adjust the hyperparameters of the SVM, with training accuracy, the bound on the generalization error, and subset cross-validation on training accuracy considered as objective functions in the evolutionary process. The experimental results show that the accuracy of multi-scale RBF kernels is better than that of a single RBF kernel; moreover, subset cross-validation on training accuracy is the most suitable objective and yields good results on benchmark datasets.
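A simple weighted sum of RBF kernels at several widths illustrates the multi-scale idea (the paper's actual combination and its evolutionary tuning are more elaborate). The widths and weights below are illustrative assumptions, with the weights chosen to sum to 1 so the kernel is normalized on the diagonal.

```python
import numpy as np

def multiscale_rbf(x, z, widths=(0.5, 1.0, 2.0), weights=(0.2, 0.3, 0.5)):
    """Weighted combination of RBF kernels at several scales; a sum of
    Mercer kernels with nonnegative weights is again a Mercer kernel."""
    d2 = float(np.sum((np.asarray(x) - np.asarray(z)) ** 2))
    return sum(w * np.exp(-d2 / (2.0 * s ** 2)) for w, s in zip(weights, widths))
```

The narrow widths give sharp local discrimination while the wide widths keep distant points weakly coupled, which is the flexibility the multi-scale construction is after.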

11.
Constructing Kernel Functions by Interpolation   Cited by: 16 (self-citations: 3, others: 16)
In recent years, research on statistical learning theory (SLT) and support vector machines (SVMs) has attracted increasing attention in the international machine learning community, with kernel functions a persistent focus, since different kernels lead to very different SVM generalization ability; how to choose a suitable kernel for the given data has become a central concern. This paper first points out that the explicit expression of a kernel satisfying Mercer's condition is not the crux of the problem. On that basis, it proposes using scattered-data interpolation to determine the inner-product values of the points of interest in the feature space, in place of the role played by the explicit expression of a traditional kernel. Experiments show that this method not only effectively reduces the uncertainty in designing and training SVMs, but also generalizes better than most SVMs based on traditional kernels.

12.
Improving the Performance of Kernel Principal Component Analysis with a Combined Kernel Function   Cited by: 11 (self-citations: 2, others: 11)
To improve the recognition rate of image classification, after analyzing the conditions a kernel must satisfy and the characteristics of different kernels in kernel-based learning algorithms, a new kernel function, the combined kernel, is proposed and applied to kernel principal component analysis (KPCA) for image feature extraction. Because the new kernel can extract both global and local features, it improves the performance of KPCA in image feature extraction. To validate the proposed kernel, KPCA with the new kernel is first used to extract features from images of handwritten digits and faces, which are then recognized by a linear support vector machine (SVM). Experimental results show that, in terms of recognition rate, the features extracted with the combined kernel are of higher quality than those extracted with the original kernels.

13.
The kernel function method in support vector machine (SVM) is an excellent tool for nonlinear classification. How to design a kernel function is difficult for an SVM nonlinear classification problem, even for the polynomial kernel function. In this paper, we propose a new kind of polynomial kernel functions, called semi-tensor product kernel (STP-kernel), for an SVM nonlinear classification problem by semi-tensor product of matrix (STP) theory. We have shown the existence of the STP-kernel function and verified that it is just a polynomial kernel. In addition, we have shown the existence of the reproducing kernel Hilbert space (RKHS) associated with the STP-kernel function. Compared to the existing methods, it is much easier to construct the nonlinear feature mapping for an SVM nonlinear classification problem via an STP operator.

14.
In the past decade, support vector machines (SVMs) have gained the attention of many researchers. SVMs are non-parametric supervised learning schemes that rely on statistical learning theory, which enables learning machines to generalize well to unseen data. SVMs are kernel-based methods introduced as a robust approach to classification and regression problems; more recently they have handled nonlinear identification problems through so-called support vector regression. In SVM designs for nonlinear identification, a nonlinear model is represented by an expansion in terms of nonlinear mappings of the model input. The nonlinear mappings define a feature space, which may have infinite dimension. In this context, a relevant identification approach is least squares support vector machines (LS-SVMs). Compared with other identification methods, LS-SVMs possess prominent advantages: their generalization performance (i.e. error rates on test sets) either matches or is significantly better than that of competing methods, and, more importantly, the performance does not depend on the dimensionality of the input data. Given a constrained quadratic programming problem with a regularized cost function, the training of an LS-SVM involves the selection of the kernel parameters and the regularization parameter of the objective function; a good choice of these parameters is crucial for the performance of the estimator. In this paper, the proposed LS-SVM design combines LS-SVM with a new chaotic differential evolution optimization approach based on the Ikeda map (CDEK). The CDEK is adopted for tuning the regularization parameter and the radial basis function bandwidth. Simulations using LS-SVMs on NARX (Nonlinear AutoRegressive with eXogenous inputs) models for the identification of a thermal process show the effectiveness and practicality of the proposed CDEK algorithm compared with the classical DE approach.

15.
Support vector learning for fuzzy rule-based classification systems   Cited by: 11 (self-citations: 0, others: 11)
To design a fuzzy rule-based classification system (fuzzy classifier) with good generalization ability in a high dimensional feature space has been an active research topic for a long time. As a powerful machine learning approach for pattern recognition problems, the support vector machine (SVM) is known to have good generalization ability. More importantly, an SVM can work very well on a high- (or even infinite) dimensional feature space. This paper investigates the connection between fuzzy classifiers and kernel machines, establishes a link between fuzzy rules and kernels, and proposes a learning algorithm for fuzzy classifiers. We first show that a fuzzy classifier implicitly defines a translation invariant kernel under the assumption that all membership functions associated with the same input variable are generated from location transformation of a reference function. Fuzzy inference on the IF-part of a fuzzy rule can be viewed as evaluating the kernel function. The kernel function is then proven to be a Mercer kernel if the reference functions meet a certain spectral requirement. The corresponding fuzzy classifier is named positive definite fuzzy classifier (PDFC). A PDFC can be built from the given training samples based on a support vector learning approach with the IF-part fuzzy rules given by the support vectors. Since the learning process minimizes an upper bound on the expected risk (expected prediction error) instead of the empirical risk (training error), the resulting PDFC usually has good generalization. Moreover, because of the sparsity properties of the SVMs, the number of fuzzy rules is irrelevant to the dimension of input space. In this sense, we avoid the "curse of dimensionality." Finally, PDFCs with different reference functions are constructed using the support vector learning approach. The performance of the PDFCs is illustrated by extensive experimental results. Comparisons with other methods are also provided.

16.
A new nonlinear feature extraction method, parameterized direct discriminant analysis in hidden space, is proposed. The main idea is to map the original input space nonlinearly into a hidden space via a kernel function; to deal with the fact that the within-class scatter matrix in that hidden space is always singular, parameterized direct discriminant analysis is used for feature extraction. Unlike existing kernel feature-extraction methods, this method does not require the kernel to satisfy Mercer's theorem, which widens the range of admissible kernels. More importantly, by applying parameterized direct discriminant analysis in the hidden space, the method retains the advantages of parameterized direct discriminant analysis while effectively extracting the samples' nonlinear features. A more reasonable weighting coefficient matrix is also proposed, improving classification performance. Experimental results on a subset of the FERET face database verify the effectiveness of the method.

17.
The Bayesian evidence framework has been successfully applied to the design of multilayer perceptrons (MLPs) in the work of MacKay. Nevertheless, the training of MLPs suffers from drawbacks like the nonconvex optimization problem and the choice of the number of hidden units. In support vector machines (SVMs) for classification, as introduced by Vapnik, a nonlinear decision boundary is obtained by mapping the input vector first in a nonlinear way to a high-dimensional kernel-induced feature space in which a linear large margin classifier is constructed. Practical expressions are formulated in the dual space in terms of the related kernel function, and the solution follows from a (convex) quadratic programming (QP) problem. In least-squares SVMs (LS-SVMs), the SVM problem formulation is modified by introducing a least-squares cost function and equality instead of inequality constraints, and the solution follows from a linear system in the dual space. Implicitly, the least-squares formulation corresponds to a regression formulation and is also related to kernel Fisher discriminant analysis. The least-squares regression formulation has advantages for deriving analytic expressions in a Bayesian evidence framework, in contrast to the classification formulations used, for example, in gaussian processes (GPs). The LS-SVM formulation has clear primal-dual interpretations, and without the bias term, one explicitly constructs a model that yields the same expressions as have been obtained with GPs for regression. In this article, the Bayesian evidence framework is combined with the LS-SVM classifier formulation. Starting from the feature space formulation, analytic expressions are obtained in the dual space on the different levels of Bayesian inference, while posterior class probabilities are obtained by marginalizing over the model parameters. Empirical results obtained on 10 public domain data sets show that the LS-SVM classifier designed within the Bayesian evidence framework consistently yields good generalization performances.

18.
SVM kernel functions fall into two classes: local kernels and global kernels. The value of a local kernel is affected only by data points that are close together, giving good learning ability; the value of a global kernel is also affected by distant data points, giving good generalization ability. To address the weakness of local kernels (good learning ability but poor generalization), a method is proposed that combines a local kernel and a global kernel into a new composite kernel. Experimental results show that, compared with local and global kernels alone, the new composite kernel has better predictive ability and adapts well to incremental learning.

19.
In support vector classification, because samples are unevenly distributed, a Gaussian kernel with a single width overfits in dense regions of the space and underfits in sparse regions, i.e., it carries local risk. To address this, a global secondary kernel is constructed to reduce the local risk produced by the Gaussian kernel; the resulting mixed kernel is called the primary-secondary kernel. Using power series, positive-definiteness conditions for the primary-secondary kernel are constructively derived and proved, and a two-stage model selection algorithm based on a genetic algorithm is further proposed to optimize its parameters. Experiments verify the superiority of the primary-secondary kernel and the model selection method.

20.
A Kernel-Based Nonlinear Perceptron Algorithm   Cited by: 16 (self-citations: 1, others: 16)
To improve the classification ability of the classic Rosenblatt perceptron algorithm, this paper proposes a kernel-based nonlinear perceptron algorithm, the kernel perceptron for short. Its distinguishing feature is that a nonlinear classifier is designed through a simple iterative procedure and a kernel function; the kernel perceptron can handle problems that are linearly inseparable in the original attribute space but linearly separable in a high-dimensional feature space. The paper also analyzes in detail the relationship between this algorithm and other nonlinear methods such as radial basis function neural networks, the potential function method, and support vector machines. Computations on artificial and real data show that, compared with the linear perceptron algorithm, the kernel perceptron effectively improves classification accuracy.
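The kernel perceptron keeps the weight vector implicitly as per-sample mistake counts, so the classic iteration works unchanged with any kernel. The sketch below is a minimal dual-form version; the RBF kernel and the XOR data in the usage are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def kernel_perceptron(X, y, kernel, epochs=20):
    """Dual (kernelized) Rosenblatt perceptron: the weight vector is kept
    implicitly as per-sample mistake counts alpha."""
    n = len(y)
    alpha = np.zeros(n)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    for _ in range(epochs):
        mistakes = 0
        for i in range(n):
            s = np.dot(alpha * y, K[:, i])   # implicit <w, phi(x_i)>
            if y[i] * s <= 0:                # misclassified (or on boundary)
                alpha[i] += 1
                mistakes += 1
        if mistakes == 0:                    # converged: separating in feature space
            break
    return alpha

def kp_predict(X, y, alpha, kernel, x):
    s = sum(a * yi * kernel(xi, x) for a, yi, xi in zip(alpha, y, X))
    return 1.0 if s >= 0 else -1.0
```

With a Gaussian kernel this iteration separates the XOR pattern, which no linear perceptron can, illustrating the claim about problems that are linearly inseparable in the original attribute space.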
