Similar Documents
20 similar documents found (search time: 234 ms)
1.
This paper first reviews the basic idea and implementation of the support vector machine (SVM), then focuses on SVM kernel functions, studying the selection and optimization of commonly used kernels from a theoretical standpoint. Classification experiments with different kernel choices are carried out on the Glass Identification, Iris plant, and Car Evaluation data sets from the UCI repository. The simulation results show that different kernels applied to the same data produce different classification results, so choosing an appropriate kernel has a large impact on classification performance.
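A minimal sketch of the kernel-comparison experiment this abstract describes, using scikit-learn and the UCI Iris data; the Glass and Car Evaluation sets would be handled the same way, and the kernel parameters are left at library defaults as an assumption.

```python
# Compare the common SVM kernels on one UCI data set by cross-validation.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{kernel:8s} accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```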

2.
Design of an SVM Decision Tree Algorithm Based on Multiple Mixed Kernel Functions (total citations: 5; self-citations: 0; other citations: 5)
Different kernel functions have different characteristics, and since each sub-SVM in an SVM decision tree faces a different classification sub-problem, its kernel function and parameters should differ as well. By tuning the parameters of a mixed kernel function to form distinct kernels, a multi-class classification algorithm is given that trains the SVM decision tree with multiple mixed kernels. Simulation experiments show that this algorithm achieves higher classification accuracy than training the SVM decision tree with a single kernel.
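A minimal sketch of a mixed kernel of the kind the abstract describes: a convex combination of an RBF and a polynomial kernel, exposed as a callable that scikit-learn's SVC accepts. The weight `w` and the kernel parameters are assumptions; per the paper, each node of the decision tree would tune its own values.

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.svm import SVC

def mixed_kernel(w=0.5, gamma=0.5, degree=3):
    """Convex combination of a local (RBF) and a global (polynomial) kernel."""
    def k(X, Y):
        return (w * rbf_kernel(X, Y, gamma=gamma)
                + (1 - w) * polynomial_kernel(X, Y, degree=degree))
    return k

# Each sub-SVM in the decision tree could use its own mixing weight, e.g.:
node_svm = SVC(kernel=mixed_kernel(w=0.7))
```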

3.
A High-Performance Support Vector Machine Classification Algorithm Based on Kernel Transformation (total citations: 1; self-citations: 1; other citations: 0)
Because the kernel function in the traditional support vector machine (SVM) algorithm does not take the characteristics of the training data into account, it is often not optimal for a given problem. To obtain better classification results, an SVM classification method based on a kernel-transformation idea is proposed. Using the class labels of the training samples, the method applies a linear transformation to the initial kernel, thereby indirectly improving the mapping from input space to feature space, and then uses the transformed kernel to solve for the separating hyperplane in feature space. Simulation and experimental results show that the method improves classification performance, reduces the influence of noise, and makes the classification results more robust.
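The abstract does not give the paper's exact transformation, so the sketch below shows a related, well-known data-dependent kernel modification in its place: the conformal transformation of Amari and Wu (1999), which rescales an initial RBF kernel around the support vectors found by a first-pass SVM. Treat it as an illustration of the general idea, not the paper's method.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def conformal_svm(X, y, gamma=0.5, tau=1.0):
    base = SVC(kernel="rbf", gamma=gamma).fit(X, y)   # first-pass SVM
    sv = base.support_vectors_

    def c(Z):
        # conformal factor: largest near the estimated decision boundary
        return rbf_kernel(Z, sv, gamma=1.0 / (2 * tau**2)).sum(axis=1)

    def k(A, B):
        # modified kernel K~(a, b) = c(a) c(b) K(a, b)
        return np.outer(c(A), c(B)) * rbf_kernel(A, B, gamma=gamma)

    return SVC(kernel=k).fit(X, y)                    # second-pass SVM
```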

4.
Building on an analysis of the limitations of existing kernel optimization methods based on the empirical feature space, a kernel optimization method based on a maximum sub-class margin criterion is proposed. The method first establishes the maximum sub-class margin criterion; then, using the properties of the data in the empirical feature space, it derives expressions for the between-class and within-class scatter matrices of the samples; finally, it uses singular value decomposition (SVD) to select optimal kernel parameters. Simulation experiments on UCI (University of California, Irvine) data sets demonstrate the correctness and effectiveness of the method.
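A minimal sketch of the general recipe behind such methods: map the data into the empirical feature space (here via kernel PCA), score each candidate kernel parameter by the scatter-matrix ratio trace(S_b)/trace(S_w), and keep the best. The paper's actual criterion (maximum sub-class margin with SVD) is more involved; the candidate grid is an assumption.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def separability(Z, y):
    """Between-class over within-class scatter (traces) in the mapped space."""
    mu = Z.mean(axis=0)
    sb = sw = 0.0
    for c in np.unique(y):
        Zc = Z[y == c]
        sb += len(Zc) * np.sum((Zc.mean(axis=0) - mu) ** 2)  # between-class
        sw += np.sum((Zc - Zc.mean(axis=0)) ** 2)            # within-class
    return sb / sw

def best_gamma(X, y, gammas=(0.01, 0.1, 1.0, 10.0)):
    scores = {g: separability(KernelPCA(kernel="rbf", gamma=g).fit_transform(X), y)
              for g in gammas}
    return max(scores, key=scores.get)
```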

5.
In practical applications of classification algorithms, imbalanced data, where the numbers of positive and negative samples differ greatly, is a common problem, and standard classification algorithms often perform unsatisfactorily on it. A new method is proposed that fits a kernel function to the positive and the negative samples separately and predicts unknown samples from the fitted kernels. Simulation results on standard UCI data sets show that the method handles imbalanced data effectively.
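A minimal sketch of the per-class kernel-fitting idea: fit a kernel density estimate to each class separately and label a new sample by the class under which it is most likely. The bandwidth is an assumption to be tuned; class priors could be folded into the log-densities if desired.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_per_class(X, y, bandwidth=0.5):
    """One Gaussian KDE per class, fitted only on that class's samples."""
    return {c: KernelDensity(bandwidth=bandwidth).fit(X[y == c])
            for c in np.unique(y)}

def predict(models, X_new):
    classes = sorted(models)
    logp = np.column_stack([models[c].score_samples(X_new) for c in classes])
    return np.asarray(classes)[logp.argmax(axis=1)]
```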

6.
Starting from the orthogonal Laguerre polynomials, generalized Laguerre polynomials are proposed and used to build a new class of kernel function: the Laguerre kernel. Experiments on the two-spiral set and on standard UCI data sets show that this kernel is more robust and generalizes better than the commonly used kernels (the polynomial kernel, the Gaussian radial basis kernel, and so on); moreover, its parameter takes values only in the natural numbers, which greatly shortens parameter optimization.
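The abstract does not give the kernel's exact weighting, so this sketch assumes the generic orthogonal-polynomial form k(u, v) = Σ_{i=0}^{n} L_i(u) L_i(v) per feature, multiplied across features, built from SciPy's generalized Laguerre polynomials. `n` is the natural-number kernel parameter; `alpha` (0 recovers the standard Laguerre polynomials) is an assumption.

```python
import numpy as np
from scipy.special import eval_genlaguerre
from sklearn.svm import SVC

def laguerre_kernel(n=3, alpha=0.0):
    def k1(u, v):
        # sum of products of the first n+1 generalized Laguerre polynomials
        return sum(eval_genlaguerre(i, alpha, u)[:, None]
                   * eval_genlaguerre(i, alpha, v)[None, :]
                   for i in range(n + 1))
    def k(X, Y):
        G = np.ones((len(X), len(Y)))
        for j in range(X.shape[1]):   # product over features
            G *= k1(X[:, j], Y[:, j])
        return G
    return k

clf = SVC(kernel=laguerre_kernel(n=3))
```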

7.
《微型机与应用》2017,(11):19-22
To improve the classification performance, learning ability, and extrapolation ability of support vector machines, two SVM kernel functions, the K-type kernel and the logistic kernel, are analyzed, and a new mixed-kernel SVM is constructed from them; its properties are analyzed theoretically. The mixed-kernel SVM and SVMs built from the commonly used kernels are compared experimentally on two-dimensional data classification and on image classification. The results of both experiments show that the mixed-kernel SVM is clearly superior to SVMs built from the common kernels in classification performance, learning ability, and extrapolation ability.

8.
Kernel Function Selection and Improvement Applied to Face Recognition (total citations: 1; self-citations: 1; other citations: 1)
Kernel methods are widely used in machine learning, for example in artificial neural networks and support vector machines, where they effectively avoid the curse of dimensionality in feature space and improve the classification performance of the learner. The selection of kernel functions and the construction of new ones, however, remain core problems in machine learning, directly determining how well a learner performs, yet research results in this direction are few. Taking the SVM as an example, the performance of the commonly used kernels is predicted theoretically by computing and studying certain properties of the kernel matrix. On this basis, simulation experiments confirm that a mixed kernel composed of the selected kernels improves classification performance; with suitable weighting coefficients, the recognition rate of the learner can even reach 100%. The work thus not only constructs a learner with excellent performance but also provides a reference for kernel selection.
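One concrete, well-known instance of scoring kernels from kernel-matrix properties, shown here as a sketch of the kind of analysis the abstract describes: kernel-target alignment (Cristianini et al.), which measures how well a Gram matrix K matches the ideal kernel yy^T and can rank candidate kernels before any training. The candidate kernels and their parameters are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel

def alignment(K, y):
    """Kernel-target alignment; y is expected to hold labels in {-1, +1}."""
    Y = np.outer(y, y)                     # ideal kernel
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

def rank_kernels(X, y):
    cands = {"linear": linear_kernel(X),
             "poly":   polynomial_kernel(X, degree=3),
             "rbf":    rbf_kernel(X, gamma=0.5)}
    return sorted(cands, key=lambda name: alignment(cands[name], y), reverse=True)
```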

9.
Adaptive Kernel Construction Based on Feature Separability (total citations: 2; self-citations: 0; other citations: 2)
The selection and construction of kernel functions is a key and difficult problem in support vector machine research. Addressing this problem, the paper first discusses the linear separability of data in feature space and derives a condition for judging it. Then, based on the condition for complete feature separability, function approximation theory, and the basic properties of kernels, it proposes adaptive polynomial kernel and B-spline kernel models, together with algorithms for estimating the model parameters. Simulation experiments on measured data show that, compared with the classical kernels, the proposed algorithms clearly improve classification performance.
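A minimal sketch of a B-spline kernel in the classical (Vapnik) translation-invariant form k(x, z) = Π_j B_{2p+1}(x_j − z_j), built from a centered cardinal B-spline; the paper's adaptive parameter-estimation step is not reproduced here, and the default degree is an assumption.

```python
import numpy as np
from scipy.interpolate import BSpline
from sklearn.svm import SVC

def bspline_kernel(degree=3):
    # centered cardinal B-spline of odd degree
    knots = np.arange(degree + 2) - (degree + 1) / 2.0
    b = BSpline.basis_element(knots, extrapolate=False)
    def k(X, Y):
        G = np.ones((len(X), len(Y)))
        for j in range(X.shape[1]):
            D = X[:, j][:, None] - Y[:, j][None, :]
            G *= np.nan_to_num(b(D))      # 0 outside the spline's support
        return G
    return k

clf = SVC(kernel=bspline_kernel(degree=3))
```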

10.
A metric, also called a distance function, is a special function on a metric space satisfying certain conditions, generally used to capture important distance relations among data. Since distances strongly affect classification and clustering, metric learning matters greatly for such machine learning problems. Under the various kinds of noise present in real data, existing metric learning algorithms often suffer from low and unstable classification accuracy. To address this, a robust metric learning algorithm based on the maximum correntropy criterion is proposed. The core of the maximum correntropy criterion is the Gaussian kernel function; introducing it into metric learning, a loss function built around the Gaussian kernel is constructed and optimized by gradient descent, with the parameters adjusted through repeated testing, finally yielding the learned metric matrix. A metric matrix learned this way is more robust and effectively improves classification accuracy on noisy classification problems. Validation experiments are conducted on common machine learning (UCI) data sets and on face data sets.
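A minimal sketch of correntropy-style metric learning under stated assumptions: maximize the Gaussian-kernel similarity of same-class pairs (and minimize it for different-class pairs) under a Mahalanobis metric M, by gradient ascent with projection back onto the positive semidefinite cone. The pair weighting, step size, and sigma are assumptions; the paper's exact loss is not given in the abstract.

```python
import numpy as np

def learn_metric(X, y, sigma=1.0, lr=0.5, iters=50):
    n, d = X.shape
    M = np.eye(d)
    for _ in range(iters):
        grad = np.zeros((d, d))
        for i in range(n):
            for j in range(i + 1, n):
                u = (X[i] - X[j])[:, None]
                w = np.exp(-float(u.T @ M @ u) / (2 * sigma**2))  # correntropy
                sign = 1.0 if y[i] == y[j] else -1.0
                # gradient of sign * exp(-u'Mu / (2 sigma^2)) w.r.t. M
                grad += -sign * w / (2 * sigma**2) * (u @ u.T)
        M += lr * grad / (n * (n - 1) / 2)
        vals, vecs = np.linalg.eigh(M)          # project onto the PSD cone
        M = (vecs * np.clip(vals, 0.0, None)) @ vecs.T
    return M
```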

11.
The kernel function is the core of the Support Vector Machine (SVM), and its selection directly affects SVM performance, yet there has been no theoretical basis for choosing a kernel function for speech recognition. To improve the learning and generalization ability of SVM for speech recognition, this paper presents the Optimal Relaxation Factor (ORF) kernel, a new family of SVM kernel functions for speech recognition, and proves that the ORF function is a Mercer kernel. Experiments show the ORF kernel's effectiveness on mapping-trend, bi-spiral, and speech recognition problems. The paper concludes that the ORF kernel performs better than the Radial Basis Function (RBF), the Exponential Radial Basis Function (ERBF), and the Kernel with Moderate Decreasing (KMOD); furthermore, speech recognition with the ORF kernel achieves higher recognition accuracy.

12.
In the last few years, applications of the support vector machine (SVM) have increased substantially due to its high generalization performance and ability to model non-linear relationships. However, how well an SVM behaves depends largely on the kernel function it adopts. The most commonly used kernels include the linear and polynomial inner-product functions and the Radial Basis Function (RBF). Since the nature of the data is usually unknown, it is very difficult to choose among these kernels beforehand; usually more than one kernel is tried and the one giving the best prediction performance is selected, a very time-consuming optimization procedure. This paper presents a kernel function based on the Lorentzian function, which is well known in the field of statistics. The presented kernel can deal properly with a large variety of mapping problems thanks to its flexibility. Its applicability, suitability, performance, and robustness are investigated on the bi-spiral benchmark data set as well as seven data sets from the UCI benchmark repository. The experimental results demonstrate that the presented kernel is robust, has stronger mapping ability than the standard kernel functions, and obtains better generalization performance. In general, the proposed kernel can serve as a generic alternative to the common linear, polynomial, and RBF kernels.
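A minimal sketch of an SVM with a Lorentzian-type kernel. The paper's exact parameterization is not given in the abstract; this assumes the standard Lorentzian (Cauchy) profile k(x, z) = 1 / (1 + ||x − z||² / γ²), which is a valid positive definite kernel.

```python
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.svm import SVC

def lorentzian_kernel(gamma=1.0):
    def k(X, Y):
        D2 = euclidean_distances(X, Y, squared=True)  # ||x - z||^2
        return 1.0 / (1.0 + D2 / gamma**2)
    return k

clf = SVC(kernel=lorentzian_kernel(gamma=0.8))
```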

13.
In some nonlinear dynamic systems, the function of the state variables can be separated from the function of the control variables, which complicates the identification of such systems. To address this problem, an improved least squares support vector regression (LSSVR) model with multiple kernels is proposed and applied to the identification of nonlinear separable systems. The method exploits the excellent nonlinear mapping ability of the Morlet wavelet kernel function and combines the state-variable and control-variable information into one kernel matrix. Using the composite wavelet kernel, the LSSVR includes two nonlinear functions, whose variables are the state variables and the control variables respectively; in this way the regression function gains better nonlinear mapping ability and can approximate almost any curve in square-integrable function space. The two functions are then used to identify the corresponding functions in the separable nonlinear dynamic system. Simulation results show that the multiple-kernel LSSVR method greatly improves identification accuracy over the single-kernel method, and that the Morlet wavelet kernel is more efficient than the other kernels.
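A minimal sketch of LSSVR with a Morlet wavelet kernel. LSSVR has a closed-form solution obtained from the linear system [[0, 1ᵀ], [1, K + I/γ]] [b; α] = [0; y] (the standard Suykens formulation). The Morlet kernel below uses the common translation-invariant form; the paper's composite state-plus-control kernel would sum two such kernels over the two variable groups, which is not reproduced here.

```python
import numpy as np

def morlet_kernel(X, Y, a=1.0):
    G = np.ones((len(X), len(Y)))
    for j in range(X.shape[1]):
        D = X[:, j][:, None] - Y[:, j][None, :]
        G *= np.cos(1.75 * D / a) * np.exp(-D**2 / (2 * a**2))
    return G

def lssvr_fit(X, y, gamma_reg=10.0, a=1.0):
    n = len(X)
    K = morlet_kernel(X, X, a)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma_reg
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Xnew: morlet_kernel(Xnew, X, a) @ alpha + b  # predictor
```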

14.
SVM kernel functions fall into two classes: local kernels and global kernels. The value of a local kernel is affected only by data points that are close together, giving it good learning ability; the value of a global kernel is also affected by distant points, giving it good generalization ability. To overcome the weakness of local kernels, namely good learning ability but poor generalization, a method is proposed that combines a local kernel and a global kernel into a new joint kernel. Experimental results show that, compared with the local and global kernels alone, the new joint kernel predicts better and adapts to incremental learning.
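A minimal sketch of choosing the mixing weight for such a joint kernel by cross-validation, with the RBF kernel standing in as the local kernel and the polynomial kernel as the global one; the grid and kernel parameters are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def joint_kernel(lam):
    """lam = 1 is purely local (RBF); lam = 0 is purely global (polynomial)."""
    return lambda X, Y: (lam * rbf_kernel(X, Y, gamma=0.5)
                         + (1 - lam) * polynomial_kernel(X, Y, degree=2))

def best_lambda(X, y, grid=np.linspace(0, 1, 11)):
    scores = {lam: cross_val_score(SVC(kernel=joint_kernel(lam)), X, y, cv=5).mean()
              for lam in grid}
    return max(scores, key=scores.get)
```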

15.
Kernel methods provide high performance in a variety of machine learning tasks. However, their success depends heavily on selecting the right kernel function and setting its parameters properly. Several families of kernel functions based on orthogonal polynomials have been proposed recently; besides their good error-rate performance, these kernels have only one parameter, chosen from a small set of integers, which greatly simplifies kernel selection. Two families of orthogonal polynomial kernels, the triangularly modified Chebyshev kernels and the triangularly modified Legendre kernels, are proposed in this study. Furthermore, we compare the construction methods of several orthogonal polynomial kernels and highlight the similarities and differences among them. Experiments on 32 data sets illustrate and compare these kernels in classification and regression scenarios. In general, the orthogonal polynomial kernels differ in accuracy, and most can match the commonly used kernels, such as the polynomial kernel, the Gaussian kernel, and the wavelet kernel. Compared with these universal kernels, each orthogonal polynomial kernel has a single, easily optimized parameter, and in support vector classification they store statistically significantly fewer support vectors. The newly presented kernels obtain better generalization performance in both classification and regression tasks.
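A minimal sketch of the plain (unmodified) Chebyshev kernel often used in this line of work, k(u, v) = Σ_{i=0}^{n} T_i(u) T_i(v) / √(1 − uv) per feature with the product taken over features and inputs assumed scaled into (−1, 1); n is the single integer kernel parameter. The paper's triangular modification replaces the square-root weight, and that form is not given in the abstract.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from sklearn.svm import SVC

def chebyshev_kernel(n=4):
    def k1(u, v):
        # sum of products of Chebyshev polynomials T_0 .. T_n
        s = sum(C.chebval(u, [0] * i + [1])[:, None]
                * C.chebval(v, [0] * i + [1])[None, :]
                for i in range(n + 1))
        return s / np.sqrt(np.clip(1.0 - u[:, None] * v[None, :], 1e-9, None))
    def k(X, Y):
        G = np.ones((len(X), len(Y)))
        for j in range(X.shape[1]):
            G *= k1(X[:, j], Y[:, j])
        return G
    return k

clf = SVC(kernel=chebyshev_kernel(n=4))
```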

16.
In this study, we introduce a set of new kernel functions derived from generalized Chebyshev polynomials. The proposed generalized Chebyshev polynomials allow us to derive different kernel functions: by using them, we generalize the recently introduced Chebyshev kernel function to vector inputs and, as a result, obtain a robust set of kernel functions for Support Vector Machine (SVM) classification. Thus, besides clarifying how to apply Chebyshev kernel functions to vector inputs, we also increase the generalization capability of the previously proposed Chebyshev kernels and show how to derive new kernel functions from the generalized Chebyshev polynomials. On average over the simulation data sets, the proposed set of kernel functions is competitive with all other common kernel functions, indicating that they can serve as a good alternative for SVM classification in order to obtain better accuracy. Moreover, the test results show that the generalized Chebyshev kernel generally approaches the minimum number of support vectors for classification.

17.
Kernel Construction Based on Interpolation (total citations: 16; self-citations: 3; other citations: 16)
In recent years, statistical learning theory (SLT) and support vector machine (SVM) theory have received growing attention in the international machine learning community, and kernel functions have been a persistent research focus, because different kernels lead to very different SVM generalization ability; how to choose a suitable kernel for the given data has become a central question. This paper first points out that the explicit expression of a kernel satisfying Mercer's condition is not the crux of the problem. On that basis, it proposes using scattered-data interpolation to determine the inner product values of the points of interest in feature space, in place of the role played by the usual closed-form kernel expression. Experiments show that this approach not only reduces the uncertainty in designing and training the SVM but also generalizes better than most SVMs based on traditional kernels.
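A minimal sketch of the underlying point: an SVM needs only the Gram matrix of inner products, not a closed-form kernel expression. Any procedure that yields consistent inner-product values, such as the paper's scattered-data interpolation (not implemented here), can be plugged in via scikit-learn's precomputed-kernel interface.

```python
from sklearn.svm import SVC

def train_with_gram(G_train, y_train):
    # G_train[i, j] = estimated inner product <phi(x_i), phi(x_j)>,
    # however it was obtained (e.g., by scattered-data interpolation).
    return SVC(kernel="precomputed").fit(G_train, y_train)

# Prediction then takes G_test of shape (n_test, n_train):
# y_pred = clf.predict(G_test)
```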

18.
To address the lack of a theoretical basis for kernel selection in single-kernel network models and the excessive node scale of the four-layer neural network based on random feature mapping (FRMFNN), a four-layer multi-kernel learning neural network based on random feature mapping (MK-FRMFNN) is proposed. First, the original input features are transformed into random mapping features by a specific random mapping algorithm; then several basic kernel matrices are generated through different random kernel mappings; finally, the basic kernel matrices are combined into a composite kernel…

19.
The activation functions in the hidden layer of a radial basis function neural network (RBFNN) are Gaussian functions, which respond locally around the kernel centers. In most existing research, the local spatial response of a sample is computed inaccurately because every kernel has the same hyperspherical shape and the kernel parameters are set by experience; the fine structure of the local space is not considered during feature extraction. It is also difficult to obtain strong feature extraction ability at low computational cost. This paper therefore develops a multi-scale RBF kernel learning algorithm and proposes a new multi-layer RBF neural network model. For the samples of each class, the expectation maximization (EM) algorithm is used to obtain multi-layer nested sub-distribution models with different local response ranges, called multi-scale kernels in the network. The prior probability of each sub-distribution serves as the connection weight between the multi-scale kernels, and feature extraction is implemented by multi-layer kernel subspace embedding. The multi-scale kernel learning model can describe the fine structure of the samples efficiently and accurately and is, to a certain extent, tolerant of how the number of kernels is set. Using the prior probability of each kernel as the weight makes the feature extraction process satisfy the Bayes rule, which enhances the interpretability of feature extraction in the network. The paper also proves theoretically that the proposed neural network is a generalized version of the original RBFNN. Experimental results show that the proposed method performs better than several state-of-the-art algorithms.
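A minimal sketch of the multi-scale kernel idea: fit a Gaussian mixture to each class with EM, then use the components' posterior responsibilities, weighted by their priors, as extracted features. The paper's nested multi-layer embedding is more elaborate; this shows only a single-layer version, and the number of components per class is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_multiscale_kernels(X, y, components_per_class=3):
    """One EM-fitted Gaussian mixture per class; components act as kernels."""
    return [GaussianMixture(n_components=components_per_class).fit(X[y == c])
            for c in np.unique(y)]

def embed(models, X_new):
    # concatenate prior-weighted component responsibilities across classes
    feats = [gm.predict_proba(X_new) * gm.weights_ for gm in models]
    return np.hstack(feats)
```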

20.
This paper proposes a learning method for combined kernel functions in support vector machines: a genetic algorithm serves as the learning procedure, the weights of the combined kernel are determined during learning, and they are then used as parameters in the classification stage of the decision model. The method is applied to two public cancer-diagnosis data sets to obtain the optimal separating hyperplane. Experiments show that this learning method performs better than using a single kernel.
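A minimal sketch of evolving combined-kernel weights with a simple genetic-style loop: candidate weight pairs (w, 1 − w) for an RBF-plus-polynomial combination are scored by cross-validated accuracy, the best survive, and mutated copies replace the rest. The population size, mutation scale, and generation count are assumptions, and the paper's full GA (crossover, encoding) is not reproduced.

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(w, X, y):
    k = lambda A, B: w * rbf_kernel(A, B) + (1 - w) * polynomial_kernel(A, B)
    return cross_val_score(SVC(kernel=k), X, y, cv=3).mean()

def evolve_weight(X, y, pop=8, gens=10, seed=0):
    rng = np.random.default_rng(seed)
    ws = rng.uniform(0, 1, pop)
    for _ in range(gens):
        scores = np.array([fitness(w, X, y) for w in ws])
        elite = ws[np.argsort(scores)[-pop // 2:]]                    # selection
        children = np.clip(elite + rng.normal(0, 0.1, len(elite)), 0, 1)  # mutation
        ws = np.concatenate([elite, children])
    return ws[np.argmax([fitness(w, X, y) for w in ws])]
```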
