Similar Documents
20 similar documents found (search time: 140 ms)
1.
The least squares support vector machine (LS-SVM) replaces the quadratic programming approach of the traditional support vector machine with a least squares linear system for pattern recognition, which effectively reduces computational complexity. However, the LS-SVM loses the sparseness of the support vectors. This paper proposes a boundary-nearest-neighbor LS-SVM, which prunes the training samples by searching for boundary neighbors so as to reduce the number of support vectors. The boundary-nearest-neighbor LS-SVM is applied to the multiclass problems constructed by the one-against-rest (1-a-r) method, effectively overcoming the drawbacks of 1-a-r SVM classifiers: slow training, large demand for computational resources, and the existence of unclassifiable regions. Experimental results show that the boundary-nearest-neighbor LS-SVM classifier improves both recognition accuracy and recognition speed.
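The core computational saving this abstract describes, replacing the SVM's quadratic program with a single linear system, can be sketched as follows. This is a minimal illustrative implementation, not the paper's code; the toy data, kernel width and regularization value are assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def lssvm_train(X, y, C=10.0, gamma=1.0):
    # LS-SVM dual: one linear system replaces the SVM's QP.
    #   [[0, 1^T], [1, K + I/C]] @ [b, alpha] = [0, y]
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y.astype(float))))
    return sol[0], sol[1:]  # bias b, support values alpha

def lssvm_predict(Xnew, Xtr, b, alpha, gamma=1.0):
    return np.sign(rbf_kernel(Xnew, Xtr, gamma) @ alpha + b)

# Toy two-class problem: clusters around (-2,-2) and (2,2).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
b, alpha = lssvm_train(X, y)
acc = np.mean(lssvm_predict(X, X, b, alpha) == y)
```

Note that essentially every alpha comes out nonzero, which is exactly the sparseness loss that motivates the paper's boundary-neighbor pruning.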

3.
Computer network attacks are diverse and covert, which makes them difficult to detect. To protect network security, accurately identify network anomalies, and overcome the low detection accuracy of traditional anomaly detection techniques, a network anomaly detection method based on a least squares support vector machine optimized by a genetic algorithm is proposed. The least squares support vector classifier (LS-SVC) is an evolution of the support vector classifier (SVC) that avoids the quadratic programming problem of the SVM by constructing a new quadratic loss function. A genetic algorithm is used to select suitable LS-SVM parameters. The detection performance of the proposed method is tested on the KDD Cup 99 dataset. Experimental results show that the GA-optimized LS-SVM classifier achieves high network anomaly detection accuracy, performs well, and provides a safeguard for network security.
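The genetic-algorithm parameter search this abstract relies on can be sketched generically. Here the LS-SVM validation accuracy is replaced by a stand-in fitness surface with a known peak, so the selection/crossover/mutation mechanics are visible in isolation; the population size, rates and bounds are all illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(params):
    # Stand-in for LS-SVM validation accuracy over hyperparameters
    # (log10 C, log10 gamma): a smooth surface peaking at (1.0, -0.5).
    c, g = params
    return -((c - 1.0) ** 2 + (g + 0.5) ** 2)

def ga_search(pop_size=30, generations=40, bounds=(-3.0, 3.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # keep best half
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        children = parents.mean(axis=1)                 # arithmetic crossover
        children += rng.normal(0, 0.1, children.shape)  # Gaussian mutation
        pop = np.clip(children, lo, hi)
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmax(scores)]

best = ga_search()  # converges near the known optimum (1.0, -0.5)
```

In the paper's setting, `fitness` would train an LS-SVM with the candidate parameters and return its cross-validated detection accuracy.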

4.
A nonlinear least squares support vector machine is applied to gene recognition in prokaryotes. Open reading frames (ORFs) are located in the sequences and compared against a file of verified gene loci to generate a training sample set; GC-content and Z-curve features are then extracted, a t-test is used to assess the information content of each feature, and a nonlinear LS-SVM classifier is designed to recognize genes. The results show that, under different feature combinations, the recognition rate of the nonlinear LS-SVM exceeds that of Fisher discriminant analysis and the linear SVM by 7.09%–29.97% and 10.97%–25.45% respectively, and the nonlinear LS-SVM shows its advantage especially when the features carry little information.

5.
The algorithms of the support vector machine classifier and the least squares support vector machine classifier are introduced and compared, and a binary-tree multiclass transformer fault diagnosis model based on support vector machines is proposed. The standard SVM (C-SVM) classifier and the LS-SVM classifier are each applied to transformer fault diagnosis; the SVM parameters are obtained by grid search and cross-validation, yielding high accuracy. Test results show that SVMs and LS-SVMs have great application potential in transformer fault diagnosis.

6.
王旭辉, 舒平, 曹立. 《计算机科学》, 2010, 37(8): 240-242
Support vector machines have advantages in small-sample pattern recognition, but there is no standard algorithm for evaluating their performance or for selecting the kernel and regularization parameters. The receiver operating characteristic (ROC) curve is introduced into SVM classification performance analysis and model parameter optimization. In the two-dimensional space formed by the kernel parameter and the regularization parameter, ROC curves are traced by varying the model's decision threshold, and model performance is analyzed by comparing the area under the ROC curve of different classifiers; model optimization based on the optimal operating point of the ROC curve is also studied. An engineering example shows that the area under the ROC curve effectively quantifies a model's recognition performance and yields the optimal model parameters within a given search range, so the approach can be applied broadly to SVM model parameter optimization.
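The AUC comparison underlying this approach needs only labels and classifier scores. A minimal rank-based computation (the toy labels and scores are illustrative):

```python
import numpy as np

def roc_auc(y_true, scores):
    # Rank-based AUC: the probability that a random positive outscores
    # a random negative (ties count as 1/2).
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([1, 1, 1, 0, 0, 0])
s = np.array([0.9, 0.8, 0.4, 0.6, 0.3, 0.1])
auc = roc_auc(y, s)  # 8 of the 9 positive/negative pairs are ranked correctly
```

Computing this AUC at each point of the (kernel parameter, regularization parameter) grid gives exactly the comparison surface the abstract describes.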

7.
To address the low modulation recognition rate of a single node in low-SNR environments, a signal modulation recognition method based on multi-node information fusion and cooperation is proposed. A cooperation scheme for multiple sensor nodes is first designed and features are extracted at each node; the central node then fuses the features from all nodes, and finally a least squares support vector machine is used to build the modulation classifier. Simulation results show that, compared with other modulation recognition methods, the proposed method improves recognition accuracy and adapts better to varying SNR conditions.

8.
An improved least squares support vector machine and its application
To overcome the slow training and large computational resource demands of the traditional support vector machine, this paper applies the least squares support vector machine algorithm to classification. It also points out a flaw in the decision directed acyclic graph (DDAG) algorithm and adopts an adaptive directed acyclic graph scheme for multiclass classification. To speed up learning, sequential minimal optimization is combined with the LS-SVM, yielding the ADAGLSSVM algorithm. Since the LS-SVM loses the sparseness of the support vectors, the support vectors are also pruned. Experimental results show that after pruning, both the recognition accuracy and the recognition speed of the classifier are improved.

9.
To address the drawbacks of neural network classifiers, which tend to fall into local minima and are unsuitable for small samples, a method is proposed that extracts classification features with zero-center instantaneous feature extraction and recognizes digital modulation signals with a support vector machine classifier. Compared with traditional neural network methods, this method generalizes better. Simulation results show that the modulation recognition method achieves a high recognition rate on small samples.

10.
In shallow-water laser bathymetry the echo has many noise sources and a low signal-to-noise ratio, and neither the traditional unweighted least squares support vector machine nor the weighted LS-SVM filters low-SNR signals adequately. A filtering method combining robust least squares with the weighted LS-SVM (HW-LS-SVM) is therefore proposed. A strong-elimination weight function is first used to compute prior weights, residuals and the mean squared error; the weight function model then yields the LS-SVM weights, and the echo signal is filtered by iterative computation. Simulation results show that HW-LS-SVM filters more robustly than the LS-SVM, the Bayesian LS-SVM and the traditional weighted LS-SVM: at a noise rate of 45% the filtering remains satisfactory, and the surface and bottom echoes are extracted with 100% accuracy. The water depths extracted after filtering four sets of deep-water and four sets of shallow-water field data all agree with the depths in the background data. HW-LS-SVM is thus more noise-tolerant and better suited to filtering low-SNR bathymetric laser signals.
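The weighting step in weighted LS-SVM variants, mapping scaled residuals to per-sample weights that collapse for outliers, can be sketched with a Suykens-style piecewise weight function. The cutoffs `c1`, `c2` and the toy residuals are hypothetical assumptions; this is not the paper's strong-elimination function:

```python
import numpy as np

def robust_weights(residuals, c1=2.5, c2=3.0):
    # Scale residuals by a robust spread estimate (1.483 * MAD), then
    # map to weights: 1 inside c1, linear decay to c2, near-zero beyond.
    s = 1.483 * np.median(np.abs(residuals - np.median(residuals)))
    r = np.abs(residuals / s)
    v = np.ones_like(r)
    mid = (r > c1) & (r <= c2)
    v[mid] = (c2 - r[mid]) / (c2 - c1)
    v[r > c2] = 1e-4
    return v

res = np.array([0.1, -0.2, 0.05, 5.0, -0.15, 0.08])  # one gross outlier
v = robust_weights(res)  # the outlier's weight collapses to 1e-4
```

In the iterative scheme the abstract describes, these weights re-enter the LS-SVM linear system and the residuals are recomputed until the weights stabilize.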

11.
Abstract: The aim of this research was to compare classifier algorithms, including the C4.5 decision tree, the least squares support vector machine (LS-SVM) and the artificial immune recognition system (AIRS), for diagnosing macular and optic nerve diseases from pattern electroretinography signals. The signals were obtained with electrophysiological testing devices from 106 subjects with optic nerve or macular disease. To assess the test performance of the classifiers, classification accuracy, receiver operating characteristic curves, sensitivity and specificity values, confusion matrices and 10-fold cross-validation were used. The classification accuracies obtained are 85.9%, 100% and 81.82% for the C4.5 decision tree, the LS-SVM and the AIRS respectively, using 10-fold cross-validation. The results show that the LS-SVM is a robust and effective classifier for detecting macular and optic nerve diseases.

12.
《Applied Soft Computing》2007,7(3):908-914
This paper presents a least squares support vector machine (LS-SVM) that classifies noisy document titles into predetermined categories. The system's potential is demonstrated on a corpus of 91,229 words from the University of Denver's Penrose Library catalogue. The classification accuracy of the proposed LS-SVM based system is found to be over 99.9%. The final classifier is an LS-SVM array with a Gaussian radial basis function (GRBF) kernel that classifies text titles using coefficients generated by the latent semantic indexing (LSI) algorithm. These coefficients are also used to generate the confidence factors for the inference engine that presents the final decision of the entire classifier. The system is compared with K-nearest neighbor (KNN) and Naïve Bayes (NB) classifiers, and the comparison clearly shows that the proposed LS-SVM based architecture outperforms the KNN and NB based systems. A comparison between conventional linear SVM based classifiers and neural network based classifying agents shows that the LS-SVM with LSI based classifying agents improves text categorization performance significantly and holds great potential for developing robust learning based agents for text classification.

13.
The support vector machine (SVM) is a powerful classifier which has been used successfully in many pattern recognition problems, and it has also been shown to perform well in the handwriting recognition field. The least squares SVM (LS-SVM), like the SVM, is based on the margin-maximization principle that performs structural risk minimization. However, it is easier to train than the SVM, as it requires only the solution to a convex linear problem rather than the quadratic problem of the SVM. In this paper, we propose to conduct model selection for the LS-SVM using an empirical error criterion. Experiments on handwritten character recognition show the usefulness of this classifier and demonstrate that model selection improves the generalization performance of the LS-SVM.

14.
15.
Benchmarking Least Squares Support Vector Machine Classifiers
In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of equations in the dual space. While the SVM classifier has a large margin interpretation, the LS-SVM formulation is related in this paper to a ridge regression approach for classification with binary targets and to Fisher's linear discriminant analysis in the feature space. Multiclass categorization problems are represented by a set of binary classifiers using different output coding schemes. While regularization is used to control the effective number of parameters of the LS-SVM classifier, the sparseness property of SVMs is lost due to the choice of the 2-norm. Sparseness can be imposed in a second stage by gradually pruning the support value spectrum and optimizing the hyperparameters during the sparse approximation procedure. In this paper, twenty public domain benchmark datasets are used to evaluate the test set performance of LS-SVM classifiers with linear, polynomial and radial basis function (RBF) kernels. Both the SVM and LS-SVM classifier with RBF kernel in combination with standard cross-validation procedures for hyperparameter selection achieve comparable test set performances. These SVM and LS-SVM performances are consistently very good when compared to a variety of methods described in the literature including decision tree based algorithms, statistical algorithms and instance based learning methods. We show on ten UCI datasets that the LS-SVM sparse approximation procedure can be successfully applied.
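The second-stage sparseness idea, where every LS-SVM support value is nonzero, so sparseness is imposed by pruning the smallest |alpha| and re-solving on the survivors, can be sketched as follows. The toy data, kernel width and the 40% pruning fraction are illustrative assumptions:

```python
import numpy as np

def rbf(A, B, g=0.5):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-g * d2)

def train(X, y, C=10.0):
    # LS-SVM dual: a single linear system instead of a QP.
    n = len(y)
    M = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), rbf(X, X) + np.eye(n) / C]])
    sol = np.linalg.solve(M, np.r_[0.0, y])
    return sol[0], sol[1:]  # bias, support values (all nonzero)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 0.6, (30, 2)), rng.normal(2, 0.6, (30, 2))])
y = np.r_[-np.ones(30), np.ones(30)]

b, a = train(X, y)
# Impose sparseness: drop the 40% of points with the smallest |alpha|
# (the flattest part of the support value spectrum) and re-solve.
n_drop = int(0.4 * len(a))
keep = np.argsort(np.abs(a))[n_drop:]
b2, a2 = train(X[keep], y[keep])
acc = np.mean(np.sign(rbf(X, X[keep]) @ a2 + b2) == y)
```

On this easy problem the pruned model still classifies the full training set correctly while using far fewer support vectors.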

16.
The receiver operating characteristics (ROC) curve has been extensively used for performance evaluation in multimodal biometrics fusion. However, the processes of fusion classifier design and the final ROC performance evaluation are usually conducted separately. This has been inevitable because the ROC, when taken from the error counting point of view, does not have a well-posed structure linking to the fusion classifier of interest. In this work, we propose to optimize the ROC performance directly according to the fusion classifier design. The area under the ROC curve (AUC) will be used as the optimization objective since it provides a good representation of the ROC performance. Due to the piecewise cumulative structure of the AUC, a smooth approximate formulation is proposed. This enables a direct optimization of the AUC with respect to the classifier parameters. When a fusion classifier has linear parameters, computation of the solution to optimize a quadratic AUC approximation is surprisingly simple and yet effective. Our empirical experiments on biometrics fusion show strong evidences regarding the potential of the proposed method.
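A smooth AUC surrogate of the kind described here can be sketched by replacing the step function in the pairwise AUC count with a sigmoid, which makes the objective differentiable in the linear fusion weights. The synthetic score data, sigmoid sharpness and learning rate are assumptions; this is a gradient-ascent sketch, not the paper's quadratic approximation:

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def empirical_auc(sp, sn):
    # Exact (step-function) AUC by pairwise comparison.
    return float((sp[:, None] > sn[None, :]).mean())

# Two matcher scores per sample: column 0 informative, column 1 pure noise.
Xp = np.c_[rng.normal(1.0, 0.5, 100), rng.normal(0.0, 1.0, 100)]  # genuine
Xn = np.c_[rng.normal(0.0, 0.5, 100), rng.normal(0.0, 1.0, 100)]  # impostor

w = np.array([0.0, 1.0])  # deliberately poor initial fusion weights
beta, lr = 5.0, 0.5
for _ in range(200):
    diff = (Xp @ w)[:, None] - (Xn @ w)[None, :]
    s = sigmoid(beta * diff)          # smooth stand-in for 1[s_pos > s_neg]
    g = beta * s * (1.0 - s)          # d(sigmoid)/d(diff)
    grad = (g[:, :, None] * (Xp[:, None, :] - Xn[None, :, :])).mean((0, 1))
    w += lr * grad                    # gradient ascent on the soft AUC

auc_before = empirical_auc(Xp @ np.array([0.0, 1.0]), Xn @ np.array([0.0, 1.0]))
auc_after = empirical_auc(Xp @ w, Xn @ w)
```

The learned weights shift mass onto the informative matcher, so the true (step-function) AUC of the fused score rises well above the initial value.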

17.
熊杨, 肖怀铁, 王伟. 《计算机工程》, 2011, 37(14): 146-148
By analysing how hyperparameter selection affects the least squares support vector machine (LS-SVM) classifier, a method is proposed that selects the LS-SVM model parameters with a diversity-preserving estimation of distribution algorithm (EDA-DP). Simulations with the EDA-DP-based LS-SVM classifier on benchmark datasets and on high-resolution radar range profile data show that, compared with a grid-search-based classifier, the average recognition rate improves by 4.2% and 1.76% respectively, giving better classification performance and generalization.

18.
A density-based method for pruning the training samples of kNN text classifiers
With the rapid growth of the WWW, text categorization has become a key technique for processing and organizing large document collections. As a simple, effective, non-parametric method, kNN is widely used in text categorization, but it is computationally expensive, and an uneven distribution of training samples degrades classification accuracy. To address these two problems, a density-based method for pruning the training samples of a kNN classifier is proposed; it not only reduces the computational cost of kNN but also makes the density of the training samples more uniform, reducing the misclassification of test samples near class boundaries. Experimental results show that the method performs well.
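A density-based pruning rule of this general kind can be sketched as follows: use the distance to the k-th neighbor as an inverse density estimate, and drop samples that sit in dense, label-pure regions (redundant interior points). The density threshold and purity rule here are simplifying assumptions, not the paper's exact criterion:

```python
import numpy as np

def prune_by_density(X, y, k=5, quantile=0.5):
    # Distance to the k-th neighbor is an inverse density estimate.
    # Drop samples that are both in a dense region and surrounded
    # only by same-label neighbors (redundant interior points).
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    order = np.argsort(d, axis=1)
    nn = order[:, :k]                          # k nearest neighbors
    kth = np.take_along_axis(d, order[:, k - 1:k], axis=1).ravel()
    dense = kth <= np.quantile(kth, quantile)
    uniform = np.all(y[nn] == y[:, None], axis=1)
    return ~(dense & uniform)                  # mask of samples to keep

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1.5, 0.4, (40, 2)), rng.normal(1.5, 0.4, (40, 2))])
y = np.r_[np.zeros(40, int), np.ones(40, int)]
keep = prune_by_density(X, y)  # interior points of each cluster are dropped
```

Boundary samples keep their place in the training set, which is what preserves kNN accuracy while the interior thins out.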

19.
Nearest neighbor (NN) classification with dynamic time warping (DTW) is considered an effective method for time series classification. The performance of NN-DTW depends on the DTW constraints, because the NN classifier is sensitive to the distance function used. For time series classification, the global path constraint of DTW is learned to optimize the alignment of time series by maximizing the nearest-neighbor hypothesis margin. In addition, a reduction technique is combined with a search process to condense the prototypes. The approach is implemented and tested on the UCR datasets. Experimental results show the effectiveness of the proposed method.
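A global path constraint of the kind being learned here is, in its fixed form, the Sakoe-Chiba band: DTW cell (i, j) is reachable only when |i - j| <= window. A minimal NN-DTW sketch (the window value and toy series are illustrative):

```python
import numpy as np

def dtw(a, b, window):
    # DTW with a Sakoe-Chiba band as the global path constraint.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - window), min(m, i + window) + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

def nn_dtw_predict(x, X_train, y_train, window=3):
    # 1-NN under the band-constrained DTW distance.
    dists = [dtw(x, xt, window) for xt in X_train]
    return int(y_train[int(np.argmin(dists))])

t = np.linspace(0, 2 * np.pi, 30)
X_train = [np.sin(t), np.sin(t + 0.3), np.cos(t), np.cos(t + 0.3)]
y_train = np.array([0, 0, 1, 1])
pred = nn_dtw_predict(np.sin(t + 0.15), X_train, y_train)  # nearest is a sine
```

The paper's contribution is to learn this constraint from data (by maximizing the nearest-neighbor hypothesis margin) rather than fixing the band width in advance.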

20.
Nearest neighbor (NN) classifier is the most popular non-parametric classifier. It is a simple classifier with no design phase and shows good performance. Important factors affecting the efficiency and performance of NN classifier are (i) memory required to store the training set, (ii) classification time required to search the nearest neighbor of a given test pattern, and (iii) due to the curse of dimensionality the number of training patterns needed by it to achieve a given classification accuracy becomes prohibitively large when the dimensionality of the data is high. In this paper, we propose novel techniques to improve the performance of NN classifier and at the same time to reduce its computational burden. These techniques are broadly based on: (i) overlap based pattern synthesis which can generate a larger number of artificial patterns than the number of input patterns and thus can reduce the curse of dimensionality effect, (ii) a compact representation of the given set of training patterns called overlap pattern graph (OLP-graph) which can be incrementally built by scanning the training set only once and (iii) an efficient NN classifier called OLP-NNC which directly works with OLP-graph and does implicit overlap based pattern synthesis. A comparison based on experimental results is given between some of the relevant classifiers. The proposed schemes are suitable for applications dealing with large and high dimensional datasets like those in data mining.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号