Similar Documents
20 similar documents found.
1.
By promoting the parallel hyperplanes of the SVM to non-parallel ones, twin support vector machines (TWSVM) have attracted increasing attention, and many modifications of them have been proposed. However, most of these modifications minimize a loss function subject to the l2-norm or l1-norm penalty. Such methods are non-adaptive, since the penalty form is fixed and pre-determined regardless of the type of data. To overcome this shortcoming, we propose the lp-norm least squares twin support vector machine (lp-LSTSVM). Our new model is an adaptive learning procedure with an lp-norm penalty (0<p<1), where p is viewed as an adjustable parameter that can be chosen automatically from the data. By adjusting the parameter p, lp-LSTSVM can not only select relevant features but also improve classification accuracy. The solutions of the optimization problems in lp-LSTSVM are obtained by solving a series of systems of linear equations (LEs), and lower bounds on the solution are established, which are extremely helpful for feature selection. Experiments carried out on several standard UCI data sets and synthetic data sets show the feasibility and effectiveness of the proposed method.
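As a schematic illustration only (a generic form obtained by adding an lp penalty to a standard least squares twin SVM subproblem, not necessarily the paper's exact objective), the first of the two problems might read, with A and B the sample matrices of the two classes and e1, e2 all-ones vectors:

```latex
\min_{w_1,\,b_1}\;
\tfrac{1}{2}\,\|A w_1 + e_1 b_1\|_2^2
+ \tfrac{c_1}{2}\,\|e_2 + (B w_1 + e_2 b_1)\|_2^2
+ \lambda\,\|w_1\|_p^p,
\qquad
\|w_1\|_p^p = \sum_j |w_{1j}|^p,\;\; 0<p<1,
```

with a symmetric problem for the second hyperplane; the non-convex, non-smooth lp term (0<p<1) is what drives small weights exactly to zero and thus performs feature selection.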

2.
In classification problems, the classes often contain different numbers of samples. Sometimes the imbalance between classes is very high, and the interest lies in classifying the samples of the minority class. The support vector machine (SVM) is one of the widely used techniques for classification and has been applied to this problem using fuzzy-based approaches. In this paper, motivated by the work of Fan et al. (Knowledge-Based Systems 115: 87-99, 2017), we propose two efficient variants of the entropy-based fuzzy SVM (EFSVM). By considering a fuzzy membership value for each sample, we propose an entropy-based fuzzy least squares support vector machine (EFLSSVM-CIL) and an entropy-based fuzzy least squares twin support vector machine (EFLSTWSVM-CIL) for class-imbalanced datasets, where the fuzzy membership values are assigned based on the entropy of the samples. The proposed methods solve systems of linear equations instead of the quadratic programming problem (QPP) solved in EFSVM. The least squares versions of the entropy-based SVM are faster than EFSVM and give higher generalization performance, which shows their applicability and efficiency. Experiments are performed on various real-world class-imbalanced datasets, and the results of the proposed methods are compared with the new fuzzy twin support vector machine for pattern classification (NFTWSVM), the entropy-based fuzzy support vector machine (EFSVM), the fuzzy twin support vector machine (FTWSVM) and the twin support vector machine (TWSVM), which clearly illustrates the superiority of the proposed EFLSTWSVM-CIL.
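A minimal sketch of one plausible entropy-based membership assignment of the kind described above (the function name, the kNN-based entropy estimate and the linear down-weighting rule are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def entropy_fuzzy_membership(X, y, k=7, beta=0.5, minority_label=1):
    """Assign a fuzzy membership in (0, 1] to every training sample.

    Minority-class samples keep full membership (1.0); majority-class samples
    are down-weighted according to the entropy of the class distribution of
    their k nearest neighbours (higher entropy -> less reliable sample).
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)            # idx[:, 0] is the sample itself
    neigh_labels = y[idx[:, 1:]]         # labels of the k neighbours

    # Binary entropy of the neighbour-label distribution for each sample.
    p = (neigh_labels == minority_label).mean(axis=1)
    eps = 1e-12
    entropy = -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))

    membership = np.ones(len(y))
    majority = y != minority_label
    # Entropy is in [0, 1] for two classes; shrink noisy majority samples.
    membership[majority] = 1.0 - beta * entropy[majority]
    return membership

# Example: memberships for a toy imbalanced dataset (90 vs 10 samples).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(2, 1, (10, 2))])
y = np.array([0] * 90 + [1] * 10)
s = entropy_fuzzy_membership(X, y)
```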

3.
Traditionally, multi-plane support vector machines (SVM), including the twin support vector machine (TWSVM) and the least squares twin support vector machine (LSTSVM), consider all points and treat them as equally important. In real cases, most of the samples of a dataset are highly correlated; these samples generally lie in high-density regions and may be important for classifier performance. This motivates the development of classifiers that can take full advantage of the points in high-density regions. Inspired by several new geometrically motivated algorithms, we propose the density-based weighting multi-surface least squares classification (DWLSC) method. Considering the special features of multi-plane SVMs, DWLSC measures the importance of points sharing the same label by a density weighting method and makes full use of the margin-point information between pairs of points from different classes. It also extends naturally to the non-linear case. In addition to keeping the respective advantages of both TWSVM and LSTSVM, our method improves the separation of points from different classes and compares favorably with other multi-plane classifiers in terms of space complexity, especially in the non-linear case. Experimental evidence further suggests that our method is effective in performing classification tasks.
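A small illustrative sketch of kNN-based density weighting in the spirit described above (the specific weighting rule is an assumption, not the paper's definition): samples lying in dense regions of their own class receive weights near 1, while isolated samples receive small weights.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_density_weights(X_class, k=5):
    """Weight each sample of one class by a kNN-based local density estimate."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_class)
    dist, _ = nn.kneighbors(X_class)
    avg_dist = dist[:, 1:].mean(axis=1)      # skip the zero self-distance
    density = 1.0 / (avg_dist + 1e-12)       # close neighbours -> high density
    return density / density.max()           # normalise into (0, 1]
```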

4.
The least squares twin support vector machine (LSTSVM) generates two non-parallel hyperplanes by directly solving a pair of systems of linear equations, as opposed to the two quadratic programming problems (QPPs) solved in the conventional twin support vector machine (TSVM), which makes the learning speed of LSTSVM faster than that of TSVM. However, LSTSVM fails to exploit the underlying similarity information between samples, which may be important for classification performance. To address this problem, we incorporate the similarity information of the samples into LSTSVM to build a novel non-parallel plane classifier, called the K-nearest neighbor based least squares twin support vector machine (KNN-LSTSVM). The proposed method retains the advantage of LSTSVM of being a simple and fast algorithm while incorporating inter-class and intra-class graphs into the model to improve classification accuracy and generalization ability. Experimental results on several synthetic as well as benchmark datasets demonstrate the efficiency of the proposed method. Finally, we further investigate the effectiveness of our classifier for a human action recognition application.
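For reference, a minimal sketch of the plain linear LSTSVM solve that KNN-LSTSVM builds on, using the commonly cited closed-form solution; the KNN-based inter-class/intra-class weighting of the paper is omitted here.

```python
import numpy as np

def lstsvm_fit(A, B, c1=1.0, c2=1.0, reg=1e-6):
    """Plain linear LSTSVM: two systems of linear equations, one per class.

    A: samples of class +1 (m1 x n), B: samples of class -1 (m2 x n).
    Returns (w1, b1), (w2, b2) for the two non-parallel hyperplanes.
    """
    E = np.hstack([A, np.ones((A.shape[0], 1))])   # [A  e1]
    F = np.hstack([B, np.ones((B.shape[0], 1))])   # [B  e2]
    e1 = np.ones(A.shape[0])
    e2 = np.ones(B.shape[0])
    I = reg * np.eye(E.shape[1])                   # small ridge for stability

    # z1 = -(F'F + (1/c1) E'E)^(-1) F'e2,  z2 = (E'E + (1/c2) F'F)^(-1) E'e1
    z1 = -np.linalg.solve(F.T @ F + (1.0 / c1) * E.T @ E + I, F.T @ e2)
    z2 = np.linalg.solve(E.T @ E + (1.0 / c2) * F.T @ F + I, E.T @ e1)
    return (z1[:-1], z1[-1]), (z2[:-1], z2[-1])

def lstsvm_predict(X, plane1, plane2):
    """Assign each sample to the class whose hyperplane is closer."""
    (w1, b1), (w2, b2) = plane1, plane2
    d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)
    d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)
    return np.where(d1 <= d2, 1, -1)
```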

5.
Neural Processing Letters - Least squares twin support vector machine (LSTSVM) is a new machine learning method, as opposed to solving two quadratic programming problems in twin support vector...

6.
The multisurface proximal support vector machine via generalized eigenvalues (GEPSVM), an effective classification tool for supervised learning, seeks two nonparallel planes determined by solving two generalized eigenvalue problems (GEPs). The GEPs may lead to unstable classification performance due to matrix singularity. The proximal support vector machine using local information (LIPSVM), a variant of GEPSVM, attempts to avoid this shortcoming by adopting a formulation similar to the maximum margin criterion (MMC). The solution of LIPSVM follows directly from solving two standard eigenvalue problems. LIPSVM can in fact be viewed as a reduced algorithm, because it uses selectively generated points to train the classifier; a major advantage is that it is resistant to outliers. In this paper, following the geometric intuition of LIPSVM, a novel multi-plane learning approach called the Localized Twin SVM via Convex Minimization (LCTSVM) is proposed. This approach determines two nonparallel planes by solving two newly formed SVM-type problems. In addition to keeping the superior characteristics of LIPSVM, LCTSVM has additional advantages: (1) it has similar or better classification capability compared to LIPSVM, TWSVM and LSTSVM; (2) each plane is generated from a quadratic programming problem (QPP) instead of the special convex-difference optimization arising in LIPSVM; (3) the solution can be reduced to solving two systems of linear equations, resulting in considerably lower computational cost; and (4) it can find the global minimum. Experiments carried out on both toy and real-world problems demonstrate the effectiveness of LCTSVM.
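A minimal sketch of the GEPSVM step referred to above, under the usual Tikhonov-regularized formulation (the small ridge added to both matrices is a numerical-stability assumption); LCTSVM itself is not reproduced.

```python
import numpy as np
from scipy.linalg import eigh

def gepsvm_plane(A, B, delta=1e-3):
    """One GEPSVM hyperplane: minimise ||Aw + e b||^2 / ||Bw + e b||^2.

    With G = [A e]'[A e] + delta*I and H = [B e]'[B e] + delta*I, the minimiser
    is the eigenvector of the generalized eigenproblem G z = lambda H z that
    belongs to the smallest eigenvalue.
    """
    E = np.hstack([A, np.ones((A.shape[0], 1))])
    F = np.hstack([B, np.ones((B.shape[0], 1))])
    G = E.T @ E + delta * np.eye(E.shape[1])
    H = F.T @ F + delta * np.eye(F.shape[1])
    vals, vecs = eigh(G, H)              # generalized symmetric eigenproblem
    z = vecs[:, np.argmin(vals)]
    return z[:-1], z[-1]                 # (w, b)
```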

7.
刘峤  方佳艳 《控制与决策》2020,35(2):272-284
The twin support vector machine (TWSVM) and its recently proposed variants all independently solve two constrained dual quadratic programming problems (QPPs) in a high-dimensional space. However, since the number of dual variables in each dual QPP is determined by the number of samples of the other class, directly solving the standard QPPs leads to very high computational complexity on large-scale datasets. To address this, an improved twin support vector machine model, called the fixed-point twin support vector machine (FP-TWSVM), is proposed. The proposed model transforms the high-dimensional dual QPPs of the traditional TWSVM and its variants into a finite series of one-dimensional unimodal function optimization problems, which can be solved by efficient line-search methods such as the Fibonacci algorithm and the golden-section method. Numerical experiments on benchmark datasets, including large-scale datasets, verify the effectiveness of FP-TWSVM. The results show that FP-TWSVM achieves classification accuracy comparable to other models while training faster and consuming less memory.
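As an illustration of the one-dimensional line searches mentioned above, a standard golden-section search for minimizing a unimodal function on an interval (the generic algorithm, not FP-TWSVM itself):

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Minimise a unimodal function f on [a, b] by golden-section search."""
    inv_phi = (math.sqrt(5) - 1) / 2              # 1/phi ~ 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                               # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                                     # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2

# Example: the minimiser of (x - 2)^2 on [0, 5] is returned as ~2.0.
x_star = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```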

8.
程昊翔  王坚 《控制与决策》2016,31(5):949-952
To improve the generalization ability of the twin support vector machine, a new twin large-margin distribution machine algorithm is proposed to strengthen the influence of the margin distribution on the trained model. Theoretical studies show that the margin distribution has a very important effect on generalization performance. The algorithm adds the influence of the margin distribution to the objective function of the standard twin support vector machine, where the margin distribution is characterized by the first- and second-order statistics of the data. Experimental results on benchmark datasets show that the proposed algorithm achieves higher classification accuracy than the SVM, TWSVM and TBSVM algorithms.
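For orientation, in the usual large-margin-distribution formulation the first- and second-order statistics referred to above are the margin mean and margin variance (illustrative notation, not necessarily the paper's exact objective):

```latex
\gamma_i = y_i\,(w^{\top}x_i + b),\qquad
\bar{\gamma} = \frac{1}{m}\sum_{i=1}^{m}\gamma_i,\qquad
\hat{\gamma} = \frac{1}{m}\sum_{i=1}^{m}\bigl(\gamma_i-\bar{\gamma}\bigr)^{2},
```

and the twin SVM objective is augmented with a term of the form $-\lambda_1\bar{\gamma}+\lambda_2\hat{\gamma}$, so that the margin mean is maximized while the margin variance is minimized.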

9.
The twin support vector machine (TWSVM) has recently become a research hot spot in machine learning. TWSVM offers high classification accuracy and fast training, but it does not make full use of the statistical information of the training samples. As an improvement of TWSVM, the Mahalanobis-distance-based twin support vector machine (TMSVM) takes the covariance of each class into account during classification and performs well in many practical problems. However, the training speed of TMSVM needs to be improved, and it is only applicable to binary classification. To address these two issues, the least squares idea is introduced into TMSVM: the inequality constraints are replaced with equality constraints, reducing the quadratic programming problems to two systems of linear equations and yielding the Mahalanobis-distance-based least squares twin support vector machine (LSTMSVM). Combining LSTMSVM with the directed acyclic graph (DAG) strategy, a Mahalanobis-distance-based least squares twin multi-class support vector machine is designed. To reduce the error accumulation of the DAG structure, a Mahalanobis-distance-based measure of inter-class separability is constructed. Experiments on both artificial and UCI datasets show that the proposed algorithms are effective and that, compared with traditional multi-class SVMs, their classification performance is clearly improved.
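A small sketch of the class-covariance (Mahalanobis) distance that TMSVM and LSTMSVM build on; the classifiers themselves are not reproduced:

```python
import numpy as np

def mahalanobis_dist(X, class_samples, reg=1e-6):
    """Mahalanobis distance from each row of X to the centre of one class.

    Uses the class mean and (regularised) class covariance, i.e. exactly the
    per-class statistical information that the plain TWSVM ignores.
    """
    mu = class_samples.mean(axis=0)
    cov = np.cov(class_samples, rowvar=False) + reg * np.eye(X.shape[1])
    cov_inv = np.linalg.inv(cov)
    diff = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
```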

10.
The least squares twin support vector machine (LSTSVM) replaces the complex quadratic programming problems with two systems of linear equations, which gives it the advantages of simple computation and fast training. However, the hyperplanes obtained by LSTSVM are easily affected by outliers, and its solution lacks sparsity. To address this, a robust least squares twin support vector machine based on a truncated least squares loss is proposed, and its robustness to outliers is verified theoretically. To handle large-scale data, a sparse solution of the new model is derived from the representer theorem and incomplete Cholesky decomposition, and a sparse robust least squares twin support vector machine algorithm suited to large-scale data containing outliers is proposed. Numerical experiments show that, compared with existing algorithms, the new algorithm improves classification accuracy, sparsity and convergence speed by 1.97%-37.7%, 26-199 times and 6.6-2027.4 times, respectively.
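In its generic form (notation illustrative), a truncated least squares loss caps the contribution of any single residual, which is the source of the robustness to outliers:

```latex
\ell_{c}(r) \;=\; \min\bigl\{\,r^{2},\; c^{2}\,\bigr\},
```

so a sample whose residual $|r|$ exceeds the threshold $c$ contributes at most $c^{2}$ to the objective instead of growing quadratically, as it would under the ordinary squared loss.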

11.
A regularization-based feature selection algorithm for the twin support vector machine
The optimization idea of the twin support vector machine (TWSVM) originates from the proximal SVM based on generalized eigenvalues (GEPSVM): the problem reduces to solving two SVM-type problems, so the computational cost drops to roughly 1/4 of that of a standard SVM. Besides retaining the advantages of GEPSVM, TWSVM far outperforms it in classification performance, but it still requires solving convex programming problems, and no effective feature selection algorithm for TWSVM has yet been proposed. First, a regularization term is introduced into the TWSVM model, yielding the regularized TWSVM (RTWSVM); unlike TWSVM, RTWSVM guarantees that the problem is a strongly convex program. On this basis, a feature selection algorithm for TWSVM (FRTWSVM) is constructed. This classifier only needs to solve a system of linear equations and requires no convex programming solver. While preserving classification performance comparable to TWSVM and fast computation, this approach also reduces the number of features in the input space; for non-linear problems, FRTWSVM can reduce the number of kernel functions.
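Schematically, the regularization described above adds a strongly convex term to each TWSVM subproblem, for example (an illustrative TBSVM-style form, not necessarily the paper's exact model):

```latex
\min_{w_1,\,b_1,\,\xi}\;\;
\tfrac{1}{2}\,\|A w_1 + e_1 b_1\|^{2} + c_1\, e_2^{\top}\xi
+ \tfrac{\varepsilon}{2}\bigl(\|w_1\|^{2} + b_1^{2}\bigr)
\quad\text{s.t.}\quad -(B w_1 + e_2 b_1) + \xi \ge e_2,\;\; \xi \ge 0,
```

which makes the objective strongly convex in $(w_1, b_1)$ and hence guarantees a unique solution.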

12.
The twin support vector machine is a new type of support vector machine proposed in recent years. For pattern classification, it is far faster than the traditional support vector machine and shows good generalization ability. However, it does not consider that different input samples may have different effects on the formation of the classification hyperplanes, which limits it in some practical problems. To overcome this drawback, a fuzzy twin support vector machine based on a hybrid fuzzy membership is proposed. The algorithm designs a fuzzy membership function that combines distance and tightness, assigns different fuzzy memberships to different training samples, constructs two optimal non-parallel hyperplanes, and finally performs binary classification. Experiments show that the classification performance of the proposed fuzzy twin support vector machine is better than that of the traditional twin support vector machine.
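A minimal sketch of the distance component of such a membership function (the classic class-center form; the tightness component and the exact combination used in the paper are not reproduced):

```python
import numpy as np

def distance_membership(X_class, delta=1e-3):
    """Classic distance-based fuzzy membership for the samples of one class.

    Samples near the class centre get membership close to 1; samples far from
    the centre (more likely noise or outliers) get smaller membership.
    """
    centre = X_class.mean(axis=0)
    d = np.linalg.norm(X_class - centre, axis=1)
    return 1.0 - d / (d.max() + delta)
```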

13.
Pattern Analysis and Applications - During the last few years, multiple surface classification algorithms, such as twin support vector machine (TWSVM), least squares twin support vector machine...

14.
The twin support vector machine (TWSVM) has been a research hot spot in machine learning in recent years. Although its performance is better than that of the traditional support vector machine (SVM), the kernel selection problem still directly affects the performance of TWSVM. Wavelet analysis has the characteristics of multivariate interpolation and sparse representation, and it is suitable for analyzing local signals and detecting transient signals. A wavelet kernel function based on wavelet analysis can approximate arbitrary nonlinear functions. Based on the features of the wavelet kernel and the kernel selection problem, this paper proposes the wavelet twin support vector machine (WTWSVM), which introduces the wavelet kernel function into TWSVM and thereby combines wavelet analysis techniques with TWSVM. The experimental results indicate that WTWSVM is feasible and that it significantly improves the classification accuracy and generalization ability of TWSVM.
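A sketch of a commonly used translation-invariant (Morlet-type) wavelet kernel of the kind WTWSVM plugs into TWSVM; the constant 1.75 and the product form follow the standard wavelet-kernel construction, and the paper's exact kernel may differ:

```python
import numpy as np

def morlet_wavelet_kernel(X1, X2, a=1.0):
    """Morlet-type wavelet kernel K(x, z) = prod_d cos(1.75 u_d) exp(-u_d^2 / 2),
    with u_d = (x_d - z_d) / a, evaluated for all pairs of rows of X1 and X2."""
    # Pairwise coordinate differences, shape (n1, n2, dim).
    U = (X1[:, None, :] - X2[None, :, :]) / a
    return np.prod(np.cos(1.75 * U) * np.exp(-0.5 * U ** 2), axis=2)

# Example: the kernel matrix of 5 points with themselves is 5 x 5 with
# ones on the diagonal, since K(x, x) = 1.
X = np.random.default_rng(0).normal(size=(5, 3))
K = morlet_wavelet_kernel(X, X)
```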

15.
To address the problems that the fuzzy twin support vector machine (FTSVM) is still sensitive to noise, prone to overfitting, and unable to effectively distinguish support vectors from outliers, an improved robust fuzzy twin support vector machine (IRFTSVM) is proposed. A new hybrid membership function is constructed by combining an improved k-nearest-neighbor membership function with a membership function based on within-class hyperplanes. A regularization term and additional constraints are introduced into the objective function of FTSVM, which realizes structural risk minimization, avoids matrix inversion, and allows the non-linear case to be extended directly from the linear case, as in the classical SVM. The hinge loss is replaced with the pinball loss to reduce sensitivity to noise. The algorithm is evaluated on UCI and artificial datasets and compared with the SVM, TWSVM, FTSVM, PTSVM and TBSVM algorithms. The experimental results show that its classification results are satisfactory.
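For reference, the pinball loss that replaces the hinge loss here is, in its usual form for the margin variable $u = 1 - y\,f(x)$:

```latex
L_{\tau}(u) =
\begin{cases}
u, & u \ge 0,\\[2pt]
-\tau\, u, & u < 0,
\end{cases}
\qquad 0 \le \tau \le 1,
```

unlike the hinge loss (the case $\tau = 0$), it also penalizes correctly classified points lying far inside the margin, which reduces sensitivity to noise around the decision boundary.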

16.
A p-norm regularized support vector machine classification algorithm
The L2-norm penalty support vector machine (SVM) is currently one of the most widely used classification algorithms, and L1-norm and L0-norm penalty SVM algorithms that perform feature selection and classifier construction simultaneously have also been proposed. In these methods, however, the regularization order is given in advance, preset to p=2 or p=1. Our experimental study shows that, for different data, using different regularization orders can improve the prediction accuracy of the classification algorithm. This paper proposes a new design pattern for p-norm regularized SVM classifiers in which the order p of the regularization norm is adjustable, taking values in the range from 0 to 2, so that the L2-norm penalty SVM, the L1-norm penalty SVM and the L0-norm penalty SVM are covered as special cases.
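Schematically, the adjustable penalty is the $p$-th power of the $l_p$ norm of the weight vector; an illustrative hinge-loss form of such a classifier is

```latex
\|w\|_p^p = \sum_{j=1}^{n} |w_j|^{p},
\qquad
\min_{w,\,b}\;\; \lambda\,\|w\|_p^p
 + \sum_{i=1}^{m}\bigl[\,1 - y_i\,(w^{\top}x_i + b)\,\bigr]_{+},
```

with $p=2$ giving the ridge-type penalty, $p=1$ the lasso-type penalty, and $p \to 0$ approaching the $L_0$ (cardinality) penalty.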

17.
The twin support vector machine (Twin Support Vector Machine, TWSVM) is a new machine learning method developed from the support vector machine (SVM). As a binary classifier, its basic idea is to seek two hyperplanes such that each classification plane is close to the samples of its own class and far from the samples of the other class. As an emerging machine learning method, TWSVM has attracted wide attention from researchers at home and abroad since it was proposed and has become a research hot spot in machine learning. This paper surveys the latest research progress on twin support vector machines: first, the basic concepts and the basic model of TWSVM are introduced; then, recent novel TWSVM models and research advances are summarized, the advantages and disadvantages of representative algorithms are analyzed, and experimental comparisons are given; finally, future research directions are discussed.
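For completeness, the basic linear TWSVM model that such surveys start from: with $A$ and $B$ the sample matrices of the two classes and $e_1$, $e_2$ all-ones vectors, the two hyperplanes $w_1^{\top}x + b_1 = 0$ and $w_2^{\top}x + b_2 = 0$ are obtained from the pair of QPPs

```latex
\begin{aligned}
&\min_{w_1,\,b_1,\,\xi}\;\; \tfrac{1}{2}\|A w_1 + e_1 b_1\|^{2} + c_1\, e_2^{\top}\xi
&&\text{s.t.}\;\; -(B w_1 + e_2 b_1) + \xi \ge e_2,\;\; \xi \ge 0,\\
&\min_{w_2,\,b_2,\,\eta}\;\; \tfrac{1}{2}\|B w_2 + e_2 b_2\|^{2} + c_2\, e_1^{\top}\eta
&&\text{s.t.}\;\; (A w_2 + e_1 b_2) + \eta \ge e_1,\;\; \eta \ge 0,
\end{aligned}
```

and a new point is assigned to the class whose hyperplane it lies closer to.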

18.
The least squares support vector machine (LSSVM), like the standard support vector machine (SVM), is based on structural risk minimization, but it can be obtained by solving a simpler optimization problem than that of the SVM. However, local structure information of the data samples, especially the intrinsic manifold structure, is not fully taken into consideration in LSSVM. To address this problem, and inspired by manifold learning techniques, we propose a novel iterative least squares classifier, coined the optimal locality preserving least squares support vector machine (OLP-LSSVM). The idea is to combine structural risk minimization and a locality preserving criterion in a unified framework so as to exploit the manifold structure of the data samples and enhance LSSVM. Furthermore, inspired by recent developments in simultaneous optimization, the adjacency graph of the locality preserving criterion is optimized simultaneously, giving rise to improved discriminative performance. The resulting model can be solved by an alternating optimization method. Experimental results on several publicly available benchmark data sets show the feasibility and effectiveness of the proposed method.
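In its standard graph form, the locality preserving criterion referred to above is a penalty that keeps the projections of neighboring samples close (illustrative; the paper's combined objective is not reproduced). With the samples as the columns of $X$:

```latex
\sum_{i,j} S_{ij}\,\bigl(w^{\top}x_i - w^{\top}x_j\bigr)^{2}
 \;=\; 2\, w^{\top} X L X^{\top} w,
\qquad L = D - S,\;\; D_{ii} = \sum_{j} S_{ij},
```

where $S$ is the adjacency (similarity) matrix of the neighborhood graph and $L$ its graph Laplacian; adding such a term to the LSSVM objective is what lets the classifier exploit the manifold structure of the data.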

19.
20.
The parameter values of the kernel function affect classification results to a certain extent. In this paper, a multiclass classification model based on an improved least squares support vector machine (LSSVM) is presented. In the model, the ε-insensitive loss function is replaced by a quadratic loss function, and the inequality constraints are replaced by equality constraints. Consequently, the quadratic programming problem is simplified to the problem of solving systems of linear equations, and the SVM algorithm is realized by the least squares method. When the LSSVM is used for multiclass classification, a dynamic selection of the kernel function parameter is proposed, which improves classification accuracy. The Fibonacci symmetric search algorithm is simplified and improved, and the rules governing the kernel-parameter search region and the best shortening step are studied. The best multiclass classification results are obtained by combining the kernel-parameter search region with the best shortening step. Simulation results show the validity of the model.
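A minimal sketch of the LSSVM training step described above, written as the single linear system of the standard formulation (in the equivalent reparametrization for ±1 labels) with an RBF kernel; the dynamic kernel-parameter search of the paper is not reproduced:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian RBF kernel matrix for the rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    """Solve the LSSVM classification problem as one linear system:

        [ 0   1^T         ] [ b     ]   [ 0 ]
        [ 1   K + I/gamma ] [ alpha ] = [ y ]

    i.e. equality constraints and a quadratic loss instead of the usual QP.
    """
    m = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((m + 1, m + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(m) / gamma
    rhs = np.concatenate([[0.0], y.astype(float)])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                       # b, alpha

def lssvm_predict(X_new, X_train, b, alpha, sigma=1.0):
    """Sign of the decision function f(x) = sum_i alpha_i K(x, x_i) + b."""
    return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)
```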
