Similar Documents
 20 similar documents found (search time: 375 ms)
1.
The L2-norm-penalized least squares support vector machine (LS-SVM) classifier is a widely studied and widely used machine learning algorithm in which the order of the regularizer is fixed in advance at q = 2. This paper proposes a q-norm regularized LS-SVM classifier with 0 < q < ∞, extending the value of q to the rational numbers. A grid search varies the regularization trade-off parameter c and the regularization order q; at each selected (c, q) pair, the classifier objective is solved by an iterative re-weighting method, and the pair with the smallest classification prediction error is chosen, improving two performance measures: prediction error and the number of selected features. Experiments on real data from several domains show that the proposed algorithm predicts more accurately while also performing feature selection, outperforming the L2-norm-penalized LS-SVM.
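The iterative re-weighting step this abstract describes can be sketched as follows. This is a minimal sketch under assumptions: the function name, the synthetic data, and the quadratic majorizer are illustrative, not the paper's exact solver.

```python
import numpy as np

def irls_lq_least_squares(X, y, c=1.0, q=0.5, n_iter=30, eps=1e-6):
    """Sketch of iterative re-weighting for the L_q-penalized
    least-squares objective  min ||y - Xw||^2 + c * sum_i |w_i|^q."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]      # unpenalized start
    for _ in range(n_iter):
        # A quadratic majorizer of |w_i|^q yields ridge weights
        # q * |w_i|^(q-2); the eps floor keeps them finite near zero.
        d = q * np.maximum(np.abs(w), eps) ** (q - 2.0)
        w = np.linalg.solve(X.T @ X + c * np.diag(d), X.T @ y)
    return w
```

On sparse ground truth, coordinates near zero receive very large ridge weights and are driven to (numerically) zero, which is how the re-weighting realizes feature selection for q < 1.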

2.
This paper proposes an L1-norm regularized support vector machine (SVM) clustering algorithm that performs clustering and feature selection simultaneously. The primal and dual forms of the L1-norm regularized SVM clustering problem are given, and the difficult mixed-integer programming problem is solved by a method similar to iterative coordinate descent. Experiments on several data sets show that the clustering accuracy of L1-norm regularized SVM clustering is close to that of L2-norm regularized SVM clustering, while additionally achieving feature selection.

3.
This paper proposes an L1-norm regularized support vector machine (SVM) clustering algorithm that performs clustering and feature selection simultaneously. The primal and dual forms of the L1-norm regularized SVM clustering problem are given, and the difficult mixed-integer programming problem is solved by a method similar to iterative coordinate descent. Experiments on several data sets show that the clustering accuracy of L1-norm regularized SVM clustering is close to that of L2-norm regularized SVM clustering, while additionally achieving feature selection.

4.
An iterative re-weighted q-norm regularized least squares support vector machine (LS-SVM) classification algorithm is proposed. The algorithm selects, through a cross-validation procedure, the order q of the regularization norm (0…

5.
刘建伟  付捷  罗雄麟 《计算机工程》2012,38(13):148-151
This paper proposes an L1+L2-norm regularized logistic regression classification algorithm. The L2-norm regularizer is introduced to resolve the singularity that arises in the iterations of the L1-regularized logistic regression algorithm; the non-smooth L1-norm term is handled by augmenting the sample vectors and introducing a new weight vector, and the transformed optimization problem is finally solved with the conjugate gradient method. Experiments on a variety of real data sets show that the algorithm outperforms L2-norm, L1-norm, and Lp-norm regularized logistic regression models, with good feature selection and classification performance.
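The L1+L2 (elastic-net) logistic objective above can be sketched in a few lines. Note the hedge: this uses proximal gradient descent with soft-thresholding rather than the paper's augmentation-plus-conjugate-gradient scheme, and all hyperparameter values are assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net_logistic(X, y, lam1=0.2, lam2=0.1, lr=0.5, n_iter=2000):
    """Sketch of L1+L2 regularized logistic regression, labels y in {-1, +1}:
    min mean(log(1 + exp(-y_i x_i^T w))) + lam2/2 ||w||^2 + lam1 ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(y * (X @ w)))         # sigmoid(-margin)
        grad = -(X.T @ (y * p)) / n + lam2 * w        # smooth part's gradient
        w = soft_threshold(w - lr * grad, lr * lam1)  # prox of the L1 term
    return w
```

The soft-threshold step is what produces exact zeros, i.e. the feature selection the abstract refers to; the L2 term keeps the smooth part strongly convex.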

6.
王影  王浩  俞奎  姚宏亮 《计算机科学》2012,39(1):185-189
Bayesian network classifiers based on node ordering currently ignore the information between the already-selected variables in the node sequence and the class label, which makes it difficult to improve classification accuracy further. To address this, a simple and efficient learning algorithm for Bayesian network classifiers is proposed: the L1-regularized Bayesian network classifier (L1-BNC). By adjusting the constraint value in the Lasso method, it fully exploits the information in the regression residuals, combines it with the information between the selected variables and the class label, and forms a good ordered topological sequence of variables (the L1 regularization path); based on this sequence, the K2 algorithm generates a good Bayesian network classifier. Experiments show that L1-BNC outperforms existing Bayesian network classifiers in classification accuracy, and on most data sets it also outperforms the SVM, KNN and J48 classification algorithms.
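The variable-ordering idea above can be illustrated with a small sketch. Hedged assumption: instead of the paper's actual Lasso regularization path, this uses a greedy correlation-with-residual pass (an OMP-style approximation to the order in which variables enter the L1 path); the function name is invented for illustration.

```python
import numpy as np

def variable_order_by_l1_path(X, y):
    """Sketch: produce an ordered variable sequence for a K2-style
    structure search by repeatedly picking the remaining variable most
    correlated with the current regression residual."""
    d = X.shape[1]
    remaining, selected = list(range(d)), []
    residual = y.astype(float).copy()
    for _ in range(d):
        scores = [abs(X[:, j] @ residual) for j in remaining]
        j = remaining.pop(int(np.argmax(scores)))
        selected.append(j)
        # Refit least squares on the selected set, update the residual.
        coef, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ coef
    return selected
```

The resulting ordering (strongest predictors first) is the kind of topological sequence that K2 would then consume.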

7.
An Echo State Network Based on L1-Norm Regularization
韩敏  任伟杰  许美玲 《自动化学报》2014,40(11):2428-2435
To address the ill-conditioned solutions and model-size control problem of echo state networks, this paper proposes an improved echo state network based on L1-norm regularization. Adding an L1-norm penalty term to the objective function improves the numerical stability of the solution, while the feature-selection ability of L1 regularization controls the network's complexity and prevents overfitting. The L1-regularized problem is solved with the least angle regression (LARS) algorithm to compute the regularization path, and model selection is performed with the Bayesian information criterion (BIC), avoiding estimation of the regularization parameter. The model is applied to time series prediction on synthetic and real data; simulation results demonstrate the method's effectiveness and practicality.
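The L1-plus-BIC model selection described above can be sketched as follows. Hedged assumptions: a plain ISTA solver over a small lambda grid stands in for the paper's LARS path, and the "reservoir state" matrix is just random data here; function names are illustrative.

```python
import numpy as np

def lasso_ista(S, y, lam, n_iter=500):
    """Plain ISTA for  min (1/2n)||y - Sw||^2 + lam * ||w||_1."""
    n, d = S.shape
    lr = n / (np.linalg.norm(S, 2) ** 2)          # 1 / Lipschitz constant
    w = np.zeros(d)
    for _ in range(n_iter):
        z = w - lr * (S.T @ (S @ w - y)) / n
        w = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)
    return w

def readout_by_bic(S, y, lams=(0.01, 0.05, 0.1, 0.5)):
    """Sketch: pick the sparse readout over collected reservoir states S
    by the Bayesian information criterion, avoiding a tuned lambda."""
    n = len(y)
    best_w, best_bic = None, np.inf
    for lam in lams:
        w = lasso_ista(S, y, lam)
        rss = float(np.sum((y - S @ w) ** 2))
        k = int(np.count_nonzero(w))
        bic = n * np.log(rss / n + 1e-12) + k * np.log(n)
        if bic < best_bic:
            best_w, best_bic = w, bic
    return best_w
```

BIC trades residual error against the number of active readout weights, which is how the paper sidesteps choosing the regularization parameter directly.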

8.
任胜兵  谢如良 《计算机工程》2019,45(10):189-195
In regularized multiple kernel learning, sparse kernel weights can discard useful information and degrade generalization, while selecting all kernels through a non-sparse model introduces redundant information and is sensitive to noise. To address these problems, an elastic-net-regularized multiple kernel learning algorithm based on the AdaBoost framework is proposed. When a base classifier is selected at each iteration, the kernel weights are constrained by an elastic-net regularizer, i.e., a mixed L1-norm and Lp-norm constraint; a base classifier built on an optimal convex combination of several base kernels is constructed and integrated into the final strong classifier. Experimental results show that, while retaining the advantages of ensemble algorithms, the method balances sparsity and non-sparsity of the kernel weights and, compared with the L1-MKL and Lp-MKL algorithms, obtains a classifier of higher classification accuracy in fewer iterations.

9.
Finding the optimal parameters of a support vector machine (SVM) is one of the hot topics in SVM research. The 2-norm soft-margin SVM (L2-SVM) renders the samples linearly separable. Building on the original single-regularization-parameter L2-SVM, a double-regularization-parameter L2-SVM is proposed and its dual form obtained, which determines the objective function to be optimized. Combined with a gradient method, a new approach to SVM parameter selection (Doupenalty-Gradient) is then proposed. Experiments on 10 benchmark data sets show that the Doupenalty-Gradient method is feasible and effective, and for the samples used it greatly improves classification accuracy.

10.
A Multi-class SVM Classifier Based on a Directed Acyclic Graph
This paper proposes a multi-class SVM classifier, ACDMSVM, based on a decision directed acyclic graph and active constraints. For a k-class problem it combines k(k-1)/2 improved binary SVM classifiers. To speed up training and decision making, the standard binary SVM is improved in three ways: a large-margin method is used, the soft-margin error variables take 2-norm form, and active constraints are applied. In the training stage, node selection uses a rooted binary directed acyclic graph with k(k-1)/2 internal nodes and k leaf nodes. Numerical experiments show that this is a fast multi-class SVM classifier.
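The decision-DAG evaluation described above (each internal node eliminates one class) can be sketched without any SVM machinery. Hedged assumption: `pairwise` is any mapping from class pairs to decision functions; the toy classifiers below are placeholders, not trained SVMs.

```python
def ddag_predict(classes, pairwise, x):
    """Sketch of rooted-binary-DAG decision evaluation: k-1 pairwise
    tests, each eliminating one class.  pairwise[(a, b)] is a callable
    returning >= 0 to keep a (eliminating b) and < 0 to keep b."""
    live = sorted(classes)
    while len(live) > 1:
        a, b = live[0], live[-1]
        if pairwise[(a, b)](x) >= 0:
            live.pop()        # b is eliminated
        else:
            live.pop(0)       # a is eliminated
    return live[0]
```

For k classes only k-1 of the k(k-1)/2 classifiers are evaluated per sample, which is why the DAG layout is fast at decision time.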

11.
Sparsity-driven classification technologies have attracted much attention in recent years because they provide more compressive representations and clearer interpretation. The two most popular classification approaches are support vector machines (SVMs) and kernel logistic regression (KLR), each with its own advantages. The sparsification of SVM has been well studied, and many sparse versions of the 2-norm SVM, such as the 1-norm SVM (1-SVM), have been developed, but the sparsification of KLR has received less attention. Existing sparsifications of KLR are based mainly on L1-norm and L2-norm penalties, which yield solutions that are not as sparse as they should be. A recent study of L1/2 regularization theory in compressive sensing shows that L1/2 sparse modeling can yield solutions sparser than those of the 1-norm and 2-norm, and that the model can be solved efficiently by a simple iterative thresholding procedure. The objective function treated in L1/2 regularization theory is, however, of square form, whose gradient is linear in its variables (a so-called linear gradient function). In this paper, by extending the linear gradient function of the L1/2 regularization framework to the logistic function, we propose a novel sparse version of KLR, the 1/2 quasi-norm kernel logistic regression (1/2-KLR). The version integrates the advantages of KLR and L1/2 regularization and defines an efficient implementation scheme for sparse KLR. We suggest a fast iterative thresholding algorithm for 1/2-KLR and prove its convergence. A series of simulations demonstrates that 1/2-KLR often obtains sparser solutions than existing sparsity-driven versions of KLR at the same or better accuracy, and this holds even in comparison with sparse SVMs (1-SVM and 2-SVM).
We also show an exclusive advantage of 1/2-KLR: the regularization parameter can be set adaptively whenever the sparsity (correspondingly, the number of support vectors) is given, which suggests a methodology for comparing the sparsity-promotion capability of different sparsity-driven classifiers. As an illustration of its benefits, we give two applications of 1/2-KLR in semi-supervised learning, showing that it can be applied successfully to classification tasks in which only a few data points are labeled.
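The iterative thresholding that L1/2 regularization relies on uses a half-thresholding operator rather than L1's soft threshold. The sketch below follows the compressive-sensing formulation attributed to Xu et al.; it is a hedged illustration of the operator itself, not the paper's 1/2-KLR update, and the constants are quoted from memory of that formulation.

```python
import numpy as np

def half_threshold(x, lam):
    """Sketch of the half-thresholding operator for the L1/2 penalty:
    values below a lam^(2/3)-scaled threshold are zeroed, larger values
    are shrunk through a cosine expression."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    t = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)   # threshold
    mask = np.abs(x) > t
    phi = np.arccos((lam / 8.0) * (np.abs(x[mask]) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * x[mask] * (
        1.0 + np.cos(2.0 * np.pi / 3.0 - (2.0 / 3.0) * phi))
    return out
```

Unlike soft thresholding, the operator has a jump at the threshold, which is one reason L1/2 solutions come out sparser than L1 solutions at comparable fit.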

12.

The classical support vector machine (SVM) and its twin variant, the twin support vector machine (TWSVM), use the hinge loss, which grows linearly, whereas the least squares SVM (LSSVM) and the least squares twin SVM (LSTSVM) use the L2-norm of the error, which grows quadratically. The robust Huber loss function can be viewed as a generalization of the hinge loss and the L2-norm loss: it behaves like the quadratic L2-norm loss for small errors and like the linear hinge loss beyond a specified distance. This paper proposes three functional iterative approaches based on a generalized Huber loss function for support vector classification: one in the spirit of SVM (the generalized Huber support vector machine) and two in the spirit of TWSVM (the generalized Huber twin support vector machine and a regularized variant of it). The proposed approaches find solutions iteratively and eliminate the need to solve any quadratic programming problem (QPP), as required by SVM and TWSVM. Their main advantages are, first, that the robust Huber loss gives better generalization and less sensitivity to noise and outliers than a quadratic loss, and second, that the functional iterative scheme removes the QPP and makes the approaches faster. The efficacy of the approach is established by numerical experiments on several real-world datasets against related methods (SVM, TWSVM, LSSVM and LSTSVM); the classification results are convincing.
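The quadratic-then-linear behaviour described above can be made concrete with a small sketch. Hedged assumptions: the exact parameterization and the transition width `delta` are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def generalized_huber_margin_loss(m, delta=1.0):
    """Sketch of a Huber-style classification loss on the margin
    m = y * f(x): zero when m >= 1, quadratic for small violations,
    linear once the violation u = max(0, 1 - m) exceeds delta."""
    u = np.maximum(1.0 - np.asarray(m, dtype=float), 0.0)
    quadratic = 0.5 * u ** 2 / delta
    linear = u - 0.5 * delta
    return np.where(u <= delta, quadratic, linear)
```

The two branches meet continuously at u = delta (both equal delta/2 there), so the loss is smooth enough for the functional iterative schemes the abstract mentions, while the linear tail limits the influence of outliers.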

13.
Traditional support vector machine (SVM) classification is sensitive to outliers, uses many support vectors (SVs), and produces non-sparse separating-hyperplane parameters. To address these problems, a smoothly clipped absolute deviation (SCAD)-penalized truncated hinge loss SVM (SCAD-TSVM) is proposed and used to build a financial distress early-warning model, together with an iterative update algorithm for solving the model. An empirical analysis on financial data of A-share manufacturing companies listed on the Shanghai and Shenzhen stock exchanges compares T-2 and T-3 models built with the L1-norm-penalized SVM, the SCAD-penalized SVM and the truncated hinge loss SVM (TSVM). The T-2 and T-3 models built with SCAD-TSVM show the best sparsity and the highest forecast accuracy, and their average prediction accuracies over different numbers of training samples are higher than those of the L1-norm SVM (L1-SVM), SCAD-SVM and TSVM algorithms.
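The SCAD penalty mentioned above has a standard closed form (Fan and Li's smoothly clipped absolute deviation), which can be sketched directly; the default a = 3.7 is the conventional choice, and the function name is ours.

```python
import numpy as np

def scad_penalty(beta, lam=1.0, a=3.7):
    """SCAD penalty: linear like L1 near zero, quadratic blending in the
    middle, constant beyond a*lam, so large coefficients are not
    over-penalized (which is what yields sparse yet low-bias fits)."""
    b = np.abs(np.asarray(beta, dtype=float))
    small = lam * b
    middle = (2.0 * a * lam * b - b ** 2 - lam ** 2) / (2.0 * (a - 1.0))
    large = np.full_like(b, lam ** 2 * (a + 1.0) / 2.0)
    return np.where(b <= lam, small, np.where(b <= a * lam, middle, large))
```

The three pieces join continuously at |beta| = lam and |beta| = a*lam, which the test below checks.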

14.
Qiao  Chen  Yang  Lan  Shi  Yan  Fang  Hanfeng  Kang  Yanmei 《Applied Intelligence》2022,52(1):237-253

Sparsity is crucial for deep neural networks: it can improve their learning ability, especially on high-dimensional data with small sample sizes. Commonly used regularization terms for keeping deep neural networks sparse are based on the L1-norm or L2-norm; however, these are not the most faithful substitutes for the L0-norm. In this paper, based on the fact that minimizing a log-sum function is an effective approximation to minimizing the L0-norm, a sparse penalty term on the connection weights using the log-sum function is introduced. By embedding the corresponding iterative re-weighted-L1 minimization algorithm in k-step contrastive divergence, the connections of deep belief networks can be updated in a sparsely self-adaptive way. Experiments on two kinds of biomedical datasets, both typical small-sample datasets with large numbers of variables, i.e., brain functional magnetic resonance imaging data and single nucleotide polymorphism data, show that the proposed deep belief networks with self-adaptive sparsity learn layer-wise sparse features effectively, with better identification accuracy and sparsity capability than several typical learning machines.
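The log-sum penalty and its re-weighted-L1 linearization can be sketched in a few lines. Hedged assumptions: the epsilon smoothing constant and function names are illustrative, and the sketch shows only the penalty/weighting step, not its embedding in contrastive divergence.

```python
import numpy as np

def log_sum_penalty(w, eps=1e-3):
    """Log-sum surrogate for the L0-norm: sum(log(|w_i| + eps))."""
    return float(np.sum(np.log(np.abs(w) + eps)))

def reweighted_l1_weights(w, eps=1e-3):
    """One re-weighted-L1 step linearizes the log-sum term, giving each
    |w_i| the weight 1 / (|w_i| + eps): near-zero connections are pushed
    harder toward zero, mimicking L0 more closely than plain L1."""
    return 1.0 / (np.abs(w) + eps)
```

Iterating fit-then-reweight with these weights is the "sparse self-adaption" the abstract describes: the penalty adapts to the current magnitudes of the connection weights.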

15.
The classification performance of a support vector machine depends largely on the choice of its parameters. To improve SVM classification accuracy, this paper optimizes those parameters with an artificial bee colony (ABC) algorithm based on a chaotic mechanism. Building on the standard ABC algorithm, the population is initialized with the Logistic chaotic map and a tournament selection strategy is adopted, further improving the ABC algorithm's convergence speed and search precision. With classification accuracy as the fitness function, the ABC algorithm optimizes the SVM's penalty factor and kernel parameter. Classification tests on several standard data sets show that an SVM classifier optimized by the chaos-based ABC algorithm achieves higher classification accuracy.
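The chaotic initialization step above can be sketched directly: iterate the Logistic map and scale the resulting sequence into the search box. Hedged assumptions: the seed value and the scheme of filling the population row by row are illustrative choices.

```python
import numpy as np

def logistic_chaos_init(n_bees, dim, lower, upper, x0=0.7):
    """Sketch of chaotic population initialization: iterate the Logistic
    map x <- 4x(1-x) (fully chaotic at r = 4), then map the sequence
    into [lower, upper].  x0 must avoid degenerate starting points
    (0, 0.25, 0.5 and 0.75 all collapse onto fixed points)."""
    seq = np.empty(n_bees * dim)
    x = x0
    for i in range(seq.size):
        x = 4.0 * x * (1.0 - x)
        seq[i] = x
    return lower + (upper - lower) * seq.reshape(n_bees, dim)
```

Compared with uniform random initialization, the chaotic sequence is deterministic yet covers the interval densely, which is the property such ABC variants exploit.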

16.
By promoting the parallel hyperplanes of SVM to non-parallel ones, twin support vector machines (TWSVM) have attracted increasing attention, and many modifications of them exist. Most of these modifications, however, minimize a loss function subject to an l2-norm or l1-norm penalty. Such methods are non-adaptive, since the penalty form is fixed and pre-determined for all types of data. To overcome this shortcoming, we propose the lp-norm least squares twin support vector machine (lp-LSTSVM), an adaptive learning procedure with an lp-norm penalty (0<p<1), where p is viewed as an adjustable parameter that can be chosen automatically from the data. By adjusting p, lp-LSTSVM can both select relevant features and improve classification accuracy. The optimization problems in lp-LSTSVM are solved through a series of systems of linear equations (LEs), and lower bounds on the solutions are established, which is extremely helpful for feature selection. Experiments on several standard UCI data sets and synthetic data sets show the feasibility and effectiveness of the proposed method.

17.
Not only different databases but also the two classes within a single database can have different data structures. SVM and LS-SVM typically minimize the regularized empirical risk subject to a fixed penalty (L2 or L1); they are non-adaptive because the penalty form is pre-determined, so they often perform well only in certain situations. For example, LS-SVM with an L2 penalty is not preferred when the underlying model is sparse. This paper proposes an adaptive penalty learning procedure, the evolution strategies based adaptive Lp least squares support vector machine (ES-based Lp LS-SVM), to address this issue. By introducing multiple kernels, an Lp-penalty-based nonlinear objective function is derived and solved with the iterative re-weighted minimal solver (IRMS) algorithm; evolution strategies (ES) are then used to solve the multi-parameter optimization problem. The penalty parameter p, the kernel, and the regularization parameters are selected adaptively by the proposed ES-based algorithm during training, which makes it easier to reach the optimal solution. Numerical experiments on two artificial data sets and six real-world data sets show that the proposed procedure offers better generalization performance than the standard SVM, the LS-SVM and other improved algorithms.

18.
Manifold regularization (MR) is a promising regularization framework for semi-supervised learning. It introduces an additional penalty term that regularizes the smoothness of functions on data manifolds and has proved very effective at exploiting the underlying geometric structure of data for classification, and the performance of MR algorithms is known to depend heavily on the design of this penalty term. In this paper, we propose a new way to define the penalty term on manifolds using the sparse representations of data instead of adjacency graphs. The penalty term is built in two steps. First, the best sparse linear reconstruction coefficients for each data point are computed by l1-norm minimization. Second, the learner is subjected to a cost function that aims to preserve these sparse coefficients, and this cost function is used as the new penalty term for regularization algorithms. Compared with previous semi-supervised learning algorithms, the new penalty term needs fewer input parameters and has strong discriminative power for classification. We propose a least squares classifier using this penalty term, the Sparse Regularized Least Square Classification (S-RLSC) algorithm. Experiments on real-world data sets show that the algorithm is very effective.

19.
In this paper, we propose a novel ECG arrhythmia classification method using power-spectral features and a support vector machine (SVM) classifier. The method extracts the electrocardiogram's spectral features and three timing-interval features, using non-parametric power spectral density (PSD) estimation to obtain the spectral features. The approach optimizes the relevant parameters of the SVM classifier, namely the Gaussian radial basis function (GRBF) kernel parameter σ and the penalty parameter C, through an intelligent algorithm based on particle swarm optimization (PSO). ECG records from the MIT-BIH arrhythmia database are selected as test data. The proposed power-spectral hybrid particle swarm optimization and support vector machine (SVMPSO) classification method offers significantly improved performance over an SVM whose parameters are constant and chosen manually.
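The PSO parameter search described above can be sketched generically. Hedged assumptions: in the paper the objective f would be the SVM's cross-validation error as a function of (C, σ); here f is any callable on a vector, and the inertia and acceleration constants are common textbook values, not the paper's settings.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, n_iter=150,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm optimization sketch over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pos = rng.uniform(lo, hi, size=(n_particles, lo.size))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1 = rng.random(pos.shape)
        r2 = rng.random(pos.shape)
        # Velocity blends inertia, personal-best pull and global-best pull.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([f(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())
```

Swapping a cross-validation routine in for the toy objective turns this into the (C, σ) search the abstract describes.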

20.
This paper presents a novel noise-robust graph-based semi-supervised learning algorithm for the challenging problem of semi-supervised learning with noisy initial labels. Inspired by the successful use of sparse coding for noise reduction, we propose a new L1-norm formulation of Laplacian regularization for graph-based semi-supervised learning. Since this L1-norm Laplacian regularization is defined explicitly over the eigenvectors of the normalized Laplacian matrix, graph-based semi-supervised learning is formulated as an L1-norm linear reconstruction problem that can be solved efficiently by sparse coding. Furthermore, by working with only a small subset of eigenvectors, we develop a fast sparse coding algorithm for our L1-norm semi-supervised learning. Finally, we evaluate the proposed algorithm on noise-robust image classification; experimental results on several benchmark datasets demonstrate its promising performance.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号