Similar Documents
 20 similar documents found
1.
By replacing the parallel hyperplanes of the standard SVM with nonparallel ones, twin support vector machines (TWSVM) have attracted considerable attention, and many modifications of them have been proposed. However, most of these modifications minimize a loss function subject to an l2-norm or l1-norm penalty. These methods are non-adaptive, since the penalty form is fixed in advance regardless of the type of data. To overcome this shortcoming, we propose the lp-norm least squares twin support vector machine (lpLSTSVM). The new model is an adaptive learning procedure with an lp-norm penalty (0<p<1), where p is an adjustable parameter that can be chosen automatically from the data. By adjusting p, lpLSTSVM can not only select relevant features but also improve classification accuracy. The optimization problems in lpLSTSVM are solved through a series of systems of linear equations (LEs), and lower bounds on the solution are established, which are extremely helpful for feature selection. Experiments on several standard UCI data sets and synthetic data sets show the feasibility and effectiveness of the proposed method.
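The linear-equation route described above can be illustrated on the standard l2 least squares twin SVM, where each nonparallel plane is obtained from one regularized linear system. This is a minimal sketch with hypothetical function names; the adaptive lp-norm penalty (0<p<1) of lpLSTSVM is not implemented here.

```python
import numpy as np

def lstsvm_planes(A, B, c1=1.0, c2=1.0, eps=1e-8):
    """Fit the two nonparallel planes of a (standard l2) least squares twin SVM.

    Each plane minimizes an unconstrained least-squares objective, so its
    solution comes from a single linear system. Illustrative sketch only;
    the lp-norm (0<p<1) adaptation of lpLSTSVM is omitted."""
    E = np.hstack([A, np.ones((A.shape[0], 1))])  # [A, e]: augment with bias column
    F = np.hstack([B, np.ones((B.shape[0], 1))])  # [B, e]
    e_A, e_B = np.ones(E.shape[0]), np.ones(F.shape[0])
    # Plane 1: min ||E z1||^2 + c1 ||F z1 + e||^2  ->  normal equations
    z1 = np.linalg.solve(E.T @ E + c1 * F.T @ F + eps * np.eye(E.shape[1]),
                         -c1 * F.T @ e_B)
    # Plane 2: min ||F z2||^2 + c2 ||E z2 - e||^2
    z2 = np.linalg.solve(F.T @ F + c2 * E.T @ E + eps * np.eye(F.shape[1]),
                         c2 * E.T @ e_A)
    return z1, z2

def predict(X, z1, z2):
    """Assign each sample to the class whose plane is nearer."""
    Xe = np.hstack([X, np.ones((X.shape[0], 1))])
    d1 = np.abs(Xe @ z1) / np.linalg.norm(z1[:-1])
    d2 = np.abs(Xe @ z2) / np.linalg.norm(z2[:-1])
    return np.where(d1 <= d2, 1, -1)
```

In lpLSTSVM, a sequence of such systems would be solved as the lp penalty is iteratively reweighted.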

2.
Recently, joint feature selection and subspace learning, which performs feature selection and subspace learning simultaneously, has been proposed and has shown encouraging results in face recognition. A framework built on an L2,1-norm penalty term has also been presented in the literature, but it does not cover some important algorithms, such as Fisher Linear Discriminant Analysis (FLDA) and Sparse Discriminant Analysis (SDA). Therefore, in this paper, we add an L2,1-norm penalty term to FLDA and propose a feasible solution by transforming its nonlinear model into a linear regression form. In addition, we modify the optimization model of SDA by replacing the elastic net with an L2,1-norm penalty term and present the corresponding optimization method. Experiments on three standard face databases illustrate that FLDA and SDA with the L2,1-norm penalty term significantly improve recognition performance and obtain promising results at low computational cost even with low-dimensional features.
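The L2,1 norm referred to above is the sum of the l2 norms of a matrix's rows; penalizing it drives entire rows of the projection matrix to zero, which is why it performs feature selection. A minimal illustrative helper (not code from the paper):

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm of a matrix: the sum of the l2 norms of its rows.

    Row-wise sparsity induced by penalizing this term zeroes out whole
    features at once in joint feature-selection/subspace-learning models."""
    return np.sum(np.linalg.norm(W, axis=1))
```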

3.
An optimized Lp-norm compressed sensing image reconstruction algorithm
Objective: Reconstruction algorithms are one of the key technologies in compressed sensing theory and play a crucial role in research. Commonly used reconstruction algorithms include non-convex L0-norm optimization and convex L1-norm optimization, but both suffer from limited reconstruction accuracy and long running times. To overcome these drawbacks and improve the accuracy and efficiency of existing Lp-norm compressed sensing image reconstruction algorithms, this paper proposes an improved algorithm. Method: To address the heavy computational cost caused by an indefinite Hessian matrix in the sequential quadratic programming (SQP) method for the Lagrangian, a merit function is introduced to correct the Hessian; combined with block compressed sensing, this yields an Lp-norm compressed sensing image reconstruction algorithm. Results: At a 40% sampling rate, the proposed algorithm achieves an SNR of 34.28 dB, 2% higher than the block orthogonal matching pursuit (BOMP) algorithm and 13.2% higher than the penalty-function correction; its running time is 190.55 s, 13.4% faster than BOMP and 67.5% faster than the penalty-function correction. At 50%, the SNR is 35.42 dB (2.4% higher than BOMP and 12.8% higher than the penalty-function correction), with a running time of 196.67 s (68.2% faster than BOMP and 81.7% faster than the penalty-function correction). At 60%, the SNR is 36.33 dB (3.2% and 8.2% higher, respectively), with a running time of 201.72 s (82.3% and 86.6% faster). At 70%, the SNR is 38.62 dB (2.5% and 9.8% higher), with a running time of 214.68 s (88.12% and 91.1% faster). The experimental results show that, at equal sampling rates, the improved algorithm outperforms BOMP and the other algorithms in both reconstruction accuracy and running time, and that higher sampling rates yield higher reconstruction accuracy and shorter reconstruction times. Conclusion: Comparisons of SNR and running time show that the proposed algorithm clearly outperforms the other two algorithms at all sampling rates; even at a sampling rate of only 20.5%, the SNR reaches 85.1543 dB and the reconstructed image is fairly clear. The main advantage of the algorithm is its use of block compressed sensing, which improves reconstruction efficiency and reduces reconstruction time; its drawback is blocking artifacts at low sampling rates. Future work will aim at high-accuracy reconstruction at low sampling rates while eliminating these blocking artifacts.

4.
This paper presents a new version of the support vector machine (SVM), named the l2-lp SVM (0 < p < 1), which introduces the lp-norm (0 < p < 1) of the normal vector of the decision plane into the standard linear SVM. To solve the nonconvex optimization problem in our model, an efficient algorithm is proposed using the constrained concave-convex procedure. Experiments with artificial and real data demonstrate that our method is more effective than some popular methods in selecting relevant features and improving classification accuracy.

5.
We study strategies for feature selection with the sparse support vector machine (SVM). Recently, the so-called Lp-SVM (0 < p < 1) has attracted much attention because it encourages better sparsity than the widely used L1-SVM. However, Lp-SVM is a non-convex and non-Lipschitz optimization problem, and solving it numerically is challenging. In this paper, we reformulate the Lp-SVM as an optimization model with a linear objective function and smooth constraints (LOSC-SVM), so that it can be solved by numerical methods for smooth constrained optimization. Our numerical experiments on artificial datasets show that LOSC-SVM (0 < p < 1) can improve both feature selection and classification performance by choosing a suitable parameter p. We also apply it to some real-life datasets, and the experimental results show that it is superior to L1-SVM.

6.
This paper presents a design approach for nonlinear feedback excitation control of power systems with unknown disturbances and unknown parameters. It is shown that a stabilizing control law with a desired L2 gain from the disturbance to a penalty signal can be designed recursively, without linearization. A state feedback law is presented for the case of known parameters, and the control law is then extended to an adaptive controller for the case where the parameters of the electrical dynamics of the power system are unknown. Simulation results demonstrate that the proposed controllers guarantee transient stability of the system regardless of system parameters and faults.

7.
TROP-ELM: A double-regularized ELM using LARS and Tikhonov regularization
In this paper, an improvement of the optimally pruned extreme learning machine (OP-ELM), in the form of an L2 regularization penalty applied within the OP-ELM, is proposed. The original OP-ELM is a wrapper methodology around the extreme learning machine (ELM), meant to reduce the sensitivity of the ELM to irrelevant variables and to obtain more parsimonious models through neuron pruning. The proposed modification of the OP-ELM uses a cascade of two regularization penalties: first an L1 penalty to rank the neurons of the hidden layer, followed by an L2 penalty on the regression weights (the regression between the hidden layer and the output layer) for numerical stability and efficient pruning of the neurons. The new methodology is tested against state-of-the-art methods, such as support vector machines and Gaussian processes, as well as the original ELM and OP-ELM, on 11 different data sets; it systematically outperforms the OP-ELM (on average, 27% better mean square error) and provides more reliable results in terms of the standard deviation of the results, while always remaining less than one order of magnitude slower than the OP-ELM.
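The second stage of the cascade, the L2 (Tikhonov) penalty on the output weights, amounts to ridge regression on the hidden-layer outputs. A minimal sketch of that stage only, with hypothetical names (the L1/LARS neuron-ranking stage is omitted):

```python
import numpy as np

def ridge_output_weights(H, y, lam=1e-2):
    """Tikhonov (L2) regularized regression of targets y on hidden-layer
    outputs H: w = (H'H + lam*I)^{-1} H'y.

    Illustrative sketch of the second penalty in the TROP-ELM cascade;
    in the full method H would contain only the LARS-ranked neurons."""
    n_neurons = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n_neurons), H.T @ y)
```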

8.
There is often underlying cross relatedness among multiple tasks, which traditional single-task learning methods simply discard. Since multi-task learning can exploit this relatedness to further improve performance, it has attracted extensive attention in many domains, including multimedia. A meticulous empirical study has shown that the generalization performance of the Least-Squares Support Vector Machine (LS-SVM) is comparable to that of the SVM. In order to generalize LS-SVM from single-task to multi-task learning, and inspired by regularized multi-task learning (RMTL), this study proposes a novel multi-task learning approach, the multi-task LS-SVM (MTLS-SVM). As with LS-SVM, one only needs to solve a convex linear system in the training phase. Moreover, we unify classification and regression problems in an efficient training algorithm that effectively employs Krylov methods. Finally, experimental results on the school and dermatology datasets validate the effectiveness of the proposed approach.
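Krylov methods such as conjugate gradient solve the kind of symmetric positive-definite linear system that arises in LS-SVM training without factorizing the matrix. A textbook sketch of plain conjugate gradient (not the paper's specific training algorithm):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for a symmetric positive-definite system Ax = b,
    the simplest member of the Krylov family the abstract refers to."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # converged
            break
        p = r + (rs_new / rs) * p  # conjugate update of the direction
        rs = rs_new
    return x
```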

9.
A number of methods based on an L1 penalty have been proposed to solve the variable selection problem for Cox's proportional hazards model. In 2010, Xu et al. proposed L1/2 regularization and proved that the L1/2 penalty is sparser than the L1 penalty in linear regression models. In this paper, we propose a novel shooting method for L1/2 regularization and apply it to the Cox model for variable selection. Experimental results based on comprehensive simulation studies and on the real Primary Biliary Cirrhosis and diffuse large B-cell lymphoma datasets show that the L1/2 regularization shooting method performs competitively.

10.
In this paper we consider a class of stochastic nonlinear Volterra integral equations. The problem of Lp(R0) (p ≥ 1) stability in the mean m (m ≥ 1) is examined. In Section 2, the random Banach fixed-point theorem is used to establish the existence and uniqueness of solutions of the system in some general Banach spaces. These results are then used to study the Lp(R0) (p ≥ 1) stability in the mean m (m ≥ 1) of the system. For illustration, an example of the visually induced height orientation of the fly (Musca domestica) is considered.

11.
A p-norm regularized support vector machine classification algorithm
The L2-norm penalized support vector machine (SVM) is one of the most widely used classifier algorithms, and L1-norm and L0-norm penalized SVM algorithms, which perform feature selection and classifier construction simultaneously, have also been proposed. In all of these methods, however, the regularization order is fixed in advance, preset to p = 2 or p = 1. Our experimental studies show that, for different data, using different regularization orders can improve the prediction accuracy of the classification algorithm. This paper proposes a new design paradigm for p-norm regularized SVM classifiers in which the regularization order p is an adjustable parameter over the range 0 ≤ p ≤ 2, covering the L2-norm, L1-norm, and L0-norm penalized SVMs as special cases.

12.
To address the attitude control problem of a quadrotor UAV, an L1 adaptive block backstepping control method is proposed. The quadrotor attitude dynamics are transformed into a class of multi-input multi-output uncertain nonlinear systems; exploiting the strict-feedback structure of this system, a block backstepping controller is designed for the outer loop; for the uncertainties of the inner loop, such as external disturbances and internal parameter perturbations, L1 adaptive control is introduced to compensate for their effects. Stability analysis shows that all signals in the closed-loop system are uniformly bounded. Simulations and attitude stabilization experiments verify the effectiveness and robustness of the proposed control strategy.

13.
In this work, a notion of generalized L2-gain for nonlinear systems, where the gain is considered as a function of the state instead of a (global) constant, is presented. This new notion seems adequate to characterize the gain properties of several nonlinear systems which do not possess a uniform L2-gain property (i.e., the L2-gain depends on the operating point). Moreover, a notion of practical L2-gain attenuation, which extends the standard definition and parallels (mutatis mutandis) the concepts of practical stability, is also proposed.

14.
Representation and embedding are usually the two necessary phases in designing a classifier. Fisher discriminant analysis (FDA) is regarded as seeking a direction along which the projected samples are well separated. In this paper, we analyze FDA in terms of representation and embedding. The main contribution is a proof that the general framework of FDA is based on the simplest and most intuitive FDA with zero within-class variance, which clearly illustrates the mechanism of FDA. Based on our analysis, ε-insensitive SVM regression can be viewed as a soft FDA with ε-insensitive within-class variance and an L1-norm penalty. To verify this viewpoint, several real classification experiments are conducted, demonstrating that the performance of the regression-based classification technique is comparable to that of regular FDA and SVM.

15.
Sparsity-driven classification technologies have attracted much attention in recent years, due to their capability of providing more compact representations and clearer interpretation. The two most popular classification approaches are support vector machines (SVMs) and kernel logistic regression (KLR), each with its own advantages. The sparsification of SVM has been well studied, and many sparse versions of the 2-norm SVM, such as the 1-norm SVM (1-SVM), have been developed, but the sparsification of KLR has received less study. Existing sparsifications of KLR are mainly based on L1-norm and L2-norm penalties, which yield solutions not as sparse as they should be. A very recent study of L1/2 regularization theory in compressive sensing shows that L1/2 sparse modeling can yield solutions sparser than those of the 1-norm and 2-norm penalties, and, furthermore, that the model can be efficiently solved by a simple iterative thresholding procedure. The objective function dealt with in L1/2 regularization theory is, however, of quadratic form, whose gradient is linear in its variables (the so-called linear gradient function). In this paper, by extending the linear gradient function of the L1/2 regularization framework to the logistic function, we propose a novel sparse version of KLR, the 1/2 quasi-norm kernel logistic regression (1/2-KLR). This version integrates the advantages of KLR and L1/2 regularization and defines an efficient implementation scheme for sparse KLR. We suggest a fast iterative thresholding algorithm for 1/2-KLR and prove its convergence. A series of simulations demonstrates that 1/2-KLR can often obtain sparser solutions than existing sparsity-driven versions of KLR, at the same or better accuracy level. This conclusion also holds in comparison with sparse SVMs (1-SVM and 2-SVM).
We show an exclusive advantage of 1/2-KLR: the regularization parameter in the algorithm can be set adaptively whenever the sparsity (correspondingly, the number of support vectors) is given, which suggests a methodology for comparing the sparsity-promotion capability of different sparsity-driven classifiers. As an illustration of the benefits of 1/2-KLR, we give two applications in semi-supervised learning, showing that 1/2-KLR can be successfully applied to classification tasks in which only a few data are labeled.
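The iterative thresholding procedure mentioned above applies a half thresholding operator coordinate-wise. The sketch below follows the closed form given in Xu et al.'s L1/2 thresholding theory; the constants are taken from that literature as an assumption, not from this paper's code:

```python
import numpy as np

def half_threshold(x, lam):
    """Half thresholding operator for L1/2 regularization (sketch).

    Coordinates with |x| below roughly 0.9449 * lam^(2/3) are set to zero;
    larger ones are shrunk via the trigonometric closed form. Constants
    follow the L1/2 regularization literature (assumed, illustrative)."""
    x = np.asarray(x, dtype=float)
    t = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)  # threshold value
    out = np.zeros_like(x)
    big = np.abs(x) > t
    phi = np.arccos((lam / 8.0) * (np.abs(x[big]) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * x[big] * (
        1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out
```

Iterating this operator on gradient-descent updates of the data-fit term gives the fast iterative thresholding scheme the abstract describes.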

16.
Kernelization of regularized FDA and a comparative study with SVM
Both Fisher discriminant analysis (FDA) and kernel-based FDA (KFDA) face ill-conditioned matrices in small-sample settings, and regularization is an effective remedy. To facilitate studying the relationship between regularized FDA and the support vector machine (SVM), a kernelized algorithm for regularized FDA is derived. Converting the constrained optimization problem into its dual yields a form similar to that of the SVM, and the connection between this kernelized algorithm and the SVM is analyzed. Fault diagnosis results on the Tennessee-Eastman (TE) process show that regularized KFDA clearly outperforms LS-SVM.
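For the two-class linear case, the regularization the abstract mentions simply adds a ridge term to the within-class scatter matrix before inverting it, which cures the small-sample ill-conditioning. A minimal sketch with hypothetical names (the kernelized dual derived in the paper is not shown):

```python
import numpy as np

def regularized_fda_direction(X1, X2, reg=1e-3):
    """Two-class regularized Fisher discriminant direction.

    Adds reg * I to the within-class scatter Sw so the solve stays
    well-posed even when Sw is singular (small-sample case).
    Illustrative sketch only."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    w = np.linalg.solve(Sw + reg * np.eye(Sw.shape[0]), m1 - m2)
    return w / np.linalg.norm(w)
```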

17.
The support vector machine (SVM), an effective method for classification problems, seeks the optimal hyperplane that maximizes the margin between two classes, obtained by solving a constrained optimization criterion via quadratic programming (QP). This QP leads to high computational cost. The least squares support vector machine (LS-SVM), a variant of the SVM, avoids this shortcoming by obtaining an analytical solution directly from a set of linear equations instead of a QP. Both SVM and LS-SVM operate directly on patterns represented as vectors; that is, before applying SVM or LS-SVM, any non-vector pattern, such as an image, must first be vectorized by some technique such as concatenation. However, some implicit structural or local contextual information may be lost in this transformation. Moreover, since the dimension d of the weight vector in SVM or LS-SVM with a linear kernel equals the dimension d1 × d2 of the original input pattern, the higher the dimension of a vector pattern, the more space is needed to store it. In this paper, inspired by feature extraction methods that operate directly on matrix patterns and by the advantages of LS-SVM, we propose a new classifier design method based on matrix patterns, called MatLSSVM, which can not only operate directly on original matrix patterns but also efficiently reduce the memory for the weight vector from d1 × d2 to d1 + d2. Like LS-SVM, however, MatLSSVM inherits LS-SVM's unclassifiable regions when extended to multi-class problems. Thus, following the fuzzy version of LS-SVM, a corresponding fuzzy version of MatLSSVM (MatFLSSVM) is further proposed to effectively remove unclassifiable regions in multi-class problems.
Experimental results on several benchmark datasets show that the proposed method is competitive in classification performance with LS-SVM, fuzzy LS-SVM (FLS-SVM), and the more recent MatPCA and MatFLDA. More importantly, the idea used here may provide a novel way of constructing learning models.

18.
The backpropagation (BP) algorithm is the typical strategy for training feedforward neural networks (FNNs), and gradient descent is the popular numerical optimization method employed to implement it. However, this technique frequently leads to poor generalization and slow convergence. Inspired by the sparse response character of the human neural system, several sparse-response BP algorithms have been developed that effectively improve generalization performance. The essential idea is to impose the responses of the hidden layer as a specific L1 penalty term on the standard error function of FNNs. In this paper, we focus on the two remaining challenges: one is to handle the non-differentiability of the L1 penalty term by introducing smooth approximation functions; the other is to provide a rigorous convergence analysis for this novel sparse-response BP algorithm. In addition, an illustrative numerical simulation is given to support the theoretical statements.
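A common smooth approximation of |x|, which makes an L1 penalty differentiable, is sqrt(x^2 + eps^2). The paper's specific approximation functions may differ, so the helpers below are illustrative assumptions only:

```python
import numpy as np

def smooth_abs(x, eps=1e-3):
    """Smooth approximation of |x|: sqrt(x^2 + eps^2).

    Differentiable everywhere, and close to |x| once |x| >> eps;
    one common way to smooth an L1 penalty (assumed, illustrative)."""
    return np.sqrt(x * x + eps * eps)

def smooth_abs_grad(x, eps=1e-3):
    """Derivative of the approximation; tends to sign(x) away from zero,
    so gradient-descent BP can use it in place of the subgradient of |x|."""
    return x / np.sqrt(x * x + eps * eps)
```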

19.
The computational approximation of exact boundary controllability problems for the wave equation in two dimensions is studied. A numerical method is defined that is based on the direct solution of optimization problems that are introduced in order to determine unique solutions of the controllability problem. The uniqueness of the discrete finite-difference solutions obtained in this manner is demonstrated. The convergence properties of the method are illustrated through computational experiments. Efficient implementation strategies for the method are also discussed. It is shown that for smooth, minimum L2-norm Dirichlet controls, the method results in convergent approximations without the need to introduce regularization. Furthermore, for the generic case of nonsmooth Dirichlet controls, convergence with respect to L2 norms is also numerically demonstrated. One of the strengths of the method is the flexibility it allows for treating other controls and other minimization criteria; such generalizations are discussed. In particular, the minimum H1-norm Dirichlet controllability problem is approximated and solved, as are minimum regularized L2-norm Dirichlet controllability problems with small penalty constants. Finally, a discussion is provided about the differences between our method and existing methods; these differences may explain why our methods provide convergent approximations for problems for which existing methods produce divergent approximations unless they are regularized in some manner.

20.
In this paper, an L1 adaptive output-feedback controller is developed for multivariable nonlinear systems subject to constraints, using online optimization. In the L1 adaptive architecture, an adaptive law updates the adaptive parameters that represent the nonlinear uncertainties, so that the estimation error between the predicted state and the real state is driven to zero at every integration time step. Of course, neglecting the unknowns when solving the error dynamic equations introduces an estimation error in the adaptive parameters; the magnitude of this error can be lessened by choosing a proper sampling time step. A control law is designed to compensate for the nonlinear uncertainties and deliver good tracking performance with guaranteed robustness. Model predictive control is introduced to solve a receding-horizon optimization problem while maintaining various constraints. Numerical examples are given to illustrate the design procedures, and the simulation results demonstrate the effectiveness and feasibility of the developed framework.
