Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.
We study strategies for feature selection with the sparse support vector machine (SVM). Recently, the so-called Lp-SVM (0 < p < 1) has attracted much attention because it encourages better sparsity than the widely used L1-SVM. However, Lp-SVM is a non-convex and non-Lipschitz optimization problem, and solving it numerically is challenging. In this paper, we reformulate Lp-SVM as an optimization model with a linear objective function and smooth constraints (LOSC-SVM) so that it can be solved by numerical methods for smooth constrained optimization. Our experiments on artificial datasets show that LOSC-SVM (0 < p < 1) can improve both feature selection and classification performance by choosing a suitable parameter p. We also apply it to several real-life datasets, where the results show that it is superior to L1-SVM.
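The paper's LOSC-SVM solver is not reproduced here; as a point of reference, the L1-SVM baseline that the abstract compares against can be sketched with scikit-learn (dataset and parameters below are illustrative, not from the paper):

```python
# Hypothetical baseline: L1-penalized linear SVM for joint feature
# selection and classification (the L1-SVM the abstract compares against).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=5, random_state=0)
clf = LinearSVC(penalty="l1", dual=False, C=1.0, max_iter=5000)
clf.fit(X, y)
selected = np.flatnonzero(clf.coef_)   # indices of non-zero weights
print(f"{selected.size} of {X.shape[1]} features kept:", selected)
```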

2.
This paper presents a novel noise-robust graph-based semi-supervised learning algorithm for the challenging problem of semi-supervised learning with noisy initial labels. Inspired by the successful use of sparse coding for noise reduction, we propose a new L1-norm formulation of Laplacian regularization for graph-based semi-supervised learning. Since our L1-norm Laplacian regularization is explicitly defined over the eigenvectors of the normalized Laplacian matrix, we formulate graph-based semi-supervised learning as an L1-norm linear reconstruction problem that can be solved efficiently by sparse coding. Furthermore, by working with only a small subset of eigenvectors, we develop a fast sparse coding algorithm for our L1-norm semi-supervised learning. Finally, we evaluate the proposed algorithm on noise-robust image classification. Experimental results on several benchmark datasets demonstrate the promising performance of the proposed algorithm.
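A minimal sketch of the core idea, under the assumption that the noisy labels are reconstructed as an L1-sparse combination of the smoothest Laplacian eigenvectors (the function name and hyperparameters are illustrative):

```python
# Sketch (not the authors' code): graph-based SSL as L1-regularized
# reconstruction over the smallest eigenvectors of the normalized Laplacian.
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def l1_ssl(W, y_init, k=20, alpha=1e-3):
    """W: symmetric affinity matrix; y_init: noisy labels in {-1, 0, +1}
    (0 = unlabeled). Returns smoothed label scores."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt  # normalized Laplacian
    _, U = eigh(L, subset_by_index=[0, k - 1])        # k smoothest eigenvectors
    coder = Lasso(alpha=alpha, fit_intercept=False).fit(U, y_init)
    return U @ coder.coef_                            # denoised label scores
```

Working with only k eigenvectors is what keeps the sparse coding step fast, as the abstract notes.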

3.
To address the difficulty that traditional image denoising methods have in preserving sharp image edges, this paper proposes a new method that uses ε-SVR to build image denoising filters. By introducing the ε-insensitive loss function, ε-support vector regression achieves highly robust regression whose estimates are sparse, while retaining all the advantages of SVM. We analyze the theory of ε-support vector regression and its application to image denoising, filter images with ε-SVR, and compare it against common filters such as the minimum filter, mean filter, and Wiener filter; we also compare the denoising performance of various SVM kernel functions on different kinds of noise and analyze the effect of Multinomial kernels of different orders. Experimental results show that ε-support vector regression removes noise effectively, yielding both a high signal-to-noise ratio and clear images, while maintaining good sparsity.
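An illustrative sketch of ε-insensitive SVR used as a denoiser, here on a synthetic 1-D signal rather than image pixels (signal, kernel, and parameters are assumptions for the demo):

```python
# Illustrative sketch: epsilon-insensitive SVR as a denoising filter on a
# 1-D signal (the abstract applies the same idea pixel-wise to images).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)[:, None]
clean = np.sin(2 * np.pi * x).ravel()
noisy = clean + rng.normal(scale=0.2, size=clean.shape)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.1)    # epsilon-insensitive loss
denoised = svr.fit(x, noisy).predict(x)
print("support vectors used:", len(svr.support_))  # sparsity of the fit
```

The support-vector count illustrates the sparsity property the abstract emphasizes: only points outside the ε-tube carry weight.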

4.
5.
Qiao, Chen; Yang, Lan; Shi, Yan; Fang, Hanfeng; Kang, Yanmei. Applied Intelligence, 2022, 52(1): 237-253

Sparsity is crucial for deep neural networks: it improves their learning ability, especially on high-dimensional data with small sample sizes. Commonly used regularization terms for keeping deep neural networks sparse are based on the L1-norm or L2-norm; however, these are not the most faithful substitutes for the L0-norm. In this paper, based on the fact that minimizing a log-sum function is an effective approximation to minimizing the L0-norm, we introduce a sparse penalty term on the connection weights using the log-sum function. By embedding the corresponding iterative re-weighted L1 minimization algorithm within k-step contrastive divergence, the connections of deep belief networks can be updated in a sparse, self-adaptive way. Experiments on two kinds of biomedical datasets that are typical small-sample-size datasets with large numbers of variables, namely brain functional magnetic resonance imaging data and single nucleotide polymorphism data, show that the proposed deep belief networks with self-adaptive sparsity learn layer-wise sparse features effectively. The results demonstrate better performance, in both identification accuracy and sparsity capability, than several typical learning machines.
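A sketch of the iterative re-weighted L1 step for the log-sum penalty sum_i log(|w_i| + eps), shown on a plain sparse regression surrogate rather than inside DBN training (the function and its parameters are assumptions for illustration):

```python
# Sketch of iterative re-weighted L1 minimization for the log-sum penalty.
import numpy as np
from sklearn.linear_model import Lasso

def logsum_irl1(X, y, lam=0.1, eps=1e-2, n_iter=10):
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        weights = 1.0 / (np.abs(w) + eps)   # gradient of the log-sum penalty
        Xs = X / weights                    # fold weights into a standard lasso
        w_s = Lasso(alpha=lam, fit_intercept=False,
                    max_iter=5000).fit(Xs, y).coef_
        w = w_s / weights                   # map back to the original scale
    return w
```

Small weights get large penalties on the next pass, so the re-weighting drives them to exactly zero, mimicking the L0 behaviour the abstract targets.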


6.
A p-Norm Regularized Support Vector Machine Classification Algorithm. Total citations: 6 (3 self-citations, 3 by others)
The L2-norm penalized support vector machine (SVM) is one of the most widely used classifiers, and L1-norm and L0-norm penalized SVM algorithms that perform feature selection and classifier construction simultaneously have also been proposed. In both of these approaches, however, the regularization order is fixed in advance, with p = 2 or p = 1 preset. Our experimental studies show that, for different data, different regularization orders can improve the predictive accuracy of the classifier. This paper proposes a new design paradigm for p-norm regularized SVM classifiers in which the order p of the regularization norm can be chosen in the range 0 < p ≤ 2, subsuming the L2-norm, L1-norm, and L0-norm penalized SVMs as special cases.
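A rough sketch of a linear SVM with an adjustable Lp penalty; the assumption here is plain subgradient descent rather than the authors' solver, and for p < 1 the objective is non-convex, so this is only a heuristic local search:

```python
# Assumed subgradient-descent sketch of an Lp-penalized linear SVM.
import numpy as np

def lp_svm_sgd(X, y, p=1.0, lam=0.01, lr=0.01, epochs=200):
    """y in {-1, +1}; p in (0, 2]. Heuristic only when p < 1."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margin = y * (X @ w)
        grad_hinge = -(y[:, None] * X)[margin < 1].sum(axis=0)
        # subgradient of lam * sum |w_i|^p, smoothed near zero
        grad_pen = lam * p * np.sign(w) * (np.abs(w) + 1e-8) ** (p - 1)
        w -= lr * (grad_hinge / len(y) + grad_pen)
    return w
```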

7.

The classical support vector machine (SVM) and its twin variant, the twin support vector machine (TWSVM), use the hinge loss, which grows linearly, whereas the least squares SVM (LSSVM) and the least squares twin SVM (LSTSVM) use the L2-norm of the error, which grows quadratically. The robust Huber loss function can be seen as a generalization of the hinge loss and the L2-norm loss: it behaves like the quadratic L2-norm loss for errors within a specified distance and like the linear hinge loss beyond it. This paper proposes three functional iterative approaches based on a generalized Huber loss function for support vector classification: one based on SVM, namely the generalized Huber support vector machine, and two in the spirit of TWSVM, namely the generalized Huber twin support vector machine and its regularized variant. The proposed approaches find solutions iteratively and eliminate the need to solve any quadratic programming problem (QPP), as required by SVM and TWSVM. Their main advantages are, first, that the robust Huber loss yields better generalization and less sensitivity to noise and outliers than the quadratic loss, and second, that the functional iterative scheme avoids QPPs and makes the proposed approaches faster. The efficacy of the approach is established by numerical experiments on several real-world datasets and comparisons with related methods, namely SVM, TWSVM, LSSVM, and LSTSVM. The classification results are convincing.
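The piecewise behaviour the abstract describes is the standard Huber loss; a minimal numpy sketch (the paper's generalized form applied to classification margins may differ in detail):

```python
# Standard Huber loss: quadratic within a radius delta, linear beyond it.
import numpy as np

def huber(r, delta=1.0):
    r = np.asarray(r, dtype=float)
    quad = 0.5 * r ** 2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return np.where(np.abs(r) <= delta, quad, lin)

print(huber([-3.0, -0.5, 0.0, 0.5, 3.0], delta=1.0))
```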


8.

Hyperspectral unmixing is essential for image analysis and quantitative applications. To further improve the accuracy of hyperspectral unmixing, we propose a novel linear hyperspectral unmixing method based on l1-l2 sparsity and total variation (TV) regularization. First, enhanced sparsity based on the l1-l2 norm is exploited to capture the intrinsically sparse fractional abundances in a sparse-regression unmixing model, because the l1-l2 norm promotes stronger sparsity than the l1 norm. Then, TV is minimized to enforce spatial smoothness by exploiting the spatial correlation between neighbouring pixels. Finally, the extended alternating direction method of multipliers (ADMM) is used to solve the proposed model. Experimental results on simulated and real hyperspectral datasets show that the proposed method outperforms several state-of-the-art unmixing methods.
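An objective-only sketch of the model's three terms, with pixels assumed ordered along a scan line so a 1-D TV stands in for the spatial term; the ADMM solver itself is not reproduced:

```python
# Sketch of the unmixing objective: data fit + (l1 - l2) sparsity + TV.
import numpy as np

def unmixing_objective(A, Y, X, lam_sparse=0.1, lam_tv=0.05):
    """A: endmember library (bands x atoms), Y: pixels (bands x n),
    X: abundances (atoms x n)."""
    fit = 0.5 * np.linalg.norm(A @ X - Y) ** 2
    # per-pixel enhanced sparsity: ||x||_1 - ||x||_2 over each column
    l1_l2 = np.abs(X).sum() - np.linalg.norm(X, axis=0).sum()
    tv = np.abs(np.diff(X, axis=1)).sum()   # smoothness between neighbours
    return fit + lam_sparse * l1_l2 + lam_tv * tv
```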

9.
Not only different databases, but also the two classes within a single database, can have different data structures. SVM and LS-SVM typically minimize the empirical risk; their regularized versions with a fixed penalty (L2 or L1) are non-adaptive because the penalty form is pre-determined, so they often perform well only in certain situations. For example, LS-SVM with an L2 penalty is not preferred when the underlying model is sparse. This paper proposes an adaptive penalty learning procedure, the evolution strategies based adaptive Lp least squares support vector machine (ES-based Lp LS-SVM), to address this issue. By introducing multiple kernels, an Lp-penalty-based nonlinear objective function is derived, which is solved with the iterative re-weighted minimal solver (IRMS) algorithm. Evolution strategies (ES) are then used to solve the multi-parameter optimization problem: the penalty parameter p and the kernel and regularization parameters are selected adaptively during training, which makes it easier to reach the optimal solution. Numerical experiments on two artificial and six real-world datasets show that the proposed procedure offers better generalization performance than the standard SVM, LS-SVM, and other improved algorithms.
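For context, the non-adaptive LS-SVM baseline (fixed L2 penalty) reduces to one linear system; a minimal numpy sketch of the standard formulation, with an RBF kernel assumed:

```python
# Standard LS-SVM training: solve [[0, 1^T], [1, K + I/C]] [b; a] = [0; y].
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def lssvm_fit(X, y, gamma=1.0, C=10.0):
    n = len(y)
    K = rbf_kernel(X, gamma)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / C
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]   # bias b, dual coefficients alpha
```

The proposed method replaces the fixed I/C term with an adaptive Lp penalty whose order is tuned by ES.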

10.
Classifying interfering signals that belong to different wireless standards is an important topic in wireless communications. In this paper, we propose a procedure for separating and classifying wireless signals belonging to the Bluetooth and IEEE 802.11b standards, which operate in the same frequency band and may interfere with each other. The procedure consists of several steps. First, the signal components are separated using an eigenvalue decomposition approach. The second stage is based on compressive sensing, used to reduce the number of transmitted samples: a suitable transform domain is chosen for each separated component using the ℓ1-norm as a measure of sparsity. Since Bluetooth signals are less sparse than IEEE 802.11b signals, additional sparsification needs to be performed after choosing the sparse domain to further enhance sparsity. In the last step, classification is performed by observing the time-frequency characteristics of the reconstructed separated components. The theory is supported by experimental results.
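A sketch of the domain-selection step: choose the transform in which a component's normalized ℓ1 norm is smallest, i.e. where it is sparsest. Fourier and DCT are assumed here as the candidate domains for illustration:

```python
# Pick the transform domain with the smallest l1/l2 ratio (sparsest).
import numpy as np
from scipy.fft import fft, dct

def sparsest_domain(x):
    candidates = {
        "fourier": np.abs(fft(x)),
        "dct": np.abs(dct(x, norm="ortho")),
    }
    # normalize energy so the l1 comparison is fair across transforms
    scores = {n: c.sum() / np.linalg.norm(c) for n, c in candidates.items()}
    return min(scores, key=scores.get)

rng = np.random.default_rng(1)
tone = np.cos(2 * np.pi * 50 * np.arange(256) / 256)  # sparse in Fourier
print(sparsest_domain(tone + 0.01 * rng.normal(size=256)))
```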

11.
Sparse representation methods based on l1 and/or l2 regularization have shown promising performance in various applications. Previous studies show that l1-regularized representations are sparser, while l2-regularized representations are much simpler and faster to compute. However, when dealing with noisy data, naive l1 and l2 regularization both suffer from unsatisfactory robustness. In this paper, we explore an antinoise sparse representation method for robust face recognition based on a joint version of l1 and l2 regularization. The contributions of this paper are as follows. First, a novel objective function combining l1 and l2 regularization is proposed to implement antinoise sparse representation: an iterative fitting operation via l1 regularization is integrated with l2-norm minimization to obtain antinoise classification. Second, we analyze why the proposed method produces promising discriminative and antinoise performance for face recognition: the l2 regularization enhances robustness and runs fast, while the l1 regularization helps cope with noisy data. Third, the classification robustness of the proposed method is demonstrated by extensive experiments on several benchmark facial datasets. The method can be considered an option for expert systems in biometrics and other recognition problems facing unstable and noisy data.
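A sketch in the spirit of joint l1 + l2 representation-based classification: code a probe as an elastic-net combination of training samples, then assign the class with the smallest reconstruction residual. This is the classic SRC-style decision rule, not the paper's exact objective:

```python
# Elastic-net (joint l1 + l2) sparse-representation classification sketch.
import numpy as np
from sklearn.linear_model import ElasticNet

def src_classify(X_train, y_train, x_probe, alpha=0.01, l1_ratio=0.5):
    coder = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                       fit_intercept=False, max_iter=5000)
    c = coder.fit(X_train.T, x_probe).coef_     # columns = training samples
    residuals = {}
    for cls in np.unique(y_train):
        mask = (y_train == cls)
        recon = X_train[mask].T @ c[mask]       # keep only this class's atoms
        residuals[cls] = np.linalg.norm(x_probe - recon)
    return min(residuals, key=residuals.get)
```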

12.
Lu, Haohan; Chen, Hongmei; Li, Tianrui; Chen, Hao; Luo, Chuan. Applied Intelligence, 2022, 52(10): 11652-11671

Data in multi-label learning are usually high-dimensional, which makes computation very costly. As an important dimensionality reduction technique, feature selection has attracted the attention of many researchers, and the imbalance of data labels is another factor that complicates multi-label learning. To tackle these problems, we propose a new multi-label feature selection algorithm, IMRFS, which combines manifold learning and label imbalance. First, to preserve the manifold structure between samples, a Laplacian graph is used to construct the manifold regularization. In addition, the local manifold structure of each label is considered to capture correlations between labels, and the imbalanced label distribution is embedded into the label manifold structure. Furthermore, to ensure the robustness and sparsity of IMRFS, the L2,1-norm is applied to both the loss function and the sparse regularization term. We then adopt an iterative strategy to optimize the objective function of IMRFS. Finally, comparative results on multiple datasets show the effectiveness of the IMRFS method.
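For reference, the L2,1-norm the abstract applies to both terms is the sum of row-wise l2 norms of a weight matrix, which drives whole feature rows to zero; a minimal sketch (toy matrix is illustrative):

```python
# L2,1 norm and the row-energy feature ranking it induces.
import numpy as np

def l21_norm(W):
    return np.linalg.norm(W, axis=1).sum()  # sum of l2 norms of the rows

def rank_features(W):
    """Rank features by row energy; near-zero rows are effectively dropped."""
    return np.argsort(-np.linalg.norm(W, axis=1))

W = np.array([[0.9, 0.1], [0.0, 0.0], [0.3, 0.4]])  # toy feature-label weights
print(l21_norm(W), rank_features(W))
```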


13.
How the weights of a combination forecasting model are determined is crucial to its accuracy. To investigate whether regularization and cross-validation can improve the predictive performance of combination forecasting models, we apply them to a least-squares-based combination forecasting model. Adding L1 and L2 norm regularization terms to the model's optimization problem and performing leave-one-out cross-validation on the datasets shows that both L1 and L2 regularization improve the prediction accuracy of the combination model, that L1 regularization improves it more than L2 regularization does, and that the more individual forecasting models participate in the combination, the greater the improvement from regularization; the improvement from cross-validation is positively correlated with the amount of experimental data available.
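A sketch of the study's setup with the L2 case: combination weights fitted by regularized least squares and scored by leave-one-out cross-validation. The data and parameters are made up for the demo; swapping Ridge for Lasso gives the L1 variant:

```python
# L2-regularized least-squares combination of 3 forecasts, scored by LOO CV.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
truth = rng.normal(size=30)
forecasts = np.column_stack([truth + rng.normal(scale=s, size=30)
                             for s in (0.2, 0.5, 1.0)])  # 3 individual models

combo = Ridge(alpha=1.0, fit_intercept=False)            # regularized weights
mse = -cross_val_score(combo, forecasts, truth,
                       cv=LeaveOneOut(),
                       scoring="neg_mean_squared_error").mean()
print("LOO MSE of the regularized combination:", round(mse, 4))
```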

14.
We propose a new approach to robust estimation for a class of inverse problems arising in multiview geometry. Inspired by recent advances in the statistical theory of recovering sparse vectors, we define our estimator as a Bayesian maximum a posteriori with a multivariate Laplace prior on the vector describing the outliers. This leads to an estimator in which fidelity to the data is measured by the L∞-norm while regularization is done by the L1-norm. The proposed procedure is fairly fast, since outlier removal is done by solving one linear program (LP). An important difference from existing algorithms is that our estimator requires neither the number nor the proportion of outliers to be specified; only an upper bound on the maximal measurement error for the inliers is needed. We present theoretical results assessing the accuracy of our procedure, as well as numerical examples illustrating its efficiency on synthetic and real data.
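A sketch of the one-LP structure, with a generic linear model standing in for the multiview-geometry residuals: minimize the l1 norm of per-measurement outlier slacks s subject to the L∞ bound |a_i·x - b_i - s_i| ≤ eps on inlier residuals:

```python
# L1-regularized, L-infinity-constrained outlier estimation as one LP.
import numpy as np
from scipy.optimize import linprog

def l1_outlier_lp(A, b, eps):
    n, d = A.shape
    # variables z = [x (d), s (n), t (n)]; minimize sum(t) with t >= |s|
    c = np.concatenate([np.zeros(d + n), np.ones(n)])
    I = np.eye(n)
    A_ub = np.block([
        [ A, -I, np.zeros((n, n))],   #  Ax - s <= b + eps
        [-A,  I, np.zeros((n, n))],   # -Ax + s <= -b + eps
        [np.zeros((n, d)),  I, -I],   #  s - t <= 0
        [np.zeros((n, d)), -I, -I],   # -s - t <= 0
    ])
    b_ub = np.concatenate([b + eps, -b + eps, np.zeros(2 * n)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 2 * n))
    x, s = res.x[:d], res.x[d:d + n]
    return x, np.flatnonzero(np.abs(s) > 1e-6)  # estimate + detected outliers
```

Measurements whose slack s_i is non-zero are the ones the L1 prior flags as outliers; no outlier count is specified anywhere, matching the abstract.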

15.
Regularized classifiers are a kind of kernel-based classification method generated from Tikhonov regularization schemes, and trigonometric polynomial kernels are among the most important kernels, playing key roles in signal processing. The main goal of this paper is to provide convergence rates for classification algorithms generated by regularization schemes with trigonometric polynomial kernels. As a special case, an error analysis for the support vector machine (SVM) soft margin classifier is presented. The norms of the Fejér operator in the reproducing kernel Hilbert space, and its approximation properties in the L1 space of periodic functions, play key roles in the analysis of the regularization error. Some new bounds on the learning rate of regularization algorithms, based on covering-number estimates for normalized loss functions, are established. Together with the analysis of the sample error, explicit learning rates for SVM are also derived.
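For reference, the Fejér kernel central to this analysis is the Cesàro mean of the Dirichlet kernels:

```latex
F_n(t) \;=\; \sum_{k=-n}^{n}\Bigl(1-\frac{|k|}{n+1}\Bigr)e^{ikt}
       \;=\; \frac{1}{n+1}\left(\frac{\sin\bigl((n+1)t/2\bigr)}{\sin(t/2)}\right)^{2}.
```

Its non-negativity and unit L1 norm over a period are what make the approximation estimates in L1 tractable.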

16.
The ε-Support Vector Regression Algorithm and Its Applications. Total citations: 2 (0 self-citations, 2 by others)
To address the difficulty that traditional image denoising methods have in preserving sharp image edges, this paper proposes a new method that uses ε-SVR to build image denoising filters. By introducing the ε-insensitive loss function, ε-support vector regression achieves highly robust regression whose estimates are sparse, while retaining all the advantages of SVM. We analyze the theory of ε-support vector regression and its application to image denoising, filter images with ε-SVR, and compare it against common filters such as the minimum filter, mean filter, and Wiener filter; we also compare the denoising performance of various SVM kernel functions on different kinds of noise and analyze the effect of Multinomial kernels of different orders. Experimental results show that ε-support vector regression removes noise effectively, yielding both a high signal-to-noise ratio and clear images, while maintaining good sparsity.

17.
Various sparse principal component analysis (PCA) methods have recently been proposed to enhance the interpretability of classical PCA by extracting principal components (PCs) with sparse non-zero loadings. However, the performance of these methods is prone to be adversely affected by outliers and noise. To alleviate this problem, a new sparse PCA method is proposed in this paper. Instead of maximizing the L2-norm variance of the input data, as conventional sparse PCA methods do, the new method captures the maximal L1-norm variance of the data, which is intrinsically less sensitive to noise and outliers. A simple algorithm for the method is designed that is easy to implement and converges to a local optimum of the problem. The efficiency and robustness of the proposed method are analyzed theoretically and verified empirically through a series of experiments on multiple synthetic and face reconstruction problems, in comparison with classical PCA and other typical sparse PCA methods.
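A sketch of L1-norm variance maximization for one direction, using the simple fixed-point iteration known for this family of methods (maximize ||Xw||_1 over unit vectors w); the paper's algorithm, which adds sparse loadings, may differ in detail:

```python
# Fixed-point iteration for one L1-variance principal direction.
import numpy as np

def l1_pca_direction(X, n_iter=100, seed=0):
    """X: centred data, rows are samples. Returns a unit loading vector."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        signs = np.sign(X @ w)
        signs[signs == 0] = 1.0
        w_new = X.T @ signs              # ascent step on sum_i |x_i . w|
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w
```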

18.
Learning a compact predictive model in an online setting has recently gained a great deal of attention. The combination of online learning with sparsity-inducing regularization enables faster learning with a smaller memory footprint than previous learning frameworks. Many optimization methods and learning algorithms have been developed on the basis of online learning with L1-regularization. L1-regularization tends to truncate some types of parameters, such as those that rarely occur or have a small range of values, unless they are emphasized in advance; however, including a pre-processing step would make it very difficult to preserve the advantages of online learning. We propose a new regularization framework for sparse online learning. We focus on the regularization term and enhance the state-of-the-art regularization approach by integrating information on all previous subgradients of the loss function into the regularization term. The resulting algorithms enable online learning to adjust the intensity of each feature's truncation without pre-processing, eventually eliminating the bias of L1-regularization. We show theoretical properties of our framework, including its computational complexity and an upper bound on the regret. Experiments demonstrate that our algorithms outperform previous methods in many classification tasks.
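A baseline sketch of sparse online learning with plain (non-adaptive) L1 truncation, i.e. FOBOS-style soft thresholding after each gradient step; the paper's contribution is to make the threshold feature-adaptive using past subgradients, which is not reproduced here:

```python
# Online logistic regression with uniform L1 soft thresholding per step.
import numpy as np

def online_l1_logistic(stream, dim, eta=0.1, lam=0.01):
    w = np.zeros(dim)
    for x, y in stream:   # y in {-1, +1}
        g = -y * x / (1.0 + np.exp(y * np.dot(w, x)))  # logistic gradient
        w -= eta * g
        w = np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0.0)  # truncate
    return w
```

The uniform threshold eta * lam is exactly the bias the abstract criticizes: rare features are truncated as aggressively as frequent ones.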

19.
This paper presents the kernel regularization information criterion (KRIC), a new criterion for tuning regularization parameters in kernel logistic regression (KLR) and support vector machines (SVMs). The main idea of the KRIC is based on the regularization information criterion (RIC). We derive an eigenvalue equation for computing the KRIC and solve the resulting problem. The computational cost of parameter tuning with the KRIC is reduced drastically by using the Nyström approximation. The test error rate of SVMs or KLR with the regularization parameter tuned by the KRIC is comparable with that obtained by cross-validation or by evaluating the evidence, while the computational cost of the KRIC is significantly lower than that of the other criteria.
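A minimal sketch of the Nyström approximation the abstract relies on: the full kernel matrix K is approximated from m << n landmark columns as K ≈ C W⁺ Cᵀ (the landmark count and sampling scheme here are illustrative):

```python
# Rank-m Nystrom approximation of an n x n kernel matrix.
import numpy as np

def nystrom(K_fn, X, m, seed=0):
    """K_fn(A, B) returns the kernel matrix between rows of A and B."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = K_fn(X, X[idx])                 # n x m block of the full matrix
    W = K_fn(X[idx], X[idx])            # m x m block on the landmarks
    return C @ np.linalg.pinv(W) @ C.T  # rank-m approximation of K
```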

20.
Nowadays, a number of methods use an L1 penalty to solve the variable selection problem for Cox's proportional hazards model. In 2010, Xu et al. proposed L1/2 regularization and proved that the L1/2 penalty is sparser than the L1 penalty in linear regression models. In this paper, we propose a novel shooting method for L1/2 regularization and apply it to the Cox model for variable selection. Experimental results based on comprehensive simulation studies and on real Primary Biliary Cirrhosis and diffuse large B-cell lymphoma datasets show that the L1/2 regularization shooting method performs competitively.
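For background, the classical shooting algorithm is coordinate-wise soft thresholding for the L1 (lasso) case, which the paper adapts to the L1/2 penalty and the Cox partial likelihood; a sketch on plain linear regression:

```python
# Classical shooting (coordinate descent) for the lasso, the L1 special
# case the paper's L1/2 shooting method generalizes.
import numpy as np

def shooting_lasso(X, y, lam=0.1, n_iter=100):
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ w + X[:, j] * w[j]   # residual excluding feature j
            rho = X[:, j] @ r_j
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w
```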
