Similar Documents
Found 20 similar documents (search time: 0 ms)
1.
An improved online least squares support vector machine regression algorithm   Cited by: 4 (self: 0, others: 4)
To address the drawbacks of the standard least squares support vector machine on large-scale data sets (slow training, heavy computation, and difficulty with online training), a modified forgetting-factor rectangular-window method is combined with the support vector machine, yielding an online least squares support vector machine regression algorithm based on an improved forgetting-factor rectangular window. The approach emphasizes the data in the current window while still accounting for the influence of historical data, reduces the computational load, and improves online identification accuracy. Simulation examples demonstrate the effectiveness of the method.
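The windowed, forgetting-factor LSSVM idea summarized above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it re-solves the standard LSSVM linear system over the current rectangular window, with older samples given weaker influence through larger effective regularization; the class and parameter names are invented.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    """RBF kernel matrix between row sets X and Z."""
    d = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

class WindowedLSSVR:
    """LSSVM regression over a rectangular window of recent samples,
    with a forgetting factor lam that down-weights older samples."""
    def __init__(self, window=50, C=100.0, gamma=1.0, lam=0.98):
        self.window, self.C, self.gamma, self.lam = window, C, gamma, lam
        self.X, self.y = [], []

    def update(self, x, y):
        self.X.append(x); self.y.append(y)
        if len(self.X) > self.window:          # slide the window
            self.X.pop(0); self.y.pop(0)
        X = np.asarray(self.X); t = np.asarray(self.y, dtype=float)
        n = len(t)
        # older samples get larger effective regularization (forgotten faster)
        ages = np.arange(n - 1, -1, -1)
        reg = np.diag(1.0 / (self.C * self.lam ** ages))
        # LSSVM linear system: [[0, 1^T], [1, K + reg]] [b; alpha] = [0; t]
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0; A[1:, 0] = 1.0
        A[1:, 1:] = rbf(X, X, self.gamma) + reg
        sol = np.linalg.solve(A, np.concatenate(([0.0], t)))
        self.b, self.alpha, self.Xw = sol[0], sol[1:], X

    def predict(self, x):
        return float(rbf(np.atleast_2d(x), self.Xw, self.gamma) @ self.alpha + self.b)
```

A genuinely online variant would update the solution incrementally as samples enter and leave the window instead of re-solving the system at every step; that incremental update is where the computational savings described in the abstract come from.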

2.
An improved support vector machine classification algorithm   Cited by: 1 (self: 0, others: 1)
After studying the standard SVM classification algorithm, this paper proposes a fast support vector machine classification method. By solving two related SVM problems, the method finds two non-parallel planes, each of which lies close to the sample points of its own class and far from those of the other class; these two planes then yield an optimal plane separating the two classes. For the nonlinear case, a fast kernel-based classification method is introduced. The algorithm greatly increases classification speed, and experiments on real data sets demonstrate its effectiveness.
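The two-non-parallel-planes construction described above is in the spirit of twin SVMs. The sketch below uses a least-squares variant so that each plane has a closed form; this is an illustrative stand-in for the paper's formulation, and all names are invented. Each plane is fit close to its own class and pushed to roughly unit (scaled) distance from the other class; a point is assigned to the class whose plane is nearer.

```python
import numpy as np

def fit_twin_planes(A, B, c=1.0):
    """Fit two non-parallel planes: each (w, b) lies close to its own
    class and away from the other (least-squares twin-plane variant)."""
    def plane(own, other):
        H = np.hstack([own, np.ones((len(own), 1))])     # [own  e]
        G = np.hstack([other, np.ones((len(other), 1))]) # [other e]
        e = np.ones(len(other))
        # minimize 0.5||H u||^2 + 0.5 c ||G u + e||^2  (closed form)
        M = H.T @ H + c * G.T @ G + 1e-8 * np.eye(H.shape[1])
        u = -c * np.linalg.solve(M, G.T @ e)
        return u[:-1], u[-1]
    return plane(A, B), plane(B, A)

def classify(x, planes):
    """Assign x to the class whose plane is nearer (normalized distance)."""
    d = [abs(x @ w + b) / np.linalg.norm(w) for w, b in planes]
    return 0 if d[0] <= d[1] else 1
```

Because each plane only needs a small linear solve rather than a full quadratic program, this construction illustrates where the speedup claimed in the abstract can come from.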

3.
Feature engineering is one of the most complex aspects of system design in machine learning. Fortunately, kernel methods provide the designer with formidable tools to tackle such complexity. Among others, tree kernels (TKs) have been successfully applied for representing structured data in diverse domains, ranging from bioinformatics and data mining to natural language processing. One drawback of such methods is that learning with them typically requires a number of kernel computations between training examples that is quadratic in the size of the training set. However, in practice substructures often repeat in the data, which makes it possible to avoid a large number of redundant kernel evaluations. In this paper, we propose the use of Directed Acyclic Graphs (DAGs) to compactly represent trees in the training algorithm of Support Vector Machines. In particular, we use DAGs in each iteration of the cutting plane algorithm (CPA) to encode the model, which is composed of a set of trees. This enables DAG kernels to efficiently evaluate TKs between the current model and a given training tree. Consequently, the total amount of computation is reduced by avoiding redundant evaluations over shared substructures. We provide theory and algorithms to formally characterize this idea, which we tested on several datasets. The empirical results confirm the benefits of the approach in terms of significant speedups over previous state-of-the-art methods. In addition, we propose an alternative sampling strategy within the CPA to address the class-imbalance problem, which, coupled with fast learning methods, provides a viable TK learning framework for a large class of real-world applications.

4.
An instance-weighted variant of the support vector machine (SVM) has attracted considerable attention recently since it is useful in various machine learning tasks such as non-stationary data analysis, heteroscedastic data modeling, transfer learning, learning to rank, and transduction. An important challenge in these scenarios is to overcome the computational bottleneck: instance weights often change dynamically or adaptively, and thus the weighted SVM solutions must be repeatedly computed. In this paper, we develop an algorithm that can efficiently and exactly update the weighted SVM solutions for arbitrary changes of instance weights. Technically, this contribution can be regarded as an extension of the conventional solution-path algorithm for a single regularization parameter to multiple instance-weight parameters. However, this extension gives rise to a significant problem: breakpoints (at which the solution path turns) have to be identified in high-dimensional space. To facilitate this, we introduce a parametric representation of instance weights. We also provide a geometric interpretation in weight space using the notion of a critical region: a polyhedron in which the current affine solution remains optimal. We then find breakpoints at intersections of the solution path and the boundaries of the polyhedrons. Through extensive experiments on various practical applications, we demonstrate the usefulness of the proposed algorithm.

5.
The traditional support vector machine was designed for small samples; on large samples it suffers from slow training and high memory consumption, and it lacks incremental learning capability, while commonly used incremental learning methods can fall into local minima. This paper describes an improved support vector machine algorithm (a fast incremental weighted support vector machine) for stock index prediction. The algorithm first performs phase-space reconstruction on the index samples, then decomposes them into several working subsets, and assigns different weights according to sample importance when building the prediction model. Experimental analysis shows that training speed improves markedly while generalization accuracy remains slightly better.

6.
A smoothing-type algorithm for solving support vector machines (SVMs) is proposed. Based on the KKT system of the dual optimization model, a new family of smoothing functions is introduced, the KKT system is reformulated as a smooth system of equations, and a smoothing-type algorithm is applied to solve it. Under suitable conditions the algorithm is globally convergent and locally superlinearly convergent. Numerical examples show that the algorithm is highly effective and has broad application prospects.

7.
Least squares support vector machines ensemble models for credit scoring   Cited by: 1 (self: 0, others: 1)
Due to the recent financial crisis and the regulatory concerns of Basel II, credit risk assessment is becoming one of the most important topics in the field of financial risk management. Quantitative credit scoring models are widely used tools for credit risk assessment in financial institutions. Although single support vector machines (SVM) have demonstrated good classification performance, a single classifier with a fixed group of training samples and parameter settings may have some kind of inductive bias. One effective way to reduce the bias is an ensemble model. In this study, several ensemble models based on least squares support vector machines (LSSVM) are brought forward for credit scoring. The models are tested on two real-world datasets, and the results show that ensemble strategies can help to improve performance to some degree and are effective for building credit scoring models.
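A minimal LSSVM ensemble in the spirit of the abstract can be sketched with bagging and majority voting over closed-form LSSVM classifiers. The abstract does not specify the ensemble strategy; bootstrap resampling, the RBF kernel, and all parameter values below are illustrative assumptions, and labels are assumed to be +1 (good) / -1 (bad).

```python
import numpy as np

def rbf(X, Z, g=0.5):
    """RBF kernel matrix between row sets X and Z."""
    return np.exp(-g * ((X[:, None] - Z[None, :]) ** 2).sum(-1))

def fit_lssvm(X, y, C=10.0, g=0.5):
    """Closed-form LSSVM classifier; y in {-1, +1}."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0; A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, g) + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y.astype(float))))
    return sol[0], sol[1:], X          # bias, alphas, support data

def bagged_lssvm(X, y, n_models=5, C=10.0, g=0.5, seed=0):
    """Bagging: each member is trained on a bootstrap resample."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y), len(y))
        models.append(fit_lssvm(X[idx], y[idx], C, g))
    return models

def predict(models, Xnew, g=0.5):
    """Majority vote over the ensemble members."""
    votes = [np.sign(rbf(Xnew, Xtr, g) @ a + b) for b, a, Xtr in models]
    return np.sign(np.sum(votes, axis=0))
```

The design intuition matches the abstract: each member sees a different sample of applicants, so the vote averages away the inductive bias of any single fixed training set.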

8.
To address the misclassification caused by the imbalance between the two classes of training samples in class-incremental learning with support vector machines, a weighted class-incremental learning algorithm is presented. The newly added class is treated as the positive class and the existing classes as the negative class, and sub-classifiers are trained with the one-against-all method; during training, class weights are assigned according to the proportion of training samples, which improves the classification accuracy for small classes. Experiments demonstrate the effectiveness of the method.

9.
To overcome the complex training procedure and weak generalization of two-class support vector machine classifiers in stego-image detection, a new detection scheme based on a genetic algorithm and a one-class support vector machine is proposed. The genetic algorithm performs image feature selection, and a one-class support vector machine serves as the classifier. Experimental results show that, compared with detection using a one-class SVM without feature selection, the scheme improves both the detection rate of stego images and the efficiency of the system.

10.
Embedding feature selection in nonlinear support vector machines (SVMs) leads to a challenging non-convex minimization problem, which can be prone to suboptimal solutions. This paper develops an effective algorithm to directly solve the embedded feature selection primal problem. We use a trust-region method, which is better suited for non-convex optimization compared to line-search methods, and guarantees convergence to a minimizer. We devise an alternating optimization approach to tackle the problem efficiently, breaking it down into a convex subproblem, corresponding to standard SVM optimization, and a non-convex subproblem for feature selection. Importantly, we show that a straightforward alternating optimization approach can be susceptible to saddle point solutions. We propose a novel technique, which shares an explicit margin variable to overcome saddle point convergence and improve solution quality. Experiment results show our method outperforms the state-of-the-art embedded SVM feature selection method, as well as other leading filter and wrapper approaches.

11.
12.
This paper focuses on learning recognition systems able to cope with sequential data for classification and segmentation tasks. It investigates the integration of discriminative power into the learning of generative models, which are usually used for such data. Based on a procedure that transforms sample data into a generative model, learning is viewed as the selection of efficient component models in a mixture of generative models. This may be done through the learning of a support vector machine. We propose a few kernels for this purpose and report experimental results for classification and segmentation tasks.

13.
A new fuzzy support vector machine multi-class classification algorithm   Cited by: 5 (self: 3, others: 2)
In fuzzy multi-class classification, since training samples play different roles during training, a membership degree is assigned to all data, including outliers. For the first formulation of fuzzy support vector machines (FSVM), the notion of a class center is introduced and combined with the one-against-all (1-a-a) scheme, yielding a fuzzy support vector machine multi-class algorithm based on one-against-all combination; it is compared with the 1-a-1 (one-against-one) and plain 1-a-a combination algorithms. Numerical experiments show that the algorithm is effective, achieves high classification accuracy, and generalizes better.
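The class-center membership assignment mentioned above can be sketched as follows. This uses one common FSVM membership formula (distance to the class center, normalized by the largest distance); the abstract does not give the exact expression, so treat it as an assumption.

```python
import numpy as np

def fuzzy_memberships(X, delta=1e-3):
    """Membership s_i = 1 - d_i / (d_max + delta), where d_i is the
    distance of sample i to its class center; outliers far from the
    center receive small memberships."""
    center = X.mean(axis=0)
    d = np.linalg.norm(X - center, axis=1)
    return 1.0 - d / (d.max() + delta)
```

In the FSVM objective these memberships scale the slack penalty of each training sample, so mislabeled or outlying points pull the separating surface less; in the one-against-all scheme, one such membership vector is computed per class before training each sub-classifier.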

14.
Hong Qiao, Pattern Recognition, 2007, 40(9): 2543-2549
Support vector machines (SVMs) are a new and important tool in data classification. Recently, much attention has been devoted to large-scale data classification, where decomposition methods for SVMs play an important role. So far, several decomposition algorithms for SVMs have been proposed and applied in practice. The recently proposed algorithms based on rate certifying pairs/sets provide very attractive features compared with many other decomposition algorithms: they converge not only in finitely many steps but also in polynomial time. However, it is difficult to reach a good balance between low computational cost and fast convergence. In this paper, we propose a new simple decomposition algorithm based on a new philosophy of working set selection. It is proven that the working set selected by the new algorithm is a rate certifying set. Further, compared with the existing algorithms based on rate certifying pairs/sets, our algorithm combines lower computational complexity with faster convergence.

15.
Research on a support vector machine based classification algorithm for imbalanced data   Cited by: 1 (self: 0, others: 1)
For imbalanced data classification, an algorithm combining SMOTE with kernel modification is proposed. First, SMOTE is applied to the data to reduce the degree of imbalance; then, based on Riemannian geometry, the kernel function is modified via a conformal transformation to improve the generalization ability of the support vector machine; finally, the modified support vector machine is applied to the new data. Experimental results show that the method effectively improves classification accuracy on the minority class while maintaining overall accuracy.
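The oversampling half of the method is standard SMOTE, which can be sketched minimally as below (the conformal kernel modification is not shown; `k` and the neighbor-selection details are the usual SMOTE conventions):

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE: synthesize n_new minority samples by interpolating
    between a random minority sample and one of its k nearest minority
    neighbors."""
    rng = np.random.default_rng(seed)
    d = ((X_min[:, None] - X_min[None, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-neighbors
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest minority neighbors
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(min(k, len(X_min) - 1))]
        lam = rng.random()                      # random point on the segment
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)
```

Each synthetic point lies on a segment between two real minority samples, so oversampling densifies the minority region instead of merely duplicating points; the conformal kernel step then further enlarges the margin around the (now more balanced) boundary.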

16.
Automatic model selection for SVMs based on a heuristic genetic algorithm   Cited by: 6 (self: 0, others: 6)
Automatic model selection for support vector machines (SVMs) is the key to their practical application. The commonly used leave-one-out (LOO) method based on exhaustive search is cumbersome and inefficient, and so far most algorithms cannot perform model selection automatically and effectively. This paper uses a real-coded heuristic genetic algorithm to realize automatic model selection for Gaussian-kernel SVMs. After analyzing the influence of the SVM hyperparameters on performance and examining two SVM performance estimates, a suitable fitness function for the genetic algorithm is determined. Simulation results on artificial and real data demonstrate the feasibility and efficiency of the proposed method.
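A real-coded GA of the kind described can be sketched generically: for SVM model selection, the fitness would be a performance estimate (e.g. cross-validated accuracy) as a function of the hyperparameters (log C, log gamma). The operators below (tournament selection, blend crossover, Gaussian mutation, elitism) are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def real_ga(fitness, bounds, pop_size=20, gens=40, seed=0):
    """Real-coded GA; `fitness` maps a parameter vector to a score
    (higher is better), `bounds` is a list of (lo, hi) per parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, len(lo)))

    def tournament(f):
        i, j = rng.integers(pop_size, size=2)
        return pop[i] if f[i] >= f[j] else pop[j]

    for _ in range(gens):
        f = np.array([fitness(p) for p in pop])
        new = [pop[f.argmax()].copy()]              # elitism: keep the best
        while len(new) < pop_size:
            p1, p2 = tournament(f), tournament(f)
            lam = rng.random()
            child = lam * p1 + (1 - lam) * p2       # blend crossover
            child += rng.normal(0, 0.1 * (hi - lo)) # Gaussian mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    f = np.array([fitness(p) for p in pop])
    return pop[f.argmax()]
```

Searching in log-space keeps the GA's step sizes meaningful across the orders of magnitude that C and gamma typically span, which is one reason real coding is preferred over binary coding here.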

17.
This paper presents a novel blind source separation algorithm that integrates estimation of the probability density function with the fixed-point algorithm. First, the kernel function is constructed from radial basis functions and a sparse representation of the probability density function of the mixed signals is established; this sparse representation is based on a support vector machine recursion method from neural network theory, which yields a closed-form expression for the probability density function. Finally, a new estimation method for the activation function is put forward; combining FastICA with this estimation method gives a new blind source separation algorithm. Simulation results verify that the algorithm can successfully separate mixed sub-Gaussian and super-Gaussian source signals, with excellent performance.

18.
Self-tuning PID control of nonlinear systems based on support vector machines   Cited by: 3 (self: 0, others: 3)
A new support vector machine based self-tuning PID control method for nonlinear systems is presented. A support vector machine identifies the nonlinear relationship of the system, which is then linearized to extract an instantaneous linear model, and the optimal PID controller parameters are obtained under a minimum-variance criterion. To improve controller performance, several refinements are introduced, including a first-order filter, a criterion for updating the controller parameters, and adjustment of the penalty coefficient. Simulations on typical nonlinear systems verify the effectiveness and feasibility of the method.

19.
In this paper, we propose to reinforce the Self-Training strategy in semi-supervised mode by using a generative classifier that may help to train the main discriminative classifier to label the unlabeled data. We call this semi-supervised strategy Help-Training and apply it to training kernel machine classifiers such as support vector machines (SVMs) and least squares support vector machines. In addition, we propose a model selection strategy for semi-supervised training. Experimental results on both artificial and real problems demonstrate that Help-Training significantly outperforms standard Self-Training. Moreover, compared to other semi-supervised methods developed for SVMs, our Help-Training strategy often gives the lowest error rate.
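The Help-Training loop can be sketched as follows, with a simple class-conditional Gaussian model standing in for the generative classifier; the log-odds confidence measure, the batch size, and the round count are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def gaussian_scores(X, Xl, yl):
    """Generative helper: class-conditional Gaussians with a shared
    diagonal covariance; returns log-odds for the positive class."""
    mu0, mu1 = Xl[yl == -1].mean(0), Xl[yl == 1].mean(0)
    var = Xl.var(0) + 1e-6
    ll1 = -((X - mu1) ** 2 / var).sum(1)
    ll0 = -((X - mu0) ** 2 / var).sum(1)
    return ll1 - ll0

def help_training(Xl, yl, Xu, rounds=5, batch=10):
    """Each round, the generative model pseudo-labels the unlabeled
    points it is most confident about; those points are moved into the
    labeled set that will train the discriminative (SVM-style) model."""
    Xu = Xu.copy()
    for _ in range(rounds):
        if len(Xu) == 0:
            break
        s = gaussian_scores(Xu, Xl, yl)
        conf = np.argsort(-np.abs(s))[:batch]      # most confident first
        Xl = np.vstack([Xl, Xu[conf]])
        yl = np.concatenate([yl, np.sign(s[conf])])
        Xu = np.delete(Xu, conf, axis=0)
    return Xl, yl
```

The enlarged labeled set returned at the end is what trains the final SVM; the point of the "help" is that the generative model, not the SVM itself, decides which pseudo-labels are trustworthy, avoiding the self-reinforcing mistakes of plain Self-Training.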

20.
Support vector machines (SVMs) have good accuracy and generalization properties, but they tend to be slow to classify new examples. In contrast to previous work that aims to reduce the time required to fully classify all examples, we present a method that provides the best-possible classification given a specific amount of computational time. We construct two SVMs: a “full” SVM that is optimized for high accuracy, and an approximation SVM (via reduced-set or subset methods) that provides extremely fast, but less accurate, classifications. We apply the approximate SVM to the full data set, estimate the posterior probability that each classification is correct, and then use the full SVM to reclassify items in order of their likelihood of misclassification. Our experimental results show that this method rapidly achieves high accuracy, by selectively devoting resources (reclassification) only where needed. It also provides the first such progressive SVM solution that can be applied to multiclass problems.
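The progressive idea (label everything fast, then spend a reclassification budget on the least-confident items) can be sketched as below. A regularized linear scorer stands in for the reduced-set approximation and |score| for the posterior-based ordering; both are simplifications of the paper's method, and all names are illustrative.

```python
import numpy as np

def rbf(X, Z, g=1.0):
    """RBF kernel matrix between row sets X and Z."""
    return np.exp(-g * ((X[:, None] - Z[None, :]) ** 2).sum(-1))

def fit_linear(X, y):
    """Fast approximate model: regularized least squares on [X, 1]."""
    H = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(H.T @ H + 1e-3 * np.eye(H.shape[1]), H.T @ y)

def fit_kernel(X, y, C=10.0, g=1.0):
    """Slow 'full' model: closed-form kernel LSSVM, labels in {-1, +1}."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0; A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, g) + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y.astype(float))))
    return sol[0], sol[1:], X

def progressive_classify(Xte, w, full, budget):
    """Label everything with the fast model, then spend the budget
    reclassifying the examples it is least confident about."""
    H = np.hstack([Xte, np.ones((len(Xte), 1))])
    score = H @ w
    labels = np.sign(score)
    b, a, Xtr = full
    for i in np.argsort(np.abs(score))[:budget]:
        labels[i] = np.sign(rbf(Xte[i:i + 1], Xtr) @ a + b)
    return labels
```

With budget 0 the output is the fast model's answer; with a full budget it matches the slow model; intermediate budgets trade accuracy for time exactly in the spirit of the abstract.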
