Similar Documents
20 similar documents found (search time: 171 ms)
1.
A Parameter Selection Method for Support Vector Regression in Time Series Forecasting (cited: 1, self: 0, other: 1)
Support vector regression (SVR), as a relatively new learning method, shows good generalization and predictive ability when applied to time series modeling and forecasting. In SVR modeling, parameter selection is critical to model accuracy. To address the shortcomings of current SVR parameter optimization, this paper proposes an SVR parameter selection method oriented to time series forecasting. Based on the characteristics of time series and their prediction, the traditional cross-validation procedure is modified so that the directional (forward-in-time) nature of forecasting is preserved while the information contained in the limited samples is fully exploited; the procedure is combined with a weighted variant of SVR to select good model parameters. Experimental results on typical time series demonstrate the effectiveness of the proposed method, which performs well in time series forecasting.
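
A minimal sketch of the order-preserving cross-validation idea, assuming scikit-learn: TimeSeriesSplit stands in for the paper's improved CV scheme (every validation fold lies strictly after its training fold, preserving the forecasting direction), and the toy series, lag embedding, and parameter grid are illustrative.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Toy series: predict x[t] from the previous 4 values (illustrative embedding).
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.standard_normal(300)
lag = 4
X = np.column_stack([series[i:len(series) - lag + i] for i in range(lag)])
y = series[lag:]

# TimeSeriesSplit keeps every validation fold strictly after its training
# fold, so model selection never peeks into the future.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": [0.1, 1.0], "epsilon": [0.01, 0.1]},
    cv=TimeSeriesSplit(n_splits=5),
)
grid.fit(X, y)
print(grid.best_params_)
```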

2.
This paper studies the SVM parameter optimization problem. Because the algorithm demands accurate parameter selection, SVMs applied to large data sets suffer long training times and high memory consumption when determining the optimal model parameters, and easily settle on local optima. To overcome these shortcomings, a depth-first search algorithm is used to improve the parameter optimization mechanism: parameter optimization is cast as a combinatorial optimization problem with the SVM's classification error as the objective function, the problem is solved by depth-first search, and the resulting model is applied to three standard classification data sets. Simulation results show that the SVM with optimized parameters trains faster and classifies more accurately, effectively solving the SVM parameter optimization problem.

3.
Recursive Prediction of Chaotic Time Series Based on the LS-SVM Algorithm (cited: 2, self: 0, other: 2)
This paper studies the use of least squares support vector machines (LS-SVM) to predict chaotic time series with varying parameters. The support vector machine is derived from the principle of structural risk minimization; the LS-SVM variant adopts a quadratic loss function with equality constraints, retaining the advantages of the standard SVM while greatly reducing the computational load. Predicting a variable-parameter chaotic time series is a typical small-sample learning problem: because slowly drifting parameters continually alter the system dynamics, global modeling approaches hardly apply and online, real-time prediction is required. To track and predict such series quickly, a simplified online recursive LS-SVM algorithm is investigated. Prediction experiments on typical variable-parameter chaotic time series demonstrate the effectiveness of the method.

4.
To handle the high-dimensional data produced by intrusion detection systems and the SVM parameter optimization problem, this paper proposes a network intrusion detection model in which a genetic algorithm selects features and SVM parameters simultaneously. The feature subset and the SVM parameters are first encoded into a chromosome, with the classification accuracy of intrusion detection as each individual's fitness; the genetic algorithm's global search then locates, at the same time, the feature combination most influential for classification and the optimal SVM parameters. Simulation experiments on the KDD99 data set show that the model quickly finds the optimal feature subset and SVM parameters and raises the detection rate, making it a strong network intrusion detection algorithm.
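
A minimal sketch of the chromosome described above, assuming scikit-learn and a stand-in dataset (breast cancer rather than KDD99): binary feature-mask bits plus two real-coded genes for C and gamma, with cross-validated accuracy as fitness. The stripped-down GA (truncation selection and mutation only) and all sizes are illustrative.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
n_feat = X.shape[1]
rng = np.random.default_rng(0)

def fitness(chrom):
    mask = chrom[:n_feat].astype(bool)
    if not mask.any():
        return 0.0
    C, gamma = 10 ** chrom[n_feat], 10 ** chrom[n_feat + 1]  # log-scale genes
    return cross_val_score(SVC(C=C, gamma=gamma), X[:, mask], y, cv=3).mean()

# Population: binary feature mask + two real-coded parameter genes in [-3, 3].
pop = np.hstack([rng.integers(0, 2, (20, n_feat)),
                 rng.uniform(-3, 3, (20, 2))]).astype(float)
for gen in range(15):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]           # truncation selection
    children = parents[rng.integers(0, 10, 10)].copy()
    flip = rng.random((10, n_feat)) < 0.05            # bit-flip mutation
    children[:, :n_feat] = np.where(flip, 1 - children[:, :n_feat],
                                    children[:, :n_feat])
    children[:, n_feat:] = np.clip(                   # Gaussian mutation
        children[:, n_feat:] + rng.normal(0, 0.3, (10, 2)), -3, 3)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("features kept:", int(best[:n_feat].sum()),
      "C=%.3g gamma=%.3g" % (10 ** best[n_feat], 10 ** best[n_feat + 1]))
```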

5.
Optimizing Support Vector Machine Model Parameters with a Chaotic Particle Swarm Algorithm (cited: 1, self: 1, other: 0)
This paper studies SVM model optimization. Parameter selection determines an SVM's learning performance and generalization ability, but the selection range admits a great many candidate values, so blindly searching several parameters for the optimum carries an enormous time cost and rarely finds it. Common optimizers such as genetic algorithms and particle swarm optimization easily fall into local extrema and optimize poorly. To solve the parameter search problem, an SVM parameter selection method based on chaotic particle swarm optimization is proposed: chaos theory is introduced into the particle swarm algorithm to increase population diversity and the ergodicity of the particle search, which effectively improves the convergence speed and accuracy of PSO and yields an optimized SVM model. A simulation on credit card data shows that the SVM classifier tuned by chaotic PSO is more accurate and more efficient than classifiers tuned by traditional algorithms.
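
A minimal sketch of the chaotic ingredient: a logistic map generates the initial particle positions so they spread ergodically over the search space. The swarm coefficients are illustrative, and a Rastrigin test function stands in for the SVM cross-validation objective the paper actually optimizes.

```python
import numpy as np

def objective(x):                      # stand-in for SVM CV error (Rastrigin)
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10, axis=-1)

dim, n, lo, hi = 2, 30, -5.12, 5.12

# Chaotic initialization: iterate the logistic map z <- 4z(1-z) to fill
# the swarm with a well-spread, ergodic sequence of positions.
z, chaos = 0.345, np.empty(n * dim)
for i in range(n * dim):
    z = 4 * z * (1 - z)
    chaos[i] = z
pos = lo + (hi - lo) * chaos.reshape(n, dim)
vel = np.zeros_like(pos)
pbest, pval = pos.copy(), objective(pos)
gbest = pbest[pval.argmin()]

rng = np.random.default_rng(0)
for _ in range(200):                   # standard PSO velocity/position update
    r1, r2 = rng.random((2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = objective(pos)
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[pval.argmin()]
print("best value:", pval.min(), "at", gbest)
```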

6.
李景灿, 丁世飞. 《智能系统学报》, 2019, 14(6): 1121-1126
The twin support vector machine (TWSVM) is a machine learning algorithm derived from the SVM, offering fast training and excellent classification performance. However, TWSVM handles parameter selection poorly, and unsuitable parameters degrade its classification ability. The artificial fish swarm algorithm (AFSA) is a swarm intelligence method with strong global search capability and natural parallelism. This paper combines TWSVM with AFSA to solve the TWSVM parameter selection problem: the TWSVM parameters serve as an artificial fish's position and the classification accuracy as the objective function; the fish's foraging, swarming, following, and random behaviors update the positions and the incumbent best solution; and the optimal parameters and accuracy are obtained when iteration ends. The algorithm determines the TWSVM parameters automatically during training, avoiding blind parameter selection and improving classification performance.

7.
A Software Reliability Prediction Model Based on a Genetically Optimized Support Vector Machine (cited: 5, self: 0, other: 5)
Software reliability prediction can identify fault-prone modules early in development. This paper proposes an improved support vector machine for the task. To cope with the difficulty of choosing SVM parameters, a genetic algorithm is introduced into the parameter selection, producing a software reliability prediction model based on a GA-optimized SVM; principal component analysis reduces the dimensionality of the software metric data. Simulation experiments show that the model predicts more accurately than SVM, BP neural network, classification and regression tree, and cluster analysis models.

8.
A Genetic-Algorithm Method for Optimizing the Parameters of Least Squares Support Vector Machines (cited: 11, self: 1, other: 11)
The support vector machine is a learning algorithm grounded in statistical learning theory that handles small-sample learning well. SVMs built with different parameters and kernel functions vary greatly in performance, yet there is still no firm theoretical basis for choosing them. For this parameter selection problem, a method is proposed that uses a genetic algorithm to optimize the parameters of a least squares SVM. Simulation experiments on the MATLAB platform with the LS-SVMlab toolbox show that the method makes parameter selection more efficient and the resulting parameters classify the test samples optimally, avoiding the drawbacks of manually set parameters while shortening the optimization time.

9.
To address the poor stability and low accuracy of network traffic prediction models, this paper proposes a prediction model (GCS-SVM) in which an improved cuckoo search algorithm optimizes a support vector machine. The network traffic time series is first reconstructed, the improved cuckoo search algorithm then optimizes the SVM parameters, and this optimal parameter set is used to build the traffic prediction model. Simulation results show that the GCS-SVM model is effective and feasible for network traffic prediction.

10.
Applied Research on PSO-Optimized SVM Prediction (cited: 7, self: 2, other: 5)
SVM parameters strongly influence SVM performance, and their selection is an important research topic. For this problem, a particle swarm optimization (PSO) based SVM parameter selection method is proposed. Experimental results show that the SVM regression model optimized by PSO attains high prediction accuracy, making PSO an effective method for selecting SVM parameters.

11.
Support vector machines (SVMs) are a class of popular classification algorithms owing to their high generalization ability. However, training SVMs on a large set of learning samples is time-consuming, and improving learning efficiency is one of the most important research tasks on SVMs. It is known that although many candidate training samples may be available, only the samples near the decision boundary, the support vectors, affect the optimal classification hyperplanes; finding these samples and training SVMs with them greatly reduces training time and space complexity. Based on this observation, we introduce a neighborhood-based rough set model to search for boundary samples. Using the model, we first divide the sample space into three subsets: positive region, boundary, and noise. We further partition the input features into four subsets: strongly relevant features; weakly relevant and indispensable features; weakly relevant and superfluous features; and irrelevant features. We then train SVMs only with the boundary samples in the relevant and indispensable feature subspaces, so feature and sample selection are conducted simultaneously with the proposed model. A set of experimental results shows that the model selects very few features and samples for training while the classification performance is preserved or even improved.
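
A minimal sketch of the neighborhood-based partition described above, assuming scikit-learn: each sample's delta-neighborhood is inspected, pure neighborhoods go to the positive region, strong disagreements are treated as noise, and the rest form the boundary on which the SVM is trained. The radius heuristic, noise threshold, and synthetic data are illustrative, not the paper's exact model, and the feature-partitioning step is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=5, random_state=0)
delta = np.median(np.linalg.norm(X - X.mean(0), axis=1)) * 0.3  # heuristic radius

nn = NearestNeighbors(radius=delta).fit(X)
_, idx = nn.radius_neighbors(X)

region = np.empty(len(X), dtype="<U8")
for i, nbrs in enumerate(idx):
    same = (y[nbrs] == y[i]).mean()        # neighborhood includes the sample
    if same == 1.0:
        region[i] = "positive"             # consistent neighborhood
    elif same < 0.3:
        region[i] = "noise"                # sample contradicts its neighborhood
    else:
        region[i] = "boundary"

mask = region == "boundary"
print("boundary fraction:", mask.mean())
clf = SVC().fit(X[mask], y[mask])          # train only on boundary samples
```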

12.
A Semantic SVM for Text Classification and Its Online Learning Algorithm (cited: 2, self: 1, other: 1)
Exploiting the SVM's high generalization ability under small training sets, together with the fact that features of same-class texts cluster in feature space, this paper proposes the semantic SVM: an SVM that uses a set of semantic centers, rather than the original training set, as its training samples and support vectors. The steps for generating the semantic center set are given, followed by a framework for online learning (online accumulation of classification knowledge) with the semantic SVM and an implementation of the online learning algorithm based on SMO. Experimental results suggest great application potential: both online learning speed and classification speed improve by orders of magnitude over the standard SVM and its simple incremental variant, with a measurable advantage in classification accuracy as well.
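
A minimal sketch of the semantic-center idea, assuming scikit-learn: per-class k-means centroids replace the raw documents as the SVM training set. The 20-newsgroups subset (downloaded on first use), TF-IDF features, and the number of centers per class are illustrative; the paper's own center-generation steps and SMO-based online learner are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

cats = ["sci.space", "rec.autos"]
data = fetch_20newsgroups(subset="train", categories=cats)
X = TfidfVectorizer(max_features=2000).fit_transform(data.data).toarray()
y = np.array(data.target)

centers, labels = [], []
for c in np.unique(y):
    km = KMeans(n_clusters=15, n_init=5, random_state=0).fit(X[y == c])
    centers.append(km.cluster_centers_)   # "semantic centers" for class c
    labels.append(np.full(15, c))

# Train on ~30 centers instead of ~1200 documents: far fewer support-vector
# candidates, hence much cheaper (re)training, as in the semantic SVM.
clf = SVC(kernel="linear").fit(np.vstack(centers), np.concatenate(labels))
print("accuracy on the raw training documents:", clf.score(X, y))
```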

13.
Incremental training of support vector machines (cited: 13, self: 0, other: 13)
We propose a new algorithm for the incremental training of support vector machines (SVMs) that is suitable for problems of sequentially arriving data and fast constraint parameter variation. Our method involves using a "warm-start" algorithm for the training of SVMs, which allows us to take advantage of the natural incremental properties of the standard active set approach to linearly constrained optimization problems. Incremental training involves quickly retraining a support vector machine after adding a small number of additional training vectors to the training set of an existing (trained) support vector machine. Similarly, the problem of fast constraint parameter variation involves quickly retraining an existing support vector machine using the same training set but different constraint parameters. In both cases, we demonstrate the computational superiority of incremental training over the usual batch retraining method.
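
scikit-learn's SVC does not expose the paper's active-set warm start, so this sketch shows the same usage pattern (cheaply updating an existing model as small batches of vectors arrive) with a hinge-loss linear SVM trained by SGD instead; the dataset and batch sizes are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
classes = np.unique(y)

clf = SGDClassifier(loss="hinge", alpha=1e-4, random_state=0)  # linear-SVM loss
clf.partial_fit(X[:1000], y[:1000], classes=classes)           # initial model

# Data arrives sequentially; each partial_fit call updates the existing
# model instead of retraining from scratch (the "warm" starting point).
for start in range(1000, 5000, 500):
    clf.partial_fit(X[start:start + 500], y[start:start + 500])
    print(f"after {start + 500} samples: acc={clf.score(X, y):.3f}")
```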

14.
Determining the kernel and error penalty parameters for support vector machines (SVMs) is very problem-dependent in practice. A popular way to choose the kernel parameters is grid search: classifiers are trained with different kernel parameters, and only one of them is kept for testing, which makes training time-consuming. In this paper we propose using the inter-cluster distances in the feature space to choose the kernel parameters. Calculating such distances costs far less computation time than training the corresponding SVM classifiers, so proper kernel parameters can be chosen much faster. Experimental results show that the inter-cluster distance selects kernel parameters with which the testing accuracy of the trained SVMs is competitive with the standard ones, while the training time is significantly shortened.
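
A minimal sketch of the inter-cluster criterion, assuming scikit-learn: the squared distance between the two class centroids in RBF feature space is computed from kernel evaluations alone, and the gamma maximizing it is chosen without training a single SVM. The candidate grid and the stand-in dataset are illustrative, and the paper's exact distance measure may differ.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.metrics.pairwise import rbf_kernel

X, y = load_breast_cancer(return_X_y=True)
A, B = X[y == 0], X[y == 1]

def centroid_gap(gamma):
    # ||mu_A - mu_B||^2 in feature space, expanded via the kernel trick:
    # mean(K_AA) + mean(K_BB) - 2 * mean(K_AB); no SVM training involved.
    return (rbf_kernel(A, A, gamma=gamma).mean()
            + rbf_kernel(B, B, gamma=gamma).mean()
            - 2 * rbf_kernel(A, B, gamma=gamma).mean())

gammas = [10.0**k for k in range(-8, 1)]
best = max(gammas, key=centroid_gap)
print({g: round(centroid_gap(g), 4) for g in gammas}, "-> pick gamma =", best)
```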

15.
A parallel mixture of SVMs for very large scale problems (cited: 7, self: 0, other: 7)
Support vector machines (SVMs) are the state-of-the-art models for many classification problems, but they suffer from the complexity of their training algorithm, which is at least quadratic with respect to the number of examples. Hence, it is hopeless to try to solve real-life problems having more than a few hundred thousand examples with SVMs. This article proposes a new mixture of SVMs that can be easily implemented in parallel and where each SVM is trained on a small subset of the whole data set. Experiments on a large benchmark data set (Forest) yielded significant time improvement (time complexity appears empirically to locally grow linearly with the number of examples). In addition, and surprisingly, a significant improvement in generalization was observed.
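
A minimal sketch of the divide-and-train pattern, assuming scikit-learn and joblib: each SVM sees only a small random subset and the subsets are trained in parallel, so each quadratic-cost training problem stays small. The paper combines the experts with a trained gater network; a plain majority vote stands in here, and the synthetic data and expert count are illustrative.

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
n_experts = 10
chunks = np.array_split(np.random.default_rng(0).permutation(len(X)), n_experts)

def train(idx):
    return SVC(kernel="rbf").fit(X[idx], y[idx])   # quadratic in |subset| only

experts = Parallel(n_jobs=-1)(delayed(train)(idx) for idx in chunks)

def predict(Xq):
    votes = np.stack([e.predict(Xq) for e in experts])
    return (votes.mean(axis=0) > 0.5).astype(int)  # majority vote over experts

print("ensemble acc:", (predict(X[:2000]) == y[:2000]).mean())
```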

16.
Random forest classifier for remote sensing classification (cited: 4, self: 0, other: 4)
Growing an ensemble of decision trees and letting them vote for the most popular class produced a significant increase in classification accuracy for land cover classification. The objective of this study is to present results obtained with the random forest classifier and to compare its performance with support vector machines (SVMs) in terms of classification accuracy, training time and user-defined parameters. Landsat Enhanced Thematic Mapper Plus (ETM+) data of an area in the UK with seven different land covers were used. Results from this study suggest that the random forest classifier performs as well as SVMs in terms of classification accuracy and training time. The study also concludes that random forest classifiers require fewer user-defined parameters than SVMs and that those parameters are easier to define.
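
A minimal sketch of this kind of comparison, assuming scikit-learn and a stand-in dataset (the Landsat ETM+ scene is not public): test accuracy and wall-clock training time for a random forest versus an RBF SVM, with the forest left near its defaults to reflect its smaller tuning burden; the SVM's C and gamma values are illustrative hand-tuned choices.

```python
import time
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("SVM", SVC(C=10, gamma=0.001))]:  # SVM needs C, gamma tuned
    t0 = time.perf_counter()
    clf.fit(Xtr, ytr)
    print(f"{name}: acc={clf.score(Xte, yte):.3f}, "
          f"train time={time.perf_counter() - t0:.2f}s")
```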

17.
We present new fingerprint classification algorithms based on two machine learning approaches: support vector machines (SVMs) and recursive neural networks (RNNs). RNNs are trained on a structured representation of the fingerprint image and are also used to extract a set of distributed features of the fingerprint that can be integrated into the SVM. The SVMs are combined with a new error-correcting code scheme, which has two main advantages: (a) it can tolerate ambiguous fingerprint images in the training set, and (b) it can effectively identify the most difficult fingerprint images in the test set; rejecting these images improves the system's accuracy significantly. We report experiments on the NIST-4 fingerprint database. Our best classification accuracy is 95.6 percent at a 20 percent rejection rate, obtained by training SVMs on both FingerCode and RNN-extracted features. This result indicates the benefit of integrating global and structured representations and suggests that SVMs are a promising approach for fingerprint classification.
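
A minimal sketch of the error-correcting-code element, assuming scikit-learn: OutputCodeClassifier wraps binary SVMs in a random output code. The paper designs its own code and adds a rejection mechanism, neither of which is reproduced here; the digits dataset stands in for NIST-4, and code_size is illustrative.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Each class gets a random binary codeword; one SVM is trained per code bit,
# and a test image is assigned to the class with the nearest codeword.
ecoc = OutputCodeClassifier(SVC(gamma=0.001), code_size=3, random_state=0)
ecoc.fit(Xtr, ytr)
print("ECOC-SVM accuracy:", round(ecoc.score(Xte, yte), 3))
```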

18.
Fabien Lauer, Gérard Bloch. 《Neurocomputing》, 2008, 71(7-9): 1578-1594
For classification, support vector machines (SVMs) were introduced recently and quickly became the state of the art. The incorporation of prior knowledge into SVMs is now the key element for increasing performance in many applications. This paper reviews the current state of research on incorporating two general types of prior knowledge into SVMs for classification. The particular forms of prior knowledge considered fall into two main groups: class-invariance and knowledge about the data. The first includes invariance to transformations, to permutations, and in domains of the input space, while the second covers knowledge of unlabeled data, imbalance of the training set, and data quality. The methods are described and classified into the three categories used in the literature: sample methods, based on modifying the training data; kernel methods, based on modifying the kernel; and optimization methods, based on modifying the problem formulation. A recent method developed for support vector regression, which considers prior knowledge on arbitrary regions of the input space, is presented as applied to the classification case. A discussion then regroups sample and optimization methods under a regularization framework.

19.
Training sample selection is an important research topic for support vector machines. A common weakness of most existing sample selection methods is that their candidate set is the entire sample space, so they may select interior samples that contribute little to classification, or "over-boundary" samples that can actually degrade it. This paper proposes two sample selection methods based on an "effective" candidate set: the effective candidate set is first determined by coring out interior samples and eliminating over-boundary samples, and training samples are then selected from it. Experimental results show that, while retaining the effective candidates, the methods also improve the recognition accuracy of the SVM classifier.

20.
This article presents a detailed comparison of two types of advanced non-parametric classifiers used in remote sensing for land cover classification. A SPOT-5 HRG image of Yanqing County, Beijing, China, where agriculture and forest dominate land use, was used. Artificial neural networks (ANNs), including the adaptive backpropagation (ABP) algorithm, Levenberg–Marquardt (LM) algorithm, Quasi-Newton (QN) algorithm and radial basis function (RBF), were carefully tested. The LM–ANN and RBF–ANN, which outperform the other two, were selected for a detailed comparison with support vector machines (SVMs). The experiments show that well-trained ANNs and SVMs do not differ significantly in classification accuracy, though the SVM usually performs slightly better. Analysis of the effect of training set size highlights that the SVM classifier tolerates small training sets well and avoids the insufficient-training problem of ANN classifiers. The testing also illustrates that ANNs and SVMs can vary greatly in training time: the LM–ANN can converge very quickly but not stably, whereas training of the RBF–ANN and SVM classifiers is fast and repeatable.
