20 similar documents found (search took 0 ms)
1.
2.
By introducing rough set theory into the support vector machine (SVM), a rough margin based SVM (RMSVM) is proposed to deal with the overfitting problem caused by outliers. Like the classical SVM, the RMSVM searches for the separating hyperplane that maximizes the rough margin, defined by a lower and an upper margin. In this way, more data points are considered adaptively, rather than only the few extreme points used in the classical SVM. In addition, different support vectors may affect the learning of the separating hyperplane differently depending on their positions in the rough margin: points in the lower margin have more influence than those on the boundary of the rough margin. Experimental results on six benchmark datasets show that the classification accuracy of this algorithm is improved over the classical ν-SVM without additional computational expense.
3.
Researchers have recently focused increasingly on the support vector machine (SVM) because of its useful applications in a number of areas, such as pattern recognition, multimedia, image processing and bioinformatics. One of the main research issues is how to improve the efficiency of the original SVM model without degrading its classification performance. In this paper, we propose a modified SVM based on the properties of support vectors, together with a pruning strategy that preserves support vectors while eliminating redundant training vectors. Experiments on real images show that (1) our proposed approach can reduce the number of input training vectors while preserving the support vectors, which leads to a significant reduction in computational cost at similar levels of accuracy, and (2) the approach also works well when applied to image segmentation.
4.
High-accuracy positioning is not only essential for the efficient running of a high-speed train (HST), but also an important guarantee of its safe operation. The positioning error is zero when the train passes a balise, but it grows between adjacent balises as the train moves away from the previous one. Although the average speed method (ASM) is commonly used in engineering to compute the position of a train, analysis of field data shows that its positioning error is relatively large. In this paper, we first establish a mathematical model for computing the position of an HST by analyzing the wireless messages of the train control system. We then propose three position computation models based on the least square method (LSM), the support vector machine (SVM) and the least square support vector machine (LSSVM). Finally, the proposed models are trained and tested on field data collected on the Wuhan-Guangzhou high-speed railway. The results show that: (1) compared with ASM, all three proposed models reduce the positioning error; (2) compared with ASM, the percentage error of the LSM model is reduced by 50.2% in training and 53.9% in testing; (3) compared with the LSM model, the percentage error of the SVM model is further reduced by 38.8% in training and 14.3% in testing; (4) although the LSSVM model performs almost the same as the SVM model, it has an advantage in running time. We also put forward online learning methods to update the parameters of the three models, which yield better positioning accuracy. With the three proposed position computation models, the positioning accuracy of HSTs can be improved, potentially reducing the number of balises needed to achieve the same accuracy.
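The abstract does not give the exact ASM formulation; a minimal sketch, assuming ASM simply integrates the mean of the speed samples reported since the last balise (function and parameter names are illustrative, not from the paper):

```python
def asm_position(balise_pos_m, speeds_mps, dt_s):
    """Average speed method (ASM): estimate train position between balises.

    balise_pos_m : position (m) of the last balise passed, where error resets to 0
    speeds_mps   : speed samples (m/s) reported since that balise
    dt_s         : sampling interval (s) between speed samples
    """
    v_avg = sum(speeds_mps) / len(speeds_mps)          # mean of reported speeds
    elapsed_s = dt_s * len(speeds_mps)                 # time since the balise
    return balise_pos_m + v_avg * elapsed_s

# Example: 5 samples at 1 s intervals, speeds around 80 m/s (~288 km/h)
pos = asm_position(12000.0, [79.0, 80.0, 81.0, 80.0, 80.0], 1.0)
```

Because the estimate drifts with any bias in the reported speeds, the error grows with distance from the balise, which is the behavior the regression models (LSM/SVM/LSSVM) are trained to correct.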
5.
6.
7.
Parallel randomized sampling for support vector machine (SVM) and support vector regression (SVR) (Cited: 1 total; 1 self-citation, 0 by others)
A parallel randomized support vector machine (PRSVM) and a parallel randomized support vector regression (PRSVR) algorithm based on a randomized sampling technique are proposed in this paper. The proposed PRSVM and PRSVR have four major advantages over previous methods. (1) We prove that the proposed algorithms achieve an average convergence rate that is, to the best of our knowledge, the fastest bounded convergence rate among all SVM decomposition training algorithms. The fast average convergence bound is achieved by a unique priority-based sampling mechanism. (2) Unlike previous work (Provably fast training algorithm for support vector machines, 2001), the proposed algorithms work for general linearly non-separable SVM and general non-linear SVR problems. This improvement is achieved by modeling new LP-type problems based on the Karush-Kuhn-Tucker optimality conditions. (3) The proposed algorithms are the first parallel versions of randomized sampling algorithms for SVM and SVR. Both the analytical convergence bound and the numerical results in a real application show that the proposed algorithms have good scalability. (4) We present demonstrations of the algorithms on both synthetic data and data obtained from a real-world application. Performance comparisons with SVMlight show that the proposed algorithms can be implemented efficiently.
8.
9.
10.
The support vector machine (SVM) has a high generalisation ability for binary classification problems, but its extension to multi-class problems is still an ongoing research issue. Among the existing multi-class SVM methods, the one-against-one method is one of the most suitable for practical use. This paper presents a new multi-class SVM method that reduces the number of hyperplanes of the one-against-one method and thus returns fewer support vectors. The proposed algorithm works as follows: while producing the boundary of a class, no further hyperplanes are constructed if the discriminating hyperplanes of neighbouring classes already separate the remaining classes. We present a large number of experiments showing that the training time of the proposed method is the lowest among existing multi-class SVM methods. The experimental results also show that the testing time of the proposed method is less than that of the one-against-one method because of the reduction in hyperplanes and support vectors. By reducing the number of hyperplanes, the proposed method resolves unclassifiable regions and alleviates the over-fitting problem much better than the one-against-one method. We also present a directed acyclic graph SVM (DAGSVM) based testing methodology that improves the testing time of the DAGSVM method.
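The one-against-one baseline that the paper improves on can be sketched in a few lines: one binary classifier per class pair, combined by majority vote. The toy threshold "classifiers" below stand in for trained SVM hyperplanes and are purely illustrative:

```python
from collections import Counter
from itertools import combinations

def ovo_predict(x, classes, pairwise):
    """One-against-one prediction: each pairwise classifier votes for one
    of its two classes; the class with the most votes wins."""
    votes = Counter(pairwise[(i, j)](x) for i, j in combinations(classes, 2))
    return votes.most_common(1)[0][0]

# Toy pairwise deciders on 1-D inputs (thresholds stand in for SVM hyperplanes)
pairwise = {
    (0, 1): lambda x: 0 if x < 2 else 1,
    (0, 2): lambda x: 0 if x < 4 else 2,
    (1, 2): lambda x: 1 if x < 4 else 2,
}
label = ovo_predict(3.0, [0, 1, 2], pairwise)
```

The k(k-1)/2 classifiers in this scheme are exactly what the proposed method prunes: a pairwise hyperplane is skipped when neighbouring classes' hyperplanes already separate the remaining classes.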
11.
To address low speaker recognition rates, a hierarchical speaker verification method based on PCS-PCA and support vector machines is proposed. Principal component analysis is first used to reduce the dimensionality of the speaker feature vectors while obtaining their principal component space; a PCS-PCA classifier constructed in this space screens candidate target speakers, and a support vector machine then performs the final speaker verification. Simulation results show that the method achieves a high recognition rate and fast training speed.
12.
13.
In power systems, using image recognition to read digital meters that lack a data interface helps improve automation and safe operation. This paper describes the image processing pipeline and the method for recognizing the values displayed on digital meters, and explains the basic principles of the support vector machine. Multiple binary classifiers are combined under both one-against-all and one-against-one strategies to solve the 10-class digit recognition problem, and both multi-class classifiers are used to recognize meter readings. Finally, the recognition results of the support vector machine are compared with those of other methods. Experimental results show that the support vector machine achieves a higher recognition rate.
14.
15.
Guoqi Li, Changyun Wen, Guang-Bin Huang, Yan Chen. Neurocomputing, 2011, 74(5): 771-782
Most existing online algorithms for support vector machines (SVM) can only grow the set of support vectors. This paper proposes an online error tolerance based support vector machine (ET-SVM) that not only grows but also prunes support vectors. Like the least squares support vector machine (LS-SVM), ET-SVM converts the original quadratic program (QP) of the standard SVM into a group of easily solved linear equations. Unlike LS-SVM, ET-SVM keeps the support vectors sparse and realizes a compact structure. Thus, ET-SVM can significantly reduce computational time while ensuring satisfactory learning accuracy. Simulation results verify the effectiveness of the newly proposed algorithm.
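This is not the paper's ET-SVM itself, but the LS-SVM formulation it builds on can be sketched: the QP is replaced by a single (n+1)x(n+1) linear system in the bias b and the dual weights alpha. The RBF kernel width and regularization value below are illustrative choices, and a naive Gaussian elimination stands in for a proper linear solver:

```python
import math

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (no external deps)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lssvm_train(xs, ys, gamma=10.0, sigma2=0.5):
    """LS-SVM: one linear system replaces the QP of the standard SVM.

    KKT system:  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    """
    n = len(xs)
    k = lambda a, b_: math.exp(-(a - b_) ** 2 / sigma2)   # RBF kernel
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        A.append([1.0] + [k(xs[i], xs[j]) + (1.0 / gamma if i == j else 0.0)
                          for j in range(n)])
    sol = solve(A, [0.0] + list(ys))
    b, alpha = sol[0], sol[1:]
    return lambda x: sum(a * k(xi, x) for a, xi in zip(alpha, xs)) + b

# Two 1-D clusters labelled -1 / +1
f = lssvm_train([0.0, 0.1, 2.0, 2.1], [-1.0, -1.0, 1.0, 1.0])
```

Note that every alpha here is generally nonzero, so plain LS-SVM loses sparsity; keeping the expansion sparse while solving linear systems is exactly the gap the ET-SVM of the abstract targets.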
16.
17.
18.
19.
Soft sensing technology based on support vector machines and its application (Cited: 3 total; 0 self-citations, 3 by others)
The support vector machine (SVM) is a learning algorithm based on the structural risk minimization principle with good generalization performance. This paper discusses the principles and methods of soft-sensor data modeling based on the least squares support vector machine (LS-SVM) and applies them to the soft measurement of nitrogen oxides (NOx) in automobile exhaust. Comparison with a neural-network-based soft sensing method shows clear advantages for the SVM, providing an effective approach to soft-sensor modeling problems characterized by small samples, nonlinearity and high dimensionality.
20.
This paper presents a four-step training method for increasing the efficiency of the support vector machine (SVM). First, an SVM is trained on all the training samples, producing a number of support vectors. Second, the support vectors that make the hypersurface highly convoluted are excluded from the training set. Third, the SVM is re-trained on only the remaining samples. Finally, the complexity of the trained SVM is further reduced by approximating the separation hypersurface with a subset of the support vectors. Compared with the SVM initially trained on all samples, the efficiency of the finally trained SVM is greatly improved without degrading performance.