Similar Literature
 20 similar documents found (search time: 562 ms)
1.
李恒杰 《计算机应用》2007,27(6):1339-1342
As a new classification method, the online support vector machine can provide good classification performance in anomaly intrusion detection. The traditional SVM, the robust SVM, and the one-class SVM are each improved with an online algorithm; the improved algorithms are compared with the originals, using the 1999 DARPA data for evaluation. Experiments show that the improved SVMs can be trained online, use fewer support vectors, and take noticeably less time to train; in the presence of noisy data, their detection rate and false-alarm rate improve to some degree over the unimproved versions.

2.
An Improved Incremental Learning Algorithm for Support Vector Regression
The traditional support vector machine has no incremental learning capability, and the common incremental learning methods each have their own strengths and weaknesses. To address these problems and speed up SVM training, this paper analyzes the essential characteristics of SVMs and, exploiting the fact that SVM regression depends only on the support vectors, proposes an incremental learning algorithm for support vector regression (IRSVM). The algorithm improves training speed and the ability to learn from large samples while leaving the regression capability of the SVM essentially unaffected, and achieves good results.
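The abstract does not give IRSVM's details, but its core idea, that regression depends only on the support vectors, so an increment can retrain on the old SVs plus the new batch, can be sketched with scikit-learn's `SVR` as a stand-in trainer (all parameter values here are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def make_batch(n):
    """Noisy samples of y = sin(x) as a toy regression stream."""
    X = rng.uniform(-3, 3, size=(n, 1))
    y = np.sin(X).ravel() + rng.normal(0, 0.05, n)
    return X, y

# Initial batch: train a standard epsilon-SVR.
X0, y0 = make_batch(200)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X0, y0)

# Incremental step: discard the non-SVs, keep only the current support
# vectors, and retrain on them together with the new batch.
X1, y1 = make_batch(200)
keep = svr.support_                       # indices of the current SVs
X_inc = np.vstack([X0[keep], X1])
y_inc = np.concatenate([y0[keep], y1])
svr_inc = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_inc, y_inc)
```

Because points inside the epsilon-tube never become support vectors, the retained set stays much smaller than the full history while the fitted regressor changes little.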

3.
The support vector machine (SVM) is an effective pattern-classification method, but on large data sets its training time grows long and its generalization ability degrades. The time complexity of the core vector machine (CVM) classification algorithm is independent of sample size, but its training time grows rapidly as the number of support vectors increases. To address these problems, a two-stage fast learning algorithm combining CVM and SVM (CCS) is proposed. First, CVM performs a preliminary training pass over the samples; potential core vectors are screened out based on the minimum enclosing ball (MEB) to build a new training set of the samples most likely to affect the solution, reducing the sample size, and a labeling method extracts the new samples quickly. The resulting training set is then trained with SVM. Comparisons with SVM and CVM on six data sets show that CCS preserves classification accuracy while cutting training time by more than 30% on average, making it an effective learning algorithm for large-scale classification.

4.
A New Least Squares Support Vector Clustering
凌萍  周春光  王喆 《计算机工程》2009,35(7):14-16,3
To address the low performance and high cost of traditional support vector clustering, a least squares support vector clustering (LSSVC) model with an adaptive parameterization scheme is proposed. The model comprises a two-step cluster-assignment algorithm and a fast training algorithm. The former assigns support vectors and non-support vectors separately; the latter works incrementally, each increment corresponding to a bidirectional learning pass of the clustering model. Experimental results show that LSSVC effectively improves on the efficiency of comparable algorithms while clustering well; when the data increment is 10% of the working-set size, the algorithm strikes a good balance between time cost and clustering accuracy.

5.
Adaptive binary tree for fast SVM multiclass classification
Jin  Cheng  Runsheng   《Neurocomputing》2009,72(13-15):3370
This paper presents an adaptive binary tree (ABT) to reduce the test computational complexity of the multiclass support vector machine (SVM). It achieves fast classification by: (1) reducing the number of binary SVMs evaluated per classification, using the separating planes of some binary SVMs to discriminate other binary problems; (2) selecting the binary SVMs with the fewest average number of support vectors (SVs). The average number of SVs is proposed as a measure of the computational complexity of excluding one class. Compared with five well-known methods, experiments on many benchmark data sets demonstrate that our method speeds up the test phase while retaining the high accuracy of SVMs.

6.
A Multiclass Incremental Learning Algorithm Based on Support Vector Machines
朱美琳  杨佩 《计算机工程》2006,32(17):77-79
Support vector machines have been applied successfully to classification and regression problems, but because training requires solving a quadratic program, SVMs struggle on large-scale data; for multiclass problems in particular, existing SVM algorithms are too computationally complex. This paper proposes an SVM-based incremental learning algorithm suited to multiclass problems and applies it to a practical task.

7.
The support vector machine (SVM) is known as one of the most influential and powerful tools for solving classification and regression problems, but the original SVM does not have an online learning technique. Therefore, many researchers have introduced online learning techniques to the SVM. In a previous article, we proposed an unsupervised online learning method using the technique of the self-organizing map for the SVM. In another article, we proposed the midpoint validation method for an improved SVM. In this article we test the performance of the SVM using a combination of the two techniques, compare it with the original hard-margin SVM, the soft-margin SVM, and the k-NN method, and experiment with our proposed method on surface electromyogram recognition problems with changes in the position of the electrode. These experiments showed that our proposed method performed better than the other SVMs and adapted well to the changing data.

8.
To improve training speed and classification accuracy on large-scale, high-dimensional data, a fast incremental SVM learning method based on locality-sensitive hashing (LSH) is proposed. Exploiting LSH's ability to find similar data quickly, the algorithm screens the increment for the samples likely to become support vectors, then uses these samples together with the existing SVs as the basis for subsequent training. The algorithm was validated on several data sets; experiments show that, on large-scale incremental data, the proposed fast incremental SVM learning algorithm effectively speeds up training while maintaining accuracy.
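The screening step can be illustrated with random-hyperplane LSH; the sizes, the number of hyperplanes, and the bucket-matching rule below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

def lsh_signature(X, planes):
    """Random-hyperplane LSH: a sample's signature is the bit pattern of
    which side of each random hyperplane it falls on; nearby samples are
    likely to share a signature (bucket)."""
    return (X @ planes.T > 0).astype(np.uint8)

d, n_planes = 16, 4                    # few planes => coarse buckets
planes = rng.normal(size=(n_planes, d))

svs = rng.normal(size=(20, d))         # existing support vectors
new_batch = rng.normal(size=(500, d))  # incremental samples

sv_buckets = {tuple(s) for s in lsh_signature(svs, planes)}
new_sigs = lsh_signature(new_batch, planes)

# Screening: keep only new samples falling into a bucket that already
# contains a support vector -- the likeliest future SVs.
candidates = [x for x, s in zip(new_batch, new_sigs)
              if tuple(s) in sv_buckets]
```

In the real method the surviving `candidates` are then trained together with the existing SVs, so the expensive SVM step only ever sees a fraction of the increment.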

9.
Pre-selection of Support Vectors Based on Vector Projection
The support vector machine is a relatively new pattern recognition method with notable strengths on small-sample, nonlinear, and high-dimensional problems. In SVMs, however, selecting the support vectors is quite difficult, and this has become a bottleneck limiting their application. After a careful analysis of the mechanism of SVMs and of the distribution of support vectors, this paper proposes a vector-projection method for pre-selecting support vectors without affecting classification performance: boundary vectors with certain characteristics are pre-selected from the training samples and used for training in their place. This reduces the number of training samples and greatly accelerates SVM training.
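A minimal numpy sketch of projection-based pre-selection: project every sample onto the direction joining the two class means and keep only the samples lying nearest the opposite class (the 20% cutoff is an arbitrary illustrative choice, not the paper's rule):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes in 2-D.
X_pos = rng.normal([2, 2], 1.0, size=(200, 2))
X_neg = rng.normal([-2, -2], 1.0, size=(200, 2))

# Projection direction: the line joining the class means.
w = X_pos.mean(0) - X_neg.mean(0)
w /= np.linalg.norm(w)

proj_pos = X_pos @ w
proj_neg = X_neg @ w

# Boundary candidates: positives projecting lowest (toward the negative
# class) and negatives projecting highest (toward the positive class).
k = 40                                      # keep 20% of each class
idx_pos = np.argsort(proj_pos)[:k]
idx_neg = np.argsort(proj_neg)[-k:]
X_sel = np.vstack([X_pos[idx_pos], X_neg[idx_neg]])
```

An SVM trained on `X_sel` sees 80 samples instead of 400; the discarded interior points would rarely have become support vectors anyway.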

10.
Pruning Support Vector Machines Without Altering Performances
Support vector machines (SV machines, SVMs) have many merits that distinguish them from many other machine-learning algorithms, such as the nonexistence of local minima, the maximal distance from the separating hyperplane to the SVs, and a solid theoretical foundation. However, SVM training algorithms such as the efficient sequential minimal optimization (SMO) often produce many SVs. Some scholars have found that the kernel outputs are frequently of similar levels, which suggests redundancy among the SVs. By analyzing the overlapped information of kernel outputs, a succinct separating-hyperplane-securing method for pruning the dispensable SVs based on crosswise propagation (CP) is systematically developed. The method also circumvents the problem of explicitly discerning SVs in feature space, as the SVM formulation does. Experiments with the famous SMO-based software LibSVM reveal that all typical kernels with different parameters on the data sets produce dispensable SVs: some 1%-9% (in some scenarios, more than 50%) of the SVs are found to be dispensable. Furthermore, the experimental results also verify that the pruning method does not alter the SVMs' performance at all. As a corollary, this paper further contributes in theory a new, lower upper bound on the number of SVs in the high-dimensional feature space.

11.
A Granular SVM Learning Algorithm for Imbalanced Data
To address the support vector machine's inability to classify imbalanced data effectively, a granular SVM learning algorithm is proposed. Following granular-computing ideas, the majority-class samples are partitioned into granules from which information granules are extracted, bringing the data toward balance. These information granules are used to find local support vectors, and learning is then carried out on the local support vectors together with the minority-class samples, giving the SVM satisfactory generalization on imbalanced data sets.
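One simple way to realize the granulation step is to partition the majority class with k-means and keep one representative per granule; this is a hedged stand-in for the paper's granular-computing procedure, sketched with scikit-learn:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Imbalanced toy data: 1000 majority vs. 50 minority samples.
X_maj = rng.normal([0.0, 0.0], 1.0, size=(1000, 2))
X_min = rng.normal([2.5, 2.5], 0.7, size=(50, 2))

# Granulation: partition the majority class into granules and keep one
# representative (the granule centre) each, balancing the classes.
granules = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X_maj)
X_rep = granules.cluster_centers_

# Learn on the granule representatives plus all minority samples.
X_bal = np.vstack([X_rep, X_min])
y_bal = np.concatenate([np.zeros(50), np.ones(50)])
clf = SVC(kernel="rbf", C=1.0).fit(X_bal, y_bal)
```

The SVM now sees a 50-vs-50 problem instead of 1000-vs-50, so the decision boundary is no longer dragged toward the minority class.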

12.
A pattern recognition algorithm, the two-layer support vector machine, is proposed to improve the accuracy of surface electromyography (sEMG) recognition. The algorithm combines the parallel approach of meta-learning with the layered idea of stacking in ensemble learning: base SVM classifiers are deployed in parallel in the first layer, the first layer's predictions become the input to the second layer, and the second layer performs the final classification, so that multi-source features are fused by a multilayer combination of classifiers. On a forearm sEMG data set, the sEMG signal of each muscle is fed to its own base SVM; the combiner fuses the per-muscle signal features and recognizes the signals of the forearm muscle group as a whole, enabling accurate identification of motion intent. Experiments show that the algorithm outperforms a single SVM classifier in prediction accuracy and outperforms ensemble classifiers such as random forests and rotation forests in prediction performance (recognition accuracy, time cost, robustness).
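The layered structure can be sketched with scikit-learn SVMs on synthetic stand-in "channel" features (the data, the feature sizes, and the use of class probabilities as layer-2 inputs are illustrative assumptions; a careful version would use out-of-fold predictions for the second layer rather than in-sample ones):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-in for multi-channel sEMG features: each "muscle" channel
# contributes its own 4-D feature vector correlated with the label.
n, n_channels = 300, 3
y = rng.integers(0, 2, n)
channels = [y[:, None] * 1.5 + rng.normal(size=(n, 4))
            for _ in range(n_channels)]

# Layer 1: one base SVM per channel (trained in parallel in principle).
bases = [SVC(probability=True, random_state=0).fit(ch, y)
         for ch in channels]

# Layer 2: the combiner SVM takes the base classifiers' class
# probabilities as its input features and makes the final decision.
meta_X = np.hstack([b.predict_proba(ch) for b, ch in zip(bases, channels)])
combiner = SVC(random_state=0).fit(meta_X, y)
```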

13.
Di Wang  Peng Zhang 《Pattern recognition》2010,43(10):3468-3482
Support vector machine (SVM) is a widely used classification technique. However, it is difficult to use SVMs to deal with very large data sets efficiently. Although decomposed SVMs (DSVMs) and core vector machines (CVMs) have been proposed to overcome this difficulty, they cannot be applied to online classification (or classification with learning ability) because, when new coming samples are misclassified, the classifier has to be adjusted based on the new coming misclassified samples and all the training samples. The purpose of this paper is to address this issue by proposing an online CVM classifier with adaptive minimum-enclosing-ball (MEB) adjustment, called online CVMs (OCVMs). The OCVM algorithm has two features: (1) many training samples are permanently deleted during the training process, which would not influence the final trained classifier; (2) with a limited number of selected samples obtained in the training step, the adjustment of the classifier can be made online based on new coming misclassified samples. Experiments on both synthetic and real-world data have shown the validity and effectiveness of the OCVM algorithm.

14.
A new speaker identification method based on MRSVM is proposed. The speech feature vectors are first reduced in dimension by LDA to obtain discriminative features, then clustered by fuzzy kernel clustering; following a sample-selection algorithm, the feature vectors on the cluster boundaries are chosen as support vectors for training the SVM. This greatly reduces the SVM's storage and training cost without affecting the recognition rate. Experiments show that the method performs well overall.

15.
Bankruptcy prediction and credit scoring are two important problems in financial decision support. The multilayer perceptron (MLP) network has shown its applicability to these problems, and its performance is usually superior to that of other traditional statistical models. Support vector machines (SVMs) are core machine learning techniques and have been compared against the MLP as the benchmark. However, the performance of SVMs is not fully understood in the literature because an insufficient number of data sets has been considered and different kernel functions have been used to train the SVMs. In this paper, four public data sets are used. In particular, three different splits of training and testing data in each of the four data sets are considered (i.e. 3:7, 1:1 and 7:3) in order to examine and fully understand the performance of SVMs. For SVM model construction, the linear, radial basis function, and polynomial kernel functions are used. Using the MLP as the benchmark, the SVM classifier performs better in only one of the four data sets. On the other hand, the prediction results of the MLP and SVM classifiers are not significantly different across the three splits of training and testing data.

16.
文益民 《计算机工程》2006,32(21):177-179,182
Based on the fact that the support vectors represent the classification characteristics of a training set, this paper proposes a machine learning method that filters training samples hierarchically and in parallel. Following a divide-and-conquer strategy, the original classification problem is decomposed into subproblems, and the filtering of training samples is split into two cascaded layers. Each layer extracts the support vectors of its training sets in parallel; the extracted support vectors become the training samples of the next layer, while the non-support vectors in each layer's training sets are filtered out step by step. A cross-merging rule is introduced to keep the subproblems consistent. Simulation results show that the method shortens SVM training time and reduces the number of support vectors while preserving the classifier's generalization ability.
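In a minimal form, one filtering layer reduces to: train an SVM per chunk, keep each chunk's support vectors, and retrain on the survivors. A sketch with scikit-learn (the chunk count and parameters are illustrative; the paper's second layer and cross-merging rule are omitted):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Layer 1: split the training set into chunks, train an SVM on each
# chunk (in parallel in the real method), and keep only its SVs.
sv_idx = []
for idx in np.array_split(np.arange(800), 4):
    clf = SVC(kernel="linear", C=1.0).fit(X[idx], y[idx])
    sv_idx.extend(idx[clf.support_])
sv_idx = np.array(sv_idx)

# Layer 2: the surviving SVs form the training set of the final SVM;
# the non-support vectors have been filtered out.
final = SVC(kernel="linear", C=1.0).fit(X[sv_idx], y[sv_idx])
```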

17.
A Survey of Training Algorithms for Support Vector Machines
The support vector machine (SVM) is a method developed on the foundation of statistical learning theory, and its training is essentially the solution of a quadratic programming problem. This paper first outlines the basic principles of SVMs, then surveys domestic and international research on SVM training algorithms, focusing on shrinking algorithms and algorithms with linear convergence, and compares their performance; extended SVM algorithms are also briefly introduced. Finally, open problems and development trends in the field are discussed.

18.
Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training; the data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting the subset of SVs used to compute the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used: the subset selection algorithm can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient because of the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. It outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.
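The abstract does not specify the stochastic indexing scheme; the budgeted kernel perceptron below illustrates the general idea of capping the kernel expansion by stochastically evicting stored SVs (the eviction rule and all parameters are assumptions for illustration, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

# Budgeted kernel perceptron: the kernel expansion is kept at a fixed
# size by evicting a randomly indexed stored SV whenever the budget is
# exceeded, so per-sample cost stays constant as the stream grows.
budget = 30
sv_x, sv_a = [], []                 # stored vectors and coefficients

def predict(x):
    return sum(a * rbf(s, x) for s, a in zip(sv_x, sv_a))

X = rng.normal(size=(400, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)   # XOR-like stream

for xi, yi in zip(X, y):
    if yi * predict(xi) <= 0:       # mistake: add a new basis function
        sv_x.append(xi)
        sv_a.append(yi)
        if len(sv_x) > budget:      # over budget: evict a random SV
            j = rng.integers(len(sv_x))
            sv_x.pop(j)
            sv_a.pop(j)
```

Each prediction costs at most `budget` kernel evaluations, no matter how long the stream runs.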

19.
The support vector machine (SVM) is among the most popular classification tools, but training on large-scale data sets demands large amounts of memory and training time and is usually feasible only in large parallel cluster environments. A new parallel SVM algorithm, RF-CCASVM, is proposed for solving large-scale SVMs with limited computing resources. Via a random Fourier mapping, a low-dimensional explicit feature map uniformly approximates the infinite-dimensional implicit feature map of the Gaussian kernel, so that a linear SVM uniformly approximates the Gaussian-kernel SVM. A consensus-center-adjusted parallelization method is proposed: the data set is partitioned into subsets, and multiple processes train SVMs independently and in parallel on their own subsets. When the optimal hyperplane on each subset is nearly found, the current solutions are replaced by the consensus-center solution obtained from all subsets, and training continues on each subset until the consensus-center solution is optimal on every subset. Comparative experiments on standard data sets verify the correctness and effectiveness of RF-CCASVM.
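The random Fourier mapping at the heart of this approach fits in a few lines of numpy: draw frequencies from a Gaussian, and inner products of the explicit features approximate the Gaussian kernel (sigma = 1 and the feature count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_map(X, W, b):
    """Explicit random Fourier feature map z(x); inner products z(x).z(y)
    approximate the Gaussian kernel exp(-||x - y||^2 / 2)."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

d, D = 5, 4000                       # input dim, number of random features
W = rng.normal(size=(D, d))          # frequencies ~ N(0, I) for sigma = 1
b = rng.uniform(0.0, 2.0 * np.pi, D) # random phases

x = rng.normal(size=d)
y_vec = x + 0.5 * rng.normal(size=d)

exact = np.exp(-np.linalg.norm(x - y_vec) ** 2 / 2.0)
approx = rff_map(x[None, :], W, b)[0] @ rff_map(y_vec[None, :], W, b)[0]
```

A linear SVM trained on `rff_map(X, W, b)` then approximates a Gaussian-kernel SVM, which is what lets each parallel process work with a cheap linear model.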

20.
Machine learning techniques have facilitated image retrieval by automatically classifying and annotating images with keywords. Among them, Support Vector Machines (SVMs) are used extensively due to their generalization properties. SVM was initially designed for binary classifications. However, most classification problems arising in domains such as image annotation usually involve more than two classes. Notably, SVM training is a computationally intensive process especially when the training dataset is large. This paper presents a resource aware parallel multiclass SVM algorithm (named RAMSMO) for large-scale image annotation which partitions the training dataset into smaller binary chunks and optimizes SVM training in parallel using a cluster of computers. A genetic algorithm-based load balancing scheme is designed to optimize the performance of RAMSMO in balancing the computation of multiclass data chunks in heterogeneous computing environments. RAMSMO is evaluated in both experimental and simulation environments, and the results show that it reduces the training time significantly while maintaining a high level of accuracy in classifications.

