Similar Literature
A total of 20 similar documents were found (search time: 65 ms).
1.
Land use classification is an important part of many remote sensing applications. A great deal of research has gone into applying statistical and neural network classifiers to remote-sensing images. This research studies and implements a pattern recognition technique introduced within the framework of statistical learning theory, Support Vector Machines (SVMs), and applies it to remote-sensing image classification. Standard classifiers such as the Artificial Neural Network (ANN) need a number of training samples that grows exponentially with the dimension of the input feature space; with a limited number of training samples, the classification rate therefore decreases as the dimensionality increases. SVMs are largely independent of the dimensionality of the feature space, since the main idea behind the technique is to separate the classes with a surface that maximizes the margin between them, using boundary pixels to create the decision surface. Results from SVMs are compared with traditional Maximum Likelihood Classification (MLC) and an ANN classifier. The findings suggest that the ANN and SVM classifiers perform better than the traditional MLC, and that the SVM and the ANN give comparable results. However, accuracy depends on factors such as the number of hidden nodes (for the ANN) and the kernel parameters (for the SVM). The training time taken by the SVM is several orders of magnitude less.
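A minimal sketch of the kind of comparison described above, pitting an RBF-kernel SVM against an ANN (multilayer perceptron). The synthetic data, feature counts and hyper-parameters are illustrative assumptions standing in for multispectral pixel features, not the study's data.

```python
# Hedged sketch: SVM vs ANN classifier on synthetic "pixel feature" data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)          # margin-based classifier
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X_train, y_train)                      # one hidden layer

print("SVM accuracy:", accuracy_score(y_test, svm.predict(X_test)))
print("ANN accuracy:", accuracy_score(y_test, ann.predict(X_test)))
```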

2.
Type-2 fuzzy logic-based classifier fusion for support vector machines (Cited by: 1; self-citations: 0; others: 1)
As a machine-learning tool, support vector machines (SVMs) have been gaining popularity because of their promising performance. However, the generalization ability of SVMs often depends on whether the selected kernel functions suit the real classification data. To reduce the sensitivity of SVM classification to the choice of kernel and to improve SVM generalization ability, this paper proposes a fuzzy fusion model that combines multiple SVM classifiers. To better handle the uncertainties present in real classification data and in the membership functions (MFs) of the traditional type-1 fuzzy logic system (FLS), we apply interval type-2 fuzzy sets to construct a type-2 SVM fusion FLS. This type-2 fusion architecture takes the classification results of the individual SVM classifiers into consideration and generates the combined classification decision as the output. Besides the distances of data examples to the SVM hyperplanes, the type-2 fuzzy SVM fusion system also considers the accuracy of the individual SVMs. Our experiments show that the type-2 based SVM fusion classifiers outperform individual SVM classifiers in most cases, and that the type-2 fuzzy logic-based SVM fusion model is generally better than the type-1 based SVM fusion model.
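A crude, type-1-style stand-in for the fusion idea above: several SVMs with different kernels are combined, each vote weighted by its validation accuracy and by the example's (squashed) distance to that SVM's hyperplane. The interval type-2 membership functions of the paper are not modelled; data, weights and the tanh squashing are illustrative assumptions.

```python
# Hedged sketch: accuracy- and distance-weighted fusion of SVMs with different kernels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=20, random_state=1)
y = 2 * y - 1                                        # labels in {-1, +1}
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
X_fit, X_val, y_fit, y_val = train_test_split(X_tr, y_tr, test_size=0.3, random_state=1)

models, weights = [], []
for kernel in ("linear", "rbf", "poly"):
    clf = SVC(kernel=kernel, gamma="scale").fit(X_fit, y_fit)
    models.append(clf)
    weights.append(accuracy_score(y_val, clf.predict(X_val)))   # accuracy term of the fusion

# Fused decision: accuracy-weighted sum of squashed signed distances to each hyperplane.
score = sum(w * np.tanh(m.decision_function(X_te)) for m, w in zip(models, weights))
fused = np.where(score >= 0, 1, -1)
print("fused accuracy:", accuracy_score(y_te, fused))
for m, w in zip(models, weights):
    print(m.kernel, "alone:", accuracy_score(y_te, m.predict(X_te)))
```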

3.
Research on the application of SVM to multi-source remote sensing image classification (Cited by: 7; self-citations: 1; others: 7)
When classifying land use/land cover from remote sensing imagery, two approaches can be taken to improve classification accuracy. The first is to add data sources that benefit classification, introducing auxiliary geographic data and the Normalized Difference Vegetation Index (NDVI) for multi-source information fusion. The second is to choose a better classification method, such as the support vector machine (SVM), which overcomes the weaknesses of the maximum likelihood and neural network classifiers and is well suited to high-dimensional, complex, small-sample, multi-source data. To further improve the accuracy of multi-source remote sensing image classification, model selection for SVMs in remote sensing classification was also studied, including the choice of multi-class strategy and kernel function. The classification results show that the SVM achieves higher accuracy than traditional classification methods, and that an SVM model based on the radial basis function kernel and the one-against-one multi-class scheme is particularly suitable for multi-source remote sensing image classification. SVM-based multi-source land use/cover classification can therefore greatly improve classification accuracy.
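A minimal sketch of the reported best configuration: an RBF-kernel SVM with the one-against-one multi-class scheme applied to a stacked multi-source feature vector (spectral bands, NDVI, auxiliary layers). Array names, shapes and random data are illustrative assumptions, not the paper's dataset.

```python
# Hedged sketch: RBF kernel + one-against-one SVM on stacked multi-source features.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

n_pixels = 5000
spectral = np.random.rand(n_pixels, 6)        # e.g. six spectral bands
ndvi     = np.random.rand(n_pixels, 1)        # normalized difference vegetation index
aux      = np.random.rand(n_pixels, 2)        # auxiliary geographic layers (e.g. DEM, slope)
X = np.hstack([spectral, ndvi, aux])          # multi-source feature stack
y = np.random.randint(0, 5, size=n_pixels)    # land-use/cover class labels (placeholder)

# sklearn's SVC is inherently one-vs-one; decision_function_shape only changes the output shape.
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=10.0, gamma="scale",
                        decision_function_shape="ovo"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```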

4.
Abstract: Bankruptcy prediction and credit scoring are two important problems in financial decision support. The multilayer perceptron (MLP) network has shown its applicability to these problems, and its performance is usually superior to that of other traditional statistical models. Support vector machines (SVMs) are a core machine learning technique and have been compared with the MLP as a benchmark. However, the performance of SVMs is not fully understood in the literature, because an insufficient number of data sets has been considered and different kernel functions have been used to train the SVMs. In this paper, four public data sets are used. In particular, three different proportions of training and testing data (3:7, 1:1 and 7:3) are considered for each of the four data sets in order to examine and fully understand the performance of SVMs. For SVM model construction, the linear, radial basis function and polynomial kernel functions are used. With the MLP as the benchmark, the SVM classifier performs better on only one of the four data sets. Moreover, the prediction results of the MLP and SVM classifiers do not differ significantly across the three training/testing proportions.
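A compact sketch of the experimental grid described above: three kernels crossed with three train/test ratios. A public credit data set would replace the synthetic, imbalanced stand-in used here; all settings are illustrative assumptions.

```python
# Hedged sketch: kernels x train/test-ratio grid for an SVM credit classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)      # imbalanced, credit-scoring-like data

for train_frac in (0.3, 0.5, 0.7):              # the 3:7, 1:1 and 7:3 splits
    for kernel in ("linear", "rbf", "poly"):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=train_frac, stratify=y, random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, gamma="scale"))
        acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
        print(f"train={train_frac:.0%} kernel={kernel:<6} accuracy={acc:.3f}")
```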

5.
Long-term streamflow forecasts are very significant for planning and reservoir operations, and streamflow forecasting has to deal with complex, highly nonlinear data patterns. This study employs support vector machines (SVMs) to predict monthly streamflows. SVMs have proved to be a good tool for forecasting nonlinear time series, but the performance of an SVM depends largely on an appropriate choice of its parameters. Hence, a particle swarm optimization technique is employed to tune the SVM parameters. The proposed SVM-PSO model is used to forecast the streamflow of the Swan River near Bigfork and the St. Regis River near Clark Fork, Montana, United States. SVM models with various input structures are constructed, and the best structure is determined using several statistical performance measures. The performance of the SVM model is then compared with an autoregressive moving average (ARMA) model and artificial neural networks (ANNs). The results indicate that the SVM could be a better alternative for predicting monthly streamflows, as it provides a high degree of accuracy and reliability.
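A minimal sketch of the SVM-PSO idea: a small hand-rolled particle swarm searches (in log-space) over the SVR parameters C, gamma and epsilon, scoring each particle by cross-validated error on lagged streamflow features. The synthetic seasonal series, lag structure and PSO constants are assumptions, not the study's setup.

```python
# Hedged sketch: PSO-tuned support vector regression for monthly streamflow.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
flow = 50 + 30 * np.sin(np.arange(240) * 2 * np.pi / 12) + rng.normal(0, 5, 240)
lags = 3
X = np.column_stack([flow[i:len(flow) - lags + i] for i in range(lags)])  # past 3 months
y = flow[lags:]                                                           # next month

def fitness(p):                                   # negative CV mean squared error
    C, gamma, eps = np.exp(p)                     # search in log-space for positivity
    svr = SVR(C=C, gamma=gamma, epsilon=eps)
    return cross_val_score(svr, X, y, cv=5, scoring="neg_mean_squared_error").mean()

n_particles, dims, iters = 15, 3, 30
pos = rng.uniform(-3, 3, (n_particles, dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):                            # standard PSO velocity/position update
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -5, 5)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (C, gamma, epsilon):", np.exp(gbest), "CV MSE:", -pbest_val.max())
```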

6.
Graph structure is vital to graph-based semi-supervised learning. However, the problem of constructing a graph that reflects the underlying data distribution has seldom been investigated in semi-supervised learning, especially for high-dimensional data. In this paper, we focus on graph construction for semi-supervised learning and propose a novel method called Semi-Supervised Classification based on Random Subspace Dimensionality Reduction (SSC-RSDR). Different from traditional methods that perform graph-based dimensionality reduction and classification in the original space, SSC-RSDR performs these tasks in subspaces. More specifically, SSC-RSDR generates several random subspaces of the original space and applies graph-based semi-supervised dimensionality reduction in these random subspaces. It then constructs graphs in the processed random subspaces and trains semi-supervised classifiers on the graphs. Finally, it combines the resulting base classifiers into an ensemble classifier. Experimental results on face recognition tasks demonstrate that SSC-RSDR not only achieves superior recognition performance compared with competitive methods, but is also robust across a wide range of input parameter values.
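A simplified sketch of the random-subspace ensemble idea: draw several random feature subspaces, run a graph-based semi-supervised learner in each, and combine the base predictions by majority vote. The paper's graph-based dimensionality-reduction step is omitted, and LabelSpreading is used as a generic graph-based learner; data set and parameters are assumptions.

```python
# Hedged sketch: random subspaces + graph-based semi-supervised base learners + voting.
import numpy as np
from scipy.stats import mode
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)
y_semi = y.copy()
y_semi[rng.random(len(y)) < 0.9] = -1            # keep only ~10% of the labels

n_subspaces, subspace_dim = 10, 30
votes = []
for _ in range(n_subspaces):
    feats = rng.choice(X.shape[1], size=subspace_dim, replace=False)
    base = LabelSpreading(kernel="knn", n_neighbors=7).fit(X[:, feats], y_semi)
    votes.append(base.transduction_)             # labels inferred over the kNN graph

ensemble = mode(np.vstack(votes), axis=0).mode.ravel()
mask = y_semi == -1
print("accuracy on unlabelled points:", (ensemble[mask] == y[mask]).mean())
```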

7.
Support vector machines (SVMs) have achieved great success in multi-class classification. However, as the dimensionality increases, irrelevant or redundant features may degrade the generalization performance of SVM classifiers, which makes dimensionality reduction (DR) indispensable for high-dimensional data. At present, most DR algorithms either reduce all data points to the same dimension for multi-class datasets or search for a local latent dimension for each class, but they neglect the fact that different class pairs also have different local latent dimensions. In this paper, we propose an adaptive class pairwise dimensionality reduction algorithm (ACPDR) to improve the generalization performance of multi-class SVM classifiers. In the proposed algorithm, different class pairs are reduced to different dimensions, and a tabu strategy is adopted to adaptively select a suitable embedding dimension. Five popular DR algorithms are employed in our experiments, and the numerical results on several benchmark multi-class datasets show that, compared with traditional DR algorithms, the proposed ACPDR can improve the generalization performance of multi-class SVM classifiers; they also verify that it is reasonable to consider that different class pairs have different local dimensions.

8.
Support vector machines (SVMs) are one of the most popular methodologies for designing pattern classification systems, with sound theoretical foundations and high generalizing performance. The SVM framework focuses on linear and nonlinear models that maximize the separating margin between objects belonging to different classes. This paper extends the SVM modeling context toward the development of additive models that combine the simplicity and transparency/interpretability of linear classifiers with the generalizing performance of nonlinear models. Experimental results are also presented on the performance of the new methodology relative to existing SVM techniques.
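One simple way to obtain an additive SVM, shown as a hedged sketch: a custom kernel that sums an RBF kernel over each input feature separately, so the fitted decision function decomposes into per-feature component functions. This illustrates the general idea of additive kernel machines, not necessarily the paper's exact formulation; the kernel width and data are assumptions.

```python
# Hedged sketch: additive (per-feature) kernel plugged into a standard SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def additive_rbf(A, B, gamma=1.0):
    """k(x, z) = sum_j exp(-gamma * (x_j - z_j)^2), one term per feature."""
    diff = A[:, None, :] - B[None, :, :]          # (n_a, n_b, n_features)
    return np.exp(-gamma * diff ** 2).sum(axis=2)

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
clf = SVC(kernel=additive_rbf)                    # callable kernel returning a Gram matrix
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```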

9.
Gender recognition plays a very important role in applications such as human–computer interaction, surveillance, and security. Nonlinear support vector machines (SVMs) were investigated for gender identification using the Face Recognition Technology (FERET) face image database, where it was shown that SVM classifiers outperform traditional pattern classifiers (linear, quadratic, Fisher linear discriminant, and nearest neighbour). In this context, this paper aims to improve the SVM classification accuracy of the gender classification system and proposes new models for better performance. We evaluated different SVM learning algorithms; the SVM radial basis function with a 5% outlier fraction outperformed the other SVM classifiers. We also examined the effectiveness of different feature selection methods; AdaBoost performed better than the others in selecting the most discriminating features. We propose two classification methods that focus on training subsets of the training images: method 1 combines the outcomes of different classifiers built on different image subsets, whereas method 2 clusters the training data and builds a classifier for each cluster. Experimental results showed that both methods increase the classification accuracy.
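A small sketch in the spirit of "method 2": cluster the training samples, train one SVM per cluster, and classify a test sample with the SVM of its nearest cluster. Synthetic features stand in for face descriptors; the number of clusters and all other settings are assumptions.

```python
# Hedged sketch: per-cluster SVM experts routed by nearest K-means cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1200, n_features=40, n_informative=20,
                           random_state=0)        # stand-in for face features (2 genders)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_tr)
global_svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)

experts = {}
for c in range(k):
    mask = km.labels_ == c
    # Fall back to the global SVM if a cluster happens to contain a single class.
    if len(np.unique(y_tr[mask])) > 1:
        experts[c] = SVC(kernel="rbf", gamma="scale").fit(X_tr[mask], y_tr[mask])
    else:
        experts[c] = global_svm

pred = np.array([experts[c].predict(x.reshape(1, -1))[0]
                 for x, c in zip(X_te, km.predict(X_te))])
print("per-cluster SVM accuracy:", (pred == y_te).mean())
```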

10.
In this paper, we investigate the stability of linear and quadratic programming support vector machines (SVMs) under bounded noise in the input data, using a robust optimisation model. For a linear discriminant function, this model is expressed as a second-order cone optimisation problem. Using the concept of the kernel function, we generalise the result to nonlinear discriminant functions. Intuitively, it seems clear that large-margin classifiers are robust to bounded input noise, but no theoretical analysis had investigated this behaviour. We show that the SVM solution is stable under bounded perturbations of the data in both the linear programming and the quadratic programming formulations. Computational results are also presented for toy and real-world data.

11.
Support vector machines (SVMs) have been demonstrated to be very efficient for binary classification problems; however, computationally efficient and effective multiclass SVMs are still missing. Most existing multiclass SVM classifiers are constructed either by combining multiple binary SVM classifiers, which often perform only moderately on some problems, or by converting the multiclass problem into one single optimization problem, which is unfortunately computationally expensive. To address these issues, a novel and principled multiclass SVM based on the geometric properties of hyperspheres, termed SVMGH, is proposed in this paper. Different from existing SVM-based methods that seek a cutting hyperplane between two classes, SVMGH draws the discriminative information of each class by constructing a minimum hypersphere containing all class members, and then defines a label function based on the geometric properties of the minimum hyperspheres. We prove the geometric properties of the minimum hyperspheres theoretically to guarantee the validity of SVMGH. Computational efficiency is enhanced by a data reduction strategy as well as a fast training method. Experimental results demonstrate that the proposed SVMGH shows better performance and higher computational efficiency than the state of the art on multiclass classification problems, while maintaining comparable performance and efficiency on binary classification problems.

12.
When performing visualization and classification, one often confronts the problem of dimensionality reduction. Isomap is one of the most promising nonlinear dimensionality reduction techniques; however, when applied to real-world data it shows some limitations, such as sensitivity to noise. In this paper, an improved version of Isomap, namely S-Isomap, is proposed. S-Isomap utilizes class information to guide the procedure of nonlinear dimensionality reduction; such a procedure is called supervised nonlinear dimensionality reduction. In S-Isomap, the neighborhood graph of the input data is constructed according to a certain kind of dissimilarity between data points, which is specially designed to integrate the class information. This dissimilarity has several good properties that help to discover the true neighborhood of the data and thus make S-Isomap a robust technique for both visualization and classification, especially for real-world problems. In the visualization experiments, S-Isomap is compared with Isomap, LLE, and WeightedIso, and performs the best. In the classification experiments, S-Isomap is used as a preprocessing step for classification and compared with Isomap and WeightedIso, as well as several well-established classification methods, including the K-nearest neighbor classifier, BP neural network, J4.8 decision tree, and SVM. The results reveal that S-Isomap excels Isomap and WeightedIso in classification, and is highly competitive with those well-known classification methods.
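A simplified sketch of the idea behind S-Isomap: build a supervised dissimilarity that pulls same-class points together and pushes different-class points apart, then embed it with a distance-preserving method. The scale-and-offset rule below and the use of metric MDS as the embedding (in place of the paper's geodesic Isomap step) are simplifying assumptions for illustration, not the paper's exact dissimilarity.

```python
# Hedged sketch: class-aware dissimilarity fed to a precomputed-distance embedding.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances

X, y = load_iris(return_X_y=True)
D = pairwise_distances(X)
same = y[:, None] == y[None, :]
D_sup = np.where(same, 0.5 * D, 2.0 * D + D.mean())   # shrink within-class, inflate between-class
np.fill_diagonal(D_sup, 0.0)

Z = MDS(n_components=2, dissimilarity="precomputed",
        random_state=0).fit_transform(D_sup)
print(Z.shape)   # (150, 2): coordinates usable for visualisation or as classifier input
```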

13.
In the past decade, support vector machines (SVMs) have gained the attention of many researchers. SVMs are non-parametric supervised learning schemes that rely on statistical learning theory, which enables learning machines to generalize well to unseen data. SVMs are kernel-based methods introduced as a robust approach to classification and regression problems, and they have lately been applied to nonlinear identification through so-called support vector regression. In SVM designs for nonlinear identification, a nonlinear model is represented by an expansion in terms of nonlinear mappings of the model input; these mappings define a feature space, which may have infinite dimension. In this context, a relevant identification approach is the least squares support vector machine (LS-SVM). Compared with other identification methods, LS-SVMs possess prominent advantages: their generalization performance (i.e. error rates on test sets) either matches or is significantly better than that of competing methods and, more importantly, the performance does not depend on the dimensionality of the input data. Formulated as a constrained quadratic programming problem with a regularized cost function, the training of an LS-SVM involves selecting the kernel parameters and the regularization parameter of the objective function, and a good choice of these parameters is crucial for the performance of the estimator. In this paper, the proposed LS-SVM design combines the LS-SVM with a new chaotic differential evolution optimization approach based on the Ikeda map (CDEK). The CDEK is adopted to tune the regularization parameter and the radial basis function bandwidth. Simulations using LS-SVMs on NARX (Nonlinear AutoRegressive with eXogenous inputs) models for the identification of a thermal process show the effectiveness and practicality of the proposed CDEK algorithm compared with the classical DE approach.
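A minimal sketch of the LS-SVM regression core used in NARX-style identification: training reduces to solving one linear system in the bias and the dual coefficients. The RBF bandwidth and regularization parameter are fixed here; the paper's chaotic differential evolution (Ikeda-map) tuning is omitted, and the toy NARX system is an assumption.

```python
# Hedged sketch: closed-form LS-SVM regression on a toy NARX-style data set.
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] and return a predictor."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf_kernel(Xq, X, sigma) @ alpha + b

# Toy NARX data: next output from two past outputs and one past input.
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 300)
yts = np.zeros(300)
for t in range(2, 300):
    yts[t] = 0.6 * yts[t - 1] - 0.2 * yts[t - 2] + 0.5 * np.tanh(u[t - 1])

X = np.column_stack([yts[1:-1], yts[:-2], u[1:-1]])   # regressors y(t-1), y(t-2), u(t-1)
target = yts[2:]
model = lssvm_fit(X, target)
print("train RMSE:", np.sqrt(np.mean((model(X) - target) ** 2)))
```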

14.
A pattern recognition algorithm, the two-layer support vector machine, is proposed to improve the accuracy of surface electromyography (sEMG) recognition. The algorithm combines the parallel idea of meta-learning with the progressive idea of stacking from ensemble learning: base SVM classifiers are distributed in parallel in the first layer, their predictions serve as the input to the second layer, and the second layer performs the final classification, so that multi-source features are fused through a multi-layer combination of classifiers. Using a forearm sEMG data set as test data, the sEMG signal of each muscle is fed to its own base SVM, the combiner fuses the features of all muscle signals, and the sEMG signals of the forearm muscle group are recognized jointly, achieving accurate identification of motion intention. Experimental results show that the algorithm outperforms a single SVM classifier in prediction accuracy, and outperforms ensemble classifiers such as random forest and rotation forest in overall prediction performance (recognition accuracy, time cost, and robustness).
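A compact sketch of a two-layer (stacked) SVM of the kind described above: one base SVM per "muscle" channel group in the first layer, with a second-layer SVM combining their outputs. Synthetic features stand in for per-muscle sEMG channels; the channel split and all sizes are assumptions.

```python
# Hedged sketch: stacked SVMs, one base learner per feature group ("muscle").
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC

X, y = make_classification(n_samples=900, n_features=24, n_informative=12,
                           n_classes=4, random_state=0)   # 4 motion intentions

def channel(cols):
    # Select the feature columns belonging to one "muscle".
    return FunctionTransformer(lambda Z, c=cols: Z[:, c])

base = [(f"muscle_{i}",
         make_pipeline(channel(slice(8 * i, 8 * (i + 1))),
                       SVC(kernel="rbf", gamma="scale", probability=True)))
        for i in range(3)]

stack = StackingClassifier(estimators=base, final_estimator=SVC(kernel="rbf"), cv=5)
print("stacked accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```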

15.
Based on the one-against-one multi-class classification principle for support vector machines (SVMs), this paper proposes an extended SVM method that couples an adaptive resonance theory (ART) network to reconstruct a multi-class classifier. Different coupling strategies for reconstructing a multi-class classifier from binary SVM classifiers are compared, with application to fault diagnosis of transmission lines. Majority voting, a mixture matrix and a self-organizing map (SOM) network are compared for reconstructing the global classification decision. To evaluate the method's efficiency, SVMs based on the one-against-all, decision directed acyclic graph (DDAG) and decision-tree (DT) algorithms are also compared. The comparison is carried out with simulations, and the best method is validated with experimental data.
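A minimal sketch of the baseline fusion strategy mentioned above: train one binary SVM per class pair and combine them by majority voting. The ART and SOM combiners of the paper are not sketched; synthetic data stands in for transmission-line fault features.

```python
# Hedged sketch: one-against-one SVMs combined by majority voting.
import numpy as np
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)   # e.g. 4 fault types
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pairwise = {}
for a, b in combinations(np.unique(y_tr), 2):              # one binary SVM per class pair
    mask = np.isin(y_tr, [a, b])
    pairwise[(a, b)] = SVC(kernel="rbf", gamma="scale").fit(X_tr[mask], y_tr[mask])

votes = np.zeros((len(X_te), len(np.unique(y_tr))), dtype=int)
for (a, b), clf in pairwise.items():
    pred = clf.predict(X_te)
    for cls in (a, b):
        votes[:, cls] += (pred == cls)                      # tally each pairwise winner
pred = votes.argmax(axis=1)                                 # majority vote
print("one-vs-one voting accuracy:", (pred == y_te).mean())
```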

16.
The subprime mortgage crisis triggered a significant economic decline across the world, and credit rating forecasting has become a critical issue in global banking systems. This study trained a Gaussian process based multi-class classifier (GPC), a highly flexible probabilistic kernel machine, using variational Bayesian methods. The GPC provides full predictive distributions and model selection simultaneously. During training, the input features are automatically weighted by their relevance to the output labels. Benefiting from this inherent feature scaling scheme, GPCs outperformed conventional multi-class classifiers and support vector machines (SVMs). In a second stage, conventional SVMs enhanced by feature selection and dimensionality reduction schemes were also compared with GPCs. Empirical results indicated that GPCs still performed best.
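A small sketch of the core comparison above: a Gaussian process classifier versus an RBF SVM on the same multi-class task. Synthetic data replaces the credit-rating set, and scikit-learn's GPC uses a Laplace approximation rather than the paper's variational Bayesian training, so this is only an approximation of the setup.

```python
# Hedged sketch: Gaussian process classifier vs SVM on a 3-class task.
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=15, n_informative=8,
                           n_classes=3, random_state=0)   # e.g. 3 rating grades
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

gpc = make_pipeline(StandardScaler(),
                    GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)))
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))

print("GPC accuracy:", gpc.fit(X_tr, y_tr).score(X_te, y_te))
print("SVM accuracy:", svm.fit(X_tr, y_tr).score(X_te, y_te))
print("GPC predictive probabilities:", gpc.predict_proba(X_te[:3]))  # full class distributions
```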

17.
A comparison of methods for multiclass support vector machines (Cited by: 126; self-citations: 0; others: 126)
Support vector machines (SVMs) were originally designed for binary classification, and how to effectively extend them to multiclass classification is still an ongoing research issue. Several methods have been proposed in which a multiclass classifier is typically constructed by combining several binary classifiers; some authors have also proposed methods that consider all classes at once. Because solving multiclass problems is computationally more expensive, comparisons of these methods on large-scale problems have not been seriously conducted. Especially for methods that solve the multiclass SVM in one step, a much larger optimization problem is required, so experiments have so far been limited to small data sets. In this paper we give decomposition implementations for two such "all-together" methods and compare their performance with three methods based on binary classification: one-against-all, one-against-one, and the directed acyclic graph SVM (DAGSVM). Our experiments indicate that the one-against-one and DAG methods are more suitable for practical use than the other methods. The results also show that, for large problems, methods that consider all data at once generally need fewer support vectors.
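A minimal sketch of the two binary-decomposition strategies compared in this paper, one-against-all and one-against-one, built on the same binary SVM. The DAGSVM variant and the "all-together" formulations are not sketched; the data set and hyper-parameters are assumptions.

```python
# Hedged sketch: one-against-all vs one-against-one multiclass SVMs.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
base = SVC(kernel="rbf", gamma="scale", C=10.0)

for name, clf in [("one-against-all", OneVsRestClassifier(base)),
                  ("one-against-one", OneVsOneClassifier(base))]:
    print(name, "CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```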

18.
The article presents an experimental study of multiclass Support Vector Machine (SVM) methods on a cardiac arrhythmia dataset with missing attribute values, for electrocardiogram (ECG) diagnostic applications. The presence of an incomplete dataset and high data dimensionality can affect the performance of classifiers, so imputation of missing data and discriminant analysis are commonly used as preprocessing techniques for such large datasets. The article proposes experiments to evaluate the performance of the One-Against-All (OAA) and One-Against-One (OAO) approaches in kernel multiclass SVMs for a heartbeat classification problem, combined with imputation and dimension reduction techniques. The results indicate that the OAA approach outperforms OAO in multiclass SVMs for ECG data analysis with missing values.
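A compact sketch of the preprocessing-plus-multiclass-SVM pipeline described above: impute missing attribute values, reduce dimensionality, then run a kernel SVM in one-against-one and one-against-all modes. Synthetic data with injected missing values stands in for the cardiac arrhythmia set; the missing-value rate and PCA dimension are assumptions.

```python
# Hedged sketch: imputation + PCA + OAO/OAA kernel SVM pipelines.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=452, n_features=100, n_informative=30,
                           n_classes=5, random_state=0)   # arrhythmia-like shape
X[rng.random(X.shape) < 0.05] = np.nan                    # inject ~5% missing values

for name, wrap in [("OAO", OneVsOneClassifier), ("OAA", OneVsRestClassifier)]:
    pipe = make_pipeline(SimpleImputer(strategy="mean"),
                         StandardScaler(),
                         PCA(n_components=30),
                         wrap(SVC(kernel="rbf", gamma="scale")))
    print(name, "CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```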

19.
Support vector learning for fuzzy rule-based classification systems (Cited by: 11; self-citations: 0; others: 11)
Designing a fuzzy rule-based classification system (fuzzy classifier) with good generalization ability in a high-dimensional feature space has long been an active research topic. As a powerful machine learning approach for pattern recognition problems, the support vector machine (SVM) is known to have good generalization ability; more importantly, an SVM can work very well in a high- (or even infinite-) dimensional feature space. This paper investigates the connection between fuzzy classifiers and kernel machines, establishes a link between fuzzy rules and kernels, and proposes a learning algorithm for fuzzy classifiers. We first show that a fuzzy classifier implicitly defines a translation-invariant kernel under the assumption that all membership functions associated with the same input variable are generated by location transformation of a reference function. Fuzzy inference on the IF-part of a fuzzy rule can then be viewed as evaluating the kernel function. The kernel function is proven to be a Mercer kernel if the reference functions meet a certain spectral requirement. The corresponding fuzzy classifier is named the positive definite fuzzy classifier (PDFC). A PDFC can be built from the given training samples using a support vector learning approach, with the IF-part fuzzy rules given by the support vectors. Since the learning process minimizes an upper bound on the expected risk (expected prediction error) instead of the empirical risk (training error), the resulting PDFC usually generalizes well. Moreover, because of the sparsity properties of SVMs, the number of fuzzy rules is independent of the dimension of the input space; in this sense, we avoid the "curse of dimensionality." Finally, PDFCs with different reference functions are constructed using the support vector learning approach. The performance of the PDFCs is illustrated by extensive experimental results, and comparisons with other methods are also provided.
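A minimal sketch of the key observation above: if every membership function is a translated Gaussian reference function, the product over input dimensions in rule firing collapses into a translation-invariant (RBF-type) Mercer kernel, so the fuzzy classifier can be trained by support vector learning with a custom kernel. The Gaussian reference function and its width are illustrative choices, not the only ones the paper covers.

```python
# Hedged sketch: fuzzy-rule product of Gaussian reference functions as an SVM kernel.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fuzzy_product_kernel(A, B, width=1.0):
    """Product over dimensions of Gaussian reference functions a(x_j - z_j),
    which collapses to exp(-||x - z||^2 / (2 * width^2))."""
    diff = A[:, None, :] - B[None, :, :]
    return np.exp(-(diff ** 2).sum(axis=2) / (2 * width ** 2))

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
pdfc = SVC(kernel=fuzzy_product_kernel, C=10.0)   # support vectors give the IF-part rules
print("PDFC-style CV accuracy:", cross_val_score(pdfc, X, y, cv=5).mean())
```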

20.
A hybrid LDA and SVM method for multi-class classification (Cited by: 2; self-citations: 0; others: 2)
To address the problems that the decision directed acyclic graph SVM (DDAGSVM) requires training a large number of support vector machines (SVMs) and accumulates errors, a multi-class classification algorithm mixing linear discriminant analysis (LDA) with SVMs is proposed. First, based on the characteristics of projecting high-dimensional samples into a low-dimensional space, an optimized LDA classification threshold is given. Then the classification error of the optimized LDA on each binary problem is taken as the degree of linear separability between the two classes; binary problems with low linear separability are handled by a nonlinear SVM, whose classification error serves as the separability of the corresponding binary problem. Finally, the separability is used as the decision criterion of the hybrid DDAG classifier. Experiments show that, compared with DDAGSVM, the proposed algorithm achieves higher training and classification speed while preserving generalization accuracy.
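A simplified sketch of the hybrid idea: estimate the linear separability of each class pair with LDA, keep the cheap LDA classifier for easy pairs, train a nonlinear SVM only for hard pairs, and combine all pairwise decisions. Pairwise voting is used here as a simplification of the paper's DDAG traversal; the separability threshold and data set are assumptions.

```python
# Hedged sketch: LDA for well-separated class pairs, SVM for hard pairs, combined by voting.
import numpy as np
from itertools import combinations
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classes = np.unique(y_tr)
threshold = 0.99                                   # pairwise LDA accuracy above this -> keep LDA
pairwise = {}
for a, b in combinations(classes, 2):
    mask = np.isin(y_tr, [a, b])
    Xp, yp = X_tr[mask], y_tr[mask]
    lda_acc = cross_val_score(LinearDiscriminantAnalysis(), Xp, yp, cv=3).mean()
    clf = (LinearDiscriminantAnalysis() if lda_acc >= threshold
           else SVC(kernel="rbf", gamma="scale"))
    pairwise[(a, b)] = clf.fit(Xp, yp)

votes = np.zeros((len(X_te), classes.max() + 1), dtype=int)
for (a, b), clf in pairwise.items():
    pred = clf.predict(X_te)
    for cls in (a, b):
        votes[:, cls] += (pred == cls)
print("hybrid LDA/SVM voting accuracy:", (votes.argmax(axis=1) == y_te).mean())
```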

