Similar Literature
20 similar documents found.
1.
In this paper we extend the conformal method of modifying a kernel function to improve the performance of Support Vector Machine classifiers [14, 15]. The kernel function is conformally transformed in a data-dependent way, using the information about the Support Vectors obtained in primary training. We further investigate the performance of modified Gaussian Radial Basis Function (RBF) and polynomial kernels. Simulation results for two artificial data sets show that the method is very effective, especially for correcting bad kernels.
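The conformal transformation itself is simple to state: the primary kernel K is rescaled as K̃(x, x′) = c(x)K(x, x′)c(x′), where c(·) is concentrated around the support vectors found in primary training. Below is a minimal sketch in Python/scikit-learn, assuming a Gaussian form for c(·) and an RBF base kernel; the paper's exact conformal factor and parameter choices may differ.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def conformal_factor(X, svs, tau=1.0):
    # c(x) = sum_i exp(-||x - sv_i||^2 / (2 tau^2)), summed over support vectors
    sq = ((X[:, None, :] - svs[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * tau ** 2)).sum(axis=1)

def make_conformal_kernel(svs, gamma=1.0, tau=1.0):
    # K~(x, x') = c(x) * K(x, x') * c(x'), built on an RBF base kernel
    def kernel(A, B):
        cA = conformal_factor(A, svs, tau)
        cB = conformal_factor(B, svs, tau)
        return cA[:, None] * rbf_kernel(A, B, gamma=gamma) * cB[None, :]
    return kernel

# Primary training locates the support vectors; retraining then uses the
# conformally transformed kernel.
X = np.random.randn(200, 2)
y = (X[:, 0] * X[:, 1] > 0).astype(int)
primary = SVC(kernel="rbf", gamma=1.0).fit(X, y)
refined = SVC(kernel=make_conformal_kernel(primary.support_vectors_,
                                           gamma=1.0, tau=0.5)).fit(X, y)
```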

2.
Kernel methods provide high performance in a variety of machine-learning tasks. However, their success depends heavily on selecting the right kernel function and properly setting its parameters. Several sets of kernel functions based on orthogonal polynomials have been proposed recently. Besides their good error rates, these kernel functions have only one parameter, chosen from a small set of integers, which greatly facilitates kernel selection. Two sets of orthogonal polynomial kernel functions, the triangularly modified Chebyshev kernels and the triangularly modified Legendre kernels, are proposed in this study. Furthermore, we compare the construction methods of several orthogonal polynomial kernels and highlight the similarities and differences among them. Experiments on 32 data sets illustrate and compare these kernel functions in classification and regression scenarios. In general, the orthogonal polynomial kernels differ in accuracy, and most of them can match commonly used kernels such as the polynomial, Gaussian, and wavelet kernels. Compared with these universal kernels, each orthogonal polynomial kernel has a single, easily optimized parameter, and in support vector classification they store statistically significantly fewer support vectors. The newly presented kernels achieve better generalization performance in both classification and regression tasks.  相似文献
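As an illustration of the single-integer-parameter property, here is a hedged sketch of a generalized Chebyshev kernel of the kind these construction methods share: per dimension, sum products of Chebyshev polynomials up to the chosen order, then multiply across dimensions. The paper's triangular modification is not reproduced, and inputs are assumed pre-scaled to [-1, 1].

```python
import numpy as np
from numpy.polynomial import chebyshev

def chebyshev_kernel(X, Z, order=3):
    # Assumes all features have been scaled into [-1, 1].
    K = np.ones((X.shape[0], Z.shape[0]))
    for j in range(X.shape[1]):
        acc = np.zeros_like(K)
        for i in range(order + 1):
            c = np.zeros(i + 1)
            c[i] = 1.0                               # coefficients selecting T_i
            Ti_x = chebyshev.chebval(X[:, j], c)
            Ti_z = chebyshev.chebval(Z[:, j], c)
            acc += Ti_x[:, None] * Ti_z[None, :]     # sum_i T_i(x_j) T_i(z_j)
        K *= acc                                     # product across dimensions
    return K

# Usable directly as a callable kernel, with `order` the one integer parameter:
#   SVC(kernel=lambda A, B: chebyshev_kernel(A, B, order=4))
```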

3.
Type-2 fuzzy logic-based classifier fusion for support vector machines
As a machine-learning tool, support vector machines (SVMs) have been gaining popularity due to their promising performance. However, the generalization ability of SVMs often depends on whether the selected kernel function suits the actual classification data. To lessen the sensitivity of SVM classification to the choice of kernel and to improve generalization, this paper proposes a fuzzy fusion model that combines multiple SVM classifiers. To better handle the uncertainties present in real classification data and in the membership functions (MFs) of a traditional type-1 fuzzy logic system (FLS), we apply interval type-2 fuzzy sets to construct a type-2 SVM fusion FLS. This type-2 fusion architecture takes the classification results from the individual SVM classifiers into consideration and outputs the combined classification decision. Besides the distances of data examples to the SVM hyperplanes, the type-2 fuzzy SVM fusion system also considers the accuracy of the individual SVMs. Our experiments show that the type-2 based SVM fusion classifiers outperform individual SVM classifiers in most cases, and that the type-2 fuzzy logic-based fusion model is generally better than its type-1 counterpart.  相似文献
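The fusion inputs named in the abstract (distances to each SVM hyperplane and per-classifier accuracies) are easy to wire together. The following is only a loose stand-in for that idea, not the paper's interval type-2 FLS: accuracy intervals play the role of upper/lower membership grades, and the midpoint of the two weighted averages serves as a simple type reduction.

```python
import numpy as np

def interval_weighted_fusion(decision_values, accuracies, delta=0.1):
    """Crude sketch of the fusion stage: each classifier's weight is an
    interval [acc - delta, acc + delta] around its validation accuracy; the
    fused score is the midpoint of the lower- and upper-weighted averages."""
    scores = np.asarray(decision_values)          # shape: (n_classifiers, n_samples)
    acc = np.asarray(accuracies)[:, None]
    lo = np.clip(acc - delta, 0.0, 1.0)
    hi = np.clip(acc + delta, 0.0, 1.0)
    fused_lo = (lo * scores).sum(axis=0) / lo.sum()
    fused_hi = (hi * scores).sum(axis=0) / hi.sum()
    return np.sign(0.5 * (fused_lo + fused_hi))   # crisp +/-1 decision
```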

4.
In the last few years, applications of the support vector machine (SVM) have increased substantially due to its high generalization performance and its ability to model non-linear relationships. However, how well an SVM behaves depends largely on the adopted kernel function. The most commonly used kernels include the linear and polynomial inner-product functions and the Radial Basis Function (RBF). Since the nature of the data is usually unknown, it is very difficult to choose properly among these kernels beforehand. Usually more than one kernel is tried and the one giving the best prediction performance is selected, a very time-consuming optimization procedure. This paper presents a kernel based on the Lorentzian function, which is well known in the field of statistics. The presented kernel can handle a large variety of mapping problems due to its flexibility. Its applicability, suitability, performance, and robustness are investigated on the bi-spiral benchmark data set as well as seven data sets from the UCI benchmark repository. The experimental results demonstrate that the presented kernel is robust, has stronger mapping ability than the standard kernel functions, and obtains better generalization performance. In general, the proposed kernel can serve as a generic alternative to the common linear, polynomial, and RBF kernels.  相似文献
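A Lorentzian-style kernel is easy to plug into an SVM as a callable. The sketch below assumes one plausible form, K(x, z) = 1 / (1 + ||x − z||² / γ²); the paper's exact parameterization may differ.

```python
from sklearn.svm import SVC
from sklearn.metrics.pairwise import euclidean_distances

def lorentzian_kernel(gamma=1.0):
    # One plausible Lorentzian-style kernel: 1 / (1 + ||x - z||^2 / gamma^2)
    def kernel(A, B):
        d2 = euclidean_distances(A, B, squared=True)
        return 1.0 / (1.0 + d2 / gamma ** 2)
    return kernel

clf = SVC(kernel=lorentzian_kernel(gamma=0.5))
# clf.fit(X_train, y_train); clf.predict(X_test)
```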

5.
In this article, the task of remote-sensing image classification is tackled with local maximal-margin approaches. First, we introduce a set of local kernel-based classifiers that alleviate the computational limitations of local support vector machines (SVMs) while maintaining high classification accuracy. These methods rely on the following idea: (a) during training, build a set of local models covering the considered data; (b) during prediction, choose the most appropriate local model for each sample to evaluate. Additionally, we present a family of operators on kernels that integrate local information into an existing (input) kernel to obtain a quasi-local (QL) kernel. To compare the performance of the different local approaches, an experimental analysis was conducted on three distinct remote-sensing data sets. The results show that interesting performance can be achieved in terms of both classification accuracy and computational cost.  相似文献
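The train-locally/route-at-prediction idea can be sketched in a few lines. This is a generic illustration of local models, assuming a k-means partition and nearest-centroid routing, not the article's specific classifiers or QL operators.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

class LocalSVM:
    """Sketch of the local-model idea: partition the data with k-means, fit
    one SVM per cluster, and route each test sample to the model of its
    nearest centroid. Assumes every cluster contains more than one class."""
    def __init__(self, n_models=5, **svm_kwargs):
        self.km = KMeans(n_clusters=n_models, n_init=10)
        self.svm_kwargs = svm_kwargs

    def fit(self, X, y):
        labels = self.km.fit_predict(X)
        self.models = {c: SVC(**self.svm_kwargs).fit(X[labels == c], y[labels == c])
                       for c in np.unique(labels)}
        return self

    def predict(self, X):
        nearest = self.km.predict(X)             # choose the local model
        return np.array([self.models[c].predict(x[None, :])[0]
                         for c, x in zip(nearest, X)])
```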

6.
Discriminative classifiers are a popular approach to solving classification problems. However, one problem with these approaches, in particular kernel-based classifiers such as support vector machines (SVMs), is that they are hard to adapt to mismatches between the training and test data. This paper describes a scheme for overcoming this problem in noisy speech recognition by adapting the kernel rather than the SVM decision boundary. Generative kernels, defined using generative models, are one type of kernel that allows SVMs to handle sequence data. By compensating the parameters of the generative models for each noise condition, noise-specific generative kernels can be obtained. These can be used to train a noise-independent SVM on a range of noise conditions, which can then be used with a test-set noise kernel for classification. The noise-specific kernels used in this paper are based on Vector Taylor Series (VTS) model-based compensation. VTS allows all the model parameters to be compensated and the background noise to be estimated in a maximum-likelihood fashion. A brief discussion of VTS, and of the optimisation of the mismatch function representing the impact of noise on clean speech, is also included. Experiments using these VTS-based test-set noise kernels were run on the AURORA 2 continuous-digit task. The proposed SVM rescoring scheme yields large gains in performance over the VTS-compensated models.  相似文献
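At inference time a generative kernel reduces to a simple recipe: score each sequence under a bank of (noise-compensated) generative models and work in that score space. The sketch below is heavily simplified, assuming sklearn-style models with a score() method; the VTS compensation step and the paper's exact score space are not reproduced.

```python
import numpy as np

def score_space_features(sequences, models):
    """Map each variable-length observation sequence to a fixed-length vector
    of average log-likelihoods under a bank of generative models (e.g. one
    compensated model per noise condition). An inner product of these vectors
    then acts as a simple generative kernel."""
    return np.array([[m.score(seq) for m in models] for seq in sequences])

# Sketchy usage, assuming `models` (e.g. sklearn GaussianMixture instances)
# were fit and noise-compensated elsewhere:
#   F_train = score_space_features(train_seqs, models)
#   clf = SVC(kernel="linear").fit(F_train, y_train)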

7.
Multiple kernel learning (MKL), as a principled classification method, selects and combines base kernels to increase the categorization accuracy of Support Vector Machines (SVMs). The group method of data handling neural network (GMDH-NN) has been applied in many fields, including optimization, data mining, and pattern recognition; it can automatically discover interrelatedness in data, select an optimal structure for the model or network, and enhance the accuracy of existing algorithms. We exploit these advantages of the GMDH-NN to build a multiple graph kernel learning (MGKL) method and enhance the categorization performance of graph-kernel SVMs. In this paper, we first define a unitized symmetric regularity criterion (USRC) to improve the symmetric regularity criterion of the GMDH-NN. Second, we define a novel structure for the initial model of the GMDH-NN that uses the posterior probability output of graph-kernel SVMs. We then use a hybrid graph kernel in H1-space for MGKL in combination with the GMDH-NN, obtaining a pool of optimal graph kernels with different kernel parameters. Our experiments on standard graph datasets show that this new MGKL method is highly effective.  相似文献

8.
In this study, a Discriminator Model for Glaucoma Diagnosis (DMGD) using soft-computing techniques is presented. Because biomedical images such as fundus images are often acquired at high resolution, the Region of Interest (ROI) for glaucoma diagnosis must first be selected to reduce the complexity of the system. The DMGD system uses a series of pre-processing steps: initial cropping by the green channel's intensity, Spatially Weighted Fuzzy C-Means (SWFCM) clustering, blood-vessel detection and removal by Gaussian Derivative Filters (GDF), and inpainting. Once the ROI has been selected, numerical features are generated from it: colour features, spatial-domain features from the Local Binary Pattern (LBP), and frequency-domain features from LAWS masks. These are then classified with a kernel-based Support Vector Machine (SVM). The performance of the DMGD system is validated on four fundus-image databases, ORIGA, RIM-ONE, DRISHTI-GS1, and HRF, with SVM classifiers based on four kernels: the Linear Kernel (LK), Polynomial Kernel (PK), Radial Basis Function Kernel (RBFK), and Quadratic Kernel (QK). Results show that the DMGD system classifies fundus images accurately using the multiple features and kernel-based classifiers computed from the properly segmented ROI.  相似文献
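One of the feature stages is straightforward to sketch: a uniform-LBP texture histogram over the segmented ROI. This illustrates only that stage; the SWFCM segmentation, vessel removal, inpainting, colour and LAWS features are not shown, and the bin settings are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_roi, P=8, R=1.0):
    """Uniform-LBP texture descriptor over a grayscale ROI: the 'uniform'
    method yields integer codes in [0, P + 1], histogrammed into P + 2 bins."""
    lbp = local_binary_pattern(gray_roi, P, R, method="uniform")
    bins = P + 2
    hist, _ = np.histogram(lbp, bins=bins, range=(0, bins), density=True)
    return hist

# Features stacked per image, then a kernel SVM as in the paper:
#   F_train = np.stack([lbp_histogram(roi) for roi in train_rois])
#   clf = SVC(kernel="rbf").fit(F_train, y_train)
```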

9.
A Note on the Universal Approximation Capability of Support Vector Machines
The approximation capability of support vector machines (SVMs) is investigated. We show the universal approximation capability of SVMs with various kernels, including Gaussian, several dot-product, and polynomial kernels, based on the universal approximation capability of their standard feedforward neural-network counterparts. Moreover, it is shown that an SVM with a polynomial kernel of degree p − 1, trained on a training set of size p, can approximate the p training points to any accuracy.  相似文献
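The degree-(p − 1) interpolation claim is easy to check numerically: with p distinct scalar points, the polynomial-kernel Gram matrix is positive definite, so dual coefficients that reproduce the targets exactly always exist. A small sanity check (an inhomogeneous polynomial kernel and a bias-free, unregularized kernel machine are assumed):

```python
import numpy as np

# With p points and a polynomial kernel of degree p - 1, the Gram matrix is
# (generically) invertible, so coefficients alpha satisfy K @ alpha = y exactly.
p = 5
X = np.linspace(-1.0, 1.0, p)[:, None]     # p distinct training points
y = np.random.randn(p)                     # arbitrary targets
K = (X @ X.T + 1.0) ** (p - 1)             # inhomogeneous polynomial kernel
alpha = np.linalg.solve(K, y)
assert np.allclose(K @ alpha, y)           # all p training points are matched
```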

10.
Large-margin methods, such as support vector machines (SVMs), have been very successful in classification problems. Recently, maximum margin discriminant analysis (MMDA) was proposed, extending the large-margin idea to feature extraction. It often outperforms traditional methods such as kernel principal component analysis (KPCA) and kernel Fisher discriminant analysis (KFD). However, as in the SVM, its time complexity is cubic in the number of training points m, making it computationally inefficient on massive data sets. In this paper, we propose a (1+ε)²-approximation algorithm for obtaining the MMDA features by extending the core vector machine. The resulting time complexity is only linear in m, while the space complexity is independent of m. Extensive comparisons with the original MMDA, KPCA, and KFD on a number of large data sets show that the proposed feature extractor can improve classification accuracy and is faster than these kernel-based methods by over an order of magnitude.  相似文献

11.
We present a mechanism to train support vector machines (SVMs) with a hybrid kernel and minimal Vapnik-Chervonenkis (VC) dimension. After describing the VC dimension of sets of separating hyperplanes in a high-dimensional feature space produced by a kernel-induced mapping from the input space, we propose an optimization criterion that designs SVMs by minimizing the upper bound of the VC dimension. This method realizes structural risk minimization and utilizes a flexible kernel function so that superior generalization on test data can be obtained. To obtain such a flexible kernel function, we develop a hybrid kernel and a sufficient condition for it to be an admissible Mercer kernel, based on common Mercer kernels (polynomial, radial basis function, two-layer neural network, etc.). The nonnegative combination coefficients and parameters of the hybrid kernel are determined subject to the minimal upper bound of the VC dimension of the learning machine. Experimental results illustrate the proposed method and show that the SVM with the hybrid kernel outperforms SVMs with a single common kernel in terms of generalization power.  相似文献
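The admissibility part of the construction rests on a standard fact: a nonnegative combination of Mercer kernels is itself a Mercer kernel. A minimal sketch of such a hybrid kernel (the paper's VC-bound-driven tuning of the weights and parameters is not reproduced; fixed weights are assumed here):

```python
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def hybrid_kernel(weights=(0.5, 0.5), gamma=0.5, degree=3):
    """Nonnegative combination of two Mercer kernels (RBF + polynomial),
    which is again an admissible Mercer kernel."""
    a, b = weights
    assert a >= 0 and b >= 0
    def kernel(X, Z):
        return (a * rbf_kernel(X, Z, gamma=gamma)
                + b * polynomial_kernel(X, Z, degree=degree))
    return kernel

clf = SVC(kernel=hybrid_kernel(weights=(0.7, 0.3)))
# clf.fit(X_train, y_train)
```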

12.
Multi-scale kernel methods are a current focus of kernel-based machine learning. Conventional multi-scale kernel learning suffers from drawbacks in multiple-kernel processing, such as simple averaging of the kernels, long iterative training times, and empirically chosen combination coefficients. Based on a kernel-target alignment criterion, this paper proposes an adaptive sequential learning algorithm for multi-scale kernel methods that computes the weighting coefficients of the multiple kernels automatically and quickly. Experiments show that the method outperforms single-kernel support vector machines in regression accuracy and classification rate, and is more stable in function fitting and classification, demonstrating the general applicability of the algorithm.  相似文献
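Kernel-target alignment gives a cheap score for each scale without retraining an SVM. The sketch below scores one RBF kernel per scale and normalizes the scores into combination weights; this is a plausible reading of the alignment-based weighting, not the paper's exact sequential update rule.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def alignment(K, y):
    # Kernel-target alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F), y in {-1, +1}
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

def multiscale_weights(X, y, gammas):
    """Score one RBF kernel per scale by alignment with the labels and
    normalise the (clipped) scores into combination coefficients."""
    scores = np.array([alignment(rbf_kernel(X, gamma=g), y) for g in gammas])
    scores = np.clip(scores, 0.0, None)
    return scores / scores.sum()

# K_combined = sum(w * rbf_kernel(X, gamma=g)
#                  for w, g in zip(multiscale_weights(X, y, gammas), gammas))
```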

13.
Canonical support vector machines (SVMs) are based on a single kernel; recent publications have shown that using multiple kernels instead of a single one can enhance the interpretability of the decision function and improve classification accuracy. However, most existing approaches reformulate multiple kernel learning as a saddle-point optimization problem and concentrate on solving the dual. In this paper, we show that the multiple kernel learning (MKL) problem can be reformulated as a biconvex optimization problem and can also be solved in the primal. While the saddle-point method still lacks convergence results, our proposed method exhibits strong optimization convergence properties. To solve the MKL problem, we propose a two-stage algorithm that optimizes the canonical SVMs and the kernel weights alternately. Since standard Newton and gradient methods are too time-consuming, we employ the truncated-Newton method to optimize the canonical SVMs: the Hessian matrix need not be stored explicitly, and the Newton direction can be computed using several Preconditioned Conjugate Gradient steps on the Hessian operator equation. The algorithm is shown to be more efficient than current primal approaches in this MKL setting. Furthermore, we use Nesterov's optimal gradient method to optimize the kernel weights. One remarkable advantage of solving in the primal is that it achieves a much faster convergence rate than solving in the dual and does not require a two-stage algorithm even for the single-kernel LapSVM. By introducing the Laplacian regularizer, we also extend our primal method to the semi-supervised scenario. Extensive experiments on several UCI benchmarks show that the proposed algorithm converges rapidly and achieves competitive accuracy.  相似文献
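The alternation itself is easy to sketch. The toy version below substitutes plain solvers for the paper's machinery (a dual SVM solver instead of truncated-Newton, projected gradient instead of Nesterov's method) and assumes binary labels in {-1, +1}; it only illustrates the two-stage structure.

```python
import numpy as np
from sklearn.svm import SVC

def alternating_mkl(kernels, y, n_iter=10, lr=0.1):
    """Sketch of the two-stage alternation: (a) train an SVM on the weighted
    kernel sum, (b) take a projected-gradient step on the simplex-constrained
    kernel weights using the standard MKL gradient -0.5 (a*y)^T K_m (a*y)."""
    d = np.full(len(kernels), 1.0 / len(kernels))
    for _ in range(n_iter):
        K = sum(w * Km for w, Km in zip(d, kernels))
        svm = SVC(C=1.0, kernel="precomputed").fit(K, y)
        ay = np.zeros(len(y))
        ay[svm.support_] = svm.dual_coef_[0]          # alpha_i * y_i at the SVs
        grad = np.array([-0.5 * ay @ Km @ ay for Km in kernels])
        d = np.clip(d - lr * grad, 0.0, None)
        d /= d.sum()                                   # project onto the simplex
    return d, svm
```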

14.
Multiclass LS-SVMs: Moderated Outputs and Coding-Decoding Schemes
A common way of solving the multiclass categorization problem is to reformulate it as a set of binary classification problems. Discriminative binary classifiers such as Support Vector Machines (SVMs) directly optimize the decision boundary with respect to a certain cost function. In a pragmatic and computationally simple approach, Least Squares SVMs (LS-SVMs) are inferred by minimizing a related least-squares regression cost function. The moderated outputs of the binary classifiers are then obtained within the evidence framework. In this paper, Bayes' rule is repeatedly applied to infer the posterior multiclass probabilities from the moderated outputs of the binary plug-in classifiers and the prior multiclass probabilities. This Bayesian decoding motivates the use of loss-function-based decoding instead of Hamming decoding. For SVMs and LS-SVMs with a linear kernel, experimental evidence suggests the use of one-versus-one coding. With a Radial Basis Function kernel, one-versus-one and error-correcting output codes yield the best performance, but simpler codings may still yield satisfactory results.  相似文献
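The recommended one-versus-one coding with moderated (probabilistic) outputs is directly available in scikit-learn. The snippet is only an illustration of the coding scheme; SVC stands in for the paper's LS-SVM binary classifiers, and Platt-style probabilities stand in for the evidence-framework moderation.

```python
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

# One-versus-one coding over probabilistic binary SVMs.
ovo = OneVsOneClassifier(SVC(kernel="rbf", probability=True))
# ovo.fit(X_train, y_train); ovo.predict(X_test)
```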

15.
Many practical engineering applications require accurate automatic decision systems, usually operating under tight computational constraints. Support Vector Machines (SVMs) with a Radial Basis Function (RBF) kernel are broadly accepted as the current state of the art for decision problems, but they require cross-validation to select the free parameters, which is computationally costly. In this work we investigate low-cost methods for selecting the spread parameter of the RBF kernel in SVMs. Our proposal relies on simple local methods that gather information about the local structure of each dataset. Empirical results on UCI datasets show that the proposed methods can be used as a fast alternative to the standard cross-validation procedure, with the additional advantage of avoiding the (often heuristic) task of fixing, a priori, the values of the spread parameter to be explored.  相似文献
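A classical member of this family of low-cost, structure-driven selectors is the median-distance heuristic: set the RBF spread from the median pairwise distance of the data instead of cross-validating. This is one well-known rule of the kind the paper studies, not necessarily the paper's own method.

```python
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances

def median_heuristic_gamma(X):
    """Set sigma to the median pairwise distance of the training data and
    return the corresponding gamma for exp(-gamma * ||x - z||^2)."""
    d = euclidean_distances(X)
    sigma = np.median(d[np.triu_indices_from(d, k=1)])
    return 1.0 / (2.0 * sigma ** 2)

# clf = SVC(kernel="rbf", gamma=median_heuristic_gamma(X_train))
```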

16.
Constructing kernel functions from the characteristics of the data is a difficult problem for current SVMs (support vector machines). This paper constructs three new kernel functions by reconstructing the similarity surface of the data samples. The first two kernels are proven to be Mercer kernels, and the existence, stability, and uniqueness of all three kernels are discussed. It is pointed out that a kernel function is essentially a tool for expressing similarity, and that being a kernel function is neither a sufficient nor a necessary condition for the Mercer condition, positive definiteness, or symmetry. Simulation studies show that the proposed kernels classify the training samples themselves perfectly, and that their generalization ability is superior to that of SVMs with traditional kernels.  相似文献

17.
The kernel function is the core of Kernel Principal Component Analysis (KPCA), yet the kernels in current use are all single kernels. This study forms a mixed kernel by combining the Spectral Angle Radial Basis Function (SA-RBF) kernel with the RBF kernel. KPCA based on this mixed kernel is used for feature extraction, and the extracted spectral feature bands are combined with texture features for SVM classification of saline-alkali soil. Comparison with other SVM classifications shows that the method outperforms them and effectively extracts thematic information on saline-alkali soil in the oasis of the Manas River basin, with a classification accuracy of 89.000% and a kappa coefficient of 0.876.  相似文献
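A mixed spectral-angle/RBF kernel is simple to assemble. The sketch below assumes one plausible form, a convex combination of an angle-based RBF and a distance-based RBF; the paper's exact mixture and parameters may differ.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel

def sa_rbf_kernel(X, Z, gamma=1.0, w=0.5):
    """Convex combination of an RBF on the spectral angle between pixel
    vectors and a standard RBF on Euclidean distance."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    angle = np.arccos(np.clip(Xn @ Zn.T, -1.0, 1.0))   # spectral angle in radians
    K_sa = np.exp(-gamma * angle ** 2)
    return w * K_sa + (1.0 - w) * rbf_kernel(X, Z, gamma=gamma)

# Feature extraction with KPCA on the precomputed mixed kernel:
# feats = KernelPCA(n_components=10, kernel="precomputed").fit_transform(
#     sa_rbf_kernel(X, X))
```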

18.
Multivariate satellite-image time series (MSITS) are a valuable source of information for a wide range of agricultural applications. Image classification, one of the main applications of this type of data, is a challenging task, mainly because MSITS are generated by a complex interaction among several sources of information, known as the factors of variation. These factors carry different information with different levels of relevance to a classification task, so a proper representation of MSITS data is required to extract and model the most useful information from them. To this end, this article proposes three multiple-kernel representations of MSITS data. These representations extract the most classification-relevant information by combining basis kernels constructed from the different factors of variation, with the combination achieved using multiple kernel learning algorithms. The efficiency of the proposed representations was evaluated both by analysing the relevance of their kernels to the classification task and by their classification performance. Two MSITS data sets composed of ten RapidEye images of an agricultural area were used to evaluate the proposed methods, with the classification results of both MSITS using a single kernel serving as the baseline. The results showed an increase of up to 14% in the overall accuracy of the classification maps when using the multiple-kernel representations. Moreover, these representations were able to handle undesirable effects in image data such as the presence of clouds and their shadows.  相似文献

19.
Large-scale nonlinear support vector machines (SVMs) can be approximated by linear ones using a suitable feature map. Linear SVMs are in general much faster to learn and evaluate (test) than the original nonlinear SVMs. This work introduces explicit feature maps for the additive class of kernels, such as the intersection, Hellinger's, and χ² kernels, commonly used in computer vision, enabling their use in large-scale problems. In particular, we: 1) provide explicit feature maps for all additive homogeneous kernels, along with closed-form expressions for all common kernels; 2) derive corresponding approximate finite-dimensional feature maps based on a spectral analysis; and 3) quantify the approximation error, showing that it is independent of the data dimension and decays exponentially fast with the approximation order for selected kernels such as χ². We demonstrate that the approximations have performance indistinguishable from the full kernels yet greatly reduce the train/test times of SVMs. We also compare with two other approximation methods: the Nyström approximation of Perronnin et al., which is data dependent, and the explicit map of Maji and Berg for the intersection kernel, which, like our approximations, is data independent. The approximations are evaluated on a number of standard data sets, including Caltech-101, Daimler-Chrysler pedestrians, and INRIA pedestrians.  相似文献
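This approach is available off the shelf: scikit-learn's AdditiveChi2Sampler implements the sampled explicit feature map of Vedaldi and Zisserman for the additive χ² kernel, which can then be fed to a fast linear SVM.

```python
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Explicit (approximate) additive-chi2 feature map followed by a linear SVM;
# sample_steps controls the approximation order discussed in the paper.
clf = make_pipeline(AdditiveChi2Sampler(sample_steps=2), LinearSVC(C=1.0))
# clf.fit(X_train, y_train)   # inputs must be nonnegative, e.g. histograms
```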

20.
This study applies support vector machines (SVMs) to hyperspectral remote-sensing image classification. The Decision Boundary Feature Extraction (DBFE) algorithm is used to reduce the dimensionality of the hyperspectral imagery, and the radial basis function (RBF) serves as the kernel of the SVM model. A chaotic optimization search technique is introduced into the PSO algorithm: with the basic PSO as the main loop, a fixed number of chaotic search steps are applied to the best particle in the swarm, remedying the basic PSO's slow convergence in late evolution and its tendency to fall into local minima. The improved hybrid particle swarm optimization (PSO) algorithm automatically selects the SVM model parameters, yielding a parameter-optimized PSO-SVM multiclass classification model. Classification experiments were conducted on 220-band AVIRIS hyperspectral imagery. The results show that, compared with a traditional SVM using a leave-one-out (LOO) grid-search strategy, the improved PSO-SVM algorithm raises classification accuracy by about 8.8%. The method is very effective for classifying remote-sensing image data under small-sample, unbalanced conditions.  相似文献
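The PSO-over-SVM-hyperparameters loop is compact enough to sketch. The version below is plain PSO over (log10 C, log10 gamma) scored by cross-validation; the paper's chaotic local search around the global best is omitted, and the search box and PSO coefficients are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pso_svm(X, y, n_particles=10, n_iter=20, seed=0):
    """Plain PSO searching (log10 C, log10 gamma) for an RBF-SVM, scored by
    3-fold cross-validation; returns the best (C, gamma) and its CV score."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])   # assumed search box
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pscore = pos.copy(), np.full(n_particles, -np.inf)
    gbest, gscore = pos[0].copy(), -np.inf
    for _ in range(n_iter):
        for i, (lc, lg) in enumerate(pos):
            s = cross_val_score(SVC(C=10**lc, gamma=10**lg), X, y, cv=3).mean()
            if s > pscore[i]:
                pbest[i], pscore[i] = pos[i].copy(), s
            if s > gscore:
                gbest, gscore = pos[i].copy(), s
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
    return 10.0 ** gbest[0], 10.0 ** gbest[1], gscore
```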
