Similar Literature
20 similar documents found.
1.
This paper proposes a self-splitting fuzzy classifier with support vector learning in expanded high-order consequent space (SFC-SVHC) for classification accuracy improvement. The SFC-SVHC expands the rule-mapped consequent space of a first-order Takagi-Sugeno (TS)-type fuzzy system by including high-order terms to enhance the rule discrimination capability. A novel structure and parameter learning approach is proposed to construct the SFC-SVHC. For structure learning, a variance-based self-splitting clustering (VSSC) algorithm is used to determine distributions of the fuzzy sets in the input space. There are no rules in the SFC-SVHC initially. The VSSC algorithm generates a new cluster by splitting an existing cluster into two according to a predefined cluster-variance criterion. The SFC-SVHC uses trigonometric functions to expand the rule-mapped first-order consequent space to a higher-dimensional space. For parameter optimization in the expanded rule-mapped consequent space, a support vector machine is employed to endow the SFC-SVHC with high generalization ability. Experimental results on several classification benchmark problems show that the SFC-SVHC achieves good classification results with a small number of rules. Comparisons with different classifiers demonstrate the superiority of the SFC-SVHC in classification accuracy.
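A minimal sketch of the high-order consequent expansion idea, assuming Gaussian rule firing strengths, sin/cos expansion terms, and scikit-learn's LinearSVC; the paper's variance-based self-splitting clustering is replaced here by randomly chosen rule centers, so this is illustrative only.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def gaussian_firing(X, centers, sigma=1.0):
    # Rule firing strengths from Gaussian membership functions.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))          # shape (n_samples, n_rules)

def high_order_consequent_features(X, centers, sigma=1.0):
    # Expand the rule-mapped first-order consequent space with trigonometric terms:
    # for each rule r, concatenate mu_r * [1, x, sin(x), cos(x)].
    mu = gaussian_firing(X, centers, sigma)
    base = np.hstack([np.ones((len(X), 1)), X, np.sin(X), np.cos(X)])
    return np.hstack([mu[:, [r]] * base for r in range(centers.shape[0])])

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
# Hypothetical rule centers; the paper derives them with self-splitting clustering.
rng = np.random.default_rng(0)
centers = X[rng.choice(len(X), size=4, replace=False)]
Z = high_order_consequent_features(X, centers)
clf = LinearSVC(C=1.0, max_iter=20000).fit(Z, y)     # linear SVM in the expanded space
print("training accuracy:", clf.score(Z, y))
```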

2.
This paper proposes a Fuzzy System learned through Fuzzy Clustering and Support Vector Machine (FS-FCSVM). The FS-FCSVM is a fuzzy system constructed by fuzzy if-then rules with fuzzy singletons in the consequence. The structure of FS-FCSVM is constructed by fuzzy clustering on the input data, which helps to reduce the number of rules. Parameters in FS-FCSVM are learned through a support vector machine (SVM) for the purpose of achieving higher generalization ability. In contrast to nonlinear kernel-based SVM or some other fuzzy systems with a support vector learning mechanism, both the number of parameters/rules in FS-FCSVM and the computation time are much smaller. FS-FCSVM is applied to skin color segmentation. For color information representation, different types of features based on scaled hue and saturation color space are used. Comparisons with a fuzzy neural network, the Gaussian kernel SVM, and mixture of Gaussian classifiers are performed to show the advantage of FS-FCSVM.
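A rough sketch of the FS-FCSVM pipeline under simplifying assumptions: ordinary k-means stands in for the fuzzy clustering step, and a linear SVM over the normalized rule firing strengths plays the role of the support vector learning of the singleton consequents.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=400, n_features=5, random_state=0)

# Antecedent structure: cluster centers stand in for the fuzzy-clustering step.
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)

def firing_strengths(X, centers, sigma=1.0):
    # Normalized Gaussian firing strength of each rule.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    mu = np.exp(-d2 / (2.0 * sigma ** 2))
    return mu / mu.sum(axis=1, keepdims=True)

Phi = firing_strengths(X, km.cluster_centers_, sigma=2.0)
# Singleton consequents: a linear SVM over the firing strengths stands in for
# the support vector learning of the consequent parameters.
clf = LinearSVC(C=1.0, max_iter=20000).fit(Phi, y)
print("rules:", km.n_clusters, "training accuracy:", clf.score(Phi, y))
```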

3.
This paper presents a new fuzzy inference system for modeling of nonlinear dynamic systems based on input and output data with measurement noise. The proposed fuzzy system has a number of fuzzy rules and parameter values of membership functions which are automatically generated using the extended relevance vector machine (RVM). The RVM has a probabilistic Bayesian learning framework and good generalization capability. The RVM consists of a sum of products of weights and kernel functions, which project the input space into a high-dimensional feature space. The structure of the proposed fuzzy system is the same as that of the Takagi-Sugeno fuzzy model. However, in the proposed method, the number of fuzzy rules can be reduced in the process of optimizing the marginal likelihood by adjusting the parameter values of the kernel functions using the gradient ascent method. After the fuzzy system is determined, the coefficients in the consequent part are found by the least-squares method. Examples illustrate the effectiveness of the proposed fuzzy inference system.

4.
Application of a Constructive Kernel Covering Algorithm in Image Recognition
The main characteristic of constructive neural networks is that, while processing the given data, they determine the network structure and parameters at the same time. A support vector machine first applies the nonlinear transformation induced by a kernel function and then finds the optimal linear separating surface in the kernel space; the resulting classification function is formally similar to a neural network. The constructive kernel covering algorithm (CKCA) combines the constructive learning methods of neural networks (such as covering algorithms) with the kernel-function approach of support vector machines (SVM). CKCA features low computational cost, strong constructiveness, and intuitiveness, and is suitable for large-scale classification and image recognition problems. To verify the effectiveness of CKCA, recognition experiments were carried out on low-quality license-plate characters, and good results were obtained.

5.
In this paper, a new scheme for constructing parsimonious fuzzy classifiers is proposed based on the L2-support vector machine (L2-SVM) technique, with model selection and feature ranking performed simultaneously in an integrated manner, in which fuzzy rules are optimally generated from data by L2-SVM learning. In order to identify the most influential fuzzy rules induced from the SVM learning, two novel indexes for fuzzy rule ranking are proposed and named as alpha-values and omega-values of fuzzy rules in this paper. The alpha-values are defined as the Lagrangian multipliers of the L2-SVM and adopted to evaluate the output contribution of fuzzy rules, while the omega-values are developed by considering both the rule base structure and the output contribution of fuzzy rules. As a prototype-based classifier, the L2-SVM-based fuzzy classifier evades the curse of dimensionality in high-dimensional space in the sense that the number of support vectors, which equals the number of induced fuzzy rules, is not related to the dimensionality. Experimental results on high-dimensional benchmark problems have shown that by using the proposed scheme the most influential fuzzy rules can be effectively induced and selected, and at the same time feature ranking results can also be obtained to construct parsimonious fuzzy classifiers with better generalization performance than well-known algorithms in the literature.
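A hedged illustration of the alpha-value idea, using scikit-learn's standard soft-margin SVC in place of the L2-SVM: the magnitudes of the dual coefficients of the trained SVM rank the support vectors, each of which would induce one fuzzy rule; the omega-values and feature ranking of the paper are not reproduced.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Standard soft-margin SVC as a stand-in for the L2-SVM of the paper.
svc = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

# Each support vector induces one fuzzy rule; |dual coefficient| (label times
# Lagrange multiplier) measures that rule's contribution to the decision function.
alpha = np.abs(svc.dual_coef_).ravel()
order = np.argsort(alpha)[::-1]
print("number of induced rules:", len(alpha))
print("indices of the 5 most influential rules:", svc.support_[order[:5]])
print("their alpha-values:", alpha[order[:5]])
```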

6.
This paper presents the implementation of a new text document classification framework that uses the Support Vector Machine (SVM) approach in the training phase and the Euclidean distance function in the classification phase, coined as Euclidean-SVM. The SVM constructs a classifier by generating a decision surface, namely the optimal separating hyper-plane, to partition different categories of data points in the vector space. The concept of the optimal separating hyper-plane can be generalized for the non-linearly separable cases by introducing kernel functions to map the data points from the input space into a high dimensional feature space so that they can be separated by a linear hyper-plane. This characteristic causes the choice of kernel function to have a high impact on the classification accuracy of the SVM. Other than the kernel function, the value of the soft-margin parameter C is another critical component in determining the performance of the SVM classifier. Hence, one of the critical problems of the conventional SVM classification framework is the necessity of determining the appropriate kernel function and the appropriate value of parameter C for different datasets of varying characteristics, in order to guarantee high accuracy of the classifier. In this paper, we introduce a distance measurement technique, using the Euclidean distance function to replace the optimal separating hyper-plane as the classification decision making function in the SVM. In our approach, the support vectors for each category are identified from the training data points during the training phase using the SVM. In the classification phase, when a new data point is mapped into the original vector space, the average distances between the new data point and the support vectors from different categories are measured using the Euclidean distance function. The classification decision is made based on the category of support vectors which has the lowest average distance to the new data point, which makes the decision independent of the hyper-plane formed by the particular kernel function and soft-margin parameter. We tested our proposed framework using several text datasets. The experimental results show that the accuracy of the Euclidean-SVM text classifier is largely insensitive to the choice of kernel function and soft-margin parameter C.
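A small sketch of the Euclidean-SVM decision rule described above, assuming scikit-learn and the bundled digits data in place of a text corpus: the SVM is used only to identify the support vectors of each class, and classification is by the smallest average Euclidean distance to them.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Training phase: a standard SVM only identifies the support vectors of each class.
svc = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
sv, sv_labels = svc.support_vectors_, y_tr[svc.support_]
classes = np.unique(sv_labels)

def euclidean_svm_predict(X_new):
    # Classification phase: assign each point to the class whose support vectors
    # have the smallest average Euclidean distance to it (no hyperplane is used).
    preds = []
    for x in X_new:
        d = np.linalg.norm(sv - x, axis=1)
        avg = [d[sv_labels == c].mean() for c in classes]
        preds.append(classes[int(np.argmin(avg))])
    return np.array(preds)

print("Euclidean-SVM accuracy:", (euclidean_svm_predict(X_te) == y_te).mean())
```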

7.
The common vector (CV) method is a linear subspace classifier method which allows one to discriminate between classes of data sets, such as those arising in image and word recognition. This method utilizes subspaces that represent classes during classification. Each subspace is modeled such that common features of all samples in the corresponding class are extracted. To accomplish this goal, the method eliminates features that are in the direction of the eigenvectors corresponding to the nonzero eigenvalues of the covariance matrix of each class. In this paper, we introduce a variation of the CV method, which will be referred to as the modified CV (MCV) method. Then, a novel approach is proposed to apply the MCV method in a nonlinearly mapped higher dimensional feature space. In this approach, all samples are mapped into a higher dimensional feature space using a kernel mapping function, and then the MCV method is applied in the mapped space. Under certain conditions, each class gives rise to a unique CV, and the method guarantees a 100% recognition rate with respect to the training set data. Moreover, experiments with several test cases also show that the generalization performance of the proposed kernel method is comparable to the generalization performances of other linear subspace classifier methods as well as the kernel-based nonlinear subspace method. Although neither the MCV method nor its kernel counterpart outperformed the support vector machine (SVM) classifier in most of the reported experiments, the application of our proposed methods is simpler than that of the multiclass SVM classifier. In addition, it is not necessary to adjust any parameters in our approach.

8.
The kernel function is the key technique of the SVM, and its choice affects the learning and generalization ability of the learning machine. Different kernel functions determine different nonlinear transformations and feature spaces, so training an SVM with different kernels yields different classification results. This paper proposes a mixed kernel function, Kmix = λKpoly + (1-λ)Krbf, which combines the advantages of the polynomial kernel and the radial basis function kernel. Experiments comparing the SVM with the mixed kernel against SVMs built with ordinary kernels show that the mixed-kernel SVM achieves higher classification accuracy.
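A brief sketch of the mixed kernel Kmix = λKpoly + (1-λ)Krbf as a callable kernel for scikit-learn's SVC; the degree and gamma values are arbitrary illustrative choices, not the paper's settings.

```python
from sklearn.datasets import load_wine
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)

def make_mixed_kernel(lam, degree=2, gamma=0.5):
    # K_mix = lam * K_poly + (1 - lam) * K_rbf: a valid kernel, since it is a
    # convex combination of two positive-definite kernels.
    def k(A, B):
        return lam * polynomial_kernel(A, B, degree=degree) + \
               (1.0 - lam) * rbf_kernel(A, B, gamma=gamma)
    return k

for lam in (0.0, 0.5, 1.0):
    svc = SVC(kernel=make_mixed_kernel(lam), C=1.0)
    acc = cross_val_score(svc, X, y, cv=5).mean()
    print(f"lambda={lam:.1f}  cv accuracy={acc:.3f}")
```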

9.
Support-vector-based fuzzy neural network for pattern classification
Fuzzy neural networks (FNNs) for pattern classification usually use backpropagation or C-cluster type learning algorithms to learn the parameters of the fuzzy rules and membership functions from the training data. However, such learning algorithms usually cannot minimize the empirical risk (training error) and the expected risk (testing error) simultaneously, and thus cannot reach a good classification performance in the testing phase. To tackle this drawback, a support-vector-based fuzzy neural network (SVFNN) is proposed for pattern classification in this paper. The SVFNN combines the superior classification power of the support vector machine (SVM) in high-dimensional data spaces and the efficient human-like reasoning of the FNN in handling uncertainty information. A learning algorithm consisting of three learning phases is developed to construct the SVFNN and train its parameters. In the first phase, the fuzzy rules and membership functions are automatically determined by the clustering principle. In the second phase, the parameters of the FNN are calculated by the SVM with the proposed adaptive fuzzy kernel function. In the third phase, the relevant fuzzy rules are selected by the proposed fuzzy rule reduction method. To investigate the effectiveness of the proposed SVFNN classifier, it is applied to the Iris, Vehicle, Dna, Satimage, and Ijcnn1 datasets from the UCI Repository, the Statlog collection, and the IJCNN challenge 2001, respectively. Experimental results show that the proposed SVFNN for pattern classification can achieve good classification performance with a drastically reduced number of fuzzy kernel functions.

10.
何强  张娇阳 《智能系统学报》2019,14(6):1163-1169
Support vector machines (SVMs) are a widely used machine learning technique that improves classifier generalization through the optimal separating hyperplane and performs well in practice. However, SVMs are sensitive to noise and face the difficulty of kernel selection. To address these problems, this paper introduces kernel-alignment-based multiple kernel learning into the fuzzy support vector machine (FSVM) and proposes a multiple kernel fuzzy support vector machine (MFSVM). MFSVM computes the membership degree of each sample with a fuzzy rough set method; it then uses the kernel-alignment multiple kernel method to compute the weight of each single kernel and feeds the combined kernel into the fuzzy support vector machine. The method both improves the noise resistance of the SVM and effectively avoids the kernel selection problem. Experiments on UCI datasets show that the proposed method achieves high classification accuracy, verifying its feasibility and effectiveness.
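A simplified sketch of kernel-target alignment used to weight several base kernels before feeding the combined kernel to an SVM; the fuzzy-rough membership computation of MFSVM is omitted, and the base kernels and their parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
yy = np.outer(2 * y - 1, 2 * y - 1)            # ideal target kernel for +/-1 labels

def alignment(K, T):
    # Kernel-target alignment: <K, T>_F / (||K||_F * ||T||_F).
    return (K * T).sum() / (np.linalg.norm(K) * np.linalg.norm(T))

kernels = {
    "linear": linear_kernel(X),
    "poly": polynomial_kernel(X, degree=2),
    "rbf": rbf_kernel(X, gamma=0.1),
}
align = {name: max(alignment(K, yy), 0.0) for name, K in kernels.items()}
total = sum(align.values())
weights = {name: a / total for name, a in align.items()}
print("kernel weights from alignment:", weights)

# Combined kernel fed to a plain (non-fuzzy) SVM; the fuzzy-rough membership
# weighting of the paper is not reproduced here.
K_mix = sum(w * kernels[name] for name, w in weights.items())
svc = SVC(kernel="precomputed", C=1.0).fit(K_mix, y)
print("training accuracy with combined kernel:", svc.score(K_mix, y))
```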

11.
Least-Squares Littlewood-Paley Wavelet Support Vector Machine
Based on wavelet decomposition theory and the admissibility conditions for support vector kernel functions, a multidimensional admissible support vector kernel, the Littlewood-Paley wavelet kernel, is proposed. This kernel is translation-orthogonal and, by virtue of this orthogonality, can approximate any curve in the square-integrable space, which improves the generalization performance of the support vector machine. Building on the Littlewood-Paley wavelet as a support vector kernel, a least-squares Littlewood-Paley wavelet support vector machine (LS-LPWSVM) is proposed. Experimental results show that, under the same conditions, LS-LPWSVM achieves higher learning accuracy than the least-squares support vector machine and is therefore better suited to learning complex functions.

12.
This paper studies a Chinese chunking method that combines a support vector machine (SVM) with transformation-based error-driven learning. The SVM has notable strengths in feature selection and maintains good generalization in high-dimensional feature spaces; through the kernel-function principle, it can be trained within a small computational budget that is independent of the dimensionality of the training data. Transformation-based error-driven learning is then used to correct the SVM's labeling results; the transformation rules handle special cases among linguistic phenomena well and further improve the SVM's recognition results. Experimental results show that the method is effective.

13.
We propose a novel architecture for a higher order fuzzy inference system (FIS) and develop a learning algorithm to build the FIS. The consequent part of the proposed FIS is expressed as a nonlinear combination of the input variables, which can be obtained by introducing an implicit mapping from the input space to a high dimensional feature space. The proposed learning algorithm consists of two phases. In the first phase, the antecedent fuzzy sets are estimated by the kernel-based fuzzy c-means clustering. In the second phase, the consequent parameters are identified by support vector machine whose kernel function is constructed by fuzzy membership functions and the Gaussian kernel. The performance of the proposed model is verified through several numerical examples generally used in fuzzy modeling. Comparative analysis shows that, compared with the zero-order fuzzy model, first-order fuzzy model, and polynomial fuzzy model, the proposed model exhibits higher accuracy, better generalization performance, and satisfactory robustness.

14.
An electrocardiogram (ECG) is an electrical recording of the heart and is used in the investigation of heart disease. ECG signals can be classified as normal or abnormal. The classification of ECG signals is presently performed with the support vector machine (SVM), but the generalization performance of the SVM classifier is not sufficient for their correct classification. To overcome this problem, the Extreme Learning Machine (ELM) classifier is used, which searches for the best values of the parameters that tune its discriminant function and, upstream, for the best subset of features to feed the classifier. The experiments were conducted on ECG data from the Physionet arrhythmia database to classify five kinds of abnormal waveforms and normal beats. In this paper, a thorough experimental study is presented to show the superior generalization capability of the ELM compared with the SVM approach in the automatic classification of ECG beats. In particular, the sensitivity of the ELM classifier is tested and compared with that of the SVM and of two further classifiers, the k-nearest neighbor classifier and the radial basis function neural network classifier, with respect to the curse of dimensionality and the number of available training beats. The obtained results clearly confirm the superiority of the ELM approach as compared with traditional classifiers.
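A minimal ELM sketch, assuming a sigmoid hidden layer with random weights and a least-squares (pseudo-inverse) solution for the output weights; the bundled digits data stands in for the Physionet ECG beats, so the numbers are not comparable to the paper's.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_hidden = 300
W = rng.normal(size=(X.shape[1], n_hidden))     # random input weights, never trained
b = rng.normal(size=n_hidden)

def hidden(X):
    # Random nonlinear feature map (sigmoid hidden layer).
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

T = np.eye(y.max() + 1)[y_tr]                    # one-hot targets
beta = np.linalg.pinv(hidden(X_tr)) @ T          # output weights in closed form
pred = hidden(X_te) @ beta
print("ELM test accuracy:", (pred.argmax(axis=1) == y_te).mean())
```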

15.
The fuzzy multiple kernel support vector machine combines the fuzzy support vector machine with multiple kernel learning. By constructing membership functions and using combinations of several kernel functions, it effectively alleviates the sensitivity of traditional SVM models to noisy data and the difficulty of learning from multi-source heterogeneous data, and it is widely applied in pattern recognition and artificial intelligence. This paper surveys the theoretical foundations and research status of fuzzy multiple kernel support vector machines, describes in detail their key issues, namely the design of fuzzy membership functions and multiple kernel learning methods, and finally discusses directions for future research on fuzzy multiple kernel support vector machine algorithms.

16.
As a classical nonlinear classifier for pattern recognition, the support vector machine (SVM) can map training samples from a low-dimensional space in which they are not linearly separable into a high-dimensional space in which they are, and then perform classification; this paper mainly trains an SVM to distinguish faces from non-faces. The SVM has a complete mathematical derivation and rigorous algorithmic logic; it is more complex overall than the Adaboost algorithm but works well with small sample sizes, giving it an advantage when data are scarce. Its supporting theory includes generalization theory, optimization theory, and kernel functions; these theories are also widely used in other machine learning algorithms such as neural networks and have proven highly reliable over several decades. This paper also discusses principal component analysis (PCA) for data compression and dimensionality reduction; as a preprocessing step it greatly reduces the dimensionality of the SVM's input data and substantially reduces computation and detection time.
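A short sketch of the PCA-then-SVM preprocessing idea with scikit-learn; the digits data stands in for face/non-face patches, and the number of principal components is an arbitrary illustrative choice.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in image data; in the face-detection setting X would hold face/non-face patches.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA compresses the input before the SVM, cutting training and detection time.
model = make_pipeline(StandardScaler(),
                      PCA(n_components=20, whiten=True),
                      SVC(kernel="rbf", C=5.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("test accuracy with PCA(20) + SVM:", model.score(X_te, y_te))
```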

17.
In this paper, we propose an active learning technique for solving multiclass problems with support vector machine (SVM) classifiers. The technique is based on both uncertainty and diversity criteria. The uncertainty criterion is implemented by analyzing the one-dimensional output space of the SVM classifier. A simple histogram thresholding algorithm is used to find out the low density region in the SVM output space to identify the most uncertain samples. Then the diversity criterion exploits the kernel k-means clustering algorithm to select uncorrelated informative samples among the selected uncertain samples. To assess the effectiveness of the proposed method we compared it with other batch mode active learning techniques presented in the literature using one toy data set and three real data sets. Experimental results confirmed that the proposed technique provided a very good tradeoff among robustness to biased initial training samples, classification accuracy, computational complexity, and number of new labeled samples necessary to reach the convergence.
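A rough sketch of the uncertainty-plus-diversity selection, under simplifying assumptions: the most uncertain pool samples are taken directly from the smallest |decision_function| values rather than by histogram thresholding, and plain k-means replaces the kernel k-means used for the diversity step.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_lab, X_pool, y_lab, y_pool = train_test_split(X, y, train_size=20, random_state=0)

svc = SVC(kernel="rbf", gamma="scale").fit(X_lab, y_lab)

# Uncertainty: pool samples closest to the decision boundary (smallest |f(x)|);
# the paper locates them via histogram thresholding of the SVM output instead.
margin = np.abs(svc.decision_function(X_pool))
uncertain = np.argsort(margin)[:30]

# Diversity: cluster the uncertain samples and take one representative per cluster
# (the paper uses kernel k-means; plain k-means is used here for brevity).
k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_pool[uncertain])
batch = [uncertain[np.where(km.labels_ == c)[0][0]] for c in range(k)]
print("indices of pool samples selected for labeling:", batch)
```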

18.
Support Vector Machines Using Hyper-Ellipsoidal Parametric Coordinates
Based on the coordinate transformation formula for the n-dimensional hyper-ellipsoidal surface, a class of kernel functions, the n-dimensional hyper-ellipsoidal coordinate transformation kernel, is constructed. Because the mapping preserves the dimensionality while enlarging the between-class distance, this class of kernels improves the performance of the support vector machine to a certain extent. Compared with other kernels (such as the Gaussian kernel), using the constructed kernel in a support vector machine produces only a few support vectors, which greatly speeds up learning and improves generalization performance. Numerical experiments demonstrate the effectiveness and correctness of the constructed kernel.

19.
In dealing with two-class classification problems, the traditional support vector machine (SVM) often cannot achieve good classification accuracy when outliers exist in the training data set. The fuzzy support vector machine (FSVM) can resolve this problem with an appropriate fuzzy membership for each data point, so that the effect of the outliers is effectively reduced when the classification problem is solved. In this paper, a new fuzzy membership function is employed in the linear and the nonlinear fuzzy support vector machine, respectively. The fuzzy membership is calculated based on the structural information of the two classes in the input space and in the feature space. This method can distinguish the support vectors and the outliers effectively. Experimental results show that this approach contributes greatly to reducing the effect of the outliers and significantly improves the classification accuracy and generalization.
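A hedged sketch of the FSVM idea using a simple distance-to-class-centre membership (the paper's structural membership in the input and feature space is more elaborate); scikit-learn's per-sample weights are used to rescale each sample's penalty, C_i = s_i * C.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, flip_y=0.1, random_state=0)

def fuzzy_membership(X, y, delta=1e-3):
    # Simple distance-to-class-centre membership: points far from their class
    # centre (likely outliers) receive small membership values.
    s = np.empty(len(y))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        s[idx] = 1.0 - d / (d.max() + delta)
    return s

s = fuzzy_membership(X, y)
# In FSVM the membership rescales each sample's penalty; sample_weight has the same effect.
fsvm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y, sample_weight=s)
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print("training accuracy  FSVM:", fsvm.score(X, y), " SVM:", svm.score(X, y))
```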

20.
Type-2 fuzzy logic-based classifier fusion for support vector machines
As a machine-learning tool, support vector machines (SVMs) have been gaining popularity due to their promising performance. However, the generalization ability of SVMs often relies on whether the selected kernel functions are suitable for the real classification data. To lessen the sensitivity to different kernels in SVM classification and improve the SVM generalization ability, this paper proposes a fuzzy fusion model to combine multiple SVM classifiers. To better handle uncertainties existing in real classification data and in the membership functions (MFs) of the traditional type-1 fuzzy logic system (FLS), we apply interval type-2 fuzzy sets to construct a type-2 SVM fusion FLS. This type-2 fusion architecture takes into consideration the classification results from the individual SVM classifiers and generates the combined classification decision as the output. Besides the distances of data examples to the SVM hyperplanes, the type-2 fuzzy SVM fusion system also considers the accuracy information of the individual SVMs. Our experiments show that the type-2 based SVM fusion classifiers outperform individual SVM classifiers in most cases. The experiments also show that the type-2 fuzzy logic-based SVM fusion model is generally better than the type-1 based SVM fusion model.
