Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
Maize variety recognition based on a genetic algorithm and support vector machine (Cited by: 5; self-citations: 0; other citations: 5)
A new method for image feature selection and classification of maize seeds based on a genetic algorithm (GA) and support vector machine (SVM) is proposed. The method first uses a GA to optimize the features extracted from the collected maize seed images, and then applies a binary-decision-tree SVM classification algorithm to identify the maize variety. By distributing the binary classifiers over the tree nodes to form a multi-class SVM, the algorithm reduces both the number of classifiers and the number of repeatedly trained samples. Experimental results show that the method selects maize seed features suited to recognition and correctly identifies the maize varieties.
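
For illustration only, the sketch below pairs a simple genetic-algorithm feature selector with a cross-validated SVM as the fitness function. The GA settings, the fitness criterion and the stand-in data are assumptions; the paper's binary-decision-tree SVM is replaced here by an ordinary multi-class SVC for brevity.

```python
# Hypothetical sketch: GA-based feature selection with an SVM fitness function.
# GA parameters and data are illustrative; the paper's exact scheme may differ.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    # Score a candidate feature subset by cross-validated SVM accuracy.
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(X, y, pop_size=20, generations=15, p_mut=0.05):
    n_features = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_features))   # binary feature masks
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]                # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)                # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < p_mut            # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind, X, y) for ind in pop])].astype(bool)

# Usage with synthetic stand-in data for the maize seed features:
X = rng.normal(size=(120, 24)); y = rng.integers(0, 4, size=120)
mask = ga_select(X, y)
clf = SVC().fit(X[:, mask], y)
```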

2.
This paper presents a novel and uniform framework for face recognition based on a combination of Gabor wavelets, direct linear discriminant analysis (DLDA) and a support vector machine (SVM). First, feature vectors are extracted from raw face images using Gabor wavelets; these Gabor-based features are robust to local distortions caused by variations in illumination, expression and pose. Next, the extracted feature vectors are projected into a low-dimensional subspace with the DLDA technique, and the resulting Gabor-based DLDA feature vectors are fed to an SVM classifier. A new SVM kernel function, the hyperhemispherically normalized polynomial (HNP) kernel, is also proposed; its benefit to classification accuracy is proved theoretically and verified experimentally for face recognition. The proposed algorithm was evaluated on the FERET database, and the experimental results show that the proposed face recognition system outperforms other related approaches in recognition rate.

3.
The common vector (CV) method is a linear subspace classifier for data sets such as those arising in image and word recognition; it discriminates between classes using subspaces that represent them. Each subspace is modeled so that the features common to all samples in the corresponding class are extracted. To accomplish this, the method eliminates the feature components lying along the eigenvectors corresponding to the nonzero eigenvalues of each class's covariance matrix. In this paper, we introduce a variation of the CV method, referred to as the modified CV (MCV) method, together with a novel approach for applying the MCV method in a nonlinearly mapped, higher-dimensional feature space: all samples are mapped into the feature space with a kernel mapping function, and the MCV method is then applied in the mapped space. Under certain conditions, each class gives rise to a unique common vector, and the method guarantees a 100% recognition rate on the training set. Moreover, experiments on several test cases show that the generalization performance of the proposed kernel method is comparable to that of other linear subspace classifier methods as well as the kernel-based nonlinear subspace method. Although neither the MCV method nor its kernel counterpart outperformed the support vector machine (SVM) classifier in most of the reported experiments, the proposed methods are simpler to apply than the multi-class SVM classifier and require no parameter tuning.

4.
Vehicle color recognition using support vector machines (Cited by: 3; self-citations: 0; other citations: 3)
When the number of classes is large, a multi-class support vector machine (SVM) requires many binary classifiers. This paper optimizes the number of SVMs through class merging and feature-space decomposition combined with a decision-tree discrimination scheme, and proposes a vehicle color recognition method based on the optimized SVM. Compared with a nearest-neighbor classifier, the method improves both speed and recognition accuracy. Experimental results show that it is a fast and accurate multi-class classification method that meets the requirements of real-time recognition.

5.
An SVM facial expression recognition method based on FLD feature extraction (Cited by: 6; self-citations: 1; other citations: 5)
Abstract: This paper extracts static facial expression features using Fisher's Linear Discriminant (FLD) and performs multi-expression recognition with one-against-one SVM classifiers. Simulation experiments were carried out on the JAFFE facial expression database under two schemes, with the test subjects either included in or excluded from the training set, and the results were compared with a nearest-neighbor classifier. The SVM achieved better recognition results in both cases, showing that SVM classifiers are feasible for facial expression recognition.
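
A minimal sketch of this kind of pipeline is given below, using scikit-learn's linear discriminant analysis as a stand-in for the FLD projection, followed by a one-against-one SVM; the data shapes and the seven-class labelling are placeholders rather than the JAFFE setup.

```python
# Illustrative pipeline only: LDA (Fisher's discriminant) projection + one-vs-one SVM.
# Data shapes are placeholders; the paper works on JAFFE face images.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(210, 4096))        # e.g. flattened 64x64 expression images
y = rng.integers(0, 7, size=210)        # 7 expression labels (placeholder)

clf = make_pipeline(
    LinearDiscriminantAnalysis(n_components=6),          # FLD projection to C-1 dims
    SVC(kernel="rbf", decision_function_shape="ovo"),    # one-against-one SVM
)
clf.fit(X, y)
print(clf.score(X, y))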

6.
This paper proposes Half-Against-Half, a new strategy for multi-class SVM problems. The basic training idea is to group close or similar classes into one subset, splitting the set of classes into two subsets, and to apply this idea recursively in a decision-tree-like construction until every class can be separated by a series of binary SVM classifiers. Theoretically, the method has advantages over the traditional OVA, OVO and DAG approaches in training time, speed and training-set size, and these advantages are supported by experimental results.
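
The recursive construction can be sketched roughly as follows. The heuristic used here to split the classes into two "halves" (2-means clustering of the class centroids) is an assumption for illustration; the paper only requires that similar classes be grouped together.

```python
# Sketch of a Half-Against-Half style tree of binary SVMs.
# The class-grouping heuristic (2-means on class centroids) is an assumption.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

class HAHNode:
    def __init__(self, classes):
        self.classes = list(classes)
        self.svm = None
        self.left = self.right = None        # child nodes (subsets of classes)

def build(X, y, classes):
    node = HAHNode(classes)
    if len(classes) == 1:
        return node
    # Split the classes into two "halves" by clustering their centroids.
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    halves = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(centroids)
    left = [c for c, h in zip(classes, halves) if h == 0]
    right = [c for c, h in zip(classes, halves) if h == 1]
    if not left or not right:                # degenerate split: fall back
        left, right = classes[:1], classes[1:]
    mask = np.isin(y, left + right)
    node.svm = SVC().fit(X[mask], np.isin(y[mask], left).astype(int))
    node.left, node.right = build(X, y, left), build(X, y, right)
    return node

def predict_one(node, x):
    while len(node.classes) > 1:
        node = node.left if node.svm.predict(x[None])[0] == 1 else node.right
    return node.classes[0]

# Usage with synthetic stand-in data:
rng = np.random.default_rng(0)
y = np.repeat(np.arange(5), 40)
X = rng.normal(size=(200, 8)) + y[:, None]
root = build(X, y, sorted(np.unique(y)))
print(predict_one(root, X[0]))
```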

7.
Facial features extracted by wavelet decomposition are insensitive to expression changes, and an SVM classifier offers high generalization performance without requiring prior knowledge. Exploiting these advantages of wavelet decomposition and SVMs, a new face recognition algorithm is proposed. The algorithm requires no preprocessing of the training images: wavelet decomposition is applied directly to the face images for feature extraction, the extracted feature vectors are assembled into new face feature vectors to train a multi-class SVM model, and the trained SVM is then used for face recognition. Three different kernel functions were used during training. The algorithm was tested and evaluated on the ORL face database, and the results demonstrate its superior recognition performance.

8.
This paper presents a new text document classification framework, coined Euclidean-SVM, that uses the Support Vector Machine (SVM) approach in the training phase and the Euclidean distance function in the classification phase. The SVM constructs a classifier by generating a decision surface, the optimal separating hyperplane, to partition different categories of data points in the vector space. The concept of the optimal separating hyperplane can be generalized to non-linearly separable cases by introducing kernel functions that map the data points from the input space into a high-dimensional feature space in which they can be separated by a linear hyperplane. As a consequence, the choice of kernel function has a strong impact on the classification accuracy of the SVM, and the value of the soft margin parameter C is another critical component of the classifier's performance. Hence, one of the critical problems of the conventional SVM classification framework is the need to determine an appropriate kernel function and an appropriate value of C for each dataset of varying characteristics in order to guarantee high accuracy. In this paper, we introduce a distance-based decision rule that uses the Euclidean distance function in place of the optimal separating hyperplane as the classification decision function of the SVM. In our approach, the support vectors of each category are identified from the training data during the training phase using the SVM. In the classification phase, when a new data point is mapped into the original vector space, the average distances between the new data point and the support vectors of the different categories are measured with the Euclidean distance function. The point is assigned to the category whose support vectors have the lowest average distance to it, which makes the classification decision independent of the quality of the hyperplane induced by the particular kernel function and soft margin parameter. We tested the proposed framework on several text datasets; the experimental results show that the accuracy of the Euclidean-SVM text classifier is largely insensitive to the choice of kernel function and soft margin parameter C.
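
A rough sketch of the decision rule described in this abstract: an ordinary SVM is trained only to identify the support vectors of each category, and a new point is then assigned to the category whose support vectors have the smallest average Euclidean distance to it. This is a simplified reading of the framework, not the authors' implementation.

```python
# Rough sketch of the Euclidean-SVM decision rule described above.
# Training finds support vectors per class; classification uses average
# Euclidean distance to each class's support vectors instead of the hyperplane.
import numpy as np
from sklearn.svm import SVC

class EuclideanSVM:
    def fit(self, X, y):
        # Train a standard SVM only to identify the support vectors of each class;
        # the separating hyperplane itself is not used at classification time.
        svm = SVC(kernel="rbf", C=1.0).fit(X, y)
        sv, sv_labels = svm.support_vectors_, y[svm.support_]
        self.sv_by_class = {c: sv[sv_labels == c] for c in np.unique(y)}
        return self

    def predict(self, X):
        preds = []
        for x in X:
            # Average distance from x to the support vectors of each class.
            avg = {c: np.linalg.norm(s - x, axis=1).mean()
                   for c, s in self.sv_by_class.items()}
            preds.append(min(avg, key=avg.get))
        return np.array(preds)

# Usage with placeholder data (e.g. TF-IDF vectors of documents):
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 50)); y = rng.integers(0, 3, size=200)
print(EuclideanSVM().fit(X, y).predict(X[:5]))
```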

9.
黄晓娟  张莉 《计算机应用》2015,35(10):2798-2802
Multi-class support vector machine recursive feature elimination (MSVM-RFE) has been proposed for multi-class cancer classification, but it fuses the weights of all sub-classifiers and ignores each sub-classifier's own ability to select features. To improve recognition accuracy on multi-class problems, an improved multi-class SVM recursive feature elimination (MMSVM-RFE) method is proposed. The method decomposes the multi-class problem into several binary problems using the one-against-all strategy; each binary problem applies SVM recursive feature elimination to gradually remove redundant features and obtain a feature subset; the resulting subsets are then merged into a final feature subset, which is finally modeled with an SVM classifier. Experimental results on three gene datasets show that the improved algorithm raises the overall recognition rate by about 2%, and the accuracy on individual classes improves substantially, in some cases reaching 100%. Comparisons with random forests, the k-nearest-neighbor classifier and principal component analysis (PCA) dimensionality reduction all confirm the advantages of the proposed algorithm.
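
A minimal sketch of the per-class SVM-RFE and subset-union idea, using scikit-learn's RFE with a linear-kernel SVM; the number of features kept per class and the stand-in data are placeholders.

```python
# Sketch of per-class SVM-RFE followed by a union of the selected subsets;
# parameters and data are placeholders, not the paper's gene datasets.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 500)); y = rng.integers(0, 3, size=60)   # stand-in for gene data

selected = set()
for c in np.unique(y):
    y_bin = (y == c).astype(int)                       # one-vs-rest binary problem
    rfe = RFE(SVC(kernel="linear"),
              n_features_to_select=30, step=0.1).fit(X, y_bin)
    selected |= set(np.flatnonzero(rfe.support_))      # union of per-class subsets

features = sorted(selected)
final_clf = SVC().fit(X[:, features], y)               # model on the merged subset
print(len(features), final_clf.score(X[:, features], y))
```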

10.
When multi-class classification is used to distinguish genuine from counterfeit liquor, the numbers of genuine and counterfeit samples (positive and outlier samples) cannot be balanced and the counterfeit samples can never be collected exhaustively. To address this, a method is proposed that trains a separate one-class SVM classifier for each liquor brand for authenticity recognition. First, a self-designed electronic-nose system samples liquors of different brands; the sensor-array data then go through preprocessing, feature generation and feature-selection-based dimensionality reduction to produce samples usable for classification; next, a grid search finds the optimal parameters of each brand's one-class classifier; finally, each one-class classifier is tested on its corresponding brand. The authenticity recognition rates of the individual classifiers range from 93% to 98%, showing that the self-designed electronic nose combined with one-class SVMs can effectively identify counterfeit liquor.
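
A rough sketch of one per-brand model is shown below: a one-class SVM trained on genuine samples only, with a simple grid search over (nu, gamma). The feature data and the validation criterion (scoring against a small held-out set of counterfeits) are illustrative assumptions, not the e-nose pipeline.

```python
# Illustrative per-brand one-class SVM with a simple grid search over (nu, gamma).
# Data and the validation criterion are placeholders.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(4)
genuine = rng.normal(loc=0.0, size=(100, 12))     # genuine-brand e-nose features
fakes   = rng.normal(loc=2.0, size=(40, 12))      # held-out counterfeit samples

best, best_score = None, -np.inf
for nu in (0.01, 0.05, 0.1):
    for gamma in (0.01, 0.1, 1.0):
        model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(genuine)
        # Score: accept genuine (+1) and reject counterfeit (-1) samples.
        acc = (np.mean(model.predict(genuine) == 1)
               + np.mean(model.predict(fakes) == -1)) / 2
        if acc > best_score:
            best, best_score = model, acc
print(best_score)
```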

11.
Multi-class support vector machines based on classifier fusion (Cited by: 2; self-citations: 1; other citations: 1)
A support vector machine handles binary problems and can be extended to a multi-class SVM via one-against-one or one-against-all schemes. This paper proposes a method for constructing a multi-class SVM by fusing binary SVMs. Classifier fusion is performed with the maximum, minimum, product, mean, median and voting rules, as well as various decision-template fusion methods. The method is applied to facial expression recognition on the Japanese female facial expression database JAFFE, and the results demonstrate its effectiveness.
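
The fixed fusion rules named in this abstract (maximum, minimum, product, mean, median, voting) can be sketched over a matrix of per-classifier class scores, as below; how each binary SVM's output is converted into such scores, and the decision-template variants, are not shown and would need to follow the paper.

```python
# Sketch of simple fusion rules over per-classifier class scores.
# `scores` has shape (n_classifiers, n_classes); mapping binary SVM outputs
# to these scores is an assumption made for illustration.
import numpy as np

def fuse(scores, rule="mean"):
    rules = {
        "max":     scores.max(axis=0),
        "min":     scores.min(axis=0),
        "product": scores.prod(axis=0),
        "mean":    scores.mean(axis=0),
        "median":  np.median(scores, axis=0),
        "vote":    np.bincount(scores.argmax(axis=1), minlength=scores.shape[1]),
    }
    return int(np.argmax(rules[rule]))

scores = np.array([[0.7, 0.2, 0.1],
                   [0.4, 0.5, 0.1],
                   [0.6, 0.3, 0.1]])
for rule in ("max", "min", "product", "mean", "median", "vote"):
    print(rule, fuse(scores, rule))
```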

12.
Based on the binary classification principle of the SVM, an improved multi-class classification method combining adaptive resonance theory (ART) with SVMs is proposed, improving on the traditional one-against-one multi-class SVM. Instead of using a voting rule to decide among the outputs of the binary classifiers, an ART network fuses the classifier outputs, which overcomes the weaknesses of voting: decision errors when classifier outputs are close to 0, and indecision when votes are tied. The algorithm has been applied to glass classification, and simulation experiments show that it achieves good classification results.

13.
The identification of non-cell objects in biological images is not a trivial task, largely because their characteristics are difficult to describe in recognition systems. To reduce the false positive rate caused by non-cell particles, we propose a novel approach that combines a local jet context feature scheme with a two-tier object classification system. The newly proposed local jet context feature integrates part of the global features with the "local jet" features; the scheme aims to describe particle characteristics that are invariant to shift and rotation and thus helps to retain critical shape information. The proposed two-tier particle classification strategy consists of a pre-recognition stage followed by a further filtering phase. Using the local jet context features coupled with a multi-class SVM classifier, the pre-recognition stage aims to assign as many particles as possible to their corresponding classes. To further reduce false positives, a decision tree classifier based on shape-centered features is then applied. Our experimental study shows that the proposed two-tier classification strategy achieves 85% identification accuracy and an F1 value of 80% in urinary particle recognition. The results demonstrate that the proposed local jet context features can discriminate particles by their shape and texture characteristics, and that the two-tier classification stage is effective in reducing the false positive rate caused by non-cell particles.
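
The two-tier strategy can be sketched as a multi-class SVM for pre-recognition followed by a decision tree that filters suspected non-cell particles; the feature sets and the filtering rule below are assumptions made for illustration.

```python
# Sketch of a two-tier classifier: a multi-class SVM first, then a decision tree
# on a second feature set to filter likely non-cell false positives.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
X_jet   = rng.normal(size=(300, 40))    # stand-in for local jet context features
X_shape = rng.normal(size=(300, 10))    # stand-in for shape-centred features
y_class = rng.integers(0, 5, size=300)  # particle classes for tier 1
y_cell  = rng.integers(0, 2, size=300)  # 1 = real cell, 0 = non-cell artefact

tier1 = SVC().fit(X_jet, y_class)                                  # pre-recognition
tier2 = DecisionTreeClassifier(max_depth=5).fit(X_shape, y_cell)   # filtering stage

def classify(x_jet, x_shape, reject_label=-1):
    label = tier1.predict(x_jet[None])[0]
    keep = tier2.predict(x_shape[None])[0] == 1    # drop suspected non-cells
    return label if keep else reject_label

print(classify(X_jet[0], X_shape[0]))
```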

14.
This paper studies the multi-class classification problem for SVMs and proposes a multi-class classification method based on kernel clustering. Kernel clustering maps the original sample features into a high-dimensional space and groups them into clusters; each group is classified by a binary SVM classifier, and these binary classifiers form the nodes of a decision tree, yielding a decision classification tree. A generation algorithm for the decision tree is given, and an overlap coefficient is introduced to control overlap between groups, thereby avoiding the accumulation of misclassifications and improving accuracy. Experimental results show that both the speed and the accuracy of handwritten Chinese character recognition with this method meet practical requirements.

15.
Multi-kernel learning SVM classification of motor imagery EEG signals with immune multi-domain feature fusion (Cited by: 2; self-citations: 1; other citations: 1)
张宪法  郝矿荣  陈磊 《自动化学报》2020,46(11):2417-2426
For the classification of multi-channel, four-class motor imagery (MI) electroencephalography (EEG) signals, a multiple kernel learning support vector machine (SVM) classification algorithm with immune multi-domain feature fusion is proposed. First, time-frequency features of the EEG signals are extracted with the discrete wavelet transform (DWT), spatial features are extracted with one-versus-the-rest common spatial patterns (OVR-CSP), and the time-frequency and spatial features are fused into feature vectors. Second, a multiple kernel learning support vector machine (MKL-SVM) classifies the extracted feature vectors. Finally, an immune genetic algorithm (IGA) optimizes the relevant model parameters to obtain a classification model with higher recognition accuracy. Experiments on the BCI2005desc-Ⅲa dataset show that, compared with traditional single-domain feature extraction, the proposed model overcomes the problems of limited features and insufficient information, more accurately captures the personalized multi-domain characteristics of different subjects, and achieves a recognition rate of 94.21%, outperforming other methods on the same dataset.

16.
Classification of natural environmental sounds based on an SVM model (Cited by: 1; self-citations: 0; other citations: 1)
A method for classifying natural environmental sounds based on a support vector machine (SVM) model is proposed. First, Mel-frequency cepstral coefficients (MFCCs) are extracted to analyze the sound signals; second, an SVM model is built on the MFCC feature set for the environmental sounds; finally, cross-validation is used to obtain the classification results of the SVM. The SVM model achieves an accuracy of 99.5704% on 50 classes of natural environmental sounds, clearly outperforming the k-nearest-neighbor (KNN) and Ensembles of Nested Dichotomies (END) algorithms.
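
A minimal sketch of an MFCC-plus-SVM pipeline of this kind, assuming librosa for feature extraction and scikit-learn for the classifier; the file paths, labels and the frame-averaging of MFCCs are placeholders rather than the paper's exact setup.

```python
# Sketch of MFCC + SVM environmental-sound classification.
# File paths, labels and the frame-mean MFCC summary are illustrative assumptions.
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13):
    # Load a clip and summarise its MFCC frames by their mean.
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def evaluate(files, labels):
    # Cross-validated accuracy of an RBF SVM on the MFCC features.
    X = np.array([mfcc_features(f) for f in files])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, np.array(labels), cv=5)

# Usage with a real corpus of labelled clips (paths are placeholders):
# print(evaluate(["sounds/rain_01.wav", ...], ["rain", ...]))
```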

17.
Semi-supervised learning has attracted much attention in pattern recognition and machine learning. Most semi-supervised learning algorithms are proposed for binary classification and are then extended to multi-class cases using approaches such as one-against-the-rest. In this work, we propose a semi-supervised learning method based on multi-class boosting, which classifies multi-class data directly and achieves high classification accuracy by exploiting the unlabeled data. Our approach has two distinctive features: (1) it handles multi-class cases directly, without reducing them to multiple two-class problems; and (2) each base classifier is only required to achieve an accuracy of at least 1/K, where K is the number of classes. Experimental results on 21 UCI benchmark data sets show that the proposed method is effective.

18.
The support vector machine (SVM) has a high generalisation ability for binary classification problems, but its extension to multi-class problems is still an ongoing research issue. Among the existing multi-class SVM methods, the one-against-one method is one of the most suitable for practical use. This paper presents a new multi-class SVM method that reduces the number of hyperplanes of the one-against-one method and thus returns fewer support vectors. The proposed algorithm works as follows: while producing the boundary of a class, no further hyperplanes are constructed if the discriminating hyperplanes of neighbouring classes already separate the remaining classes. We present a large number of experiments showing that the training time of the proposed method is the lowest among existing multi-class SVM methods. The experimental results also show that its testing time is less than that of the one-against-one method because of the reduced numbers of hyperplanes and support vectors. By reducing the number of hyperplanes, the proposed method resolves unclassifiable regions and alleviates over-fitting far better than the one-against-one method. We also present a directed acyclic graph SVM (DAGSVM) based testing methodology that improves the testing time of the DAGSVM method.

19.
This paper analyzes the phenomenon of confusable classes in text classification and studies techniques for discriminating among them in order to improve classification performance. First, a confusable-class identification technique based on the distribution of classification errors is proposed to identify sets of confusable classes among the predefined categories. To discriminate confusable classes effectively, a feature selection technique based on discriminative power is proposed, which selects features by evaluating their ability to discriminate between classes. Finally, within a two-stage classifier design framework, an initial classifier and a confusable-class classifier are integrated, and the results of the two stages are combined as the final output. The confusable-class classifier is activated when a test document is labeled by the initial classifier as belonging to a confusable class, in which case it re-classifies the document. Comparative experiments on the Newsgroup corpus and the 863 Chinese evaluation corpus, using single-label multi-class classifiers, show that the technique effectively improves classification performance.
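
The two-stage design can be sketched as an initial classifier over all categories plus a second classifier trained only on the confusable classes, which re-labels any document the first stage assigns to one of them; the confusable-class set and the features below are simplified assumptions.

```python
# Sketch of a two-stage text classifier with a confusable-class refiner.
# Confusable-class detection (from the error distribution) is assumed done already.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 100)); y = rng.integers(0, 6, size=400)
confusable = {2, 3}                      # e.g. identified from classification errors

initial = LinearSVC().fit(X, y)
mask = np.isin(y, list(confusable))
refiner = LinearSVC().fit(X[mask], y[mask])   # trained only on confusable classes

def predict(x):
    label = initial.predict(x[None])[0]
    if label in confusable:                   # activate the second stage
        label = refiner.predict(x[None])[0]
    return label

print(predict(X[0]))
```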

20.
Single pass text classification by direct feature weighting (Cited by: 2; self-citations: 1; other citations: 1)
The Feature Weighting Classifier (FWC) is an efficient multi-class classification algorithm for text data that uses Information Gain to directly estimate per-class feature weights in the classifier. This classifier requires only a single pass over the dataset to compute the per-class feature frequencies, is easy to implement, and has memory usage that is linear in the number of features. Results of experiments performed on 128 binary and multi-class text and web datasets show that FWC's performance is at least comparable to, and often better than, that of Naive Bayes, TWCNB, Winnow, Balanced Winnow and linear SVM. On a large-scale web dataset with 12,294 classes and 135,973 training instances, FWC trained in 13 s and yielded classification performance comparable to a state-of-the-art multi-class SVM implementation, which took over 15 min to train.
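
A rough sketch of an FWC-style classifier is given below, using scikit-learn's mutual-information estimate as a stand-in for information gain; the per-class weighting formula is an assumed simplification, not the paper's formula.

```python
# Rough sketch of an FWC-style classifier: estimate per-class feature weights from
# information gain and term frequencies in one pass, then score documents by a
# weighted sum. The exact weighting is an assumption made for illustration.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

class SimpleFWC:
    def fit(self, X, y):                       # X: dense term-count matrix
        self.classes = np.unique(y)
        ig = mutual_info_classif(X, y, discrete_features=True)   # per-feature IG proxy
        # Per-class weight: IG scaled by the term's relative frequency in each class.
        freq = np.vstack([X[y == c].sum(axis=0) for c in self.classes])
        freq = freq / np.maximum(freq.sum(axis=0, keepdims=True), 1e-12)
        self.W = freq * ig                     # shape: (n_classes, n_features)
        return self

    def predict(self, X):
        return self.classes[np.argmax(X @ self.W.T, axis=1)]

# Usage with synthetic stand-in term counts:
rng = np.random.default_rng(7)
X = rng.poisson(0.3, size=(200, 300)); y = rng.integers(0, 4, size=200)
print(SimpleFWC().fit(X, y).predict(X[:5]))
```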
