Similar Documents
20 similar documents retrieved
1.
Discriminant waveletfaces and nearest feature classifiers for face recognition   (Cited by: 19; self-citations: 0; by others: 19)
Feature extraction, discriminant analysis, and classification rules are three crucial issues for face recognition. We present hybrid approaches that handle the three issues together. For feature extraction, we apply the multiresolution wavelet transform to extract the waveletface. We also perform linear discriminant analysis on waveletfaces to reinforce discriminant power. For classification, the nearest feature plane (NFP) and nearest feature space (NFS) classifiers are explored for robust decisions in the presence of wide facial variations. Their relationships to the conventional nearest neighbor and nearest feature line classifiers are demonstrated. In the experiments, the discriminant waveletface combined with the NFS classifier achieves the best face recognition performance.
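The sketch below is only an illustration of this kind of pipeline: a wavelet approximation sub-band as the "waveletface", LDA to add discriminant power, and a nearest-feature-space decision by least-squares projection onto the span of each class's feature vectors. All function names and parameters (wavelet type, decomposition level) are my own assumptions, not the paper's.

# Hedged sketch: waveletface-style features (DWT + LDA) with a nearest feature
# space (NFS) decision rule. Names and parameters are illustrative only.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def waveletface(img, wavelet="db4", level=2):
    """Low-frequency sub-band of a 2-D wavelet decomposition, flattened."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    return coeffs[0].ravel()  # approximation coefficients only

def nfs_distance(q, class_feats):
    """Distance from query q to the linear span of one class's feature vectors."""
    B = np.asarray(class_feats).T                  # columns span the feature space
    proj, *_ = np.linalg.lstsq(B, q, rcond=None)   # least-squares projection
    return np.linalg.norm(q - B @ proj)

def classify_nfs(q, feats_by_class):
    return min(feats_by_class, key=lambda c: nfs_distance(q, feats_by_class[c]))

# Typical use (assumed data, not from the paper): wavelet features ->
# LDA "discriminant waveletfaces" -> NFS rule.
# W = np.array([waveletface(im) for im in face_images])
# Z = LinearDiscriminantAnalysis().fit(W, y).transform(W)
# feats_by_class = {c: Z[y == c] for c in np.unique(y)}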

2.
Spasmodic dysphonia is a voice disorder caused by spasms of involuntary muscles in the voice box. These spasms can lead to breathy, soundless voice breaks and a strangled voice by interrupting the opening of the vocal folds. There is no specific test for the diagnosis of spasmodic dysphonia; the cause is unknown and there is no cure, but treatment can improve voice quality. The main objectives of the study are (i) to diagnose dysphonia and to compare analyses of both continuous speech signals and the sustained phonation /a/ by extracting acoustic features, (ii) to extract the acoustic features by a semi-automated method using the PRAAT software and an automated method using an FFT algorithm, and (iii) to classify normal subjects and spasmodic dysphonia patients using different classifiers, namely the Levenberg-Marquardt backpropagation algorithm, K-Nearest Neighbor (KNN), and Support Vector Machine (SVM), based on sensitivity and accuracy. Thirty normal subjects and thirty patients were considered in the study. The performance of the three classifiers was studied: SVM and KNN were 100% accurate, whereas the Levenberg-Marquardt backpropagation network achieved an accuracy of about 96.7%. The voice samples of dysphonia patients showed variations from the normal speech samples. The automated analysis method was able to detect dysphonia and provided better results than the semi-automated method.
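As a rough illustration of the automated path (FFT-derived acoustic features followed by KNN/SVM classification), the sketch below computes a few generic spectral features; the specific features, classifier settings, and data arrays are assumptions, not the study's protocol.

# Hedged sketch: FFT-based spectral features from a voice sample, then
# SVM / KNN classification. Feature choices are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fft_features(signal, sr):
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = np.sum(freqs * spec) / np.sum(spec)                          # spectral centroid
    bandwidth = np.sqrt(np.sum(((freqs - centroid) ** 2) * spec) / np.sum(spec))
    energy = np.sum(spec ** 2) / len(spec)
    return np.array([centroid, bandwidth, energy])

# X: one feature row per recording, y: 0 = normal, 1 = spasmodic dysphonia
# (hypothetical arrays; the study used 30 normal and 30 dysphonic speakers).
# acc_svm = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
# acc_knn = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=5).mean()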

3.
Exploiting the support vector machine's ability to perform feature selection automatically, an improved minimal-kernel classifier is proposed. It uses fewer feature dimensions when testing samples, reducing the computational cost of the recognition process. Numerical experiments show that the improved classifier effectively prunes useless feature attributes and has strong generalization ability.

4.
Supervised learning of classifiers often resorts to the minimization of a quadratic error, even though this criterion is better matched to nonlinear regression problems. It is shown that the mapping built by quadratic error minimization (QEM) tends to output the Bayesian discriminating rules even with nonuniform losses, provided the desired responses are chosen accordingly. This property is, for instance, shared by the multilayer perceptron (MLP). It is also shown that their ultimate performance can be assessed with finite learning sets by establishing links with kernel estimators of density.
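The property invoked here is the standard result that the quadratic-risk minimizer is the conditional expectation of the target; in my notation (not the paper's):
\[
  f^{*}(x) \;=\; \arg\min_{f}\, \mathbb{E}\big[(y - f(x))^{2}\big] \;=\; \mathbb{E}[\,y \mid x\,],
\]
so with 0/1 target coding for class $\omega_k$, the $k$-th output approximates the posterior
\[
  f^{*}_{k}(x) \;=\; P(\omega_k \mid x),
\]
and taking the largest output reproduces the Bayes decision rule; scaling the desired responses according to the losses extends this to nonuniform loss matrices.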

5.
This paper presents a novel method for facial expression recognition that combines two different feature sets in an ensemble approach. A pool of base support vector machine classifiers is created using Gabor filters and Local Binary Patterns. A multi-objective genetic algorithm is then used to search for the best ensemble, using as objective functions the minimization of both the error rate and the size of the ensemble. Experimental results on the JAFFE and Cohn-Kanade databases show the efficiency of the proposed strategy in finding powerful ensembles, which improve recognition rates by 5% to 10% over conventional approaches that employ single feature sets and single classifiers.

6.
Combining feature reduction and case selection in building CBR classifiers   (Cited by: 4; self-citations: 0; by others: 4)
CBR systems built for classification problems are called CBR classifiers. This paper presents a novel and fast approach to building efficient and competent CBR classifiers that combines feature reduction (FR) and case selection (CS). It has three central contributions: 1) it develops a fast rough-set method based on the relative attribute dependency among features to compute the approximate reduct; 2) it constructs and compares different case selection methods based on the similarity measure and the concepts of case coverage and case reachability; and 3) CBR classifiers built using a combination of the FR and CS processes reduce the training burden as well as the need to acquire domain knowledge. The overall experimental results on four real-life data sets show that the combined FR and CS method can preserve, and may even improve, the solution accuracy while substantially reducing the storage space. The case retrieval time is also greatly reduced because the resulting CBR classifier contains fewer cases with fewer features. The developed FR and CS combination method is also compared with kernel PCA and SVM techniques; their storage requirements, classification accuracy, and classification speed are presented and discussed.
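A minimal sketch of the relative-attribute-dependency idea mentioned above, in my own formulation (the exact definition and the reduct search in the paper may differ): compare the number of equivalence classes induced by a candidate feature subset with and without the decision attribute.

# Hedged sketch: relative attribute dependency via equivalence-class counts.
import pandas as pd

def n_partitions(df, cols):
    return df.groupby(list(cols)).ngroups      # |U / IND(cols)|

def relative_dependency(df, features, decision):
    # ratio close to that of the full feature set suggests an approximate reduct
    return n_partitions(df, features) / n_partitions(df, list(features) + [decision])

# A greedy search over feature subsets, keeping the smallest subset whose
# relative dependency matches the full set, is one usual way to get a reduct.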

7.
Classifier ensembles are systems composed of a set of individual classifiers structured in a parallel way and a combination module, which is responsible for providing the final output of the system. In the design of these systems, diversity is considered one of the main aspects to be taken into account, since there is no gain in combining identical classification methods. One way of increasing diversity is to provide different datasets (patterns and/or attributes) to the individual classifiers. In this context, feature selection methods can be used to select subsets of attributes for the individual classifiers. This paper investigates the ReinSel method, a class-based feature selection method for ensemble systems. ReinSel belongs to the filter approach to feature selection and chooses only the attributes that are important for a specific class through the use of a reinforcement procedure.

8.
A remedy is presented for hierarchical classifiers that relieves the tendency toward misclassification and/or ‘reject’ decisions under the Kulkarni-Kanal S-admissible search strategy when empty bins are present in the histograms derived by discretizing the feature ranges.

9.
This paper presents an experimental comparison of the nearest feature classifiers, using an approach based on binomial tests to evaluate their strengths and weaknesses. In addition, classification accuracies and the accuracy-dimensionality tradeoff are considered as comparison criteria. We extend two of the nearest feature classifiers to label the query point by a majority vote of the samples. Comparisons were carried out for face recognition using the ORL database, with the eigenface representation for feature extraction. Experimental results showed that even though k-NFP outperforms k-NFL in classification accuracy at some dimensionalities, these differences are not statistically significant.

10.
To address the high computational complexity of the nearest feature line (NFL) and nearest feature plane (NFP) classifiers on large sample sets with high dimensionality, a new search strategy based on a local nearest-neighbor criterion is proposed, which improves the real-time performance of the two classifiers while maintaining high recognition rates. Classification results on measured data from three types of aircraft demonstrate the effectiveness of the proposed method.
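A minimal sketch of the NFL distance together with a local pruning of the kind described above (the exact local nearest-neighbor criterion in the paper may differ); all parameter choices are illustrative.

# Hedged sketch: nearest feature line distance, evaluating only lines formed by
# a query's k nearest prototypes per class to cut the quadratic number of lines.
import numpy as np
from itertools import combinations

def feature_line_distance(q, x1, x2):
    """Distance from q to the line through prototypes x1 and x2."""
    d = x2 - x1
    t = np.dot(q - x1, d) / np.dot(d, d)      # projection parameter
    p = x1 + t * d                            # foot of the perpendicular
    return np.linalg.norm(q - p)

def nfl_classify(q, X, y, k=5):
    dists = np.linalg.norm(X - q, axis=1)
    best_cls, best_d = None, np.inf
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        idx = idx[np.argsort(dists[idx])[:k]]  # local criterion (assumed form)
        for i, j in combinations(idx, 2):
            d = feature_line_distance(q, X[i], X[j])
            if d < best_d:
                best_cls, best_d = c, d
    return best_cls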

11.
Recent research has linked backpropagation (BP) and radial basis function (RBF) network classifiers, trained by minimizing the standard mean square error (MSE), to two main topics in statistical pattern recognition (SPR), namely Bayes decision theory and discriminant analysis. So far, however, establishing these links has produced only a few practical applications for training, using, and evaluating these classifiers. This paper aims to provide more such applications. It first illustrates that, when training a linear-output BP network, explicitly exploiting the network's discriminant capability improves its classification performance. Then, for linear-output BP and RBF networks, the paper defines a new generalization measure that indicates how close the network's classification performance is to the optimal performance. The estimation procedure for this measure is described, and its use as an efficient criterion for terminating the learning algorithm and choosing the network topology is explained. The paper finally proposes an upper bound on the number of hidden units needed by an RBF network classifier to achieve an arbitrary value of the minimized MSE. Experimental results are presented to validate all proposed applications.

12.
A high-accuracy iris recognition system is implemented in three main processing stages. An eye image is captured and the iris region is segmented, then enhanced with image processing to make it more suitable for subsequent recognition. Iris feature vectors are then constructed; rotation invariance is handled during the unwrapping of the iris image, and direct linear discriminant analysis (D-LDA) is used for feature extraction so that the resulting feature vectors have maximum between-class distance and minimum within-class distance. Finally, several nearest feature classifiers and their recognition performance are investigated, and the above methods are assembled into a complete iris recognition system. Experimental results show a recognition rate of 96.47% when the number of sample feature vectors per class is small; increasing the number of sample feature vectors per class raises the recognition rate to 98.50%.

13.
Feature selection is an important text preprocessing technique in text classification that can effectively improve classifier accuracy and efficiency. The key to feature selection in text classification is finding an effective feature evaluation measure. In general, the same feature evaluation measure performs differently with different classifiers, so a good measure should take the classifier's characteristics into account. Because the naive Bayes classifier is simple, efficient, and very sensitive to feature selection, research on feature selection methods for this classifier is of great significance. Accordingly, an effective multi-class text feature evaluation measure for Bayesian classifiers, CDM, is proposed. Experiments were conducted with a Bayesian classifier on two multi-class text datasets. The results show that the proposed CDM measure yields better feature selection than other feature evaluation measures.

14.
Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, the dimensionality of the dataset is extremely high, or the computational budget is too limited for more complicated methods. In practice, it is recommended to start dimension reduction with simple methods such as feature rankings before applying more complex approaches. Single variable classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such feature rankings. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has the dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and final classification does not always yield the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide less biased rankings and whether they improve final classification performance. Furthermore, we calculate the empirical prediction performance loss incurred by using the same classifier for SVC feature ranking and final classification instead of the optimal choices.
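A minimal sketch of SVC ranking as described: each feature is scored by the cross-validated accuracy of a classifier trained on that feature alone. The choice of base classifier is the free parameter whose bias the paper examines; the default used here is an assumption.

# Hedged sketch: single variable classifier (SVC) feature ranking.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def svc_ranking(X, y, clf=None, cv=5):
    clf = clf or DecisionTreeClassifier(max_depth=3)   # illustrative default
    scores = [cross_val_score(clf, X[:, [j]], y, cv=cv).mean()
              for j in range(X.shape[1])]
    return np.argsort(scores)[::-1], np.array(scores)  # best feature first

# The ranking produced with one base classifier need not be optimal for a
# different final classifier, which is exactly the bias the paper studies.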

15.
If one is given two binary classifiers and a set of test data, it should be straightforward to determine which of the two classifiers is superior. Recent work, however, has called into question many of the methods heretofore accepted as standard for this task. In this paper, we analyze seven ways of determining whether one classifier is better than another, given the same test data. Five of these are long established, and two are relative newcomers. We review and extend work showing that one of these methods is clearly inappropriate, and then conduct an empirical analysis with a large number of datasets to evaluate the real-world implications of our theoretical analysis. Both our empirical and theoretical results converge strongly toward one of the newer methods.

16.
Current computational techniques for emotion recognition have been successful at associating emotional changes with EEG signals, so emotions can be identified and classified from EEG signals if appropriate stimuli are applied. However, automatic recognition is usually restricted to a small number of emotion classes, mainly due to signal features and noise, EEG constraints, and subject-dependent issues. To address these issues, this paper proposes a novel feature-based emotion recognition model for EEG-based Brain–Computer Interfaces. Unlike other approaches, our method explores a wider set of emotion types and incorporates additional features that are relevant for signal pre-processing and recognition, based on a dimensional model of emotions: Valence and Arousal. It aims to improve the accuracy of emotion classification by combining mutual-information-based feature selection methods and kernel classifiers. Experiments on standard EEG datasets, combining efficient feature selection methods and efficient kernel-based classifiers, show the promise of the approach compared with state-of-the-art computational methods.
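A minimal sketch of the kind of pipeline outlined above, assuming precomputed EEG feature vectors: mutual-information feature selection followed by an RBF-kernel SVM. The number of selected features and the classifier settings are placeholders, not the authors' configuration.

# Hedged sketch: mutual-information feature selection + kernel classifier.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

emotion_clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=30),   # keep 30 most informative features
    SVC(kernel="rbf", C=1.0, gamma="scale"),  # kernel classifier
)
# emotion_clf.fit(X_train, y_train); emotion_clf.score(X_test, y_test)
# with y encoding discretized valence/arousal classes (assumed labeling).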

17.
This paper investigates feature selection based on rough sets for dimensionality reduction in Case-Based Reasoning classifiers. To be useful, Case-Based Reasoning systems should be able to manage imprecise, uncertain, and redundant data to retrieve the most relevant information from a potentially overwhelming quantity of data. Rough Set Theory has been shown to be an effective tool for data mining and for uncertainty management. This paper has two central contributions: (1) it develops three strategies for feature selection, and (2) it proposes several measures for estimating attribute relevance based on Rough Set Theory. Although we concentrate on Case-Based Reasoning classifiers, the proposals are general enough to be applicable to a wide range of learning algorithms. We applied these proposals to twenty data sets from the UCI repository and examined the impact of feature selection on classification performance. Our evaluation shows that all three proposals benefit the basic Case-Based Reasoning system; they also show robustness in comparison to well-known feature selection strategies.

18.
19.
Automatic speech recognition (ASR) systems play a vital role in human–machine interaction. ASR systems face performance degradation due to inconsistency between the training and testing phases, which arises from the extraction and representation of erroneous, redundant feature vectors. This paper proposes three different combinations at the speech feature vector generation phase and two hybrid classifiers at the modeling phase. In the feature extraction phase, MFCC, RASTA-PLP, and PLP are combined in different ways. In the modeling phase, the mean and variance are calculated to generate the inter- and intra-class feature vectors. These feature vectors are then refined by an optimization algorithm combined with a traditional statistical technique. The approach uses GA + HMM and DE + HMM techniques to produce refined model parameters. The experiments are conducted on datasets of large-vocabulary isolated Punjabi lexicons. The simulation results show a performance improvement using MFCC and the DE + HMM technique compared with RASTA-PLP and PLP using hybrid HMM classifiers.

20.
Satisfactory feature selection and its applications   (Cited by: 2; self-citations: 0; by others: 2)
In practical applications, feature selection is a satisficing optimization problem. Since existing feature selection methods rarely consider the cost of acquiring features or the automatic determination of feature set dimensionality, a satisfactory feature selection method (SFSM) is proposed that jointly considers classification performance, feature set dimensionality, feature extraction complexity, and other factors. Definitions of feature satisfaction and feature set satisfaction are given, a satisfaction function is designed, an evaluation criterion for satisfactory feature sets is derived, and the feature selection algorithm is described in detail. Experimental results on radar emitter signal feature selection and recognition show that SFSM clearly outperforms sequential forward selection, a newer feature selection method, and a multi-objective genetic algorithm in computational efficiency and in the quality of the selected features, confirming the effectiveness and practicality of SFSM.
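For illustration only, a satisfaction function of the general shape described (a weighted trade-off between accuracy, feature set size, and extraction cost); the weights and functional form are my assumptions, not the paper's definitions.

# Hedged sketch: an illustrative satisfaction score for a candidate feature set.
def satisfaction(accuracy, n_selected, n_total, extraction_cost,
                 w_acc=0.6, w_dim=0.2, w_cost=0.2):
    dim_term = 1.0 - n_selected / n_total        # smaller subsets score higher
    cost_term = 1.0 / (1.0 + extraction_cost)    # cheaper features score higher
    return w_acc * accuracy + w_dim * dim_term + w_cost * cost_term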
