Similar articles
 20 similar articles found
1.
Discriminant waveletfaces and nearest feature classifiers for face recognition
Feature extraction, discriminant analysis, and classification rules are three crucial issues for face recognition. We present hybrid approaches that handle these three issues together. For feature extraction, we apply the multiresolution wavelet transform to extract the waveletface. We also perform linear discriminant analysis on waveletfaces to reinforce discriminant power. During classification, the nearest feature plane (NFP) and nearest feature space (NFS) classifiers are explored for robust decisions in the presence of wide facial variations. Their relationships to the conventional nearest neighbor and nearest feature line classifiers are demonstrated. In the experiments, the discriminant waveletface incorporated with the NFS classifier achieves the best face recognition performance.
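For illustration, the nearest feature space (NFS) rule extends nearest neighbor by measuring the distance from a query to the linear subspace spanned by each class's prototype vectors. The following is a minimal NumPy sketch of that idea only, not the authors' implementation; it assumes the waveletface/LDA features have already been computed, and the toy prototypes are random stand-ins.

import numpy as np

def nfs_classify(query, class_prototypes):
    """query: (d,) feature vector; class_prototypes: dict label -> (n_i, d) array."""
    best_label, best_residual = None, np.inf
    for label, protos in class_prototypes.items():
        # Least-squares projection of the query onto span(prototypes).
        coeffs, *_ = np.linalg.lstsq(protos.T, query, rcond=None)
        residual = np.linalg.norm(query - protos.T @ coeffs)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label

# Toy usage with random stand-ins for discriminant waveletface features.
rng = np.random.default_rng(0)
prototypes = {0: rng.normal(0.0, 1.0, (3, 10)), 1: rng.normal(3.0, 1.0, (3, 10))}
print(nfs_classify(prototypes[1][0] + 0.1 * rng.normal(size=10), prototypes))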

2.
Spasmodic dysphonia is a voice disorder caused by involuntary spasms of the muscles in the voice box. By interrupting the opening of the vocal folds, these spasms can lead to breathy, soundless voice breaks and a strangled voice. There is no specific test for the diagnosis of spasmodic dysphonia; the cause is unknown and there is no cure, but treatment can improve the quality of the voice. The aims and objectives of the study are (i) to diagnose dysphonia and to carry out a comparative analysis on both continuous speech signals and the sustained phonation /a/ by extracting acoustic features; (ii) to extract the acoustic features by a semi-automated method using the PRAAT software and by an automated method using an FFT algorithm; and (iii) to classify normal and spasmodic dysphonic patients using different classifiers, namely the Levenberg-Marquardt backpropagation algorithm, K-Nearest Neighbor (KNN), and Support Vector Machine (SVM), evaluated on sensitivity and accuracy. Thirty normal and thirty abnormal patients were considered in the proposed study. The performance of the three classifiers was studied, and it was observed that SVM and KNN were 100% accurate, whereas the Levenberg-Marquardt backpropagation network produced an accuracy of about 96.7%. The voice samples of dysphonia patients showed variations from the normal speech samples. The automated analysis method was able to detect dysphonia and provided better results than the semi-automated method.
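As a rough illustration of the classification step described above (not the study's actual code), the sketch below trains SVM and KNN classifiers on a placeholder matrix of acoustic features and reports accuracy and sensitivity; the feature values are synthetic stand-ins for the PRAAT/FFT measurements.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(42)
X = rng.normal(size=(60, 8))        # placeholder for jitter, shimmer, HNR, etc.
y = np.array([0] * 30 + [1] * 30)   # 30 normal, 30 dysphonic, as in the study
X[y == 1] += 1.0                    # synthetic class separation for the demo

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
for name, clf in [("SVM", SVC(kernel="rbf")), ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred), "sensitivity:", recall_score(y_te, pred))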

3.
Supervised learning of classifiers often resorts to the minimization of a quadratic error, even though this criterion is more naturally matched to nonlinear regression problems. It is shown that the mapping built by quadratic error minimization (QEM) tends to output the Bayesian discriminant rules even with nonuniform losses, provided the desired responses are chosen accordingly. This property is, for instance, shared by the multilayer perceptron (MLP). It is shown that their ultimate performance can be assessed with finite learning sets by establishing links with kernel estimators of density.

4.
This paper presents a novel method for facial expression recognition that employs the combination of two different feature sets in an ensemble approach. A pool of base support vector machine classifiers is created using Gabor filters and Local Binary Patterns. A multi-objective genetic algorithm is then used to search for the best ensemble, using as objective functions the minimization of both the error rate and the size of the ensemble. Experimental results on the JAFFE and Cohn-Kanade databases have shown the efficiency of the proposed strategy in finding powerful ensembles, which improve recognition rates by 5% to 10% over conventional approaches that employ single feature sets and single classifiers.
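The selection idea can be illustrated with a small sketch that scores candidate ensembles by (validation error, ensemble size) and keeps the Pareto-optimal ones. This is only a stand-in: the paper searches the space with a multi-objective genetic algorithm, whereas the sketch simply enumerates small subsets of an already-fitted classifier pool (a hypothetical list of estimators) and assumes non-negative integer class labels.

from itertools import combinations
import numpy as np

def ensemble_error(members, X_val, y_val):
    # Majority vote of the selected base classifiers (labels must be non-negative ints).
    votes = np.array([clf.predict(X_val) for clf in members])
    majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    return float(np.mean(majority != y_val))

def dominates(a, b):
    # a dominates b if it is no worse on both objectives and strictly better on one.
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def pareto_ensembles(pool, X_val, y_val, max_size=3):
    candidates = []
    for k in range(1, max_size + 1):
        for members in combinations(pool, k):
            candidates.append((ensemble_error(members, X_val, y_val), k, members))
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]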

5.
Combining feature reduction and case selection in building CBR classifiers
CBR systems that are built for classification problems are called CBR classifiers. This paper presents a novel and fast approach to building efficient and competent CBR classifiers that combines both feature reduction (FR) and case selection (CS). It makes three central contributions: 1) it develops a fast rough-set method based on relative attribute dependency among features to compute the approximate reduct; 2) it constructs and compares different case selection methods based on the similarity measure and the concepts of case coverage and case reachability; and 3) CBR classifiers built using a combination of the FR and CS processes reduce the training burden as well as the need to acquire domain knowledge. Overall experimental results on four real-life data sets show that the combined FR and CS method can preserve, and may even improve, the solution accuracy while substantially reducing the storage space. The case retrieval time is also greatly reduced because the resulting CBR classifier contains fewer cases with fewer features. The developed FR and CS combination method is also compared with kernel PCA and SVM techniques; their storage requirements, classification accuracy, and classification speed are presented and discussed.
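For reference, the rough-set dependency degree that underlies reduct computation is gamma_B(D) = |POS_B(D)| / |U|: the fraction of cases whose B-equivalence class is contained in a single decision class. The sketch below shows this classical definition and a greedy approximate-reduct search; the paper's faster relative-dependency formulation is not reproduced here.

from collections import defaultdict

def dependency(rows, feature_idx, label_idx):
    """rows: list of tuples of discrete values; returns gamma_B(D) in [0, 1]."""
    blocks = defaultdict(set)                      # equivalence classes under B
    for i, row in enumerate(rows):
        blocks[tuple(row[j] for j in feature_idx)].add(i)
    positive = sum(len(members) for members in blocks.values()
                   if len({rows[i][label_idx] for i in members}) == 1)
    return positive / len(rows)

def approximate_reduct(rows, n_features, label_idx, threshold=1.0):
    # Greedy forward selection until the selected features reach the target dependency.
    target = threshold * dependency(rows, list(range(n_features)), label_idx)
    selected = []
    while dependency(rows, selected, label_idx) < target:
        remaining = [f for f in range(n_features) if f not in selected]
        selected.append(max(remaining, key=lambda f: dependency(rows, selected + [f], label_idx)))
    return selected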

6.
Classifier ensembles are systems composed of a set of individual classifiers structured in parallel and a combination module, which is responsible for providing the final output of the system. In the design of these systems, diversity is considered one of the main aspects to take into account, since there is no gain in combining identical classification methods. One way of increasing diversity is to provide different datasets (patterns and/or attributes) to the individual classifiers; in this context, feature selection methods can be used to select subsets of attributes for the individual classifiers. This paper investigates the ReinSel method, a class-based feature selection method for ensemble systems. The method belongs to the filter approach to feature selection and, through a reinforcement procedure, chooses only the attributes that are important for a specific class.

7.
A remedy has been found for hierarchical classifiers that relieves the tendency toward misclassification and/or 'reject' decisions under the Kulkarni-Kanal S-admissible search strategy when empty bins are present in the histograms derived by discretizing feature ranges.

8.
This paper presents an experimental comparison of the nearest feature classifiers, using an approach based on binomial tests to evaluate their strengths and weaknesses. In addition, classification accuracies and the accuracy-dimensionality tradeoff are considered as comparison criteria. We extend two of the nearest feature classifiers to label the query point by a majority vote of the samples. Comparisons were carried out for face recognition using the ORL database, with the eigenface representation used for feature extraction. Experimental results showed that even though the classification accuracy of k-NFP outperforms k-NFL in some dimensions, these differences are not statistically significant.
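As background, the nearest feature line (NFL) classifier measures the distance from the query to the line through every pair of same-class prototypes and assigns the class with the smallest such distance. A minimal sketch of that distance follows (illustrative only; the paper's k-NFL and k-NFP extensions additionally take a majority vote over the k nearest lines or planes).

import numpy as np
from itertools import combinations

def nfl_distance(query, protos):
    """Smallest distance from `query` (d,) to any feature line of `protos` (n, d)."""
    best = np.inf
    for i, j in combinations(range(len(protos)), 2):
        a, b = protos[i], protos[j]
        direction = b - a
        t = np.dot(query - a, direction) / np.dot(direction, direction)
        foot = a + t * direction                   # projection of the query onto the line
        best = min(best, np.linalg.norm(query - foot))
    return best

def nfl_classify(query, class_protos):
    # class_protos: dict label -> (n_i, d) array with at least two prototypes per class.
    return min(class_protos, key=lambda c: nfl_distance(query, class_protos[c]))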

9.
Recent research has linked backpropagation (BP) and radial basis function (RBF) network classifiers, trained by minimizing the standard mean square error (MSE), to two main topics in statistical pattern recognition (SPR), namely Bayes decision theory and discriminant analysis. However, so far, the establishment of these links has resulted in only a few practical applications for training, using, and evaluating these classifiers. The paper aims at providing more of these applications. It first illustrates that while training a linear-output BP network, the explicit utilization of the network discriminant capability leads to an improvement in its classification performance. Then, for linear-output BP and RBF networks, the paper defines a new generalization measure that provides information about the closeness of the network classification performance to the optimal performance. The estimation procedure of this measure is described and its use as an efficient criterion for terminating the learning algorithm and choosing the network topology is explained. The paper finally proposes an upper bound on the number of hidden units needed by an RBF network classifier to achieve an arbitrary value of the minimized MSE. Experimental results are presented to validate all proposed applications.

10.
A high-accuracy iris recognition system is implemented in three main processing stages. An eye image is captured and the iris region is segmented from it, then enhanced with image processing so that the iris image is better suited to subsequent recognition. Next, iris feature vectors are built: during the unwrapping of the iris image, the problem of rotation invariance is solved, and direct linear discriminant analysis (D-LDA) is used for feature extraction so that the resulting feature vectors have maximum between-class distance and minimum within-class distance. Finally, several nearest feature classification methods and their recognition performance are examined, and the above methods are assembled into a complete iris recognition system. Experimental results show a recognition rate of 96.47% when the number of sample feature vectors is small; if the number of sample feature vectors per class is increased, the recognition rate of the system reaches 98.50%.

11.
Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, when the dimensionality of the dataset is extremely high, or when computational power is too limited to perform more complicated methods. In practice, it is recommended to start dimension reduction with simple methods such as feature rankings before applying more complex approaches. Single variable classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such feature rankings. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has a dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and for final classification does not always yield the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide more unbiased rankings and whether they improve final classification performance. Furthermore, we calculate the empirical prediction performance loss incurred by using the same classifier for SVC feature ranking and final classification, relative to the optimal choices.
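A minimal sketch of SVC ranking as described above: each feature is scored by the cross-validated accuracy of a classifier trained on that feature alone, and features are ranked by score. The base classifier here (a shallow decision tree) is an arbitrary placeholder, since the paper's point is precisely that this choice can bias the ranking.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def svc_ranking(X, y, make_clf=lambda: DecisionTreeClassifier(max_depth=3), cv=5):
    # Score feature j by the cross-validated accuracy of a classifier using X[:, j] alone.
    scores = np.array([cross_val_score(make_clf(), X[:, [j]], y, cv=cv).mean()
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1], scores        # indices of the best features first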

12.
If one is given two binary classifiers and a set of test data, it should be straightforward to determine which of the two classifiers is superior. Recent work, however, has called into question many of the methods heretofore accepted as standard for this task. In this paper, we analyze seven ways of determining whether one classifier is better than another, given the same test data. Five of these are long established, and two are relative newcomers. We review and extend work showing that one of these methods is clearly inappropriate, and then conduct an empirical analysis with a large number of datasets to evaluate the real-world implications of our theoretical analysis. Both our empirical and theoretical results converge strongly toward one of the newer methods.
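One widely used way to compare two classifiers on the same test data, not necessarily the method this paper ultimately favors, is McNemar's exact test on their disagreements; a small sketch:

import numpy as np
from scipy.stats import binom

def mcnemar_exact(y_true, pred_a, pred_b):
    y_true, pred_a, pred_b = map(np.asarray, (y_true, pred_a, pred_b))
    a_only = int(((pred_a == y_true) & (pred_b != y_true)).sum())  # only classifier A correct
    b_only = int(((pred_b == y_true) & (pred_a != y_true)).sum())  # only classifier B correct
    n = a_only + b_only
    if n == 0:
        return 1.0                    # the two classifiers never disagree
    # Exact two-sided p-value under H0: disagreements split 50/50 between A and B.
    return min(1.0, 2.0 * binom.cdf(min(a_only, b_only), n, 0.5))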

13.
This paper investigates feature selection based on rough sets for dimensionality reduction in Case-Based Reasoning classifiers. To be useful, Case-Based Reasoning systems should be able to manage imprecise, uncertain, and redundant data to retrieve the most relevant information from a potentially overwhelming quantity of data. Rough Set Theory has been shown to be an effective tool for data mining and for uncertainty management. This paper makes two central contributions: (1) it develops three strategies for feature selection, and (2) it proposes several measures for estimating attribute relevance based on Rough Set Theory. Although we concentrate on Case-Based Reasoning classifiers, the proposals are general enough to be applicable to a wide range of learning algorithms. We applied these proposals to twenty data sets from the UCI repository and examined the impact of feature selection on classification performance. Our evaluation shows that all three proposals benefit the basic Case-Based Reasoning system. They also prove robust in comparison with well-known feature selection strategies.

14.
Automatic speech recognition (ASR) systems play a vital role in human-machine interaction. ASR systems face the challenge of performance degradation due to inconsistency between the training and testing phases, which occurs when erroneous or redundant feature vectors are extracted and represented. This paper proposes three different feature combinations at the speech feature vector generation phase and two hybrid classifiers at the modeling phase. In the feature extraction phase, MFCC, RASTA-PLP, and PLP features are combined in different ways. In the modeling phase, the mean and variance are calculated to generate the inter- and intra-class feature vectors. These feature vectors are then passed to an optimization algorithm, combined with a traditional statistical technique, to generate refined feature vectors. This approach uses GA + HMM and DE + HMM techniques to produce refined model parameters. The experiments are conducted on datasets of large-vocabulary isolated Punjabi lexicons. Simulation results show a performance improvement using MFCC with the DE + HMM technique when compared with RASTA-PLP and PLP using the hybrid HMM classifiers.
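As an illustration of the feature-extraction step only (the GA/DE + HMM modeling stages are not shown), the sketch below computes MFCC vectors for one utterance with librosa and summarizes them by their per-coefficient mean and variance; the file name is a placeholder.

import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)      # hypothetical input file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, n_frames)
# Per-utterance mean/variance summary of the coefficient trajectories.
features = np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])
print(features.shape)                                 # (26,)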

15.
16.
Human–computer interaction (HCI) lies at the crossroads of many scientific areas including artificial intelligence, computer vision, face recognition, motion tracking, etc. It is argued that to truly achieve effective human–computer intelligent interaction, the computer should be able to interact naturally with the user, similar to the way human–human interaction takes place. In this paper, we discuss training probabilistic classifiers with labeled and unlabeled data for HCI applications. We provide an analysis that shows under what conditions unlabeled data can be used in learning to improve classification performance, and we investigate the implications of this analysis for a specific type of probabilistic classifier, Bayesian networks. Finally, we show how the resulting algorithms are successfully employed in facial expression recognition, face detection, and skin detection.
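A much simpler stand-in for the paper's Bayesian-network treatment of unlabeled data is scikit-learn's self-training wrapper around a probabilistic base classifier, where unlabeled samples carry the label -1. The sketch below only illustrates how unlabeled data enter training, not the paper's algorithms.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
y_partial = y.copy()
rng = np.random.default_rng(0)
y_partial[rng.random(len(y)) < 0.8] = -1              # hide 80% of the labels

semi = SelfTrainingClassifier(GaussianNB(), threshold=0.9).fit(X, y_partial)
supervised = GaussianNB().fit(X[y_partial != -1], y[y_partial != -1])
print("labeled-only accuracy:", supervised.score(X, y))
print("self-training accuracy:", semi.score(X, y))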

17.
Discrete classification problems abound in pattern recognition and data mining applications. One of the most common discrete rules is the discrete histogram rule. This paper presents exact formulas for the computation of the bias, variance, and RMS of the resubstitution and leave-one-out error estimators for the discrete histogram rule. We also describe an algorithm to compute the exact probability distribution of resubstitution and leave-one-out, as well as their deviations from the true error rate. Using a parametric Zipf model, we compute the exact performance of resubstitution and leave-one-out for varying expected true error, number of samples, and classifier complexity (number of bins). We compare this to approximate performance measures, computed by Monte Carlo sampling, of 10-repeated 4-fold cross-validation and the 0.632 bootstrap error estimator. Our results show that resubstitution is low-biased but much less variable than leave-one-out, and is effectively the superior error estimator of the two, provided classifier complexity is low. In addition, our results indicate that the overall performance of resubstitution, as measured by the RMS, can be substantially better than that of the 10-repeated 4-fold cross-validation estimator, and even comparable to the 0.632 bootstrap estimator, provided that classifier complexity is low and the expected error rates are moderate. In addition to the results discussed in the paper, we provide an extensive set of plots that can be accessed on a companion website at http://ee.tamu.edu/edward/exact_discrete.
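For concreteness, a small sketch of the two estimators for the discrete histogram rule: each sample falls into one of b bins, the rule predicts the majority class of its bin, resubstitution counts training errors directly, and leave-one-out removes the sample from its own bin before taking the majority. Ties are broken by lowest class index here, which the exact formulas in the paper handle more carefully.

import numpy as np

def resub_and_loo(bins, labels, n_bins, n_classes=2):
    counts = np.zeros((n_bins, n_classes), dtype=int)
    for x, y in zip(bins, labels):
        counts[x, y] += 1
    resub = loo = 0
    for x, y in zip(bins, labels):
        if counts[x].argmax() != y:        # resubstitution: bin majority vs. true label
            resub += 1
        held_out = counts[x].copy()
        held_out[y] -= 1                   # leave this sample out of its own bin
        if held_out.argmax() != y:
            loo += 1
    return resub / len(labels), loo / len(labels)

rng = np.random.default_rng(1)
bins, labels = rng.integers(0, 8, 30), rng.integers(0, 2, 30)
print(resub_and_loo(bins, labels, n_bins=8))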

18.
Multimedia Tools and Applications - Image features have been a hot research topic within the field of computer vision, with a wide scope of direct impacts on detection, recognition, image retrieval...

19.
Pattern Analysis and Applications - Classification is one of the most important topics in machine learning. However, most of these works focus on the two-class classification (i.e., classification...

20.
There are a variety of measures that describe classification performance with respect to different criteria, and they are often represented by numerical values. Psychologists have observed that human beings can only reasonably manage to process seven or so items of information at any one time. Hence, selecting the best classifier amongst a number of alternatives whose performances are represented by similar numerical values is a difficult problem for end users. To alleviate this difficulty, this paper presents a new method for linguistic evaluation of classifier performance. In particular, an innovative notion of fuzzy complex numbers (FCNs) is developed in an effort to represent and aggregate different evaluation measures conjunctively without necessarily integrating them. Such an approach preserves the underlying semantics of the different evaluation measures, thereby ensuring that the resulting ranking scores are readily interpretable and the inference easily explainable. The utility and applicability of this research are illustrated by means of an experiment that evaluates the performance of 16 classifiers on different benchmark datasets. The effectiveness of the proposed approach is compared with a conventional statistical approach. Experimental results show that the FCN-based performance evaluation provides an intuitively reliable and consistent means of assisting end users to make informed choices among the available classifiers.
