Similar Literature
 Found 20 similar documents (search time: 15 ms)
1.
Smile or happiness is one of the most universal facial expressions in our daily life. Smile detection in the wild is an important and challenging problem that has attracted growing attention from the affective computing community. In this paper, we present an efficient approach for smile detection in the wild with deep learning. Unlike previous work that extracted hand-crafted features from face images and trained a classifier in a two-step approach, deep learning can effectively combine feature learning and classification into a single model. In this study, we apply the deep convolutional network, a popular deep learning model, to this problem. We construct a deep convolutional network called Smile-CNN to perform feature learning and smile detection simultaneously. Experimental results demonstrate that although deep learning models are generally developed for tackling “big data,” they can also deal effectively with “small data.” We further investigate the discriminative power of the learned features, taken from the neuron activations of the last hidden layer of our Smile-CNN. By using the learned features to train an SVM or AdaBoost classifier, we show that the learned features have impressive discriminative ability. Experiments conducted on the GENKI4K database demonstrate that our approach achieves promising performance in smile detection.
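The evaluation step described above, feeding learned CNN features to an SVM, can be sketched as follows. This is illustrative only: the feature matrix here is random, standing in for last-hidden-layer activations of a Smile-CNN-like network, and all dimensions are invented.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical stand-in: rows play the role of last-hidden-layer neuron
# activations of a Smile-CNN-like network; labels are smile / non-smile.
rng = np.random.default_rng(0)
features = rng.normal(size=(400, 64))
labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)   # SVM on the "learned" features
print(accuracy_score(y_te, clf.predict(X_te)))
```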

2.
Optimal Bayesian linear classifiers have been studied in the literature for many decades. We demonstrate that all the known results consider only the scenario in which the quadratic polynomial has coincident roots. Indeed, we present a complete analysis of the case when the optimal classifier between two normally distributed classes is pairwise and linear. We focus on some special cases of the normal distribution with nonequal covariance matrices. We determine the conditions that the mean vectors and covariance matrices have to satisfy in order to obtain the optimal pairwise linear classifier. As opposed to the state of the art, in all the cases discussed here, the linear classifier is given by a pair of straight lines, which is a particular case of the general equation of second degree. We also provide empirical results, using synthetic data for the Minsky's paradox case, and demonstrate that the linear classifier achieves very good performance. Finally, we test our approach on real-life data obtained from the UCI machine learning repository. The empirical results show the superiority of our scheme over the traditional Fisher's discriminant classifier.

3.
We present a simple and yet effective approach for document classification to incorporate rationales elicited from annotators into the training of any off-the-shelf classifier. We empirically show on several document classification datasets that our classifier-agnostic approach, which makes no assumptions about the underlying classifier, can effectively incorporate rationales into the training of multinomial naïve Bayes, logistic regression, and support vector machines. In addition to being classifier-agnostic, we show that our method has comparable performance to previous classifier-specific approaches developed for incorporating rationales and feature annotations. Additionally, we propose and evaluate an active learning method tailored specifically for the learning with rationales framework.

4.
Recursive Automatic Bias Selection for Classifier Construction
Brodley, Carla E. Machine Learning, 1995, 20(1-2): 63-94.
The results of empirical comparisons of existing learning algorithms illustrate that each algorithm has a selective superiority; each is best for some but not all tasks. Given a data set, it is often not clear beforehand which algorithm will yield the best performance. In this article we present an approach that uses characteristics of the given data set, in the form of feedback from the learning process, to guide a search for a tree-structured hybrid classifier. Heuristic knowledge about the characteristics that indicate one bias is better than another is encoded in the rule base of the Model Class Selection (MCS) system. The approach does not assume that the entire instance space is best learned using a single representation language; for some data sets, choosing to form a hybrid classifier is a better bias, and MCS has the ability to determine these cases. The results of an empirical evaluation illustrate that MCS achieves classification accuracies equal to or higher than the best of its primitive learning components for each data set, demonstrating that the heuristic rules effectively select an appropriate learning bias.

5.
Active learning methods select informative instances to effectively learn a suitable classifier. Uncertainty sampling, a frequently utilized active learning strategy, selects instances about which the model is uncertain, but it does not consider the reasons why the model is uncertain. In this article, we present an evidence-based framework that can uncover the reasons why a model is uncertain on a given instance. Using the evidence-based framework, we discuss two reasons for a model's uncertainty: a model can be uncertain about an instance because it has strong but conflicting evidence for both classes, or it can be uncertain because it does not have enough evidence for either class. Our empirical evaluations on several real-world datasets show that distinguishing between these two types of uncertainty has a drastic impact on learning efficiency. We further provide empirical and analytical justifications as to why distinguishing between the two uncertainties matters.
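The two kinds of uncertainty can be made concrete for a linear classifier by splitting its score into positive and negative evidence, as in this minimal sketch (weights and inputs are invented; the paper's actual evidence framework may differ):

```python
import numpy as np

def evidence(weights, x):
    """Split a linear model's score into total positive and negative evidence."""
    contrib = weights * x
    return contrib[contrib > 0].sum(), -contrib[contrib < 0].sum()

w = np.array([2.0, -1.5, 0.5, -0.5])       # invented model weights

# Conflicting evidence: strong pull toward both classes, score near zero.
print(evidence(w, np.array([1.0, 1.0, 1.0, 1.0])))
# Insufficient evidence: only a weak pull toward either class.
print(evidence(w, np.array([0.1, 0.1, 0.1, 0.1])))
```

Both instances are uncertain (the score is near zero), but for different reasons, which the evidence split exposes.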

6.
We set out in this study to review a vast amount of recent literature on machine learning (ML) approaches to predicting financial distress (FD), including supervised, unsupervised and hybrid supervised–unsupervised learning algorithms. Four supervised ML models including the traditional support vector machine (SVM), recently developed hybrid associative memory with translation (HACT), hybrid GA-fuzzy clustering and extreme gradient boosting (XGBoost) were compared in prediction performance to the unsupervised classifier deep belief network (DBN) and the hybrid DBN-SVM model, whereby a total of sixteen financial variables were selected from the financial statements of the publicly-listed Taiwanese firms as inputs to the six approaches. Our empirical findings, covering the 2010–2016 sample period, demonstrated that among the four supervised algorithms, the XGBoost provided the most accurate FD prediction. Moreover, the hybrid DBN-SVM model was able to generate more accurate forecasts than the use of either the SVM or the classifier DBN in isolation.

7.
In active learning, the learner must measure the importance of unlabeled samples in a large dataset and iteratively select the best one. This sample selection process can be treated as a decision making problem, which evaluates, ranks, and makes choices from a finite set of alternatives. In many decision making problems, multiple criteria are applied because they perform better than a single criterion. Motivated by these facts, an active learning model based on multi-criteria decision making (MCDM) is proposed in this paper. After comparing any two unlabeled samples, a preference preorder is determined for each criterion. The dominated index and the dominating index are then defined and calculated to evaluate the informativeness of unlabeled samples, providing an effective metric for sample selection. On the other hand, under the multiple-instance learning (MIL) setting, instances are grouped into bags; a bag is negative only if all of its instances are negative, and positive otherwise. Multiple-instance active learning (MIAL) aims to select and label the most informative bags from numerous unlabeled ones, and to learn a MIL classifier that accurately predicts unseen bags while requesting as few labels as possible. It adopts a MIL algorithm as the base classifier and follows an active learning procedure. To balance learning efficiency and generalization capability, the proposed active learning model is restricted to a specific algorithm under the MIL setting. Experimental results demonstrate the effectiveness of the proposed method.
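A hypothetical Pareto-style reading of the dominated and dominating indices — counting, for each alternative, how many others it dominates across all criteria and how many dominate it — might look like this (the scores are invented; the paper's preference preorders may be defined differently):

```python
import numpy as np

def dominance_indices(scores):
    """For each alternative, count how many others it dominates (dominating
    index) and how many dominate it (dominated index) across all criteria."""
    n = scores.shape[0]
    dominating = np.zeros(n, dtype=int)
    dominated = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(scores[i] >= scores[j]) \
                      and np.any(scores[i] > scores[j]):
                dominating[i] += 1
                dominated[j] += 1
    return dominating, dominated

# Rows: unlabeled samples; columns: informativeness criteria.
scores = np.array([[0.9, 0.8], [0.5, 0.4], [0.7, 0.9]])
print(dominance_indices(scores))
```

Samples with a high dominating index and a low dominated index would be preferred for querying.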

8.
9.
In this work, we study the problem of cross-domain video concept detection, where the distributions of the source and target domains differ. Active learning can be used to iteratively refine a source domain classifier by querying labels for a few samples in the target domain, which reduces the labeling effort. However, traditional active learning methods, which often use a discriminative query strategy that queries the samples most ambiguous to the source domain classifier, fail when the distribution difference between the two domains is too large. In this paper, we tackle this problem by proposing a joint active learning approach that combines a novel generative query strategy with the existing discriminative one. The approach adaptively fits the distribution difference and shows higher robustness than approaches using a single strategy. Experimental results on two synthetic datasets and the TRECVID video concept detection task highlight the effectiveness of our joint active learning approach.

10.
The recently proposed ImageNet dataset consists of several million images, each annotated with a single object category. These annotations may be imperfect, in the sense that many images contain multiple objects belonging to the label vocabulary. In other words, we have a multi-label problem but the annotations include only a single label (which is not necessarily the most prominent). Such a setting motivates the use of a robust evaluation measure, which allows for a limited number of labels to be predicted and, so long as one of the predicted labels is correct, the overall prediction should be considered correct. This is indeed the type of evaluation measure used to assess algorithm performance in a recent competition on ImageNet data. Optimizing such types of performance measures presents several hurdles even with existing structured output learning methods. Indeed, many of the current state-of-the-art methods optimize the prediction of only a single output label, ignoring this ‘structure’ altogether. In this paper, we show how to directly optimize continuous surrogates of such performance measures using structured output learning techniques with latent variables. We use the output of existing binary classifiers as input features in a new learning stage which optimizes the structured loss corresponding to the robust performance measure. We present empirical evidence that this allows us to ‘boost’ the performance of binary classification on a variety of weakly-supervised labeling problems defined on image taxonomies.
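The robust evaluation measure described above — a prediction counts as correct if the true label appears among the k highest-scoring labels — is easy to state in code (the scores and k are invented toy values):

```python
import numpy as np

def topk_accuracy(score_matrix, true_labels, k=5):
    """A prediction is correct if the true label is among the k
    highest-scoring labels for that example."""
    topk = np.argsort(-score_matrix, axis=1)[:, :k]
    return np.mean([t in row for t, row in zip(true_labels, topk)])

scores = np.array([[0.1, 0.6, 0.3],
                   [0.8, 0.1, 0.1]])
print(topk_accuracy(scores, [2, 0], k=2))
```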

11.
Ensemble learning is a data mining method that can effectively improve the performance of a classification system. We apply a dynamic classifier ensemble selection algorithm to the intelligent evaluation of cigarette sensory quality: a classifier pool containing multiple base classifiers is generated; classifiers that meet the requirements are selected according to how the base classifiers perform in the neighborhood of the test sample; and the selected classifiers produce the final prediction. To verify the effectiveness of the method, comparative experiments were carried out on a historical cigarette sensory evaluation dataset provided by a domestic tobacco company. The experimental results show that, compared with other methods, this method achieves clearly better results.
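A minimal sketch of dynamic classifier selection in this spirit, assuming a KNORA-like rule (keep the pool members that are accurate on the test sample's nearest validation neighbors, then majority-vote). The data, pool size, and selection threshold are all invented:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))              # invented validation data
y = (X[:, 0] > 0).astype(int)

# Classifier pool: trees trained on bootstrap samples.
pool = [DecisionTreeClassifier(random_state=s).fit(*resample(X, y, random_state=s))
        for s in range(5)]

def dcs_predict(x, k=7):
    """Keep pool members accurate on the k nearest validation neighbors of x,
    then majority-vote their predictions."""
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    idx = nn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
    votes = [c.predict(x.reshape(1, -1))[0]
             for c in pool if c.score(X[idx], y[idx]) >= 0.5]
    if not votes:                          # fall back to the whole pool
        votes = [c.predict(x.reshape(1, -1))[0] for c in pool]
    return int(round(np.mean(votes)))

print(dcs_predict(np.array([1.0, 0.0, 0.0, 0.0])))
```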

12.
In classification problems, active learning is often adopted to alleviate laborious human labeling effort by finding the most informative samples to query labels for. One of the most popular query strategies is selecting the samples about which the current classifier is most uncertain. The performance of such an active learning process relies heavily on the classifier learned before each query. Thus, stepwise classifier model/parameter selection is quite critical, yet rarely studied in the literature. In this paper, we propose a novel active learning support vector machine algorithm with adaptive model selection. In this algorithm, before each new query, we trace the full solution path of the base classifier and then perform efficient model selection using the unlabeled samples. This strategy significantly improves active learning efficiency at comparatively inexpensive computational cost. Empirical results on both artificial and real-world benchmark data sets show the encouraging gains brought by the proposed algorithm in terms of both classification accuracy and computational cost.
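The generic uncertainty-sampling loop that this work builds on (without the paper's solution-path model selection) can be sketched as follows, on invented linearly separable data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

labeled = list(range(10))          # small seed set
unlabeled = list(range(10, 300))

for _ in range(20):                # query 20 labels
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[unlabeled])[:, 1]
    # Most uncertain sample: predicted probability closest to 0.5.
    q = unlabeled[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(q)
    unlabeled.remove(q)

print(clf.score(X[unlabeled], y[unlabeled]))
```

The paper's contribution would slot in just before each `fit`: choosing the classifier's hyperparameters adaptively rather than keeping them fixed.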

13.
Applying quantitative models for forecasting and assisting investment decision making has become more indispensable in business practice than ever before. Improving forecasting accuracy, especially in time series forecasting, is an important yet often difficult task facing forecasters. Both theoretical and empirical findings indicate that integrating different models can effectively improve predictive performance, especially when the models in the ensemble are quite different. In the literature, several hybrid techniques have been proposed that combine different time series models in order to overcome the deficiencies of single models and yield more accurate hybrid models. In this paper, in contrast to traditional hybrid models, a new methodology is proposed for constructing a new class of hybrid models using a time series model as the basis model and a classifier. Since classifiers cannot be applied on their own as forecasting models for continuous problems, in the first stage of the proposed model a forecasting model is used as the basis model. The estimated values of the basis model are then modified in the second stage, based on the distinguished trend of the residuals of the basis model and the optimum step length, which are calculated by a classifier model and a mathematical programming model, respectively. Empirical results with three well-known real data sets indicate that the proposed model can construct a more accurate hybrid model than its basis time series model. It can therefore be used as an appropriate alternative for forecasting tasks, especially when higher forecasting accuracy is needed.
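A toy sketch of the two-stage idea, assuming a plain trend regression as the basis model and a decision tree as the residual-trend classifier, with a crude average-magnitude stand-in for the optimum step length (the paper computes it with a mathematical programming model):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

# Toy series: linear trend plus a repeating bump the base model misses.
t = np.arange(60, dtype=float)
series = 0.5 * t + np.tile([0.0, 2.0, 0.0], 20)

# Stage 1: basis time series model (here a plain trend regression).
base = LinearRegression().fit(t.reshape(-1, 1), series)
fitted = base.predict(t.reshape(-1, 1))
residuals = series - fitted

# Stage 2: a classifier predicts the sign (trend) of the residual from the
# position in the cycle; a fixed step length then corrects the forecast.
phase = (t % 3).astype(int).reshape(-1, 1)
trend_cls = DecisionTreeClassifier().fit(phase, np.sign(residuals).astype(int))
step = np.abs(residuals).mean()        # crude stand-in for the optimum step
corrected = fitted + step * trend_cls.predict(phase)

print(np.abs(series - corrected).mean() < np.abs(series - fitted).mean())
```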

14.
Yin, Chuanlong; Zhu, Yuefei; Liu, Shengli; Fei, Jinlong; Zhang, Hetong. The Journal of Supercomputing, 2020, 76(9): 6690-6719.

The performance of classifiers has a direct impact on the effectiveness of an intrusion detection system, so most researchers aim to improve the detection performance of classifiers. However, classifiers can obtain only limited useful information from a limited number of labeled training samples, which usually hurts their generalization. To enhance network intrusion detection classifiers, we resort to adversarial training, and a novel supervised learning framework using a generative adversarial network for improving the performance of the classifier is proposed in this paper. The generative model in our framework continuously generates complementary labeled samples for adversarial training and assists the classifier with classification, while the classifier identifies the different categories. Meanwhile, the loss function is re-derived, and several empirical training strategies are proposed to improve the stability of the supervised learning framework. Experimental results show that the adversarially trained classifier improves the performance indicators of intrusion detection. The proposed framework provides a feasible way to enhance the performance and generalization of the classifier.


15.
In this paper, we address the problem of learning a classifier for spoken character classification. We present a solution based on the Group Method of Data Handling (GMDH) learning paradigm for developing a robust abductive network classifier. We improve the reliability of the classification process by introducing the concept of a multiple abductive network classifier system. We evaluate the performance of the proposed classifier using three different speech datasets: spoken Arabic digits, spoken English letters, and spoken Pashto digits. The performance of the proposed classifier surpasses that reported in the literature for other classification techniques on the same speech datasets.

16.
Deep learning techniques for Sentiment Analysis have become very popular. They provide automatic feature extraction and both richer representation capabilities and better performance than traditional feature based techniques (i.e., surface methods). Traditional surface approaches are based on complex manually extracted features, and this extraction process is a fundamental question in feature driven methods. These long-established approaches can yield strong baselines, and their predictive capabilities can be used in conjunction with the emerging deep learning methods. In this paper we seek to improve the performance of deep learning techniques by integrating them with traditional surface approaches based on manually extracted features. The contributions of this paper are sixfold. First, we develop a deep learning based sentiment classifier using a word embeddings model and a linear machine learning algorithm. This classifier serves as a baseline to compare to subsequent results. Second, we propose two ensemble techniques which aggregate our baseline classifier with other surface classifiers widely used in Sentiment Analysis. Third, we also propose two models for combining both surface and deep features to merge information from several sources. Fourth, we introduce a taxonomy for classifying the different models found in the literature, as well as the ones we propose. Fifth, we conduct several experiments to compare the performance of these models with the deep learning baseline. For this, we use seven public datasets that were extracted from the microblogging and movie reviews domain. Finally, as a result, a statistical study confirms that the performance of these proposed models surpasses that of our original baseline on F1-Score.

17.
We address the problem of estimating discrete, continuous, and conditional joint densities online, i.e., the algorithm is only provided the current example and its current estimate for its update. The family of proposed online density estimators, estimation of densities online (EDO), uses classifier chains to model dependencies among features, where each classifier in the chain estimates the probability of one particular feature. Because a single chain may not provide a reliable estimate, we also consider ensembles of classifier chains and ensembles of weighted classifier chains. For all density estimators, we provide consistency proofs and propose algorithms to perform certain inference tasks. The empirical evaluation of the estimators is conducted in several experiments and on datasets of up to several million instances. In the discrete case, we compare our estimators to density estimates computed by Bayesian structure learners. In the continuous case, we compare them to a state-of-the-art online density estimator. Our experiments demonstrate that, even though designed to work online, EDO delivers estimators of competitive accuracy compared to other density estimators (batch Bayesian structure learners on discrete datasets and the state-of-the-art online density estimator on continuous datasets). Besides achieving similar performance in these cases, EDO is also able to estimate densities with mixed types of variables, i.e., discrete and continuous random variables.
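The chain decomposition at the heart of this approach — estimating the joint density as a product of per-feature conditionals p(x_i | x_1..x_{i-1}) — can be illustrated for discrete data with simple smoothed counts (this is a sketch of the decomposition only, not the EDO algorithm itself):

```python
from itertools import product

def chain_density(data, x, alpha=1.0):
    """Estimate p(x) as the product of per-feature conditionals
    p(x_i | x_1..x_{i-1}), each from Laplace-smoothed counts --
    one 'link' of a chain per feature."""
    p = 1.0
    for i in range(len(x)):
        prefix = tuple(x[:i])
        num = sum(1 for row in data if tuple(row[:i]) == prefix and row[i] == x[i])
        den = sum(1 for row in data if tuple(row[:i]) == prefix)
        values = {row[i] for row in data}
        p *= (num + alpha) / (den + alpha * len(values))
    return p

data = [(0, 0), (0, 1), (1, 1), (1, 1)]
total = sum(chain_density(data, x) for x in product([0, 1], repeat=2))
print(total)   # the estimated probabilities over all outcomes sum to 1
```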

18.
Although several models have been suggested in the literature to describe the relationship between learning and forgetting, this relationship is still not fully understood. This paper proposes the Depletion–Power–Integration–Latency (DPIL) model, which assumes that performing a task repetitively depletes the available encoding resources for that task. The DPIL model fitted five empirical datasets well, reflecting different procedural/episodic learning settings, experimental paradigms (massed/spaced repetition, study time), tests (accuracy, latency), and retention intervals. The model was also fitted to empirical data collected from a quality inspection station at an industrial firm. The DPIL model has the advantage of predicting the length of the final break (interruption) that optimizes performance. This finding is important as it has many industrial engineering applications. The numerical results in this paper show that performance improves as the length of each break preceding the final break increases. This is consistent with empirical findings that moderately short breaks are optimal for performance.

19.
The abundance of unlabelled data alongside limited labelled data has provoked significant interest in semi-supervised learning methods. “Naïve labelling” refers to the following simple strategy for using unlabelled data in on-line classification. A new data point is first labelled by the current classifier and then added to the training set together with the assigned label. The classifier is updated before seeing the subsequent data point. Although the danger of a run-away classifier is obvious, versions of naïve labelling pervade on-line adaptive learning. We study the asymptotic behaviour of naïve labelling in the case of two Gaussian classes and one variable. The analysis shows that if the classifier model correctly assumes the underlying distribution of the problem, naïve labelling will drive the parameters of the classifier towards their optimal values. However, if the model is not guessed correctly, the benefits are outweighed by the instability of the labelling strategy (run-away behaviour of the classifier). The results are based on exact calculations of the point of convergence, simulations, and experiments with 25 real data sets. The findings in our study are consistent with concerns about general use of unlabelled data, flagged up in the recent literature.
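A one-variable simulation of naïve labelling under a correctly guessed model (two unit-variance Gaussian classes, nearest-mean labelling): the class-mean estimates, though deliberately mis-set at the start, drift so that the decision threshold approaches its optimal value of 0. All constants here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two unit-variance Gaussian classes; the model family is guessed correctly,
# so naive labelling should drive the threshold toward the optimum (0 here).
mu0, mu1 = -1.0, 1.0                 # true class means
m0, m1 = -0.5, 1.5                   # deliberately mis-set initial estimates
n0 = n1 = 10.0                       # pseudo-counts behind each estimate

for _ in range(5000):
    x = rng.normal(mu0 if rng.random() < 0.5 else mu1, 1.0)
    # Label x with the current classifier, then update that class's mean.
    if abs(x - m0) < abs(x - m1):
        n0 += 1.0
        m0 += (x - m0) / n0
    else:
        n1 += 1.0
        m1 += (x - m1) / n1

print((m0 + m1) / 2)                 # decision threshold, drifting toward 0
```

With a mis-specified model family the same loop can instead run away, which is the instability the abstract warns about.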

20.
Learning from positive and unlabeled examples (PU learning) is a partially supervised classification approach frequently used in Web and text retrieval systems. The merit of PU learning is that it can achieve good performance with less manual labeling. Motivated by transfer learning, this paper presents a novel method that transfers ‘outdated data’ into the PU learning process. We first propose a way to measure the strength of features, and we select the strong features and the weak features according to their strength. Then, we extract the reliable negative examples and the candidate negative examples using the strong and weak features (Transfer‐1DNF). Finally, we construct a classifier called weighted voting iterative support vector machine (SVM), made up of several subclassifiers obtained by applying SVM iteratively, with each subclassifier assigned a weight in each iteration. We conduct experiments on two datasets, 20 Newsgroups and Reuters‐21578, and compare our method with three baseline algorithms: positive example‐based learning, a weighted voting classifier, and SVM. The results show that our proposed method Transfer‐1DNF extracts more reliable negative examples with lower error rates, and our classifier outperforms the baseline algorithms. Copyright © 2016 John Wiley & Sons, Ltd.
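A minimal 1DNF-style sketch of the reliable-negative extraction step (not the paper's Transfer-1DNF): words that occur much more often in positive documents than in the unlabeled set are treated as strong positive features, and unlabeled documents containing none of them become reliable negatives. The documents below are invented:

```python
# Bag-of-words documents as sets of terms (invented toy data).
positives = [{"goal", "match", "team"}, {"match", "score"}, {"team", "goal"}]
unlabeled = [{"election", "vote"}, {"match", "election"}, {"goal", "score"},
             {"market", "stock"}]

# Feature strength: relative document frequency in positives vs. unlabeled.
vocab = set().union(*positives, *unlabeled)

def freq(word, docs):
    return sum(word in d for d in docs) / len(docs)

strong = {w for w in vocab if freq(w, positives) > freq(w, unlabeled)}

# Reliable negatives: unlabeled documents with no strong positive feature.
reliable_negatives = [d for d in unlabeled if not d & strong]
print(reliable_negatives)
```

The reliable negatives would then seed the iterative SVM stage described in the abstract.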


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号