Search results: 24 articles in total (subscription access 24, open access 0), published 1994-2014; by subject: automation technology 19, light industry 2, radio/electronics 2, electrical engineering 1.
1.
Classifier Ensembles with a Random Linear Oracle (cited 1 time: 0 self-citations, 1 citation by others)
We propose a combined fusion-selection approach to classifier ensemble design. Each classifier in the ensemble is replaced by a mini-ensemble of a pair of subclassifiers with a random linear oracle to choose between the two. It is argued that this approach encourages extra diversity in the ensemble while allowing for high accuracy of the individual ensemble members. Experiments were carried out with 35 data sets from UCI and 11 ensemble models. Each ensemble model was examined with and without the oracle. The results showed that all ensemble methods benefited from the new approach, most markedly random subspace and bagging. A further experiment with seven real medical data sets demonstrates the validity of these findings outside the UCI data collection.
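The construction above is concrete enough for a short sketch. Below is a minimal, hypothetical Python implementation of one mini-ensemble (the class name and details are ours, not the authors' code): the random linear oracle is the hyperplane through the midpoint of two randomly chosen training points, each half-space gets its own subclassifier, and the oracle routes test points accordingly.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

class RandomLinearOracle:
    """One mini-ensemble: a random hyperplane splits the training set
    and a subclassifier is trained on each side (hypothetical sketch)."""

    def __init__(self, base=None, rng=None):
        self.base = base if base is not None else DecisionTreeClassifier()
        self.rng = rng if rng is not None else np.random.default_rng()

    def fit(self, X, y):
        # Hyperplane through the midpoint of two random training points,
        # perpendicular to the segment joining them.
        i, j = self.rng.choice(len(X), size=2, replace=False)
        self.w_ = X[i] - X[j]
        self.b_ = -self.w_ @ (X[i] + X[j]) / 2.0
        side = (X @ self.w_ + self.b_) >= 0
        self.models_ = []
        for mask in (side, ~side):
            # Fall back to the full set if a half-space is (nearly) empty.
            Xs, ys = (X[mask], y[mask]) if mask.sum() > 1 else (X, y)
            self.models_.append(clone(self.base).fit(Xs, ys))
        return self

    def predict(self, X):
        side = (X @ self.w_ + self.b_) >= 0
        out = self.models_[1].predict(X)        # negative side by default
        if side.any():
            out[side] = self.models_[0].predict(X[side])
        return out
```

In an ensemble method such as bagging, each member classifier would simply be replaced by one of these mini-ensembles.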
2.
Functional magnetic resonance imaging (fMRI) provides a spatially accurate measure of brain activity. Real-time classification allows the use of fMRI in neurofeedback experiments. With limited labelled data available, a fixed pre-trained classifier may be inaccurate. We propose that streaming fMRI data may be classified using a classifier ensemble which is updated through naive labelling, a protocol in which, in the absence of ground truth, updates are carried out using the label assigned by the classifier itself. We perform experiments on three fMRI datasets to demonstrate that naive labelling is able to improve upon a pre-trained initial classifier.
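A minimal sketch of the naive-labelling loop as described, assuming a scikit-learn classifier that supports partial_fit (GaussianNB is used purely for illustration; the function name is ours):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def naive_labelling_stream(clf, stream):
    """Label each incoming point with the current classifier, then
    update the classifier using its own prediction as the label."""
    assigned = []
    for x in stream:
        x = np.asarray(x).reshape(1, -1)
        y_hat = clf.predict(x)        # label assigned by the current model
        assigned.append(y_hat[0])
        clf.partial_fit(x, y_hat)     # naive update: trust the assigned label
    return assigned

# Illustrative use: pre-train on a small labelled set, then stream.
rng = np.random.default_rng(0)
X0, y0 = rng.normal(size=(20, 5)), rng.integers(0, 2, size=20)
clf = GaussianNB().fit(X0, y0)
labels = naive_labelling_stream(clf, rng.normal(size=(100, 5)))
```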
3.
4.
In this paper we propose three variants of a linear feature extraction technique based on AdaBoost for two-class classification problems. Unlike other feature extraction techniques, we make no assumptions about the distribution of the data. At each boosting step we select from a pool of linear projections the one that minimizes the weighted error. The three variants differ in the way the pool of individual projections is constructed (a sketch of the boosting loop is given after the author list below). Using nine real and two artificial data sets of different original dimensionality and sample size, we compare the performance of the three proposed techniques with three classical techniques for linear feature extraction: Fisher linear discriminant analysis (FLD), nonparametric discriminant analysis (NDA), and a recently proposed feature extraction method for heteroscedastic data based on the Chernoff criterion. Our results show that for data sets of relatively low original dimensionality, FLD appears to be both the most accurate and the most economical feature extraction method (giving just one dimension in the case of two classes). The techniques based on AdaBoost fare better than the classical techniques for data sets of large original dimensionality.
David Masip (corresponding author)
Ludmila I. Kuncheva
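The boosting loop of item 4 can be sketched as follows. This is a deliberate simplification, not the authors' algorithm: the pool here consists of random directions scored with quantile-threshold stumps, whereas the paper proposes three specific ways of constructing the pool; labels are assumed to be coded as -1/+1.

```python
import numpy as np

def boosted_projections(X, y, n_feats=5, pool_size=50, rng=None):
    """At each boosting step, pick from a pool of linear projections the
    one whose best stump minimizes the AdaBoost-weighted error."""
    rng = rng if rng is not None else np.random.default_rng()
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # boosting weights
    chosen = []
    for _ in range(n_feats):
        pool = rng.normal(size=(pool_size, d))   # random directions (simplified)
        best = None
        for v in pool:
            z = X @ v
            for thr in np.quantile(z, [0.25, 0.5, 0.75]):
                for sign in (1, -1):
                    pred = sign * np.where(z > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, v, pred)
        err, v, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # standard AdaBoost step
        w *= np.exp(-alpha * y * pred)           # re-weight the samples
        w /= w.sum()
        chosen.append(v)
    return np.array(chosen)   # rows are the extracted linear features

# New representation: X_new = X @ boosted_projections(X, y).T
```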
5.
We propose a probabilistic framework for classifier combination which gives rigorous optimality conditions (minimum classification error) for four combination methods: majority vote, weighted majority vote, the recall combiner, and the naive Bayes combiner. The framework is based on two assumptions: class-conditional independence of the classifier outputs and an assumption about the individual accuracies. The four combiners are derived from one another by progressively relaxing and then eliminating the second assumption; in parallel, the number of trainable parameters increases from one combiner to the next. Simulation studies reveal that if the parameter estimates are accurate and the first assumption is satisfied, the order of preference of the combiners is: naive Bayes, recall, weighted majority, and majority. By inducing label noise, we expose a caveat arising from the stability-plasticity dilemma. Experimental results with 73 benchmark data sets reveal that there is no definitive best combiner among the four candidates, with a slight preference for naive Bayes. This combiner was better for problems with a large number of fairly balanced classes, while weighted majority vote was better for problems with a small number of unbalanced classes.
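Two of the four combiners admit very short sketches, shown below under stated assumptions: hard, integer-coded label outputs, uniform class priors, and user-supplied accuracy estimates or confusion matrices. The log(p/(1-p)) weights are the optimal weighted-majority weights under the framework's assumptions; the function names are ours.

```python
import numpy as np

def weighted_majority(votes, acc):
    """votes: (L,) hard labels from L classifiers; acc: (L,) estimated
    accuracies. Each vote carries weight log(p / (1 - p))."""
    w = np.log(acc / (1.0 - acc))
    return max(np.unique(votes), key=lambda c: w[votes == c].sum())

def naive_bayes_combiner(votes, confusion):
    """confusion[i][k, s]: estimated probability that classifier i outputs
    label s when the true class is k (row-normalised confusion matrices).
    Assumes conditionally independent outputs and uniform priors."""
    log_post = np.zeros(confusion[0].shape[0])
    for cm, s in zip(confusion, votes):
        log_post += np.log(cm[:, s] + 1e-12)   # guard against log(0)
    return int(np.argmax(log_post))
```

Setting all weights equal in weighted_majority recovers the plain majority vote, mirroring how the combiners are derived from one another.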
6.
In the present work, a whole-grain oat substrate was fermented with lactic acid bacteria to obtain a drink combining the health benefits of a probiotic culture with the oat prebiotic beta-glucan. The levels of several factors affecting the fermentation process, such as starter culture concentration and oat flour and sucrose content, were established so as to complete a controlled fermentation in 8 h. The viable cell counts reached at the end of the process were about 7.5 × 10^10 cfu ml^-1. It was found that the addition of the sweeteners aspartame, sodium cyclamate, saccharin, and Huxol (12% cyclamate and 1.2% saccharin) had no effect on the dynamics of the fermentation process or on the viability of the starter culture during product storage. The beta-glucan content of the drink (0.31-0.36%) remained unchanged throughout both fermentation and storage. The shelf life of the oat drink was estimated at 21 days under refrigerated storage.
7.
We compare eleven methods for finding prototypes upon which to base the nearest prototype classifier. Four methods for prototype selection are discussed: Wilson+Hart (a condensation + error-editing method) and three types of combinatorial search (random search, genetic algorithm, and tabu search). Seven methods for prototype extraction are discussed: unsupervised vector quantization, supervised learning vector quantization (with and without training counters), decision surface mapping, a fuzzy version of vector quantization, c-means clustering, and bootstrap editing. These eleven methods can be usefully divided in two other ways: by whether they employ pre- or post-supervision, and by whether the number of prototypes found is user-defined or "automatic." Generalization error rates of the eleven methods are estimated on two synthetic and two real data sets. Offering the usual disclaimer that these are just a limited set of experiments, we feel confident in asserting that presupervised extraction methods offer a better chance for success to the casual user than postsupervised selection schemes (a sketch of one such scheme is given below). Finally, our calculations do not suggest that methods which find the "best" number of prototypes "automatically" are superior to methods for which the user simply specifies the number of prototypes. © 2001 John Wiley & Sons, Inc.
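As an illustration of the presupervised extraction family that fared best, here is a sketch (our construction, not the paper's code) of a nearest prototype classifier with prototypes extracted by clustering each class separately, in the spirit of c-means-based extraction; the number of prototypes per class is user-defined:

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_prototypes(X, y, per_class=3, seed=0):
    """Presupervised extraction: cluster each class separately and
    use the cluster centroids as labelled prototypes."""
    protos, labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=per_class, n_init=10, random_state=seed)
        km.fit(X[y == c])
        protos.append(km.cluster_centers_)
        labels += [c] * per_class
    return np.vstack(protos), np.array(labels)

def nearest_prototype_predict(protos, labels, X):
    """Assign each point the label of its nearest prototype."""
    d2 = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return labels[d2.argmin(axis=1)]
```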
8.
The abundance of unlabelled data alongside limited labelled data has provoked significant interest in semi-supervised learning methods. "Naive labelling" refers to the following simple strategy for using unlabelled data in on-line classification: a new data point is first labelled by the current classifier and then added to the training set together with the assigned label, and the classifier is updated before seeing the subsequent data point. Although the danger of a run-away classifier is obvious, versions of naive labelling pervade on-line adaptive learning. We study the asymptotic behaviour of naive labelling in the case of two Gaussian classes and one variable. The analysis shows that if the classifier model correctly assumes the underlying distribution of the problem, naive labelling will drive the parameters of the classifier towards their optimal values. However, if the model is not guessed correctly, the benefits are outweighed by the instability of the labelling strategy (run-away behaviour of the classifier). The results are based on exact calculations of the point of convergence, simulations, and experiments with 25 real data sets. The findings of our study are consistent with concerns about the general use of unlabelled data flagged up in the recent literature.
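A toy simulation in the spirit of the Gaussian analysis (our construction, with illustrative parameter values): two univariate Gaussian classes with unit variance, a classifier whose model matches the data, and naive-labelling updates of the class means. The decision boundary settles near the optimal threshold, consistent with the convergence claim above, even though the individual mean estimates are biased by the self-assigned labels.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([-1.0, 1.0])           # true class means (unit variance)
m = np.array([-0.5, 0.5])            # initial (mis)estimated means
count = np.array([10.0, 10.0])       # pseudo-counts from pre-training

for _ in range(100_000):
    c = rng.integers(2)              # true class, hidden from the classifier
    x = rng.normal(mu[c], 1.0)
    label = int(x > (m[0] + m[1]) / 2)          # classifier's own decision
    count[label] += 1
    m[label] += (x - m[label]) / count[label]   # naive update: trust the label

print("decision boundary:", (m[0] + m[1]) / 2)  # settles near the optimal 0.0
```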
9.
Functional magnetic resonance imaging serves to identify networks and regions in the brain engaged in various mental activities, represented as sets of voxels in the 3D image. It is important to be able to measure how similar two selected voxel sets are. The major flaw of the currently used correlation-based and overlap-based measures is that they disregard the spatial proximity of the selected voxel sets. Here we propose a measure for comparing two voxel sets, called Spatial Discrepancy, based upon the average Hausdorff distance. We demonstrate that Spatial Discrepancy can detect genuine similarities and differences where other commonly used measures fail to do so. A simulation experiment was carried out in which distorted copies of the same voxel sets were compared, varying the level of distortion. The experiment revealed that the proposed measure correlates better with the level of distortion than any of the other measures. Data from a 10-subject experiment were used to demonstrate the advantages of the Spatial Discrepancy measure in multi-subject studies.
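One common formulation of the average Hausdorff distance on which the measure is based is sketched below; the exact normalisation used in the paper may differ, so treat this as an assumption:

```python
import numpy as np

def average_hausdorff(A, B):
    """A, B: (n, 3) and (m, 3) arrays of voxel coordinates. For each point,
    take the distance to the nearest point of the other set; average the
    two directed means."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Unlike pure overlap counts, this value grows smoothly as one voxel set is displaced away from the other, which is what lets the measure track spatial proximity.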
10.
Rotation forest: A new classifier ensemble method (cited 8 times: 0 self-citations, 8 citations by others)
We propose a method for generating classifier ensembles based on feature extraction. To create the training data for a base classifier, the feature set is randomly split into K subsets (K is a parameter of the algorithm) and principal component analysis (PCA) is applied to each subset. All principal components are retained in order to preserve the variability information in the data. Thus, K axis rotations take place to form the new features for a base classifier. The idea of the rotation approach is to encourage individual accuracy and diversity within the ensemble simultaneously. Diversity is promoted through the feature extraction for each base classifier. Decision trees were chosen here because they are sensitive to rotation of the feature axes, hence the name "forest." Accuracy is sought by keeping all principal components and also by using the whole data set to train each base classifier. Using WEKA, we examined the Rotation Forest ensemble on a random selection of 33 benchmark data sets from the UCI repository and compared it with Bagging, AdaBoost, and Random Forest. The results were favorable to Rotation Forest and prompted an investigation into the diversity-accuracy landscape of the ensemble models. Diversity-error diagrams revealed that Rotation Forest ensembles construct individual classifiers which are more accurate than those in AdaBoost and Random Forest, and more diverse than those in Bagging, sometimes more accurate as well.
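The rotation step can be sketched for a single ensemble member as follows (our simplification, not the WEKA implementation): PCA is run on plain feature subsets rather than on the bootstrapped class subsets used in the paper, and centering is omitted, which only shifts the rotated features and does not affect the tree's splits.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def rotation_tree(X, y, K=3, rng=None):
    """Build one Rotation Forest member: split the features into K random
    subsets, run PCA on each, assemble a block rotation matrix R (keeping
    all components), and train a tree on the rotated data."""
    rng = rng if rng is not None else np.random.default_rng()
    d = X.shape[1]
    subsets = np.array_split(rng.permutation(d), K)
    R = np.zeros((d, d))
    for idx in subsets:
        # Assumes more samples than features in each subset.
        pca = PCA(n_components=len(idx)).fit(X[:, idx])
        R[np.ix_(idx, idx)] = pca.components_.T   # all components retained
    tree = DecisionTreeClassifier().fit(X @ R, y)
    return R, tree

# Ensemble prediction: majority vote of tree.predict(X_test @ R) over members.
```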