1.
Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several variants in conjunction with a decision tree inducer (three variants) and a Naive-Bayes inducer. The purpose of the study is to improve our understanding of why and when these algorithms, which use perturbation, reweighting, and combination techniques, affect classification error. We provide a bias and variance decomposition of the error to show how different methods and variants influence these two terms. This allowed us to determine that Bagging reduced variance of unstable methods, while boosting methods (AdaBoost and Arc-x4) reduced both the bias and variance of unstable methods but increased the variance for Naive-Bayes, which was very stable. We observed that Arc-x4 behaves differently than AdaBoost if reweighting is used instead of resampling, indicating a fundamental difference. Voting variants, some of which are introduced in this paper, include: pruning versus no pruning, use of probabilistic estimates, weight perturbations (Wagging), and backfitting of data. We found that Bagging improves when probabilistic estimates in conjunction with no-pruning are used, as well as when the data was backfit. We measure tree sizes and show an interesting positive correlation between the increase in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting algorithms are explored, including numerical instabilities and underflows. We use scatterplots that graphically show how AdaBoost reweights instances, emphasizing not only hard areas but also outliers and noise.
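A minimal sketch of the kind of comparison the study describes, using scikit-learn's stock implementations rather than the paper's own inducers (an assumption; the dataset, ensemble sizes, and hyperparameters are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                           random_state=0).fit(X_tr, y_tr)
boosted = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# With an unstable base learner, bagging mainly cuts variance;
# boosting can cut bias as well, per the study's decomposition.
for name, clf in [("tree", single), ("bagging", bagged), ("adaboost", boosted)]:
    print(name, "test error:", 1 - clf.score(X_te, y_te))
```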
2.
Bagging and boosting are methods that generate a diverse ensemble of classifiers by manipulating the training data given to a base learning algorithm. Breiman has pointed out that they rely for their effectiveness on the instability of the base learning algorithm. An alternative approach to generating an ensemble is to randomize the internal decisions made by the base algorithm. This general approach has been studied previously by Ali and Pazzani and by Dietterich and Kong. This paper compares the effectiveness of randomization, bagging, and boosting for improving the performance of the decision-tree algorithm C4.5. The experiments show that in situations with little or no classification noise, randomization is competitive with (and perhaps slightly superior to) bagging but not as accurate as boosting. In situations with substantial classification noise, bagging is much better than boosting, and sometimes better than randomization.
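The randomization studied here replaces C4.5's deterministic split choice with a uniform draw from the best candidate splits. A toy sketch of just that decision rule (the choose_split helper and its data structures are hypothetical; the pool of 20 best splits follows the paper's description):

```python
import random

def choose_split(candidate_splits, k=20, rng=random):
    """candidate_splits: list of (gain, split) pairs for one tree node.

    Plain C4.5 would return the single highest-gain split; the
    randomized variant draws uniformly from the k best candidates.
    """
    ranked = sorted(candidate_splits, key=lambda s: s[0], reverse=True)
    return rng.choice(ranked[:k])

splits = [(0.90, "x1<3"), (0.88, "x2<1"), (0.50, "x3<7"), (0.10, "x1<9")]
print(choose_split(splits, k=2))  # one of the two best splits, at random
```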
3.
Constructing support vector machine ensemble
Hyun-Chul Kim, Shaoning Pang, Hong-Mo Je, Daijin Kim, Sung Yang Bang. Pattern Recognition, 2003, 36(12): 2757-2767
Even though the support vector machine (SVM) has been proposed to provide good generalization performance, the classification result of a practically implemented SVM is often far from the theoretically expected level, because implementations rely on approximate algorithms owing to the high time and space complexity. To improve the limited classification performance of the real SVM, we propose using an SVM ensemble with bagging (bootstrap aggregating) or boosting. In bagging, each individual SVM is trained independently on training samples chosen randomly via a bootstrap technique. In boosting, each individual SVM is trained on training samples chosen according to a sample probability distribution that is updated in proportion to each sample's classification error. In both bagging and boosting, the trained individual SVMs are aggregated into a collective decision in several ways, such as majority voting, least-squares estimation-based weighting, and double-layer hierarchical combining. Various simulation results for IRIS data classification, hand-written digit recognition, and fraud detection show that the proposed SVM ensemble with bagging or boosting greatly outperforms a single SVM in terms of classification accuracy.
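A minimal sketch of the bagged-SVM variant with majority voting, using scikit-learn's SVC on the IRIS data the paper mentions (the RBF kernel and ensemble size are illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagging: train each SVM independently on a bootstrap replicate.
svms = []
for _ in range(15):
    idx = rng.integers(0, len(X_tr), size=len(X_tr))  # sample with replacement
    svms.append(SVC(kernel="rbf").fit(X_tr[idx], y_tr[idx]))

# Majority voting: each member casts one vote per test point.
votes = np.stack([m.predict(X_te) for m in svms])        # (n_members, n_test)
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble accuracy:", (majority == y_te).mean())
```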
4.
Tree Induction for Probability-Based Ranking
Tree induction is one of the most effective and widely used methods for building classification models. However, many applications require cases to be ranked by the probability of class membership. Probability estimation trees (PETs) have the same attractive features as classification trees (e.g., comprehensibility, accuracy and efficiency in high dimensions and on large data sets). Unfortunately, decision trees have been found to provide poor probability estimates. Several techniques have been proposed to build more accurate PETs, but, to our knowledge, there has not been a systematic experimental analysis of which techniques actually improve the probability-based rankings, and by how much. In this paper we first discuss why the decision-tree representation is not intrinsically inadequate for probability estimation. Inaccurate probabilities are partially the result of decision-tree induction algorithms that focus on maximizing classification accuracy and minimizing tree size (for example via reduced-error pruning). Larger trees can be better for probability estimation, even if the extra size is superfluous for accuracy maximization. We then present the results of a comprehensive set of experiments, testing some straightforward methods for improving probability-based rankings. We show that using a simple, common smoothing method, the Laplace correction, uniformly improves probability-based rankings. In addition, bagging substantially improves the rankings, and is even more effective for this purpose than for improving accuracy. We conclude that PETs, with these simple modifications, should be considered when rankings based on class-membership probability are required.
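The Laplace correction at a leaf replaces the raw frequency estimate k/n with (k+1)/(n+C), where C is the number of classes, pulling small-sample estimates away from 0 and 1. A small sketch (the leaf counts are illustrative):

```python
import numpy as np

def laplace_correct(class_counts):
    """Smoothed class-membership probabilities at a tree leaf:
    (k_c + 1) / (n + C) for each class c, instead of k_c / n."""
    counts = np.asarray(class_counts, dtype=float)
    return (counts + 1) / (counts.sum() + len(counts))

# A pure leaf with only 3 training cases: the raw estimate says
# P(class 1) = 1.0; the Laplace-corrected estimate is a more cautious 4/5.
print(laplace_correct([0, 3]))   # -> [0.2, 0.8]
```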
5.
An Analysis of the Effectiveness of AdaBoost
In machine learning, the weak-learnability theorem states that as long as a weak learning algorithm only slightly better than random guessing can be found, a strong learning algorithm with arbitrarily small error can be constructed from it. The most widely used methods built on this theory are AdaBoost and Bagging. However, the error analyses of AdaBoost and Bagging have not been unified; the training error used by AdaBoost is not the true training error but an error defined over sample weights, and whether this is reasonable requires explanation; the conditions that guarantee AdaBoost's effectiveness also need an intuitive account to guide its use. After adjusting Bagging's error rate and adopting weighted voting, this paper unifies the algorithmic flow and error analysis of AdaBoost and Bagging and, building on an explanation and proof of the weak-learnability theorem via the law of large numbers, analyzes the effectiveness of AdaBoost. It shows that the purpose of AdaBoost's sample-weight adjustment strategy is to keep the distribution of correctly classified samples uniform, and that the training error it uses is equal in probability to the true training error; it also states the principles to follow when training the weak learner so that AdaBoost remains effective. This not only explains the effectiveness of AdaBoost but also offers a way to construct new ensemble learning algorithms. Finally, modeled on AdaBoost, some suggestions are made for Bagging's training-set selection strategy.
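A sketch of the AdaBoost.M1 loop the analysis refers to, making the two quantities explicit: the weighted training error seen by each round and the ordinary 0/1 training error of the voted ensemble (the stumps and data are illustrative, not the paper's code):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y01 = make_classification(n_samples=500, random_state=1)
y = 2 * y01 - 1                      # AdaBoost works with labels in {-1, +1}
n = len(y)
w = np.full(n, 1.0 / n)              # uniform initial sample weights
score = np.zeros(n)                  # running weighted vote

for t in range(20):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    h = stump.predict(X)
    eps = w[h != y].sum()            # *weighted* error, not the true one
    if eps <= 0 or eps >= 0.5:       # weak-learning assumption violated
        break
    alpha = 0.5 * np.log((1 - eps) / eps)
    w *= np.exp(-alpha * y * h)      # up-weight mistakes, down-weight hits
    w /= w.sum()                     # mistakes now hold half the total mass
    score += alpha * h
    true_err = (np.sign(score) != y).mean()
    print(f"round {t}: weighted eps={eps:.3f}, ensemble 0/1 error={true_err:.3f}")
```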
6.
Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words
We present a novel unsupervised learning method for human action categories. A video sequence is represented as a collection of spatial-temporal words by extracting space-time interest points. The algorithm automatically learns the probability distributions of the spatial-temporal words and the intermediate topics corresponding to human action categories. This is achieved by using latent topic models such as the probabilistic Latent Semantic Analysis (pLSA) model and Latent Dirichlet Allocation (LDA). Our approach can handle noisy feature points arising from dynamic backgrounds and moving cameras, owing to the use of probabilistic models. Given a novel video sequence, the algorithm can categorize and localize the human action(s) contained in the video. We test our algorithm on three challenging datasets: the KTH human motion dataset, the Weizmann human action dataset, and a recent dataset of figure skating actions. Our results reflect the promise of such a simple approach. In addition, our algorithm can recognize and localize multiple actions in long and complex video sequences containing multiple motions.
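A toy sketch of the topic-model step, using scikit-learn's LatentDirichletAllocation in place of the paper's pLSA/LDA implementations; the count matrix stands in for per-video histograms of spatial-temporal words, and all numbers are made up:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Rows: videos; columns: spatial-temporal "words" (codebook entries).
# Two fake motion patterns: words 0-2 dominate one, words 3-5 the other.
rng = np.random.default_rng(0)
counts = np.vstack([
    rng.multinomial(100, [.3, .3, .3, .03, .03, .04], size=10),  # "action A"
    rng.multinomial(100, [.03, .03, .04, .3, .3, .3], size=10),  # "action B"
])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(counts)        # per-video topic distribution
print(doc_topic.argmax(axis=1))              # unsupervised action assignment
```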
7.
MultiBoosting: A Technique for Combining Boosting and Wagging
MultiBoosting is an extension to the highly successful AdaBoost technique for forming decision committees. MultiBoosting can be viewed as combining AdaBoost with wagging. It combines AdaBoost's high bias and variance reduction with wagging's superior variance reduction. Using C4.5 as the base learning algorithm, MultiBoosting is demonstrated to produce decision committees with lower error than either AdaBoost or wagging significantly more often than the reverse over a large representative cross-section of UCI data sets. It offers the further advantage over AdaBoost of suiting parallel execution.
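A sketch of the wagging half of the technique: every committee member sees the whole training set but under random instance weights. Webb describes drawing these from a continuous Poisson distribution; the standard exponential used here as a stand-in is my assumption, as are the base learner and data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
rng = np.random.default_rng(0)

# Wagging: perturb instance weights instead of resampling instances.
committee = []
for _ in range(10):
    w = rng.exponential(1.0, size=len(y))    # random weights, mean 1
    committee.append(DecisionTreeClassifier().fit(X, y, sample_weight=w))

votes = np.stack([m.predict(X) for m in committee])
majority = (votes.mean(axis=0) > 0.5).astype(int)   # binary majority vote
print("training accuracy of wagged committee:", (majority == y).mean())
```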
8.
A Confidence-Based Voting Method
燕继坤, 郑辉, 王艳, 曾立君. 《计算机学报》, 2005, 28(8): 1308-1313
The confidence-based voting method uses not only the class labels output by the base classifiers but also the confidence of those outputs. Bounds on the method's training error rate and expected error rate are derived. It is found that, to minimize the bound on the expected error rate, base classifiers with mutually independent errors should be used; if the error rates of the base classifiers are not too high, this bound decreases exponentially as the base classifiers' error rates decrease, and it also decreases as the number of votes increases. A weight-assignment scheme is obtained in the sense of minimizing the bound on the training error rate. Applying this scheme to a Bagging-style algorithm, AB, yields the combined classification algorithm CAB. Experiments on datasets from the UCI machine learning repository verify the effectiveness of CAB.
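A minimal sketch of voting with confidences rather than bare class labels: each base classifier contributes its class-probability vector, scaled by a per-classifier weight. The uniform weights here are a placeholder; the paper derives its weights from the training-error bound, which this sketch does not reproduce:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X_tr, X_te, y_tr, y_te = train_test_split(*load_iris(return_X_y=True),
                                          random_state=0)
bases = [DecisionTreeClassifier(random_state=0), GaussianNB(),
         LogisticRegression(max_iter=1000)]
weights = np.ones(len(bases)) / len(bases)   # placeholder classifier weights

# Confidence voting: sum weighted class-probability vectors, then argmax.
proba = sum(w * clf.fit(X_tr, y_tr).predict_proba(X_te)
            for w, clf in zip(weights, bases))
print("accuracy:", (proba.argmax(axis=1) == y_te).mean())
```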
9.
An Image Representation Method Based on Bag-of-phrases
In recent years, the Bag-of-words model, which represents image content as a histogram of occurrence counts of particular "visual words", has demonstrated strong advantages in image content classification. However, in this model, which merely counts occurrences of particular "visual words", the relative spatial relationships between the "visual words" are almost entirely discarded. Starting from an analysis of the correspondence between the Bag-of-words model in text classification and in image content classification, this paper proposes an image representation that incorporates the relative spatial relationships between "visual words": the Bag-of-phrases model. The effect of this representation on image content classification performance is verified on standard datasets. Experimental results show that the proposed method achieves better classification performance than the traditional Bag-of-words model.
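A toy sketch of the representational difference: starting from quantized "visual words" with image coordinates, word pairs that co-occur within a radius are counted as "phrases", so relative position survives into the histogram. The radius, pairing rule, and data are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np
from collections import Counter
from itertools import combinations

# Each detected feature: (visual_word_id, x, y) after codebook quantization.
features = [(3, 10, 12), (7, 14, 15), (3, 80, 40), (7, 180, 160)]

def bag_of_phrases(features, radius=20.0):
    """Count unordered word pairs whose keypoints lie within `radius`."""
    phrases = Counter()
    for (w1, x1, y1), (w2, x2, y2) in combinations(features, 2):
        if np.hypot(x1 - x2, y1 - y2) <= radius:
            phrases[tuple(sorted((w1, w2)))] += 1
    return phrases

# Words 3 and 7 co-occur nearby once -> phrase (3, 7); the far-apart
# occurrences contribute nothing, unlike in a plain bag-of-words count.
print(bag_of_phrases(features))   # Counter({(3, 7): 1})
```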
10.
A Novel Eye-State Recognition Method
李衡峰, 夏利民, 叶剑波. 《计算机工程》, 2005, 31(6): 166-167, 170
In fatigue-driving detection, judging the eye state is one of the key steps. To recognize eye states effectively, a novel eye-state recognition method is proposed. The method uses the texture-unit N values of selected points in the eye image as input features and a radial basis function (RBF) neural network as the classifier. To further improve classification accuracy, the Bagging method is also applied. Experimental results show that the method is easy to implement, accurate, fast, insensitive to illumination conditions, and practical for real applications.