1.
Induction of decision trees (cited 322 times; 2 self-citations, 320 by others)
The technology for building knowledge-based systems by inductive inference from examples has been demonstrated successfully in several practical applications. This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, and it describes one such system, ID3, in detail. Results from recent studies show ways in which the methodology can be modified to deal with information that is noisy and/or incomplete. A reported shortcoming of the basic algorithm is discussed and two means of overcoming it are compared. The paper concludes with illustrations of current research directions.
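The ID3 methodology this abstract summarizes selects, at each node, the attribute with the highest information gain. A minimal Python sketch of that criterion on a made-up toy dataset (the attribute and label names are illustrative, not from the paper):

```python
# Minimal sketch of ID3's attribute-selection criterion (illustrative, not Quinlan's code).
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from partitioning rows on attribute `attr`."""
    n = len(labels)
    split = {}
    for row, y in zip(rows, labels):
        split.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

# Toy data (hypothetical, not from the paper):
rows = [{"outlook": "sunny", "windy": "no"},
        {"outlook": "sunny", "windy": "yes"},
        {"outlook": "rain",  "windy": "no"},
        {"outlook": "rain",  "windy": "yes"}]
labels = ["play", "skip", "play", "skip"]
best = max(rows[0], key=lambda a: information_gain(rows, labels, a))
print(best)  # "windy" perfectly separates the labels here, so it wins
```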
2.
An Approach to Attribute Reduction and Rule Extraction Based on Rough Set Theory (cited 127 times; 1 self-citation, 126 by others)
常犁云, 王国胤, 吴渝. Journal of Software (软件学报), 1999, 10(11): 1206-1211
This paper studies attribute reduction and value reduction, two key problems in Rough Set theory, and proposes a new method that obtains an optimal attribute reduct by means of the discernibility matrix and logical operations. The same matrix also makes it straightforward to construct multivariate decision trees based on Rough Set theory. In addition, a widely used value-reduction strategy is improved, further simplifying the resulting rules.
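The discernibility matrix the abstract refers to records, for every pair of objects with different decisions, the condition attributes that distinguish them; a reduct is then a minimal attribute set that hits every non-empty entry. A minimal sketch under assumed attribute names a, b, c and decision d (not the paper's data):

```python
# Sketch of a discernibility matrix for attribute reduction (illustrative).
from itertools import combinations

# Decision table: condition attributes a, b, c and decision d (hypothetical values).
objects = [
    {"a": 1, "b": 0, "c": 1, "d": "yes"},
    {"a": 1, "b": 1, "c": 0, "d": "no"},
    {"a": 0, "b": 0, "c": 1, "d": "no"},
]
conds = ["a", "b", "c"]

def discernibility_matrix(objs):
    """For each pair with different decisions, record the condition
    attributes on which the two objects differ."""
    matrix = {}
    for i, j in combinations(range(len(objs)), 2):
        if objs[i]["d"] != objs[j]["d"]:
            matrix[(i, j)] = {a for a in conds if objs[i][a] != objs[j][a]}
    return matrix

m = discernibility_matrix(objects)
print(m)  # {(0, 1): {'b', 'c'}, (0, 2): {'a'}}
# A reduct is a minimal attribute set hitting every entry; the paper
# derives it by simplifying the conjunction of these disjunctions.
```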
3.
A Construction Method for Multivariate Decision Trees Based on Rough Sets (cited 78 times; 2 self-citations, 76 by others)
苗夺谦, 王珏. Journal of Software (软件学报), 1997, 8(6): 425-431
This paper uses the core of the condition attributes relative to the decision attribute, a notion from rough set theory, to select attributes for multivariate tests. It also defines the relative generalization of two equivalence relations and applies it to constructing multivariate tests. Through an example, the proposed multivariate decision tree method is compared with the well-known univariate decision tree method ID3; the result shows that the former yields simpler trees. Several multivariate decision tree methods are also given a preliminary comparative analysis.
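The core the abstract relies on is the set of condition attributes that are indispensable for preserving decision consistency: removing a core attribute shrinks the positive region. One standard way to compute it, sketched here with hypothetical data (not the paper's):

```python
# Sketch of computing the rough-set "core" of condition attributes (illustrative).
def positive_region(objs, attrs):
    """Objects whose equivalence class (w.r.t. attrs) is decision-consistent."""
    blocks = {}
    for o in objs:
        blocks.setdefault(tuple(o[a] for a in attrs), []).append(o)
    return {id(o) for blk in blocks.values()
            if len({x["d"] for x in blk}) == 1 for o in blk}

objects = [
    {"a": 1, "b": 0, "d": "yes"},
    {"a": 1, "b": 1, "d": "no"},
    {"a": 0, "b": 1, "d": "yes"},
]
conds = ["a", "b"]
full = positive_region(objects, conds)
core = [a for a in conds
        if positive_region(objects, [x for x in conds if x != a]) != full]
print(core)  # attributes whose removal breaks decision consistency
```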
4.
An Optimization Algorithm for Decision Trees (cited 76 times; 1 self-citation, 75 by others)
刘小虎, 李生. Journal of Software (软件学报), 1998, 9(10): 797-800
Optimization is an important branch of decision tree learning. Building on ID3, this paper proposes an improved algorithm: whenever a new attribute is selected, the algorithm considers not only the information gain brought by that attribute but also the gain brought by the attribute selected after it, i.e., it evaluates two levels of tree nodes at once. The improved algorithm has the same time complexity as ID3, and for the induction of logical expressions it is clearly superior to ID3.
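The two-level selection described here can be read as scoring an attribute by its own gain plus the best weighted gain achievable one level deeper. A hedged sketch, reusing entropy and information_gain from the ID3 sketch above; the additive scoring below is only an illustration, not necessarily the paper's exact rule:

```python
# Two-level lookahead scoring (illustrative reading of the abstract).
def lookahead_score(rows, labels, attr, attrs):
    score = information_gain(rows, labels, attr)
    # Partition on `attr`, then add the weighted best second-level gain.
    parts = {}
    for row, y in zip(rows, labels):
        parts.setdefault(row[attr], ([], []))
        parts[row[attr]][0].append(row)
        parts[row[attr]][1].append(y)
    n = len(labels)
    for sub_rows, sub_labels in parts.values():
        rest = [b for b in attrs if b != attr]
        if rest and len(set(sub_labels)) > 1:
            score += len(sub_labels) / n * max(
                information_gain(sub_rows, sub_labels, b) for b in rest)
    return score

# best = max(attrs, key=lambda a: lookahead_score(rows, labels, a, attrs))
```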
5.
A New Decision Tree Induction Learning Algorithm (cited 76 times; 1 self-citation, 75 by others)
This paper analyzes decision tree induction, an important branch of learning from examples. It examines the optimization principles of decision tree induction from the standpoint of optimal learning from examples, points out inherent defects of earlier induction algorithms typified by ID3, and proposes PID, a new probability-based decision tree induction algorithm. PID still selects expansion attributes by information gain ratio, but during tree expansion it merges branches by clustering attributes. The trees PID produces are better than those of ID3 in both size and classification accuracy.
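The criterion PID retains, the information gain ratio, normalizes gain by the entropy of the split itself, which penalizes many-valued attributes. A short sketch reusing the helpers from the ID3 example above (the branch merging by attribute clustering, the paper's actual novelty, is not reproduced here):

```python
# Gain ratio: information gain normalized by the split's own entropy.
def gain_ratio(rows, labels, attr):
    split_values = [row[attr] for row in rows]
    split_info = entropy(split_values)   # entropy of the partition sizes
    if split_info == 0:
        return 0.0                       # attribute has a single value
    return information_gain(rows, labels, attr) / split_info
```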
6.
A Discussion of Decision Tree Algorithms in Data Mining (cited 50 times; 1 self-citation, 49 by others)
Decision tree algorithms are an active research area in data mining. This paper first presents the basic ideas behind decision tree algorithms in data mining, then discusses the difficult problems in them and proposes an algorithm that selects values using entropy combined with a weighted sum.
7.
Improved Boosting Algorithms Using Confidence-rated Predictions (cited 45 times; 0 self-citations, 45 by others)
We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a simplified analysis of AdaBoost in this setting, and we show how this analysis can be used to find improved parameter settings as well as a refined criterion for training weak hypotheses. We give a specific method for assigning confidences to the predictions of decision trees, a method closely related to one used by Quinlan. This method also suggests a technique for growing decision trees which turns out to be identical to one proposed by Kearns and Mansour. We focus next on how to apply the new boosting algorithms to multiclass classification problems, particularly to the multi-label case in which each example may belong to more than one class. We give two boosting methods for this problem, plus a third method based on output coding. One of these leads to a new method for handling the single-label case which is simpler but as effective as techniques suggested by Freund and Schapire. Finally, we give some experimental results comparing a few of the algorithms discussed in this paper.
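In Schapire and Singer's confidence-rated framework, a weak hypothesis returns a real value whose sign is the prediction and whose magnitude is the confidence; example weights are rescaled by exp(-y·h(x)), and the weak learner is chosen to minimize the normalizer Z, which bounds the training error. A compact sketch of that update (the weak learners here are arbitrary callables, not the paper's decision trees):

```python
# Minimal sketch of confidence-rated boosting (Schapire & Singer style).
import math

def boost(examples, labels, weak_learners, rounds):
    """examples: feature vectors; labels: +1/-1;
    weak_learners: callables x -> real-valued confidence."""
    n = len(examples)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Schapire-Singer criterion: pick the hypothesis minimizing Z.
        def Z(h):
            return sum(w * math.exp(-y * h(x))
                       for w, x, y in zip(weights, examples, labels))
        h = min(weak_learners, key=Z)
        z = Z(h)
        ensemble.append(h)
        weights = [w * math.exp(-y * h(x)) / z
                   for w, x, y in zip(weights, examples, labels)]
    # Final classifier: sign of the summed confidences.
    return lambda x: 1 if sum(h(x) for h in ensemble) >= 0 else -1
```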
8.
Discovering Classification Rules with Neural Networks (cited 39 times; 0 self-citations, 39 by others)
Neural networks are widely regarded as high-accuracy classifiers, but their classification process is difficult to understand; they are called "black boxes", which lowers their credibility and makes their results hard to apply in related fields. This paper proposes using decision trees to simulate the decision process of a neural network's hidden and output nodes. With classification accuracy kept essentially unchanged, the trained feedforward network is pruned with a genetic algorithm and converted into a forest of trees; each tree is then treated as a classifier whose classification process is simulated by a decision tree, and finally the decision tree structure is converted into several…
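The core "simulate the network with a tree" step amounts to fitting a decision tree to the trained network's own predictions rather than the true labels, so the tree's rules approximate the network's decision process. A sketch of just that step with scikit-learn on synthetic data; the genetic-algorithm pruning and forest decomposition from the paper are omitted:

```python
# Distillation-style sketch: a tree trained to mimic a network (illustrative).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
# Fit the tree to the network's outputs, not the true labels.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, net.predict(X))
print(export_text(tree))                            # human-readable rules
print((tree.predict(X) == net.predict(X)).mean())   # fidelity to the network
```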
9.
An Improved Version of the ID3 Algorithm (cited 37 times; 4 self-citations, 33 by others)
Decision trees are an important method in inductive learning and data mining, usually used to build classifiers and prediction models. ID3 is the core decision tree algorithm. To address ID3's bias toward attributes that take many values, this paper introduces a user interest degree into ID3 and compares the original and improved algorithms experimentally; the experiments show that the improved algorithm is effective.
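The abstract does not give the exact formula, so the following is only one plausible reading: weight each attribute's information gain with a user-assigned interest degree, so that attributes the user deems relevant can beat merely many-valued ones. The interest values and the additive combination below are assumptions, not the paper's definition (information_gain is reused from the ID3 sketch above):

```python
# Hypothetical interest-adjusted gain; values and formula are assumed.
interest = {"outlook": 0.2, "windy": 0.0}   # hypothetical user interest degrees

def interest_adjusted_gain(rows, labels, attr):
    return information_gain(rows, labels, attr) + interest.get(attr, 0.0)
```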
10.
Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several variants in conjunction with a decision tree inducer (three variants) and a Naive-Bayes inducer. The purpose of the study is to improve our understanding of why and when these algorithms, which use perturbation, reweighting, and combination techniques, affect classification error. We provide a bias and variance decomposition of the error to show how different methods and variants influence these two terms. This allowed us to determine that Bagging reduced variance of unstable methods, while boosting methods (AdaBoost and Arc-x4) reduced both the bias and variance of unstable methods but increased the variance for Naive-Bayes, which was very stable. We observed that Arc-x4 behaves differently than AdaBoost if reweighting is used instead of resampling, indicating a fundamental difference. Voting variants, some of which are introduced in this paper, include: pruning versus no pruning, use of probabilistic estimates, weight perturbations (Wagging), and backfitting of data. We found that Bagging improves when probabilistic estimates in conjunction with no-pruning are used, as well as when the data was backfit. We measure tree sizes and show an interesting positive correlation between the increase in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting algorithms are explored, including numerical instabilities and underflows. We use scatterplots that graphically show how AdaBoost reweights instances, emphasizing not only hard areas but also outliers and noise.
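The study's central finding, that Bagging reduces the variance of unstable learners such as trees, rests on majority voting over trees trained on bootstrap replicates. A minimal bagging sketch with scikit-learn trees (the dataset and replicate count are illustrative, not the paper's experimental setup):

```python
# Minimal bagging sketch: vote over trees trained on bootstrap samples.
import random
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
rng = random.Random(0)
trees = []
for _ in range(25):                       # 25 bootstrap replicates
    idx = [rng.randrange(len(X)) for _ in range(len(X))]
    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

def vote(x):
    """Majority vote over the bootstrap-trained trees."""
    return Counter(t.predict([x])[0] for t in trees).most_common(1)[0][0]
```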