1.
An Incremental SVM Learning Algorithm: α-ISVM   Cited: 53 (self-citations: 0, citations by others: 53)
萧嵘  王继成  孙正兴  张福炎 《软件学报》(Journal of Software), 2001, 12(12): 1818-1824
Classification algorithms based on SVM (support vector machine) theory have gradually attracted the attention of researchers at home and abroad owing to their solid theoretical foundation and good experimental results. This paper analyzes in depth the characteristics of the SV (support vector) set in SVM theory and gives a simple incremental SVM learning algorithm. On this basis, it further proposes α-ISVM, an improved incremental SVM learning algorithm based on a forgetting factor α. By gradually accumulating knowledge of the spatial distribution of the samples during incremental learning, the algorithm makes it possible to forget samples selectively. Theoretical analysis and experimental results show that the algorithm effectively improves training speed and reduces storage requirements while preserving classification accuracy.
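The abstract does not spell out the update rule, so the following is only a rough Python sketch of the general idea of incremental SVM learning with forgetting: each increment retrains on the new batch plus the retained support vectors, whose importance is decayed by a forgetting factor. The names `alpha` and `forget_threshold`, the pruning rule, and the use of scikit-learn's `SVC` are illustrative assumptions, not the paper's α-ISVM formulation.

```python
import numpy as np
from sklearn.svm import SVC

class IncrementalSVM:
    """Retrain on new data plus retained support vectors, decaying old
    samples by a forgetting factor and dropping those that have faded."""

    def __init__(self, alpha=0.9, forget_threshold=0.1, **svc_kwargs):
        self.alpha = alpha                      # forgetting factor in (0, 1]
        self.forget_threshold = forget_threshold
        self.model = SVC(kernel="rbf", **svc_kwargs)
        self.X_mem = None                       # retained support-vector samples
        self.y_mem = None
        self.w_mem = None                       # importance weights, decayed each step

    def partial_fit(self, X_new, y_new):
        X_new, y_new = np.asarray(X_new, dtype=float), np.asarray(y_new)
        if self.X_mem is None:
            X, y, w = X_new, y_new, np.ones(len(y_new))
        else:
            self.w_mem *= self.alpha            # old knowledge fades
            keep = self.w_mem > self.forget_threshold
            X = np.vstack([self.X_mem[keep], X_new])
            y = np.concatenate([self.y_mem[keep], y_new])
            w = np.concatenate([self.w_mem[keep], np.ones(len(y_new))])
        self.model.fit(X, y)
        sv = self.model.support_                # keep only support vectors for the next round
        self.X_mem, self.y_mem, self.w_mem = X[sv], y[sv], w[sv]
        return self

    def predict(self, X):
        return self.model.predict(X)
```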
2.
Knowledge Acquisition Via Incremental Conceptual Clustering   Cited: 49 (self-citations: 22, citations by others: 27)
Conceptual clustering is an important way of summarizing and explaining data. However, the recent formulation of this paradigm has allowed little exploration of conceptual clustering as a means of improving performance. Furthermore, previous work in conceptual clustering has not explicitly dealt with constraints imposed by real world environments. This article presents COBWEB, a conceptual clustering system that organizes data so as to maximize inference ability. Additionally, COBWEB is incremental and computationally economical, and thus can be flexibly applied in a variety of domains.
3.
An Incremental Bayesian Classification Model   Cited: 33 (self-citations: 0, citations by others: 33)
Classification has always been a core problem in machine learning, pattern recognition, and data mining. For learning classification knowledge from massive data, especially when obtaining large numbers of class-labeled samples is costly, incremental learning is an effective approach. This paper applies the naive Bayes method to incremental classification and proposes an incremental Bayesian learning model, giving the incremental Bayesian inference procedure, including incrementally revising the classifier parameters and incrementally classifying test samples. Experimental results show that the algorithm is feasible and effective.
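To make the idea concrete, here is a minimal sketch of incremental naive Bayes over discrete attributes: the classifier parameters are frequency counts, so revising them incrementally amounts to adding each new labeled sample's counts. The dictionary-based representation and the simplified Laplace smoothing are assumptions for illustration, not the paper's exact model.

```python
from collections import defaultdict

class IncrementalNaiveBayes:
    def __init__(self):
        self.class_counts = defaultdict(int)                        # N(c)
        self.feat_counts = defaultdict(lambda: defaultdict(int))    # N(c, attr, value)
        self.n = 0

    def update(self, x, c):
        """Incrementally revise parameters with one labeled sample x (dict: attr -> value)."""
        self.n += 1
        self.class_counts[c] += 1
        for attr, value in x.items():
            self.feat_counts[c][(attr, value)] += 1

    def predict(self, x):
        """Classify using smoothed estimates computed from the current counts."""
        best, best_score = None, float("-inf")
        for c, nc in self.class_counts.items():
            score = (nc + 1) / (self.n + len(self.class_counts))    # smoothed prior
            for attr, value in x.items():
                # simplified Laplace smoothing of the conditional probability
                score *= (self.feat_counts[c][(attr, value)] + 1) / (nc + 2)
            if score > best_score:
                best, best_score = c, score
        return best

nb = IncrementalNaiveBayes()
nb.update({"color": "red", "shape": "round"}, "apple")
nb.update({"color": "yellow", "shape": "long"}, "banana")
print(nb.predict({"color": "red", "shape": "round"}))               # -> apple
```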
4.
Incremental Induction of Decision Trees   Cited: 33 (self-citations: 10, citations by others: 23)
This article presents an incremental algorithm for inducing decision trees equivalent to those formed by Quinlan's nonincremental ID3 algorithm, given the same training instances. The new algorithm, named ID5R, lets one apply the ID3 induction process to learning tasks in which training instances are presented serially. Although the basic tree-building algorithms differ only in how the decision trees are constructed, experiments show that incremental training makes it possible to select training instances more carefully, which can result in smaller decision trees. The ID3 algorithm and its variants are compared in terms of theoretical complexity and empirical behavior.
5.
Instance-Based Learning Algorithms   Cited: 31 (self-citations: 0, citations by others: 31)
Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
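As one concrete illustration of a storage-reducing instance-based learner, the sketch below keeps a training instance only when the instances stored so far would misclassify it. The Euclidean distance, the lack of attribute normalization, and the omission of the paper's noise-filtering significance test are simplifying assumptions.

```python
import math

class StorageReducingIBL:
    def __init__(self):
        self.memory = []                        # list of (vector, label)

    def _nearest_label(self, x):
        return min(self.memory, key=lambda m: math.dist(m[0], x))[1]

    def train(self, X, y):
        for xi, yi in zip(X, y):
            # store an instance only if the current memory misclassifies it
            if not self.memory or self._nearest_label(xi) != yi:
                self.memory.append((xi, yi))

    def predict(self, x):
        return self._nearest_label(x)

clf = StorageReducingIBL()
clf.train([[0, 0], [0, 1], [5, 5], [6, 5], [0.2, 0.1]],
          ["a", "a", "b", "b", "a"])
print(clf.predict([5.5, 5.2]), len(clf.memory))   # 'b', and fewer instances stored than seen
```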
6.
Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in which the learner responds to each example according to a current hypothesis. Then the learner updates the hypothesis, if necessary, based on the correct classification of the example. One natural measure of the quality of learning in this setting is the number of mistakes the learner makes. For suitable classes of functions, learning algorithms are available that make a bounded number of mistakes, with the bound independent of the number of examples seen by the learner. We present one such algorithm that learns disjunctive Boolean functions, along with variants for learning other classes of Boolean functions. The basic method can be expressed as a linear-threshold algorithm. A primary advantage of this algorithm is that the number of mistakes grows only logarithmically with the number of irrelevant attributes in the examples. At the same time, the algorithm is computationally efficient in both time and space.  相似文献
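The linear-threshold method described here can be sketched as a Winnow-style mistake-driven learner for monotone disjunctions; the doubling/elimination updates and the threshold of n/2 follow the commonly cited formulation, and treating this as exactly the algorithm of the entry above is an assumption.

```python
def winnow(examples, n):
    """Mistake-driven linear-threshold learning over n Boolean attributes."""
    w = [1.0] * n
    theta = n / 2.0
    mistakes = 0
    for x, label in examples:                       # x is a list of 0/1 attribute values
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred != label:
            mistakes += 1
            if label == 1:                          # promotion: double weights of active attributes
                w = [wi * 2 if xi else wi for wi, xi in zip(w, x)]
            else:                                   # demotion: eliminate active attributes
                w = [0.0 if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes

# target concept: x0 OR x2 (attributes 1, 3, 4 are irrelevant);
# weights concentrate on the relevant attributes after few mistakes
data = [([1, 0, 0, 1, 0], 1), ([0, 1, 0, 0, 1], 0),
        ([0, 0, 1, 1, 0], 1), ([0, 1, 0, 1, 1], 0)]
print(winnow(data, 5))
```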
7.
Convergence Analysis and Generalization Ability of the CMAC Algorithm   Cited: 24 (self-citations: 0, citations by others: 24)
何超  徐立新  张宇河 《控制与决策》(Control and Decision), 2001, 16(5): 523-529, 534
Using matrix theory and the general principles of convergence for iterative solutions of linear systems of equations, convergence theorems for the CMAC algorithm under both batch and incremental learning are proved without imposing special conditions, generalizing and improving earlier conclusions obtained under the assumption of a positive definite correlation matrix. On this basis, an improved CMAC algorithm with a self-optimizing learning rate is proposed, together with a simple and practical index for evaluating the overall generalization performance of a CMAC network. Numerical simulations verify the correctness of the convergence theorems and the superiority of the improved algorithm, and conclusions are drawn about how the various parameters of a CMAC network affect its generalization performance.
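For context on what incremental learning means for a CMAC, here is a minimal sketch of a one-dimensional CMAC (tile-coding) approximator trained with per-sample LMS updates; the tiling sizes, fixed learning rate, and toy target function are illustrative assumptions and do not reflect the paper's self-optimizing learning-rate scheme or its convergence analysis.

```python
import numpy as np

class CMAC1D:
    def __init__(self, n_tilings=8, tiles_per_tiling=16, lr=0.2, lo=0.0, hi=1.0):
        self.n_tilings, self.m, self.lr = n_tilings, tiles_per_tiling, lr
        self.lo, self.width = lo, (hi - lo) / (tiles_per_tiling - 1)
        self.w = np.zeros((n_tilings, tiles_per_tiling + 1))

    def _active(self, x):
        # each tiling is shifted by a fraction of a tile width
        for t in range(self.n_tilings):
            offset = t / self.n_tilings * self.width
            idx = int((x - self.lo + offset) / self.width)
            yield t, min(idx, self.m)

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._active(x))

    def update(self, x, target):
        err = target - self.predict(x)
        for t, i in self._active(x):            # spread the correction over active tiles
            self.w[t, i] += self.lr * err / self.n_tilings

cmac = CMAC1D()
for _ in range(200):                            # learn y = sin(2*pi*x) incrementally
    x = np.random.rand()
    cmac.update(x, np.sin(2 * np.pi * x))
print(cmac.predict(0.25))                       # approaches sin(pi/2) = 1 as training continues
```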
8.
The theory of concept (or Galois) lattices provides a simple and formal approach to conceptual clustering. In this paper we present GALOIS, a system that automates and applies this theory. The algorithm utilized by GALOIS to build a concept lattice is incremental and efficient, each update being done in time at most quadratic in the number of objects in the lattice. Also, the algorithm may incorporate background information into the lattice, and through clustering, extend the scope of the theory. The application we present is concerned with information retrieval via browsing, for which we argue that concept lattices may represent major support structures. We describe a prototype user interface for browsing through the concept lattice of a document-term relation, possibly enriched with a thesaurus of terms. An experimental evaluation of the system performed on a medium-sized bibliographic database shows good retrieval performance and a significant improvement after the introduction of background knowledge.
9.
Learning to predict by the methods of temporal differences   Cited: 19 (self-citations: 2, citations by others: 17)
This article introduces a class of incremental learning procedures specialized for prediction – that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
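A minimal sketch of the simplest temporal-difference method, tabular TD(0), is shown below: each state's value estimate is moved toward the immediate reward plus the estimate of the successor state, so credit is assigned through the difference between successive predictions. The random-walk environment, step size, and episode count are illustrative assumptions.

```python
import random

def td0_random_walk(n_states=5, episodes=100, alpha=0.1, gamma=1.0):
    V = [0.0] * (n_states + 2)                  # states 0 and n_states+1 are terminal
    for _ in range(episodes):
        s = (n_states + 1) // 2                 # start each episode in the middle
        while 0 < s < n_states + 1:
            s2 = s + random.choice([-1, 1])     # step left or right at random
            r = 1.0 if s2 == n_states + 1 else 0.0
            target = r + (0.0 if s2 in (0, n_states + 1) else gamma * V[s2])
            V[s] += alpha * (target - V[s])     # temporal-difference update
            s = s2
    return V[1:-1]                              # true values are 1/6, 2/6, ..., 5/6

print([round(v, 2) for v in td0_random_walk()])
```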
10.
A Rough-Set-Based Web Page Classification Method   Cited: 18 (self-citations: 2, citations by others: 16)
The rapid development of the Internet has brought a new problem: how to find the required information effectively and quickly among the vast number of Web pages. Progress in machine learning offers a new direction for solving this problem. This paper applies rough set theory to web page classification, proposes an incremental learning algorithm based on rough-set decision table reduction, and uses it to implement a classifier for Web pages. Experimental results show that the classifier performs well.
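The decision-table reduction step can be illustrated with a small, hedged sketch: an attribute is dispensable if dropping it leaves the positive region (the objects whose condition-attribute values determine the decision unambiguously) unchanged. The greedy search and the toy web-page decision table are assumptions for illustration, not the paper's algorithm.

```python
def positive_region(rows, attrs, decision):
    """Count objects whose condition-attribute values map to a single decision."""
    groups = {}
    for r in rows:
        key = tuple(r[a] for a in attrs)
        groups.setdefault(key, set()).add(r[decision])
    return sum(1 for r in rows if len(groups[tuple(r[a] for a in attrs)]) == 1)

def reduct(rows, attrs, decision):
    """Greedily drop attributes whose removal does not shrink the positive region."""
    full = positive_region(rows, attrs, decision)
    kept = list(attrs)
    for a in attrs:
        trial = [x for x in kept if x != a]
        if trial and positive_region(rows, trial, decision) == full:
            kept = trial
    return kept

table = [
    {"len": "long",  "kw": "news",  "cls": "news"},
    {"len": "short", "kw": "news",  "cls": "news"},
    {"len": "long",  "kw": "sport", "cls": "sport"},
    {"len": "short", "kw": "sport", "cls": "sport"},
]
print(reduct(table, ["len", "kw"], "cls"))      # -> ['kw']
```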