Full-text access type
Paid full text | 10,890 articles |
Free within China | 734 articles |
Completely free | 4,320 articles |
Subject category
Automation technology | 15,944 articles |
Publication year (number of articles)
2022 | 945 |
2021 | 846 |
2020 | 443 |
2019 | 531 |
2018 | 524 |
2017 | 444 |
2016 | 428 |
2015 | 526 |
2014 | 697 |
2013 | 838 |
2012 | 931 |
2011 | 1083 |
2010 | 841 |
2009 | 907 |
2008 | 913 |
2007 | 824 |
2006 | 595 |
2005 | 506 |
2004 | 387 |
2003 | 371 |
2002 | 349 |
2001 | 286 |
2000 | 272 |
1999 | 209 |
1998 | 245 |
1997 | 195 |
1996 | 143 |
1995 | 132 |
1994 | 102 |
1993 | 95 |
1992 | 77 |
1991 | 44 |
1990 | 39 |
1989 | 40 |
1988 | 25 |
1987 | 25 |
1986 | 16 |
1985 | 5 |
1984 | 6 |
1983 | 8 |
1982 | 10 |
1981 | 4 |
1980 | 6 |
1979 | 8 |
1978 | 5 |
1977 | 6 |
1976 | 8 |
1975 | 1 |
1974 | 1 |
1973 | 2 |
15,944 results found (search time: 218 ms)
1.
Rough sets    Total citations: 1327 (self-citations: 0; by others: 1327)
Zdzisław Pawlak, International Journal of Parallel Programming, 1982, 11(5): 341-356
We investigate in this paper approximate operations on sets, approximate equality of sets, and approximate inclusion of sets. The presented approach may be considered an alternative to fuzzy set theory and tolerance theory. Some applications are outlined.
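The approximations at the core of rough set theory are easy to state computationally. Below is a minimal, self-contained Python sketch (function names are ours, not from the paper): given a partition of the universe into equivalence classes, the lower approximation of a set X is the union of classes fully contained in X, and the upper approximation is the union of classes that intersect X.

```python
def lower_approximation(classes, X):
    """Union of equivalence classes entirely contained in X (certainly in X)."""
    out = set()
    for c in classes:
        if c <= X:          # class fully inside X
            out |= c
    return out

def upper_approximation(classes, X):
    """Union of equivalence classes that intersect X (possibly in X)."""
    out = set()
    for c in classes:
        if c & X:           # class overlaps X
            out |= c
    return out

# Toy universe {1..6}, partitioned by some indiscernibility relation.
classes = [{1, 2}, {3, 4}, {5, 6}]
X = {1, 2, 3}
print(lower_approximation(classes, X))   # {1, 2}
print(upper_approximation(classes, X))   # {1, 2, 3, 4}
# X is "rough": the approximations differ, with boundary region {3, 4}.
```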
2.
Pattern recognition, function fitting, and probability density estimation are all problems of learning from data. Existing methods rest largely on classical statistics, whose premise is that sufficiently many samples are available; when the sample size is limited, they struggle to achieve good results. Statistical Learning Theory (SLT), proposed by Vapnik and colleagues, is a statistical theory for small samples that studies the statistical regularities and the properties of learning methods in the small-sample setting. SLT provides a sound theoretical framework for machine learning problems and has also produced a new general-purpose learning algorithm, the Support Vector Machine (SVM), which handles small-sample learning well. SLT and SVM have since become a new focus of machine learning research internationally. This paper is a survey intended to introduce the basic ideas, characteristics, and current state of research on SLT and SVM, and to draw further attention from researchers in China.
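As a concrete illustration of the SVM algorithm this survey introduces, here is a minimal sketch using scikit-learn (the library choice is ours; the survey predates it). It trains a Gaussian-kernel SVM on a deliberately small sample, matching the small-sample setting SLT targets.

```python
# Minimal kernel-SVM example; scikit-learn is used purely for illustration.
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=60, n_features=2, n_redundant=0,
                           random_state=0)           # small sample on purpose
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")        # Gaussian-kernel SVM
clf.fit(X_tr, y_tr)
print("support vectors per class:", clf.n_support_)
print("test accuracy:", clf.score(X_te, y_te))
```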
3.
A Tutorial on Support Vector Machines for Pattern Recognition    Total citations: 731 (self-citations: 4; by others: 727)
Christopher J.C. Burges, Data Mining and Knowledge Discovery, 1998, 2(2): 121-167
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector Machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
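The kernel mapping the tutorial details can be made concrete in a few lines of NumPy. This sketch (ours, not from the tutorial) computes the homogeneous polynomial and Gaussian RBF kernel matrices the abstract mentions; each entry K[i, j] is an inner product in an implicit feature space, so a valid kernel matrix is symmetric positive semidefinite.

```python
import numpy as np

def poly_kernel(X, Z, degree=2):
    """Homogeneous polynomial kernel: K(x, z) = (x . z) ** degree."""
    return (X @ Z.T) ** degree

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel: K(x, z) = exp(-gamma * ||x - z||^2)."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

X = np.random.default_rng(0).normal(size=(5, 3))
K = rbf_kernel(X, X)
# Sanity checks: symmetry and positive semidefiniteness (up to round-off).
print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() >= -1e-10)
```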
4.
Support-Vector Networks    Total citations: 699 (self-citations: 0; by others: 699)
Corinna Cortes, Vladimir Vapnik, Machine Learning, 1995, 20(3): 273-297
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure the high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. Here we extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
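The linear decision surface in the implicit feature space amounts to the kernelized rule f(x) = sign(Σ_i α_i y_i K(x_i, x) + b), summed over the support vectors. A minimal sketch of evaluating that rule (all names and the toy coefficients are ours, chosen for illustration):

```python
import numpy as np

def svm_decision(x, support_vecs, dual_coefs, labels, b, kernel):
    """Evaluate f(x) = sum_i alpha_i * y_i * K(x_i, x) + b and take its sign."""
    s = sum(a * y * kernel(sv, x)
            for a, y, sv in zip(dual_coefs, labels, support_vecs))
    return np.sign(s + b)

kernel = lambda u, v: (np.dot(u, v) + 1) ** 2     # inhomogeneous quadratic kernel
support_vecs = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
dual_coefs   = [0.5, 0.5]                          # alpha_i >= 0 from the dual
labels       = [+1, -1]
print(svm_decision(np.array([0.8, 0.1]), support_vecs, dual_coefs, labels,
                   b=0.0, kernel=kernel))          # -> 1.0
```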
5.
6.
7.
The Strength of Weak Learnability    Total citations: 134 (self-citations: 0; by others: 134)
Robert E. Schapire, Machine Learning, 1990, 5(2): 197-227
This paper addresses the problem of improving the accuracy of a hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner with high probability is able to output a hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce a hypothesis that performs only slightly better than random guessing. In this paper, it is shown that these two notions of learnability are equivalent. A method is described for converting a weak learning algorithm into one that achieves arbitrarily high accuracy. This construction may have practical applications as a tool for efficiently converting a mediocre learning algorithm into one that performs extremely well. In addition, the construction has some interesting theoretical consequences, including a set of general upper bounds on the complexity of any strong learning algorithm as a function of the allowed error ε.
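This equivalence result is the theoretical root of boosting. The paper's own construction is a recursive majority vote over three hypotheses; as a runnable stand-in, the sketch below uses AdaBoost over decision stumps (a later descendant of the idea, not the 1990 construction) via scikit-learn, to show a weak learner being converted into a much stronger one.

```python
# Boosting weak learners (decision stumps) into a strong classifier.
# AdaBoost postdates this paper; it is used here only to illustrate the
# weak-to-strong conversion the paper proves possible.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, random_state=0)
stump = DecisionTreeClassifier(max_depth=1)            # weak learner
print("stump alone:", stump.fit(X, y).score(X, y))
boosted = AdaBoostClassifier(estimator=stump, n_estimators=100, random_state=0)
print("boosted:", boosted.fit(X, y).score(X, y))
```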
8.
9.
A Bayesian Method for the Induction of Probabilistic Networks from Data    Total citations: 111 (self-citations: 3; by others: 108)
Gregory F. Cooper, Edward Herskovits, Machine Learning, 1992, 9(4): 309-347
This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. We present results of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
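The scoring metric at the heart of this method (known as the K2 metric) has a closed form under a uniform Dirichlet prior: for a variable with r states, parent configurations indexed by j, and counts N_ijk, the per-node contribution is Π_j [(r − 1)! / (N_ij + r − 1)!] Π_k N_ijk!. A small sketch (function names ours) computing that score in log space to avoid overflow:

```python
import math
from collections import Counter

def k2_log_score(child_vals, parent_rows, r):
    """Cooper-Herskovits (K2) log marginal likelihood for one node.

    child_vals:  observed child states, each in 0..r-1
    parent_rows: matching parent-configuration tuples, one per observation
    r:           number of child states
    """
    counts = Counter(zip(parent_rows, child_vals))        # N_ijk
    groups = Counter(parent_rows)                         # N_ij
    score = 0.0
    for j, n_ij in groups.items():
        score += math.lgamma(r) - math.lgamma(n_ij + r)   # (r-1)!/(N_ij+r-1)!
        for k in range(r):
            score += math.lgamma(counts[(j, k)] + 1)      # N_ijk!
    return score

# Toy data: binary child with one binary parent.
parents = [(0,), (0,), (0,), (1,), (1,), (1,)]
child   = [0, 0, 1, 1, 1, 0]
print(k2_log_score(child, parents, r=2))
```

The K2 search algorithm greedily adds the parent that most increases this score for each node, under a fixed node ordering.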
10.