1.
Rough sets   Cited: 1327 (self-citations: 0, by others: 1327)
Zdzisław Pawlak, International Journal of Parallel Programming, 1982, 11(5): 341-356
We investigate in this paper approximate operations on sets, approximate equality of sets, and approximate inclusion of sets. The presented approach may be considered as an alternative to fuzzy sets theory and tolerance theory. Some applications are outlined.
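Pawlak's approximate operations can be sketched directly: the lower approximation of a target set keeps the equivalence classes wholly contained in it, while the upper approximation keeps every class that merely touches it. The partition and target set below are invented toy data for illustration.

```python
def lower_approximation(partition, X):
    """Union of equivalence classes wholly contained in X."""
    result = set()
    for block in partition:
        if block <= X:
            result |= block
    return result

def upper_approximation(partition, X):
    """Union of equivalence classes that intersect X."""
    result = set()
    for block in partition:
        if block & X:
            result |= block
    return result

# Toy universe partitioned into indiscernibility classes
partition = [{1, 2}, {3, 4}, {5}]
X = {1, 2, 3}
print(lower_approximation(partition, X))  # the set {1, 2}
print(upper_approximation(partition, X))  # the set {1, 2, 3, 4}
```

X is "rough" precisely when the two approximations differ, as they do here.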
2.
Pattern recognition, function fitting, and probability density estimation are all problems of learning from data. Existing methods rest largely on classical statistics, which presumes a sufficiently large sample; when the number of samples is limited, they often perform poorly. Statistical Learning Theory (SLT), proposed by Vapnik and colleagues, is a statistical theory for small samples that focuses on the statistical regularities and the properties of learning methods in the small-sample setting. SLT provides a sound theoretical framework for machine learning problems and has also produced a new general-purpose learning algorithm, the Support Vector Machine (SVM), which handles small-sample learning problems well. SLT and SVM have become a new research focus in the machine learning community worldwide. This paper is a survey intended to introduce the basic ideas, characteristics, and current state of research on SLT and SVM, so as to draw further attention from researchers in China.
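The soft-margin linear SVM that the survey describes can be sketched as subgradient descent on the regularized hinge loss. This is a minimal illustration, not the survey's method: the toy data, learning rate, regularization strength, and epoch count below are all invented.

```python
import random

def train_linear_svm(data, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Minimize lam*||w||^2 + hinge loss by stochastic subgradient descent."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:
                # Subgradient of lam*||w||^2 + max(0, 1 - margin)
                w = [wi - lr * (2 * lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:
                w = [wi - lr * 2 * lam * wi for wi in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Tiny linearly separable 2-D problem (small-sample setting)
data = [([2.0, 2.0], 1), ([3.0, 3.0], 1), ([-2.0, -2.0], -1), ([-3.0, -1.0], -1)]
w, b = train_linear_svm(list(data))
print([predict(w, b, x) for x, _ in data])
```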
3.
4.
A Tutorial on Support Vector Machines for Pattern Recognition   Cited: 731 (self-citations: 4, by others: 727)
Christopher J.C. Burges, Data Mining and Knowledge Discovery, 1998, 2(2): 121-167
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
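The two kernel families the tutorial analyzes, homogeneous polynomial and Gaussian RBF, can be sketched as plain functions; a Gram matrix over a few points then shows the symmetry (and, for the RBF case, the unit diagonal) a valid kernel must have. The toy vectors and the `degree`/`gamma` values below are invented for illustration.

```python
import math

def poly_kernel(x, z, degree=2):
    """Homogeneous polynomial kernel K(x, z) = (x . z)^degree."""
    return sum(a * b for a, b in zip(x, z)) ** degree

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian RBF kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq)

points = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
gram = [[rbf_kernel(p, q) for q in points] for p in points]

# A valid RBF Gram matrix is symmetric with ones on the diagonal.
print(gram[0][0], gram[0][1] == gram[1][0])
```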
5.
Support-Vector Networks   Cited: 703 (self-citations: 0, by others: 703)
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
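The non-linear mapping idea can be made concrete with a degree-2 polynomial kernel: the implicit kernel value (x . z)^2 equals an ordinary inner product after an explicit quadratic feature map. The vectors below are invented; this illustrates the kernel idea only, not the paper's experiments.

```python
import math

def phi(x):
    """Explicit degree-2 feature map for 2-D input:
    (x1^2, x2^2, sqrt(2)*x1*x2)."""
    x1, x2 = x
    return [x1 * x1, x2 * x2, math.sqrt(2) * x1 * x2]

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

x, z = [1.0, 2.0], [3.0, -1.0]
implicit = dot(x, z) ** 2        # kernel evaluated in input space
explicit = dot(phi(x), phi(z))   # inner product in the feature space
print(implicit, explicit)        # the two values agree
```

The kernel thus lets the machine work in the high-dimensional feature space without ever computing phi explicitly.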
6.
Distinctive Image Features from Scale-Invariant Keypoints   Cited: 499 (self-citations: 6, by others: 493)
This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
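The nearest-neighbor matching stage can be sketched with a Lowe-style distance-ratio test: a query descriptor is matched to its nearest database descriptor only when the nearest distance is clearly smaller than the second-nearest, which rejects ambiguous matches. The 2-D "descriptors" and the ratio threshold below are invented stand-ins for real 128-D SIFT descriptors.

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_features(query_desc, db_desc, ratio=0.8):
    """Match each query descriptor to its nearest database descriptor,
    keeping only matches that pass the nearest/second-nearest ratio test."""
    matches = []
    for qi, q in enumerate(query_desc):
        dists = sorted((euclidean(q, d), di) for di, d in enumerate(db_desc))
        (d1, best), (d2, _) = dists[0], dists[1]
        if d2 > 0 and d1 / d2 < ratio:
            matches.append((qi, best))
    return matches

db = [[0.0, 0.0], [10.0, 10.0], [10.0, 0.0]]
queries = [[0.1, 0.1],   # clearly closest to db[0] -> kept
           [5.0, 5.0]]   # equidistant from several entries -> rejected
print(match_features(queries, db))  # [(0, 0)]
```

A real implementation would replace the sorted scan with an approximate nearest-neighbor index to stay fast on large databases.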
7.
8.
Progress in Wireless Sensor Networks Research   Cited: 395 (self-citations: 21, by others: 374)
Wireless sensor networks have attracted growing attention from both academia and industry because of their enormous application potential. This paper introduces the basic concepts of wireless sensor networks and representative application-oriented research projects, summarizes a framework for the network protocol architecture, and briefly reviews the latest progress in the main research directions. It also gives a more detailed survey of progress on several of the most active topics: data link layer protocols, network-layer routing protocols, protocol stack optimization, energy management, and network simulation techniques.
9.
Membership Clouds and Membership Cloud Generators   Cited: 394 (self-citations: 4, by others: 390)
Addressing the membership function, the cornerstone of fuzzy set theory, this paper proposes the new notion of the membership cloud, gives a method for describing membership clouds by their numerical characteristics together with a mathematical model of the normal membership cloud, and discusses implementation techniques for membership cloud generators and their application settings, thereby laying a foundation for combined qualitative-quantitative treatment of many problems in the social and natural sciences.
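The forward normal cloud generator is commonly described by three numerical characteristics: expectation Ex, entropy En, and hyper-entropy He. The sketch below follows that standard description (it is not the paper's own code, and the Ex/En/He values are invented): each drop perturbs the entropy, samples a position, and assigns it a membership degree.

```python
import math
import random

def normal_cloud(Ex, En, He, n, seed=0):
    """Forward normal membership-cloud generator: produce n cloud drops
    (x, membership) from expectation Ex, entropy En, hyper-entropy He."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        En_i = rng.gauss(En, He)       # perturbed entropy for this drop
        x = rng.gauss(Ex, abs(En_i))   # drop position around Ex
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_i ** 2))
        drops.append((x, mu))
    return drops

drops = normal_cloud(Ex=25.0, En=3.0, He=0.3, n=1000)
mean_x = sum(x for x, _ in drops) / len(drops)
print(round(mean_x, 1))  # drop positions cluster around Ex
```

Unlike a crisp membership function, repeated runs scatter the membership of a given x, which is what the hyper-entropy He controls.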
10.
Least Squares Support Vector Machine Classifiers   Cited: 393 (self-citations: 1, by others: 392)
In this letter we discuss a least squares version of support vector machine (SVM) classifiers. Due to equality-type constraints in the formulation, the solution follows from solving a set of linear equations, instead of the quadratic programming required for classical SVMs. The approach is illustrated on a two-spiral benchmark classification problem.
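The abstract's key point, that training reduces to one linear system rather than a quadratic program, can be sketched as follows. This is a toy reconstruction of the standard LS-SVM system [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1], where Omega_kl = y_k y_l K(x_k, x_l); the RBF kernel choice, regularization value, and data are invented, and this is not the authors' code.

```python
import math

def rbf(x, z, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def solve(A, rhs):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lssvm_train(X, y, gamma_reg=10.0):
    """Build and solve the LS-SVM linear system for (b, alpha)."""
    n = len(X)
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    rhs = [0.0] + [1.0] * n
    for k in range(n):
        A[0][k + 1] = y[k]
        A[k + 1][0] = y[k]
        for l in range(n):
            A[k + 1][l + 1] = y[k] * y[l] * rbf(X[k], X[l])
        A[k + 1][k + 1] += 1.0 / gamma_reg  # regularization term I/gamma
    sol = solve(A, rhs)
    return sol[0], sol[1:]  # b, alpha

def lssvm_predict(X, y, alpha, b, x):
    s = sum(a * yk * rbf(xk, x) for a, yk, xk in zip(alpha, y, X)) + b
    return 1 if s >= 0 else -1

X = [[0.0, 0.0], [0.2, 0.1], [2.0, 2.0], [2.1, 1.9]]
y = [-1, -1, 1, 1]
b, alpha = lssvm_train(X, y)
print([lssvm_predict(X, y, alpha, b, x) for x in X])
```

Note the trade-off the letter implies: every training point gets a nonzero alpha, so the LS-SVM solution is generally not sparse, unlike the classical SVM.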