991.
This paper considers the estimation of Kendall's tau for bivariate data (X,Y) when only Y is subject to right-censoring. Although τ is estimable under weak regularity conditions, the estimators proposed by Brown et al. [1974. Nonparametric tests of independence for censored data, with applications to heart transplant studies. Reliability and Biometry, 327-354], Weier and Basu [1980. An investigation of Kendall's τ modified for censored data with applications. J. Statist. Plann. Inference 4, 381-390] and Oakes [1982. A concordance test for independence in the presence of censoring. Biometrics 38, 451-455], which are standard in this context, fail to be consistent when τ ≠ 0 because they only use information from the marginal distributions. An exception is the renormalized estimator of Oakes [2006. On consistency of Kendall's tau under censoring. Technical Report, Department of Biostatistics and Computational Biology, University of Rochester, Rochester, NY], whose consistency has been established for all possible values of τ, but only in the context of the gamma frailty model. Wang and Wells [2000. Estimation of Kendall's tau under censoring. Statist. Sinica 10, 1199-1215] were the first to propose an estimator which accounts for joint information. Four more are developed here: the first three extend the methods of Brown et al. [1974], Weier and Basu [1980] and Oakes [1982] to account for the information provided by X, while the fourth inverts an estimate of Pr(Yi ≤ y | Xi = xi, Yi > ci) to impute the value of a Yi censored at Ci = ci. Following Lim [2006. Permutation procedures with censored data. Comput. Statist. Data Anal. 50, 332-345], a nonparametric estimator is also considered which averages the estimates obtained from a large number of possible configurations of the observed data (X1,Z1),…,(Xn,Zn), where Zi = min(Yi,Ci). Simulations comparing these various estimators of Kendall's tau are presented, together with an illustration involving the well-known Stanford heart transplant data.
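To make the concordance idea concrete, the sketch below counts only the pairs whose ordering in Y can be determined under right-censoring (an Oakes-1982-style statistic normalized by the usable pairs). The function name and the simulated data are illustrative; this is not one of the consistent estimators developed in the paper.

```python
import numpy as np

def kendall_tau_censored(x, z, delta):
    """Concordance-based tau for (X, Y) with Y right-censored (illustrative).

    x     : covariate values X_i (fully observed)
    z     : Z_i = min(Y_i, C_i)
    delta : 1 if Y_i was observed (Z_i = Y_i), 0 if censored

    A pair (i, j) is counted only when its concordance status is
    determinable: the smaller Z must correspond to an uncensored Y.
    """
    n = len(x)
    num, usable = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            dz = z[i] - z[j]
            if dz == 0:
                continue
            # the ordering of Y_i, Y_j is known only if the smaller Z is an event
            smaller_uncensored = delta[i] if dz < 0 else delta[j]
            if not smaller_uncensored:
                continue
            dx = x[i] - x[j]
            if dx == 0:
                continue
            usable += 1
            num += 1.0 if dx * dz > 0 else -1.0
    return num / usable if usable else 0.0

# toy example with hypothetical data
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = x + rng.normal(size=200)           # positive association
c = rng.exponential(2.0, size=200)     # censoring times
z = np.minimum(y, c)
delta = (y <= c).astype(int)
print(kendall_tau_censored(x, z, delta))
```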
992.
The problem of fitting a straight line to a finite collection of points in the plane is an important problem in statistical estimation. Robust estimators are widely used because of their lack of sensitivity to outlying data points. The least median-of-squares (LMS) regression line estimator is among the best known robust estimators. Given a set of n points in the plane, it is defined to be the line that minimizes the median squared residual or, more generally, the line that minimizes the residual of any given quantile q, where 0 < q ≤ 1. This problem is equivalent to finding the strip defined by two parallel lines of minimum vertical separation that encloses at least half of the points. The best known exact algorithm for this problem runs in O(n²) time. We consider two types of approximations: a residual approximation, which approximates the vertical height of the strip to within a given error bound εr ≥ 0, and a quantile approximation, which approximates the fraction of points that lie within the strip to within a given error bound εq ≥ 0. We present two randomized approximation algorithms for the LMS line estimator. The first is a conceptually simple quantile approximation algorithm which, given fixed q and εq > 0, runs in O(n log n) time. The second is a practical algorithm which can solve both types of approximation problems or be used as an exact algorithm. We prove a bound on this algorithm's expected running time when it is used as a quantile approximation. We present empirical evidence that the latter algorithm is quite efficient for a wide variety of input distributions, even when used as an exact algorithm.
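As a point of reference for the LMS criterion itself, the following sketch scores candidate lines through random point pairs by their q-quantile squared residual and keeps the best one. This is the classical elemental-subset heuristic, not the approximation algorithms of the paper; all names, parameters, and data are illustrative.

```python
import numpy as np

def lms_line_random(x, y, n_trials=500, q=0.5, rng=None):
    """Randomised least-median-of-squares line fit (illustrative only).

    Repeatedly draws two distinct points, fits the line through them, and
    scores it by the q-quantile of squared vertical residuals; the line
    with the smallest score is kept.
    """
    rng = rng or np.random.default_rng()
    n = len(x)
    best = (np.inf, 0.0, 0.0)          # (score, slope, intercept)
    for _ in range(n_trials):
        i, j = rng.choice(n, size=2, replace=False)
        if x[i] == x[j]:               # skip vertical candidate lines
            continue
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        score = np.quantile((y - slope * x - intercept) ** 2, q)
        if score < best[0]:
            best = (score, slope, intercept)
    return best

# toy data: a line plus 30% gross outliers (hypothetical)
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.2, 200)
y[:60] += rng.uniform(10, 30, 60)      # outliers
print(lms_line_random(x, y, rng=rng))
```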
993.
On classification with incomplete data   (cited 4 times: 0 self-citations, 4 citations by others)
We address the incomplete-data problem in which feature vectors to be classified are missing data (features). A (supervised) logistic regression algorithm for the classification of incomplete data is developed. Single or multiple imputation for the missing data is avoided by performing analytic integration with an estimated conditional density function (conditioned on the observed data). Conditional density functions are estimated using a Gaussian mixture model (GMM), with parameter estimation performed using both expectation-maximization (EM) and variational Bayesian EM (VB-EM). The proposed supervised algorithm is then extended to the semisupervised case by incorporating graph-based regularization. The semisupervised algorithm utilizes all available data, both incomplete and complete, as well as labeled and unlabeled. Experimental results of the proposed classification algorithms are shown.
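The analytic integration over missing features rests on the fact that conditioning a Gaussian mixture on the observed coordinates yields another Gaussian mixture with reweighted components. A minimal sketch of that conditioning step follows; the function names and calling convention are illustrative, and the paper's EM/VB-EM fitting and logistic-regression integration are not reproduced.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_conditional(x_obs, obs_idx, mis_idx, weights, means, covs):
    """Condition a fitted GMM on the observed coordinates of one sample.

    Returns the component weights, means, and covariances of the
    conditional mixture p(x_mis | x_obs), from which an expectation
    (or an analytic integral) can be taken.
    """
    cond_w, cond_mu, cond_cov = [], [], []
    for pi_k, mu, S in zip(weights, means, covs):
        S_oo = S[np.ix_(obs_idx, obs_idx)]
        S_mo = S[np.ix_(mis_idx, obs_idx)]
        S_mm = S[np.ix_(mis_idx, mis_idx)]
        gain = S_mo @ np.linalg.inv(S_oo)
        cond_mu.append(mu[mis_idx] + gain @ (x_obs - mu[obs_idx]))
        cond_cov.append(S_mm - gain @ S_mo.T)
        # responsibility of component k given the observed coordinates
        cond_w.append(pi_k * multivariate_normal.pdf(x_obs, mu[obs_idx], S_oo))
    cond_w = np.array(cond_w) / np.sum(cond_w)
    return cond_w, cond_mu, cond_cov

def conditional_mean(x_obs, obs_idx, mis_idx, weights, means, covs):
    """Mean of the missing coordinates under the conditional mixture."""
    w, mus, _ = gmm_conditional(x_obs, obs_idx, mis_idx, weights, means, covs)
    return sum(wk * mk for wk, mk in zip(w, mus))
```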
994.
This paper develops an unsupervised discriminant projection (UDP) technique for dimensionality reduction of high-dimensional data in small sample size cases. UDP can be seen as a linear approximation of a multimanifolds-based learning framework which takes into account both the local and nonlocal quantities. UDP characterizes the local scatter as well as the nonlocal scatter, seeking to find a projection that simultaneously maximizes the nonlocal scatter and minimizes the local scatter. This characteristic makes UDP more intuitive and more powerful than the most up-to-date method, locality preserving projection (LPP), which considers only the local scatter for clustering or classification tasks. The proposed method is applied to face and palm biometrics and is examined using the Yale, FERET, and AR face image databases and the PolyU palmprint database. The experimental results show that UDP consistently outperforms LPP and PCA and outperforms LDA when the training sample size per class is small. This demonstrates that UDP is a good choice for real-world biometrics applications.
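A rough sketch of the UDP idea, assuming a k-NN graph defines which pairs are local: form local and nonlocal scatter matrices and take the leading generalized eigenvectors of the pencil (S_N, S_L). The regularization, the PCA preprocessing for small-sample-size problems, and the parameter settings of the paper are not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def _scatter(X, A):
    """0.5 * sum_ij A_ij (x_i - x_j)(x_i - x_j)^T for a symmetric affinity A."""
    D = np.diag(A.sum(axis=1))
    return X.T @ (D - A) @ X

def udp_projection(X, n_components=2, k=5):
    """UDP-style projection: maximise nonlocal scatter, minimise local scatter."""
    n, d = X.shape
    H = kneighbors_graph(X, k, mode='connectivity').toarray()
    H = np.maximum(H, H.T)                  # symmetric neighbourhood indicator
    NH = 1.0 - H                            # non-neighbour pairs
    np.fill_diagonal(NH, 0.0)
    S_L = _scatter(X, H) / (n * n)          # local scatter
    S_N = _scatter(X, NH) / (n * n)         # nonlocal scatter
    S_L += 1e-6 * np.trace(S_L) / d * np.eye(d)   # ridge so S_L is invertible
    evals, evecs = eigh(S_N, S_L)           # generalized eigenproblem
    W = evecs[:, np.argsort(evals)[::-1][:n_components]]
    return X @ W, W
```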
995.
Formal translations constitute a suitable framework for dealing with many problems in pattern recognition and computational linguistics. The application of formal transducers to these areas requires a stochastic extension for dealing with noisy, distorted patterns with high variability. In this paper, some estimation criteria are proposed and developed for the parameter estimation of regular syntax-directed translation schemata. These criteria are: maximum likelihood estimation, minimum conditional entropy estimation, and conditional maximum likelihood estimation. The last two criteria were proposed to deal with situations in which the training data are sparse. These criteria take into account the possibility of ambiguity in the translations: i.e., there can be different output strings for a single input string. In this case, the final goal of the stochastic framework is to find the highest probability translation of a given input string. These criteria were tested on a translation task which has a high degree of ambiguity.
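For the simplest of the three criteria, maximum likelihood with fully observed derivations reduces to relative-frequency counting of rule usage. The sketch below assumes such fully observed derivations (the ambiguous case treated in the paper would require expected counts, e.g. via EM), and the rule encoding is made up.

```python
from collections import Counter

def ml_rule_probabilities(derivations):
    """Maximum-likelihood (relative-frequency) rule probabilities.

    Each derivation is a list of rules of the form
    (lhs_state, input_symbol, output_string, next_state); the ML estimate
    of a rule's probability is its usage count divided by the total usage
    of rules sharing the same left-hand state.
    """
    rule_counts, lhs_counts = Counter(), Counter()
    for deriv in derivations:
        for rule in deriv:
            rule_counts[rule] += 1
            lhs_counts[rule[0]] += 1
    return {rule: c / lhs_counts[rule[0]] for rule, c in rule_counts.items()}

# hypothetical toy derivations for a regular translation scheme
derivs = [
    [('S', 'a', 'x', 'A'), ('A', 'b', 'y', None)],
    [('S', 'a', 'x', 'A'), ('A', 'c', 'z', None)],
    [('S', 'b', 'y', 'A'), ('A', 'b', 'y', None)],
]
print(ml_rule_probabilities(derivs))
```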
996.
Experience with the growing number of large-scale and long-term case-based reasoning (CBR) applications has led to increasing recognition of the importance of maintaining existing CBR systems. Recent research has focused on case-base maintenance (CBM), addressing such issues as maintaining consistency, preserving competence, and controlling case-base growth. A set of dimensions for case-base maintenance, proposed by Leake and Wilson, provides a framework for understanding and expanding CBM research. However, it also has been recognized that other knowledge containers can be equally important maintenance targets. Multiple researchers have addressed pieces of this more general maintenance problem, considering such issues as how to refine similarity criteria and adaptation knowledge. As with case-base maintenance, a framework of dimensions for characterizing more general maintenance activity, within and across knowledge containers, is desirable to unify and understand the state of the art, as well as to suggest new avenues of exploration by identifying points along the dimensions that have not yet been studied. This article presents such a framework by (1) refining and updating the earlier framework of dimensions for case-base maintenance, (2) applying the refined dimensions to the entire range of knowledge containers, and (3) extending the theory to include coordinated cross-container maintenance. The result is a framework for understanding the general problem of case-based reasoner maintenance (CBRM). Taking the new framework as a starting point, the article explores key issues for future CBRM research.
997.
Empirical Evaluation of User Models and User-Adapted Systems   (cited 5 times: 0 self-citations, 5 citations by others)
Empirical evaluations are needed to determine which users are helped or hindered by user-adapted interaction in user modeling systems. A review of past UMUAI articles reveals insufficient empirical evaluations, but an encouraging upward trend. Rules of thumb for experimental design, useful tests for covariates, and common threats to experimental validity are presented. Reporting standards including effect size and power are proposed.
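The reporting standards mentioned (effect size and statistical power) can be illustrated with a small calculation; the data, group sizes, and numbers below are made up.

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d for two independent samples (pooled standard deviation)."""
    na, nb = len(a), len(b)
    sp = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                 / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / sp

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample t-test at effect size d (equal n)."""
    df = 2 * n_per_group - 2
    nc = d * np.sqrt(n_per_group / 2.0)          # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

# hypothetical task-completion times: adaptive vs. non-adaptive interface
rng = np.random.default_rng(3)
adaptive = rng.normal(95, 15, 30)
control = rng.normal(105, 15, 30)
d = cohens_d(control, adaptive)
print(f"effect size d = {d:.2f}, power at n=30/group = "
      f"{power_two_sample_t(d, 30):.2f}")
```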
998.
999.
Many economic and social systems are essentially large multi-agent systems. By means of computational modeling, the complicated behavior of such systems can be investigated. Modeling a multi-agent system as an evolutionary agent system, several important choices have to be made for evolutionary operators. In particular, it is to be expected that evolutionary dynamics depend substantially on the selection scheme. We therefore investigate the influence of evolutionary selection mechanisms on a fundamental problem: the iterated prisoner's dilemma (IPD), an elegant model for the emergence of cooperation in a multi-agent system. We observe various types of behavior, cooperation level, and stability, depending on the selection mechanism and the selection intensity. Hence, our results are important for (1) the proper choice and application of selection schemes when modeling real economic situations and (2) assessing the validity of the conclusions drawn from computer experiments with these models. We also conclude that the role of selection in the evolution of multi-agent systems should be investigated further, for instance using more detailed and complex agent interaction models.
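A toy sketch of how the selection scheme and its intensity can be varied in an evolutionary IPD population is given below; the strategy set, payoff values, and parameters are illustrative and far simpler than the agent models studied in the paper.

```python
import numpy as np

# Row player's payoff with the standard T > R > P > S values (illustrative).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
STRATEGIES = ['ALLC', 'ALLD', 'TFT']

def move(strategy, opp_last):
    if strategy == 'ALLC':
        return 'C'
    if strategy == 'ALLD':
        return 'D'
    return opp_last or 'C'                     # TFT: start with C, then copy

def play_ipd(s1, s2, rounds=30):
    last1 = last2 = None
    total1 = total2 = 0
    for _ in range(rounds):
        m1, m2 = move(s1, last2), move(s2, last1)
        total1 += PAYOFF[(m1, m2)]
        total2 += PAYOFF[(m2, m1)]
        last1, last2 = m1, m2
    return total1 / rounds, total2 / rounds

def evolve(selection='tournament', t=4, pop_size=40, gens=60, seed=0):
    """Evolve an IPD population under a chosen selection scheme.

    'tournament' with tournament size t (larger t = stronger selection) or
    'proportional' (roulette wheel on fitness).  Returns the final fraction
    of non-ALLD agents as a crude cooperation indicator.
    """
    rng = np.random.default_rng(seed)
    pop = list(rng.choice(STRATEGIES, pop_size))
    for _ in range(gens):
        fit = np.zeros(pop_size)
        for i in range(pop_size):
            for j in range(i + 1, pop_size):
                a, b = play_ipd(pop[i], pop[j])
                fit[i] += a
                fit[j] += b
        if selection == 'tournament':
            parents = [pop[max(rng.choice(pop_size, t), key=lambda k: fit[k])]
                       for _ in range(pop_size)]
        else:                                  # fitness-proportional selection
            parents = list(rng.choice(pop, pop_size, p=fit / fit.sum()))
        # a little mutation keeps every strategy reachable
        pop = [s if rng.random() > 0.02 else rng.choice(STRATEGIES)
               for s in parents]
    return sum(s != 'ALLD' for s in pop) / pop_size

print('tournament  :', evolve('tournament'))
print('proportional:', evolve('proportional'))
```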
1000.
We present an approach for controlling robotic interactions with objects, using synthetic images generated by morphing shapes. In particular, we address the problem of positioning an eye-in-hand robotic system with respect to objects in the workspace for grasping and manipulation. In our formulation, the grasp position (and consequently the approach trajectory of the manipulator) varies with each object. The proposed solution to the problem consists of two parts. First, based on a model-based object recognition framework, images of the objects taken at the desired grasp pose are stored in a database. The recognition and identification of the grasp position for an unknown input object (selected from the family of recognizable objects) occurs by morphing its contour to the templates in the database and using the virtual energy spent during the morph as a dissimilarity measure. In the second step, the images synthesized during the morph are used to guide the eye-in-hand system and execute the grasp. The proposed method requires minimal calibration of the system. Furthermore, it conjoins techniques from shape recognition, computer graphics, and vision-based robot control in a unified engineering framework. Potential applications range from recognition and positioning with respect to partially occluded or deformable objects to planning robotic grasping based on human demonstration.