991.
992.
Polarized light imaging (PLI) is a method for imaging fiber orientation in gross histological brain sections based on the birefringent properties of the myelin sheaths. The method uses the transmission of polarized light to quantitatively estimate the fiber orientation and inclination angles at every point of the imaged section. Multiple sections can be assembled into a 3D volume, from which the 3D extent of fiber tracts can be extracted. This article describes the physical principles of PLI and two major applications of the method: imaging white matter orientation in the rat brain and generating fiber orientation maps of white and gray matter in the human brain. The strengths and weaknesses of the method are set out.
993.
Cheung SS, Westwood DA, Knox MK. Ergonomics, 2007, 50(2): 275-288
Many contemporary workers are routinely exposed to mild cold stress, which may compromise mental function and lead to accidents. A study investigated the effect of mild body cooling (a 1.0°C drop in rectal temperature, Tre) on vigilance (i.e. sustained attention) and the orienting of spatial attention (i.e. spatially selective processing of visual information). Vigilance and spatial attention tests were administered to 14 healthy males and six females at four stages (pre-immersion, ΔTre = 0, -0.5 and -1.0°C) of a gradual, head-out immersion cooling session (18-25°C water), and in four time-matched stages of a contrast session, in which participants sat in an empty tub and no cooling took place. In the spatial attention test, target discrimination times were similar for all stages of the contrast session, but increased significantly in the cooling phase upon immersion (ΔTre = 0°C), with no further increases at ΔTre = -0.5 and -1.0°C. Despite global response slowing, cooling did not affect the normal pattern of spatial orienting. In the vigilance test, the variability of detection time was adversely affected in the cooling but not the contrast trials: variability increased at immersion but did not increase further with additional cooling. These findings suggest that attentional impairments are more closely linked to the distracting effects of cold skin temperature than to decreases in body core temperature.
994.
OBJECTIVE: To investigate the effect of optic flow on gait behavior during treadmill walking using an immersive virtual reality (VR) setup and compare it with conventional treadmill walking (TW) and overground walking (OW). BACKGROUND: Previous research comparing TW with OW speculated that a lack of optic flow (relative visual movement between a walker and the environment) during TW may have led to perceptual cue conflicts, resulting in differences in gait behavior compared with OW. METHOD: Participants walked under three locomotion conditions (OW, TW, and TW with VR [TWVR]) and three walking constraint conditions (no constraint, a temporal/pacing constraint, and a spatial/path-following constraint). Presence questionnaires (PQs) were administered at the close of the TWVR trials. Trials were subjected to video analysis to determine the spatiotemporal and kinematic variables used for comparison of the locomotion conditions. RESULTS: ANOVA revealed gait behavior during TWVR to be intermediate between that of OW and TW. Speed and cadence during TWVR were significantly different from those of TW, whereas knee angle was comparable to that of OW. Correlation analysis of PQ scores with gait measures revealed a positive linear association of the distraction subfactor of the PQ with walking speed during TWVR, suggesting that an increased sense of presence in the virtual environment led to increases in walking speed. CONCLUSION: The results demonstrate that providing optic flow during TW through VR has an impact on gait behavior. APPLICATION: This study provides a basis for developing simple VR locomotion interface setups for gait research.
995.
We consider the problem of matching images to tell whether they come from the same scene viewed under different lighting conditions. We show that the surface characteristics determine the type of image comparison method that should be used. Previous work has shown the effectiveness of comparing the image gradient direction for surfaces with material properties that change rapidly in one direction. We show analytically that two other widely used methods, normalized correlation of small windows and comparison of multiscale oriented filters, essentially compute the same thing. Then, we show that for surfaces whose properties change more slowly, comparison of the output of whitening filters is most effective. This suggests that a combination of these strategies should be employed to compare general objects. We discuss indications that Gabor jets use such a mixed strategy effectively, and we propose a new mixed strategy. We validate our results on synthetic and real images.
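As a concrete illustration of the gradient-direction cue discussed above, the following minimal Python sketch compares two grayscale images by the direction of their intensity gradients. The function name, the magnitude threshold, and the modulo-π angle wrapping are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def gradient_direction_distance(img_a, img_b, mag_thresh=1e-3):
    """Compare two grayscale images (2D float arrays) by the direction of
    their intensity gradients, one of the lighting-insensitive cues the
    abstract discusses.  Illustrative sketch only."""
    # Image gradients along rows (y) and columns (x).
    gy_a, gx_a = np.gradient(img_a)
    gy_b, gx_b = np.gradient(img_b)

    # Only compare pixels where both images have a meaningful gradient,
    # since direction is undefined (and noisy) in flat regions.
    mag_a = np.hypot(gx_a, gy_a)
    mag_b = np.hypot(gx_b, gy_b)
    mask = (mag_a > mag_thresh) & (mag_b > mag_thresh)
    if not mask.any():
        return np.nan

    # Angular difference wrapped to [0, pi/2]: direction is treated modulo
    # pi here so that a simple contrast reversal does not count as a change.
    diff = np.arctan2(gy_a, gx_a) - np.arctan2(gy_b, gx_b)
    diff = np.abs((diff + np.pi / 2) % np.pi - np.pi / 2)
    return float(diff[mask].mean())
```

A small mean angular difference suggests the two images depict the same scene under different lighting; for slowly varying surfaces, window-based normalized correlation or whitening-filter comparison would take the place of this cue, per the abstract.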
996.
This paper considers the estimation of Kendall's tau for bivariate data (X,Y) when only Y is subject to right-censoring. Although τ is estimable under weak regularity conditions, the estimators proposed by Brown et al. [1974. Nonparametric tests of independence for censored data, with applications to heart transplant studies. Reliability and Biometry, 327-354], Weier and Basu [1980. An investigation of Kendall's τ modified for censored data with applications. J. Statist. Plann. Inference 4, 381-390] and Oakes [1982. A concordance test for independence in the presence of censoring. Biometrics 38, 451-455], which are standard in this context, fail to be consistent when τ≠0 because they only use information from the marginal distributions. An exception is the renormalized estimator of Oakes [2006. On consistency of Kendall's tau under censoring. Technical Report, Department of Biostatistics and Computational Biology, University of Rochester, Rochester, NY], whose consistency has been established for all possible values of τ, but only in the context of the gamma frailty model. Wang and Wells [2000. Estimation of Kendall's tau under censoring. Statist. Sinica 10, 1199-1215] were the first to propose an estimator which accounts for joint information. Four more are developed here: the first three extend the methods of Brown et al. [1974], Weier and Basu [1980] and Oakes [1982] to account for information provided by X, while the fourth estimator inverts an estimate of Pr(Yi ≤ y | Xi = xi, Yi > ci) to obtain an imputation of the value of Yi censored at Ci = ci. Following Lim [2006. Permutation procedures with censored data. Comput. Statist. Data Anal. 50, 332-345], a nonparametric estimator is also considered which averages the estimates obtained from a large number of possible configurations of the observed data (X1,Z1),…,(Xn,Zn), where Zi = min(Yi,Ci). Simulations are presented which compare these various estimators of Kendall's tau. An illustration involving the well-known Stanford heart transplant data is also presented.
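For reference, the sketch below computes the plain, uncensored Kendall's tau directly from its concordance definition; the censored-data estimators discussed in the abstract refine this quantity, and the function name and simple tie handling here are illustrative assumptions.

```python
import numpy as np

def kendalls_tau(x, y):
    """Uncensored Kendall's tau from its concordance definition:
    tau = (concordant pairs - discordant pairs) / total pairs.
    Tied pairs contribute zero.  This is only the fully observed baseline
    that the paper's censored-data estimators generalize."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    num = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            # +1 for a concordant pair, -1 for a discordant pair, 0 for ties.
            num += np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
    return num / (n * (n - 1) / 2)
```

With no ties, this is the sample analogue of τ = P(concordance) − P(discordance).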
997.
The problem of fitting a straight line to a finite collection of points in the plane is an important problem in statistical estimation. Robust estimators are widely used because of their lack of sensitivity to outlying data points. The least median-of-squares (LMS) regression line estimator is among the best known robust estimators. Given a set of n points in the plane, it is defined to be the line that minimizes the median squared residual or, more generally, the line that minimizes the residual of any given quantile q, where 0 < q ≤ 1. This problem is equivalent to finding the strip defined by two parallel lines of minimum vertical separation that encloses at least half of the points. The best known exact algorithm for this problem runs in O(n²) time. We consider two types of approximations: a residual approximation, which approximates the vertical height of the strip to within a given error bound εr ≥ 0, and a quantile approximation, which approximates the fraction of points that lie within the strip to within a given error bound εq ≥ 0. We present two randomized approximation algorithms for the LMS line estimator. The first is a conceptually simple quantile approximation algorithm, which given fixed q and εq > 0 runs in O(n log n) time. The second is a practical algorithm, which can solve both types of approximation problems or be used as an exact algorithm. We prove a bound on this algorithm's expected running time when used as a quantile approximation. We present empirical evidence that the latter algorithm is quite efficient for a wide variety of input distributions, even when used as an exact algorithm.
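To make the LMS objective concrete, the sketch below uses the classical randomized elemental-subset heuristic: fit a line through two random points and score it by the q-quantile of squared vertical residuals. This is only a baseline for intuition, not the approximation algorithm analyzed in the paper; the function name and trial count are illustrative.

```python
import numpy as np

def lms_line_random(points, trials=500, q=0.5, rng=None):
    """Randomized least-median-of-squares line fit: repeatedly fit a line
    through two random points and keep the one whose q-quantile of squared
    vertical residuals is smallest (q=0.5 gives the median squared residual).
    Elemental-subset heuristic, for illustration only."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    best_score, best_line = np.inf, None
    for _ in range(trials):
        i, j = rng.choice(n, size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:                      # skip vertical candidate lines
            continue
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        resid2 = (pts[:, 1] - (slope * pts[:, 0] + intercept)) ** 2
        score = np.quantile(resid2, q)
        if score < best_score:
            best_score, best_line = score, (slope, intercept)
    return best_line
```

Because the score is a quantile rather than a sum, a large minority of outliers barely affects the chosen line, which is the point of the LMS criterion.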
998.
On classification with incomplete data   (cited 4 times: 0 self-citations, 4 citations by others)
We address the incomplete-data problem in which feature vectors to be classified are missing data (features). A (supervised) logistic regression algorithm for the classification of incomplete data is developed. Single or multiple imputation for the missing data is avoided by performing analytic integration with an estimated conditional density function (conditioned on the observed data). Conditional density functions are estimated using a Gaussian mixture model (GMM), with parameter estimation performed using both expectation-maximization (EM) and variational Bayesian EM (VB-EM). The proposed supervised algorithm is then extended to the semisupervised case by incorporating graph-based regularization. The semisupervised algorithm utilizes all available data, both incomplete and complete, as well as labeled and unlabeled. Experimental results of the proposed classification algorithms are shown.
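The analytic step that lets imputation be avoided is conditioning a Gaussian on the observed features. The sketch below shows that conditioning formula for a single Gaussian component; a full implementation along the lines of the abstract would mix such conditionals over the components of an EM-fitted GMM and integrate the classifier's likelihood against them. Function and variable names are illustrative assumptions.

```python
import numpy as np

def gaussian_conditional(mu, cov, obs_idx, mis_idx, x_obs):
    """Conditional distribution of the missing features given the observed
    ones for one Gaussian component: the analytic building block behind
    GMM-based integration over missing data.  Illustrative sketch."""
    mu = np.asarray(mu, float)
    cov = np.asarray(cov, float)
    # Partition the mean and covariance into observed / missing blocks.
    mu_o, mu_m = mu[obs_idx], mu[mis_idx]
    S_oo = cov[np.ix_(obs_idx, obs_idx)]
    S_mo = cov[np.ix_(mis_idx, obs_idx)]
    S_mm = cov[np.ix_(mis_idx, mis_idx)]
    # Standard Gaussian conditioning: K = S_mo S_oo^{-1}.
    K = S_mo @ np.linalg.solve(S_oo, np.eye(len(obs_idx)))
    cond_mean = mu_m + K @ (np.asarray(x_obs, float) - mu_o)
    cond_cov = S_mm - K @ S_mo.T
    return cond_mean, cond_cov
```

Because this conditional is available in closed form for each component, expectations over the missing features can be computed analytically instead of by single or multiple imputation.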
999.
This paper develops an unsupervised discriminant projection (UDP) technique for dimensionality reduction of high-dimensional data in small sample size cases. UDP can be seen as a linear approximation of a multi-manifold learning framework which takes into account both local and nonlocal quantities. UDP characterizes the local scatter as well as the nonlocal scatter, seeking a projection that simultaneously maximizes the nonlocal scatter and minimizes the local scatter. This characteristic makes UDP more intuitive and more powerful than the most up-to-date method, locality preserving projection (LPP), which considers only the local scatter for clustering or classification tasks. The proposed method is applied to face and palm biometrics and is examined using the Yale, FERET, and AR face image databases and the PolyU palmprint database. The experimental results show that UDP consistently outperforms LPP and PCA, and outperforms LDA when the training sample size per class is small. This demonstrates that UDP is a good choice for real-world biometrics applications.
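The sketch below conveys the local/nonlocal scatter idea in the spirit of UDP: pairs of k-nearest neighbours contribute to a local scatter matrix, all other pairs to a nonlocal scatter matrix, and the projection directions maximize nonlocal scatter relative to local scatter. Weighting schemes, the small-sample PCA stage, and other details of the published method are omitted; names, parameters, and the ridge term are illustrative assumptions.

```python
import numpy as np

def udp_projection(X, k_neighbors=5, n_components=2):
    """UDP-style projection sketch: separate pairwise scatter into local
    (k-NN pairs) and nonlocal (all other pairs) parts, then take directions
    maximizing nonlocal scatter relative to local scatter."""
    X = np.asarray(X, float)
    n, d = X.shape
    k = min(k_neighbors, n - 1)

    # Pairwise squared distances define the k-NN adjacency (column 0 is self).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)
    adj = np.zeros((n, n), bool)
    rows = np.repeat(np.arange(n), k)
    adj[rows, order[:, 1:k + 1].ravel()] = True
    adj |= adj.T                      # symmetrize the neighbourhood graph

    S_local = np.zeros((d, d))
    S_nonlocal = np.zeros((d, d))
    for i in range(n):
        for j in range(i + 1, n):
            outer = np.outer(X[i] - X[j], X[i] - X[j])
            if adj[i, j]:
                S_local += outer
            else:
                S_nonlocal += outer

    # Generalized eigenproblem S_nonlocal w = lambda * S_local w; a small
    # ridge keeps S_local invertible in small-sample settings.
    M = np.linalg.solve(S_local + 1e-6 * np.eye(d), S_nonlocal)
    vals, vecs = np.linalg.eig(M)
    top = np.argsort(-vals.real)[:n_components]
    W = vecs[:, top].real
    return X @ W, W
```

LPP, by contrast, optimizes only the local term, which is the distinction the abstract emphasizes.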
1000.
Formal translations constitute a suitable framework for dealing with many problems in pattern recognition and computational linguistics. The application of formal transducers to these areas requires a stochastic extension for dealing with noisy, distorted patterns with high variability. In this paper, several estimation criteria are proposed and developed for the parameter estimation of regular syntax-directed translation schemata: maximum likelihood estimation, minimum conditional entropy estimation, and conditional maximum likelihood estimation. The last two criteria were proposed in order to deal with situations where training data are sparse. These criteria take into account the possibility of ambiguity in the translations, i.e., there can be different output strings for a single input string. In this case, the final goal of the stochastic framework is to find the highest-probability translation of a given input string. These criteria were tested on a translation task with a high degree of ambiguity.
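When derivations are fully observed, the maximum-likelihood criterion mentioned above reduces to relative-frequency estimation of rule probabilities. The sketch below shows only that simple case; the minimum-conditional-entropy and conditional-maximum-likelihood criteria of the paper, and training from ambiguous data where derivations are hidden, require iterative procedures not shown here. The function name and rule representation are illustrative assumptions.

```python
from collections import defaultdict

def ml_rule_probabilities(rule_counts):
    """Maximum-likelihood probabilities for the rules of a stochastic grammar
    or translation schema with observed derivations: each rule's probability
    is its relative frequency among rules sharing the same left-hand
    nonterminal.  Illustrative sketch only."""
    totals = defaultdict(float)
    for (lhs, _rhs), count in rule_counts.items():
        totals[lhs] += count
    return {rule: count / totals[rule[0]] for rule, count in rule_counts.items()}
```

For example, counts {("S", "a b"): 3, ("S", "c"): 1} yield probabilities 0.75 and 0.25 for the two S-rules.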