51.
Although recent years have seen significant advances in the spatial resolution possible in the transmission electron microscope (TEM), the temporal resolution of most microscopes is limited to video rate at best. This lack of temporal resolution means that our understanding of dynamic processes in materials is extremely limited. High temporal resolution in the TEM can be achieved, however, by replacing the normal thermionic or field emission source with a photoemission source. In this case the temporal resolution is limited only by the ability to create a short pulse of photoexcited electrons in the source, and this can be as short as a few femtoseconds. The operation of the photoemission source and the control of the subsequent pulse of electrons (containing as many as 5 × 10^7 electrons) create significant challenges for a standard microscope column that is designed to operate with a single electron in the column at any one time. In this paper, the generation and control of electron pulses in the TEM to obtain a temporal resolution of less than 10^-6 s will be described, and the effect of the pulse duration and current density on the spatial resolution of the instrument will be examined. The potential of these levels of temporal and spatial resolution for the study of dynamic materials processes will also be discussed.
52.
Diagnosing cardiovascular system (CVS) diseases from clinically measured data is difficult, due to the complexity of the hemodynamic and autonomic nervous system (ANS) interactions. Physiological models could describe these interactions to enable simulation of a variety of diseases, and could be combined with parameter estimation algorithms to help clinicians diagnose CVS dysfunctions. This paper presents modifications to an existing CVS model to include a minimal physiological model of ANS activation. A minimal model is used so as to minimise the number of parameters required to specify ANS activation, enabling the effects of each parameter on hemodynamics to be easily understood. The combined CVS and ANS model is verified by simulating a variety of CVS diseases, and comparing simulation results with common physiological understanding of ANS function and the characteristic hemodynamics seen in these diseases. The model of ANS activation is required to simulate hemodynamic effects such as increased cardiac output in septic shock, elevated pulmonary artery pressure in left ventricular infarction, and elevated filling pressures in pericardial tamponade. This is the first known example of a minimal CVS model that includes a generic model of ANS activation and is shown to simulate diseases from throughout the CVS.
53.
Information visualisation is about gaining insight into data through a visual representation. This data is often multivariate and, increasingly, the datasets are very large. To help us explore all this data, numerous visualisation applications, both commercial and research prototypes, have been designed using a variety of techniques and algorithms. Whether they are dedicated to geo-spatial data or skewed hierarchical data, most of the visualisations need to adopt strategies for dealing with overcrowded displays, brought about by too much data to fit in too small a display space. This paper analyses a large number of these clutter reduction methods, classifying them both in terms of how they deal with clutter reduction and, more importantly, in terms of their benefits and losses. The aim of the resulting taxonomy is to act as a guide for matching techniques to problems where different criteria may have different importance, and, more importantly, as a means to critique and hence develop existing and new techniques.
54.
This paper describes models and algorithms for the real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from color/contrast or from stereo alone is known to be error-prone. Here, color, contrast, and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, layered dynamic programming (LDP), solves stereo in an extended six-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive color model that is learned on-the-fly and stereo disparities are obtained by dynamic programming. The second algorithm, layered graph cut (LGC), does not directly solve stereo. Instead, the stereo match likelihood is marginalized over disparities to evaluate foreground and background hypotheses and then fused with a contrast-sensitive color model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than either stereo or color/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output.
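The following sketch illustrates the fusion step described above for the LGC-style approach: a stereo-match likelihood is marginalized over disparities under foreground and background disparity priors, combined with a per-pixel color likelihood, and a label is chosen per pixel. It is not the authors' implementation; the data is synthetic, the disparity priors and color models are assumptions, and the contrast-sensitive smoothness term and ternary graph cut of the real method are omitted in favor of a simple per-pixel decision.

```python
# Minimal sketch of disparity-marginalized stereo/color fusion (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
H, W, D = 60, 80, 16                     # image size and number of disparities

# Synthetic per-pixel, per-disparity match costs (lower = better match).
match_cost = rng.random((H, W, D))

# Assumed disparity priors: foreground favours large disparities (near objects),
# background favours small ones.
d = np.arange(D)
p_d_fg = np.exp(-(d - 12) ** 2 / 8.0); p_d_fg /= p_d_fg.sum()
p_d_bg = np.exp(-(d - 3) ** 2 / 8.0);  p_d_bg /= p_d_bg.sum()

# Stereo-match likelihood marginalized over disparities for each hypothesis.
match_lik = np.exp(-match_cost)          # cost -> likelihood
L_fg_stereo = (match_lik * p_d_fg).sum(axis=2)
L_bg_stereo = (match_lik * p_d_bg).sum(axis=2)

# Toy color likelihoods (the real method learns contrast-sensitive color
# models on-the-fly; these Gaussians are stand-ins).
img = rng.random((H, W))
L_fg_color = np.exp(-(img - 0.7) ** 2 / 0.02)
L_bg_color = np.exp(-(img - 0.3) ** 2 / 0.02)

# Fuse log-likelihoods and take the per-pixel MAP label.
log_fg = np.log(L_fg_stereo + 1e-12) + np.log(L_fg_color + 1e-12)
log_bg = np.log(L_bg_stereo + 1e-12) + np.log(L_bg_color + 1e-12)
foreground_mask = log_fg > log_bg
print("foreground fraction:", foreground_mask.mean())
```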
55.
Hyperglycaemia is prevalent in critical illness and increases the risk of further complications and mortality, while tight control can reduce mortality up to 43%. Adaptive control methods are capable of highly accurate, targeted blood glucose regulation using limited numbers of manual measurements due to patient discomfort and labour intensity. Therefore, the option to obtain greater data density using emerging continuous glucose sensing devices is attractive. However, the few such systems currently available can have errors in excess of 20-30%. In contrast, typical bedside testing kits have errors of approximately 7-10%. Despite greater measurement frequency larger errors significantly impact the resulting glucose and patient specific parameter estimates, and thus the control actions determined creating an important safety and performance issue. This paper models the impact of the continuous glucose monitoring system (CGMS, Medtronic, Northridge, CA) on model-based parameter identification and glucose prediction. An integral-based fitting and filtering method is developed to reduce the effect of these errors. A noise model is developed based on CGMS data reported in the literature, and is slightly conservative with a mean Clarke Error Grid (CEG) correlation of R=0.81 (range: 0.68-0.88) as compared to a reported value of R=0.82 in a critical care study. Using 17 virtual patient profiles developed from retrospective clinical data, this noise model was used to test the methods developed. Monte-Carlo simulation for each patient resulted in an average absolute 1-h glucose prediction error of 6.20% (range: 4.97-8.06%) with an average standard deviation per patient of 5.22% (range: 3.26-8.55%). Note that all the methods and results are generalizable to similar applications outside of critical care, such as less acute wards and eventually ambulatory individuals. Clinically, the results show one possible computational method for managing the larger errors encountered in emerging continuous blood glucose sensors, thus enabling their more effective use in clinical glucose regulation studies.  相似文献   
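The sketch below shows the shape of this kind of Monte-Carlo evaluation: a virtual glucose profile is corrupted with multiplicative CGMS-like noise, smoothed, and the 1-h-ahead prediction error is averaged over many noise realisations. The sinusoidal "patient", the 10% noise level, the moving-average filter, and the persistence predictor are all illustrative assumptions, not the paper's integral-based fitting method or its reported noise model.

```python
# Minimal Monte-Carlo sketch of evaluating 1-h glucose prediction under
# CGMS-like sensor noise (all signals and models are illustrative).
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 24 * 60, 5)                                  # 5-min samples over 24 h
true_glucose = 7.0 + 2.0 * np.sin(2 * np.pi * t / (24 * 60))  # mmol/L, synthetic

def simulate_once():
    # Multiplicative noise as a crude stand-in for CGMS measurement error.
    noisy = true_glucose * (1 + 0.10 * rng.standard_normal(true_glucose.size))
    # Moving-average filter as a stand-in for integral-based filtering.
    kernel = np.ones(7) / 7.0
    smoothed = np.convolve(noisy, kernel, mode="same")
    # Persistence predictor: predict glucose 1 h (12 samples) ahead.
    pred = smoothed[:-12]
    actual = true_glucose[12:]
    return np.mean(np.abs(pred - actual) / actual) * 100      # % absolute error

errors = [simulate_once() for _ in range(200)]
print(f"mean 1-h prediction error: {np.mean(errors):.2f}% "
      f"(sd {np.std(errors):.2f}%)")
```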
56.
In radiotherapy treatment planning, tumor volumes and anatomical structures are manually contoured for dose calculation, which is time-consuming for clinicians. This study examines the use of semi-automated segmentation of CT images. A few high-curvature points are manually drawn on a CT slice, and Fourier interpolation is then used to complete the contour. Subsequently, optical flow, a deformable image registration method, is used to map the original contour to other slices. This technique has been applied successfully to contour anatomical structures and tumors. The maximum difference between the mapped contours and manually drawn contours was 6 pixels, which is similar in magnitude to the difference one would see between contours drawn manually by different clinicians. The technique fails when the region to contour is topologically different between two slices. The recommended solution is to manually delineate contours on a sparse subset of slices and then map in both directions to fill in the remaining slices.
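The Fourier interpolation step can be illustrated with a short sketch: a few manually picked contour points are treated as complex samples x + iy, their DFT is zero-padded, and the inverse transform yields a dense, smooth closed contour passing through the original points. The point coordinates below are made up, and the optical-flow mapping to neighbouring slices is not shown.

```python
# Minimal sketch of Fourier (trigonometric) interpolation of a closed contour.
import numpy as np

# Sparse, manually drawn high-curvature points on one CT slice (x, y), assumed.
pts = np.array([[10.0, 5.0], [14.0, 12.0], [9.0, 18.0],
                [2.0, 14.0], [1.0, 7.0]])
z = pts[:, 0] + 1j * pts[:, 1]           # closed contour as complex samples

n, N = len(z), 200                        # upsample 5 points to 200
Z = np.fft.fft(z)

# Zero-pad the spectrum: keep low frequencies at both ends, zeros in the middle.
Zpad = np.zeros(N, dtype=complex)
half = n // 2
Zpad[:half + 1] = Z[:half + 1]
Zpad[-(n - half - 1):] = Z[half + 1:]

dense = np.fft.ifft(Zpad) * (N / n)       # rescale for the longer transform
contour = np.column_stack([dense.real, dense.imag])
print(contour.shape)                       # (200, 2) smooth interpolated contour
```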
57.
The aim of the present study was to investigate the effect of social networking site (SNS) engagement on cognitive and social skills. We investigated the use of Facebook, Twitter, and YouTube in a group of young adults and tested their working memory, attentional skills, and reported levels of social connectedness. Results showed that certain activities on Facebook (such as checking friends’ status updates) and YouTube (telling a friend to watch a video) predicted working memory test performance. The findings also indicated that active and passive SNS users had qualitatively different profiles of attentional control. The active SNS users were more accurate and had fewer misses of the target stimuli in the first block of trials. They also did not devote their attentional resources exclusively to the target stimuli and were less likely to ignore distractor stimuli. Their engagement with SNSs appeared to be exploratory, and they assigned similar weight to incoming streams of information. With respect to social connectedness, participants’ self-reports were significantly related to Facebook use, but not Twitter or YouTube use, possibly as a result of the greater opportunity to share personal content on the former SNS.
58.
We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent statistics are estimated using a variational approximation that tends to focus on a single mode, and data-independent statistics are estimated using persistent Markov chains. The use of two quite different techniques for estimating the two types of statistic that enter into the gradient of the log likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer pretraining phase that initializes the weights sensibly. The pretraining also allows the variational inference to be initialized sensibly with a single bottom-up pass. We present results on the MNIST and NORB data sets showing that deep Boltzmann machines learn very good generative models of handwritten digits and 3D objects. We also show that the features discovered by deep Boltzmann machines are a very effective way to initialize the hidden layers of feedforward neural nets, which are then discriminatively fine-tuned.
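A minimal sketch of the two-estimator gradient at the heart of this learning rule is given below for a two-layer Boltzmann machine: data-dependent statistics come from a short mean-field fixed-point run, data-independent statistics from a persistent Gibbs chain, and the weight update is their difference. Layer sizes, the random "data", and the learning rate are illustrative assumptions; biases, layer-by-layer pretraining, and all practical refinements from the paper are omitted.

```python
# Minimal sketch: mean-field (data) statistics minus persistent-chain (model)
# statistics for a toy two-hidden-layer Boltzmann machine.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_v, n_h1, n_h2, batch = 20, 15, 10, 32
W1 = 0.01 * rng.standard_normal((n_v, n_h1))
W2 = 0.01 * rng.standard_normal((n_h1, n_h2))

# Persistent (fantasy) particles for the model-distribution chain.
fv = rng.integers(0, 2, (batch, n_v)).astype(float)
fh2 = rng.integers(0, 2, (batch, n_h2)).astype(float)

def train_step(v, lr=0.05):
    global fv, fh2
    # Data-dependent statistics via mean-field (variational) inference.
    mu2 = np.full((v.shape[0], n_h2), 0.5)
    for _ in range(10):
        mu1 = sigmoid(v @ W1 + mu2 @ W2.T)
        mu2 = sigmoid(mu1 @ W2)
    # Data-independent statistics via one step of persistent Gibbs sampling.
    ph1 = sigmoid(fv @ W1 + fh2 @ W2.T)
    fh1 = (rng.random(ph1.shape) < ph1).astype(float)
    fv = (rng.random((batch, n_v)) < sigmoid(fh1 @ W1.T)).astype(float)
    fh2 = (rng.random((batch, n_h2)) < sigmoid(fh1 @ W2)).astype(float)
    # Gradient = data statistics minus model statistics.
    dW1 = v.T @ mu1 / v.shape[0] - fv.T @ fh1 / batch
    dW2 = mu1.T @ mu2 / v.shape[0] - fh1.T @ fh2 / batch
    W1[...] += lr * dW1
    W2[...] += lr * dW2

for _ in range(5):                        # a few toy updates on random binary data
    train_step(rng.integers(0, 2, (batch, n_v)).astype(float))
print("weight norms:", np.linalg.norm(W1), np.linalg.norm(W2))
```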
59.
Ohshiro, Hussain, and Weliky (2011) recently showed that ferrets reared with exposure to flickering spot stimuli, in the absence of oriented visual experience, develop oriented receptive fields. They interpreted this as refutation of efficient coding models, which require oriented input in order to develop oriented receptive fields. Here we show that these data are compatible with the efficient coding hypothesis if the influence of spontaneous retinal waves is considered. We demonstrate that independent component analysis learns predominantly oriented receptive fields when trained on a mixture of spot stimuli and spontaneous retinal waves. Further, we show that the efficient coding hypothesis provides a compelling explanation for the contrast between the lack of receptive field changes seen in animals reared with spot stimuli and the significant cortical reorganisation observed in stripe-reared animals.
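The flavour of this experiment can be sketched as follows: ICA is trained on image patches drawn from a mixture of spot-like stimuli and spatially correlated inputs, and the learned filters are inspected for orientedness. The stimulus generators here (blurred single-pixel spots and low-pass-filtered noise) are crude stand-ins for the paper's spot stimuli and retinal-wave model.

```python
# Minimal sketch: ICA on patches from a mixture of spot and wave-like stimuli.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
P = 12                                     # patch size

def spot_patch():
    img = np.zeros((P, P))
    img[rng.integers(2, P - 2), rng.integers(2, P - 2)] = 1.0
    return gaussian_filter(img, 1.0)       # blurred bright spot

def wave_patch():
    # Spatially correlated blob as a stand-in for a retinal-wave fragment.
    return gaussian_filter(rng.standard_normal((P, P)), 2.5)

patches = np.array([spot_patch().ravel() if rng.random() < 0.5 else
                    wave_patch().ravel() for _ in range(5000)])
patches -= patches.mean(axis=0)

ica = FastICA(n_components=16, random_state=0, max_iter=1000)
ica.fit(patches)
filters = ica.components_.reshape(16, P, P)   # candidate receptive fields
print(filters.shape)                           # plot these to assess orientedness
```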
60.
Semi-naive Bayesian techniques seek to improve the accuracy of naive Bayes (NB) by relaxing the attribute independence assumption. We present a new type of semi-naive Bayesian operation, Subsumption Resolution (SR), which efficiently identifies occurrences of the specialization-generalization relationship and eliminates generalizations at classification time. We extend SR to Near-Subsumption Resolution (NSR) to delete near-generalizations in addition to generalizations. We develop two versions of SR: one that performs SR during training, called eager SR (ESR), and another that performs SR during testing, called lazy SR (LSR). We investigate the effect of ESR, LSR, NSR and conventional attribute elimination (BSE) on NB and Averaged One-Dependence Estimators (AODE), a powerful alternative to NB. BSE imposes very high training time overheads on NB and AODE accompanied by varying decreases in classification time overheads. ESR, LSR and NSR impose high training time and test time overheads on NB. However, LSR imposes no extra training time overheads and only modest test time overheads on AODE, while ESR and NSR impose modest training and test time overheads on AODE. Our extensive experimental comparison on sixty UCI data sets shows that applying BSE, LSR or NSR to NB significantly improves both zero-one loss and RMSE, while applying BSE, ESR or NSR to AODE significantly improves zero-one loss and RMSE and applying LSR to AODE significantly improves zero-one loss. The Friedman test and Nemenyi test show that AODE with ESR or NSR have a significant zero-one loss and RMSE advantage over Logistic Regression and a zero-one loss advantage over Weka’s LibSVM implementation with a grid parameter search on categorical data. AODE with LSR has a zero-one loss advantage over Logistic Regression and comparable zero-one loss with LibSVM. Finally, we examine the circumstances under which the elimination of near-generalizations proves beneficial.
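The core Subsumption Resolution idea can be sketched for naive Bayes: from training counts, attribute value x_i is detected as a generalization of x_j when every training instance containing x_j also contains x_i, and x_i is then dropped from the product at classification time so it contributes no redundant evidence. The toy data, the minimum-support threshold, and the Laplace smoothing below are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of Subsumption Resolution applied to naive Bayes.
import numpy as np
from collections import Counter
from itertools import combinations

# Toy data: two attributes (fever, cough) and a class label. In this made-up
# data, "fever=yes" occurs whenever "cough=severe" does, so it is a
# generalization of it and should be dropped for such instances.
X = np.array([["yes", "severe"], ["yes", "severe"], ["yes", "mild"],
              ["no", "none"],    ["no", "mild"],    ["yes", "none"]])
y = np.array(["flu", "flu", "cold", "healthy", "cold", "flu"])

pair_count = Counter()                      # joint counts of attribute values
val_count = Counter()
for row in X:
    vals = [(a, v) for a, v in enumerate(row)]
    val_count.update(vals)
    pair_count.update(combinations(vals, 2))

def generalisations(instance, min_support=2):
    """Attribute values to drop: x_i such that some other value implies it."""
    vals = [(a, v) for a, v in enumerate(instance)]
    drop = set()
    for xi, xj in combinations(vals, 2):
        for gen, spec in ((xi, xj), (xj, xi)):
            both = pair_count[tuple(sorted((gen, spec)))]
            if val_count[spec] >= min_support and both == val_count[spec]:
                drop.add(gen)               # gen occurs whenever spec occurs
    return drop

def classify(instance):
    drop = generalisations(instance)
    scores = {}
    for c in np.unique(y):
        mask = (y == c)
        score = np.log(mask.mean())
        for a, v in enumerate(instance):
            if (a, v) in drop:
                continue                    # subsumption resolution: skip it
            match = (X[mask][:, a] == v).sum()
            score += np.log((match + 1) / (mask.sum() + 2))   # Laplace smoothing
        scores[c] = score
    return max(scores, key=scores.get)

print(classify(["yes", "severe"]))
```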