31.
Faster methods for the detection of foodborne microbial pathogens are needed. The polymerase chain reaction (PCR) can amplify specific segments of DNA and is used to detect and identify bacterial genes responsible for causing diseases in humans. The major features and requirements for the PCR are described along with a number of important variations. A considerable number of PCR-based assays have been developed, but they have been applied most often to clinical and environmental samples and more rarely for the detection of foodborne microorganisms. Much of the difficulty in implementing PCR for the analysis of food samples lies in the problems encountered during the preparation of template DNAs from food matrices; a variety of approaches and considerations are examined. PCR methods developed for the detection and identification of particular bacteria, viruses, and parasites found in foods are described and discussed, and the major features of these reactions are summarized.
32.
Brito, Cláudia; Esteves, Marisa; Peixoto, Hugo; Abelha, António; Machado, José. Wireless Networks (2022) 28(3): 1269-1277
Continuous ambulatory peritoneal dialysis (CAPD) is a treatment used by patients in the end stage of chronic kidney disease. Those patients need to be monitored using blood...
33.
Lethal effect of electric fields on isolated ventricular myocytes
Defibrillator-type shocks may cause electrical and contractile dysfunction. In this study, we determined the relationship between the probability of lethal injury and electric field intensity ($E$) in isolated rat ventricular myocytes, with emphasis on field orientation and stimulus waveform. This relationship was sigmoidal, with irreversible injury for $E \gg 50$ V/cm. During both threshold and lethal stimulation, cells were twofold more sensitive to the field when it was applied longitudinally (versus transversally) to the cell major axis. For a given $E$, the estimated maximum variation of transmembrane potential ($\Delta V_{\max}$) was greater for longitudinal stimuli, which might account for the greater sensitivity to the field. Cell death, however, occurred at lower $\Delta V_{\max}$ values for transversal shocks. This might be explained by a less steep spatial decay of transmembrane potential predicted for transversal stimulation, which would possibly result in electroporation occurring over a larger membrane area. For the same stimulus duration, cells were less sensitive to field-induced injury when shocks were biphasic (versus monophasic). Our results indicate that, although significant myocyte death may occur in the $E$ range expected during clinical defibrillation, biphasic shocks are less likely to produce irreversible cell injury.
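The sigmoidal dose-response relationship described above can be summarized with a logistic curve relating lethal-injury probability to field intensity. The following is a minimal illustrative sketch, not the authors' analysis; the data points, the parameter names (E50, k), and the use of scipy.optimize.curve_fit are assumptions.

```python
# Illustrative sketch: fit a logistic dose-response curve P(E) = 1 / (1 + exp(-(E - E50)/k))
# to hypothetical lethal-injury fractions observed at several field intensities.
# The data and parameter names are assumptions for illustration, not values from the study.
import numpy as np
from scipy.optimize import curve_fit

def logistic(E, E50, k):
    """Probability of lethal injury as a function of field intensity E (V/cm)."""
    return 1.0 / (1.0 + np.exp(-(E - E50) / k))

# Hypothetical example data: field intensity (V/cm) and observed lethal fraction.
E_obs = np.array([10, 20, 30, 40, 50, 60, 80, 100], dtype=float)
p_obs = np.array([0.02, 0.05, 0.15, 0.40, 0.70, 0.88, 0.97, 0.99])

popt, pcov = curve_fit(logistic, E_obs, p_obs, p0=[40.0, 10.0])
E50, k = popt
print(f"Estimated E50 = {E50:.1f} V/cm, slope factor k = {k:.1f} V/cm")
```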
34.
To date, researchers have measured net efficiencies of energy conversion using data from animals in energy chambers. The expense of this approach prevents the establishment of a large data base for quantitative studies. Our purpose was to investigate models that would enable us to use data collectable under normal field conditions to compare dairy cattle for their net energetic efficiency. Data from 357 Holstein cows of various parities in seven herds consisted of daily measures of DM intake and net energy intake, milk production, biweekly measures of milk components, and bimonthly BW. Eighteen alternative multiple regression models were fitted to each cow to estimate simultaneously the net efficiency of energy conversion for maintenance, lactation, pregnancy, and BW change during the positive energy balance period, the negative energy balance period, and the whole lactation. Results from several of the fitted models closely approximated literature values based on data from cows in energy chambers. These comparative results suggest that it is possible to estimate the efficiency of energy conversion for individual cows using data obtained from normal animal management situations.
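The regression idea described above can be illustrated with a minimal sketch that partitions a cow's daily energy intake into maintenance, lactation, and body-weight-change terms and reads partial efficiencies off the fitted coefficients. The synthetic data, the use of metabolic body weight (BW^0.75), and ordinary least squares are assumptions for illustration; this is not one of the 18 models compared in the study.

```python
# Minimal sketch: regress simulated daily energy intake on maintenance, lactation,
# and body-weight-change terms, then interpret the coefficients as partial
# requirements/efficiencies.  All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_days = 300

# Hypothetical daily field data for one cow.
bw = rng.normal(600, 15, n_days)            # body weight, kg
milk_energy = rng.normal(30, 4, n_days)     # energy secreted in milk, Mcal/day
bw_change = rng.normal(0.0, 0.5, n_days)    # daily body-weight change, kg/day
intake = (0.08 * bw**0.75 + 1.5 * milk_energy
          + 5.0 * bw_change + rng.normal(0, 1.0, n_days))   # simulated energy intake

# Design matrix: maintenance (metabolic BW), lactation (milk energy), BW change.
X = np.column_stack([bw**0.75, milk_energy, bw_change])
coef, *_ = np.linalg.lstsq(X, intake, rcond=None)
b_maint, b_lact, b_gain = coef

# Under this simplified model, the reciprocal of the lactation coefficient
# approximates the partial efficiency of converting intake energy to milk energy.
print(f"maintenance requirement ~ {b_maint:.3f} Mcal per kg^0.75 per day")
print(f"partial efficiency for lactation ~ {1.0 / b_lact:.2f}")
```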
35.
Fitting a causal dynamic model to an image is a fundamental problem in image processing, pattern recognition, and computer vision. In image restoration, for instance, the goal is to recover an estimate of the true image, preferably in the form of a parametric model, given an image that has been degraded by a combination of blur and additive white Gaussian noise. In texture analysis, on the other hand, a model of a particular texture image can serve as a tool for simulating texture patterns. Finally, in image enhancement, one computes a model of the true image, and the residuals between the image and the modeled image can be interpreted as the result of applying a de-noising filter. There are numerous other applications within the field of image processing that require a causal dynamic model, such as scene analysis, machined-parts inspection, and biometric analysis, to name only a few. Many types of causal dynamic models have been proposed in the literature, among which autoregressive moving average and state-space models (i.e., the Kalman filter) are the most commonly used. In this paper we introduce a 2-D stochastic state-space system identification algorithm for fitting a quarter-plane causal dynamic Roesser model to an image. The algorithm constructs a causal, recursive, and separable-in-denominator 2-D Kalman filter model. The algorithm is tested with three real images, and the quality of the estimated images is assessed.
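For reference, the quarter-plane causal Roesser model mentioned above propagates a horizontal state along rows and a vertical state along columns. The sketch below simulates that recursion on a random input; the state dimensions and system matrices are arbitrary assumptions, and it does not implement the paper's identification algorithm.

```python
# Minimal simulation sketch of a quarter-plane causal Roesser state-space model,
# the model class the paper fits to images.  The matrices below are random
# placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(1)
nh, nv = 2, 2                                          # horizontal / vertical state sizes
A = rng.normal(scale=0.2, size=(nh + nv, nh + nv))     # block matrix [[A11 A12], [A21 A22]]
B = rng.normal(scale=0.3, size=nh + nv)
C = rng.normal(scale=0.5, size=nh + nv)
D = 0.1

def simulate_roesser(u):
    """Run the quarter-plane Roesser recursion over a 2-D input field u."""
    rows, cols = u.shape
    xh = np.zeros((rows + 1, cols, nh))    # horizontal state x^h(i, j)
    xv = np.zeros((rows, cols + 1, nv))    # vertical state   x^v(i, j)
    y = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            x = np.concatenate([xh[i, j], xv[i, j]])
            x_next = A @ x + B * u[i, j]
            xh[i + 1, j] = x_next[:nh]     # advances along the row index
            xv[i, j + 1] = x_next[nh:]     # advances along the column index
            y[i, j] = C @ x + D * u[i, j]
    return y

texture = simulate_roesser(rng.normal(size=(64, 64)))  # e.g. a synthetic texture patch
print(texture.shape)
```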
36.
37.
A quantitative micromethod for caffeine determination
Summary: A combined thin-layer chromatography and densitometry procedure is described for the quantitative determination of caffeine in biological material. The method is applicable in the nanogram range, and sample volumes of less than 100 µl may be used. The samples (capillary blood) are extracted with an equal volume of chloroform, and the caffeine is then separated from interfering compounds by thin-layer chromatography on commercial silica gel 60 plates with chloroform/acetone (9 + 1, v/v) as the solvent; the running time is about 30 min. Quantitative densitometric determination is performed by remission measurement at 273 nm. The calibration curve is linear in the range from 10 to 60 ng caffeine per spot, concentrations as low as 1 mg/l caffeine can still be quantified reliably, and the detection limit is about 0.1 mg/l.
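Where a calibration curve such as the one above is linear, an unknown spot can be quantified by simple inverse regression. The sketch below is illustrative only; the remission readings are invented and are not values from the paper.

```python
# Illustrative calibration sketch for a densitometric assay: fit a straight line
# to standards between 10 and 60 ng per spot, then read an unknown off the line.
# All readings are hypothetical.
import numpy as np

ng_per_spot = np.array([10, 20, 30, 40, 50, 60], dtype=float)
reading = np.array([0.052, 0.101, 0.149, 0.205, 0.251, 0.304])  # hypothetical remission signal

slope, intercept = np.polyfit(ng_per_spot, reading, 1)   # linear calibration curve
unknown_reading = 0.180
estimated_ng = (unknown_reading - intercept) / slope
print(f"estimated caffeine: {estimated_ng:.1f} ng per spot")
```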
38.
The success of III-nitride optoelectronic devices paves the way towards emerging devices in microelectronics. These devices are currently at the threshold of commercialization; therefore, reliability considerations are becoming increasingly important. This paper reviews the material and process technology of III-nitride microelectronic devices from the standpoint of reliability. Since statistical reliability data are lacking in the current state of research, the review starts with a summary of how reliability can be designed into the process modules relevant to microelectronic devices. This includes a discussion of the most important issues of material growth, metallization, implantation, dry etching, and surface passivation. The subsequent chapter focuses on microelectronic devices and highlights technological challenges that have to be met in order to obtain reliable devices. Finally, results of lifetime experiments (thermal aging) demonstrate that III-nitride devices have the potential for reliable operation even at elevated temperatures up to 400°C.
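Thermal-aging lifetimes such as those mentioned above are commonly extrapolated between temperatures with an Arrhenius acceleration model, a standard reliability technique not described in the abstract itself. The sketch below shows that generic calculation with an assumed activation energy and an assumed measured lifetime, not the paper's data.

```python
# Illustrative Arrhenius extrapolation between a stress (aging) temperature and a
# use temperature.  The activation energy and the 1000 h lifetime are assumptions.
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K
E_A = 1.0               # assumed activation energy, eV

def acceleration_factor(t_use_c, t_stress_c, e_a=E_A):
    """Arrhenius acceleration factor between a use and a stress temperature (deg C)."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(e_a / K_B * (1.0 / t_use - 1.0 / t_stress))

# Hypothetical: 1000 h survived at 400 degC aging, extrapolated to 200 degC operation.
af = acceleration_factor(200.0, 400.0)
print(f"acceleration factor: {af:.1f}, extrapolated lifetime: {1000 * af:.0f} h")
```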
39.
The synthesis and microstructural characterization, by means of selected area electron diffraction (SAED) and high resolution electron microscopy (HREM), of the solid solution LaBa(x)Sr(1-x)CuGaO5 (0.1 ≤ x ≤ 0.9) is reported. Although an average brownmillerite Ima2 structure is proposed for the whole compositional range, SAED and HREM clearly show that structural defects appear as the barium content increases.
40.
Interconnect imperfections have become an important issue in modern nanometer technologies. Some of them cause Small Delay Defects (SDDs), which are difficult to detect, and SDDs not detected during testing may pose a reliability problem. Furthermore, nanometer issues (e.g. process variations, spatial correlations) represent important challenges for traditional delay test methods. In this paper, a methodology is proposed to compute the Detection Probability (DP) of resistive open and bridge defects using a statistical timing framework that takes process variations and other nanometer issues into account. The DP gives the sensitivity of the circuit performance to a given resistance range of the defect. The efficiency issue when analyzing large circuits is alleviated by using stratified sampling techniques to reduce the space of possible analyzed defect locations. The methodology is applied to several ISCAS benchmark circuits, and the obtained results show its feasibility. Measures can be taken for circuits with an unacceptable DP in order to improve test quality.
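The detection-probability idea can be illustrated with a crude Monte Carlo sketch: under process variation, a resistive defect adds path delay, and the defect counts as detected when the total delay exceeds the test clock period. The delay model, distributions, and resistance range below are assumptions for illustration, not the statistical timing framework of the paper.

```python
# Minimal Monte Carlo sketch of a delay-defect detection probability under
# process variation.  All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000
t_clk = 1.0                                                    # test clock period, ns

# Nominal fault-free path delay with Gaussian process variation.
path_delay = rng.normal(loc=0.80, scale=0.05, size=n_samples)  # ns

# Extra delay from a resistive open, modeled as a simple RC contribution.
r_defect = rng.uniform(1e3, 50e3, size=n_samples)              # defect resistance, ohm
c_load = 20e-15                                                # assumed load capacitance, F
extra_delay = 0.69 * r_defect * c_load * 1e9                   # ns

detected = (path_delay + extra_delay) > t_clk
print(f"detection probability over the sampled resistance range: {detected.mean():.3f}")
```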