131.
In this work a two-step approach to efficiently carrying out the hyper-parameter optimisation required for building kriging and gradient-enhanced kriging metamodels is presented. The suggested approach makes use of an initial line search along the hyper-diagonal of the design space in order to find a suitable starting point for a subsequent gradient-based optimisation algorithm. During the optimisation an upper bound constraint is imposed on the condition number of the correlation matrix in order to keep it from becoming ill-conditioned. Partial derivatives of both the condensed log-likelihood function and the condition number are obtained using the adjoint method, with the latter derived in this work. The approach is tested on a number of analytical examples and comparisons are made with other optimisation approaches. Finally, the approach is used to construct metamodels for a finite element model of an aircraft wing box comprising 126 thickness design variables and is then compared with a subset of the other optimisation approaches.
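As a rough illustration of the two-step procedure described above, the following Python sketch runs a line search along the hyper-diagonal of the hyper-parameter space and then refines the result with a gradient-based optimiser under an upper bound on the condition number of the correlation matrix. The function names, correlation model, bounds, and synthetic data are illustrative assumptions, and the gradients here come from the optimiser's finite differences rather than the adjoint method used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def corr_matrix(X, theta, nugget=1e-10):
    # Gaussian correlation: R_ij = exp(-sum_k theta_k * (x_ik - x_jk)^2)
    d2 = (X[:, None, :] - X[None, :, :]) ** 2
    R = np.exp(-(d2 * theta).sum(axis=2))
    return R + nugget * np.eye(len(X))

def neg_condensed_loglik(log10_theta, X, y):
    # Condensed (concentrated) negative log-likelihood for ordinary kriging.
    R = corr_matrix(X, 10.0 ** log10_theta)
    n = len(y)
    Ri = np.linalg.inv(R)              # fine for a small sketch; prefer Cholesky
    one = np.ones(n)
    mu = one @ Ri @ y / (one @ Ri @ one)
    resid = y - mu
    sigma2 = resid @ Ri @ resid / n
    return n * np.log(sigma2) + np.linalg.slogdet(R)[1]

def fit_hyperparameters(X, y, cond_max=1e12):
    d = X.shape[1]
    # Step 1: line search along the hyper-diagonal (all theta equal).
    grid = np.linspace(-3.0, 2.0, 30)                      # log10(theta)
    t0 = min(grid, key=lambda t: neg_condensed_loglik(np.full(d, t), X, y))
    # Step 2: gradient-based refinement with cond(R) bounded from above.
    cond_con = {"type": "ineq",
                "fun": lambda lt: cond_max
                                  - np.linalg.cond(corr_matrix(X, 10.0 ** lt))}
    res = minimize(neg_condensed_loglik, np.full(d, t0), args=(X, y),
                   method="SLSQP", bounds=[(-4.0, 3.0)] * d,
                   constraints=[cond_con])
    return 10.0 ** res.x

# Tiny synthetic example, just to show the call.
rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2]
print(fit_hyperparameters(X, y))
```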
132.
In the field of heuristic search it is usually assumed that admissible heuristics are consistent, implying that consistency is a desirable attribute. The term “inconsistent heuristic” has, at times, been portrayed negatively, as something to be avoided. Part of this is historical: early research discovered that inconsistency can lead to poor performance for A* (nodes might be re-expanded many times). However, the issue has never been fully investigated, and was not re-considered after the invention of IDA*. This paper shows that many of the preconceived notions about inconsistent heuristics are outdated. The worst-case exponential time of inconsistent heuristics is shown to only occur on contrived graphs with edge weights that are exponential in the size of the graph. Furthermore, the paper shows that rather than being something to be avoided, inconsistent heuristics often add a diversity of heuristic values into a search which can lead to a reduction in the number of node expansions. Inconsistent heuristics are easy to create, contrary to the common perception in the AI literature. To demonstrate this, a number of methods for achieving effective inconsistent heuristics are presented. Pathmax is a way of propagating inconsistent heuristic values in the search from parent to children. This technique is generalized into bidirectional pathmax (BPMX) which propagates values from a parent to a child node, and vice versa. BPMX can be integrated into IDA* and A*. When inconsistent heuristics are used with BPMX, experimental results show a large reduction in the search effort required by IDA*. Positive results are also presented for A* searches.
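The BPMX rule itself is compact: a parent's heuristic can be lifted to the maximum over its children of h(child) minus the edge cost, and the lifted value can then be pushed back down to the children, sometimes pruning them immediately. The sketch below folds this into a bare-bones IDA*-style search; the function names, the set-based cycle check, and the negative-value "found" sentinel are illustrative choices, not the paper's implementation.

```python
import math

def ida_star_bpmx(start, goal, h0, neighbors):
    """IDA*-style search sketch with bidirectional pathmax (BPMX).

    h0(n)        -> (possibly inconsistent) admissible heuristic estimate
    neighbors(n) -> iterable of (child, edge_cost) pairs
    Returns the cost of a solution found, or math.inf if none exists.
    """
    h = {}                                    # cached, BPMX-updated h values
    H = lambda n: h.setdefault(n, h0(n))

    def dfs(node, g, bound, path):
        f = g + H(node)
        if f > bound:
            return f                          # candidate for the next bound
        if node == goal:
            return -g                         # negative value signals "found"
        kids = [(c, w) for c, w in neighbors(node) if c not in path]
        for c, w in kids:                     # child -> parent propagation
            h[node] = max(H(node), H(c) - w)
        for c, w in kids:                     # parent -> child propagation
            h[c] = max(H(c), h[node] - w)
        if g + h[node] > bound:               # the lifted h may prune right away
            return g + h[node]
        nxt = math.inf
        for c, w in kids:
            t = dfs(c, g + w, bound, path | {c})
            if t <= 0:
                return t                      # bubble the solution cost up
            nxt = min(nxt, t)
        return nxt

    bound = H(start)
    while True:
        t = dfs(start, 0, bound, {start})
        if t <= 0:
            return -t
        if t == math.inf:
            return math.inf
        bound = t
```

On an explicit graph, `neighbors` can simply index into an adjacency dictionary of `(child, cost)` pairs.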
133.
Practical and financial constraints associated with traditional field-based lithological mapping are often responsible for the generation of maps with insufficient detail and inaccurately located contacts. In arid areas with well exposed rocks and soils, high-resolution multi- and hyperspectral imagery is a valuable mapping aid as lithological units can be readily discriminated and mapped by automatically matching image pixel spectra to a set of reference spectra. However, the use of spectral imagery in all but the most barren terrain is problematic because just small amounts of vegetation cover can obscure or mask the spectra of underlying geological substrates. The use of ancillary information may help to improve lithological discrimination, especially where geobotanical relationships are absent or where distinct lithologies exhibit inherent spectral similarity. This study assesses the efficacy of airborne multispectral imagery for detailed lithological mapping in a vegetated section of the Troodos ophiolite (Cyprus), and investigates whether the mapping performance can be enhanced through the integration of LiDAR-derived topographic data. In each case, a number of algorithms involving different combinations of input variables and classification routine were employed to maximise the mapping performance. Despite the potential problems posed by vegetation cover, geobotanical associations aided the generation of a lithological map, with a satisfactory overall accuracy of 65.5% and Kappa of 0.54, using only spectral information. Moreover, owing to the correlation between topography and lithology in the study area, the integration of LiDAR-derived topographic variables led to significant improvements of up to 22.5% in the overall mapping accuracy compared to spectral-only approaches. The improvements were found to be considerably greater for algorithms involving classification with an artificial neural network (the Kohonen Self-Organizing Map) than the parametric Maximum Likelihood Classifier. The results of this study demonstrate the enhanced capability of data integration for detailed lithological mapping in areas where spectral discrimination is complicated by the presence of vegetation or inherent spectral similarities.
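As a sketch of the data-integration idea, the snippet below stacks per-pixel spectral bands with LiDAR-derived topographic variables and compares a spectral-only classifier against the integrated feature set. The file names are placeholders, and a Gaussian quadratic discriminant stands in for a maximum-likelihood-style classifier; the study's Kohonen Self-Organizing Map is not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical inputs: per-pixel spectral bands, LiDAR-derived topographic
# variables (e.g. elevation, slope, curvature), and field-checked labels.
spectral = np.load("spectral_bands.npy")      # shape (n_pixels, n_bands)  -- placeholder file
topo     = np.load("lidar_topography.npy")    # shape (n_pixels, n_topo)   -- placeholder file
labels   = np.load("lithology_labels.npy")    # shape (n_pixels,)          -- placeholder file

# Spectral-only vs. integrated spectral + topographic feature sets.
for name, X in [("spectral only", spectral),
                ("spectral + LiDAR", np.hstack([spectral, topo]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                              stratify=labels, random_state=0)
    clf = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)   # Gaussian ML-style classifier
    pred = clf.predict(X_te)
    print(f"{name}: overall accuracy = {accuracy_score(y_te, pred):.3f}, "
          f"Kappa = {cohen_kappa_score(y_te, pred):.3f}")
```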
134.
This paper compares three methods for estimating renal function, as tested in rats. Acute renal failure (ARF) was induced via a 60-min bilateral renal artery clamp in 8 Sprague-Dawley rats and renal function was monitored for 1 week post-surgery. A two-compartment model was developed for estimating glomerular filtration via a bolus injection of a radio-labelled inulin tracer, and was compared with an estimated creatinine clearance method, modified using the Cockcroft-Gault equation for rats. These two methods were compared with selected ion flow tube-mass spectrometry (SIFT-MS) monitoring of breath analytes. Determination of renal function via SIFT-MS is desirable since results are available non-invasively and in real time. Relative decreases in renal function show very good correlation between all 3 methods (R² = 0.84, 0.91 and 0.72 for breath-inulin, inulin-creatinine, and breath-creatinine correlations, respectively), and indicate good promise for fast, non-invasive determination of renal function via breath testing.
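For a bolus tracer such as inulin, a two-compartment analysis typically fits a bi-exponential decay to the plasma curve and takes clearance as dose divided by the area under that curve. The sketch below shows this generic calculation; the sampling times, dose units, and starting guesses are assumptions, and neither the authors' exact model nor the rat-adapted Cockcroft-Gault step is reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, A, alpha, B, beta):
    # Two-compartment bolus response: C(t) = A*exp(-alpha*t) + B*exp(-beta*t)
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

def inulin_clearance(t, conc, dose):
    """Estimate clearance (≈ GFR for an inulin tracer) from a bolus curve.

    t    : sampling times after the bolus
    conc : measured plasma tracer concentrations
    dose : injected tracer dose (consistent units with conc)
    """
    p0 = [conc[0], 1.0, conc[0] / 10, 0.1]            # rough starting guesses
    (A, alpha, B, beta), _ = curve_fit(biexponential, t, conc, p0=p0,
                                       bounds=(0, np.inf))
    auc = A / alpha + B / beta                        # area under curve, 0..inf
    return dose / auc                                 # clearance = dose / AUC

# Hypothetical example: times in minutes, concentrations in arbitrary units.
t = np.array([2, 5, 10, 20, 40, 60, 90, 120], dtype=float)
c = 8.0 * np.exp(-0.15 * t) + 2.0 * np.exp(-0.02 * t)
print(inulin_clearance(t, c, dose=500.0))
```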
135.
This paper presents a blind watermarking approach to protecting vector geo-spatial data from illegal use. Taking into account usability, invisibility, robustness, and blindness, the approach first determines three feature layers of the geo-spatial data and selects key points from each layer as watermark embedding positions. It then shuffles the watermark and embeds it in the least significant bits (LSBs) of the coordinates of the key points. To detect the watermark, the same feature-layer and key-point selection is carried out, and the embedded watermark is read back from the LSBs of the coordinates of the key points. Finally, the similarity degrees of the three versions of the watermark recovered from the three feature layers are calculated to check whether the data contains the watermark. Our experiments show that the method is largely unaffected by data format change, random noise, similarity transformation of the data, and data editing.
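The embedding core amounts to overwriting the lowest bit of each quantised coordinate and reading it back later. A minimal sketch, assuming coordinates are quantised with an illustrative precision factor and leaving out the paper's feature-layer selection, key-point detection, and watermark shuffling:

```python
import numpy as np

def embed_lsb(coords, bits, scale=1e6):
    """Embed watermark bits in the LSBs of quantised coordinates (sketch).

    coords : (n, 2) array of x/y coordinates of selected key points
    bits   : iterable of 0/1 watermark bits, one per coordinate value
    scale  : quantisation factor; 1e6 keeps ~micro-unit precision
    """
    flat = np.round(coords.flatten() * scale).astype(np.int64)
    bits = np.fromiter(bits, dtype=np.int64, count=flat.size)
    flat = (flat & ~np.int64(1)) | bits            # overwrite the lowest bit
    return (flat / scale).reshape(coords.shape)

def extract_lsb(coords, scale=1e6):
    flat = np.round(coords.flatten() * scale).astype(np.int64)
    return flat & 1                                # read the lowest bit back

coords = np.array([[121.47432, 31.23041], [116.40739, 39.90421]])
marked = embed_lsb(coords, [1, 0, 1, 1])
print(extract_lsb(marked))                         # -> [1 0 1 1]
```

A real scheme would also keep the quantisation error below the data's positional tolerance so that usability is preserved.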
136.
We investigate the effects of precision on the efficiency of various local search algorithms on 1-D unimodal functions. We present a (1+1)-EA with adaptive step size which finds the optimum in O(log n) steps, where n is the number of points used. We then consider binary (base-2) and reflected Gray code representations with single bit mutations. The standard binary method does not guarantee locating the optimum, whereas using the reflected Gray code does so in Θ((log n)²) steps. A (1+1)-EA with a fixed mutation probability distribution is then presented which also runs in O((log n)²). Moreover, a recent result shows that this is optimal (up to some constant scaling factor), in that there exist unimodal functions for which a lower bound of Ω((log n)²) holds regardless of the choice of mutation distribution. For continuous multimodal functions, the algorithm also locates the global optimum in O((log n)²). Finally, we show that it is not possible for a black box algorithm to efficiently optimise unimodal functions for two or more dimensions (in terms of the precision used).
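One common way to realise an adaptive step size in a (1+1)-EA is to double the step after an accepted move and halve it after a rejected one. The sketch below applies that rule to a unimodal function over an integer domain of n points; the doubling/halving rule, the tie-acceptance, and the test function are illustrative assumptions rather than the paper's exact algorithm.

```python
import random

def one_plus_one_ea(f, n, max_iters=10_000):
    """(1+1)-EA sketch with a doubling/halving step-size rule.

    Maximises a unimodal f over the integer domain {0, ..., n-1}.
    """
    x = random.randrange(n)
    step = 1
    for _ in range(max_iters):
        direction = random.choice((-1, 1))
        y = min(max(x + direction * step, 0), n - 1)
        if f(y) >= f(x):               # elitist: accept if not worse
            x = y
            step *= 2                  # success: take bolder steps
        else:
            step = max(1, step // 2)   # failure: take more careful steps
    return x

# Hypothetical unimodal test function with its peak at 3n/4.
n = 1_000_000
peak = 3 * n // 4
best = one_plus_one_ea(lambda k: -abs(k - peak), n)
print(best, "target:", peak)           # should end at (or very near) the peak
```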
137.
Extending IP to low-power, wireless personal area networks (LoWPANs) was once considered impractical because these networks are highly constrained and must operate unattended for multiyear lifetimes on modest batteries. Many vendors embraced proprietary protocols, assuming that IP was too resource-intensive to be scaled down to operate on the microcontrollers and low-power wireless links used in LoWPAN settings. However, 6LoWPAN radically alters the calculation by introducing an adaptation layer that enables efficient IPv6 communication over IEEE 802.15.4 LoWPAN links.
138.
The accurate measurement of the execution time of Java bytecode is one important factor in estimating the total execution time of a Java application running on a Java Virtual Machine. In this paper we document the difficulties of, and solutions for, the accurate timing of Java bytecode. We also identify trends across the execution times recorded for all imperative Java bytecodes. These trends suggest that knowing the execution times of a small subset of the Java bytecode instructions would be sufficient to model the execution times of the remainder. We first review a statistical approach for achieving high-precision timing results for Java bytecode using low-precision timers, and then present a more suitable technique that uses homogeneous bytecode sequences for recording such information. We finally compare instruction execution times acquired using this platform-independent technique against execution times recorded using the read time stamp counter (RDTSC) assembly instruction. In particular, our results show the existence of a strong linear correlation between the two techniques.
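The paper's measurements are made at the JVM bytecode level; purely to illustrate the homogeneous-sequence idea in this document's sketch language, the snippet below times straight-line blocks built from one repeated statement and uses a linear fit so that the slope estimates the per-statement cost while the intercept absorbs fixed timing overhead. The statement, sequence lengths, and trial counts are arbitrary choices.

```python
import time
import numpy as np

def homogeneous_block(stmt, n):
    # Straight-line code object with `stmt` repeated n times -- a loose analogue
    # of a homogeneous bytecode sequence (no loop overhead inside the block).
    return compile("\n".join([stmt] * n), "<timing>", "exec")

def median_time_ns(code, env, trials=200):
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter_ns()
        exec(code, env)
        samples.append(time.perf_counter_ns() - t0)
    return float(np.median(samples))

def per_statement_cost(stmt, env, lengths=(200, 400, 800, 1600, 3200)):
    # Total time grows roughly linearly with sequence length; the slope is the
    # per-statement cost and the intercept absorbs timer/call overhead.
    xs = np.array(lengths, dtype=float)
    ys = np.array([median_time_ns(homogeneous_block(stmt, n), env)
                   for n in lengths])
    slope, intercept = np.polyfit(xs, ys, 1)
    return slope, intercept

cost, overhead = per_statement_cost("x = a * b", {"a": 3.0, "b": 7.0})
print(f"~{cost:.2f} ns per statement (fixed overhead ≈ {overhead:.0f} ns)")
```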
139.
The development, dissemination, and proliferation of multimedia and media convergent texts raise a number of pressing questions for literacy scholars in general and compositionists in particular. What kinds of literacy practices are students developing through their use and composition of multimodal and new media texts? What genres are used in the creation of such texts, and why? Are there particular genres that are favored? How are older genres remediated or recast through media convergence? What research methodology challenges are posed when attempting to study multimodal and new media texts? How might compositionists use media convergence to teach students about academic literacies, about research, about the changing nature of “writing?” What might media convergence look like in the future? Perhaps most immediately, the phrase itself—“media convergence”—begs a question: what, exactly, is converging? This special issue of Computers and Composition on “Media Convergence” poses answers—sometimes tentative, sometimes provocative—to these questions.
140.
Defibrillators are a critical tool for treating heart disease; however, the mechanisms by which they halt fibrillation are still not fully understood and are the subject of ongoing research. Clinical defibrillators do not provide the precise control of shock timing, duration, and voltage or other features needed for detailed scientific inquiry, and there are few, if any, commercially available units designed for research applications. For this reason, we have developed a high-voltage, programmable, capacitive-discharge stimulator optimized to deliver defibrillation shocks with precise timing and voltage control to an isolated animal heart, either in air or in a bath. This stimulator is capable of delivering voltages of up to 500 V and energies of nearly 100 J, with timing accuracy of a few microseconds and rise and fall times of 5 μs or less, and is controlled only by two external timing pulses and a control computer that sets the stimulation parameters via a LabVIEW interface. Most importantly, the stimulator has circuits to protect the high-voltage circuitry and the operator from programming and input-output errors. This device has been tested and used successfully in field shock experiments on rabbit hearts as well as in other protocols requiring high voltage.