131.
Most change detection methods require considerable effort and expertise. The procedures of change detection are visual-, classification-, object-, or vector-based. The aim of this research was to develop an automated and generally unsupervised combination of methods to quantify deforestation on a per-pixel basis. The study area was the Gutu district in Zimbabwe. In the first step, Landsat Thematic Mapper (TM) scenes were spectrally unmixed by spectral mixture analysis (SMA); the necessary endmembers were calculated with the N-FINDR algorithm. After unmixing, the data were analysed with change vector analysis (CVA) utilizing spherical statistics. A combination of constraints, including a Bayesian threshold and spherical angles, was then applied to identify deforestation. Together, these methods provided an accurate picture of the state of deforestation and enabled attribution to 'fire-induced' and 'non-fire-induced' classes.
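The core of this workflow is the change-vector step: each pixel's endmember fractions at the two dates define a vector whose length measures the amount of change and whose spherical direction indicates its kind. The following is a minimal sketch of that step; the three-endmember layout, the fixed magnitude threshold, and the azimuth sector standing in for the "deforestation" direction are illustrative assumptions, not the Bayesian threshold or the angular classes used in the study.

```python
# Minimal sketch of change vector analysis (CVA) on SMA fraction images.
# The threshold and azimuth sector below are illustrative assumptions.
import numpy as np

def cva_deforestation(frac_t1, frac_t2, mag_threshold=0.3,
                      azimuth_sector=(-np.pi, 0.0)):
    """Flag per-pixel deforestation from two dates of fraction images.

    frac_t1, frac_t2 : arrays of shape (rows, cols, 3) holding endmember
    fractions (e.g. vegetation, soil, shade) at the two dates.
    """
    delta = frac_t2 - frac_t1                       # per-pixel change vector
    dx, dy, dz = delta[..., 0], delta[..., 1], delta[..., 2]
    magnitude = np.sqrt(dx**2 + dy**2 + dz**2)      # length of the change vector
    phi = np.arctan2(dy, dx)                        # azimuthal (spherical) direction
    in_sector = (phi >= azimuth_sector[0]) & (phi <= azimuth_sector[1])
    return (magnitude > mag_threshold) & in_sector  # boolean deforestation mask
```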
133.
Metabolite identification is of central importance to metabolomics as it provides the route to new knowledge. Automated identification of the thousands of peaks detected by high-resolution mass spectrometry is currently not possible, largely because of the finite mass accuracy of the spectrometer and the complexity that one peak can be assigned to one or more empirical formula(e), and each formula maps to one or more metabolites. Biological samples are not, however, composed of random metabolite mixtures, but instead comprise thousands of compounds related through specific chemical transformations. Here we evaluate whether prior biological knowledge of these transformations can improve metabolite identification accuracy. Our identification algorithm, which uses metabolite interconnectivity from the KEGG database to putatively identify metabolites by name, is based on mapping an experimentally derived empirical formula difference for a pair of peaks to a known empirical formula difference between substrate-product pairs derived from KEGG, termed transformation mapping (TM). To maximize identification accuracy, we also developed a novel semi-automated method to calculate a mass error surface associated with experimental peak-pair differences. The TM algorithm with the mass error surface has been extensively validated on simulated and experimental datasets by calculating false positive and false negative rates of metabolite identification. Compared with the traditional identification method of searching databases for accurate masses on a single-peak-by-peak basis, the TM algorithm reduces the false positive rate of identification by more than 4-fold while maintaining a minimal false negative rate. The mass error surface, putative identification of metabolite names, and calculation of false positive and false negative rates collectively advance and improve upon related previous research on this topic [1, 2]. We conclude that inclusion of prior biological knowledge in the form of metabolic pathways provides one route to more accurate metabolite identification.
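The transformation-mapping idea can be illustrated with neutral peak masses and a small table of transformation mass differences: if the mass difference between two peaks matches the mass of a known biochemical transformation, the peaks are likely substrate and product. The transformation list, flat tolerance, and example masses below are illustrative; the study maps empirical formula differences to KEGG substrate-product pairs and replaces the flat tolerance with a semi-automatically derived mass error surface.

```python
# Illustrative sketch of transformation mapping (TM): match observed
# peak-pair mass differences to known biochemical transformations.
from itertools import combinations

# Monoisotopic mass differences of a few common transformations (Da);
# an illustrative subset, not the KEGG-derived list used in the study.
TRANSFORMATIONS = {
    "CH2 (methylation)":        14.015650,
    "O (hydroxylation)":        15.994915,
    "HPO3 (phosphorylation)":   79.966331,
    "C6H10O5 (glycosylation)": 162.052824,
}

def transformation_map(neutral_masses, tol_da=0.002):
    """Return peak pairs whose mass difference matches a known transformation."""
    hits = []
    for (i, m1), (j, m2) in combinations(enumerate(neutral_masses), 2):
        diff = abs(m2 - m1)
        for name, delta in TRANSFORMATIONS.items():
            if abs(diff - delta) <= tol_da:
                hits.append((i, j, name, diff))
    return hits

# Example: three peaks related by methylation and hydroxylation.
peaks = [180.063388, 194.079038, 196.058303]
for i, j, name, diff in transformation_map(peaks):
    print(f"peak {i} <-> peak {j}: {name}, dm = {diff:.5f} Da")
```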
134.
It has been shown earlier that hypergravity slows down inner-ear otolith growth in developing fish as an adaptation to increased environmental gravity. This suggests that otolith growth is regulated by the central nervous system, which adjusts otolith weight to provide a suitable test mass; if so, functional weightlessness should yield the opposite effect, i.e. larger-than-normal otoliths. Therefore, larval siblings of a cichlid fish (Oreochromis mossambicus) were housed for 7 days in a submersed, two-dimensional clinostat, which provided a residual gravity of approximately 0.007g. After the experiment, otoliths were dissected and their size (the area grown during the experiment) was determined. Maintenance in the clinostat resulted in significantly larger utricular otoliths (lapilli, involved in graviperception). No statistically significant differences were found for the saccular otoliths (sagittae, involved in transmitting linear acceleration and, especially, in hearing). These results indicate that the animals had in fact experienced functional weightlessness. Results on the otoliths of other teleost species kept in actual microgravity (spaceflight) or within rotating-wall vessels, both in line with and contrasting with these findings, are discussed.
136.
Augmentation     
A semantic and pragmatic interpreter that combines automatic and interactive disambiguation is described. This augmentor has an interactive disambiguation component that is called upon to aid automatic disambiguation when the automated strategies prove inadequate. In addition to interactive disambiguation, the augmentor also provides the user interface for the KBMT-89 project.
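As a rough illustration of the control flow described (automatic disambiguation first, interactive disambiguation only when it proves inadequate), the sketch below uses an invented scoring function and confidence margin; it does not reflect KBMT-89's actual knowledge sources or interface.

```python
# Sketch of the automatic-then-interactive disambiguation pattern.
# The scoring function and margin are illustrative assumptions.
from typing import Callable, Sequence

def disambiguate(candidates: Sequence[str],
                 score: Callable[[str], float],
                 ask_user: Callable[[Sequence[str]], str],
                 confidence_margin: float = 0.2) -> str:
    """Pick a reading automatically; fall back to the user when unsure."""
    ranked = sorted(candidates, key=score, reverse=True)
    if len(ranked) == 1:
        return ranked[0]
    # Automatic disambiguation succeeds only if the best reading clearly wins.
    if score(ranked[0]) - score(ranked[1]) >= confidence_margin:
        return ranked[0]
    # Otherwise hand the residual ambiguity to the interactive component.
    return ask_user(ranked)
```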
137.
We consider the problem of collectively locating a set of points within a set of disjoint polygonal regions when preprocessing is allowed neither for the points nor for the regions. This problem arises in geometric database systems. More specifically, it is equivalent to computing the inside join of geo-relational algebra, a conceptual model for geo-data management. We describe efficient algorithms for solving this problem based on plane-sweep and divide-and-conquer, requiring O(n log n + t) and O(n log² n + t) time, respectively, and O(n) space, where n is the total number of points and edges, and t is the number of reported (point, region) pairs. Since the algorithms are meant to be practically useful, we consider, besides the internal versions running completely in main memory, versions that run internally but use much less than linear space, and versions that run externally, that is, require only a constant amount of internal memory regardless of the amount of data to be processed. Comparing plane-sweep and divide-and-conquer, it turns out that divide-and-conquer can be expected to perform much better in the external case, even though it has a higher internal asymptotic worst-case complexity. An interesting theoretical by-product is a new general technique for handling arbitrarily large sets of objects clustered on a single x-coordinate within a planar divide-and-conquer algorithm, and a proof that the resulting "unbalanced" dividing does not lead to a more than logarithmic height of the tree of recursive calls.
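A minimal sketch of the plane-sweep idea behind the inside join follows. For brevity it assumes axis-aligned rectangular regions rather than the general disjoint polygons treated in the paper, and it keeps the active regions in a plain list, so it does not achieve the stated O(n log n + t) bound; it only illustrates how points and region boundaries are processed in a single left-to-right sweep.

```python
# Sketch of a plane-sweep inside join, simplified to rectangular regions.
from collections import namedtuple

Point = namedtuple("Point", "x y id")
Rect = namedtuple("Rect", "x1 y1 x2 y2 id")

def inside_join(points, rects):
    # Event types: 0 = region starts, 1 = query point, 2 = region ends.
    events = []
    for r in rects:
        events.append((r.x1, 0, r))
        events.append((r.x2, 2, r))
    for p in points:
        events.append((p.x, 1, p))
    # Sort by x; on ties, open regions before points before closes.
    events.sort(key=lambda e: (e[0], e[1]))

    active = []   # regions currently intersected by the sweep line
    result = []
    for _, kind, obj in events:
        if kind == 0:
            active.append(obj)
        elif kind == 2:
            active.remove(obj)
        else:  # query point: report every active region spanning its y
            for r in active:
                if r.y1 <= obj.y <= r.y2:
                    result.append((obj.id, r.id))
    return result

# Example
pts = [Point(2, 2, "p1"), Point(5, 5, "p2")]
rs = [Rect(1, 1, 3, 3, "A"), Rect(4, 4, 6, 6, "B")]
print(inside_join(pts, rs))   # [('p1', 'A'), ('p2', 'B')]
```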