21.
The permeability of the low-permeability bedrock beneath secure landfills for hazardous solid waste is in most cases determined by laboratory and field methods. The applicability of a given measurement method depends on the rock type, the condition of the rock mass, and the order of magnitude of the permeability. Applying different test methods within the same test interval generally leads to discrepancies of up to two orders of magnitude, and even greater uncertainty is associated with the use of different evaluation methods. Because reliable, citable permeability values are of great importance for assessing geological barriers in the construction of secure waste landfills, it is essential that research programmes be drawn up jointly across sites, that all rocks that might serve as geological barriers be comprehensively analysed and assessed, that correlations between the different test methods be established, and that the evaluation methods be unified as far as possible.
22.
Domino Reactions in Organic Synthesis   (total citations: 1; self-citations: 0; citations by others: 1)
Tietze LF. Chemical Reviews, 1996, 96(1): 115-136
23.
Medium-sized, open-participation Open Source Software (OSS) projects do not usually perform explicit software process improvement on any routine basis. It would be useful to understand how to get such a project to accept a process improvement proposal and hence to perform process innovation. We want to determine an effective and feasible qualitative research method for studying this question. We present (narratively) a case study of how we worked towards and eventually found such a research method. The case involves four attempts at collecting suitable data about innovation episodes (direct participation, twice; polling developers for episodes; manually finding episodes in mailing list archives) and the adaptation of the Grounded Theory data analysis methodology. Direct participation allows gathering rather rich data, but does not allow for observing a sufficiently large number of innovation episodes. Polling developers for episodes did not prove to be useful. Using mailing list archives to find data to be analyzed is both feasible and effective. We also describe how the data thus found can be analyzed based on the Grounded Theory Method with suitable adjustments. By and large, our findings ought to apply to studying various phenomena in OSS development processes that are similarly heavyweight and infrequent; however, specific details may block this possibility, and we cannot predict which details those might be. The amount of effort involved in direct-participation approaches to qualitative research can easily be underestimated. Also, survey approaches are not well suited for many process issues in OSS, because too few developers are sufficiently process-conscious. An approach based on passive observation is a viable alternative in the OSS context due to the availability of large amounts of fairly complete archival data.
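A minimal sketch (Python; not the authors' tooling) of the passive-observation approach described above: scanning a downloaded mailing-list archive for messages that might mark process-innovation episodes, as candidates for manual coding. The keyword pattern, archive filename, and function name are illustrative assumptions.

```python
import mailbox
import re

# Assumed, illustrative keyword pattern for process-related discussion.
KEYWORDS = re.compile(r"\b(proposal|workflow|policy|process|convention)\b", re.I)

def find_candidate_episodes(mbox_path):
    """Return (message-id, subject) pairs whose subject (and body, for
    single-part messages) mention process-related keywords; these are
    candidates for manual innovation-episode coding, not final episodes."""
    candidates = []
    for msg in mailbox.mbox(mbox_path):
        subject = msg.get("Subject", "") or ""
        payload = msg.get_payload()
        # Multipart messages: search the subject only, for simplicity.
        text = subject if isinstance(payload, list) else f"{subject}\n{payload}"
        if KEYWORDS.search(text):
            candidates.append((msg.get("Message-ID"), subject))
    return candidates

if __name__ == "__main__":
    for mid, subj in find_candidate_episodes("project-dev.mbox"):  # assumed path
        print(mid, subj)
```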
24.
Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save costs for late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the effort needed for modelling are heavily influenced by human factors, which so far are hardly understood empirically. Do component-based methods allow performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected from the resulting artefacts, questionnaires and screen recordings. They were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the effort to reuse can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the actual activities when applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
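For illustration only, the kind of analysis named above (hypothesis testing and analysis of variance over prediction errors) might look like the following sketch; the error samples are invented placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical relative prediction errors per participant, one list per method.
spe_err = [0.08, 0.12, 0.10, 0.09, 0.11]
pcm_err = [0.10, 0.13, 0.09, 0.12, 0.14]
cp_err  = [0.07, 0.09, 0.11, 0.08, 0.10]

# Welch's t-test: are the mean errors of SPE and PCM distinguishable?
t, p = stats.ttest_ind(spe_err, pcm_err, equal_var=False)
print(f"SPE vs PCM: t={t:.2f}, p={p:.3f}")

# One-way ANOVA across all three methods.
f, p = stats.f_oneway(spe_err, pcm_err, cp_err)
print(f"three methods: F={f:.2f}, p={p:.3f}")
```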
25.
The construction of a new generation of MEMS, which includes micro-assembly steps in the current microfabrication process, is a major challenge. It is necessary to develop new production means, termed micromanufacturing systems, in order to perform these new assembly steps. The classical “top-down” approach, which consists of a functional analysis and a definition of the task sequences, is insufficient for micromanufacturing systems. Indeed, the technical and physical constraints of the microworld (e.g. the adhesion phenomenon) must be taken into account in order to design reliable micromanufacturing systems. A new method for designing micromanufacturing systems is presented in this paper. Our approach combines the general “top-down” approach with a “bottom-up” approach that takes these technical constraints into account. The method makes it possible to build a modular architecture for micromanufacturing systems. In order to obtain this modular architecture, we have devised an original technique for identifying modules and a technique for associating them. This work has been used to design the controller of an experimental robotic micro-assembly station.
26.
For a number of programming languages, among them Eiffel, C, Java, and Ruby, Hoare-style logics and dynamic logics have been developed. In these logics, pre- and postconditions are typically formulated using potentially effectful programs. In order to ensure that these pre- and postconditions behave like logical formulae (that is, enjoy some kind of referential transparency), a notion of purity is needed. Here, we introduce a generic framework for reasoning about purity and effects. Effects are modelled abstractly and axiomatically, using Moggi’s idea of encapsulating effects as monads. We introduce a dynamic logic (from which, as usual, a Hoare logic can be derived) whose logical formulae are pure programs in a strong sense. We formulate a set of proof rules for this logic, and prove it to be complete with respect to a categorical semantics. Using dynamic logic, we then develop a relaxed notion of purity which allows for observationally neutral effects such as writing to newly allocated memory.
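As a toy illustration (Python, far from the paper's categorical semantics), one can mimic Moggi-style encapsulation of an effect as a monad; a program is "pure" in this sketch's sense when it factors through unit, i.e. writes no log, so it can be sequenced without changing any observable effect. All names here are assumptions for the sketch.

```python
class Writer:
    """A minimal Writer monad: values paired with an accumulated log."""
    def __init__(self, value, log=()):
        self.value, self.log = value, tuple(log)

    @staticmethod
    def unit(value):
        # return / eta: embed a value with no effect (empty log).
        return Writer(value)

    def bind(self, f):
        # >>= : run f on the value, concatenating the logs.
        w = f(self.value)
        return Writer(w.value, self.log + w.log)

def pure_square(x):       # "pure": factors through unit, writes nothing
    return Writer.unit(x * x)

def logged_square(x):     # effectful: records an observation in the log
    return Writer(x * x, (f"squared {x}",))

w = Writer.unit(3).bind(logged_square).bind(pure_square)
print(w.value, w.log)     # 81 ('squared 3',)
```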
27.
Standard path control laws for autonomous vehicles use the shortest distance between the vehicle’s position and the path as the control error. In order to determine this distance, the projection point onto the path needs to be computed continuously. This requires fast algorithms with high numerical reliability suitable for vehicle applications. This paper presents two different observer-based approaches to the projection problem. The identity observer reconstructs all states of interest for path control. The second, a reduced observer, retains only the curve parameter as a state and calculates the other quantities by algebraic formulas. Both algorithms take the continuous movement of the vehicle and the course of the curve into account, and work without any approximation of the curve. Furthermore, they are applicable to arbitrary parameterized smooth curves, guarantee the required numerical stability, have short computation times, and show good statistical properties. The performance is demonstrated in several simulations as well as under real conditions.
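As a rough illustration of the projection problem (not the paper's observers), the sketch below tracks the curve parameter s with Newton steps on the orthogonality condition (p - c(s)) · c'(s) = 0 while the vehicle position p moves; the circular example curve and the sample positions are assumptions.

```python
import numpy as np

def c(s):   return np.array([np.cos(s), np.sin(s)])      # example curve
def dc(s):  return np.array([-np.sin(s), np.cos(s)])     # c'(s)
def ddc(s): return np.array([-np.cos(s), -np.sin(s)])    # c''(s)

def update_projection(s, p):
    """One Newton step on g(s) = (p - c(s)) . c'(s), keeping s near the
    orthogonal projection of p onto the curve."""
    e = p - c(s)
    g = e @ dc(s)                       # orthogonality residual
    dg = -dc(s) @ dc(s) + e @ ddc(s)    # dg/ds
    return s - g / dg

# Track a vehicle moving along a slightly larger circle; s follows the
# projection continuously instead of being recomputed from scratch.
s = 0.1
for t in np.linspace(0.0, 1.0, 11):
    p = np.array([1.2 * np.cos(t), 1.2 * np.sin(t)])  # hypothetical pose
    s = update_projection(s, p)
print(s)  # close to 1.0, the final angle of this circular path
```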
28.
Minimum size of a graph or digraph of given radius   (total citations: 1; self-citations: 0; citations by others: 1)
In this paper we show that a connected graph of order n, radius r and minimum degree δ has at least [bound omitted in this listing] edges, for n large enough, and this bound is sharp. We also present a similar result for digraphs.
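To make the theorem's quantities concrete, here is a quick check (using the networkx library; not part of the paper) of order, size, radius, and minimum degree for a familiar graph.

```python
import networkx as nx

G = nx.petersen_graph()                  # well-known 3-regular example
n = G.number_of_nodes()                  # order
m = G.number_of_edges()                  # size (number of edges)
r = nx.radius(G)                         # min over v of eccentricity(v)
delta = min(d for _, d in G.degree())    # minimum degree

print(f"n={n}, size={m}, radius={r}, delta={delta}")
# Petersen graph: n=10, size=15, radius=2, delta=3
```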
29.
This paper discusses approaches for the isolation of deep, high-aspect-ratio through-silicon vias (TSV) with respect to a Via Last approach for micro-electro-mechanical systems (MEMS). The selected TSV samples have depths in the range of 170–270 µm and a diameter of 50 µm. The investigations comprise the deposition of different layer stacks by means of subatmospheric and plasma-enhanced chemical vapour deposition (PECVD) of tetraethyl orthosilicate, Si(OC2H5)4 (TEOS). Moreover, an etch-back approach and selective deposition on SiN were also included in the investigations. With respect to the Via Last approach, the contact opening at the TSV bottom by means of a specific spacer-etching method is also addressed in this paper. Step coverage values of up to 74 % were achieved for the best of these approaches. As an alternative to the SiO2 isolation liners, a polymer coating based on the CVD of Parylene F was investigated, which yields an even higher step coverage of about 80 % at the lower TSV sidewall for a surface film thickness of about 1000 nm. Leakage current measurements were performed, and values below 0.1 nA/cm² at 10 kV/cm were determined for the Parylene F films, which represents a promising result for the intended application to Via Last MEMS-TSVs.
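For reference, step coverage is commonly defined as the ratio of the film thickness on the feature sidewall to that on the wafer surface; the short check below uses the Parylene F surface thickness quoted above and a sidewall thickness inferred from the 80 % figure (an assumption, not a reported value).

```python
surface_thickness_nm = 1000.0   # quoted surface film thickness
sidewall_thickness_nm = 800.0   # assumed: consistent with ~80 % coverage

step_coverage = sidewall_thickness_nm / surface_thickness_nm
print(f"step coverage = {step_coverage:.0%}")   # -> 80%
```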
30.
A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for turbidite modeling has so far been hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as may in practice be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and the local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.
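A minimal sketch of a surrogate-based, derivative-free loop of the kind described above; the stand-in misfit function replaces what would be a full direct numerical simulation compared against well-log reference data, and the parameter range, the placed optimum, and the iteration counts are all assumptions.

```python
import numpy as np

def misfit(u0):
    """Hypothetical stand-in for one expensive simulation: mismatch between
    the predicted and reference deposit for initial condition u0."""
    return (u0 - 0.7) ** 2 + 0.05 * np.sin(8 * u0)

# Initial design: a few expensive "simulations" across the parameter range.
xs = list(np.linspace(0.0, 1.0, 5))
ys = [misfit(x) for x in xs]

for _ in range(10):
    coeffs = np.polyfit(xs, ys, deg=2)      # cheap quadratic surrogate
    grid = np.linspace(0.0, 1.0, 201)
    x_new = grid[np.argmin(np.polyval(coeffs, grid))]  # surrogate minimizer
    xs.append(x_new)                         # run one new expensive
    ys.append(misfit(x_new))                 # "simulation" at that point

best = xs[int(np.argmin(ys))]
print(f"best-fit parameter ≈ {best:.3f}")
```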