A total of 1525 results matched the query (search time: 15 ms); results 21–30 are shown below.
21.
Medium-sized, open-participation Open Source Software (OSS) projects do not usually perform explicit software process improvement on any routine basis. It would therefore be useful to understand how to get such a project to accept a process improvement proposal and hence to perform process innovation. We want to determine an effective and feasible qualitative research method for studying this question. We present, narratively, a case study of how we worked towards and eventually found such a method. The case involves four attempts at collecting suitable data about innovation episodes (direct participation, twice; polling developers for episodes; and manually finding episodes in mailing list archives), as well as the adaptation of the Grounded Theory data analysis methodology. Direct participation yields rather rich data but does not allow observing a sufficiently large number of innovation episodes. Polling developers for episodes did not prove useful. Using mailing list archives to find data for analysis is both feasible and effective, and we describe how the data thus found can be analysed with a suitably adjusted Grounded Theory Method. By and large, our findings should apply to studying other phenomena in OSS development processes that are similarly heavyweight and infrequent, although specific details may block this transfer and we cannot predict which details those might be. The effort involved in direct-participation approaches to qualitative research is easily underestimated. Survey approaches, in turn, are ill-suited to many process issues in OSS because too few developers are sufficiently process-conscious. An approach based on passive observation is a viable alternative in the OSS context thanks to the availability of large amounts of fairly complete archival data.
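Mining list archives for candidate episodes is easy to tool up. A minimal sketch, assuming a locally downloaded mbox archive and an illustrative keyword list (both hypothetical, not taken from the study):

```python
import mailbox

ARCHIVE = "dev-list.mbox"        # hypothetical local archive path
KEYWORDS = ("propose", "process", "workflow", "convention")

def candidate_episodes(archive_path, keywords):
    """Yield (date, subject) of messages whose subject or plain-text
    body mentions any of the given keywords."""
    for msg in mailbox.mbox(archive_path):
        subject = msg.get("Subject", "") or ""
        body = ""
        if not msg.is_multipart():
            payload = msg.get_payload(decode=True)
            if payload:
                body = payload.decode("utf-8", errors="replace")
        text = (subject + "\n" + body).lower()
        if any(k in text for k in keywords):
            yield msg.get("Date", ""), subject

if __name__ == "__main__":
    for date, subject in candidate_episodes(ARCHIVE, KEYWORDS):
        print(date, "|", subject)
```

Flagged threads would still be read and coded manually; the script only narrows the archive to candidate episodes.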
22.
Model-based performance evaluation methods for software architectures can help architects assess design alternatives and avoid costly late-life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the modelling effort they require are heavily influenced by human factors, which so far are hardly understood empirically. Do component-based methods allow performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, and Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for further comparison. In the second, we compared the accuracy and effort of this monolithic method and the component-based method for model creation. In the third, we studied the effort reduction from reusing component-based models. Data were collected from the resulting artefacts, questionnaires, and screen recordings, and analysed using hypothesis testing, linear models, and analysis of variance. Among the monolithic methods, SPE and CP produced accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models with PCM takes more (but not drastically more) time than with SPE, and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because reuse effort can be explained by a model that is independent of a component's inner complexity. The tasks in our experiments reflect only a subset of the activities involved in applying model-based performance evaluation in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort of component-based performance modelling pays off when the component models incorporate and hide a sufficient amount of complexity.
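The last finding, reuse effort explainable by a model independent of a component's inner complexity, can be pictured with an ordinary least-squares fit. A toy sketch on synthetic data (not the study's data; variable names and magnitudes are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: reuse effort driven by the number of
# components adapted, with no dependence on their inner complexity.
n = 47
adapted    = rng.integers(1, 10, n)                  # components adapted
complexity = rng.uniform(10, 500, n)                 # inner complexity
effort     = 15 + 8 * adapted + rng.normal(0, 5, n)  # minutes

# Least-squares fit: effort ~ b0 + b1*adapted + b2*complexity
X = np.column_stack([np.ones(n), adapted, complexity])
coef, *_ = np.linalg.lstsq(X, effort, rcond=None)
print("intercept, adapted, complexity:", np.round(coef, 3))
# The complexity coefficient comes out near zero: reuse effort is
# explained without reference to inner complexity.
```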
23.
Building a new generation of MEMS whose microfabrication flow includes micro-assembly steps is a major challenge. New production means, called micromanufacturing systems, must be developed to perform these assembly steps. The classical "top-down" approach, consisting of a functional analysis and a definition of task sequences, is insufficient for micromanufacturing systems: the technical and physical constraints of the microworld (e.g. the adhesion phenomenon) must be taken into account to design reliable systems. This paper presents a new method for designing micromanufacturing systems. Our approach combines the general top-down approach with a bottom-up approach that accounts for technical constraints. The method produces a modular architecture for micromanufacturing systems; to obtain it, we devised an original technique for identifying modules and another for associating them. This work has been used to design the controller of an experimental robotic micro-assembly station.
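As a toy illustration of the module-identification idea (the grouping criterion used here, shared microworld constraints, is our assumption, not the paper's published technique):

```python
from collections import defaultdict

# Elementary tasks annotated with the technical/physical constraints
# they are subject to (e.g. adhesion); illustrative values only.
tasks = {
    "grip":     {"adhesion", "force-sensing"},
    "release":  {"adhesion"},
    "position": {"vision"},
    "insert":   {"vision", "force-sensing"},
}

# Identification: group tasks by their constraint signature, so each
# candidate module encapsulates one set of microworld constraints.
modules = defaultdict(list)
for task, constraints in tasks.items():
    modules[frozenset(constraints)].append(task)

for signature, members in modules.items():
    print(sorted(signature), "->", members)
```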
24.
For a number of programming languages, among them Eiffel, C, Java, and Ruby, Hoare-style logics and dynamic logics have been developed. In these logics, pre- and postconditions are typically formulated using potentially effectful programs. To ensure that these pre- and postconditions behave like logical formulae (that is, enjoy some kind of referential transparency), a notion of purity is needed. Here, we introduce a generic framework for reasoning about purity and effects. Effects are modelled abstractly and axiomatically, using Moggi's idea of encapsulating effects as monads. We introduce a dynamic logic (from which, as usual, a Hoare logic can be derived) whose logical formulae are pure programs in a strong sense. We formulate a set of proof rules for this logic and prove it complete with respect to a categorical semantics. Using the dynamic logic, we then develop a relaxed notion of purity which allows for observationally neutral effects such as writing to newly allocated memory.
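A loose analogy to Moggi-style effect encapsulation, transliterated into Python as a Writer-like monad (the paper itself works abstractly and categorically; this sketch only illustrates how an effect can live in the type rather than happen implicitly):

```python
class Writer:
    """Writer-like monad: a value together with an output log."""
    def __init__(self, value, log=()):
        self.value, self.log = value, tuple(log)

    @staticmethod
    def unit(value):                   # Moggi's eta ("return")
        return Writer(value)

    def bind(self, f):                 # Kleisli composition
        w = f(self.value)
        return Writer(w.value, self.log + w.log)

def double(x):                         # effectful program: logs
    return Writer(2 * x, [f"doubled {x}"])

def pure_square(x):                    # pure in this setting: no log
    return Writer(x * x)               # entries, so it behaves like a formula

r = Writer.unit(3).bind(double).bind(pure_square)
print(r.value, r.log)                  # 36 ('doubled 3',)
```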
25.
Standard path control laws for autonomous vehicles use the shortest distance between the vehicle's position and the path as the control error. Determining this distance requires continuously computing the projection point onto the path, which in turn calls for fast algorithms with high numerical reliability in vehicle applications.

This paper presents two observer-based approaches to the projection problem. The identity observer reconstructs all states of interest for path control. The second, a reduced observer, keeps only the curve parameter as a state and obtains the other quantities from algebraic formulas. Both algorithms account for the continuous movement of the vehicle and the course of the curve, and they work without any approximation of the curve. Furthermore, they are applicable to arbitrary parameterized smooth curves, guarantee the required numerical stability, have short computation times, and show good statistical properties. Their performance is demonstrated in several simulations as well as under real conditions.
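The reduced observer can be pictured as a one-state dynamical system that drives the curve parameter toward the foot of the perpendicular. A numerical toy with an assumed example curve, not the paper's vehicle model:

```python
import numpy as np

def c(s):        # example path: an ellipse (our assumption)
    return np.array([2.0 * np.cos(s), 1.0 * np.sin(s)])

def dc(s):       # tangent vector c'(s)
    return np.array([-2.0 * np.sin(s), 1.0 * np.cos(s)])

def reduced_observer(p, s0, k=2.0, dt=0.01, steps=2000):
    """Single-state observer: update s until the error vector p - c(s)
    is orthogonal to the tangent, i.e. c(s) is the projection point.
    This is gradient descent on the squared distance to the curve."""
    s = s0
    for _ in range(steps):
        s += dt * k * np.dot(p - c(s), dc(s))
        # in a real vehicle application, p would also move each step
    return s

p = np.array([1.5, 1.2])             # current vehicle position
s_star = reduced_observer(p, s0=0.0)
print("parameter:", s_star, "distance:", np.linalg.norm(p - c(s_star)))
```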
26.
Minimum size of a graph or digraph of given radius
In this paper we show that a connected graph of order n, radius r and minimum degree δ has at least edges, for n large enough, and this bound is sharp. We also present a similar result for digraphs.  相似文献   
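For concreteness, the radius of a graph is the minimum over all vertices of the eccentricity, i.e. the greatest distance from that vertex to any other. A small BFS sketch:

```python
from collections import deque

def eccentricity(adj, v):
    """Greatest BFS distance from v; adj maps vertex -> neighbours."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return max(dist.values())

def radius(adj):
    return min(eccentricity(adj, v) for v in adj)

# 6-cycle: every vertex has eccentricity 3, so the radius is 3.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(radius(cycle6))   # 3
```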
27.
This paper discusses approaches to the isolation of deep, high-aspect-ratio through-silicon vias (TSVs) for a Via Last approach to micro-electro-mechanical systems (MEMS). The selected TSV samples have depths in the range of 170–270 µm and a diameter of 50 µm. The investigations cover the deposition of different layer stacks by means of subatmospheric and plasma-enhanced chemical vapour deposition (PECVD) of tetraethyl orthosilicate, Si(OC₂H₅)₄ (TEOS). An etch-back approach and selective deposition on SiN were also included. With respect to the Via Last approach, the contact opening at the TSV bottom by means of a specific spacer-etching method is also addressed. Step coverage values of up to 74 % were achieved for the best of these approaches. As an alternative to the SiO₂ isolation liners, a polymer coating based on the CVD of Parylene F was investigated, which yields even higher step coverage, around 80 % at the lower TSV sidewall, for a surface film thickness of about 1000 nm. Leakage current measurements gave values below 0.1 nA/cm² at 10 kV/cm for the Parylene F films, a promising result for the intended application to Via Last MEMS TSVs.
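Taking step coverage as the ratio of sidewall to surface film thickness (our working definition; the paper may define the measurement points differently), the reported numbers are easy to sanity-check:

```python
def step_coverage(sidewall_nm, surface_nm):
    """Step coverage in percent: sidewall-to-surface thickness ratio
    (assumed definition)."""
    return 100.0 * sidewall_nm / surface_nm

# ~80 % at the lower sidewall for a ~1000 nm surface film implies a
# sidewall film on the order of 800 nm.
print(step_coverage(800.0, 1000.0))   # 80.0

# Aspect ratio of the deepest vias: 270 µm depth / 50 µm diameter.
print(270 / 50)                       # 5.4
```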
28.
A new approach to turbidite modeling is introduced, leveraging computational fluid dynamics to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for turbidite modeling has so far been hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, such as can in practice be obtained from well logs, and converges towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and the local grain size distribution. The optimization strategy is a derivative-free, surrogate-based technique; direct numerical simulations are used to compute the flow dynamics. A proof of concept is conducted successfully for the simple test case of a two-dimensional lock-exchange turbidity current, where the optimization approach accurately retrieves the initial conditions used in a reference calculation.
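A schematic of a surrogate-based, derivative-free loop of the kind described, with a cheap analytic stand-in for the direct numerical simulation (forward model, parameter meaning, and misfit are all placeholders, not the paper's setup):

```python
import numpy as np

def forward_model(c0):
    """Placeholder for a DNS of the turbidity current: returns deposit
    thickness at reference (well-log) locations for an initial
    sediment concentration c0."""
    x = np.array([0.2, 0.5, 0.8])                # well positions
    return c0 * np.exp(-3.0 * x)                 # toy deposit profile

reference = forward_model(0.7)                   # synthetic "well logs"

def misfit(c0):
    return float(np.sum((forward_model(c0) - reference) ** 2))

# Derivative-free surrogate loop: fit a quadratic through all evaluated
# points, jump to its minimiser, evaluate there, repeat.
samples = [0.1, 0.5, 1.0]
values = [misfit(s) for s in samples]
for _ in range(6):
    a, b, _c = np.polyfit(samples, values, 2)
    s_new = float(np.clip(-b / (2 * a), 0.01, 2.0)) if a > 0 else 1.0
    samples.append(s_new)
    values.append(misfit(s_new))

best = samples[int(np.argmin(values))]
print("best-fit c0:", best, "misfit:", min(values))
```

Because the toy misfit is exactly quadratic in c0, the surrogate recovers the reference value 0.7 almost immediately; the real procedure needs many more evaluations.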
29.
Anatomical structure modeling from medical images
Some clinical applications, such as surgical planning, require volumetric models of anatomical structures represented as sets of tetrahedra. A practical method for constructing such models from medical images is presented. The method starts from a set of contours segmented from the images by a clinician and produces a model with high fidelity to those contours. Unlike most modeling methods, the contours are not restricted to lie on parallel planes. The main steps are a 3D Delaunay tetrahedralization, culling of non-object tetrahedra, and refinement of the tetrahedral mesh. The result is a high-quality set of tetrahedra whose surface points are guaranteed to match the original contours. The key is to use the distance map and bit volume structures that were created along with the contours. The method is demonstrated on computed tomography, MRI, and 3D ultrasound data. Models of 170,000 tetrahedra are constructed on a standard workstation in approximately 10 s. A comparison with related methods is also provided.
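The first two pipeline steps, 3D Delaunay tetrahedralization and culling of non-object tetrahedra, in miniature, with an analytic sphere standing in for the contour-derived distance map:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)

# Stand-in for contour-derived points: samples in and around a unit ball.
points = rng.uniform(-1.2, 1.2, size=(400, 3))

def signed_distance(p):
    """Analytic sphere as a stand-in for the distance-map volume:
    negative inside the object, positive outside."""
    return np.linalg.norm(p, axis=-1) - 1.0

tetra = Delaunay(points)             # 3D Delaunay tetrahedralization

# Cull non-object tetrahedra: keep those whose centroid lies inside.
centroids = points[tetra.simplices].mean(axis=1)
kept = tetra.simplices[signed_distance(centroids) < 0.0]
print(len(tetra.simplices), "tetrahedra ->", len(kept), "after culling")
```

The paper's mesh-refinement step and the guarantee that surface points match the original contours are beyond this sketch.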
30.
Summary. We first solve the problem of deciding whether an arbitrary finite semigroup H is linearly A-realizable, i.e., whether there exists a linearly realizable finite automaton whose semigroup is isomorphic to H. This leads to a question about the existence of certain generating subsets of H. Determining these subsets is rather complicated in the case H ∖ HH = ∅ and very simple in the case H ∖ HH ≠ ∅. In the first case, however, we can completely characterize the structure of the linearly A-realizable semigroups: they are exactly the finite right groups whose maximal subgroups are of the type described by Ecker in [4]. In the second case we obtain only necessary structural conditions. Among other things we show: if a semigroup H is linearly A-realizable, one can define a congruence relation ϱ on it with the property that H is isomorphic to the semigroup of a strongly connected, linearly realizable automaton iff the so-called index of H equals the index of H/ϱ. In developing these results about semigroups we obtain, at the same time, many structure theorems about linearly realizable automata.
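The case split on H ∖ HH can be checked mechanically for a concrete finite semigroup given by its multiplication table. A small sketch (the example semigroup is ours, not from the paper):

```python
from itertools import product

def non_products(elements, mul):
    """Return the set H - HH: elements not expressible as a product a*b."""
    hh = {mul[a][b] for a, b in product(elements, repeat=2)}
    return set(elements) - hh

# Example: the two-element right-zero semigroup, x*y = y (a right
# group); here HH = H, so H - HH is empty.
elements = [0, 1]
mul = {a: {b: b for b in elements} for a in elements}
print(non_products(elements, mul))   # set()
```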