Full-text access type (number of articles)
Paid full text | 4489 |
Free | 124 |
Free (domestic) | 2 |
Subject classification (number of articles)
Electrical engineering | 77 |
General | 6 |
Chemical industry | 890 |
Metalworking | 90 |
Machinery and instruments | 61 |
Building science | 162 |
Mining engineering | 22 |
Energy and power | 92 |
Light industry | 445 |
Hydraulic engineering | 26 |
Petroleum and natural gas | 6 |
Radio and electronics | 455 |
General industrial technology | 743 |
Metallurgical industry | 1032 |
Atomic energy technology | 61 |
Automation technology | 447 |
Publication year (number of articles)
2023 | 28 |
2022 | 39 |
2021 | 70 |
2020 | 51 |
2019 | 70 |
2018 | 64 |
2017 | 58 |
2016 | 90 |
2015 | 80 |
2014 | 90 |
2013 | 155 |
2012 | 153 |
2011 | 220 |
2010 | 119 |
2009 | 148 |
2008 | 165 |
2007 | 123 |
2006 | 132 |
2005 | 96 |
2004 | 91 |
2003 | 100 |
2002 | 112 |
2001 | 88 |
2000 | 95 |
1999 | 105 |
1998 | 338 |
1997 | 217 |
1996 | 143 |
1995 | 112 |
1994 | 115 |
1993 | 115 |
1992 | 70 |
1991 | 90 |
1990 | 58 |
1989 | 58 |
1988 | 58 |
1987 | 54 |
1986 | 40 |
1985 | 60 |
1984 | 52 |
1983 | 39 |
1982 | 27 |
1981 | 33 |
1980 | 30 |
1979 | 27 |
1978 | 25 |
1977 | 35 |
1976 | 72 |
1975 | 23 |
1973 | 23 |
4615 results found (search time: 15 ms)
61.
Medium-sized, open-participation Open Source Software (OSS) projects do not usually perform explicit software process improvement on any routine basis. It would be useful to understand how to get such a project to accept a process improvement proposal and hence to perform process innovation. We want to determine an effective and feasible qualitative research method for studying this question. We present (narratively) a case study of how we worked towards and eventually found such a research method. The case involves four attempts at collecting suitable data about innovation episodes (direct participation (twice), polling developers for episodes, and manually finding episodes in mailing list archives) and the adaptation of the Grounded Theory data analysis methodology. Direct participation allows gathering rather rich data, but does not allow for observing a sufficiently large number of innovation episodes. Polling developers for episodes did not prove useful. Using mailing list archives to find data to be analyzed is both feasible and effective. We also describe how the data thus found can be analyzed based on the Grounded Theory Method with suitable adjustments. By and large, our findings ought to apply to studying various phenomena in OSS development processes that are similarly heavyweight and infrequent; however, specific details may block this possibility, and we cannot predict which details those might be. The amount of effort involved in direct-participation approaches to qualitative research can easily be underestimated. Also, survey approaches are not well suited to many process issues in OSS, because too few developers are sufficiently process-conscious. An approach based on passive observation is a viable alternative in the OSS context due to the availability of large amounts of fairly complete archival data.
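The archive-based collection step that proved effective lends itself to simple tooling. Below is a minimal sketch, assuming a local mbox archive of a project's developer mailing list; the keyword list, the `candidate_episodes` helper, and the archive filename are illustrative, not taken from the study.

```python
# A hypothetical first-pass filter for locating candidate process-innovation
# episodes in a mailing list archive; a human analyst still inspects each
# thread, as a Grounded Theory analysis requires.
import mailbox
import re

KEYWORDS = re.compile(r"\b(process|policy|workflow|convention|proposal)\b", re.I)

def candidate_episodes(mbox_path):
    """Yield (subject, sender, date) for messages whose subject or plain-text
    body mentions process-change vocabulary."""
    for msg in mailbox.mbox(mbox_path):
        subject = msg.get("Subject", "") or ""
        payload = msg.get_payload()
        text = subject + " " + payload if isinstance(payload, str) else subject
        if KEYWORDS.search(text):
            yield subject, msg.get("From", ""), msg.get("Date", "")

for subject, sender, date in candidate_episodes("dev-list.mbox"):
    print(date, sender, subject)
```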
62.
Anne Martens, Heiko Koziolek, Lutz Prechelt, Ralf Reussner. Empirical Software Engineering, 2011, 16(5): 587–622.
Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save costs for late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the effort needed for modelling are heavily influenced by human factors, which are so far hardly understood empirically. Do component-based methods allow performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected from the resulting artefacts, questionnaires and screen recordings. They were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the effort to reuse can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the actual activities when applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
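To make the reported analysis concrete, here is a hedged sketch of one of the statistical steps (a one-way analysis of variance over per-participant prediction errors); the error values are invented for illustration and are not the experiment's data.

```python
# Illustrative one-way ANOVA comparing prediction accuracy across methods;
# the per-participant relative errors (%) below are made-up placeholders.
from scipy.stats import f_oneway

errors_spe    = [8.1, 12.4, 9.7, 15.2, 11.0]
errors_cp     = [10.3, 9.8, 13.1, 12.5, 8.9]
errors_umlpsi = [34.0, 28.7, 41.2, 30.5, 36.8]   # over-estimates, per the study

f_stat, p_value = f_oneway(errors_spe, errors_cp, errors_umlpsi)
print(f_stat, p_value)  # a small p-value indicates the methods differ in accuracy
```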
63.
Building a new generation of MEMS whose microfabrication process includes micro-assembly steps is a major challenge. New production means, called micromanufacturing systems, must be developed in order to perform these assembly steps. The classical "top-down" approach, which consists of a functional analysis and a definition of task sequences, is insufficient for micromanufacturing systems: the technical and physical constraints of the microworld (e.g. the adhesion phenomenon) must be taken into account in order to design reliable micromanufacturing systems. A new method for designing micromanufacturing systems is presented in this paper. Our approach combines the general "top-down" approach with a "bottom-up" approach that takes these technical constraints into account. The method enables building a modular architecture for micromanufacturing systems. To obtain this modular architecture, we have devised an original technique for identifying modules and another for associating them. This work has been used to design the controller of an experimental robotic micro-assembly station.
64.
For a number of programming languages, among them Eiffel, C, Java, and Ruby, Hoare-style logics and dynamic logics have been developed. In these logics, pre- and postconditions are typically formulated using potentially effectful programs. In order to ensure that these pre- and postconditions behave like logical formulae (that is, enjoy some kind of referential transparency), a notion of purity is needed. Here, we introduce a generic framework for reasoning about purity and effects. Effects are modelled abstractly and axiomatically, using Moggi's idea of encapsulating effects as monads. We introduce a dynamic logic (from which, as usual, a Hoare logic can be derived) whose logical formulae are pure programs in a strong sense. We formulate a set of proof rules for this logic, and prove it complete with respect to a categorical semantics. Using dynamic logic, we then develop a relaxed notion of purity which allows for observationally neutral effects such as writing to newly allocated memory.
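Moggi-style encapsulation of effects as a monad can be sketched outside the paper's categorical setting. The toy state monad below only illustrates the idea that effectful programs compose through `bind` while `unit` embeds pure values; none of these names come from the paper's formal framework.

```python
# A toy state monad: a computation is a function state -> (value, new_state).
# `unit` embeds a pure value; `bind` sequences effects explicitly.

def unit(x):
    """A pure program: it neither reads nor changes the state."""
    return lambda s: (x, s)

def bind(m, f):
    """Run m, feed its result to f, threading the state through."""
    def run(s):
        x, s2 = m(s)
        return f(x)(s2)
    return run

get = lambda: (lambda s: (s, s))       # read the state
put = lambda v: (lambda s: (None, v))  # overwrite the state

# Reading the state twice yields equal values -- the kind of referential
# transparency a pre-/postcondition needs in order to behave like a formula.
read_twice = bind(get(), lambda x: bind(get(), lambda y: unit(x == y)))
print(read_twice(42))                  # -> (True, 42)
```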
65.
Standard path control laws of autonomous vehicles use the shortest distance between the vehicle's position and the path as the control error. In order to determine this distance, the projection point onto the path needs to be determined continuously. This requires fast algorithms with high numerical reliability suitable for vehicle applications. This paper presents two different observer-based approaches to the projection problem. The identity observer reconstructs all states of interest for path control. The second, a reduced observer, keeps only the curve parameter as a state and calculates the other values by algebraic formulas. Both algorithms account for the continuous movement of the vehicle and the course of the curve, and work without any approximation of the curve. Furthermore, they are applicable to arbitrarily parameterized smooth curves, guarantee the required numerical stability, have short computation times, and show good statistical properties. The performance is demonstrated in several simulations as well as under real conditions.
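As an illustration of the reduced-observer idea (the curve parameter as the only observer state), here is a minimal sketch for a planar circle; the gain `k`, the example curve, and the simulated vehicle trajectory are all assumptions, not the paper's algorithm.

```python
# Track the projection of a moving point p onto a parameterized curve c(s)
# by integrating a single state: the curve-parameter estimate s_hat.
import numpy as np

R = 10.0
c       = lambda s: np.array([R * np.cos(s), R * np.sin(s)])   # example path
c_prime = lambda s: np.array([-R * np.sin(s), R * np.cos(s)])  # its tangent

def observer_step(s_hat, p, dt, k=2.0):
    """Drive s_hat so that the error p - c(s_hat) becomes orthogonal to the
    tangent, i.e. c(s_hat) converges to the projection point of p."""
    t = c_prime(s_hat)
    e = p - c(s_hat)
    s_dot = k * np.dot(e, t) / np.dot(t, t)   # descent on 0.5 * |e|^2
    return s_hat + dt * s_dot

s_hat = 0.0
for step in range(200):                        # vehicle circles at radius 11
    p = np.array([11.0 * np.cos(0.01 * step), 11.0 * np.sin(0.01 * step)])
    s_hat = observer_step(s_hat, p, dt=0.05)

print(np.linalg.norm(p - c(s_hat)))            # ~1.0, the shortest distance
```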
66.
In a 2008 paper, Walmsley argued that the explanations employed in the dynamical approach to cognitive science, as exemplified by the Haken, Kelso and Bunz model of rhythmic finger movement and the model of infant perseverative reaching developed by Esther Thelen and her colleagues, conform to Carl Hempel and Paul Oppenheim's deductive-nomological model of explanation (also known as the covering-law model). Although we think Walmsley's approach is methodologically sound in that it starts with an analysis of scientific practice rather than a general philosophical framework, we nevertheless see two problems with his paper. First, he focuses only on the deductive-nomological model and so neglects the important fact that explanations are causal. Second, the explanations offered by the dynamical approach do not take the deductive-nomological format, because they do not deduce the explananda from exceptionless laws. Because of these two points, Walmsley makes the dynamical explanations in cognitive science appear problematic when in fact they are not.
67.
Minimum size of a graph or digraph of given radius | Cited by: 1 (self-citations: 0, citations by others: 1)
In this paper we show that a connected graph of order n, radius r and minimum degree δ has, for n large enough, at least a certain minimum number of edges (the explicit bound, in terms of n, r and δ, is given in the paper), and this bound is sharp. We also present a similar result for digraphs.
68.
Lutz Hofmann, Tobias Fischer, Thomas Werner, Franz Selbmann, Michael Rennau, Ramona Ecke, Stefan E. Schulz, Thomas Geßner. Microsystem Technologies, 2016, 22(7): 1665–1677.
This paper discusses approaches to the isolation of deep, high-aspect-ratio through-silicon vias (TSVs) with respect to a Via Last approach for micro-electro-mechanical systems (MEMS). Selected TSV samples have depths in the range of 170–270 µm and a diameter of 50 µm. The investigations comprise the deposition of different layer stacks by means of subatmospheric and plasma-enhanced chemical vapour deposition (PECVD) of tetraethyl orthosilicate, Si(OC2H5)4 (TEOS). Moreover, an etch-back approach and selective deposition on SiN were also included in the investigations. With respect to the Via Last approach, the contact opening at the TSV bottom by means of a specific spacer-etching method is also addressed. Step-coverage values of up to 74 % were achieved for the best of these approaches. As an alternative to the SiO2 isolation liners, a polymer coating based on the CVD of Parylene F was investigated, which yields even higher step coverage, in the range of 80 % at the lower TSV sidewall, for a surface film thickness of about 1000 nm. Leakage-current measurements were performed, and values below 0.1 nA/cm2 at 10 kV/cm were determined for the Parylene F films, which represents a promising result for the intended application to Via Last MEMS-TSVs.
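The step-coverage figures translate directly into sidewall film thicknesses. A quick arithmetic check (illustrative only, not the paper's measurement procedure):

```python
# Step coverage = sidewall thickness / surface thickness.
surface_thickness_nm = 1000        # Parylene F film thickness on the surface
step_coverage = 0.80               # reported at the lower TSV sidewall
sidewall_thickness_nm = step_coverage * surface_thickness_nm
print(sidewall_thickness_nm)       # 800 nm of isolation liner on the sidewall
```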
69.
Universal Access in the Information Society - Helping blind people to build cognitive maps of an environment is one of the aims of several assistive systems. In order to evaluate such assistive...
70.
A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for turbidite modeling has so far been hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, such as may in practice be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and the local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current, in which the optimization approach accurately retrieves the initial conditions used in a reference calculation.
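The inversion loop can be sketched with a toy stand-in for the flow solver. In the sketch below, `toy_flow_model` replaces the direct numerical simulation and Nelder-Mead stands in for the paper's derivative-free, surrogate-based optimizer; the well locations and reference thicknesses are invented.

```python
# Fit unknown initial-flow parameters so that simulated deposit thickness
# matches "well log" reference data at a few locations.
import numpy as np
from scipy.optimize import minimize

wells_x       = np.array([2.0, 5.0, 9.0])     # well positions (arbitrary units)
ref_thickness = np.array([0.9, 0.5, 0.15])    # deposit thickness measured there

def toy_flow_model(params, x):
    """Stand-in for a turbidity-current simulation: thickness decays from the
    source with initial height h0 and decay length L."""
    h0, L = params
    return h0 * np.exp(-x / L)

def misfit(params):
    return np.sum((toy_flow_model(params, wells_x) - ref_thickness) ** 2)

result = minimize(misfit, x0=[1.0, 3.0], method="Nelder-Mead")
print(result.x)                               # best-fit (h0, L)
print(toy_flow_model(result.x, np.linspace(0.0, 10.0, 6)))  # full profile
```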