Full-text access type | Papers |
Paid full text | 1837 |
Free | 90 |
Free (within China) | 5 |
Subject category | Papers |
Electrical engineering | 24 |
General | 16 |
Chemical industry | 564 |
Metalworking | 30 |
Machinery & instruments | 31 |
Building science | 138 |
Mining engineering | 3 |
Energy & power | 27 |
Light industry | 169 |
Hydraulic engineering | 11 |
Petroleum & natural gas | 3 |
Radio | 116 |
General industrial technology | 373 |
Metallurgy | 65 |
Atomic energy technology | 16 |
Automation technology | 346 |
Publication year | Papers |
2023 | 27 |
2022 | 49 |
2021 | 65 |
2020 | 36 |
2019 | 46 |
2018 | 43 |
2017 | 41 |
2016 | 51 |
2015 | 47 |
2014 | 73 |
2013 | 97 |
2012 | 89 |
2011 | 157 |
2010 | 99 |
2009 | 111 |
2008 | 102 |
2007 | 84 |
2006 | 85 |
2005 | 73 |
2004 | 68 |
2003 | 53 |
2002 | 51 |
2001 | 44 |
2000 | 42 |
1999 | 41 |
1998 | 27 |
1997 | 27 |
1996 | 17 |
1995 | 20 |
1994 | 26 |
1993 | 14 |
1992 | 10 |
1991 | 23 |
1990 | 13 |
1989 | 14 |
1988 | 7 |
1987 | 6 |
1986 | 4 |
1985 | 6 |
1984 | 10 |
1983 | 8 |
1981 | 2 |
1980 | 2 |
1978 | 4 |
1977 | 4 |
1976 | 2 |
1974 | 2 |
1967 | 2 |
1939 | 1 |
1931 | 3 |
Sorted by: 1932 results in total (search time: 0 ms)
12.
Nesterov A, König K, Felgenhauer T, Lindenstruth V, Trunk U, Fernandez S, Hausmann M, Bischoff FR, Breitling F, Stadler V. The Review of Scientific Instruments, 2008, 79(3): 035106.
We examined the high-precision deposition of toner and polymer microparticles with a typical size of approximately 10 μm on electrode arrays with electrodes of 100 μm and below, using custom-made microelectronic chips. Selective desorption of redundant particles was employed to obtain a given particle pattern from preadsorbed particle layers. Microparticle desorption was regulated by dielectrophoretic attracting forces generated by individual pixel electrodes, tangential detaching forces of an air flow, and adhesion forces on the microchip surface. A theoretical consideration of the acting forces showed that, without pixel voltage, the tangential force applied for particle detachment exceeded the particle adhesion force. When the pixel voltage was switched on, however, the sum of the attracting forces was larger than the tangential detaching force, which was crucial for desorption efficiency. In our experiments, appropriately large dielectrophoretic forces were achieved by applying high voltages of up to 100 V to the pixel electrodes. In addition, the electrode geometry on the chip's surface as well as the particle size influenced the desorption quality. We further demonstrated the compatibility of this procedure with complementary metal-oxide-semiconductor (CMOS) chip technology, which should allow an easy technical implementation of high-resolution microparticle deposition.
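The force balance the abstract describes can be written compactly; the notation here is ours, not the authors':

\[
\text{pixel off:}\quad F_{\mathrm{tan}} > F_{\mathrm{adh}} \;\Rightarrow\; \text{particle removed},
\qquad
\text{pixel on:}\quad F_{\mathrm{DEP}} + F_{\mathrm{adh}} > F_{\mathrm{tan}} \;\Rightarrow\; \text{particle retained},
\]

where \(F_{\mathrm{tan}}\) is the tangential air-flow force, \(F_{\mathrm{adh}}\) the surface adhesion force, and \(F_{\mathrm{DEP}}\) the dielectrophoretic force generated by the pixel electrode.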
13.
Daniele Colombo, Slah Drira, Ralf Frotscher, Manfred Staat. International Journal for Numerical Methods in Engineering, 2023, 124(2): 402-433.
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduce sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows the two methods to be implemented in a standard finite element code with no modifications to its architecture. Moreover, the element-based formulation makes it easy to handle any type of element, especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements are used in the FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed in order to apply FS-FEM to any standard finite element.
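For readers unfamiliar with the smoothing step, here is a minimal sketch of classical face-based strain smoothing for linear tetrahedra (the construction the paper improves on, not its element-based formulation); the function name and the layout of elem_B are our assumptions:

import numpy as np

def smoothed_B(face_elems, elem_volume, elem_B):
    """Face-based strain smoothing (FS-FEM, linear tetrahedra).
    Each tetrahedron assigns a quarter of its volume to each of its
    four faces; the smoothed strain-displacement matrix of a face
    domain is the sub-volume-weighted average of the element matrices.
    elem_B[e] must already be scattered to the common DOF columns of
    the face domain (one element for a boundary face, two otherwise)."""
    V_f = sum(elem_volume[e] / 4.0 for e in face_elems)
    B_f = sum((elem_volume[e] / 4.0) * elem_B[e] for e in face_elems) / V_f
    return B_f, V_f

# The face contribution to the stiffness matrix is then V_f * B_f.T @ D @ B_f.

Because the strain is constant over each linear tetrahedron, this volume-weighted average evaluates the smoothing integral exactly.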
14.
Anne Martens, Heiko Koziolek, Lutz Prechelt, Ralf Reussner. Empirical Software Engineering, 2011, 16(5): 587-622.
Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save costs for late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the effort needed for modelling are heavily influenced by human factors, which are so far hardly understood empirically. Do component-based methods allow performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected from the resulting artefacts, questionnaires and screen recordings, and were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE, and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the effort to reuse can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the actual activities involved in applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
15.
Jörg Becker, Daniel F. Beverungen, Ralf Knackstedt. Information Systems and E-Business Management, 2010, 8(1): 33-66.
Confronted with decreasing margins and a rising customer demand for integrated solutions, manufacturing companies integrate complementary services into their portfolios. Offering value bundles (consisting of services and physical goods) takes place in integrated product–service systems, spanning the coordinated design and delivery of services and physical goods for customers. Conceptual modeling is an established approach to support and guide such efforts. Using a framework for the design and delivery of value bundles as an analytical lens, this study evaluates the current support of reference models and modeling languages for setting up conceptual models for an integrated design and delivery of value bundles. Subsequently, the design of modeling languages and reference models that fit the requirements of conceptual models in product–service systems is presented as an upcoming challenge in Service Research. To guide further research, first steps are proposed by exemplarily integrating reference models and modeling languages stemming from the service and manufacturing domains.
16.
A two-layer architecture for dynamic real-time optimization (or nonlinear model predictive control (NMPC) with an economic objective) is presented, where the solution of the dynamic optimization problem is computed on two time-scales. On the upper layer, a rigorous optimization problem with an economic objective function is solved at a slow time-scale, which captures slow trends in the process uncertainties. On the lower layer, a fast neighboring-extremal controller tracks the resulting trajectory in order to deal with fast disturbances acting on the process. Compared to a single-layer architecture, the two-layer architecture can handle control systems with complex models and hence high computational load, since the rigorous optimization problem can be solved at a slower rate than the process sampling time. Furthermore, solving a new rigorous optimization problem at each sampling time is not necessary if the process dynamics are slow compared to the disturbance dynamics. The two-layer control strategy is illustrated with a simulated case study of an industrial polymerization process.
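As a rough illustration of the architecture, here is a toy sketch on a scalar linear plant; the gains, rates, and the closed-form "optimization" are our stand-ins, not the paper's controller:

import numpy as np

# Toy two-layer loop: the upper layer re-optimizes every N_slow samples,
# the lower layer applies a fast first-order (neighboring-extremal-style)
# correction around the nominal trajectory at every sample.
a, b = 0.8, 1.0                  # plant: x+ = a*x + b*u + w_fast
N_slow = 25                      # upper layer runs 25x slower than sampling
rng = np.random.default_rng(0)
x, x_ref, K = 0.0, 0.0, 0.5

for k in range(500):
    w_slow = np.sin(2 * np.pi * k / 400)     # slow drift of the economic optimum
    w_fast = 0.05 * rng.standard_normal()    # fast disturbance
    if k % N_slow == 0:
        x_ref = w_slow                        # "rigorous" economic re-optimization
    u_nom = (1 - a) * x_ref / b               # nominal input holding x at x_ref
    u = u_nom + K * (x_ref - x)               # fast correction toward the trajectory
    x = a * x + b * u + w_fast

The point of the split is visible in the loop structure: the expensive step runs once per N_slow samples, while only the cheap feedback correction runs at the sampling rate.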
17.
Luis M. Torres, Ramiro Torres, Ralf Borndörfer, Marc E. Pfetsch. International Transactions in Operational Research, 2011, 18(4): 455-472.
We discuss an optimization model for the line planning problem in public transport, which minimizes operating costs while guaranteeing a certain level of quality of service in terms of available transport capacity. We analyze the computational complexity of this problem for tree network topologies as well as for several categories of line operations that are important for the Quito Trolebús system. In practice, these instances can be solved quite well, and significant optimization potential can be demonstrated.
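A generic cost-oriented line planning model of this kind (our notation; the paper's model adds the Trolebús-specific line categories) reads:

\[
\min \sum_{\ell \in \mathcal{L}} c_\ell f_\ell
\qquad \text{s.t.} \qquad
\sum_{\ell \in \mathcal{L}:\, e \in \ell} \kappa_\ell f_\ell \;\ge\; d_e \quad \forall e \in E,
\qquad f_\ell \in \mathbb{Z}_{\ge 0},
\]

where \(f_\ell\) is the frequency of line \(\ell\), \(c_\ell\) its operating cost per unit of frequency, \(\kappa_\ell\) the vehicle capacity, and \(d_e\) the transport demand on infrastructure edge \(e\).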
18.
Wjatscheslaw Missal, Jaroslaw Kita, Eberhard Wappler, Frieder Gora, Annette Kipka, Thomas Bartnitzek, Franz Bechtold, Dirk Schabbel, Beate Pawlowski, Ralf Moos. Sensors and Actuators A: Physical, 2011, 172(1): 21-26.
A miniaturized ceramic differential scanning calorimeter (MC-DSC) with an integrated oven and crucible is presented. Despite its small size of only 11 mm × 39 mm × 1.5 mm, all functions of a conventional DSC apparatus, including the oven, are integrated in this novel device. The MC-DSC is fully manufactured in thick-film and green glass-ceramic tape-based low temperature co-fired ceramics (LTCC) technology; production costs are therefore considered to be low. Initial results using indium as a sample material show a good dynamic performance of the MC-DSC. The full width at half maximum of the melting peak is 2.4 °C (sample mass approx. 11 mg, heating rate approx. 50 °C/min). The repeatability of the indium melting point is within ±0.02 °C. The melting peak area increases linearly with the sample mass up to at least 26 mg. Simulations of a strongly simplified finite element model of the MC-DSC are in good agreement with measurement results, allowing a model-based prediction of the device's basic characteristics.
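The reported linearity of peak area in sample mass is what enables standard enthalpy calibration; in the usual DSC relation (our addition, not stated in the abstract),

\[
\Delta h_{\mathrm{fus}} \;=\; K \,\frac{A}{m},
\]

a single calibration constant \(K\), determined from a reference such as indium (\(\Delta h_{\mathrm{fus}} \approx 28.6\ \mathrm{J/g}\)), then holds over the whole linear range, here up to at least 26 mg.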
19.
Thomas Neumann, Matthias Bender, Sebastian Michel, Ralf Schenkel, Peter Triantafillou, Gerhard Weikum. Distributed and Parallel Databases, 2009, 26(1): 3-27.
Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially in distributed settings, when the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments with three different real-life datasets, using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
20.
Geno-mathematical identification of the multi-layer perceptron (total citations: 1; self-citations: 0; citations by others: 1)
Ralf Östermark. Neural Computing & Applications, 2009, 18(4): 331-344.
In this paper, we focus on the use of the three-layer backpropagation network in vector-valued time series estimation problems. The neural network provides a framework for noncomplex calculations to solve the estimation problem, yet the search for optimal or even feasible neural networks for stochastic processes is both time consuming and uncertain. The backpropagation algorithm, written in strict ANSI C, has been implemented as a standalone support library for the genetic hybrid algorithm (GHA), running on any sequential or parallel mainframe computer. In order to cope with ill-conditioned time series problems, we extended the original backpropagation algorithm to a K nearest neighbors algorithm (K-NARX), where the number K is determined genetically along with a set of key parameters. In the K-NARX algorithm, the terminal solution at instant t can be used as a starting point for the next t, which tends to stabilize the optimization process when dealing with autocorrelated time series vectors. This possibility has proved to be especially useful in difficult time series problems. Following the prevailing research directions, we use a genetic algorithm to determine optimal parameterizations for the network, including the lag structure for the nonlinear vector time series system; the net structure with one or two hidden layers and the corresponding number of nodes; the type of activation function (currently the standard logistic sigmoid, a bipolar transformation, the hyperbolic tangent, an exponential function and the sine function); the type of minimization algorithm; the number K of nearest neighbors in the K-NARX procedure; the initial value of the Levenberg–Marquardt damping parameter; and the value of the neural learning (stabilization) coefficient α. We have focused on a flexible structure allowing the addition of, e.g., new minimization algorithms and activation functions in the future. We demonstrate the power of the genetically trimmed K-NARX algorithm on a representative data set.