11.
Anne Martens, Heiko Koziolek, Lutz Prechelt, Ralf Reussner. Empirical Software Engineering, 2011, 16(5): 587-622
Model-based performance evaluation methods for software architectures can help architects assess design alternatives and save the cost of late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the modelling effort they require are heavily influenced by human factors, which are so far hardly understood empirically. Do component-based methods allow performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model-creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected from the resulting artefacts, questionnaires, and screen recordings, and were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that SPE and CP produced accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models with PCM takes more (but not drastically more) time than with SPE, and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the reuse effort can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the actual activities involved in applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort of component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
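The hypothesis testing mentioned in the abstract compares small participant samples; a minimal sketch of one such test (Welch's t-test) is shown below. The effort figures are entirely hypothetical and are not the study's data:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom for two samples."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)          # sample variances
    se2 = va / na + vb / nb
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# hypothetical modelling-effort measurements (hours) for two methods
spe = [4.5, 5.0, 4.8, 5.2, 4.7]
pcm = [6.1, 5.8, 6.4, 6.0, 5.9]
t, df = welch_t(spe, pcm)
```

A strongly negative t here would indicate that the first method's mean effort is lower; the resulting statistic would then be compared against the t distribution with `df` degrees of freedom.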
12.
Jörg Becker, Daniel F. Beverungen, Ralf Knackstedt. Information Systems and E-Business Management, 2010, 8(1): 33-66
Confronted with decreasing margins and rising customer demand for integrated solutions, manufacturing companies integrate complementary services into their portfolios. Offering value bundles (consisting of services and physical goods) takes place in integrated product–service systems, spanning the coordinated design and delivery of services and physical goods for customers. Conceptual modeling is an established approach to support and guide such efforts. Using a framework for the design and delivery of value bundles as an analytical lens, this study evaluates how well current reference models and modeling languages support setting up conceptual models for an integrated design and delivery of value bundles. Subsequently, the design of modeling languages and reference models that fit the requirements of conceptual models in product–service systems is presented as an upcoming challenge in service research. To guide further research, first steps are proposed by exemplarily integrating reference models and modeling languages from the service and manufacturing domains.
13.
A two-layer architecture for dynamic real-time optimization (or nonlinear model predictive control (NMPC) with an economic objective) is presented, in which the solution of the dynamic optimization problem is computed on two time-scales. On the upper layer, a rigorous optimization problem with an economic objective function is solved at a slow time-scale, capturing slow trends in process uncertainties. On the lower layer, a fast neighboring-extremal controller tracks the trajectory in order to handle fast disturbances acting on the process. Compared to a single-layer architecture, the two-layer architecture can address control systems with complex models and high computational load, since the rigorous optimization problem can be solved at a slower rate than the process sampling time. Furthermore, solving a new rigorous optimization problem at every sampling time is unnecessary if the process dynamics are slow compared to the disturbance dynamics. The two-layer control strategy is illustrated with a simulated case study of an industrial polymerization process.
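The two-time-scale idea can be sketched in a few lines. The scalar plant, the gains, and the `slow_layer`/`fast_layer` functions below are hypothetical placeholders, not the paper's formulation; the point is only that the expensive optimization runs every N samples while a cheap corrector runs at every sample:

```python
def slow_layer(disturbance_estimate):
    # placeholder "rigorous" economic optimization: pick a setpoint that
    # trades the economic optimum off against a slowly varying disturbance
    return 1.0 - 0.5 * disturbance_estimate

def fast_layer(x, setpoint, k=0.8):
    # placeholder neighboring-extremal-style corrector: cheap feedback
    return k * (setpoint - x)

x, setpoint, N = 0.0, 0.0, 10
history = []
for t in range(50):
    if t % N == 0:                       # slow time-scale: every N samples
        setpoint = slow_layer(disturbance_estimate=0.2)
    u = fast_layer(x, setpoint)          # fast time-scale: every sample
    x = 0.9 * x + u                      # toy linear plant step
    history.append(x)
```

In this toy loop the state settles near its closed-loop equilibrium between slow-layer updates, mirroring the paper's separation of the optimization rate from the sampling rate.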
14.
Luis M. Torres, Ramiro Torres, Ralf Borndörfer, Marc E. Pfetsch. International Transactions in Operational Research, 2011, 18(4): 455-472
We discuss an optimization model for the line planning problem in public transport, minimizing operating costs while guaranteeing a certain level of quality of service in terms of available transport capacity. We analyze the computational complexity of this problem for tree network topologies as well as for several categories of line operations that are important for the Quito Trolebús system. In practice, these instances can be solved quite well, and significant optimization potential can be demonstrated.
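A toy version of such a line planning model can be solved by brute force. The network, line pool, costs, and demands below are invented for illustration; a real instance would use an integer programming solver rather than enumeration:

```python
from itertools import product

# hypothetical mini-instance: three candidate lines on a small tree network;
# each line covers a set of edges and runs at a frequency of 0, 1, or 2 trips/hour
lines = {
    "L1": {"edges": {"a-b", "b-c"}, "cost": 4.0, "capacity": 100},
    "L2": {"edges": {"b-c", "c-d"}, "cost": 3.0, "capacity": 100},
    "L3": {"edges": {"a-b"},        "cost": 2.0, "capacity": 100},
}
demand = {"a-b": 150, "b-c": 120, "c-d": 80}   # passengers/hour per edge

def best_plan(lines, demand, freqs=(0, 1, 2)):
    best = None
    for combo in product(freqs, repeat=len(lines)):
        plan = dict(zip(lines, combo))
        # feasibility: offered capacity on every edge must cover its demand
        ok = all(
            sum(f * lines[l]["capacity"] for l, f in plan.items()
                if e in lines[l]["edges"]) >= d
            for e, d in demand.items())
        cost = sum(f * lines[l]["cost"] for l, f in plan.items())
        if ok and (best is None or cost < best[1]):
            best = (plan, cost)
    return best

plan, cost = best_plan(lines, demand)
```

For this instance the cheapest feasible plan runs every line once per hour; the enumeration stands in for the branch-and-bound search a solver would perform on realistic Trolebús-sized instances.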
15.
Wjatscheslaw Missal, Jaroslaw Kita, Eberhard Wappler, Frieder Gora, Annette Kipka, Thomas Bartnitzek, Franz Bechtold, Dirk Schabbel, Beate Pawlowski, Ralf Moos. Sensors and Actuators A: Physical, 2011, 172(1): 21-26
A miniaturized ceramic differential scanning calorimeter (MC-DSC) with integrated oven and crucible is presented. Despite its small size of only 11 mm × 39 mm × 1.5 mm, all functions of a conventional DSC apparatus, including the oven, are integrated in this novel device. The MC-DSC is fully manufactured in thick-film and green glass-ceramic tape-based low temperature co-fired ceramics (LTCC) technology; production costs are therefore expected to be low. Initial results using indium as a sample material show good dynamic performance of the MC-DSC. The full width at half maximum of the melting peak is 2.4 °C (sample mass approx. 11 mg, heating rate approx. 50 °C/min). Repeatability of the indium melting point is within ±0.02 °C. The melting-peak area increases linearly with the sample mass up to at least 26 mg. Simulations of a strongly simplified finite element model of the MC-DSC are in good agreement with measurement results, allowing a model-based prediction of its basic characteristics.
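The reported linear relation between melting-peak area and sample mass is the basis of a calibration line. A minimal least-squares fit over hypothetical (not measured) peak areas might look like:

```python
# ordinary least-squares line fit, peak area vs. sample mass
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

masses = [5, 10, 15, 20, 26]           # mg (hypothetical samples)
areas  = [142, 285, 430, 570, 741]     # arbitrary units (hypothetical readings)
slope, intercept = fit_line(masses, areas)
```

The fitted slope is the calibration factor (area per mg); an intercept near zero is consistent with the linearity reported up to 26 mg.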
16.
Thomas Neumann, Matthias Bender, Sebastian Michel, Ralf Schenkel, Peter Triantafillou, Gerhard Weikum. Distributed and Parallel Databases, 2009, 26(1): 3-27
Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially in distributed settings where the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments with three different real-life datasets, using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
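For context, a stripped-down sketch of the TPUT idea that the paper builds on: each node first ships only its local top-k partial scores, a lower bound on the k-th aggregate score yields a pruning threshold, and only surviving candidates are resolved exactly. This is a simplification for illustration, not the paper's optimized algorithms:

```python
def tput_topk(node_lists, k):
    """node_lists: per-node dicts item -> score; aggregate = sum over nodes."""
    m = len(node_lists)
    # phase 1: each node ships only its local top-k (item, score) pairs
    partial = {}
    for scores in node_lists:
        for item, s in sorted(scores.items(), key=lambda kv: -kv[1])[:k]:
            partial[item] = partial.get(item, 0) + s
    # the k-th largest partial sum is a lower bound on the k-th aggregate score
    tau = sorted(partial.values(), reverse=True)[k - 1]
    threshold = tau / m
    # phase 2: fetch every local score large enough to matter, then resolve
    candidates = set(partial)
    for scores in node_lists:
        candidates |= {i for i, s in scores.items() if s >= threshold}
    totals = {i: sum(n.get(i, 0) for n in node_lists) for i in candidates}
    return sorted(totals.items(), key=lambda kv: -kv[1])[:k]

# hypothetical per-node score lists
node_lists = [
    {"a": 10, "b": 8, "c": 1},
    {"a": 9, "c": 7, "b": 2},
    {"b": 10, "a": 1, "d": 6},
]
top2 = tput_topk(node_lists, k=2)
```

The paper's contributions can be read as tightening exactly these steps: better scan depths, operator trees over the node lists, and sampling of nodes instead of contacting all of them.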
17.
Geno-mathematical identification of the multi-layer perceptron (cited 1 time: 0 self-citations, 1 by others)
Ralf Östermark. Neural Computing & Applications, 2009, 18(4): 331-344
In this paper, we focus on the use of the three-layer backpropagation network in vector-valued time series estimation problems. The neural network provides a framework of noncomplex calculations for solving the estimation problem, yet the search for optimal or even feasible neural networks for stochastic processes is both time consuming and uncertain. The backpropagation algorithm (written in strict ANSI C) has been implemented as a standalone support library for the genetic hybrid algorithm (GHA) running on any sequential or parallel mainframe computer. In order to cope with ill-conditioned time series problems, we extended the original backpropagation algorithm to a K nearest neighbors algorithm (K-NARX), where the number K is determined genetically along with a set of key parameters. In the K-NARX algorithm, the terminal solution at instant t can be used as a starting point for the next t, which tends to stabilize the optimization process when dealing with autocorrelated time series vectors. This possibility has proved especially useful in difficult time series problems. Following the prevailing research directions, we use a genetic algorithm to determine optimal parameterizations for the network, including the lag structure for the nonlinear vector time series system; the net structure with one or two hidden layers and the corresponding number of nodes; the type of activation function (currently the standard logistic sigmoid, a bipolar transformation, the hyperbolic tangent, an exponential function and the sine function); the type of minimization algorithm; the number K of nearest neighbors in the K-NARX procedure; the initial value of the Levenberg–Marquardt damping parameter; and the value of the neural learning (stabilization) coefficient α. We have focused on a flexible structure allowing the addition of, e.g., new minimization algorithms and activation functions in the future. We demonstrate the power of the genetically trimmed K-NARX algorithm on a representative data set.
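The K-nearest-neighbour forecasting step can be illustrated with a toy one-step-ahead "analog" predictor. Here K is chosen by exhaustive search over a validation window as a stand-in for the paper's genetic search, and the sine series is an invented example:

```python
import math

# hypothetical univariate series standing in for a vector-valued process
series = [math.sin(0.3 * t) for t in range(200)]

def knn_forecast(history, K, lag=3):
    """Predict the next value as the mean of the values that followed the K
    past lag-windows most similar to the current one."""
    query = history[-lag:]
    cands = []
    for i in range(lag, len(history) - 1):
        window = history[i - lag:i]
        d = sum((a - b) ** 2 for a, b in zip(window, query))
        cands.append((d, history[i]))   # history[i] followed this window
    cands.sort(key=lambda c: c[0])
    return sum(v for _, v in cands[:K]) / K

def validation_error(K, n_test=30):
    # mean squared one-step-ahead error over a held-out tail of the series
    err = 0.0
    for t in range(len(series) - n_test, len(series)):
        err += (knn_forecast(series[:t], K) - series[t]) ** 2
    return err / n_test

# stand-in for the genetic determination of K: pick the K with lowest error
best_K = min(range(1, 11), key=validation_error)
```

The genuine K-NARX additionally feeds back terminal solutions across time steps and tunes K jointly with the network structure; this sketch only shows why a data-driven choice of K matters.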
18.
Adrian Blumer, Jan Novák, Ralf Habel, Derek Nowrouzezahrai, Wojciech Jarosz. Computer Graphics Forum, 2016, 35(7): 461-473
Aggregate scattering operators (ASOs) describe the overall scattering behavior of an asset (i.e., an object or volume, or a collection thereof), accounting for all orders of its internal scattering. We propose a practical way to precompute and compactly store ASOs and demonstrate their ability to accelerate path tracing. Our approach is modular, avoiding costly and inflexible scene-dependent precomputation. This is achieved by decoupling light transport within and outside of each asset and precomputing on a per-asset level. We store the internal transport in a reduced-dimensional subspace tailored to the structure of the asset geometry, its scattering behavior, and typical illumination conditions, allowing the ASOs to maintain good accuracy with modest memory requirements. The precomputed ASO can be reused across all instances of the asset and across multiple scenes. We augment ASOs with functionality enabling multi-bounce importance sampling, fast short-circuiting of complex light paths, and compact caching, while retaining rapid progressive preview rendering. We demonstrate the benefits of our ASOs by efficiently path tracing scenes containing many instances of objects with complex inter-reflections or multiple scattering.
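At a very coarse level, an asset's internal multi-bounce transport can be precomputed once as a linear operator and reused for every instance. The sketch below sums a Neumann series of a small single-bounce matrix over direction bins; this is a drastic simplification of the paper's reduced-dimensional ASOs, with invented numbers:

```python
def matmul(a, b):
    """Dense matrix product for small lists-of-lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def precompute_aso(single_bounce, bounces=8):
    """Approximate all orders of internal scattering: I + M + M^2 + ... """
    n = len(single_bounce)
    total = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    power = [row[:] for row in total]
    for _ in range(bounces):
        power = matmul(power, single_bounce)
        total = [[total[i][j] + power[i][j] for j in range(n)]
                 for i in range(n)]
    return total

# hypothetical 2-bin single-bounce scattering matrix (albedo 0.5 per bounce)
M = [[0.25, 0.25], [0.25, 0.25]]
aso = precompute_aso(M)            # precomputed once per asset...
# ...and reused for every instance: apply the operator to incoming radiance
incoming = [1.0, 0.0]
outgoing = [sum(aso[i][j] * incoming[j] for j in range(2)) for i in range(2)]
```

The paper's actual operators live in subspaces tailored to the asset and support importance sampling; the sketch only conveys the precompute-once, apply-per-instance structure.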
19.
Vasileios Belagiannis, Xinchao Wang, Horesh Beny Ben Shitrit, Kiyoshi Hashimoto, Ralf Stauder, Yoshimitsu Aoki, Michael Kranzfelder, Armin Schneider, Pascal Fua, Slobodan Ilic, Hubertus Feussner, Nassir Navab. Machine Vision and Applications, 2016, 27(7): 1035-1046
Multiple human pose estimation is an important yet challenging problem. In an operating room (OR) environment, the 3D body poses of surgeons and medical staff can provide important clues for surgical workflow analysis. For that purpose, we propose an algorithm for localizing and recovering the body poses of multiple humans in an OR environment under a multi-camera setup. Our model builds on 3D Pictorial Structures and 2D body part localization across all camera views, using convolutional neural networks (ConvNets). To evaluate our algorithm, we introduce a dataset captured in a real OR environment. Our dataset is unique, challenging, and publicly available with annotated ground truth. Our proposed algorithm yields promising pose estimation results on this dataset.
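One core geometric step in such multi-camera pose recovery is lifting a body joint to 3D from per-view 2D detections. A minimal midpoint triangulation of two back-projected rays, with made-up camera geometry and ignoring the ConvNet detection and pictorial-structures stages, is:

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(c1, d1, c2, d2):
    """Midpoint of the closest points between rays c1 + t*d1 and c2 + s*d2."""
    r = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b               # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = tuple(ci + t * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + s * di for ci, di in zip(c2, d2))
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

# hypothetical joint at (0, 0, 5) seen from two calibrated cameras
joint = triangulate((0, 0, 0), (0, 0, 1), (1, 0, 0), (-1, 0, 5))
```

With noisy detections the two rays no longer intersect, which is exactly why the midpoint (or a least-squares multi-view variant) is used, and why the paper reasons jointly over views with 3D Pictorial Structures.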
20.
Christoph Bosshard, Roland Bouffanais, Michel Deville, Ralf Gruber, Jonas Latt. Computers & Fluids, 2011, 44(1): 1-8
In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is on the performance evaluation of several aspects, with particular emphasis on parallel efficiency. The performance evaluation is analyzed with the help of a time-prediction model based on a parameterization of the application and the hardware resources. Two tailor-made benchmark cases in computational fluid dynamics (CFD) are introduced and used to carry out this review, with particular attention to clusters with up to thousands of cores. Some problems in the parallel implementation were detected and corrected. The theoretical complexities with respect to the number of elements, the polynomial degree, and the communication needs are correctly reproduced. It is concluded that this type of code has nearly perfect speedup on machines with thousands of cores and is ready to make the step to next-generation petaflop machines.
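A time-prediction model of the kind described can be as simple as a compute term that divides by the core count plus a communication term that does not. All constants below are invented, not the paper's measured parameters:

```python
import math

def predicted_time(p, n_elem=4096, t_elem=1e-4,
                   latency=5e-6, bandwidth=1e9, halo_bytes=8e4):
    """Predicted wall-clock time per step on p cores (toy parameterization)."""
    compute = n_elem * t_elem / p                  # perfectly divisible work
    # tree-structured message latencies plus halo-exchange volume
    comm = latency * math.log2(p) + halo_bytes / bandwidth if p > 1 else 0.0
    return compute + comm

def efficiency(p):
    """Parallel efficiency relative to the single-core prediction."""
    return predicted_time(1) / (p * predicted_time(p))
```

Fitting the constants to measurements is what lets such a model expose implementation problems: a measured efficiency well below the predicted one (here still above 70% at 1024 cores) points at overheads the parameterization does not account for.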