Full-text access type
Paid full text | 1859 articles |
Free | 90 articles |
Free (domestic) | 5 articles |
Subject classification
Electrical engineering | 24 articles |
General | 16 articles |
Chemical industry | 568 articles |
Metalworking | 30 articles |
Machinery & instrumentation | 32 articles |
Building science | 138 articles |
Mining engineering | 3 articles |
Energy & power engineering | 27 articles |
Light industry | 172 articles |
Hydraulic engineering | 11 articles |
Petroleum & natural gas | 5 articles |
Radio engineering | 116 articles |
General industrial technology | 378 articles |
Metallurgy | 71 articles |
Nuclear technology | 16 articles |
Automation technology | 347 articles |
Publication year
2023 | 27 articles |
2022 | 49 articles |
2021 | 66 articles |
2020 | 36 articles |
2019 | 46 articles |
2018 | 43 articles |
2017 | 41 articles |
2016 | 51 articles |
2015 | 47 articles |
2014 | 73 articles |
2013 | 97 articles |
2012 | 89 articles |
2011 | 158 articles |
2010 | 101 articles |
2009 | 111 articles |
2008 | 103 articles |
2007 | 84 articles |
2006 | 86 articles |
2005 | 74 articles |
2004 | 69 articles |
2003 | 54 articles |
2002 | 51 articles |
2001 | 44 articles |
2000 | 42 articles |
1999 | 41 articles |
1998 | 27 articles |
1997 | 27 articles |
1996 | 18 articles |
1995 | 22 articles |
1994 | 27 articles |
1993 | 17 articles |
1992 | 12 articles |
1991 | 23 articles |
1990 | 13 articles |
1989 | 15 articles |
1988 | 7 articles |
1987 | 6 articles |
1986 | 4 articles |
1985 | 6 articles |
1984 | 10 articles |
1983 | 8 articles |
1981 | 4 articles |
1980 | 2 articles |
1978 | 4 articles |
1977 | 4 articles |
1976 | 2 articles |
1974 | 2 articles |
1967 | 2 articles |
1933 | 1 article |
1931 | 3 articles |
Sort by: 1954 results found, search time 0 ms
11.
Zeolite based trace humidity sensor for high temperature applications in hydrogen atmosphere  Cited by: 1 (self-citations: 0, citations by others: 1)
We present a humidity sensor based on H-ZSM-5 type zeolite that is suitable for detecting traces of humidity (10–110 ppmV) under harsh conditions, e.g. a reducing atmosphere (H2) and high temperatures (up to 600 °C). By means of complex impedance spectroscopy (IS), we show that the zeolite sensor responds linearly to minimal changes in humidity. This result indicates that the zeolite sensor is capable of detecting traces of humidity in processes that require high temperatures in a hydrogen environment.
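The linear response reported above amounts to a one-parameter calibration: a response value extracted from the impedance spectra varies linearly with the humidity level. The following sketch illustrates that idea with an ordinary least-squares fit; the humidity levels and response values are hypothetical, not data from the paper.

```python
# Illustrative sketch (hypothetical data): fitting a linear calibration
# curve response = a * ppmV + b, as one might after extracting a scalar
# response from each impedance spectrum.

def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical humidity levels (ppmV) and sensor responses (arbitrary units)
humidity = [10, 30, 50, 70, 90, 110]
response = [0.12, 0.31, 0.52, 0.70, 0.91, 1.10]

slope, intercept = fit_linear(humidity, response)
print(f"response ~ {slope:.4f} * ppmV + {intercept:.4f}")
```

With a fit like this, an unknown humidity level can be read back from a measured response by inverting the line, which is what makes a linear sensor characteristic attractive in practice.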
12.
Dipl.-Wirt.-Inform. Daniel Beverungen Dr. Ralf Knackstedt Dipl.-Wirt.-Inform. Oliver Müller 《WIRTSCHAFTSINFORMATIK》2008,50(3):220-234
Offering product-service bundles (consisting of products and services) is becoming increasingly important for companies. Modifying the organizational structure of the cooperation, as well as adapting to changing customer demands, requires versatile information systems. Implementing Service-Oriented Architectures (SOA) is one approach to providing this flexibility. Currently, there is little methodical guidance for the identification, specification, and implementation of services as building blocks of Service-Oriented Architectures. To address this need, a conceptual approach is designed that adapts approaches to customer integration and combines a business and IT analysis. The applicability of the method is demonstrated by designing a Service-Oriented Architecture for the recycling of electronic equipment. Implementing services for other product-service bundles will support additional integration scenarios. By standardizing services, a sound integration of products and services can be supported by providing a reference architecture.
13.
Integrating products and services into customized solutions can help firms differentiate themselves from their competitors. In practice, however, many companies fall short in extracting value from their customers. This paper therefore focuses on pricing as a central means of value appropriation in the context of solutions. Following the resource-based view of the firm, we adopt a process-oriented perspective on pricing practices in order to identify crucial factors and activities. Based on 15 in-depth interviews with practitioners from various industries, we derive six steps of a price-management process for value appropriation in the context of solution selling and present critical activities and routines within each step.
14.
15.
Nesterov A König K Felgenhauer T Lindenstruth V Trunk U Fernandez S Hausmann M Bischoff FR Breitling F Stadler V 《The Review of scientific instruments》2008,79(3):035106
We examined the high-precision deposition of toner and polymer microparticles with a typical size of approximately 10 µm on electrode arrays with electrodes of 100 µm and below using custom-made microelectronic chips. Selective desorption of redundant particles was employed to obtain a given particle pattern from preadsorbed particle layers. Microparticle desorption was regulated by dielectrophoretic attracting forces generated by individual pixel electrodes, tangential detaching forces of an air flow, and adhesion forces on the microchip surface. A theoretical consideration of the acting forces showed that without pixel voltage, the tangential force applied for particle detachment exceeded the particle adhesion force. When the pixel voltage was switched on, however, the sum of attracting forces was larger than the tangential detaching force, which was crucial for desorption efficiency. In our experiments, appropriately large dielectrophoretic forces were achieved by applying high voltages of up to 100 V to the pixel electrodes. In addition, the electrode geometries on the chip's surface, as well as the particle size, influenced the desorption quality. We further demonstrated the compatibility of this procedure with complementary metal oxide semiconductor chip technology, which should allow for an easy technical implementation with respect to high-resolution microparticle deposition.
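The force balance described in this abstract reduces to a simple inequality: a particle remains adsorbed only while the attracting forces (dielectrophoretic plus adhesion) exceed the tangential detaching force of the air flow. The sketch below illustrates that switching logic with hypothetical force values; none of the numbers come from the paper.

```python
# Illustrative force-balance check (hypothetical values): a particle stays
# on the chip only if the sum of attracting forces exceeds the tangential
# detaching force of the air flow.

def stays_adsorbed(f_dep, f_adhesion, f_tangential):
    """True if dielectrophoretic + adhesion forces beat the air-flow force."""
    return f_dep + f_adhesion > f_tangential

# Hypothetical forces in nanonewtons
f_adhesion = 40.0    # surface adhesion alone
f_tangential = 55.0  # tangential air-flow force on the particle

# Pixel voltage off: no dielectrophoretic attraction -> particle is removed
print(stays_adsorbed(0.0, f_adhesion, f_tangential))   # False
# Pixel voltage on: added DEP force tips the balance -> particle is retained
print(stays_adsorbed(30.0, f_adhesion, f_tangential))  # True
```

This is exactly the selectivity mechanism the abstract describes: the adhesion force alone is deliberately smaller than the air-flow force, so only pixels with voltage applied keep their particles.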
16.
Daniele Colombo Slah Drira Ralf Frotscher Manfred Staat 《International journal for numerical methods in engineering》2023,124(2):402-433
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduce sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows the two methods to be implemented in a standard finite element code with no modifications to its architecture. Moreover, the element-based formulation makes it easy to handle any type of element, especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements are used in the FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed in order to apply FS-FEM to any standard finite element.
17.
Anne Martens Heiko Koziolek Lutz Prechelt Ralf Reussner 《Empirical Software Engineering》2011,16(5):587-622
Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save costs for late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the effort needed for modelling are heavily influenced by human factors, which are so far hardly understood empirically. Do component-based methods allow performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected based on the resulting artefacts, questionnaires, and screen recording. They were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE, and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the effort to reuse can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the actual activities when applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
18.
Jörg Becker Daniel F. Beverungen Ralf Knackstedt 《Information Systems and E-Business Management》2010,8(1):33-66
Confronted with decreasing margins and a rising customer demand for integrated solutions, manufacturing companies integrate complementary services into their portfolios. Offering value bundles (consisting of services and physical goods) takes place in integrated product–service systems, spanning the coordinated design and delivery of services and physical goods for customers. Conceptual modeling is an established approach to support and guide such efforts. Using a framework for the design and delivery of value bundles as an analytical lens, this study evaluates the current support of reference models and modeling languages for setting up conceptual models for an integrated design and delivery of value bundles. Subsequently, designing modeling languages and reference models to fit the requirements of conceptual models in product–service systems is presented as an upcoming challenge in Service Research. To guide further research, first steps are proposed by exemplarily integrating reference models and modeling languages stemming from the service and manufacturing domains.
19.
A two-layer architecture for dynamic real-time optimization (or nonlinear model predictive control (NMPC) with an economic objective) is presented, where the solution of the dynamic optimization problem is computed on two time-scales. On the upper layer, a rigorous optimization problem with an economic objective function is solved at a slow time-scale, which captures slow trends in process uncertainties. On the lower layer, a fast neighboring-extremal controller tracks the trajectory in order to deal with fast disturbances acting on the process. Compared to a single-layer architecture, the two-layer architecture can address control systems with complex models leading to high computational load, since the rigorous optimization problem can be solved at a slower rate than the process sampling time. Furthermore, solving a new rigorous optimization problem is not necessary at each sampling time if the process dynamics are slow compared to the disturbance dynamics. The two-layer control strategy is illustrated with a simulated case study of an industrial polymerization process.
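The separation of time-scales described in this abstract can be sketched as two nested loops: an outer layer that re-optimizes a reference trajectory only every few samples, and an inner tracking layer that corrects toward that trajectory at every sample. The sketch below is a deliberately trivial stand-in (a constant setpoint and a proportional tracker acting on an integrator), not the paper's neighboring-extremal controller or its polymerization model; all functions and numbers are illustrative assumptions.

```python
# Schematic two-layer control loop (illustrative, not the paper's method):
# a slow "economic" layer updates the reference infrequently, and a fast
# tracking layer runs at every process sample.

def economic_optimization(disturbance_trend):
    """Slow layer: return a reference setpoint, here just a constant
    shifted by the slow disturbance trend (purely illustrative)."""
    return 1.0 + 0.1 * disturbance_trend

def tracking_controller(setpoint, measurement, gain=0.5):
    """Fast layer: simple proportional correction toward the reference."""
    return gain * (setpoint - measurement)

state = 0.0
setpoint = economic_optimization(disturbance_trend=0.0)
for k in range(20):
    if k % 10 == 0:  # re-optimize only every 10th sample (slow time-scale)
        setpoint = economic_optimization(disturbance_trend=0.02 * k)
    u = tracking_controller(setpoint, state)  # fast layer, every sample
    state += u  # trivial integrator as a stand-in for the process
print(round(state, 3))
```

The point of the structure is visible even in this toy: the expensive call (`economic_optimization`) runs at a fraction of the sampling rate, while the cheap tracking law keeps the process near the most recent reference in between.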
20.
Luis M. Torres Ramiro Torres Ralf Borndörfer Marc E. Pfetsch 《International Transactions in Operational Research》2011,18(4):455-472
We discuss an optimization model for the line planning problem in public transport that minimizes operating costs while guaranteeing a certain level of service quality in terms of available transport capacity. We analyze the computational complexity of this problem for tree network topologies, as well as for several categories of line operations that are important for the Quito Trolebús system. In practice, these instances can be solved quite well, and significant optimization potential can be demonstrated.
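At its core, a line planning model of the kind described above assigns a frequency to each candidate line so that the combined capacity on every network edge covers demand, at minimum operating cost. The toy sketch below solves such an instance by brute force; the network, demands, and costs are hypothetical and have nothing to do with the actual Quito Trolebús data or the paper's model.

```python
# Toy line-planning sketch (hypothetical data): pick a frequency for each
# candidate line so that every edge receives enough capacity, at minimum
# operating cost. Brute force over a small set of allowed frequencies.
from itertools import product

# Candidate lines: the edges each line covers and its cost per frequency unit
lines = {
    "L1": {"edges": {"a-b", "b-c"}, "cost": 3.0},
    "L2": {"edges": {"b-c", "c-d"}, "cost": 2.0},
}
capacity_per_freq = 100                        # passengers per frequency unit
demand = {"a-b": 150, "b-c": 250, "c-d": 100}  # required capacity per edge
frequencies = [0, 1, 2, 3]                     # allowed frequency choices

best = None
for combo in product(frequencies, repeat=len(lines)):
    plan = dict(zip(lines, combo))
    # capacity supplied on each edge by all lines passing through it
    feasible = all(
        sum(plan[l] * capacity_per_freq
            for l in lines if e in lines[l]["edges"]) >= d
        for e, d in demand.items()
    )
    cost = sum(plan[l] * lines[l]["cost"] for l in lines)
    if feasible and (best is None or cost < best[0]):
        best = (cost, plan)

print(best)  # cheapest feasible plan: (cost, {line: frequency})
```

Real instances are solved with integer programming rather than enumeration, but the feasibility condition (summed capacity per edge meets demand) and the cost objective are the same ingredients.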