31.
Jacques G. Noudem, Sophie Meslin, Daniel Horvath, Christelle Harnois, Daniel Chateigner, Bachir Ouladdiaf, Sophie Eve, Mousta Gomina, Xavier Chaud, Masato Murakami 《Journal of the American Ceramic Society》2007,90(9):2784-2790
The recently reported hole-patterned YBa2Cu3Oy (Y123) bulks with improved superconducting properties are highly interesting in terms of both material quality and breadth of application. It is well known that the core of a plain bulk superconductor must be fully oxygenated, and defects such as cracks, pores, and voids must be suppressed, for the material to trap a high magnetic field or carry a high current density. To minimize these defects, we used a combination of standard superconducting ceramic processing and an infiltration technique to prepare regularly perforated YBa2Cu3Oy (Y123) bulk superconductors. This process leads to negligible shrinkage upon annealing and a uniform distribution of Y211 inclusions. Texture was evidenced by neutron pole-figure measurements. Flux mapping was used to verify the superconducting homogeneity of the samples and to investigate their field-trapping ability. In addition, the textured drilled samples were reinforced by resin or metal impregnation, and the influence of the different processing steps on the hardness of the materials was investigated.
32.
Ethylene-propylene copolymers were prepared using Ziegler-Natta catalysts based on TiCl4, MgCl2, PCl3, and (n-Bu)3PO4. The catalysts TiCl4/MgCl2/PCl3 and TiCl4/MgCl2/(n-Bu)3PO4 were prepared by reacting TiCl4 with pretreated MgCl2. The support was prepared by ball milling MgCl2 with varied amounts of PCl3 or (n-Bu)3PO4. The addition of PCl3 increased the MgCl2 surface area markedly in comparison with (n-Bu)3PO4. The effects of PCl3 and (n-Bu)3PO4 on ethylene homopolymerization, ethylene-propylene copolymerization, and copolymer properties were evaluated. The catalyst system containing PCl3 made it possible to synthesize propylene-ethylene copolymers with up to 75% (w/w) propylene and provided control of copolymer crystallinity. The narrowing of the copolymer molecular weight distribution suggested that PCl3 acted as an internal donor, poisoning some active catalytic sites.
Received: 2 April 1997 / Revised: 6 June 1997 / Accepted: 18 June 1997
33.
This paper presents GPELab (Gross–Pitaevskii Equation Laboratory), an advanced, easy-to-use, and flexible Matlab toolbox for numerically simulating many complex physical situations related to Bose–Einstein condensation. The model equation that GPELab solves is the Gross–Pitaevskii equation. The aim of this first part is to present the physical problems and the robust, accurate numerical schemes implemented for computing stationary solutions, to show a few computational examples, and to explain how the basic GPELab functions work. Problems that can be solved include: 1D, 2D, and 3D situations; general potentials; large classes of local and nonlocal nonlinearities; multi-component problems; and fast rotating gases. The toolbox is developed in such a way that other physics applications requiring the numerical solution of general Schrödinger-type equations can also be considered.
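Stationary states of the Gross–Pitaevskii equation are typically computed by a gradient-flow (imaginary-time) iteration with renormalization. The Python sketch below illustrates that general idea for a 1D condensate in a harmonic trap; it does not reproduce GPELab's actual API, and the grid size, nonlinearity strength `beta`, and step counts are illustrative assumptions.

```python
import numpy as np

# Illustrative normalized gradient flow (imaginary-time propagation) for the
# 1D Gross-Pitaevskii equation in a harmonic trap. Parameters are assumed
# values for demonstration, not GPELab defaults.
def gpe_ground_state(n=256, L=16.0, beta=100.0, dt=1e-3, steps=5000):
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    V = 0.5 * x**2                      # harmonic trapping potential
    psi = np.exp(-x**2)                 # initial guess
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
    for _ in range(steps):
        # Second-order finite-difference Laplacian (periodic wrap is harmless
        # here because psi is negligible at the domain edges).
        lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
        # One explicit Euler step of the gradient flow ...
        psi = psi - dt * (-0.5 * lap + (V + beta * np.abs(psi)**2) * psi)
        # ... followed by projection back onto the unit-mass constraint.
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
    return x, psi

x, psi = gpe_ground_state()
```

The renormalization after each step is what distinguishes this scheme from plain diffusion: it keeps the wave function on the unit-norm sphere while the energy decreases toward the ground state.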
34.
Roser Cervellera, Xavier Ramis, Josep Maria Salla, Ana Mantecón, Àngels Serra 《Journal of Applied Polymer Science》2006,102(3):2086-2093
Diglycidyl ether of bisphenol A or 3,4‐epoxycyclohexylmethyl 3,4‐epoxycyclohexane carboxylate was mixed with different proportions of 4‐methyl‐1,3‐dioxolan‐2‐one and cured using lanthanide triflates as initiators. For comparison, conventional initiators such as boron trifluoride complexes and N,N‐dimethylaminopyridine were also tested. The curing process was followed by differential scanning calorimetry (DSC) and Fourier transform IR spectroscopy in attenuated total reflectance mode. The latter technique showed that the carbonate accelerates the curing process because it helps to form the active initiating species, although it is not chemically incorporated into the network and remains entrapped in the material. A DSC kinetic study is also reported. © 2006 Wiley Periodicals, Inc. J Appl Polym Sci 102: 2086–2093, 2006
35.
This paper presents several clustering methods based on rank distance. Rank distance has applications in many fields, such as computational linguistics, biology, and computer science. The K-means algorithm represents each cluster by a single mean vector, computed with respect to a distance measure. Two K-means algorithms based on rank distance are described in this paper. Hierarchical clustering builds models based on distance connectivity; this paper describes two hierarchical clustering techniques that use rank distance. Experiments on mitochondrial DNA sequences extracted from several mammals compare the results of the clustering methods and demonstrate the performance and utility of the proposed algorithms.
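To make the rank-distance idea concrete, the sketch below uses the Spearman-footrule form of rank distance on full rankings, together with a medoid representative for each cluster (an arithmetic mean of rankings is generally not itself a ranking). This is a hedged illustration of the general technique, not the paper's exact K-means variant or its string-based rank distance.

```python
# Illustrative rank-distance clustering: Spearman-footrule distance between
# rankings plus one assign/update pass with medoid representatives.
# The helper names and data are assumptions for demonstration only.
def rank_distance(r1, r2):
    """Sum of absolute rank differences between two rankings of the same items."""
    return sum(abs(a - b) for a, b in zip(r1, r2))

def assign_and_update(rankings, medoids):
    # Assign each ranking to its nearest medoid, then choose as new medoid
    # the cluster member minimizing the total rank distance to its cluster.
    clusters = {i: [] for i in range(len(medoids))}
    for r in rankings:
        i = min(range(len(medoids)), key=lambda k: rank_distance(r, medoids[k]))
        clusters[i].append(r)
    new_medoids = []
    for i, members in clusters.items():
        if not members:
            new_medoids.append(medoids[i])   # keep an empty cluster's medoid
            continue
        new_medoids.append(
            min(members, key=lambda m: sum(rank_distance(m, o) for o in members)))
    return clusters, new_medoids

rankings = [(0, 1, 2, 3), (0, 2, 1, 3), (3, 2, 1, 0), (3, 1, 2, 0)]
clusters, medoids = assign_and_update(rankings, [rankings[0], rankings[2]])
```

Iterating `assign_and_update` until the medoids stop changing yields a K-medoids-style procedure under rank distance.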
36.
Successful massively multiplayer online games (MMOGs) today have millions of registered users and hundreds of thousands of active concurrent players. To guarantee quality of service (QoS) to a highly variable number of concurrent users, game operators statically over-provision a large infrastructure capable of sustaining the game's peak load, even though a large portion of the resources is unused most of the time. To address this problem, we introduce a new ecosystem for hosting and provisioning MMOGs that splits the traditional monolithic MMOG company into three main service providers: game providers, game operators, and resource providers. Their interaction is regulated through comprehensive service level agreements (SLAs) that establish the price, terms of operation, and compensation for service violations. In our model, game operators efficiently provision resources for MMOGs from multiple cloud providers, based on dynamic load forecasts, and ensure proper game operation that maintains the required QoS for all clients under varying resource availability. Game providers manage multiple distributed MMOGs for which they lease services under strict operational SLAs from game operators to satisfy all client requests. These three self-standing, smaller, more agile service providers open the MMOG market to small and medium enterprises and to current commercial cloud providers. We evaluate, through simulations based on real-life MMOG traces and commercial cloud SLAs, the impact of resource availability on the QoS offered to MMOG clients. We find that our model can mitigate the negative effects of resource failures within four minutes, and that MMOG server consolidation can accentuate the negative effects of failures in a resource-scarce environment. We further investigate different methods of ranking MMOG operational offers with either a single or multiple (competing) MMOG providers. Our results show that accounting for SLA-fault compensations in the offer-selection process can increase the game providers' income by 11–16%. We also demonstrate that adequate ranking of offers can reduce MMOG operational costs by 20–60%.
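The offer-ranking idea can be sketched as scoring each operator's offer by its expected net cost, with the SLA compensation for expected violations offsetting the headline price. The field names and the linear failure model below are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical offer-selection sketch: an offer's expected net cost is its
# price minus the compensation the SLA pays for expected violations.
def expected_net_cost(offer):
    expected_violations = offer["failure_rate"] * offer["hours"]
    compensation = expected_violations * offer["penalty_per_violation"]
    return offer["price"] - compensation

def rank_offers(offers):
    # Cheapest expected net cost first.
    return sorted(offers, key=expected_net_cost)

offers = [
    {"name": "op-A", "price": 1000.0, "failure_rate": 0.02,
     "hours": 720, "penalty_per_violation": 5.0},
    {"name": "op-B", "price": 950.0, "failure_rate": 0.001,
     "hours": 720, "penalty_per_violation": 5.0},
]
best = rank_offers(offers)[0]
```

Note how the nominally pricier offer can win once compensation is counted, which mirrors the abstract's point that including SLA-fault compensation in offer selection changes the ranking and the provider's income.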
37.
At high temperatures (1000–2000°C) and low pressures (10⁻⁵–10⁻² Torr), ethylene, acetylene, and benzene decompose heterogeneously on pyrolytic carbon, giving mainly hydrogen and deposited carbon, with collision yields of the order of 10⁻⁴. The kinetics of these carbon deposition reactions show some striking similarities with carbon removal reactions by oxygen or oxygenated compounds. The true reaction order of these decomposition reactions is one above 1400°C, but becomes smaller at lower temperatures. This behaviour, common in gas–solid reactions, is generally interpreted as an inhibition due to chemisorption of some intermediate or reaction product. Evidence is also obtained that decomposition of the hydrocarbon molecules occurs only on particular sites of the carbon surface; i.e., the decomposition is not a purely thermal process but involves a specific chemical interaction with the surface. Moreover, the behaviour of the pyrocarbon surface in carbon deposition reactions is similar to that observed in gasification reactions; i.e., the reactivity of the surface accommodates itself to the temperature and pressure conditions, as revealed by the observation of "transitory" and "stationary" rates. Transitory rates show that the surface deactivates with increasing temperature (Figs. 4 and 5), from which a maximum in the stationary rate results (Figs. 1–3), and with decreasing pressure (Figs. 7 and 8). The interpretation assumes that reaction sites are continuously created by the deposition of carbon atoms, but are also deactivated by a thermal healing process. A main difference between carbon deposition from hydrocarbons and carbon gasification concerns the temperature range in which reactivity is temperature dependent: in carbon deposition reactions, deactivation of the pyrocarbon surface remains effective up to much higher temperatures (Fig. 12).
38.
The nucleation and growth of CO2 bubbles in non-Newtonian and Newtonian fluids, initially supersaturated under different pressures, are investigated in the present work. Quantitative observations with two cameras reveal that, at an immobile nucleation site, the bubble first grows rapidly and then its diameter increases linearly with time. After reaching a critical size, the bubble detaches from the stagnant site and rises through the liquid, with both its diameter and its travelled distance increasing exponentially in time. A simple physical argument is proposed to qualitatively explain these observations. Recently, the growth rate and the flow field around a CO2 micro-bubble in water were measured in a microdevice by micro-particle image velocimetry. This microscale information gives new insight into the complex mechanism of bubble nucleation and growth in fluids and could help develop rigorous theoretical modelling and numerical simulation, such as the lattice Boltzmann approach.
39.
The traditional approach to clustering is to fit a model (a partition or prototypes) to the given data. We propose the opposite approach: fitting the data to a given clustering model that is optimal for similar pathological data of equal size and dimensionality. We then perform an inverse transform from this pathological data back to the original data, refining the optimal clustering structure during the process. The key idea is that we do not need to find an optimal global allocation of the prototypes; instead, we only need to perform local fine-tuning of the clustering prototypes during the transformation in order to preserve the already optimal clustering structure.
40.
Jean‐Rémy Falleri, Xavier Blanc, Reda Bendraou, Marcos Aurélio Almeida da Silva, Cédric Teyton 《Software》2014,44(5):621-641
Ensuring model consistency is a key concern when using a model‐based development approach, so model inconsistency detection has received significant attention in recent years. To be useful, inconsistency detection has to be sound, efficient, and scalable. Incremental detection is one way to achieve efficiency in the presence of large models. In most existing approaches, incrementality is achieved at the expense of memory consumption, which grows in proportion to the model size and the number of consistency rules. In this paper, we propose a new incremental inconsistency detection approach that consumes only a small, model-size-independent amount of memory. It therefore scales better to projects using large models and many consistency rules. Copyright © 2012 John Wiley & Sons, Ltd.
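One way to keep memory consumption independent of model size, sketched below under assumed data structures that are not the paper's, is to have each rule declare the element types it reads and to re-evaluate, after an edit, only the rules whose scope includes the touched type, keeping no per-element result cache.

```python
# Assumed-design sketch of scope-based incremental consistency checking.
# The Checker stores only the rule set (bounded memory), never cached
# per-element verdicts; rule names and the toy model are illustrative.
class Checker:
    def __init__(self):
        self.rules = []  # list of (name, scope_types, predicate) triples

    def add_rule(self, name, scope_types, predicate):
        self.rules.append((name, frozenset(scope_types), predicate))

    def check_edit(self, model, touched_type):
        # Re-run only the rules whose declared scope covers the edited type;
        # return the names of the rules the current model violates.
        return [name for name, scope, pred in self.rules
                if touched_type in scope and not pred(model)]

model = {"classes": [{"name": "A", "attrs": ["x", "x"]}]}  # duplicate attr
checker = Checker()
checker.add_rule(
    "unique-attrs", {"class"},
    lambda m: all(len(c["attrs"]) == len(set(c["attrs"])) for c in m["classes"]))
violations = checker.check_edit(model, "class")
```

The trade-off is recomputation time within a rule's scope in exchange for constant memory per rule, which is the flavour of balance the abstract describes.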