Full-text access type
Paid full text | 1748 articles |
Free | 68 articles |
Free (domestic) | 1 article |
Subject classification
Electrical engineering | 44 articles |
General | 2 articles |
Chemical industry | 435 articles |
Metalworking | 42 articles |
Machinery & instrumentation | 45 articles |
Building science | 83 articles |
Mining engineering | 3 articles |
Energy & power engineering | 91 articles |
Light industry | 182 articles |
Hydraulic engineering | 8 articles |
Petroleum & natural gas | 2 articles |
Radio & electronics | 74 articles |
General industrial technology | 314 articles |
Metallurgical industry | 175 articles |
Nuclear technology | 20 articles |
Automation technology | 297 articles |
Publication year
2023 (7), 2022 (18), 2021 (26), 2020 (25), 2019 (33), 2018 (38), 2017 (37), 2016 (54), 2015 (30), 2014 (67), 2013 (116), 2012 (84), 2011 (146), 2010 (83), 2009 (87), 2008 (100), 2007 (90), 2006 (91), 2005 (84), 2004 (52), 2003 (64), 2002 (36), 2001 (42), 2000 (34), 1999 (23), 1998 (43), 1997 (31), 1996 (22), 1995 (22), 1994 (15), 1993 (18), 1992 (12), 1991 (11), 1990 (13), 1989 (16), 1988 (16), 1987 (15), 1986 (7), 1985 (11), 1984 (9), 1983 (13), 1982 (11), 1981 (10), 1980 (4), 1979 (7), 1978 (6), 1977 (4), 1976 (5), 1975 (5), 1967 (3)
Sort order: 1817 results found, search time 312 ms
31.
Bozena Silberova, Hilde J. Venvik, John C. Walmsley, Anders Holmen. Catalysis Today, 2005, 100(3-4): 457-462
Partial oxidation and oxidative steam reforming of propane were investigated over 0.01 wt.% Rh/Al2O3 foam catalysts. High selectivity to hydrogen was obtained for both reactions, but addition of steam to the reactant mixture gave higher selectivity to hydrogen. Stability tests over 7 h revealed that the catalytic activity of Rh was quite stable under partial oxidation conditions; a greater loss of Rh activity was observed when steam was present in the reactant mixture. FE-SEM images showed that the Rh particle size and distribution were modified under both partial oxidation and oxidative steam reforming conditions, although these changes were more distinct on the catalyst used for oxidative steam reforming.
32.
An experimental investigation of the rheological properties of glass fiber-reinforced polycarbonate melts and the extrusion of such compounds through capillary and slit dies is presented. The viscosity–shear rate function appears to be independent of the instrument, as cone-plate and capillary measurements agree. The presence of fibers raises the level of the viscosity, and normal stresses at fixed shear stress are likewise increased. Extrudate swell is decreased by the presence of fibers, while surface roughness is increased. Fiber orientation increases and surface roughness decreases with increasing extrusion rate.
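Viscosity–shear rate curves like those described above are commonly summarized with a power-law (Ostwald–de Waele) model, eta = K * gamma_dot**(n-1). The sketch below is illustrative only: the data and the `fit_power_law` helper are assumptions, not taken from the paper.

```python
import numpy as np

def fit_power_law(shear_rate, viscosity):
    """Fit eta = K * gamma_dot**(n - 1) by linear regression in log-log space.

    Returns (K, n): consistency index K and power-law index n.
    """
    log_g = np.log(shear_rate)
    log_eta = np.log(viscosity)
    # log(eta) = (n - 1) * log(gamma_dot) + log(K)
    slope, intercept = np.polyfit(log_g, log_eta, 1)
    return float(np.exp(intercept)), float(slope + 1.0)

# Illustrative shear-thinning data generated from eta = 1000 * gamma**(-0.4)
gamma = np.array([1.0, 10.0, 100.0, 1000.0])
eta = 1000.0 * gamma ** (-0.4)
K, n = fit_power_law(gamma, eta)
```

For a fiber-filled melt one would expect a larger K (higher viscosity level) at a similar n, which matches the qualitative trend reported in the abstract.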
33.
A series of placebo powders for inhalation was characterized with regard to bulk density and powder flowability using different techniques. The powders were of the ordered-mixture type and were prepared by mixing a pharmaceutical carrier grade of lactose with different fractions of intermediate-sized and fine (i.e., micronized) lactose. A modified Hausner Ratio was obtained by measurement of the poured and compressed bulk densities. Other methods investigated were the angle of repose, the avalanching behaviour using the AeroFlow, and the yield strength using the Uniaxial tester. Furthermore, the relation between ordered-mixture composition and flowability was examined. Of the methods investigated, the modified Hausner Ratio discriminates well between the powders studied and seems to have the widest measuring range. The poured and compressed bulk densities were also found to provide information about the packing of the particles in the powders. A good correlation was obtained between the modified Hausner Ratio and the angle of repose. The AeroFlow was suitable for powders with a low percentage of fine particles but could not discriminate between the more cohesive powders; the Uniaxial tester, on the other hand, seems better suited to more cohesive powders. Regarding powder composition, the addition of micronized particles has a strong influence on the flowability of ordered mixtures, while intermediate-sized particles have little impact on powder flow.
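The Hausner Ratio referred to above is simply the ratio of the compressed (tapped) to the poured bulk density. A minimal sketch with illustrative densities (the numbers below are assumptions, not the paper's data):

```python
def hausner_ratio(poured_density, compressed_density):
    """Hausner Ratio = compressed (tapped) bulk density / poured bulk density.

    Values near 1.0 indicate a free-flowing powder; markedly higher
    values indicate a more cohesive powder.
    """
    return compressed_density / poured_density

# Illustrative: a free-flowing coarse lactose vs. a more cohesive fine blend
hr_coarse = hausner_ratio(poured_density=0.62, compressed_density=0.74)
hr_fine = hausner_ratio(poured_density=0.40, compressed_density=0.62)
```

The fine blend packs down much more on compression, giving the larger ratio, which is the behaviour the abstract attributes to powders with a high micronized fraction.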
34.
A single-stage “wet impactor” is presented, in which impaction occurs on a regenerated water surface. The impactor is equipped with an impaction-liquid support plate of etched glass and a drain spout providing a continuous liquid flow covering the impaction area. Subsequent transport of the impaction liquid makes on-line determination possible. With multiple nozzles (74 holes, 0.3 mm i.d.) and an air flow of 10 l/min, the cut-off was determined to be 0.41 ± 0.02 μm. Particle losses in the impactor were also investigated. The cut-off function is discussed with regard to the consequences of letting impaction occur in a liquid film, and is compared to that of conventional impactors. The analysis technique was tested in an ambient-air measurement study with an ion chromatograph attached to the sampling system.
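For context, the cut-off diameter of a conventional multi-nozzle impactor stage follows from a critical Stokes number. The sketch below uses textbook assumptions (Stk50 ≈ 0.24 for round jets, standard air viscosity and mean free path, unit-density particles) and does not model the liquid-film surface, so it only reproduces the order of magnitude of the reported 0.41 μm cut-off, not the measured value.

```python
import math

def cutoff_diameter(q_lpm, n_nozzles, d_nozzle, rho_p=1000.0,
                    mu=1.81e-5, stk50=0.24, mfp=68e-9):
    """Estimate the 50% cut-off diameter (m) of a round-jet impactor stage
    from the critical Stokes number, iterating on the Cunningham slip
    correction (assumed textbook form)."""
    q = q_lpm / 1000.0 / 60.0                          # air flow, m^3/s
    u = q / (n_nozzles * math.pi * d_nozzle ** 2 / 4)  # jet velocity, m/s
    d = 1e-6                                           # initial guess, m
    for _ in range(50):
        kn = 2.0 * mfp / d                             # Knudsen number
        cc = 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))
        d = math.sqrt(9.0 * mu * d_nozzle * stk50 / (rho_p * u * cc))
    return d

# Geometry and flow from the abstract: 74 holes, 0.3 mm i.d., 10 l/min
d50 = cutoff_diameter(q_lpm=10.0, n_nozzles=74, d_nozzle=0.3e-3)
```

With these assumptions the estimate lands in the 0.4–0.6 μm range, i.e. the same regime as the measured 0.41 μm; the difference plausibly reflects the liquid impaction surface and other design details not captured by the classical formula.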
35.
Garrett BC, Dixon DA, Camaioni DM, Chipman DM, Johnson MA, Jonah CD, Kimmel GA, Miller JH, Rescigno TN, Rossky PJ, Xantheas SS, Colson SD, Laufer AH, Ray D, Barbara PF, Bartels DM, Becker KH, Bowen KH, Bradforth SE, Carmichael I, Coe JV, Corrales LR, Cowin JP, Dupuis M, Eisenthal KB, Franz JA, Gutowski MS, Jordan KD, Kay BD, Laverne JA, Lymar SV, Madey TE, McCurdy CW, Meisel D, Mukamel S, Nilsson AR, Orlando TM, Petrik NG, Pimblott SM, Rustad JR, Schenter GK, Singer SJ, Tokmakoff A, Wang LS, Wettig C, Zwier TS. Chemical Reviews, 2005, 105(1): 355-390
36.
37.
Yazdan Shirvany, Fredrik Edelvik, Stefan Jakobsson, Anders Hedström, Mikael Persson. Applied Soft Computing, 2013, 13(5): 2515-2525
Surgical therapy has become an important therapeutic alternative for patients with medically intractable epilepsy. Correct and anatomically precise localization of an epileptic focus is essential in deciding whether resection of brain tissue is possible. The inverse problem in EEG-based source localization is to determine the location of the brain sources responsible for the measured potentials at the scalp electrodes. We propose a new global optimization method based on particle swarm optimization (PSO) to solve the epileptic-spike EEG source localization inverse problem. For the forward problem, a modified subtraction method is proposed to reduce the computational time. Good accuracy and fast convergence are demonstrated for 2D and 3D cases with realistic head models. The results of the new method are promising for future use in the pre-surgical clinic.
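The paper's PSO variant is tailored to the EEG source-localization cost function; as a generic illustration of the underlying algorithm, here is a minimal, standard PSO minimizing a sphere test function. This is a textbook sketch, not the authors' implementation, and the parameter values (inertia w, acceleration coefficients c1, c2) are common defaults, not values from the paper.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box [lo, hi] via basic particle swarm optimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()          # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# Sphere function: global minimum 0 at the origin
best_x, best_val = pso(lambda p: float(np.sum(p ** 2)),
                       ([-5, -5, -5], [5, 5, 5]))
```

In the source-localization setting, `f` would instead be the misfit between measured scalp potentials and the forward-model prediction for a candidate dipole, which is where the authors' modified subtraction method cuts the per-evaluation cost.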
38.
The profile of a graph is an integer-valued parameter defined via vertex orderings; it is known that the profile of a graph equals the smallest number of edges of an interval supergraph. Since computing the profile of a graph is an NP-hard problem, we consider parameterized versions of the problem. Namely, we study the problem of deciding whether the profile of a connected graph of order n is at most n−1+k, considering k as the parameter; this is a parameterization above a guaranteed value, since n−1 is a tight lower bound for the profile. We present two fixed-parameter algorithms for this problem. The first algorithm is based on a forbidden-subgraph characterization of interval graphs. The second algorithm is based on two simple kernelization rules which allow us to produce a kernel with a linear number of vertices and edges. To show the correctness of the second algorithm we establish structural properties of graphs with small profile which are of independent interest.
A preliminary version of this paper was published in Proc. IWPEC 2006, LNCS vol. 4169, pp. 60–71.
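The vertex-ordering definition of the profile can be evaluated directly for small graphs: for each ordering, each vertex contributes its position minus the smallest position over its closed neighborhood. The brute-force sketch below is illustrative only (the paper's contribution is fixed-parameter algorithms, not this exponential search); it also checks the interval-supergraph characterization on two tiny examples.

```python
from itertools import permutations

def profile(n, edges):
    """Profile of a graph on vertices 0..n-1: the minimum over vertex
    orderings of sum_v (pos(v) - min pos over closed neighborhood N[v]).
    Brute force over all n! orderings; only feasible for small n."""
    nbhd = [{v} for v in range(n)]          # closed neighborhoods N[v]
    for u, v in edges:
        nbhd[u].add(v)
        nbhd[v].add(u)
    best = float("inf")
    for order in permutations(range(n)):
        pos = {v: i for i, v in enumerate(order)}
        cost = sum(pos[v] - min(pos[u] for u in nbhd[v]) for v in range(n))
        best = min(best, cost)
    return best

# Path P4 is an interval graph, so its profile = its edge count = 3 = n - 1
# (the tight lower bound). Cycle C4 is not interval; adding one chord gives
# an interval supergraph with 5 edges, so profile(C4) = 5, i.e. k = 2.
p4 = profile(4, [(0, 1), (1, 2), (2, 3)])
c4 = profile(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

These two cases show both sides of the parameterization above the guaranteed value n−1: P4 attains the bound (k = 0), while C4 exceeds it.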
39.
Ketut Fundana, Niels C. Overgaard, Anders Heyden. International Journal of Computer Vision, 2008, 80(3): 289-299
In this paper we address the problem of segmentation in image sequences using region-based active contours and level set methods. We propose a novel method for variational segmentation of image sequences containing nonrigid, moving objects. The method is based on the classical Chan-Vese model augmented with a novel frame-to-frame interaction term, which allows us to propagate the segmentation result from one image frame to the next, using the previous segmentation result as a shape prior. The interaction term is constructed to be pose-invariant and to allow moderate deformations in shape; it is expected to handle occlusions, which can otherwise make segmentation fail. The performance of the model is illustrated with experiments on synthetic and real image sequences.
40.
Roberto Bruttomesso, Alessandro Cimatti, Anders Franzen, Alberto Griggio, Roberto Sebastiani. Annals of Mathematics and Artificial Intelligence, 2009, 55(1-2): 63-99
Most state-of-the-art approaches for Satisfiability Modulo Theories $(SMT(\mathcal{T}))$ rely on the integration between a SAT solver and a decision procedure for sets of literals in the background theory $\mathcal{T}$ (a $\mathcal{T}{\text {-}}solver$). Often $\mathcal{T}$ is the combination $\mathcal{T}_1 \cup \mathcal{T}_2$ of two (or more) simpler theories $(SMT(\mathcal{T}_1 \cup \mathcal{T}_2))$, such that the specific ${\mathcal{T}_i}{\text {-}}solvers$ must be combined. Up to a few years ago, the standard approach to $SMT(\mathcal{T}_1 \cup \mathcal{T}_2)$ was to integrate the SAT solver with one combined $\mathcal{T}_1 \cup \mathcal{T}_2{\text {-}}solver$, obtained from two distinct ${\mathcal{T}_i}{\text {-}}solvers$ by means of evolutions of Nelson and Oppen's (NO) combination procedure, in which the ${\mathcal{T}_i}{\text {-}}solvers$ deduce and exchange interface equalities. Nowadays many state-of-the-art SMT solvers use evolutions of a more recent $SMT(\mathcal{T}_1 \cup \mathcal{T}_2)$ procedure called Delayed Theory Combination (DTC), in which each ${\mathcal{T}_i}{\text {-}}solver$ interacts directly and only with the SAT solver, in such a way that part or all of the (possibly very expensive) reasoning effort on interface equalities is delegated to the SAT solver itself. In this paper we present a comparative analysis of DTC vs. NO for $SMT(\mathcal{T}_1 \cup \mathcal{T}_2)$. On the one hand, we explain the advantages of DTC in exploiting the power of modern SAT solvers to reduce the search. On the other hand, we show that the extra amount of Boolean search required of the SAT solver can be controlled.
In fact, we prove two novel theoretical results, for both convex and non-convex theories and for different deduction capabilities of the ${\mathcal{T}_i}{\text {-}}solvers$, which relate the amount of extra Boolean search required of the SAT solver by DTC with the number of deductions and case-splits required of the ${\mathcal{T}_i}{\text {-}}solvers$ by NO in order to perform the same tasks: (i) under the same hypotheses of deduction capabilities of the ${\mathcal{T}_i}{\text {-}}solvers$ required by NO, DTC causes no extra Boolean search; (ii) using ${\mathcal{T}_i}{\text {-}}solvers$ with limited or no deduction capabilities, the extra Boolean search required can be reduced to a negligible amount by controlling the quality of the $\mathcal{T}$-conflict sets returned by the ${\mathcal{T}_i}{\text {-}}solvers$.