991.
Copolymers of acrylonitrile and vinyl chloride (molar ratio ACN : VC = 44.5 : 55.5) were synthesized by continuous emulsion polymerization. Their sequence distribution was studied by 13C NMR spectroscopy at 90.52 MHz. Three distinct methylene carbon regions were assigned to diads of the types –VC–VC–, –VC–ACN– and –ACN–ACN–. The experimentally determined diad distribution was in good agreement with the diad distribution calculated using the reactivity ratios rACN = 3.6 and rVC = 0.05.
992.
Claudius Gros, Wolfgang Wenzel, Roser Valentí, Joachim Stolze. Journal of Low Temperature Physics 1995, 99(3-4): 603-605
We consider the Hubbard model on the Bethe lattice with infinite coordination number and (i) construct a systematic series of self-consistent approximations to the one-particle Green's function, G^(n)(ω), n = 2, 3, ..., and (ii) conduct an exact-diagonalization study of the Hubbard star and the star of stars. We present analytic and numerical results for the Mott-Hubbard transition at half filling. We consistently find (i) a critical U_c ≈ 2.5 and (ii) that the gap opens like (U - U_c)^(3/2).
993.
The solubility of nitrogen in ferritic iron-chromium alloys with chromium mass contents between 6.08 and 23.94 % was measured in the temperature range 1523 to 1773 K. Combining the results with published data for the nitrogen solubility in iron-chromium, improved parameters describing the chromium-nitrogen interaction were derived, and an expression for the Gibbs excess free energy was obtained. Nitrogen solubilities calculated with this expression and those calculated with parameters published by the Swedish Institute of Metals Research diverge increasingly with rising temperature. A repeated analysis of measurements of the nitrogen solubility in ferritic Fe-Cr-Mn alloys, using the Gibbs excess free energy derived in this investigation for the chromium-nitrogen interaction, yields a corresponding value for the manganese-nitrogen interaction energy.
994.
This contribution presents two approximation methods for linear infinite-dimensional systems that ensure the preservation of stability and passivity. The first approach allows one to approximate infinite-dimensional systems without internal sources such that the resulting approximation is a port-controlled Hamiltonian system with dissipation. The second method deals with the class of systems that are not required to have conjugated outputs but only a dissipative system operator; it yields approximations with a dissipative system matrix, for which bounds on the stability margin are provided. Both approaches are based on a state-space formulation of the infinite-dimensional system. This makes it possible to use the Petrov–Galerkin approximation, whose free parameters are partly used to achieve the structure preservation. Since free parameters still remain, further application-specific objectives, such as moment matching, can be achieved. Both approaches are applied to the approximation of an Euler–Bernoulli beam.
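The dissipativity-preservation claim rests on a simple linear-algebra fact: if a system matrix A is dissipative (A + A^T negative semidefinite) and V has orthonormal columns, the Galerkin-reduced matrix V^T A V is again dissipative, because V^T (A + A^T) V <= 0. A minimal pure-Python sketch with a hypothetical 2x2 matrix (not taken from the paper):

```python
import math

def symmetric_part(a):
    """Return A + A^T for a 2x2 matrix given as nested lists."""
    return [[a[i][j] + a[j][i] for j in range(2)] for i in range(2)]

def is_neg_semidefinite_2x2(s):
    """A symmetric 2x2 matrix is negative semidefinite iff trace <= 0 and det >= 0."""
    tr = s[0][0] + s[1][1]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    return tr <= 0 and det >= 0

# Hypothetical dissipative system matrix: A + A^T is negative semidefinite.
A = [[-1.0, 2.0],
     [0.0, -3.0]]

# One orthonormal basis vector v spans the reduced (Galerkin) subspace.
v = [1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0)]

# Reduced system matrix a_r = v^T A v; since v^T (A + A^T) v <= 0 whenever
# A + A^T <= 0, the reduced model inherits dissipativity automatically.
a_r = sum(v[i] * A[i][j] * v[j] for i in range(2) for j in range(2))
```

This is only the structural argument behind the method; the paper's actual schemes additionally shape the Petrov-Galerkin test space to obtain the port-Hamiltonian form.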
995.
This paper contrasts two methods for verifying the timing constraints of real-time applications. Static analysis predicts the worst-case and best-case execution times of a task's code by analyzing execution paths and simulating processor characteristics, without ever executing the program or requiring its input. Evolutionary testing is an iterative testing procedure that approximates the extreme execution times over several generations: the test object is executed dynamically, its execution times are measured, and the choice of inputs is guided accordingly, yielding gradually tighter predictions of the extreme execution times. We examined both approaches on a number of real-world examples. The results show that static analysis and evolutionary testing are complementary methods which together provide upper and lower bounds for both worst-case and best-case execution times.
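The evolutionary-testing loop described above can be sketched in a few lines: a population of candidate inputs is ranked by observed cost, the best survive, and mutated copies replace the rest. This toy version uses a step counter as a stand-in for measured runtime; the task, bounds, and mutation scheme are all illustrative assumptions, not the paper's setup.

```python
import random

def task(x):
    """Toy test object: returns a loop-iteration counter as a proxy for
    execution time (real evolutionary testing would measure actual runtime)."""
    steps = 0
    n = x
    while n > 1:                     # Collatz-style loop: cost varies with input
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def evolve(pop_size=20, generations=50, lo=1, hi=10_000, seed=0):
    """Evolutionary search for an input that maximizes the observed cost."""
    rng = random.Random(seed)
    pop = [rng.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=task, reverse=True)                  # rank by observed cost
        survivors = pop[: pop_size // 2]                  # keep the costliest half
        children = [min(hi, max(lo, p + rng.randint(-50, 50)))  # mutate survivors
                    for p in survivors]
        pop = survivors + children
    best = max(pop, key=task)
    return best, task(best)

best_input, worst_cost = evolve()
```

The returned cost is only a lower bound on the true worst case, which is exactly why the paper pairs this dynamic method with static analysis for the upper bound.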
996.
Mischak H, Apweiler R, Banks RE, Conaway M, Coon J, Dominiczak A, Ehrich JH, Fliser D, Girolami M, Hermjakob H, Hochstrasser D, Jankowski J, Julian BA, Kolch W, Massy ZA, Neusuess C, Novak J, Peter K, Rossing K, Schanstra J, Semmes OJ, Theodorescu D, Thongboonkerd V, Weissinger EM, Van Eyk JE, Yamamoto T. Proteomics. Clinical Applications 2007, 1(2): 148-156
997.
We show that, for arbitrary positive integers, the gcd of two linear combinations of these integers with rather small random integer coefficients coincides, with high probability, with the gcd of the integers themselves. This naturally leads to a probabilistic algorithm for computing the gcd of several integers via just one gcd of two numbers of about the same size as the initial data (namely, the above linear combinations). The algorithm can be repeated to achieve any desired confidence level.
998.
Böttger J, Balzer M, Deussen O. IEEE Transactions on Visualization and Computer Graphics 2006, 12(5): 845-852
Commonly known detail-in-context techniques for the two-dimensional Euclidean plane enlarge details and shrink their context using mapping functions that introduce geometric compression. This makes it difficult or even impossible to recognize shapes when magnification factors differ greatly. In this paper we propose using the complex logarithm and complex root functions to show very small details even within very large contexts. These mappings are conformal, meaning they only rotate and scale locally, thus keeping shapes intact and recognizable. They allow details that are orders of magnitude smaller than their surroundings to be shown together with their context in one seamless visualization. We address the use of this general technique for interacting with complex two-dimensional data, considering the exploration of large graphs among other examples.
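The magnification behavior of the complex-logarithm mapping is easy to see numerically: treating 2-D points as complex numbers, log(z - focus) expands distances near the focus exponentially and compresses them far away, while remaining conformal. A small sketch (the focus point and sample coordinates are arbitrary choices for illustration):

```python
import cmath

def detail_in_context(z, focus):
    """Map a 2-D point (as a complex number) through the complex logarithm
    centered on the focus. Near the focus, distances are magnified; far away
    they are compressed. Away from the singularity the map is conformal, so
    shapes are only locally rotated and scaled."""
    return cmath.log(z - focus)

focus = 0 + 0j
# Two points 0.001 apart right next to the focus...
near = abs(detail_in_context(0.002, focus) - detail_in_context(0.001, focus))
# ...versus two points 1.0 apart far from the focus.
far = abs(detail_in_context(101.0, focus) - detail_in_context(100.0, focus))
```

Here the tiny near-focus pair ends up roughly ln 2 ≈ 0.69 apart in image space, while the much wider distant pair maps to under 0.01 apart, which is exactly the "orders of magnitude" detail-in-context effect the paper exploits.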
999.
Georgii J, Westermann R. IEEE Transactions on Visualization and Computer Graphics 2006, 12(5): 1345-1352
Recent advances in algorithms and graphics hardware have opened the possibility of rendering tetrahedral grids at interactive rates on commodity PCs. This paper extends that work by presenting a direct volume rendering method for such grids which supports both current and upcoming graphics hardware architectures, large and deformable grids, and different rendering options. At the core of our method is the idea of sampling tetrahedral elements along the view rays entirely in local barycentric coordinates. Sampling then requires minimal GPU memory and texture-access operations, and it maps efficiently onto a feed-forward pipeline of multiple stages performing computation and geometry construction. We propose to spawn rendered elements from one single vertex. This makes the method amenable to upcoming Direct3D 10 graphics hardware, which allows geometry to be created on the GPU. With only slight modification the algorithm can also render per-pixel iso-surfaces and perform tetrahedral cell projection. As our method requires neither pre-processing nor an intermediate grid representation, it can efficiently deal with dynamic and large 3-D meshes.
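The per-element sampling the abstract refers to reduces, on the CPU side, to evaluating barycentric coordinates as ratios of signed sub-tetrahedron volumes and interpolating the per-vertex values. A pure-Python sketch of that single step (the GPU pipeline, ray marching, and transfer functions of the paper are out of scope here):

```python
def _det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def _signed_volume(p0, p1, p2, p3):
    """Signed volume of the tetrahedron (p0, p1, p2, p3)."""
    rows = [[p[i] - p0[i] for i in range(3)] for p in (p1, p2, p3)]
    return _det3(rows) / 6.0

def barycentric_sample(p, verts, values):
    """Interpolate per-vertex values at point p inside a tetrahedron, using
    barycentric weights computed as ratios of signed sub-tetrahedron volumes
    (each sub-tetrahedron replaces one vertex by the sample point p)."""
    a, b, c, d = verts
    vol = _signed_volume(a, b, c, d)
    w = (_signed_volume(p, b, c, d) / vol,
         _signed_volume(a, p, c, d) / vol,
         _signed_volume(a, b, p, d) / vol,
         _signed_volume(a, b, c, p) / vol)
    return sum(wi * fi for wi, fi in zip(w, values)), w

# Unit tetrahedron with scalar values at the vertices; sample at the centroid.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
value, weights = barycentric_sample((0.25, 0.25, 0.25), verts, (0.0, 1.0, 2.0, 3.0))
```

At the centroid all four weights are 0.25, so the interpolated value is the plain average of the vertex values; the same formula evaluated along a view ray is what the paper moves onto the GPU.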
1000.
This paper introduces a uniform statistical framework for both 3-D and 2-D object recognition using intensity images as input data. The theoretical part provides a mathematical tool for stochastic modeling; the algorithmic part introduces methods for automatic model generation, localization, and recognition of objects. 2-D images are used for learning the statistical appearance of 3-D objects; both the depth information and the matching between image and model features are missing for model generation. The resulting incomplete-data estimation problem is solved by the Expectation-Maximization algorithm, which leads to a novel class of algorithms for automatic model generation from projections. The estimation of pose parameters corresponds to a non-linear maximum likelihood estimation problem, which is solved by a global optimization procedure. Classification is done by the Bayesian decision rule. The work includes an experimental evaluation of the various facets of the presented approach: an empirical evaluation of the learning algorithms and a comparison of different pose estimation algorithms show the feasibility of the proposed probabilistic framework.
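The final classification step, the Bayesian decision rule, picks the class maximizing prior times likelihood (proportional to the posterior). A minimal sketch with hypothetical one-dimensional Gaussian class models standing in for the paper's learned statistical object models:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bayes_decide(x, classes):
    """Bayesian decision rule: return the class name maximizing
    prior * likelihood, which is proportional to the posterior P(class | x)."""
    return max(classes,
               key=lambda k: classes[k][0] * gaussian_pdf(x, classes[k][1], classes[k][2]))

# Hypothetical class models: name -> (prior, mean, std deviation).
classes = {"cup": (0.5, 0.0, 1.0), "bowl": (0.5, 4.0, 1.0)}
```

With equal priors and equal variances the rule reduces to nearest-mean classification, so a feature value of 0.2 is assigned to "cup" and 3.5 to "bowl"; in the paper the likelihoods are the learned appearance models evaluated at the estimated pose rather than fixed 1-D Gaussians.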