Full-text availability
Paid full text | 21427 articles |
Free | 2149 articles |
Free (domestic) | 1625 articles |
Subject category
Electrical engineering | 1294 articles |
Theory of technology | 1 article |
General | 1634 articles |
Chemical industry | 4560 articles |
Metalworking | 1460 articles |
Machinery and instruments | 819 articles |
Building science | 1260 articles |
Mining engineering | 630 articles |
Energy and power | 1021 articles |
Light industry | 1085 articles |
Hydraulic engineering | 557 articles |
Petroleum and natural gas | 1348 articles |
Weapons industry | 307 articles |
Radio electronics | 2203 articles |
General industrial technology | 3396 articles |
Metallurgy | 771 articles |
Nuclear technology | 625 articles |
Automation | 2230 articles |
Publication year
2024 | 96 articles |
2023 | 525 articles |
2022 | 621 articles |
2021 | 753 articles |
2020 | 848 articles |
2019 | 713 articles |
2018 | 721 articles |
2017 | 845 articles |
2016 | 904 articles |
2015 | 848 articles |
2014 | 1166 articles |
2013 | 1430 articles |
2012 | 1390 articles |
2011 | 1619 articles |
2010 | 1204 articles |
2009 | 1174 articles |
2008 | 1172 articles |
2007 | 1333 articles |
2006 | 1189 articles |
2005 | 1026 articles |
2004 | 873 articles |
2003 | 731 articles |
2002 | 576 articles |
2001 | 538 articles |
2000 | 430 articles |
1999 | 402 articles |
1998 | 309 articles |
1997 | 286 articles |
1996 | 262 articles |
1995 | 203 articles |
1994 | 179 articles |
1993 | 138 articles |
1992 | 116 articles |
1991 | 113 articles |
1990 | 96 articles |
1989 | 91 articles |
1988 | 62 articles |
1987 | 38 articles |
1986 | 40 articles |
1985 | 33 articles |
1984 | 30 articles |
1983 | 22 articles |
1982 | 14 articles |
1981 | 5 articles |
1980 | 7 articles |
1979 | 6 articles |
1977 | 3 articles |
1976 | 3 articles |
1974 | 3 articles |
1951 | 5 articles |
10000 search results (query time: 15 ms)
151.
Accurate calculation and automatic control of ink amounts are key factors in improving image quality in color newspaper printing. To address the inaccurate calculation and low degree of automation in current color newspaper presses, this work studies a particle-swarm-based method for computing the CMYK four-color ink amounts. Building on an analysis of the basic particle swarm optimization (PSO) algorithm, an adaptive inertia weight and a constriction factor are introduced to improve convergence, and the improved PSO is applied to the problem, verifying its effectiveness. The resulting ink-presetting method controls ink-layer thickness with good accuracy and offers a reference approach for automatic color matching in color newspaper printing.
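The abstract's two modifications to basic PSO, an adaptive (decreasing) inertia weight and the Clerc-Kennedy constriction factor, can be sketched as below. The objective, bounds, and all parameter values are illustrative stand-ins, not the paper's ink-amount model:

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=200,
        w_max=0.9, w_min=0.4, phi1=2.05, phi2=2.05, seed=0):
    """Minimize `objective` with PSO using a linearly decreasing inertia
    weight and a constriction factor (both mentioned in the abstract)."""
    rng = random.Random(seed)
    phi = phi1 + phi2
    # Clerc-Kennedy constriction factor, ~0.729 for phi = 4.1
    chi = 2.0 / abs(2.0 - phi - (phi * phi - 4.0 * phi) ** 0.5)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        # adaptive inertia: decays from w_max to w_min over the run
        w = w_max - (w_max - w_min) * t / iters
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = chi * (w * vel[i][d]
                                   + phi1 * r1 * (pbest[i][d] - pos[i][d])
                                   + phi2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the ink-amount cost: minimize the 4-D sphere function.
best, val = pso(lambda x: sum(v * v for v in x), dim=4, bounds=(-5.0, 5.0))
```

The inertia schedule favors exploration early and exploitation late, while the constriction factor keeps the velocity update stable without a hard velocity clamp.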
152.
The objective of this paper is to describe a novel finite element computational method based on a strain energy density function and to implement it in an object-oriented environment. The original energy-based finite element is cast into a known standard framework of classes and handled in a different manner. The nonlinear material properties are defined by a modified strain energy density function. The local relaxation procedure, proposed as the method for solving the nonlinear problem, is implemented in C++. A hexahedral element with eight nodes is introduced, along with the adaptation of the nonlinear finite element. The chosen numerical model is made of a nearly incompressible hyperelastic material. The application of the proposed element is demonstrated on a rectangular parallelepiped with a hollow port.
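As a toy 1D analogue of the local relaxation idea, minimizing a nonlinear strain energy node by node, the following sketch relaxes a chain of hyperelastic "springs". The energy density, step size, and geometry are assumptions; the paper's hexahedral element and C++ class framework are not reproduced:

```python
def local_relaxation(x, h0, mu=1.0, sweeps=400, step=0.01):
    """Gauss-Seidel-style energy relaxation for a 1D chain of nodes `x`
    (ends held fixed) connected by segments of rest length h0, each with
    a nonlinear energy density W(s) = mu/2 * (s^2 + s^-2 - 2), minimal
    at stretch s = 1 (a crude stand-in for a hyperelastic law)."""
    def seg_energy(a, b):
        s = (b - a) / h0
        return 0.5 * mu * (s * s + 1.0 / (s * s) - 2.0)

    for _ in range(sweeps):
        for i in range(1, len(x) - 1):
            # try small moves left/right; keep whichever lowers the
            # energy of the two segments touching node i
            best = x[i]
            e_best = seg_energy(x[i - 1], x[i]) + seg_energy(x[i], x[i + 1])
            for cand in (x[i] - step, x[i] + step):
                if x[i - 1] < cand < x[i + 1]:
                    e = seg_energy(x[i - 1], cand) + seg_energy(cand, x[i + 1])
                    if e < e_best:
                        best, e_best = cand, e
            x[i] = best
    return x
```

Because only the energy of the segments adjacent to the moved node changes, each local step is cheap, which is the appeal of relaxation over assembling and solving a global nonlinear system.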
153.
154.
Shin-Min Chao, Pattern Recognition, 2010, 43(5): 1917-6849
In this paper, an anisotropic diffusion model with a generalized diffusion coefficient function is presented for defect detection in low-contrast surface images, aimed especially at material surfaces found in liquid crystal display (LCD) manufacturing. A defect embedded in a low-contrast surface image is extremely difficult to detect because the intensity difference between the unevenly illuminated background and the defective region is hardly observable, and no clear edges separate the defect from its surroundings. The proposed anisotropic diffusion model provides a generalized diffusion mechanism that can flexibly change the curve of the diffusion coefficient function: it adaptively smooths faultless areas and sharpens defect areas in an image. An entropy criterion is proposed as the performance measure of the diffused image, and a stochastic evolutionary computation algorithm, particle swarm optimization (PSO), is then applied to automatically determine the best parameter values of the generalized diffusion coefficient function. Experimental results show that the proposed method can effectively and efficiently detect small defects in various low-contrast surface images.
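The smoothing half of such a model can be sketched as a Perona-Malik-style diffusion in which an exponent `p` reshapes the diffusion coefficient curve. The coefficient family, `kappa`, and the step size `lam` below are hypothetical, and the paper's defect-sharpening (backward diffusion) branch and PSO parameter tuning are omitted:

```python
import numpy as np

def anisotropic_diffusion(img, iters=20, kappa=0.1, lam=0.2, p=2.0):
    """Iteratively diffuse `img` with coefficient g(d) = exp(-|d/kappa|^p):
    small gradients (flat background) diffuse strongly, large gradients
    (edges) are preserved.  `p` is a stand-in for the paper's generalized
    coefficient shape; lam <= 0.25 keeps the 4-neighbour scheme stable."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-np.abs(d / kappa) ** p)
    for _ in range(iters):
        # signed differences to the four neighbours (periodic via np.roll)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Lowering `p` flattens the coefficient curve (more uniform smoothing); raising it sharpens the cutoff at gradient magnitude `kappa`, which is the kind of flexibility the abstract attributes to the generalized coefficient.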
155.
A moving-object detection algorithm based on adaptive threshold setting
A moving-object detection algorithm based on adaptive threshold setting is proposed. Unlike the traditional global-threshold approach, it applies kernel density estimation to the background pixels and combines a global threshold with a new locally adaptive one. Because the complexity of the color distribution differs from pixel to pixel, a local threshold is set adaptively for each pixel, overcoming the shortcomings of a single global threshold and improving detection accuracy. Experiments on several standard video sequences demonstrate the effectiveness of the algorithm.
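A minimal sketch of the idea, assuming a grayscale frame history, a Gaussian kernel, and a made-up rule for combining the global and local thresholds (the paper's exact rule and parameter values are not reproduced):

```python
import numpy as np

def kde_foreground_mask(frames, current, bandwidth=5.0,
                        global_thresh=1e-4, local_scale=0.5):
    """Flag foreground pixels by kernel density estimation over a background
    history.  A pixel is foreground when the density the background model
    assigns to its current value falls below a threshold combining a global
    floor with a per-pixel local term."""
    frames = np.asarray(frames, dtype=float)      # (T, H, W) gray history
    current = np.asarray(current, dtype=float)    # (H, W) frame to test
    norm = bandwidth * np.sqrt(2.0 * np.pi)
    # Gaussian KDE of the current value under each pixel's history
    density = np.exp(-0.5 * ((current[None] - frames) / bandwidth) ** 2).mean(0) / norm
    # local threshold: a fraction of the density at each pixel's median
    # value, so pixels with broad (complex) distributions get a lower bar
    med = np.median(frames, axis=0)
    d_med = np.exp(-0.5 * ((med[None] - frames) / bandwidth) ** 2).mean(0) / norm
    thresh = np.maximum(global_thresh, local_scale * d_med)
    return density < thresh                       # True = moving object
```

The per-pixel term is what makes the threshold adaptive: a pixel whose background values vary widely has a flatter KDE, so its bar for "unusual" is automatically lower than that of a stable pixel.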
156.
157.
A density functional study of the interaction between benzene and molybdenum carbonyls
Using density functional theory at the B3LYP/LANL2DZ level, the possible geometries of the (η^x-C6H6)Mo(CO)n (x = 1-6; n = 1-5) complexes were fully optimized and their interaction energies computed, to explore how the number of carbonyl groups affects complex stability and the benzene-molybdenum carbonyl interaction; an NBO analysis of the interaction was also performed. Conclusions: (1) complexes in which benzene coordinates to Mo(CO)n (n = 1-3) in η^6 fashion are relatively stable, but the more CO ligands an η^6 complex carries, the less stable it becomes; (2) in complexes 1, 2, 3, and 10, the interaction between Mo(CO)n and benzene transfers charge from the benzene π bonds to the σ*(Mo-CO) bonds of Mo(CO)n, whereas in complex 7 the benzene π electrons transfer to the lone-pair d orbital of Mo.
158.
This paper describes Bayesian inference and prediction for the inverse Weibull distribution under Type-II censored data. First we consider Bayesian inference of the unknown parameter under a squared error loss function; although we mainly discuss squared error loss, any other loss function can easily be considered. A Gibbs sampling procedure is used to draw Markov chain Monte Carlo (MCMC) samples, which are in turn used to compute the Bayes estimates and to construct the corresponding credible intervals with the help of an importance sampling technique. A simulation study compares the proposed Bayes estimators with the maximum likelihood estimators. We further consider one-sample and two-sample Bayes prediction problems based on the observed sample and provide predictive intervals with a given coverage probability. A real-life data set illustrates the derived results, and some open problems are indicated for further research.
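A conjugate special case illustrates the squared-error Bayes estimate without any MCMC: if the shape parameter is known and the scale parameter is given a Gamma prior, the posterior is again Gamma and the Bayes estimate is its mean. The parameterization and prior below are assumptions; the paper's full two-parameter, Type-II censored setting needs the Gibbs/importance sampling machinery it describes:

```python
def inv_weibull_bayes_scale(data, beta, a=1.0, b=1.0):
    """Bayes estimate (posterior mean, i.e. the optimal point estimate under
    squared error loss) of the scale parameter lam of the inverse Weibull
    density f(x) = lam * beta * x**-(beta + 1) * exp(-lam * x**-beta), x > 0,
    assuming shape beta is known and lam ~ Gamma(a, b) a priori.  The
    likelihood is proportional to lam**n * exp(-lam * sum(x**-beta)), so the
    Gamma prior is conjugate and the posterior is Gamma(a + n, b + s)."""
    n = len(data)
    s = sum(x ** (-beta) for x in data)       # sufficient statistic
    post_a, post_b = a + n, b + s             # Gamma posterior (shape, rate)
    return post_a / post_b, (post_a, post_b)  # posterior mean and parameters
```

An equal-tailed credible interval then follows directly from the quantiles of Gamma(post_a, post_b), with no sampling required, which is why the abstract's MCMC machinery only becomes necessary once both parameters are unknown and the sample is censored.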
159.
We describe a fast, data-driven bandwidth selection procedure for kernel conditional density estimation (KCDE). Specifically, we give a Monte Carlo dual-tree algorithm for efficient, error-controlled approximation of a cross-validated likelihood objective. While exact evaluation of this objective has an unscalable O(n^2) computational cost, our method is practical and shows speedup factors as high as 286,000 when applied to real multivariate datasets containing up to one million points; in absolute terms, computation times are reduced from months to minutes. This enables applications at much greater scale than previously possible. The core idea is to first derive a standard deterministic dual-tree approximation, whose loose deterministic bounds we then replace with tight, probabilistic Monte Carlo bounds. The resulting Monte Carlo dual-tree algorithm exhibits strong error control and high speedup across a broad range of datasets several orders of magnitude larger than those reported in previous work. The cost of this high acceleration is the loss of the formal error guarantee of the deterministic dual-tree framework; however, our experiments show that error is still amply controlled by the Monte Carlo algorithm, and the many-order-of-magnitude speedups are worth this sacrifice in the large-data case, where cross-validated bandwidth selection for KCDE would otherwise be impractical.
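The cross-validated likelihood objective itself, in the naive O(n^2) form that the paper's Monte Carlo dual-tree algorithm accelerates, can be sketched as follows (Gaussian kernels, a single scalar conditioning variable, and the grid of candidate bandwidths are all assumptions):

```python
import numpy as np

def loo_cv_log_likelihood(x, y, hx, hy):
    """Naive O(n^2) leave-one-out log-likelihood for kernel conditional
    density estimation of p(y | x): each held-out point is scored under
    the Nadaraya-Watson-style mixture built from the remaining points."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    kx = np.exp(-0.5 * ((x[:, None] - x[None, :]) / hx) ** 2)
    ky = (np.exp(-0.5 * ((y[:, None] - y[None, :]) / hy) ** 2)
          / (hy * np.sqrt(2.0 * np.pi)))
    np.fill_diagonal(kx, 0.0)                 # leave-one-out: drop self-term
    p = (kx * ky).sum(axis=1) / np.maximum(kx.sum(axis=1), 1e-300)
    return np.log(np.maximum(p, 1e-300)).sum()

def select_bandwidths(x, y, grid):
    """Pick the (hx, hy) pair from `grid` maximizing the LOO objective."""
    return max(grid, key=lambda h: loo_cv_log_likelihood(x, y, *h))
```

Every candidate bandwidth pair costs a full n-by-n kernel evaluation here, which is exactly why replacing this exact computation with tree-based probabilistic bounds pays off at the million-point scale the abstract reports.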
160.