Article Search
Paid full text: 1089 articles
Free: 155 articles
Free (domestic): 66 articles
Electrical engineering: 58 articles
General: 77 articles
Chemical industry: 129 articles
Metalworking: 33 articles
Machinery & instrumentation: 115 articles
Building science: 58 articles
Mining engineering: 26 articles
Energy & power: 42 articles
Light industry: 204 articles
Water conservancy engineering: 26 articles
Petroleum & natural gas: 63 articles
Weapons industry: 4 articles
Radio & electronics: 55 articles
General industrial technology: 168 articles
Metallurgy: 18 articles
Nuclear technology: 22 articles
Automation: 212 articles
2024: 9 articles
2023: 41 articles
2022: 52 articles
2021: 61 articles
2020: 65 articles
2019: 56 articles
2018: 48 articles
2017: 59 articles
2016: 56 articles
2015: 64 articles
2014: 64 articles
2013: 87 articles
2012: 70 articles
2011: 68 articles
2010: 49 articles
2009: 65 articles
2008: 56 articles
2007: 49 articles
2006: 62 articles
2005: 42 articles
2004: 33 articles
2003: 32 articles
2002: 27 articles
2001: 15 articles
2000: 12 articles
1999: 7 articles
1998: 11 articles
1997: 3 articles
1996: 6 articles
1995: 11 articles
1994: 6 articles
1993: 5 articles
1992: 5 articles
1991: 4 articles
1990: 2 articles
1989: 1 article
1987: 2 articles
1986: 2 articles
1985: 1 article
1983: 1 article
1979: 1 article
1310 search results found; query time 78 ms.
1.
This paper presents a Kriging model approach for stochastic free vibration analysis of composite shallow doubly curved shells. The finite element formulation is carried out considering rotary inertia and transverse shear deformation based on Mindlin's theory. The stochastic natural frequencies are expressed in terms of Kriging surrogate models. The influence of random variation of different input parameters on the output natural frequencies is addressed. Compared to direct Monte Carlo simulation, the present method reduces the sampling size and computational cost. Convergence studies and error analysis are carried out to ensure the accuracy of the present approach. The stochastic mode shapes and frequency response function are also depicted for a typical laminate configuration. Statistical analysis is presented to illustrate the results obtained with the Kriging model and its performance.
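A minimal sketch of the surrogate idea in this abstract, using scikit-learn's GaussianProcessRegressor as the Kriging model; the finite element solver is replaced by a hypothetical placeholder function (`fe_natural_frequency`), so the numbers are purely illustrative.

```python
# Fit a Gaussian-process (Kriging) surrogate to a small set of "FE solves",
# then query the cheap surrogate for Monte Carlo statistics of the frequency.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def fe_natural_frequency(x):
    # Hypothetical stand-in for the Mindlin shell FE solve: maps random
    # inputs (e.g., modulus/density perturbations) to a natural frequency.
    return 100.0 + 5.0 * x[0] - 3.0 * x[1] + 0.5 * x[0] * x[1]

# Small design of experiments for training the surrogate.
X_train = rng.uniform(-1.0, 1.0, size=(40, 2))
y_train = np.array([fe_natural_frequency(x) for x in X_train])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# Monte Carlo on the surrogate instead of the expensive FE model.
X_mc = rng.uniform(-1.0, 1.0, size=(10_000, 2))
freq_mc = gp.predict(X_mc)
print(f"mean = {freq_mc.mean():.2f} Hz, std = {freq_mc.std():.2f} Hz")
```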
2.
The motivation of this work is to address real-time sequential inference of parameters within a full Bayesian formulation. First, the proper generalized decomposition (PGD) is used to reduce the cost of evaluating the posterior density in the online phase. Second, Transport Map sampling is used to build a deterministic coupling between a reference measure and the posterior measure. Determining the transport maps requires solving a minimization problem; because the PGD model is quasi-analytical and in variable-separated form, gradient and Hessian information speeds up the minimization algorithm. Finally, uncertainty quantification on outputs of interest of the model can be performed easily thanks to the global nature of the PGD solution over all coordinate domains. Numerical examples highlight the performance of the method.
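The transport-map step can be illustrated in one dimension: find a monotone map pushing a standard-normal reference onto the posterior by minimizing a sample estimate of the KL divergence. The quadratic log-posterior below stands in for the PGD-accelerated one, and the affine map family is an assumption made for brevity.

```python
# Minimize KL(T# reference || posterior) over a monotone affine map
# T(z) = a + e^{log_b} z; up to a constant the objective is
# -E[log pi(T(z))] - log T'(z).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
z = rng.standard_normal(2000)          # samples from the reference measure

def log_posterior(theta):
    # Hypothetical target: Gaussian posterior N(2, 0.5^2).
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def kl_objective(params):
    a, log_b = params
    t = a + np.exp(log_b) * z
    return -(log_posterior(t).mean() + log_b)

res = minimize(kl_objective, x0=[0.0, 0.0])
a, b = res.x[0], np.exp(res.x[1])
print(f"map T(z) = {a:.3f} + {b:.3f} z  (target: mean 2, std 0.5)")
posterior_samples = a + b * z          # cheap posterior sampling via the map
```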
3.
Z.P. Luo  J.H. Koo 《Polymer》2008,49(7):1841-1852
As the performance of polymer layered silicate nanocomposites strongly depends on the dispersion of their interior layers, the degree of layer dispersion needs to be quantified. In this work, a new methodology was developed to determine the dispersion parameter D0.1, based on measuring the free-path spacing distance between single clay sheets in transmission electron microscopy (TEM) images. Several examples of exfoliated, intercalated, and immiscible composites were studied. Exfoliated composites were found to have D0.1 above 8%, while that of intercalated composites was between 4 and 8%. In the case of intercalation, a high-frequency peak appeared at a short spacing distance in the histogram; this peak is characteristic of intercalation and distinguishes it from exfoliation. The main utility of this TEM methodology is the quantification of exfoliated or intercalated samples with a small number of layers per stack. A D0.1 below 4% was suggested as the threshold for classifying a composite as immiscible. A unique advantage of the TEM measurement is that the dispersion degree of different fillers can be counted individually.
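A hedged sketch of the histogram analysis: the spacings are simulated rather than measured from TEM images, and D0.1 is computed here as the fraction of free-path spacings within ±10% of the mean, which is one plausible reading of the parameter rather than the paper's exact definition.

```python
# Histogram of free-path spacings between single clay sheets, plus an
# assumed D0.1-style statistic (see lead-in for the caveat).
import numpy as np

rng = np.random.default_rng(2)
spacings = rng.lognormal(mean=3.0, sigma=0.6, size=500)  # simulated, in nm

mu = spacings.mean()
d01 = np.mean((spacings > 0.9 * mu) & (spacings < 1.1 * mu)) * 100.0

counts, edges = np.histogram(spacings, bins=30)
peak = edges[np.argmax(counts)]
print(f"D0.1 = {d01:.1f}%  (>8% exfoliated, 4-8% intercalated, <4% immiscible)")
print(f"histogram peak near {peak:.1f} nm; a tall short-distance peak "
      f"suggests intercalated stacks")
```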
4.
Soybeans are believed to be a rich source of sphingolipids, a class of polar lipids that has received attention for possible cancer-inhibiting activity. The effect of processing on the sphingolipid content of various soybean products had not been determined. Glucosylceramide (GlcCer), the major sphingolipid in soybeans, was measured in several processed soybean products to determine which products GlcCer partitions into during processing and where it is lost. Whole soybeans were processed into full-fat flakes, from which crude oil was extracted. The crude oil was refined by conventional methods, and the defatted soy flakes were further processed into alcohol-washed and acid-washed soy protein concentrates (SPC) and soy protein isolates (SPI) by laboratory-scale methods that simulated industrial practice. GlcCer was isolated from the samples by solvent extraction, solvent partition, and TLC, and was quantified by HPLC. After oil extraction, GlcCer remained mostly within the defatted soy flakes (91%) rather than in the oil (9%). Only 52, 42, and 26% of the GlcCer in the defatted soy flakes was recovered in the acid-washed SPC, alcohol-washed SPC, and SPI products, respectively. All protein products had a similar GlcCer concentration of about 281 nmol/g (dry weight basis). The minor quantity of GlcCer in the crude oil was almost completely removed by water degumming.
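The reported partitioning lends itself to a short mass-balance check; the percentages below come from the abstract, while the calculation structure itself is only an illustration.

```python
# Track what fraction of the original soybean GlcCer survives into each
# protein product, combining the oil-extraction split with the recovery
# percentages quoted in the abstract.
flakes_share, oil_share = 0.91, 0.09          # split after oil extraction
recovery = {"acid-washed SPC": 0.52, "alcohol-washed SPC": 0.42, "SPI": 0.26}

for product, r in recovery.items():
    print(f"{product}: {flakes_share * r * 100:.0f}% of initial GlcCer")
```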
5.
Sensitivity analysis (SA) is a commonly used approach for identifying the important parameters that dominate model behavior. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, seven qualitative and three quantitative. All SA methods are tested with a variety of sampling techniques to screen the most sensitive (i.e., important) parameters out from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) Among the qualitative SA methods, the Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be ineffective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. The Multivariate Adaptive Regression Splines (MARS), Delta Test (DT), and Sum-Of-Trees (SOT) screening methods need about 400-600 samples for the same purpose; Monte Carlo (MC), Orthogonal Array (OA), and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them. (2) Among the quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess two-way interaction effects; OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
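Morris One-At-a-Time screening, the cheapest effective method in this comparison, can be sketched with the SALib package; the 13-parameter SAC-SMA model is replaced here by a hypothetical 3-parameter test function, and N is chosen so the sample count matches the 280 quoted above.

```python
# Morris OAT screening with SALib: N trajectories of (num_vars + 1) points
# give N*(D+1) model runs; here 70 * 4 = 280.
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze.morris import analyze as morris_analyze

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[0.0, 1.0]] * 3,
}

def model(X):
    # Stand-in for SAC-SMA: x1 dominates, x3 is nearly inert.
    return 5.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + 0.01 * X[:, 2]

X = morris_sample(problem, N=70, num_levels=4)   # 280 samples total
Y = model(X)
res = morris_analyze(problem, X, Y, num_levels=4)
for name, mu_star in zip(problem["names"], res["mu_star"]):
    print(f"{name}: mu* = {mu_star:.3f}")        # larger mu* => more influential
```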
6.
An HPLC method using a diol column under isocratic elution was established for the quantitative analysis of ginsenosides in ginseng extract. The optimized chromatographic conditions were: a LiChrospher 100 Diol column (250 mm × 2 mm i.d., 5 μm), an acetonitrile-water (82:18, v/v) mobile phase, a flow rate of 1.0 mL/min, a detection wavelength of 200 nm, and a column temperature of 15 °C. The results show that analyzing the seven major ginsenosides on a diol column at relatively low temperature is fast, requires no gradient elution program, and is simple to carry out.
7.
Quantitative assessment of privacy leakage on social networks helps users understand their own privacy exposure, raises public awareness of privacy protection, and provides a basis for designing personalized privacy protection methods. Existing quantitative privacy assessment methods are mainly used to evaluate the effectiveness of protection mechanisms and cannot effectively assess the privacy leakage risk of social network users. To address this, a quantitative assessment method for social network user privacy leakage is proposed. Based on a user privacy preference matrix, Pearson similarity is used to compute each user's subjective attribute sensitivity, and the mean is taken to obtain the objective attribute sensitivity; attribute-inference methods are used to predict users' private attributes, and information entropy is used to compute attribute openness; the visible range of a user's data is estimated from transition probabilities and user importance to compute data visibility. Attribute sensitivity, attribute openness, and data visibility are combined into a privacy score that gives a fine-grained, personalized assessment of privacy leakage risk; by also taking time into account, the method supports dynamic assessment, helping social network users understand their privacy exposure and apply targeted, personalized protection. Experimental results on Sina Weibo data show that the proposed method effectively quantifies users' privacy leakage.
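A minimal sketch of the scoring pipeline with made-up data: Pearson similarity over a privacy-preference matrix for sensitivity, Shannon entropy for attribute openness, and a single visibility weight. The weighting scheme and helper names are assumptions, not the paper's formulas.

```python
# Toy privacy-leakage score: sensitivity from Pearson similarity over the
# preference matrix, openness from the entropy of observed attribute values,
# visibility as an assumed reach of the user's data.
import numpy as np
from scipy.stats import entropy, pearsonr

# Rows: users; columns: attributes (1 = willing to disclose, 0 = not).
prefs = np.array([[1, 0, 1, 0],
                  [1, 0, 0, 0],
                  [1, 1, 1, 0],
                  [0, 0, 1, 0]], dtype=float)

def attribute_sensitivity(prefs):
    # Subjective sensitivity per user via Pearson similarity to the others,
    # then averaged into one objective sensitivity per attribute.
    n = len(prefs)
    sims = np.array([np.mean([pearsonr(prefs[i], prefs[j])[0]
                              for j in range(n) if j != i]) for i in range(n)])
    weights = (1 - prefs) * (1 + sims[:, None])   # undisclosed => more sensitive
    return weights.mean(axis=0)

def attribute_openness(values):
    # Entropy of the observed value distribution for one attribute.
    _, counts = np.unique(values, return_counts=True)
    return entropy(counts / counts.sum(), base=2)

sens = attribute_sensitivity(prefs)
openness = np.array([attribute_openness(prefs[:, k]) for k in range(prefs.shape[1])])
visibility = 0.6                                    # assumed data reach
score = visibility * np.sum(sens * (1 + openness))  # higher => more leakage risk
print(f"privacy leakage score: {score:.3f}")
```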
8.
Pavement images are widely used by transportation agencies to detect cracks accurately so that proper maintenance and rehabilitation plans can be made. Although a crack in a pavement image is perceptible because the intensity of crack pixels contrasts with that of the pavement background, distinguishing cracks from complex textures, heavy noise, and interference remains challenging. Unlike the intensity or first-order edge features of a crack, this paper proposes the second-order directional derivative to characterize the directional, valley-like structure of a crack. A multi-scale Hessian structure is first proposed to adapt analytically to the direction and valley of cracking in the Gaussian scale space. A crack structure field is then proposed to mimic the curvilinear propagation of a crack in a local area; it is applied iteratively at every point of the crack curve to infer the crack structure at gaps and intersections. Finally, the most salient centerline of the crack within its curvilinear buffer is located exactly with non-maximum suppression along the direction perpendicular to the crack. Experiments on large numbers of images of various crack types, under diverse conditions of noise, illumination, and interference, demonstrate that the proposed method detects pavement cracks well, with an average Precision, Recall, and F-measure of 92.4%, 88.4%, and 90.4%, respectively. The proposed method also achieves the best crack detection performance on the benchmark datasets among methods that likewise require no training and publicly offer detection results for every image.
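The second-order valley cue can be sketched with scikit-image's Hessian utilities: a dark curvilinear crack produces one large positive Hessian eigenvalue across the crack direction. This covers only the first step; the crack structure field, gap inference, and centerline non-maximum suppression are not reproduced here.

```python
# Multi-scale Hessian valley response on a synthetic pavement image with
# one dark horizontal "crack".
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

rng = np.random.default_rng(3)
img = 0.8 + 0.02 * rng.standard_normal((128, 128))  # bright pavement + noise
img[60:63, :] -= 0.4                                 # dark crack, rows 60..62

response = np.zeros_like(img)
for sigma in (1.0, 2.0, 4.0):                        # Gaussian scale space
    H = hessian_matrix(img, sigma=sigma, order="rc")
    eig1, _ = hessian_matrix_eigvals(H)              # largest eigenvalue first
    response = np.maximum(response, sigma**2 * eig1) # scale-normalized cue

crack_mask = response > 0.5 * response.max()
rows = np.nonzero(crack_mask.any(axis=1))[0]
print(f"valley response peaks on rows {rows.min()}..{rows.max()}")
```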
9.
张文烨  尚方信  郭浩 《计算机应用》2021,41(5):1299-1304
Deep neural networks with floating-point bit widths demand large amounts of computational resources, which makes large networks difficult to deploy in low-compute settings such as edge computing. To address this, a plug-and-play neural network quantization method is proposed that compresses the computational cost of large networks while keeping the model's performance metrics from degrading significantly. First, the high- and low-frequency components of the input feature map are separated based on Octave convolution; second, convolution kernels of different bit widths are applied to the high- and low-frequency components respectively; third, activation functions of different bit widths quantize the high- and low-frequency convolution results to the corresponding bit widths; finally, the feature maps of different precisions are mixed to obtain the layer's convolution output. Experimental results confirm that the proposed method compresses models effectively: on the CIFAR-10/100 datasets, compressing the model to a 1+8 bit width keeps the accuracy drop under 3 percentage points; on the ImageNet dataset, compressing ResNet50 to a 1+4 bit width with this method still yields an accuracy above 70%.
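A minimal PyTorch sketch of the described layer, assuming a simple pooling-based frequency split and uniform fake quantization; the bit widths and tensor shapes are illustrative, not the paper's exact 1+8 or 1+4 design.

```python
# Split a feature map into low/high-frequency parts (as in Octave
# convolution), fake-quantize each branch to a different bit width,
# convolve, and merge the results at full resolution.
import torch
import torch.nn.functional as F

def fake_quantize(x, bits):
    # Uniform fake quantization to `bits` bits over the tensor's range.
    qmax = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo).clamp(min=1e-8) / qmax
    return torch.round((x - lo) / scale) * scale + lo

x = torch.randn(1, 16, 32, 32)
x_low = F.avg_pool2d(x, 2)                  # low-frequency branch (downsampled)
x_high = x - F.interpolate(x_low, scale_factor=2, mode="nearest")

w_high = torch.randn(16, 16, 3, 3)
w_low = torch.randn(16, 16, 3, 3)

# Different precision per branch: 8-bit for the low-frequency branch,
# 4-bit (an assumed stand-in for the narrow branch) for the high-frequency one.
y_high = F.conv2d(fake_quantize(x_high, 4), fake_quantize(w_high, 4), padding=1)
y_low = F.conv2d(fake_quantize(x_low, 8), fake_quantize(w_low, 8), padding=1)

# Mix the branches back into one full-resolution feature map.
y = y_high + F.interpolate(y_low, scale_factor=2, mode="nearest")
print(y.shape)  # torch.Size([1, 16, 32, 32])
```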
10.
李静轩 《计算机应用研究》2020,37(10):3071-3076,3111
To overcome the defender's lag in APT (advanced persistent threat) attack-defense confrontation and to make optimal proactive defense decisions under limited resources, this work studies how the intentions and feasible strategy sets of both attacker and defender evolve as an APT attack progresses through its stages, and builds a multi-stage APT attack-defense stochastic game model, AO-ADSG (APT-oriented attack-defense stochastic game), based on non-cooperative game theory. Because the two sides' payoffs in APT confrontation are not symmetric, a non-zero-sum formulation is introduced, and a full-asset-element utility quantification method suited to APT attack characteristics is designed; building on an analysis of the game's equilibrium, an algorithm for selecting the optimal defense strategy is given. Finally, a simulation of the "Night Dragon" attack verifies the feasibility and correctness of the proposed method.
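The equilibrium-selection step in such a model can be sketched for a single stage game with the nashpy package; the 2×2 non-zero-sum payoff matrices below are invented for illustration and are not the AO-ADSG utility quantification from the paper.

```python
# Mixed Nash equilibria of one non-zero-sum attack-defense stage game.
import numpy as np
import nashpy as nash

# Rows: defender strategies (patch, monitor); columns: attacker strategies
# (spearphish, lateral-move). Separate payoff matrices make the game
# non-zero-sum, mirroring the asymmetric utilities discussed above.
defender_payoff = np.array([[ 4, -2],
                            [ 1,  3]])
attacker_payoff = np.array([[-3,  5],
                            [ 2, -1]])

game = nash.Game(defender_payoff, attacker_payoff)
for d_mix, a_mix in game.support_enumeration():
    print("defender mix:", d_mix, "attacker mix:", a_mix)
```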