Full-text availability (number of articles):
  Paid full text: 686
  Free: 55
  Free (domestic): 32
By subject (number of articles):
  Electrical engineering: 22
  General/comprehensive: 83
  Chemical industry: 36
  Metalworking: 9
  Machinery and instrumentation: 10
  Building science: 32
  Mining engineering: 7
  Energy and power: 20
  Light industry: 19
  Hydraulic engineering: 13
  Petroleum and natural gas: 6
  Weapons industry: 5
  Radio and electronics: 47
  General industrial technology: 132
  Metallurgical industry: 73
  Atomic energy technology: 4
  Automation technology: 255
By publication year (number of articles):
  2023: 3
  2022: 8
  2021: 8
  2020: 17
  2019: 20
  2018: 25
  2017: 12
  2016: 22
  2015: 18
  2014: 29
  2013: 42
  2012: 46
  2011: 40
  2010: 29
  2009: 34
  2008: 55
  2007: 42
  2006: 32
  2005: 36
  2004: 28
  2003: 32
  2002: 29
  2001: 13
  2000: 12
  1999: 14
  1998: 9
  1997: 15
  1996: 11
  1995: 7
  1994: 7
  1993: 8
  1992: 10
  1991: 5
  1990: 4
  1989: 4
  1988: 6
  1986: 4
  1985: 4
  1984: 7
  1983: 8
  1979: 1
  1978: 4
  1975: 1
  1967: 1
  1965: 1
  1964: 2
  1962: 1
  1961: 1
  1957: 1
  1956: 1
A total of 773 results were found (search time: 0 ms).
61.
Principal components analysis (PCA) is a multivariate statistical technique that transforms a data set having a large number of inter-related variables to a new set of uncorrelated variables called the principal components, determined to allow the dimensionality of the data set to be reduced while retaining as much of the variation present as possible. PCA can be applied to dynamic structural response data to identify the predominant modes of vibration of the structure. Because PCA is a statistical technique, there are errors in the computed modes due to the use of a sample of finite size. The aim of this paper is to study the effect of sample size on the accuracy with which the modes of vibration can be computed. The paper focuses predominantly on elastic response data and examines the potential influence of various parameters such as the period of the structure, the input excitation, and the spatial distribution of mass over the structure. Issues relating to errors in the modes of nonlinear structures are also discussed.
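As a rough illustration of the technique described above (not the authors' implementation), the sketch below applies PCA to a synthetic two-channel response record and reads the dominant eigenvectors of the sample covariance as approximate mode shapes; all signals, frequencies, and mode shapes are assumptions for illustration. Re-running with a shorter record shows the sample-size effect the paper studies.

```python
import numpy as np

# Synthetic two-channel response (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 4000)
phi1, phi2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])   # assumed "true" mode shapes
resp = (np.outer(np.sin(2 * np.pi * 1.0 * t), phi1)
        + 0.3 * np.outer(np.sin(2 * np.pi * 3.2 * t), phi2)
        + 0.05 * rng.standard_normal((t.size, 2)))          # measurement noise

# PCA: eigenvectors of the sample covariance of the mean-removed responses
# approximate the predominant modes of vibration.
X = resp - resp.mean(axis=0)
cov = X.T @ X / (X.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
modes = eigvecs[:, ::-1]                    # dominant component first

print(modes / np.abs(modes).max(axis=0))    # compare against phi1, phi2 (up to sign)
```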
62.
When a missile-borne infrared imaging system rotates at high speed with the missile body, the images acquired within the exposure time suffer from severe rotational blur, which greatly hampers subsequent target recognition and image tracking. To address this problem, a rotational-blur image restoration algorithm based on an adaptive gradient prior is proposed. The algorithm applies a regularization term built on an adaptive gradient prior and performs deconvolution in the frequency domain on one-dimensional vectors extracted from the image along the rotational blur path. To handle the hole points produced when pixels are extracted with the Bresenham algorithm, an adaptive median filter based on look-up-table decisions is designed. Simulation results show that, compared with improved Wiener filtering, constrained least-squares filtering, and gradient-loading filtering, the proposed algorithm adapts well to low signal-to-noise environments and has a strong ability to suppress noise and weaken ringing artifacts.
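A minimal sketch of the central operation, assuming a uniform blur kernel and a fixed regularization weight (the paper's adaptive gradient prior would vary this weight, and the 1-D signal here stands in for pixels extracted along the rotational blur path):

```python
import numpy as np

# Hypothetical 1-D signal extracted along the rotational blur path.
n = 256
signal = np.zeros(n); signal[80:120] = 1.0; signal[180:190] = 0.5

# Assumed blur: uniform motion blur of length L along the path, plus noise.
L = 9
kernel = np.zeros(n); kernel[:L] = 1.0 / L
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))
blurred += 0.01 * np.random.default_rng(1).standard_normal(n)

# Regularized inverse filter in the frequency domain; lam is a constant
# stand-in for the adaptive gradient-prior weight described in the abstract.
H = np.fft.fft(kernel)
lam = 1e-2
restored = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(blurred) / (np.abs(H) ** 2 + lam)))
print("relative restoration error:", np.linalg.norm(restored - signal) / np.linalg.norm(signal))
```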
63.
In this work, we propose a finite element method for solving the linear poroelasticity equations. Both displacement and pressure are approximated by continuous piecewise polynomials. The proposed method is sequential, leading to decoupled smaller linear systems compared to the systems resulting from a fully implicit finite element approach. A priori error estimates are derived. Numerical results validate the theoretical convergence rates.
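To illustrate what "sequential" means here, the sketch below contrasts a monolithic solve of a tiny lumped two-field system with a decoupled mechanics-then-flow iteration; the matrices are illustrative stand-ins, not a finite element discretization of the poroelasticity equations.

```python
import numpy as np

# Tiny lumped coupled system (illustrative numbers, not a real mesh):
#   K u + B p = f      (mechanics)
#   B.T u + S p = g    (flow / mass balance)
K = np.array([[4.0, 1.0], [1.0, 3.0]])   # "stiffness"
S = np.array([[2.0]])                    # "storage"
B = np.array([[0.3], [0.2]])             # coupling
f = np.array([1.0, 0.5]); g = np.array([0.2])

# Fully implicit (monolithic) reference solution of the coupled system.
A = np.block([[K, B], [B.T, S]])
ref = np.linalg.solve(A, np.concatenate([f, g]))

# Sequential (decoupled) iteration: mechanics with frozen p, then flow with updated u.
u, p = np.zeros(2), np.zeros(1)
for _ in range(50):
    u = np.linalg.solve(K, f - B @ p)
    p = np.linalg.solve(S, g - B.T @ u)

print("sequential:", np.concatenate([u, p]))
print("monolithic:", ref)
```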
64.
Range estimating is a simple form of simulating a project estimate by breaking the project into work packages and approximating the variables in each package using statistical distributions. This paper explores an alternative approach to range estimating that is grounded in fuzzy set theory. The approach addresses two shortcomings of Monte Carlo simulation. The first is related to the analytical difficulty associated with fitting statistical distributions to subjective data, and the second relates to the number of simulation runs required to establish a meaningful estimate of a given parameter at the end of the simulation. For applications in cost estimating, the paper demonstrates that results comparable to Monte Carlo simulation can be achieved using the fuzzy set theory approach. It presents a methodology for extracting fuzzy numbers from experts and processing the information in fuzzy range estimating analysis. It is of relevance to industry and practitioners as it provides an approach to range estimating that more closely resembles the way in which experts express themselves, making the approach easy to apply in practice.
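A hedged sketch of the contrast, assuming each work package is described by a (low, most likely, high) expert estimate: the fuzzy route sums triangular fuzzy numbers through alpha-cut interval arithmetic, while the Monte Carlo route samples triangular distributions; all package values are hypothetical.

```python
import numpy as np

# Hypothetical work packages, each with (low, most likely, high) expert estimates.
packages = [(90.0, 100.0, 120.0), (40.0, 55.0, 70.0), (10.0, 12.0, 20.0)]

# Fuzzy approach: sum triangular fuzzy numbers via alpha-cuts
# (interval arithmetic at each membership level).
alphas = np.linspace(0.0, 1.0, 11)
low  = [sum(a + alpha * (m - a) for a, m, b in packages) for alpha in alphas]
high = [sum(b - alpha * (b - m) for a, m, b in packages) for alpha in alphas]
for alpha, lo, hi in zip(alphas, low, high):
    print(f"alpha={alpha:.1f}: total cost in [{lo:.1f}, {hi:.1f}]")

# Monte Carlo comparison: sample each package from a triangular distribution.
rng = np.random.default_rng(0)
samples = sum(rng.triangular(a, m, b, size=100_000) for a, m, b in packages)
print("MC 5th/95th percentiles:", np.percentile(samples, [5, 95]))
```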
65.
Efforts to incorporate the concerns of bridge users in bridge investment evaluation are often stymied by the lack of a comprehensive framework for assessing different user costs. Firstly, there is a need to synthesise and update existing user cost estimation techniques so that the incorporation of user costs in bridge investment evaluation can be more consistent and streamlined. Secondly, a bridge detour may occur for more than one reason, thus there is a danger of multiple counting that could cause overestimation of user costs. Thirdly, user costs during bridge workzones have rarely been considered in the literature. To address these issues, this article presents a framework for comprehensive estimation of bridge user costs, an approach to address the multiple-counting problem, and a methodology for bridge workzone user cost estimation. Furthermore, the article develops a method to estimate bridge user delay cost due to traffic capacity limitation. The methodologies are demonstrated using a case study.
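For orientation only, a back-of-envelope sketch of a workzone/detour delay cost computation of the general kind discussed above; the inputs, unit values of time, and the simple product formula are assumptions, not the article's models.

```python
# Illustrative delay-cost arithmetic for a bridge workzone detour (hypothetical inputs).
adt = 12_000                                   # average daily traffic, vehicles/day
truck_share = 0.10
extra_time_hr = 15 / 60.0                      # added travel time per vehicle (hours)
value_of_time = {"car": 18.0, "truck": 45.0}   # assumed $/vehicle-hour
duration_days = 30

delay_cost = duration_days * adt * extra_time_hr * (
    (1 - truck_share) * value_of_time["car"] + truck_share * value_of_time["truck"]
)
print(f"estimated user delay cost: ${delay_cost:,.0f}")
```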
66.
In this paper, we develop a binary particle swarm optimization (BPSO) based association rule miner. Our BPSO-based association rule miner generates association rules from a transactional database by formulating a combinatorial global optimization problem, without specifying the minimum support and minimum confidence, unlike the Apriori algorithm. Our algorithm generates the best M rules from the given database, where M is a given number. The quality of a rule is measured by a fitness function defined as the product of support and confidence. The effectiveness of our algorithm is tested on a real-life bank dataset from a commercial bank in India and on three transactional datasets taken from the literature, viz. a books dataset, a food-items dataset, and a general-store dataset. Based on the results, we infer that our algorithm can be used as an alternative to the Apriori algorithm and the FP-growth algorithm.
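A minimal sketch of the fitness measure the abstract describes (support times confidence), evaluated for one candidate rule on a toy Boolean transaction matrix; the dataset, item names, and rule encoding are assumptions, and the BPSO search loop itself is omitted.

```python
import numpy as np

# Toy transactional database (hypothetical): rows = transactions, columns = items.
items = ["bread", "milk", "butter", "tea"]
T = np.array([[1, 1, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 1, 1]], dtype=bool)

def fitness(antecedent, consequent):
    """Fitness = support(A -> C) * confidence(A -> C)."""
    a = T[:, antecedent].all(axis=1)
    both = a & T[:, consequent].all(axis=1)
    support = both.mean()
    confidence = both.sum() / a.sum() if a.sum() else 0.0
    return support * confidence

# A candidate rule a BPSO particle might encode: {bread, milk} -> {butter}.
print(fitness(antecedent=[0, 1], consequent=[2]))
```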
67.
To address the problem that current deep-learning-based saliency detection algorithms make little use of prior features and edge information and struggle to detect robust salient regions in complex scenes, a fully convolutional neural network saliency detection algorithm guided by prior information and combined with edge features is proposed. The algorithm combines three commonly used kinds of prior knowledge with edge information to form a prior map, fuses the extracted prior features with deep features through an attention mechanism, and then iteratively refines the salient regions through a proposed recurrent convolutional feedback optimization strategy, producing a more reliable final saliency prediction. Qualitative and quantitative experimental comparisons demonstrate the reliability of the algorithm.
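A schematic sketch of the attention-based fusion step described above (not the paper's network): a prior map is turned into a spatial attention weight that re-weights deep features, with a residual connection; the shapes, the sigmoid gating, and the random tensors are illustrative assumptions.

```python
import numpy as np

# Hypothetical shapes: a prior map (H x W) and a deep feature tensor (C x H x W).
rng = np.random.default_rng(0)
H, W, C = 32, 32, 16
prior = rng.random((H, W))    # prior map built from prior knowledge + edge cues
deep = rng.random((C, H, W))  # deep features from the backbone

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Spatial attention derived from the prior map (a stand-in for a learned layer).
attention = sigmoid(2.0 * (prior - prior.mean()))

# Attention-weighted fusion: re-weight deep features and keep a residual connection.
fused = deep * attention[None, :, :] + deep
print(fused.shape)  # (16, 32, 32)
```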
68.
The uncertainty in the methane (CH4) source strength of rice fields is among the highest of all sources in the global CH4 budget. Methods to estimate the source strength of rice fields can be divided into two scaling categories: bottom-up (upscaling) and top-down (downscaling). A brief review of upscaling and downscaling methodologies is presented. The combination of upscaling and downscaling methodologies is proposed as a potential way to reduce the uncertainty in the regional CH4 source strength of rice fields. Some preliminary results based on upscaling and downscaling are presented, and the limitations of the approaches are discussed. The first case study focuses on upscaling, using a field-scale model in combination with spatial databases to calculate CH4 emissions for the island of Java. The reliability of the upscaling results is limited by the uncertainty in model input parameters such as soil properties and organic carbon management. Because controlling variables such as harvested rice area may change on relatively short time scales, a land use change model (CLUE) was used to quantify potential land use changes on Java in the period 1994–2010. The predicted changes were evaluated using the CH4 emission model. Temporal scaling by coupling land use change models and emission models is necessary to answer policy-related questions on future greenhouse gas emissions. In a downscaling case study, we investigate whether inverse modelling can constrain the emissions from rice fields by testing a standard and a low CH4-from-rice scenario (80 and 30 Tg CH4 yr-1, respectively). The results of this study are not yet conclusive; to obtain fine-resolution CH4 emission estimates over the Southeast Asian continent, the network monitoring atmospheric mixing ratios needs to be extended and located closer to the continental sources.
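As a minimal illustration of the bottom-up (upscaling) arithmetic mentioned above, the sketch below multiplies a modelled per-area emission rate by harvested rice area in each grid cell and sums; all areas and rates are hypothetical, not values from the study.

```python
# Minimal upscaling sketch (hypothetical numbers): per-area CH4 emission rate
# times harvested rice area, summed over grid cells.
cells = [
    # (harvested rice area in ha, modelled emission in kg CH4 per ha per season)
    (120_000, 250.0),
    (340_000, 180.0),
    (90_000, 310.0),
]
total_kg = sum(area * rate for area, rate in cells)
print(f"regional emission: {total_kg / 1e9:.4f} Tg CH4 per season")
```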
69.
Location-based social networks (LBSNs) bring location-specific data generated from smartphones into online social networks, so that people can share their points of interest (POIs). POI collections are complex and can be influenced by various factors, such as user preferences, social relationships, and geographical influence. Recommending new locations in LBSNs therefore requires taking all these factors into consideration. One problem, however, is how to determine the optimal weights of the influencing factors in an algorithm that combines them. User similarity can be obtained from user check-in data, from user friend information, or from the different geographical influences on each user's check-in activities. In this paper, we propose an algorithm that calculates user similarity from check-in records and social relationships, using a proposed weighting function to adjust the weights of these two kinds of similarity based on the geographical distance between users. In addition, a non-parametric density estimation method is applied to predict the unique geographical influence on each user by estimating the probability density of the distance between every pair of the user's check-in locations. Experimental results on Foursquare datasets, comparing the proposed algorithm with five baseline recommendation algorithms for LBSNs, show that it is superior in accuracy and recall and furthermore addresses the sparsity problem.
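A hedged sketch of the idea of distance-dependent weighting of check-in similarity and social similarity; the Jaccard similarities, the exponential decay, and the 50 km scale are illustrative assumptions, not the paper's actual weighting function.

```python
import numpy as np

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def combined_similarity(checkins_u, checkins_v, friends_u, friends_v, dist_km, scale=50.0):
    """Blend check-in and social similarity with a weight that decays with distance."""
    w = np.exp(-dist_km / scale)          # nearby users: rely more on check-in overlap
    sim_checkin = jaccard(checkins_u, checkins_v)
    sim_social = jaccard(friends_u, friends_v)
    return w * sim_checkin + (1.0 - w) * sim_social

# Toy users (hypothetical POI ids and friend ids).
print(combined_similarity({1, 2, 3}, {2, 3, 4}, {10, 11}, {11, 12}, dist_km=12.0))
```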
70.
This paper considers two discrete-time, finite-state processes X and Y. In the usual hidden Markov model X modulates the values of Y; the values of Y are then i.i.d. given X. In this paper a new model is considered where the Markov chain X modulates the transition probabilities of the second, observed chain Y. This can more realistically represent problems arising in DNA sequencing. Algorithms for all related filters, smoothers and parameter estimations are derived. Versions of the Viterbi algorithm are obtained.
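A minimal sketch of a filter for this kind of model, under assumed notation: the hidden chain X has transition matrix A, and the current state of X selects which transition matrix governs the observed chain Y; the toy parameters and the normalized forward recursion below are illustrative, not the paper's derivation.

```python
import numpy as np

# Hidden chain X with transition matrix A; each hidden state selects a transition
# matrix B[x] for the observed chain Y (toy parameters, assumed for illustration).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[[0.7, 0.3],          # transitions of Y when X = 0
               [0.4, 0.6]],
              [[0.2, 0.8],          # transitions of Y when X = 1
               [0.5, 0.5]]])

def forward_filter(y, A, B, pi):
    """Approximate P(X_t | y_1..y_t); the first observation only initializes the chain."""
    alpha = pi.copy()
    posteriors = [alpha / alpha.sum()]
    for t in range(1, len(y)):
        like = B[:, y[t - 1], y[t]]          # P(y_t | y_{t-1}, X_t = x) for each x
        alpha = like * (A.T @ alpha)         # predict hidden state, then correct
        alpha /= alpha.sum()
        posteriors.append(alpha.copy())
    return np.array(posteriors)

y = [0, 0, 1, 1, 1, 0, 1, 1]
print(forward_filter(y, A, B, pi=np.array([0.5, 0.5])))
```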