3,909 search results found (search time: 357 ms)
101.
Solutions of numerically ill-posed least squares problems for A ∈ ℝ^{m×n} by Tikhonov regularization are considered. For D ∈ ℝ^{p×n}, the Tikhonov regularized least squares functional is given by J(x) = ‖Ax − b‖²_W + (1/σ²)‖D(x − x₀)‖², where W is a weighting matrix and x₀ is given. Given a priori estimates of the covariance structure of the errors in the measurement data b, the weighting matrix may be taken as W = C_b⁻¹, the inverse covariance matrix of the mean-zero, normally distributed measurement errors in b. If, in addition, x₀ is an estimate of the mean value of x, and σ is a suitable statistically chosen value, then J evaluated at its minimizer approximately follows a χ² distribution with a known number of degrees of freedom. Using the generalized singular value decomposition of the matrix pair (A, D), σ can then be found such that the resulting J follows this χ² distribution. An algorithm that relies explicitly on the direct solution obtained via the generalized singular value decomposition is, however, not practical for large-scale problems. Instead, an approach using Golub-Kahan iterative bidiagonalization of the regularized problem is presented. The original algorithm is extended to cases in which x₀ is not available, but a set of measurement data instead provides an estimate of the mean value of b. The sensitivity of the Newton algorithm to the number of Golub-Kahan bidiagonalization steps, and the relation between the size of the projected subproblem and σ, are discussed. Experiments contrast the efficiency and robustness of the approach with other standard methods for finding the regularization parameter, on a set of test problems and on the restoration of a relatively large real seismic signal. An image deblurring application also validates the approach for large-scale problems. It is concluded that the presented approach is robust for both small- and large-scale discretely ill-posed least squares problems.
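As a minimal numerical sketch of the regularized functional above, the problem can be solved directly through its normal equations; this illustrates only the functional being minimized, not the paper's GSVD or Golub-Kahan machinery, and for simplicity it takes W = I. The function name and test data are illustrative, not from the paper.

```python
import numpy as np

def tikhonov_solve(A, b, D, x0, sigma):
    """Minimize J(x) = ||A x - b||^2 + (1/sigma^2) ||D (x - x0)||^2 (W = I).

    Setting the gradient of J to zero gives the normal equations
        (A^T A + lam D^T D) x = A^T b + lam D^T D x0,   lam = 1 / sigma^2.
    """
    lam = 1.0 / sigma**2
    DtD = D.T @ D
    return np.linalg.solve(A.T @ A + lam * DtD, A.T @ b + lam * DtD @ x0)
```

As σ grows the penalty vanishes and the solution approaches the unregularized least squares solution; as σ shrinks the solution is pulled toward the prior estimate x₀, which is the trade-off the χ² criterion in the abstract is used to balance.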
102.
A Random-Sampling 2DPCA Method for Face Recognition
Building on 2DPCA, this paper proposes a random-sampling 2DPCA face recognition method, RRS-2DPCA (Row Random Sampling 2DPCA). Unlike conventional methods that sample features or projection vectors, RRS-2DPCA applies random sampling to the set of image row vectors and then performs 2DPCA on each row-vector subset. Experiments on the ORL, Yale, and AR face data sets show that RRS-2DPCA not only achieves good recognition performance and computational efficiency but is also highly stable with respect to its parameters. To address the lack of robustness of 2DPCA and RRS-2DPCA to illumination, occlusion, and similar variations, a local-region variant, LRRS-2DPCA (Local Row Random Sampling 2DPCA), is further proposed, which applies RRS-2DPCA within local regions of the face image. Experimental results show that LRRS-2DPCA is not only more robust but also substantially improves the recognition performance of RRS-2DPCA.
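A compact sketch of plain 2DPCA and the row-sampling idea described above, assuming the images arrive as an n × h × w NumPy array. This is an illustration of the technique only, not the authors' implementation; the function names and the `row_frac` parameter are invented here.

```python
import numpy as np

def two_dpca(images, k):
    """2DPCA: project each h x w image onto the top-k eigenvectors of the
    image covariance matrix G = E[(X - mean)^T (X - mean)] (a w x w matrix)."""
    mean = images.mean(axis=0)
    G = sum((X - mean).T @ (X - mean) for X in images) / len(images)
    vals, vecs = np.linalg.eigh(G)   # eigenvalues in ascending order
    W = vecs[:, -k:]                 # top-k projection directions (w x k)
    return [X @ W for X in images], W

def rrs_2dpca(images, k, row_frac=0.5, seed=0):
    """RRS-2DPCA sketch: run 2DPCA on a random subset of the image rows."""
    rng = np.random.default_rng(seed)
    h = images.shape[1]
    rows = rng.choice(h, size=max(1, int(row_frac * h)), replace=False)
    return two_dpca(images[:, rows, :], k)
```

In an ensemble setting, `rrs_2dpca` would be run several times with different seeds and the per-subset classification results combined, which is the source of the stability the abstract reports.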
103.
104.
VerifyThis 2015     
VerifyThis 2015 was a one-day program verification competition which took place on April 12th, 2015 in London, UK, as part of the European Joint Conferences on Theory and Practice of Software (ETAPS 2015). It was the fourth instalment in the VerifyThis competition series. This article provides an overview of the VerifyThis 2015 event, the challenges that were posed during the competition, and a high-level overview of the solutions to these challenges. It concludes with the results of the competition and some ideas and thoughts for future instalments of VerifyThis.
105.
This paper develops a framework for the consideration of internal markets as an alternative to information systems (IS) outsourcing. It rests on an assessment of the pros and cons both of outsourcing and of insourcing via the internal markets approach, and is formulated in terms of the operational, tactical, and strategic impacts of the choice among these alternatives. The framework, and the propositions developed from it, should be useful both for researchers, who can use it to develop testable research hypotheses, and for practitioners, who may use it as a basis for a comprehensive set of criteria for evaluating these sourcing options.
106.
In this paper, we propose a novel large margin classifier, called the maxi-min margin machine M(4). This model learns the decision boundary both locally and globally. In comparison, other large margin classifiers construct separating hyperplanes only either locally or globally. For example, a state-of-the-art large margin classifier, the support vector machine (SVM), considers data only locally, while another significant model, the minimax probability machine (MPM), focuses on building the decision hyperplane exclusively based on the global information. As a major contribution, we show that SVM yields the same solution as M(4) when data satisfy certain conditions, and MPM can be regarded as a relaxation model of M(4). Moreover, based on our proposed local and global view of data, another popular model, the linear discriminant analysis, can easily be interpreted and extended as well. We describe the M(4) model definition, provide a geometrical interpretation, present theoretical justifications, and propose a practical sequential conic programming method to solve the optimization problem. We also show how to exploit Mercer kernels to extend M(4) for nonlinear classifications. Furthermore, we perform a series of evaluations on both synthetic data sets and real-world benchmark data sets. Comparison with SVM and MPM demonstrates the advantages of our new model.
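M(4)'s sequential conic program is beyond a short sketch, but the "local" baseline it is compared against, the linear SVM, can be illustrated with a toy Pegasos-style subgradient solver for the regularized hinge loss. All names and hyperparameters below are illustrative, not from the paper.

```python
import numpy as np

def linear_svm_sgd(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient descent on the SVM objective
        min_{w,b}  lam/2 ||w||^2 + mean_i max(0, 1 - y_i (x_i . w + b)),
    with labels y_i in {-1, +1}. The bias b is left unregularized."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # standard Pegasos step size
            if y[i] * (X[i] @ w + b) < 1:    # margin violated: hinge active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                            # only the regularizer acts
                w = (1 - eta * lam) * w
    return w, b
```

Predictions are `np.sign(X @ w + b)`. The SVM boundary depends only on points near the margin; M(4)'s contribution, per the abstract, is to also account for global (covariance-like) structure of each class, which this local solver ignores.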
107.
Although recent years have seen significant advances in the spatial resolution possible in the transmission electron microscope (TEM), the temporal resolution of most microscopes is limited to video rate at best. This lack of temporal resolution means that our understanding of dynamic processes in materials is extremely limited. High temporal resolution in the TEM can be achieved, however, by replacing the normal thermionic or field emission source with a photoemission source. In this case the temporal resolution is limited only by the ability to create a short pulse of photoexcited electrons in the source, and this can be as short as a few femtoseconds. The operation of the photoemission source and the control of the subsequent pulse of electrons (containing as many as 5 × 10⁷ electrons) create significant challenges for a standard microscope column that is designed to operate with a single electron in the column at any one time. In this paper, the generation and control of electron pulses in the TEM to obtain a temporal resolution < 10⁻⁶ s will be described, and the effect of the pulse duration and current density on the spatial resolution of the instrument will be examined. The potential of these levels of temporal and spatial resolution for the study of dynamic materials processes will also be discussed.
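As a quick sanity check on the figures quoted above, delivering 5 × 10⁷ electrons within a pulse comparable to the 10⁻⁶ s temporal resolution implies an instantaneous beam current on the order of microamps. The calculation below is purely illustrative and assumes a 1 μs pulse; actual pulse durations in such instruments vary widely.

```python
E_CHARGE = 1.602176634e-19   # elementary charge, coulombs (exact SI value)

electrons_per_pulse = 5e7    # figure quoted in the abstract
pulse_duration_s = 1e-6      # assumed pulse length (illustrative)

# Instantaneous current during the pulse: I = N * e / t
current_A = electrons_per_pulse * E_CHARGE / pulse_duration_s
print(f"instantaneous beam current ≈ {current_A:.2e} A")
```

This works out to roughly 8 μA during the pulse, far beyond the single-electron regime a conventional column assumes, which is the space-charge challenge the abstract alludes to.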
108.
The Journal of Supercomputing - Rapid advances in interconnection networks in multiprocessors are closing the gap between computation and communication. Given this trend, how can we utilize fast...
109.
110.
In recent years, the state of the art in shape optimization has advanced due to new approaches proposed by various researchers. A fundamental difficulty in shape optimization is that the original finite element mesh may become invalid during large shape changes. Automatic remeshing and velocity field approaches are most commonly used for conventional h-type finite element analysis to address this problem. In this paper, we describe a different approach to shape optimization based on the use of high-order p-type finite elements tightly coupled to a parameterized computational geometry module. The advantages of this approach are as follows. Accurate results can be obtained with much fewer finite elements, so large shape changes are possible without remeshing. Automatic adaptive analysis may be performed so that accurate results are achieved at each step of the optimization process. Since the elements derive their geometric mapping from the underlying geometry, the fundamental equivalent of velocity field element shape updating may be readily achieved. Results are presented for sizing and shape optimization with this approach and contrasted with previous results from the literature.