231.
Multiplier reuse in sliding-window FIR filter processing   Cited: 1 (self-citations: 0, others: 1)
With the rapid advance of very-large-scale integration, FPGAs integrate ever more hardware multipliers and run at ever faster core clocks, which has made them widely used in real-time signal processing. This paper introduces the principle and hardware structure of FIR filtering and, through multiplier reuse, resolves the mismatch between the high-speed operation of the FPGA's internal multipliers and the slow external data rate when implementing sliding-window FIR filtering.
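The multiplier-reuse idea in the abstract can be sketched in software: between two slow input samples, one fast-clocked multiplier serves every filter tap in turn. This is a minimal behavioural model, not the paper's HDL design; the tap values and input stream are illustrative.

```python
def fir_shared_multiplier(samples, taps):
    """Sliding-window FIR using a single time-multiplexed multiplier.

    On an FPGA the fast internal clock lets one hardware multiplier
    serve all taps between two slow input samples; here each inner-loop
    iteration stands for one fast-clock multiply-accumulate.
    """
    n_taps = len(taps)
    window = [0.0] * n_taps          # shift register of recent samples
    out = []
    for x in samples:
        window = [x] + window[:-1]   # shift in the new sample
        acc = 0.0
        for k in range(n_taps):      # one multiplier, n_taps fast cycles
            acc += taps[k] * window[k]
        out.append(acc)
    return out

# 4-tap moving average over a step input
y = fir_shared_multiplier([1, 1, 1, 1, 1, 1], [0.25] * 4)
```

The hardware trade-off mirrored here: one multiplier at N times the sample clock replaces N parallel multipliers at the sample clock.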
232.
This paper resolves the problem of predicting as well as the best expert up to an additive term of the order o(n), where n is the length of a sequence of letters from a finite alphabet. We call the games that permit this weakly mixable and give a geometrical characterisation of the class of weakly mixable games. Weak mixability turns out to be equivalent to convexity of the finite part of the set of superpredictions. For bounded games we introduce the Weak Aggregating Algorithm, which allows us to obtain additive terms of the required order.
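A generic exponentially weighted average forecaster with a decaying learning rate illustrates the kind of aggregation at work; this is a hedged sketch in that spirit, not necessarily the paper's exact Weak Aggregating Algorithm, and the 1/√t schedule is an assumption.

```python
import math

def weak_aggregating_forecast(expert_preds, losses, t):
    """One round of an exponentially weighted average forecaster.

    Weights decay with cumulative loss; the learning rate shrinks like
    1/sqrt(t), the kind of schedule that yields o(n) additive regret
    for bounded convex games.  A generic sketch, not the paper's
    exact Weak Aggregating Algorithm.
    """
    eta = 1.0 / math.sqrt(max(t, 1))
    w = [math.exp(-eta * L) for L in losses]
    z = sum(w)
    return sum(wi / z * p for wi, p in zip(w, expert_preds))

# Two experts with cumulative losses 0 and 10 at round t = 100:
# the aggregate leans toward the better expert's forecast.
p = weak_aggregating_forecast([0.2, 0.8], [0.0, 10.0], t=100)
```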
233.
Problems from plastic analysis are based on the convex, linear or linearised yield/strength condition and the linear equilibrium equation for the stress (state) vector. In practice one has to take into account stochastic variations of several model parameters. Hence, in order to get robust maximum load factors, the structural analysis problem with random parameters must be replaced by an appropriate deterministic substitute problem. A direct approach is proposed based on the primary costs for missing carrying capacity and the recourse costs (e.g. costs for repair, compensation for weakness within the structure, damage, failure, etc.). Based on the mechanical survival conditions of plasticity theory, a quadratic error/loss criterion is developed. The minimum recourse costs can then be determined by solving an optimisation problem having a quadratic objective function and linear constraints. For each vector a(·) of model parameters and each design vector x, one then obtains an explicit representation of the “best” internal load distribution F. Moreover, the expected recourse costs can also be determined explicitly. Consequently, an explicit stochastic nonlinear program results for finding a robust maximal load factor μ. The analytical properties and possible solution procedures are discussed.
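A quadratic objective under linear equality constraints admits a closed-form solution through its KKT system; the following toy instance mirrors that structure. The matrices `F0`, `A` and `b` are illustrative placeholders, not data from the paper.

```python
import numpy as np

def min_recourse_load(F0, A, b):
    """Equality-constrained quadratic program solved via its KKT system.

    Minimises the quadratic recourse cost ||F - F0||^2 subject to the
    linear equilibrium equation A F = b, mirroring the abstract's
    "quadratic objective with linear constraints" substitute problem.
    """
    n, m = len(F0), len(b)
    K = np.block([[np.eye(n), A.T],
                  [A, np.zeros((m, m))]])   # KKT matrix
    rhs = np.concatenate([F0, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                          # optimal internal load distribution

A = np.array([[1.0, 1.0]])                  # toy equilibrium: F1 + F2 = 3
F = min_recourse_load(np.array([1.0, 1.0]), A, np.array([3.0]))
```

With the preferred load (1, 1) infeasible, the minimiser shifts both components equally to (1.5, 1.5).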
234.
In an organization operating in the bancassurance sector we identified a low-risk IT subportfolio of 84 IT projects, together comprising 16,500 function points, each project varying in size and duration, for which we were able to quantify its requirements volatility. This representative portfolio stems from a much larger portfolio of IT projects. We calculated the volatility from the function point counts that were available to us. These figures were aggregated into a requirements volatility benchmark. We found that maximum requirements volatility rates depend on size and duration, which refutes currently known industrial averages. For instance, a monthly growth rate of 5% is considered a critical failure factor, but in our low-risk portfolio we found more than 21% of successful projects with a volatility larger than 5%. We proposed a mathematical model taking size and duration into account that provides a maximum healthy volatility rate more in line with the reality of low-risk IT portfolios. Based on the model, we proposed a tolerance factor expressing the maximal volatility tolerance for a project or portfolio. For a low-risk portfolio its empirically found tolerance is apparently acceptable, and values exceeding this tolerance serve to alert IT decision makers. We derived two volatility ratios from this model, the π-ratio and the ρ-ratio. These ratios express how close the volatility of a project has come to the danger zone where requirements volatility reaches a critical failure rate. The volatility data of a governmental IT portfolio were compared with our bancassurance benchmark, immediately exposing a problematic project, which was corroborated by its actual failure. When function points are less common, e.g. in the embedded industry, we used daily source code size measures and illustrated how to govern the volatility of a software product line of a hardware manufacturer.
With the three real-world portfolios we illustrated that our results serve the purpose of an early warning system for projects that are bound to fail due to excessive volatility. Moreover, we developed essential requirements volatility metrics that belong on an IT governance dashboard and presented such a volatility dashboard.
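The 5% monthly growth threshold cited in the abstract can be checked directly from two function-point counts and a duration. The helper names and example figures below are illustrative assumptions; the paper's size/duration-dependent tolerance model and its π- and ρ-ratios are not reproduced.

```python
def monthly_growth_rate(fp_start, fp_end, months):
    """Compound monthly requirements growth from function-point counts.

    The abstract treats a 5% monthly growth rate as the commonly cited
    critical failure threshold.
    """
    return (fp_end / fp_start) ** (1.0 / months) - 1.0

def exceeds_threshold(rate, threshold=0.05):
    """Flag a project whose volatility passes the cited 5% benchmark."""
    return rate > threshold

# A hypothetical project growing from 1000 to 1800 function points
# over 12 months: just over 5% compound growth per month.
r = monthly_growth_rate(1000, 1800, 12)
```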
235.
A multi-objective controller synthesis problem is considered in which an output is to be regulated approximately by assuring a bound on the steady-state peak amplification in response to an infinite-energy disturbance, while also guaranteeing a desired level of performance measured in terms of the worst-case energy gain from a finite-energy input to a performance output. Relying on a characterization of the controllers with which almost asymptotic regulation is accomplished, the problem of guaranteeing the desired level of performance is reduced to solving a system of linear matrix inequalities subject to a set of linear equality constraints. Based on the solution of this system, a procedure is outlined for the construction of a suitable controller whose order is equal to the order of the plant plus the order of the exogenous system.
236.
H∞ control with limited communication and message losses   Cited: 2 (self-citations: 0, others: 2)
We propose an H∞ approach to a remote control problem where the communication is constrained due to the use of a shared channel. The controller employs a periodic time sequencing scheme for message transmissions from multiple sensors and to multiple actuators of the system. It further takes into account the information on the random message losses that occur in the channel. An exact characterization for controller synthesis is obtained and is stated in terms of linear matrix inequalities. Furthermore, an analysis on the loss probabilities of the messages to accomplish stabilization is carried out. The results are illustrated through a numerical example.
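The simplest instance of a matrix-inequality stability certificate is the Lyapunov equation AᵀP + PA = -Q with P ≻ 0. The sketch below checks such a certificate numerically; it only illustrates the flavour of LMI conditions and does not reproduce the paper's H∞ synthesis LMIs. The example matrix is an assumption.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_certificate(A, Q=None):
    """Solve the Lyapunov equation A^T P + P A = -Q for P.

    A positive definite P certifies stability of dx/dt = A x, the
    simplest relative of the linear-matrix-inequality conditions used
    for controller synthesis in the abstract.
    """
    n = A.shape[0]
    if Q is None:
        Q = np.eye(n)
    P = solve_continuous_lyapunov(A.T, -Q)
    is_stable = np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)
    return P, is_stable

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2: stable
P, ok = lyapunov_certificate(A)
```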
237.
A new likelihood based AR approximation is given for ARMA models. The usual algorithms for the computation of the likelihood of an ARMA model require O(n) flops per function evaluation. Using our new approximation, an algorithm is developed which requires only O(1) flops in repeated likelihood evaluations. In most cases, the new algorithm gives results identical to or very close to the exact maximum likelihood estimate (MLE). This algorithm is easily implemented in high level quantitative programming environments (QPEs) such as Mathematica, MatLab and R. In order to obtain reasonable speed, previous ARMA maximum likelihood algorithms are usually implemented in C or some other machine efficient language. With our algorithm it is easy to do maximum likelihood estimation for long time series directly in the QPE of your choice. The new algorithm is extended to obtain the MLE for the mean parameter. Simulation experiments which illustrate the effectiveness of the new algorithm are discussed. Mathematica and R packages which implement the algorithm discussed in this paper are available [McLeod, A.I., Zhang, Y., 2007. Online supplements to “Faster ARMA Maximum Likelihood Estimation”, 〈http://www.stats.uwo.ca/faculty/aim/2007/faster/〉]. Based on these package implementations, it is expected that the interested researcher would be able to implement this algorithm in other QPEs.
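The spirit of an AR approximation to the ARMA likelihood can be sketched for ARMA(1,1): truncate the filter (1 - φB)/(1 + θB) to a long AR(p), so each likelihood evaluation only filters the series. This is an illustrative sketch under that assumption, not the authors' published algorithm; the truncation length p = 30 is arbitrary.

```python
import numpy as np

def arma11_ar_approx_loglik(x, phi, theta, p=30):
    """Approximate concentrated Gaussian log-likelihood of ARMA(1,1).

    Expands (1 - phi*B)/(1 + theta*B) into AR coefficients, truncates
    at lag p, filters the series to get residuals, and returns the
    concentrated log-likelihood up to an additive constant.
    """
    c = np.empty(p + 1)
    c[0] = 1.0
    for j in range(1, p + 1):    # coefficients of (1-phi*B)/(1+theta*B)
        c[j] = (-theta) ** (j - 1) * (-theta - phi)
    eps = np.convolve(x, c)[p:len(x)]   # residuals where the filter is full
    n = len(eps)
    return -0.5 * n * np.log(np.mean(eps ** 2))
```

With φ = θ = 0 the filter is the identity, so the residuals are the data themselves, which makes the approximation easy to sanity-check.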
238.
One of the main problems in operational risk management is the lack of loss data, which affects the parameter estimates of the marginal distributions of the losses. The principal reason is that financial institutions only started to collect operational loss data a few years ago, owing to the relatively recent definition of this type of risk. Considering this drawback, the employment of Bayesian methods and simulation tools is a natural solution to the problem. The use of Bayesian methods allows us to integrate the scarce and sometimes inaccurate quantitative data collected by the bank with prior information provided by experts. An original proposal is a Bayesian approach for modelling operational risk and for calculating the capital required to cover the estimated risks. Besides this methodological innovation, a computational scheme based on Markov chain Monte Carlo simulations is required. In particular, the application of the MCMC method to estimate the parameters of the marginals shows advantages in terms of a reduction of capital charge according to different choices of the marginal loss distributions.
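A minimal Metropolis sampler shows how expert priors and scarce loss data combine: here a normal prior on the log-mean of lognormal losses, with the log-standard-deviation fixed at 1 for simplicity. All names, priors and loss figures are illustrative assumptions; the paper's full MCMC scheme is richer.

```python
import math, random

def mh_posterior_mu(losses, prior_mean, prior_sd, n_iter=5000, seed=1):
    """Metropolis sampler for the log-mean of lognormal operational losses.

    A normal prior on mu encodes expert opinion, compensating for the
    scarce internal loss data the abstract describes; sigma is fixed at 1.
    """
    rng = random.Random(seed)
    logs = [math.log(z) for z in losses]

    def log_post(mu):
        lik = -0.5 * sum((y - mu) ** 2 for y in logs)        # sigma = 1
        pri = -0.5 * ((mu - prior_mean) / prior_sd) ** 2
        return lik + pri

    mu, draws = prior_mean, []
    for _ in range(n_iter):
        prop = mu + rng.gauss(0.0, 0.5)                      # random walk
        if math.log(rng.random()) < log_post(prop) - log_post(mu):
            mu = prop
        draws.append(mu)
    return draws

# Three hypothetical loss amounts, with an expert prior centred at 1
draws = mh_posterior_mu([2.0, 3.0, 10.0], prior_mean=1.0, prior_sd=2.0)
```

The posterior mean lands between the prior centre and the data's log-mean, weighted by their precisions.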
239.
We demonstrate, through separation of variables and estimates from the semi-classical analysis of the Schrödinger operator, that the eigenvalues of an elliptic operator defined on a compact hypersurface in ℝ^n can be found by solving an elliptic eigenvalue problem in a bounded domain Ω ⊂ ℝ^n. The latter problem is solved using standard finite element methods on the Cartesian grid. We also discuss the application of these ideas to solving evolution equations on surfaces, including a new proof of a result due to Greer (J. Sci. Comput. 29(3):321–351, 2006).
240.
An evolution strategy based on the good-point-set principle is proposed for tuning the structure and parameters of neural networks. To overcome some shortcomings of orthogonal design when handling high-dimensional optimisation problems, a stepwise crossover framework is adopted in which the good-point-set technique is embedded in a real-coded crossover operator to strengthen search in high-dimensional spaces. The numbers of hidden nodes and connections of the feedforward network are increased gradually from small values until the learning performance is good enough. The tuning yields a partially connected feedforward network, reducing the cost of network implementation. Finally, the good-point-set evolution strategy is applied effectively to evolving neural networks for sunspot prediction. Experimental results demonstrate the effectiveness of the new method.
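One classical good-point-set construction uses γ_k = frac(2·cos(2πk/p)) for a prime p ≥ 2d+3; the sketch below builds such points and uses them as blend positions in a real-coded crossover. This is an assumed construction for illustration, not the paper's exact stepwise crossover framework, and all function names are hypothetical.

```python
import math

def good_point_set(n, d):
    """First n points of a good-point set in [0,1]^d.

    Uses gamma_k = frac(2*cos(2*pi*k/p)) for a prime p >= 2d+3; such
    low-discrepancy points cover the unit cube more evenly than
    uniform random sampling.
    """
    p = 2 * d + 3
    while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
        p += 1                                    # next prime >= 2d+3
    gamma = [2.0 * math.cos(2.0 * math.pi * k / p) for k in range(1, d + 1)]
    return [[(i * g) % 1.0 for g in gamma] for i in range(1, n + 1)]

def gps_crossover(parent_a, parent_b, n_children):
    """Blend two real-coded parents at good-point-set positions.

    A sketch of embedding the good-point set in a real-valued crossover
    operator, as the abstract proposes for evolving network weights.
    """
    d = len(parent_a)
    pts = good_point_set(n_children, d)
    return [[a + t * (b - a) for a, b, t in zip(parent_a, parent_b, pt)]
            for pt in pts]

kids = gps_crossover([0.0, 0.0], [1.0, 1.0], 3)
```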