1.
Calculations of steel target penetration by L/D ≤ 1 tungsten and tungsten-alloy projectiles have been extended to L/D = 1/32 over the velocity range 1.5 to 5 km/s. The ratio of crater to projectile diameter tends to 1 as L/D decreases over this entire velocity range. For impact velocities of 1.5 and 3 km/s, penetration depth normalized by projectile length, P/L, increases with decreasing projectile L/D up to a maximum value and then decreases for still lower L/D. Experiments at impact velocities of 2 and 3 km/s confirm these results. For a 5 km/s impact velocity, the calculations show P/L increasing with decreasing projectile L/D over the entire range 1/32 ≤ L/D ≤ 1. The projectile L/D at which the maximum P/L occurs appears to depend on the impact velocity. P/L generally scales with impact velocity as P/L ∝ v^f(L/D), where f(L/D) ranges from 0 for a long rod to, we believe, 2 in the limit as projectile L/D approaches zero. The calculations show that for 1/8 ≤ L/D ≤ 1/2, P/L ∝ v^0.9; for L/D = 1/16, P/L ∝ v^1.5; and for L/D = 1/32, the new results give P/L ∝ v^1.9.
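For reference, the reported scaling relation can be written out explicitly; the exponent values come directly from the abstract, while the piecewise form and the proportionality constant are left implicit there:

```latex
\frac{P}{L} \propto v^{\,f(L/D)}, \qquad
f(L/D) \approx
\begin{cases}
0,   & \text{long rod } (L/D \gg 1) \\
0.9, & 1/8 \le L/D \le 1/2 \\
1.5, & L/D = 1/16 \\
1.9, & L/D = 1/32 \\
2,   & L/D \to 0 \ \text{(conjectured limit)}
\end{cases}
```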
2.
Schneidewind, N.F. Computer, 2002, 35(2): 77-83
Measuring quality is the key to developing high-quality software. The author describes two approaches that help identify the body of knowledge software engineers need to achieve this goal. The first approach derives knowledge requirements from a set of issues identified during two standards efforts: IEEE Std 1061-1998 for a Software Quality Metrics Methodology and the American National Standard Recommended Practice for Software Reliability (ANSI/AIAA R-013-1992). The second approach ties these knowledge requirements to phases in the software development life cycle. Together, these approaches define a body of knowledge that shows software engineers why and when to measure quality. Focusing on the entire software development life cycle, rather than just the coding phase, gives software engineers the comprehensive knowledge they need to enhance software quality and supports early detection and resolution of quality problems. The integration of product and process measurements lets engineers assess the interactions between them throughout the life cycle. Software engineers can apply this body of knowledge as a guideline for incorporating quality measurement in their projects. Professional licensing and training programs will also find it useful.
3.
The experience of a team that evaluated many reliability models and tried to validate them for the on-board system software of the National Aeronautics and Space Administration's (NASA's) space shuttle is presented. It is shown that an integrated reliability program comprises three separate but related functions: prediction, control, and assessment. The application of the reliability model and the allocation of test resources as part of a testing strategy are discussed.
4.
5.
Schneidewind, N.F. Computer, 1977, 10(4): 47-53
Two particularly nagging ailments faced by the computer industry today are the high cost and late delivery of software [1, 2]. The symptoms usually surface during software debugging, testing, and integration; but the ailments themselves can most often be traced back to the program design phase and the structural characteristics of the program. The significance of program structural characteristics has been recognized for some time, as witnessed by the emergence of structured programming [3, 4], a methodology that sets out to (1) reduce programming errors; (2) design an understandable, readable, and therefore maintainable program; (3) increase our ability to detect errors; and (4) prove, if only informally, that the program is correct. But there is another tool available that has usually been overlooked in the software development process: simulation.
6.
The possibility of obtaining more accurate predictions of future failures by excluding, or giving lower weight to, the earlier failure counts is suggested. Although data aging techniques such as moving averages and exponential smoothing are frequently used in other fields, such as inventory control, the author did not find use of data aging in the various models surveyed. A model that includes the concept of selecting a subset of the failure data is the Schneidewind nonhomogeneous Poisson process (NHPP) software reliability model. In order to use the concept of data aging, there must be a criterion for determining the optimal value of the starting failure count interval. Four criteria for identifying the optimal starting interval for estimating model parameters are evaluated. The first two criteria treat the failure count interval index as a parameter by substituting model functions for data vectors and optimizing on functions obtained from maximum likelihood estimation techniques. The third uses weighted least squares to maintain constant variance in the presence of the decreasing failure rate assumed by the model. The fourth criterion is the familiar mean square error. It is shown that significantly improved reliability predictions can be obtained by using a subset of the failure data. The US Space Shuttle on-board software is used as an example.
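As a rough illustration of the data-aging idea, the sketch below fits the Schneidewind mean value function to only the failure counts from candidate starting interval s onward and keeps the s that minimizes mean square error (the fourth criterion above). The failure counts are synthetic, and ordinary least squares stands in for the paper's maximum likelihood estimation:

```python
# Hedged sketch of "data aging": fit the Schneidewind NHPP mean value
# function m(k) = (alpha/beta) * (1 - exp(-beta*k)) to only the failure
# counts from interval s onward, and pick the starting interval s that
# minimizes mean square error. Data and parameter choices are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(k, alpha, beta):
    """Expected cumulative failures k intervals after the starting point."""
    return (alpha / beta) * (1.0 - np.exp(-beta * k))

# Synthetic per-interval failure counts (e.g., failures observed per week).
counts = np.array([6, 5, 7, 4, 4, 3, 3, 2, 2, 1, 1, 1, 0, 1, 0])
t = len(counts)

best = None
for s in range(1, t - 3):                      # leave enough points to fit
    k = np.arange(1, t - s + 2, dtype=float)   # re-indexed intervals s..t
    cum = np.cumsum(counts[s - 1:])            # cumulative failures from s
    try:
        (alpha, beta), _ = curve_fit(mean_value, k, cum, p0=(5.0, 0.1))
    except RuntimeError:
        continue                               # fit failed for this s
    mse = np.mean((mean_value(k, alpha, beta) - cum) ** 2)
    if best is None or mse < best[0]:
        best = (mse, s, alpha, beta)

mse, s, alpha, beta = best
print(f"optimal starting interval s={s}, alpha={alpha:.2f}, "
      f"beta={beta:.3f}, MSE={mse:.2f}")
```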
7.
Can object-oriented methods be applied to mathematical software? Apparently so, according to Beall and Shepard (An object-oriented framework for reliable numerical simulations, object-oriented software, Addison Wesley, Reading, 1994), who say: “It has been recognized in a number of different fields that object-oriented programming, in general, and software frameworks, in particular, provide a means to allow the efficient construction and maintenance of large scale software systems. Since general purpose numerical analysis codes certainly qualify as large-scale software it makes sense for us to see how these methodologies can be applied to this field.”
8.
Software quality control and prediction model for maintenance
We develop a quality control and prediction model for improving the quality of software delivered by development to maintenance. This model identifies modules that require priority attention during development and maintenance by using Boolean discriminant functions. The model also predicts during development the quality that will be delivered to maintenance, using both point and confidence interval estimates of quality. We show that it is important to perform a marginal analysis when deciding how many metrics to include in a discriminant function. If many metrics are added at once, the contribution of individual metrics is obscured. Also, the marginal analysis provides an effective rule for deciding when to stop adding metrics. We also show that certain metrics are dominant in their effects on classifying quality and that additional metrics are not needed to increase the accuracy of classification. Related to this property of dominance is the property of concordance, which is the degree to which a set of metrics produces the same result in classifying software quality. A high value of concordance implies that additional metrics will not make a significant contribution to accurately classifying quality; hence, these metrics are redundant. Data from the Space Shuttle flight software are used to illustrate the model process.
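A minimal sketch of the Boolean discriminant function idea follows; the metric names, thresholds, module data, and the greedy form of the marginal analysis are illustrative assumptions, not values from the Space Shuttle data:

```python
# Hedged sketch of a Boolean discriminant function (BDF): a module is
# flagged for priority attention if ANY of its selected metrics exceeds
# that metric's critical threshold. Metrics, thresholds, and modules here
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    metrics: dict  # metric name -> measured value

# Critical values; in the paper these would be derived from training data.
THRESHOLDS = {"cyclomatic_complexity": 15, "statements": 100, "eta": 300}

def bdf(module: Module, metric_names: list) -> bool:
    """True (priority attention) if any selected metric exceeds its threshold."""
    return any(module.metrics[m] > THRESHOLDS[m] for m in metric_names)

def accuracy(modules, labels, metric_names):
    """Fraction of modules whose BDF classification matches the known label."""
    return sum(bdf(m, metric_names) == y
               for m, y in zip(modules, labels)) / len(modules)

modules = [
    Module("a", {"cyclomatic_complexity": 22, "statements": 80,  "eta": 150}),
    Module("b", {"cyclomatic_complexity": 5,  "statements": 40,  "eta": 120}),
    Module("c", {"cyclomatic_complexity": 9,  "statements": 210, "eta": 500}),
]
labels = [True, False, True]  # known post-release quality classification

# Marginal analysis: add metrics one at a time and keep only those that
# improve classification accuracy; stop adding when the gain is zero.
selected, best = [], 0.0
for metric in THRESHOLDS:
    trial = accuracy(modules, labels, selected + [metric])
    if trial > best:
        selected.append(metric)
        best = trial
print(f"selected metrics: {selected}, accuracy: {best:.2f}")
```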
9.
Failure rate parameters exert a great deal of influence on the accuracy of software reliability predictions. Thus, it behooves us to examine the methods used to estimate these parameters. The values of the parameters are determined by the pattern of historical failure data and the amount of data used in the estimates. Conventional wisdom suggests that all the data be used. However, this could be counterproductive if all the data are not representative of the current and future failure process. Therefore, we can analyze these factors to obtain the optimal parameter estimates, where “optimal” means parameter values that will result in minimum prediction error. We examine one software reliability model that does not use all the failure data if a priori analysis indicates that optimal parameter values could not otherwise be obtained. Historically, this model has used a mean square error (MSE) criterion on the difference between actual and predicted reliability in the parameter estimation range. However, it has been observed that this policy does not always result in minimum-error predictions in the prediction range. Therefore, we investigated the linear interpolation estimation method as an alternative to MSE for three NASA shuttle flight software releases comprising forty-two reliability predictions. Interestingly, linear interpolation provided better prediction accuracy overall. We believe our analysis can be applied to other software and failure data. In order to systematize our thinking on developing new models using linear interpolation, we employed object-oriented design concepts as guides to implementing C++ programs for evaluating linear interpolation with respect to MSE. The author is a Fellow of the IEEE and served as an IEEE Congressional Fellow in the US Senate in 2005.
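To make the contrast concrete, the toy sketch below compares the two estimation policies on synthetic cumulative failure counts, using a straight line as a deliberately simple stand-in for the reliability model; none of the data or model choices come from the paper:

```python
# Hedged sketch contrasting the two policies named in the abstract:
# parameters chosen by minimizing mean square error (MSE) over the
# estimation range versus a simple linear interpolation through that
# range's endpoints. Data and the straight-line model are illustrative.
import numpy as np

# Cumulative failures per interval: estimation range then prediction range.
cum = np.array([2, 5, 9, 12, 14, 15, 17, 18, 18, 19], dtype=float)
est_end = 6                       # intervals 1..6 used for estimation
x_est = np.arange(1, est_end + 1)
x_pred = np.arange(est_end + 1, len(cum) + 1)

# Policy 1: least-squares (MSE-minimizing) straight line over the range.
slope_mse, icpt_mse = np.polyfit(x_est, cum[:est_end], 1)

# Policy 2: linear interpolation through the range's first and last points.
slope_li = (cum[est_end - 1] - cum[0]) / (est_end - 1)
icpt_li = cum[0] - slope_li * 1

for name, m, b in [("MSE fit", slope_mse, icpt_mse),
                   ("linear interpolation", slope_li, icpt_li)]:
    pred = m * x_pred + b
    err = np.mean((pred - cum[est_end:]) ** 2)
    print(f"{name}: prediction-range MSE = {err:.2f}")
```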
10.
We present measurements of the field-induced changes in the 47 GHz complex resistivity, Δρ̃(H, T), in Tl2Ba2CaCu2O8+x (TBCCO) thin films with Tc ≃ 105 K, prepared on CeO2-buffered sapphire substrates. At low fields (μ0H < 10 mT) a very small irreversible feature is present, suggesting a minor role of intergranular phenomena. Above that level Δρ̃(H, T) exhibits a superlinear dependence on the field, as opposed to the expected (at high frequencies) quasilinear behaviour. We observe a crossover from a predominantly imaginary to a predominantly real (dissipative) response with increasing temperature and/or field. In addition, we find the clear scaling property Δρ̃(H, T) = Δρ̃[H/H*(T)], where the scaling field H*(T) maps closely the melting field measured in single crystals. We discuss our microwave results in terms of the loss of flux-line rigidity.
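Spelled out, the scaling collapse reported above has the form below; the master curve g is whatever single-variable function the collapsed data trace, and is not specified in the abstract:

```latex
\Delta\tilde{\rho}(H, T) = g\!\left(\frac{H}{H^{*}(T)}\right),
\qquad H^{*}(T) \approx H_{\mathrm{melt}}(T)
```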