1.
This paper [Tian L, Noore A. Evolutionary neural network modelling for software cumulative failure time prediction. Reliab Eng Syst Saf 2005; 87:45–51] purports to present a useful means of predicting the cumulative failure time function for software reliability growth. In fact, the nature of the ‘prediction’ is too simplistic to be of use. Furthermore, the authors' claims for the accuracy of the predictions appear to be without value.
2.
3.
Abstract

Problem, research strategy, and findings: Historical patterns of discrimination and disinvestment have shaped the current landscape of vulnerability to heat in U.S. cities but are not explicitly considered by heat mitigation planning efforts. Drawing upon the equity planning framework and developing a broader conceptualization of what equity means can enhance urban heat management. Here I ask whether areas in Baltimore (MD), Dallas (TX), and Kansas City (MO) targeted for disinvestment in the past through practices like redlining are now more exposed to heat. I compare estimates of land surface temperature (LST) derived from satellite imagery across the four-category rating system used to guide lending practices in cities around the United States, summarize the demographic characteristics of current residents within each of these historical designations using U.S. Census data, and discuss the connection between systematic disinvestment and exposure to heat. LST and air temperatures are not equivalent, which makes it difficult to reconcile existing research on the human health impacts of heat exposure that rely on a sparse network of air temperature monitoring stations with more granular LST data. Areas of these cities that were targeted for systematic disinvestment in the past have higher mean land surface temperatures than those that received more favorable ratings. Poor and minority residents are also overrepresented in formerly redlined areas in each of the three study cities.

Takeaway for practice: By examining areas that have experienced sustained disinvestment, cities may be able to more quickly narrow the focus of heat mitigation planning efforts while furthering social equity. Efforts to mitigate the negative impacts of rising temperatures in U.S. cities must be tailored to the local climate, built environment, and sociodemographic history. Finally, geospatial data sets that document historical policies are useful for centering and redressing current inequalities when viewed through an equity planning lens.
4.
Recent models for the failure behaviour of systems involving redundancy and diversity have shown that common mode failures can be accounted for in terms of the variability of the failure probability of components over operational environments. Whenever such variability is present, we can expect the overall system reliability to be less than it would be if the components could be assumed to fail independently. We generalise a model of hardware redundancy due to Hughes [Hughes RP. A new approach to common cause failure. Reliab Engng 1987; 17:211–236] and show that with forced diversity this unwelcome result no longer applies: in fact, it becomes theoretically possible to do better than would be the case under independence of failures. An example shows how the new model can be used to estimate redundant system reliability from component data.
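The qualitative point of this abstract can be illustrated with a small simulation (the distributions and numbers below are my own illustrative choices, not taken from the paper): when two identical redundant components share an environment that modulates their failure probability, the pair fails together more often than independence predicts, whereas a deliberately "diverse" second version, strong where the first is weak, can fail jointly less often than independence predicts.

```python
import random

random.seed(42)

# Hypothetical sketch: an "environment" x is drawn per demand, and each
# component's probability of failure on that demand depends on x.  A
# 1-out-of-2 system fails only if both components fail on the same demand.

def p_fail(x):
    # failure probability of version A in environment x (illustrative)
    return 0.01 + 0.18 * x            # harsher environments -> more failures

def q_fail(x):
    # a deliberately diverse version B, weak where A is strong
    return 0.01 + 0.18 * (1.0 - x)

N = 200_000
both_identical = both_diverse = fail_a = 0
for _ in range(N):
    x = random.random()               # environment for this demand
    a1 = random.random() < p_fail(x)  # copy 1 of version A
    a2 = random.random() < p_fail(x)  # copy 2 of version A
    b = random.random() < q_fail(x)   # version B
    both_identical += a1 and a2
    both_diverse += a1 and b
    fail_a += a1

pfd_a = fail_a / N                    # single-version probability of failure
print("independent-failure prediction :", pfd_a ** 2)
print("identical pair (shared env)    :", both_identical / N)
print("diverse pair (forced diversity):", both_diverse / N)
```

The identical pair comes out worse than the independence prediction (the shared environment correlates the failures), while the anti-correlated diverse pair comes out better, matching the abstract's claim.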
5.
It is increasingly argued that uncertainty is an inescapable feature of the design and operational behaviour of software-intensive systems. This paper elaborates the role of models in managing such uncertainty, in relation to evidence and claims for dependability. Personal and group models are considered with regard to abstraction, consensus and corroboration. The paper focuses on the predictive property of models, arguing for the need for empirical validation of their trustworthiness through experimentation and observation. The impact on trustworthiness of human fallibility, formality of expression and expressiveness is discussed. The paper identifies two criteria for deciding the degree of trust to be placed in a model, and hence also for choosing between models, namely accuracy and informativeness. Finally, analogy and reuse are proposed as the only means by which empirical evidence can be established for models in software engineering.
6.
Abstract

A personal experience of political action against genetic engineering in New Zealand led the writer to reflect on the opportunities provided by science and technology curricula for students to develop critical biotechnological literacy. An analysis of relevant New Zealand science and technology curricula shows that although there are opportunities for genetic engineering issues to be explored, such teaching situations are not commonplace. In this study, the literature on the components of scientific literacy for citizenship is reviewed and a strategy for developing critical biotechnological literacy is suggested.
7.
8.
The paper criticises the underlying assumptions made in much early modeling of computer software reliability. The following suggestions will improve modeling:
1) Do not apply hardware techniques to software without thinking carefully. Software differs from hardware in important respects; we ignore these at our peril. In particular:
2) Do not use MTTF or MTBF for software unless certain that they exist. Even then, remember that:
3) Distributions are always more informative than moments or parameters, so try to avoid commitment to a single measure of reliability. Anyway:
4) There are better measures than MTTF. Percentiles and failure rates are more intuitively appealing than means.
5) Software reliability means operational reliability. Who cares how many bugs are in a program? We should be concerned with their effect on its operation. In fact:
6) Bug identification (and elimination) should be separated from reliability measurement, if only to ensure that the measurers do not have a vested interest in getting good results.
7) Use a Bayesian approach and do not be afraid to be subjective. All our statements will ultimately be about our beliefs in the quality of programs.
8) Do not stop at a reliability analysis; try to model life-time utility (or cost) of programs.
9) Now is the time to devote effort to structural models.
10) Structure should be of a kind appropriate to software, e.g. top-down, modular.
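The warning about MTTF (suggestions 2 and 4) can be made concrete with a small sketch of my own (the distribution and parameters are illustrative, not from the paper): a heavy-tailed time-to-failure distribution can have no finite mean at all, so a "measured MTTF" is meaningless noise, while its percentiles remain perfectly well defined and easy to estimate.

```python
import random

random.seed(0)

# Illustrative sketch: a Pareto-distributed time to failure T with shape
# a <= 1 has an infinite mean (no MTTF exists), yet its median is finite
# and stable.  Percentiles survive where moments do not.

a, xm = 0.8, 1.0                      # shape a <= 1  ->  E[T] diverges

def sample_failure_time():
    # inverse-CDF sampling of Pareto(a, xm)
    return xm / random.random() ** (1.0 / a)

times = sorted(sample_failure_time() for _ in range(100_000))
sample_mean = sum(times) / len(times)         # unstable, dominated by the tail
median_est = times[len(times) // 2]           # stable percentile estimate
median_exact = xm * 2 ** (1.0 / a)            # closed form for the median

print("sample 'MTTF' (meaningless) :", sample_mean)
print("estimated median            :", median_est)
print("exact median                :", median_exact)
```

Re-running with different seeds leaves the median estimate essentially unchanged while the sample mean swings wildly, which is exactly why the paper prefers percentiles to MTTF.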
9.
An assumption commonly made in early models of software reliability is that the failure rate of a program is a constant multiple of the (unknown) number of faults remaining. This implies that all faults contribute the same amount to the failure rate of the program. The assumption is challenged and an alternative proposed. The suggested model results in earlier fault-fixes having a greater effect than later ones (the faults which make the greatest contribution to the overall failure rate tend to show themselves earlier, and so are fixed earlier), and the decreasing failure rate (DFR) property between fault fixes (assurance about programs increases during periods of failure-free operation, as well as at fault fixes). The model is tractable and allows a variety of reliability measures to be calculated. Predictions of total execution time to achieve a target reliability, and total number of fault fixes to target reliability, are obtained. The model might also apply to hardware reliability growth resulting from the elimination of design errors.
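The mechanism the abstract describes can be sketched with a short simulation (the rate distribution and sizes are my own illustrative choices, not the paper's): if each fault contributes its own rate to the program's total failure rate and first manifests after an exponentially distributed amount of execution time, then fixing faults in order of appearance removes the largest rates first, so early fixes improve reliability more than late ones.

```python
import random

random.seed(1)

# Sketch: fault i has its own rate r_i (unequal, here drawn from a gamma
# distribution) and first shows itself after an Exponential(r_i) amount of
# execution time.  Sorting by manifestation time gives the fix order.

N = 1_000
rates = [random.gammavariate(0.5, 2.0) for _ in range(N)]      # unequal r_i
first_seen = sorted((random.expovariate(r), r) for r in rates)  # fix order

fixed_rates = [r for _, r in first_seen]
early = sum(fixed_rates[:100]) / 100    # mean rate removed by first 100 fixes
late = sum(fixed_rates[-100:]) / 100    # mean rate removed by last 100 fixes

print("mean rate removed by early fixes:", early)
print("mean rate removed by late fixes :", late)
```

The early fixes remove much larger per-fault rates than the late ones, reproducing the abstract's claim that the biggest contributors to the failure rate show themselves, and are fixed, first.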
10.
In recent years considerable attention has been focused on the potential of the Internet as a means of health information delivery that can meet varied health information needs and empower patients. In this article, we explore utilization of the Internet as a means of health information consumption amongst young women with breast cancer who were known Internet users. Focusing on a population known to be competent at using the Internet allowed us to eliminate the digital divide as a possible explanation for limited use of the Internet for health information-seeking. Ultimately, this allowed us to demonstrate that even in this Internet-savvy population, the Internet is not necessarily an unproblematic means of disseminating health care information, and that the huge amount of health care information available does not automatically mean that information is useful to those who seek it, or even particularly easy to find. Results from our qualitative study suggest that young women with breast cancer sought information about their illness in order to make a health-related decision, to learn what would come next, or to pursue social support. Our respondents reported that the Internet was one source of many that they consulted when seeking information about their illness, and it was neither the most trusted nor the most utilized source of information for this population.