Similar Documents
20 similar documents found.
1.
New insights on multi-state component criticality and importance
In this paper, new importance measures for multi-state systems with multi-state components are introduced and evaluated. These new measures complement and enhance current work in the area of multi-state reliability. In general, importance measures are used to evaluate and rank the criticality of components or component states with respect to system reliability. The focus of the study is to provide intuitive and clear importance measures that can be used to enhance system reliability from two perspectives: (1) how a specific component affects multi-state system reliability and (2) how a particular component state or set of states affects multi-state system reliability. The first measure, the unsatisfied demand index, provides insight regarding a component's or component state's contribution to unsatisfied demand. The second measure, the multi-state failure frequency index, quantifies the contribution of a particular component or component state to system failure. Finally, the multi-state redundancy importance identifies where to allocate component redundancy so as to improve system reliability. The findings of this study indicate that the two perspectives complement each other and together form an effective tool for assessing component criticality. Examples illustrate and compare the proposed measures with previous multi-state importance measures.
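As a rough illustration of the first perspective, the sketch below computes an expected-unsatisfied-demand figure for a hypothetical two-component flow system by full state enumeration; the state data and the simple "hold the component at its best state" indicator are assumptions for illustration, not the paper's index definitions.

```python
from itertools import product

# Hypothetical two-component parallel flow system: for each component,
# state -> (capacity, probability). Values are illustrative only.
components = [
    {0: (0.0, 0.1), 1: (1.0, 0.3), 2: (2.0, 0.6)},   # component A
    {0: (0.0, 0.2), 1: (2.0, 0.8)},                  # component B
]
demand = 3.0

def expected_unsatisfied_demand(comps, d):
    """E[max(d - system capacity, 0)] by full state enumeration."""
    eud = 0.0
    for states in product(*(c.keys() for c in comps)):
        cap = sum(comps[j][s][0] for j, s in enumerate(states))
        prob = 1.0
        for j, s in enumerate(states):
            prob *= comps[j][s][1]
        eud += prob * max(d - cap, 0.0)
    return eud

base = expected_unsatisfied_demand(components, demand)
# A naive component-level indicator: drop in expected unsatisfied
# demand when component j is held at its highest-capacity state.
for j, comp in enumerate(components):
    best = max(comp, key=lambda s: comp[s][0])
    fixed = [{best: (comp[best][0], 1.0)} if i == j else c
             for i, c in enumerate(components)]
    improved = expected_unsatisfied_demand(fixed, demand)
    print(f"component {j}: EUD {base:.3f} -> {improved:.3f}")
```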

2.
Binary capacitated two-terminal reliability at demand level d (2TRd) is defined as the probability that the network capacity, generated by binary capacitated components, between specified source and sink nodes is greater than or equal to a demand of d units. For the components that comprise these networks, reliability estimates are usually obtained from some source of testing. Depending on the type of testing, these estimates carry an associated uncertainty that can significantly affect the overall estimation of 2TRd. That is, an accurate estimate of 2TRd is highly dependent on the uncertainty associated with the reliability of the network components. Current methods for the estimation of network reliability and its associated uncertainty are restricted to the case where the network follows a series-parallel architecture and the components are binary and non-capacitated. For other capacitated network designs, 2TRd can only be approximated for specific scenarios. This paper presents a bounding approach for 2TRd that explains how component reliability and its associated uncertainty impact estimates at the network level. The proposed method is based on a structured approach that generates an α-level confidence interval (CI) for binary capacitated two-terminal network reliability. Simulation results on different test networks show that the proposed method can be used to develop very accurate bounds on two-terminal network reliability.
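For orientation only, a crude Monte Carlo point estimate of 2TRd can be written in a few lines; the paper's contribution is a bounding approach that additionally propagates component-level uncertainty into an α-level CI, which this sketch does not attempt. The network, capacities and reliabilities below are hypothetical.

```python
import random
import networkx as nx

# Hypothetical network: edge -> (capacity when working, reliability).
edges = {("s", "a"): (3, 0.95), ("s", "b"): (2, 0.90),
         ("a", "b"): (1, 0.99), ("a", "t"): (2, 0.95),
         ("b", "t"): (3, 0.90)}

def estimate_2TRd(edges, d, n_samples=20_000, seed=1):
    """Crude Monte Carlo estimate of P(max s-t flow >= d) when each
    binary capacitated edge works with its stated reliability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        G = nx.DiGraph()
        for (u, v), (cap, rel) in edges.items():
            G.add_edge(u, v, capacity=cap if rng.random() < rel else 0)
        if nx.maximum_flow_value(G, "s", "t") >= d:
            hits += 1
    return hits / n_samples

print(estimate_2TRd(edges, d=4))
```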

3.
An active research field is the evaluation of the reliability of a complex network. The most popular methods for such evaluation use the Minimal Paths (MPs) or Minimal Cuts (MCs) of the network. This paper proposes an algorithmic approach to enumerate the MCs of a directed network for use in evaluating its reliability measures. The paper also attempts to answer the question of when MPs or MCs are the more suitable basis for evaluating reliability measures, and an exhaustive study has been conducted to provide guidelines in this respect. Copyright © 2015 John Wiley & Sons, Ltd.
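As a baseline (not the paper's algorithm, which is more refined), the MCs of a small directed network can be enumerated by brute force over edge subsets; the example network is hypothetical.

```python
from itertools import combinations
import networkx as nx

# Hypothetical directed network between source "s" and sink "t".
edges = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]
nodes = {v for e in edges for v in e}

def is_cut(removed):
    """True if deleting the edge set `removed` disconnects s from t."""
    G = nx.DiGraph()
    G.add_nodes_from(nodes)
    G.add_edges_from(e for e in edges if e not in removed)
    return not nx.has_path(G, "s", "t")

cuts = [set(c) for r in range(1, len(edges) + 1)
        for c in combinations(edges, r) if is_cut(set(c))]
# An MC is a cut with no proper subset that is also a cut.
minimal_cuts = [c for c in cuts if not any(o < c for o in cuts)]
print(minimal_cuts)
```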

4.
In reliability analysis, the stress-strength model is often used to describe the life of a component that has a random strength (X) and is subjected to a random stress (Y). In this paper, we consider the problem of estimating the reliability R = P[Y < X] when stress and strength are independent and both follow the exponentiated Pareto distribution. The maximum likelihood estimator of the stress-strength reliability is calculated under simple random sampling, ranked set sampling and median ranked set sampling. Four different reliability estimators under median ranked set sampling are derived: two are obtained when both strength and stress have an odd or an even set size, and the other two when the strength has an odd set size and the stress an even one, and vice versa. The performances of the suggested estimators are compared with their competitors under simple random sampling via a simulation study. The simulation study revealed that the stress-strength reliability estimates based on ranked set sampling and median ranked set sampling are more efficient than their competitors based on simple random sampling. In general, the stress-strength reliability estimates based on median ranked set sampling are smaller than the corresponding estimates under ranked set sampling and simple random sampling.
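For context, the exponentiated Pareto CDF is F(x) = (1 − (1 + x)^(−λ))^α for x > 0, and when stress and strength share a common λ the reliability has the known closed form R = α_X/(α_X + α_Y). The sketch below checks this by plain simple-random-sampling simulation; parameter values are hypothetical, and the RSS and MRSS estimators studied in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def rexp_pareto(alpha, lam, size, rng):
    """Inversion sampling from F(x) = (1 - (1 + x)**(-lam))**alpha."""
    u = rng.random(size)
    return (1.0 - u ** (1.0 / alpha)) ** (-1.0 / lam) - 1.0

a_x, a_y, lam, n = 2.0, 1.0, 1.5, 200_000   # hypothetical shapes
x = rexp_pareto(a_x, lam, n, rng)           # strength
y = rexp_pareto(a_y, lam, n, rng)           # stress
print("simulated R:", (y < x).mean())
print("closed form a_x/(a_x + a_y):", a_x / (a_x + a_y))
```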

5.
This work focuses on proving implications between criteria for ageing of a system with a well-defined survival time distribution. Specifically, two reliability measures, the mean residual lifetime (MRL) and the hazard rate (HR), are considered. The article deals with the question of whether a specific ageing property or criterion with respect to one reliability measure implies the same property, or some other criterion, with respect to the other measure. Implications between various criteria, additional consequences regarding various non-parametric classes of life distributions and particular features of MRL functions are derived and discussed using a characterization result for such functions.
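For context, the standard textbook definitions and the relation connecting the two measures, for a lifetime T with density f and survival function F̄, are:

```latex
h(t) = \frac{f(t)}{\bar F(t)}, \qquad
m(t) = \mathbb{E}[T - t \mid T > t]
     = \frac{\int_{t}^{\infty} \bar F(u)\,\mathrm{d}u}{\bar F(t)},
\qquad
h(t) = \frac{1 + m'(t)}{m(t)}.
```

From the last identity one obtains the best-known implication of this type: an increasing hazard rate (IFR) implies a decreasing mean residual lifetime (DMRL), while the converse fails in general.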

6.
Markov models are an established part of current systems reliability and availability analysis. They are extensively used in various applications, including, in particular, electrical power supply systems. One of their advantages is that they considerably simplify availability evaluation, so that the availability of very large and complex systems can be computed. It is generally assumed, with some justification, that the results obtained from such Markov reliability models are relatively robust. It has, however, been known for some time that practical time-to-failure distributions are frequently non-exponential, with particular attention given in much reliability work to the Weibull family. Moreover, additional doubt has recently been cast on the validity of the Markov approach, both because of the work of Professor Kline and others on the non-exponentiality of practical repair time distributions, and because of the advantages, in terms of modelling visibility, offered by the alternative simulation approach. In this paper we employ results on the ability of k-out-of-n systems to span the coherent set to investigate the robustness of Markov reliability models, based upon a simulation investigation of coherent systems of up to 10 identical components. We treat the case where adequate repair facilities are available for all components. The effects of Weibull departures from exponentiality upon the conventional transient and steady-state measures are considered. In general, the Markov models are found to be relatively robust, with alterations to failure distributions being more important than alterations to repair distributions, and decreasing hazard rates more critical than increasing hazard rates. Of the measures studied, the mean time to failure is most sensitive to variations in distributional shape.
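The paper's study concerns repairable systems; as a minimal non-repairable illustration of the same sensitivity question, the following sketch compares the simulated MTTF of a k-out-of-n system under exponential versus Weibull component lives with equal means. All parameter choices are assumptions.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(42)

def mttf_k_of_n(sample_lives, k, n, reps=100_000):
    """Simulated MTTF of a non-repairable k-out-of-n system: the
    system fails at the (n - k + 1)-th component failure."""
    lives = sample_lives((reps, n))
    return np.sort(lives, axis=1)[:, n - k].mean()

n, k = 10, 7                 # up to the paper's 10 identical components
shape = 0.7                  # a decreasing-hazard Weibull, mean kept at 1
scale = 1.0 / gamma(1.0 + 1.0 / shape)
expo = lambda size: rng.exponential(1.0, size)
weib = lambda size: scale * rng.weibull(shape, size)

print("exponential MTTF :", mttf_k_of_n(expo, k, n))
print("Weibull(0.7) MTTF:", mttf_k_of_n(weib, k, n))
```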

7.
The reliability equivalence factors of a parallel system with n independent and identical components are obtained, where the failure rates of the system's components are assumed constant. Three different methods are used to improve the given system. The mean times to failure of the original and improved systems are obtained, and the mean times to failure of the improved systems produced by the different methods are compared. Numerical studies are presented to compare the different reliability equivalence factors obtained. The results generalize results given in the literature, which are recovered by setting n = 1, 2.
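A standard baseline fact relevant to such comparisons (a textbook result, not the paper's equivalence factors themselves): for the original parallel system of n i.i.d. components with constant failure rate λ,

```latex
R_{\mathrm{sys}}(t) = 1 - \bigl(1 - e^{-\lambda t}\bigr)^{n},
\qquad
\mathrm{MTTF} = \int_{0}^{\infty} R_{\mathrm{sys}}(t)\,\mathrm{d}t
              = \frac{1}{\lambda} \sum_{i=1}^{n} \frac{1}{i}.
```

The harmonic-number growth of the MTTF is what any improvement method is measured against.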

8.
Main specifications are formulated for methods of determining control measures of concrete quality. The expediency of applying combined methods in some cases is substantiated. The known approaches to the application of combined methods are summarized; they arise from the need to improve the reliability and accuracy of determining the control measure and to reduce the labour consumed by testing.

9.
This paper presents a probabilistic methodology for nonlinear fracture analysis, intended to support decisions on the repair and operational optimization of cracked structures; it involves nonlinear finite element analysis. Two methods are studied for coupling the finite element code with reliability software: the direct method and the quadratic response surface method. To ensure the efficiency of the response surface, we introduce new quality measures into the convergence scheme. An example of a cracked pipe is presented to illustrate the proposed methodology. The results show that the methodology is able to give an accurate probabilistic characterization of the J-integral in elastic-plastic fracture mechanics without excessive computation time. By introducing an "analysis re-using" technique, we show how the response surface method becomes cost-attractive in the case of incremental finite element analysis.
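A minimal sketch of the general quadratic-response-surface idea, with a cheap stand-in function in place of the expensive nonlinear finite element run; the design, model and failure threshold are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_model(x):
    """Stand-in for the costly nonlinear FE response (illustrative)."""
    return 1.0 + 0.8 * x[..., 0] + 0.5 * x[..., 1] + 0.3 * x[..., 0] * x[..., 1]

def quad_features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2], axis=-1)

# Fit the quadratic response surface on a small design of experiments.
doe = rng.normal(size=(30, 2))
coef, *_ = np.linalg.lstsq(quad_features(doe), expensive_model(doe), rcond=None)

# Cheap Monte Carlo on the surrogate instead of the FE model:
samples = rng.normal(size=(1_000_000, 2))
response = quad_features(samples) @ coef
print("P(response > threshold) ~", (response > 2.5).mean())
```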

10.
Journal impact measures in bibliometric research
Glänzel, Wolfgang; Moed, Henk F. Scientometrics, 2002, 53(2): 171-193
The Impact Factor introduced by Eugene Garfield is a fundamental citation-based measure of the significance and performance of scientific journals. It is perhaps the most popular bibliometric product, used within bibliometrics itself as well as outside the scientific community. First, a concise review of the background and history of the ISI Impact Factor and the basic ideas underlying it is given. A cross-citation matrix is used to visualise the construction of the Impact Factor and several related journal citation measures. Both strengths and flaws of the Impact Factor are discussed. Several attempts made by different authors to introduce more sophisticated journal citation measures are described, along with the reasons why many indicators aiming to correct methodological limitations of the Impact Factor were not successful. The next section is devoted to the analysis of basic technical and methodological aspects; in this context, the most important sources of possible biases and distortions in the calculation and use of journal citation measures are studied. Thereafter, the main characteristics of application contexts are summarised. The last section is concerned with questions of the statistical reliability of journal citation measures. It is shown that, in contrast to a common misconception, statistical methods can be applied to discrete "skewed" distributions, and that the statistical reliability of these statistics can serve as a basis for the application of journal impact measures in comparative analyses. Finally, the question of the sufficiency or insufficiency of a single, however complex, measure for characterising the citation impact of scientific journals is discussed.
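For reference, the classical two-year Impact Factor reviewed in the paper can be stated and computed as follows; the counts are hypothetical.

```python
# Two-year Impact Factor of journal J in year y:
#   IF(J, y) = C(y; y-1, y-2) / (N(y-1) + N(y-2)),
# where C counts year-y citations to items J published in the two
# preceding years and N counts citable items. Toy numbers:
citations_in_2002_to = {2001: 410, 2000: 390}
citable_items = {2001: 180, 2000: 170}
IF = sum(citations_in_2002_to.values()) / sum(citable_items.values())
print(f"IF = {IF:.3f}")   # 800 / 350 = 2.286
```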

11.
Modeling of system lifetimes becomes more complicated when external events can cause the simultaneous failure of two or more system components. Models that ignore these common cause failures lead to methods of analysis that overestimate system reliability. Typical data consist of observed frequencies with which i out of m (identical) components in a system failed simultaneously, i = 1, …, m. Because such attribute data inherently depend on the number of components in the system, procedures for interpreting data from groups with more or fewer components than the system under study are not straightforward. This is a recurrent problem in reliability applications in which component configurations change from one system to the next. For instance, in the analysis of a large power-supply system that has three standby diesel generators in case of power loss, statistical tests and estimates of system reliability cannot be derived easily from data pertaining to different plants in which only one or two diesel generators were used to reinforce the main power source. This article presents, discusses, and analyzes methods to use generic attribute reliability data efficiently for systems of varying size.
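One common heuristic from the common-cause-failure literature for relating attribute data across system sizes, stated here as an assumption rather than as the article's method, maps i-out-of-m frequencies down to a smaller n-component system by hypergeometric subsetting:

```python
from math import comb

def map_down(freq_m, m, n):
    """Map i-out-of-m simultaneous-failure frequencies onto an
    n-component system (n < m) by hypergeometric subsetting: the chance
    that a random n-subset contains exactly k of the i failed units."""
    return [sum(freq_m.get(i, 0.0)
                * comb(i, k) * comb(m - i, n - k) / comb(m, n)
                for i in range(k, m - n + k + 1))
            for k in range(n + 1)]

# Hypothetical event rates for a 3-generator plant: freq[i] = rate of
# events failing exactly i of the 3 generators.
freq_3 = {1: 0.9, 2: 0.08, 3: 0.02}
print(map_down(freq_3, m=3, n=2))   # rates for 0, 1, 2 of 2 failing
```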

12.
In an era of high-stakes testing and evaluation in education, psychology, and health care, there is a need for rigorous methods and standards for obtaining evidence of the reliability of measures and the validity of inferences. Messick (1989, 1995), the Standards for Educational and Psychological Testing (American Psychological Association, American Educational Research Association, and National Council on Measurement in Education, 1999), and the Medical Outcomes Trust (1995), among others, have described methods that may be used to gather evidence for reliability and validity, but have ignored the potential contribution of Rasch measurement to this process. This article outlines methods in Rasch measurement that are used to gather evidence for reliability and validity, and attempts to articulate how these methods may be linked with current views of reliability and validity.
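For context, the dichotomous Rasch model underlying these methods has the standard form, with θ_v the ability of person v and β_i the difficulty of item i:

```latex
P(X_{vi} = 1 \mid \theta_v, \beta_i)
  = \frac{e^{\theta_v - \beta_i}}{1 + e^{\theta_v - \beta_i}}.
```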

13.
Despite the recent revolution in statistical thinking and methodology, practical reliability analysis and assessment remains almost exclusively based on a black-box approach employing parametric statistical techniques and significance tests. Such practice, which is largely automatic for the industrial practitioner, implicitly involves a large number of physically unreasonable assumptions that in practice are rarely met. Extensive investigation of reliability source data reveals a variety of differing data structures, which contradict the assumptions implicit in the usual methodology. In addition, lack of homogeneity in the data, due, for instance, to multiple failure modes or misdefinition of the environment, is commonly overlooked by the standard methodology. In this paper we argue the case for exploring reliability data. The pattern revealed by such exploration of a data set provides intrinsic information which helps to reinforce and reinterpret the engineering knowledge about the physical nature of the technological system to which the data refer. Employed in this way, the data analyst and the reliability engineer are partners in an iterative process aimed at a greater understanding of the system and the process of failure. Despite current standard practice, the authors believe it to be critical that the structure of the data analysis reflect the structure in the failure data. Although the standard methodology provides an easy and repeatable analysis, the authors' experience indicates that it is rarely an appropriate one. It is ironic that, whereas methods to analyse the data structures commonly found in reliability data have been available for some time, insistence on the standard black-box approach has prevented the identification of such ‘abnormal’ features in reliability data and the application of these methods. We discuss simple graphical procedures to investigate the structure of reliability data, as well as more formal testing procedures which assist in decision making. Partial reviews of such methods have appeared previously, and a more detailed development of the exploration approach and of the appropriate analysis it implies will be dealt with elsewhere. Here, our aim is to argue the case for the reliability analyst to LOOK AT THE DATA, and to analyse it accordingly.
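A classic example of such a graphical procedure, included here as a standard exploratory tool rather than one drawn from this paper, is the scaled total-time-on-test (TTT) plot; the failure times are made up.

```python
import numpy as np
import matplotlib.pyplot as plt

def ttt_plot(times):
    """Scaled total-time-on-test plot: a curve above the diagonal
    roughly suggests an increasing hazard rate, below a decreasing one."""
    t = np.sort(np.asarray(times, dtype=float))
    n = len(t)
    # S(i) = sum of the first i order statistics + (n - i) * t_(i)
    ttt = np.cumsum(t) + (n - np.arange(1, n + 1)) * t
    plt.plot(np.arange(1, n + 1) / n, ttt / ttt[-1], marker="o")
    plt.plot([0, 1], [0, 1], linestyle="--")
    plt.xlabel("i/n")
    plt.ylabel("scaled TTT")
    plt.show()

ttt_plot([12, 35, 51, 70, 96, 118, 145, 180, 240, 310])  # hours, made up
```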

14.
The case of time non-homogeneous Markov systems in discrete time is studied in this article. In order to have measures adapted to this kind of system, several reliability and performability measures are formulated, such as reliability, availability, maintainability and various time variables, including new indicators more dedicated to electrical systems, like the instantaneous expected load curtailed and the expected energy not supplied on a time interval. These indicators are also formulated in the case of cyclic chains, where asymptotic results can be obtained. The interest of taking hazard-rate time variation into account is to obtain more accurate and more instructive indicators, and also to gain access to new performability indicators that cannot be obtained by classical methods. To illustrate this, an example from an Électricité de France electrical substation is solved.
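A minimal sketch of the discrete-time non-homogeneous setting, with a hypothetical time-growing failure probability standing in for hazard-rate variation; only two states are used, and the paper's load-curtailed and energy-not-supplied indicators are not reproduced.

```python
import numpy as np

# Two states: 0 = up, 1 = down. The failure probability grows with
# time, standing in for a time-varying hazard rate (numbers made up).
def P(t):
    p_fail = min(0.01 + 0.002 * t, 0.5)
    p_repair = 0.3
    return np.array([[1.0 - p_fail, p_fail],
                     [p_repair, 1.0 - p_repair]])

pi = np.array([1.0, 0.0])          # start in the up state
A = []                             # instantaneous availability A(t)
for t in range(50):
    A.append(pi[0])
    pi = pi @ P(t)                 # non-homogeneous: P depends on t

print(f"A(0)={A[0]:.3f}  A(10)={A[10]:.3f}  A(49)={A[49]:.3f}")
```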

15.
An evaluation of the measurement reliability is made by a comparison of hydrophone calibration methods in the frequency range from 0.1 Hz to 63 kHz. Translated from Izmeritel'naya Tekhnika, No. 12, pp. 39-43, December 1995.

16.
In the broadest sense, reliability is a measure of the performance of systems. As systems have grown more complex, the consequences of their unreliable behavior have become severe in terms of cost, effort, lives, etc., and both the interest in assessing system reliability and the need for improving the reliability of products and systems have become very important. Most solution methods for reliability optimization assume that systems have redundant components in series and/or parallel configurations and that alternative designs are available; reliability optimization problems then concentrate on the optimal allocation of redundant components and the optimal selection of alternative designs to meet system requirements. In the past two decades, numerous reliability optimization techniques have been proposed. Generally, these techniques can be classified as linear programming, dynamic programming, integer programming, geometric programming, heuristic methods, the Lagrangian multiplier method and so on. The genetic algorithm (GA), as a soft computing approach, is a powerful tool for solving various reliability optimization problems. In this paper, we briefly survey GA-based approaches to various reliability optimization problems, such as reliability optimization of redundant systems, reliability optimization with alternative designs, reliability optimization with time-dependent reliability, reliability optimization with interval coefficients, bicriteria reliability optimization, and reliability optimization with fuzzy goals. We also introduce hybrid approaches that combine GA with fuzzy logic, neural networks and other conventional search techniques. Finally, we report experiments on examples of various reliability optimization problems using the hybrid GA approach.
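To give a concrete flavour of the GA-based approach, here is a minimal sketch for a series-parallel redundancy-allocation problem under a cost budget; the problem data, penalty scheme and GA operators are illustrative assumptions, not taken from the survey.

```python
import random

random.seed(7)

# Series system of 3 subsystems; subsystem i uses n_i parallel copies
# of a component with reliability r[i] and unit cost c[i].
r = [0.80, 0.70, 0.90]
c = [2.0, 3.0, 1.0]
BUDGET, N_MAX = 25.0, 6

def fitness(n):
    if sum(ci * ni for ci, ni in zip(c, n)) > BUDGET:
        return 0.0                          # death penalty for infeasibility
    rel = 1.0
    for ri, ni in zip(r, n):
        rel *= 1.0 - (1.0 - ri) ** ni       # parallel subsystem reliability
    return rel

def ga(pop_size=40, generations=100):
    pop = [[random.randint(1, N_MAX) for _ in r] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(r))
            child = a[:cut] + b[cut:]                # one-point crossover
            i = random.randrange(len(r))             # single-gene mutation
            child[i] = max(1, min(N_MAX, child[i] + random.choice([-1, 1])))
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return best, fitness(best)

print(ga())   # allocation [n1, n2, n3] and its system reliability
```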

17.
Many real-world systems are multistate systems composed of multistate components, in which the reliability can be computed in terms of the lower bound points of level d, called d-MCs. Such systems (electric power, transportation, etc.) may be regarded as flow networks whose arcs have independent, discrete, limited and multivalued random capacities. In this study, all MCs are assumed to be known in advance, and we focus on how to find all the d-MCs before calculating the reliability value of a network. Based directly on the definition of a d-MC, we develop an intuitive algorithm that improves upon the best-known existing method. Analysis of our algorithm and comparison with existing algorithms show that the proposed method is easier to understand and implement. Finally, the computational complexity of the proposed algorithm is analysed and compared with that of the existing methods.
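The candidate-generation step implied by the d-MC definition can be sketched as follows: for a known minimal cut with per-arc maximum capacities, every d-MC restricted to that cut satisfies the level-d sum constraint, and each candidate must still pass a verification step (omitted here) before being accepted as a true d-MC. The cut and capacities are hypothetical.

```python
from itertools import product

def dmc_candidates(max_caps, d):
    """Capacity vectors x on the arcs of one minimal cut with
    sum(x) == d and 0 <= x_a <= the max capacity of arc a."""
    arcs = list(max_caps)
    for levels in product(*(range(max_caps[a] + 1) for a in arcs)):
        if sum(levels) == d:
            yield dict(zip(arcs, levels))

# Hypothetical minimal cut of three arcs with max capacities 3, 2, 2:
for cand in dmc_candidates({"e1": 3, "e2": 2, "e3": 2}, d=4):
    print(cand)
```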

18.
As an efficient data structure for the representation and manipulation of Boolean functions, binary decision diagrams (BDDs) have been applied to network reliability analysis. However, most of the existing BDD methods for network reliability analysis have assumed perfectly reliable vertices, which is often not true for real-world networks, where the vertices can fail because of factors such as limited resources (e.g., power and memory) or harsh operating environments. Extensions have been made to the existing BDD methods (particularly, the edge expansion diagram and boundary set-based methods) to address imperfect vertices, but these extended methods have various constraints leading to problems in accuracy or space efficiency. To overcome these constraints, in this paper we propose a new BDD-based algorithm called ordered BDD dependency test for K-terminal network reliability analysis considering both edge and vertex failures. Based on a newly defined concept, the "dependency set", the proposed algorithm can accurately compute the reliability of networks with imperfect vertices. In addition, the proposed algorithm has no restrictions on the starting vertex for the BDD model construction. Comprehensive examples and experiments are provided to show the effectiveness of the proposed approach.
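For a sense of the quantity being computed, the sketch below evaluates K-terminal (here two-terminal) reliability with both edge and vertex failures by brute-force state enumeration, which is exactly what the paper's BDD algorithm avoids; the network and probabilities are hypothetical, and the terminals are taken as perfect.

```python
from itertools import product
import networkx as nx

# Terminals s and t are assumed perfect; intermediate vertices a, b
# and all edges can fail (probabilities are made up).
vertex_rel = {"a": 0.9, "b": 0.8}
edge_rel = {("s", "a"): 0.9, ("a", "t"): 0.9,
            ("s", "b"): 0.8, ("b", "t"): 0.8, ("a", "b"): 0.7}

def two_terminal_reliability():
    v_names, e_names = list(vertex_rel), list(edge_rel)
    total = 0.0
    for v_up in product([True, False], repeat=len(v_names)):
        for e_up in product([True, False], repeat=len(e_names)):
            p, up_nodes = 1.0, {"s", "t"}
            for name, up in zip(v_names, v_up):
                p *= vertex_rel[name] if up else 1.0 - vertex_rel[name]
                if up:
                    up_nodes.add(name)
            G = nx.Graph()
            G.add_nodes_from(up_nodes)
            for (u, w), up in zip(e_names, e_up):
                p *= edge_rel[(u, w)] if up else 1.0 - edge_rel[(u, w)]
                # an edge only helps if both endpoints are up
                if up and u in up_nodes and w in up_nodes:
                    G.add_edge(u, w)
            if nx.has_path(G, "s", "t"):
                total += p
    return total

print(two_terminal_reliability())
```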

19.
This paper investigates the possibility of transferring the concepts developed for statistical process control (SPC) into reliability control of integrated circuits. It employs Taguchi methods and response surface methodology to predict the reliability of a 20 nm gate oxide process using selected critical in-line parameters. A Taguchi L12 design was used as a screening experiment to determine the most critical factors affecting the reliability of the gate oxide dielectric. From this, three parameters were selected for use in a face-centred central composite array to model their effect on the oxide dielectric reliability using response surface methodology. The reliability of the oxide dielectric was measured using time-dependent dielectric breakdown testing, and the calculations were based on the time to 0.1 per cent cumulative failure, as this is the time on which industry-standard reliability predictions are based. The results show that, using a test chip, the intrinsic reliability of the oxide can be modelled using the values obtained from critical nodes within a wafer fabrication facility, and that this is a viable approach to predicting oxide reliability. © 1997 John Wiley & Sons, Ltd.
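For illustration, the coded design matrix of a face-centred central composite design for three factors (axial distance α = 1), of the kind described, can be generated as follows; the choice of three centre replicates is arbitrary.

```python
from itertools import product
import numpy as np

factorial = np.array(list(product([-1, 1], repeat=3)))   # 8 corner runs
axial = np.array([s * np.eye(3)[i]                       # 6 face centres
                  for i in range(3) for s in (-1, 1)])
centre = np.zeros((3, 3))                                # 3 centre runs
design = np.vstack([factorial, axial, centre])
print(design.shape)   # (17, 3) runs in coded units
```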

20.
For the interpretation of the results of probabilistic risk assessments, it is important to have measures that identify the basic events contributing most to the frequency of the top event, but also measures that identify the basic events that are the main contributors to the uncertainty in this frequency. Both types of measures, often called the Importance Measure and the Measure of Uncertainty Importance, respectively, have been subjects of interest for many researchers in the reliability field. The most frequent mode of uncertainty analysis in connection with probabilistic risk assessment has been to propagate the uncertainty of all model parameters up to an uncertainty distribution for the top event frequency. Various uncertainty importance measures have been proposed in order to point out the parameters that, in some sense, are the main contributors to the top event distribution. The new measure of uncertainty importance suggested here goes a step further in that it has been developed within a decision-theoretic framework, thereby providing an indication of the basic event for which it would be most valuable, from the decision-making point of view, to procure more information.
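As one simple point of comparison, a generic variance-based uncertainty-importance indicator, Var(E[Y | p_i])/Var(Y), can be estimated by Monte Carlo binning; this is not the paper's decision-theoretic measure, and the toy fault tree and distributions below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

def top_freq(pA, pB, pC):
    """Toy fault tree: top event = (A and B) or C, independent events."""
    return pA * pB + pC - pA * pB * pC

# Epistemic uncertainty on the basic-event probabilities (made up).
n = 200_000
pA = rng.lognormal(np.log(0.01), 0.5, n)
pB = rng.lognormal(np.log(0.02), 0.8, n)
pC = rng.lognormal(np.log(0.001), 1.0, n)
y = top_freq(pA, pB, pC)

# Var(E[Y | p_i]) / Var(Y), estimated with 50 equal-probability bins.
for name, p in (("A", pA), ("B", pB), ("C", pC)):
    edges = np.quantile(p, np.linspace(0.0, 1.0, 51))
    idx = np.clip(np.digitize(p, edges) - 1, 0, 49)
    cond_means = np.array([y[idx == b].mean() for b in range(50)])
    print(f"{name}: Var(E[Y|p])/Var(Y) = {cond_means.var() / y.var():.3f}")
```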
