Similar Literature
Found 20 similar documents.
1.
Software reliability assessment models in use today treat software as a monolithic block. An aversion towards ‘atomic’ models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at the functional level as well as at the system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified.
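As a concrete (and deliberately simplified) illustration of the kind of Bayesian quantification and fault-tree propagation described above, the sketch below updates a Beta prior on each failure-mode probability with test evidence and combines the modes through an OR gate to a function-level probability. The priors, counts and two-mode structure are invented for illustration; this is not the FASRE model itself.

```python
# Minimal sketch: Beta-Binomial update of failure-mode probabilities and
# OR-gate propagation to a function-level failure probability.
# The priors, test counts and fault-tree structure are illustrative only.

def beta_posterior_mean(alpha, beta, failures, trials):
    """Posterior mean of a failure-mode probability under a Beta prior."""
    return (alpha + failures) / (alpha + beta + trials)

# Prior knowledge per failure mode (e.g. from developers' testing or reuse history).
failure_modes = {
    "wrong_output":  {"alpha": 1.0, "beta": 99.0, "failures": 0, "trials": 200},
    "missed_timing": {"alpha": 1.0, "beta": 49.0, "failures": 1, "trials": 150},
}

mode_probs = {
    name: beta_posterior_mean(m["alpha"], m["beta"], m["failures"], m["trials"])
    for name, m in failure_modes.items()
}

# OR gate: the function fails if any of its (assumed independent) modes occurs.
p_no_failure = 1.0
for p in mode_probs.values():
    p_no_failure *= (1.0 - p)
p_function_failure = 1.0 - p_no_failure

print(mode_probs)
print(f"Function-level failure probability: {p_function_failure:.4f}")
```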

2.
A reliability evaluation approach based on the development process of structural nonlinearity is presented. The traditional structural system reliability theory for structural safety, which regards safety as a combination of failure modes, is first revisited. It is seen that this theory stemmed from, and was heavily affected by, the assumption of perfect elasto-plasticity of materials. This assumption makes the number of failure modes grow non-polynomially with the number of potential plastic hinges. Moreover, the methodology does not work appropriately for nonlinearity of general form other than perfect elasto-plasticity, as commonly encountered in engineering practice. Discussions show that the total information of the structure is involved in the development process of its nonlinearity, be it a deterministic case or its stochastic counterpart. The information needed for reliability evaluation of structures can be extracted, for example, by capturing the probabilistic information of the extreme value of the corresponding response, which can be obtained using the probability density evolution method. The reliability of the structure with respect to safety can therefore be evaluated directly, without searching for failure modes. Taking a 10-bar truss as an example, the proposed method is theoretically elaborated and numerically exemplified.
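The following toy sketch conveys only the underlying idea of reading reliability off the distribution of the extreme response rather than enumerating failure modes. It uses plain Monte Carlo on an invented two-variable model instead of the probability density evolution method, so it illustrates the general principle, not the paper's method.

```python
# Sketch only: reliability read from the extreme value of a structural response,
# estimated here by plain Monte Carlo. Model, parameters and threshold are invented.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000
threshold = 1.0          # allowable peak displacement (arbitrary units)

# Toy stochastic structure: peak response depends on a random stiffness and load.
stiffness = rng.lognormal(mean=0.0, sigma=0.1, size=n_samples)
peak_load = rng.gumbel(loc=0.6, scale=0.1, size=n_samples)
peak_response = peak_load / stiffness          # extreme value of the response

reliability = np.mean(peak_response < threshold)
print(f"Estimated reliability P(max response < threshold) = {reliability:.4f}")
```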

3.
Reliability demonstration test planning: A three dimensional consideration (cited 3 times: 0 self-citations, 3 by others)
Increasing customer demand for reliability, fierce market competition on time-to-market and cost, and highly reliable products are making reliability testing a more challenging task. This paper presents a systematic approach for identifying critical elements (subsystems and components) of the system and deciding the types of test to be performed to demonstrate reliability. It decomposes the system into three dimensions (physical, functional and time) and identifies critical elements in the design by allocating system-level reliability to each candidate. The decomposition of system-level reliability is achieved by using a criticality index. The numerical value of the criticality index for each candidate is derived from the information available in the failure mode and effects analysis (FMEA) document or from warranty data on a prior system. This information is then used to develop a reliability demonstration test plan for the identified (critical) failure mechanisms and physical elements. The paper also highlights the benefits of using prior information to locate critical spots in the design and in the subsequent development of test plans. A case example is presented to demonstrate the proposed approach.
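A minimal sketch of criticality-based reliability allocation is shown below, assuming normalised criticality indices and a simple power-allocation rule for a series system; the rule and all numbers are illustrative assumptions, not the paper's formulation.

```python
# Sketch: allocate a system reliability target to subsystems in proportion to a
# criticality index derived from FMEA-style information. The allocation rule
# (R_i = R_sys ** c_i with normalised indices, series system) and all numbers
# are illustrative assumptions.
import math

system_target = 0.95                                          # required system reliability
criticality = {"pump": 0.5, "valve": 0.3, "controller": 0.2}  # normalised criticality indices

allocated = {name: system_target ** c for name, c in criticality.items()}

for name, r in allocated.items():
    print(f"{name}: allocated reliability target {r:.4f}")

# Consistency check: for a series system the allocations multiply back to the target.
print(f"product of allocations = {math.prod(allocated.values()):.4f}")
```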

4.
This paper presents the design optimization, by a multi-objective genetic algorithm, of a safety-instrumented system based on RAMS+C measures. This includes optimization of safety and reliability measures plus lifecycle cost. Diverse redundancy is implemented as an option for redundancy allocation, and special attention is paid to its effect on common cause failure and on the overall system objectives. The requirements for safety integrity established by the standard IEC 61508 are addressed, as well as the modelling detail required for this purpose. The problem is one of reliability and redundancy allocation with diversity for a series-parallel system. The objectives to optimize are the average probability of failure on demand, which represents the system safety integrity, the spurious trip rate, and the lifecycle cost. The overall method is illustrated with a practical example from the chemical industry: a safety function against high pressure and temperature for a chemical reactor. In order to implement diversity, each subsystem is given the option of three different technologies, each with different reliability and diagnostic coverage characteristics. Finally, the optimization with diversity is compared against optimization without diversity.
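To give a flavour of the safety-integrity figure such an optimisation evaluates for each candidate architecture, the sketch below computes a simplified average probability of failure on demand for a 1oo2 subsystem with a beta-factor common cause model. The simplified formula, the failure rate, the proof test interval and the beta values are assumptions for illustration only, not the paper's model.

```python
# Sketch: simplified PFDavg for a 1oo2 subsystem with a beta-factor common
# cause model, comparing an identical pair against a (nominally) diverse pair.
# All rates and fractions are assumed values.
lambda_du = 2.0e-6     # dangerous undetected failure rate per hour (assumed)
proof_test = 8760.0    # proof test interval in hours (assumed: 1 year)
beta_identical = 0.10  # common cause fraction for identical redundancy (assumed)
beta_diverse = 0.02    # lower common cause fraction credited to diversity (assumed)

def pfd_1oo2(lmbd, ti, beta_cc):
    independent = ((1.0 - beta_cc) * lmbd * ti) ** 2 / 3.0
    common_cause = beta_cc * lmbd * ti / 2.0
    return independent + common_cause

print(f"PFDavg identical 1oo2: {pfd_1oo2(lambda_du, proof_test, beta_identical):.2e}")
print(f"PFDavg diverse   1oo2: {pfd_1oo2(lambda_du, proof_test, beta_diverse):.2e}")
```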

5.
This study presents a formulation that supports decision-making by determining the optimal number of standby suppliers required to respond to supply failure risks. The problem of supply failure is modelled through a standby approach, in which failure is time-dependent. The probabilities of supply interruption, the financial loss caused by supply failure, and the operating cost of working with suppliers are modelled to yield the expected total cost, which enables organisations to determine the optimal number of suppliers. Two possible modes of substitution failure are considered in the standby model to enhance the analysis. A set of sensitivity analyses is performed for several input parameters to illustrate the model’s behaviour. The analysis provides an optimal sourcing strategy depending on the combination of supply risk, operational cost-to-loss ratio and length of the supply period. The proposed model indicates the benefits of cost savings, unlike other dynamic models that use multiple suppliers simultaneously.
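A minimal sketch of the underlying cost trade-off follows: the expected total cost of holding n suppliers versus the expected loss when all of them fail, with independence and all cost figures assumed purely for illustration. The paper's time-dependent standby model is considerably richer.

```python
# Sketch: choose the number of standby suppliers by minimising expected total
# cost = operating cost of keeping suppliers + expected loss if all fail.
# Probabilities, costs and the independence assumption are illustrative.
p_fail = 0.15                     # probability a single supplier fails over the period (assumed)
loss_if_no_supply = 1_000_000.0   # financial loss if every supplier fails (assumed)
cost_per_supplier = 20_000.0      # cost of qualifying and operating one supplier (assumed)

def expected_total_cost(n_suppliers):
    p_all_fail = p_fail ** n_suppliers      # independence assumed for the sketch
    return n_suppliers * cost_per_supplier + p_all_fail * loss_if_no_supply

costs = {n: expected_total_cost(n) for n in range(1, 7)}
best = min(costs, key=costs.get)
for n, c in costs.items():
    print(f"{n} supplier(s): expected total cost = {c:,.0f}")
print(f"Optimal number of suppliers under these assumptions: {best}")
```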

6.
This paper describes a method for estimating and forecasting reliability from attribute data, using the binomial model, when reliability requirements are very high and test data are limited. Integer data (specifically, numbers of failures) are converted into non-integer data. The rationale is that when an engineering corrective action for a failure is implemented, the probability of recurrence of that failure is reduced; therefore, such failures should not be carried as full failures in subsequent reliability estimates. The reduced failure value for each failure mode is the upper limit on the probability of failure, based on the number of successes after the engineering corrective action has been implemented. Each failure value is less than one and diminishes as test programme successes continue. These numbers replace the integer numbers of failures in the binomial estimate. This method of reliability estimation was applied to attribute data from the life history of a previously tested system, and a reliability growth equation was fitted. It was then ‘calibrated’ to a current, similar system's ultimate reliability requirements to provide a model for reliability growth over its entire life-cycle. By comparing current estimates of reliability with the expected value computed from the model, the forecast was obtained by extrapolation.
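The bookkeeping idea can be sketched as follows, assuming a 60% confidence level, invented success counts and a simple way of folding the fractional failures back into a point estimate; none of these choices come from the paper.

```python
# Sketch: after a corrective action, a failure mode is no longer carried as a
# full failure but as an upper confidence bound on its recurrence probability,
# given the successes observed since the fix. Confidence level, counts and the
# way the fractional failures enter the estimate are illustrative assumptions.
confidence = 0.60

def reduced_failure_value(successes_since_fix):
    """Upper confidence bound on recurrence probability after zero recurrences."""
    if successes_since_fix == 0:
        return 1.0            # fix not yet demonstrated: count as a full failure
    return 1.0 - (1.0 - confidence) ** (1.0 / successes_since_fix)

# Trials since the corrective action for each failure mode, with no recurrences.
successes_after_fix = {"mode_A": 12, "mode_B": 30, "mode_C": 0}
fractional_failures = sum(reduced_failure_value(s) for s in successes_after_fix.values())

total_trials = 80             # total attribute (pass/fail) trials, assumed
reliability_estimate = 1.0 - fractional_failures / total_trials
print(f"Fractional failure count: {fractional_failures:.2f}")
print(f"Point reliability estimate: {reliability_estimate:.3f}")
```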

7.
Reliability certification is posed as a problem of Bayesian decision analysis. Uncertainties about the system reliability are quantified by treating the parameters of the models describing the stochastic behavior of components as random variables. A utility function quantifies the relative value of each possible level of system reliability if the system has been accepted, or the opportunity loss of the same level if the system has been rejected. A decision about accepting or rejecting the system can be made either on the basis of the existing prior assessment of uncertainties or after obtaining further information, at a cost, through testing of the components or the system. The concepts of the value of perfect information, the expected value of sample information and the expected net gain of sampling are specialized to the reliability of a multicomponent system in order to determine the optimum component testing scheme prior to deciding on the system's certification. A component importance ranking is proposed on the basis of the expected value of perfect information about the reliability of each component. The proposed approach is demonstrated on a single-component system failing according to a Poisson random process, with natural conjugate prior probability density functions (pdfs) for the failure rate, and on a multicomponent system under general assumptions.
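A hedged illustration of the value-of-information calculation behind such a certification decision is sketched below: a gamma prior on a Poisson failure rate, a simple accept/reject utility, and a Monte Carlo estimate of the expected value of perfect information (EVPI). The prior parameters, the requirement and the utilities are invented.

```python
# Sketch: EVPI for a single accept/reject certification decision under a
# gamma (conjugate) prior on a Poisson failure rate. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
prior_shape, prior_rate = 2.0, 400.0        # gamma prior on failure rate (per hour)
lambda_req = 0.01                           # required failure rate to accept the system
gain_accept_good, loss_accept_bad = 100.0, 400.0   # utilities in arbitrary cost units

lam = rng.gamma(prior_shape, 1.0 / prior_rate, size=200_000)   # prior samples
u_accept = np.where(lam <= lambda_req, gain_accept_good, -loss_accept_bad)
u_reject = 0.0

best_without_info = max(u_accept.mean(), u_reject)              # decide now
best_with_perfect_info = np.maximum(u_accept, u_reject).mean()  # decide knowing lambda
evpi = best_with_perfect_info - best_without_info
print(f"E[utility] if accepting now: {u_accept.mean():.1f}")
print(f"EVPI: {evpi:.1f}  (upper bound on what any component test is worth)")
```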

8.
The increasing trend towards global business integration and the movement of material around the world have made supply chain systems susceptible to disruption and exposed them to higher risks. This paper presents a methodology for supplier selection in a global sourcing environment that considers multiple cost and risk factors. The failure modes and effects analysis (FMEA) technique from the reliability engineering field and Bayesian belief networks are used to quantify the risk posed by each factor. The probability and the cost of each risk are then incorporated into a decision tree model to compute the total expected cost for each supply option. The supplier selection decision is made based on the total purchasing costs, including both deterministic costs (such as product and transportation costs) and the risk-associated costs. The proposed approach is demonstrated using the example of a US-based chemical distributor. The framework provides a visual tool for supply chain managers to see how costs and risks are distributed across the different alternatives. Lastly, managers can calculate the expected value of perfect information in order to avoid a certain risk.
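The expected-cost comparison at the core of such a decision tree can be sketched as follows; the risk probabilities (which the paper obtains from FMEA and a Bayesian belief network) and all cost figures are invented for illustration.

```python
# Sketch: total expected cost per supply option = deterministic cost + sum of
# (risk probability * consequence cost). Suppliers, probabilities and costs
# are illustrative assumptions only.
suppliers = {
    "domestic": {
        "deterministic_cost": 520_000,
        "risks": [("quality escape", 0.05, 80_000), ("late delivery", 0.10, 30_000)],
    },
    "overseas": {
        "deterministic_cost": 450_000,
        "risks": [("port disruption", 0.15, 150_000), ("quality escape", 0.08, 80_000),
                  ("currency swing", 0.20, 40_000)],
    },
}

def total_expected_cost(option):
    risk_cost = sum(p * c for _, p, c in option["risks"])
    return option["deterministic_cost"] + risk_cost

for name, option in suppliers.items():
    print(f"{name}: total expected cost = {total_expected_cost(option):,.0f}")
```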

9.
Mixing Bayes and empirical Bayes inference provides reliability estimates for variant system designs by using relevant failure data - observed and anticipated - about engineering changes arising from modification and innovation. A coherent inference framework is proposed to predict the realization of engineering concerns during product development so that informed decisions can be made about the system design and about the analysis conducted to prove reliability. The proposed method combines subjective prior distributions for the number of engineering concerns with empirical priors for the non-parametric distribution of the time to realize these concerns, in such a way that classes of concerns can be cross-tabulated to failure events within time partitions at an appropriate level of granularity. To support efficient implementation, a computationally convenient hypergeometric approximation is developed for the counting distributions appropriate to the underlying stochastic model. The accuracy of this approximation over first-order alternatives is examined, and demonstrated, through an evaluation experiment. An industrial application illustrates model implementation and shows how estimates can be updated using information arising during development test and analysis.
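A very loose sketch of the counting idea follows: a subjective prior on the number of concerns, an empirical distribution of realization times standing in for the empirical Bayes prior, and a Monte Carlo cross-tabulation of expected concerns per development window. The paper instead develops a hypergeometric approximation; every number below is invented.

```python
# Loose sketch: expected number of engineering concerns realised per development
# time window, by brute-force simulation. Prior, empirical times and windows
# are illustrative stand-ins for the paper's model.
import numpy as np

rng = np.random.default_rng(2)
empirical_times = np.array([3.0, 5.0, 8.0, 8.0, 12.0, 20.0, 26.0])  # months, from past programmes (assumed)
time_partitions = [(0, 6), (6, 12), (12, 24)]                       # development windows (assumed)

n_sims = 20_000
counts = np.zeros((n_sims, len(time_partitions)))
for i in range(n_sims):
    n_concerns = rng.poisson(4.0)                        # subjective prior on concern count
    realise_at = rng.choice(empirical_times, size=n_concerns, replace=True)
    for j, (lo, hi) in enumerate(time_partitions):
        counts[i, j] = np.sum((realise_at >= lo) & (realise_at < hi))

for (lo, hi), mean in zip(time_partitions, counts.mean(axis=0)):
    print(f"months {lo:>2}-{hi:<2}: expected concerns realised = {mean:.2f}")
```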

10.
Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and an accurate method for system-level reliability analysis.
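The sketch below builds Gaussian process surrogates for two invented component limit states and runs a Monte Carlo series-system estimate on them. It omits the adaptive, locally accurate refinement near the system limit state that is the paper's contribution, and it assumes scikit-learn is available.

```python
# Sketch: GP surrogates for two component limit states plus Monte Carlo on the
# surrogates for a series-system failure probability. Limit states, training
# design and kernel are illustrative; no adaptive refinement is performed.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def g1(x):  # component limit state 1 (failure when g < 0); stands in for an expensive model
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

def g2(x):  # component limit state 2
    return 2.5 + x[:, 0] - x[:, 1] ** 2

X_train = rng.uniform(-3, 3, size=(60, 2))            # small training design
gps = []
for g in (g1, g2):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(X_train, g(X_train))
    gps.append(gp)

X_mc = rng.normal(0.0, 1.0, size=(100_000, 2))        # random inputs (standard normal assumed)
preds = np.column_stack([gp.predict(X_mc) for gp in gps])
system_fail = np.any(preds < 0.0, axis=1)             # series system: any component failure
print(f"Estimated system failure probability: {system_fail.mean():.4f}")
```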

11.
In this paper, we introduce a new reliability growth methodology for one-shot systems that is applicable to the case where all corrective actions are implemented at the end of the current test phase. The methodology consists of four model equations for assessing: the expected reliability, the expected number of failure modes observed in testing, the expected probability of discovering new failure modes, and the expected portion of system unreliability associated with repeat failure modes. These model equations provide an analytical framework with which reliability practitioners can estimate reliability improvement, address goodness-of-fit concerns, quantify programmatic risk, and assess the reliability maturity of one-shot systems. A numerical example is given to illustrate the value and utility of the presented approach. This methodology is useful to program managers and reliability practitioners interested in applying the above techniques in their reliability growth program.
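Generic delayed-fix projection bookkeeping of the kind used for one-shot systems can be sketched as follows; the trial counts, observed modes and fix effectiveness factors are assumptions, and the arithmetic is only illustrative, not the paper's four model equations.

```python
# Sketch: demonstrated reliability in the current phase and a projected
# reliability once end-of-phase corrective actions take effect, using assumed
# fix effectiveness factors. All numbers are invented.
n_trials = 120                     # one-shot tests in the current phase (assumed)
n_successes = 96

# Observed failure modes: occurrences and assumed fix effectiveness (fraction of
# that mode's failure probability removed by the corrective action).
observed_modes = {"ignition": (10, 0.8), "seal_leak": (8, 0.7), "guidance": (6, 0.5)}

demonstrated = n_successes / n_trials
projected_unreliability = 1.0 - demonstrated
for count, effectiveness in observed_modes.values():
    mode_prob = count / n_trials
    projected_unreliability -= effectiveness * mode_prob   # remove the mitigated share

print(f"Demonstrated reliability this phase: {demonstrated:.3f}")
print(f"Projected reliability after corrective actions: {1.0 - projected_unreliability:.3f}")
```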

12.
We propose an integrated methodology for the reliability and dynamic performance analysis of fault-tolerant systems. This methodology uses a behavioral model of the system dynamics, similar to the ones used by control engineers to design the control system, but also incorporates artifacts to model the failure behavior of each component. These artifacts include component failure modes (and associated failure rates) and how those failure modes affect the dynamic behavior of the component. The methodology bases the system evaluation on the analysis of the dynamics of the different configurations the system can reach after component failures occur. For each possible system configuration, a performance evaluation of its dynamic behavior is carried out to check whether its properties, e.g., accuracy, overshoot, or settling time, which are called performance metrics, meet the system requirements. Markov chains are used to model the stochastic process associated with the different configurations that a system can adopt when failures occur. This methodology not only provides an integrated framework for evaluating the dynamic performance and reliability of fault-tolerant systems, but also offers a method for guiding the system design process and further optimization. To illustrate the methodology, we present a case study of a lateral-directional flight control system for a fighter aircraft.
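A minimal sketch of the configuration-level Markov model follows: a continuous-time Markov chain over three invented configurations, combined with assumed per-configuration verdicts from a separate dynamic-performance analysis.

```python
# Sketch: CTMC over the configurations a fault-tolerant system can reach after
# component failures, combined with a per-configuration verdict on whether the
# dynamic performance requirements are still met. Rates, states and verdicts
# are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

# States: 0 = nominal, 1 = one redundant actuator failed, 2 = both failed.
lam = 1.0e-4                      # actuator failure rate per flight hour (assumed)
Q = np.array([
    [-2 * lam,  2 * lam,  0.0],
    [     0.0,     -lam,  lam],
    [     0.0,      0.0,  0.0],
])

# Result of the dynamic analysis per configuration: does it meet the overshoot,
# accuracy and settling-time requirements? (assumed verdicts)
meets_performance = np.array([True, True, False])

t = 10.0                          # mission time in hours
p0 = np.array([1.0, 0.0, 0.0])
p_t = p0 @ expm(Q * t)            # state probabilities at time t

print(f"State probabilities at t={t} h: {np.round(p_t, 6)}")
print(f"P(performance requirements met) = {p_t[meets_performance].sum():.6f}")
```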

13.
Recently, reliability analysis has been advocated as an effective approach to account for uncertainty in the geometric design process and to evaluate the risk associated with a particular design. In this approach, a risk measure (e.g. the probability of noncompliance) is calculated to represent the probability that a specific design would not meet standard requirements. The majority of previous applications of reliability analysis in geometric design have focused on evaluating the probability of noncompliance for only one mode of noncompliance, such as insufficient sight distance. However, in many design situations more than one mode of noncompliance may be present (e.g. insufficient sight distance and vehicle skidding at horizontal curves). In these situations, a multi-mode reliability approach that considers more than one failure (noncompliance) mode is required. The main objective of this paper is to demonstrate the application of multi-mode (system) reliability analysis to the design of horizontal curves. The process is demonstrated by a case study of the Sea-to-Sky Highway located between Vancouver and Whistler in southern British Columbia, Canada. Two noncompliance modes were considered: insufficient sight distance and vehicle skidding. The results show the importance of accounting for several noncompliance modes in the reliability model. The system reliability concept could be used in future studies to calibrate the design of various design elements in order to achieve consistent safety levels based on all possible modes of noncompliance.
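A hedged Monte Carlo sketch of the two-mode (series-system) noncompliance calculation is given below, using standard metric design formulas for stopping sight distance and side-friction demand; the curve geometry and all input distributions are invented and do not correspond to the case study.

```python
# Sketch: probability of noncompliance for a horizontal curve with two modes
# treated as a series system: (1) available sight distance < required stopping
# sight distance, (2) side-friction demand > supplied friction. Inputs invented.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

radius = 250.0          # curve radius, m (assumed design)
superelevation = 0.06   # assumed design superelevation
avail_sd = rng.normal(160.0, 10.0, n)          # available sight distance, m (assumed)

speed = rng.normal(90.0, 8.0, n)               # operating speed, km/h
t_pr = rng.lognormal(np.log(1.5), 0.25, n)     # perception-reaction time, s
decel = rng.normal(3.4, 0.4, n)                # deceleration, m/s^2
f_supply = rng.normal(0.30, 0.04, n)           # available side friction

required_sd = 0.278 * speed * t_pr + speed ** 2 / (25.92 * decel)   # stopping sight distance, m
f_demand = speed ** 2 / (127.0 * radius) - superelevation           # side-friction demand

mode_sd = required_sd > avail_sd        # insufficient sight distance
mode_skid = f_demand > f_supply         # vehicle skidding
p_system = np.mean(mode_sd | mode_skid) # either mode => noncompliance (series system)
print(f"P(noncompliance, sight distance only) = {np.mean(mode_sd):.4f}")
print(f"P(noncompliance, skidding only)       = {np.mean(mode_skid):.4f}")
print(f"P(noncompliance, both modes combined) = {p_system:.4f}")
```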

14.
Reliability improvement of CMOS VLSI circuits depends on a thorough understanding of the technology, the failure mechanisms, and the resulting failure modes involved. Failure analysis has identified open circuits, short circuits and MOSFET degradations as the prominent failure modes. Classical methods of fault simulation and test generation are based on the gate-level stuck-at fault model. This model has proved inadequate for modelling all realistic CMOS failure modes. An approach to aid reliability improvement and assurance of CMOS VLSI, intended to complement available VLSI design packages, is outlined. A ‘two-step’ methodology is adopted. Step one, described in this paper, involves accurate circuit-level fault simulation of CMOS cells used in a hierarchical design process. The simulation is achieved using SPICE and pre-SPICE insertion of faults (PSIF). PSIF is an additional module to SPICE that has been developed and is outlined in detail. Failure modes and effects analysis (FMEA) is executed on the SPICE results and FMEA tables are generated. The second step of the methodology uses the FMEA tables to produce a knowledge base. Step two is essential when reliability studies of larger and VLSI circuits are required, and will be the subject of a future paper. The knowledge base has the potential to generate fault trees, fault simulate and fault diagnose automatically.

15.
A new reliability measure is proposed and equations are derived which determine the probability of existence of a specified set of minimum gaps between random variables following a homogeneous Poisson process in a finite interval. Using the derived equations, a method is proposed for specifying the upper bound of the random variables' number density which guarantees that the probability of clustering of two or more random variables in a finite interval remains below a maximum acceptable level. It is demonstrated that even for moderate number densities the probability of clustering is substantial and should not be neglected in reliability calculations. In the important special case where the random variables are failure times, models have been proposed for determining the upper bound of the hazard rate which guarantees a set of minimum failure-free operating intervals before the random failures, with a specified probability. A model has also been proposed for determining the upper bound of the hazard rate which guarantees a minimum availability target. Using the proposed models, a new strategy, models and reliability tools have been developed for setting quantitative reliability requirements, which consist of determining the intersection of the hazard rate envelopes (hazard rate upper bounds) that deliver a minimum failure-free operating period before random failures, a risk of premature failure below a maximum acceptable level, and a minimum required availability. It is demonstrated that setting reliability requirements solely on the basis of an availability target does not necessarily mean a low risk of premature failure. Even at a high availability level, the probability of premature failure can be substantial. For industries characterised by a high cost of failure, the reliability requirements should involve a hazard rate envelope limiting the risk of failure below a maximum acceptable level.
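The clustering probability can be checked numerically as in the sketch below, which uses brute-force Monte Carlo rather than the closed-form equations derived in the paper; the density, interval length and minimum gap are illustrative.

```python
# Sketch: Monte Carlo estimate of the probability that at least two events of a
# homogeneous Poisson process on a finite interval fall closer together than a
# specified minimum gap. Rate, interval and gap values are assumed.
import numpy as np

rng = np.random.default_rng(5)
rate = 0.02            # events per hour (number density, assumed)
interval = 720.0       # finite observation interval, hours
min_gap = 24.0         # required minimum gap between events, hours
n_sims = 50_000

clustered = 0
for _ in range(n_sims):
    n_events = rng.poisson(rate * interval)
    if n_events < 2:
        continue
    times = np.sort(rng.uniform(0.0, interval, n_events))
    if np.min(np.diff(times)) < min_gap:
        clustered += 1

print(f"P(at least one gap < {min_gap} h) = {clustered / n_sims:.4f}")
```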

16.
Probabilistic risk analysis has historically been developed for situations in which measured data about the overall reliability of a system are limited and expert knowledge is the best source of information available. There continue to be a number of important problem areas characterized by a lack of hard data. However, in other important problem areas the emergence of information technology has transformed the situation from one characterized by little data to one characterized by data overabundance. Natural disaster risk assessments for events impacting large-scale, critical infrastructure systems such as electric power distribution systems, transportation systems, water supply systems, and natural gas supply systems are important examples of problems characterized by data overabundance. There are often substantial amounts of information collected and archived about the behavior of these systems over time. Yet it can be difficult to effectively utilize these large data sets for risk assessment. Using this information for estimating the probability or consequences of system failure requires a different approach and analysis paradigm than risk analysis for data-poor systems does. Statistical learning theory, a diverse set of methods designed to draw inferences from large, complex data sets, can provide a basis for risk analysis for data-rich systems. This paper provides an overview of statistical learning theory methods and discusses their potential for greater use in risk analysis.

17.
Although many products are made through several tiers of supply chains, a systematic way of handling reliability issues at the various product planning stages has only recently drawn attention in the context of supply chain management (SCM). The main objective of this paper is to develop a fuzzy quality function deployment (QFD) model in order to convey the fuzzy relationships between customer needs and design specifications for reliability in the context of SCM. A fuzzy multi-criteria decision-making procedure is proposed and applied to find a set of optimal solutions with respect to the performance of the reliability tests needed in CRT design. It is expected that the proposed approach can make significant contributions in the following areas: effectively communicating with technical personnel and users; developing a relatively error-free reliability review system; and creating consistent and complete documentation for design for reliability.
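A minimal sketch of fuzzy-QFD-style aggregation with triangular fuzzy numbers follows; the arithmetic is generic textbook fuzzy arithmetic, and the customer needs, weights and design characteristics are invented rather than taken from the paper.

```python
# Sketch: triangular fuzzy numbers (TFNs) propagate imprecise customer-need
# weights through a relationship matrix to reliability-related design
# characteristics. Weights, relationships and names are illustrative.

def tfn_mul(a, b):
    """Approximate product of two positive triangular fuzzy numbers (l, m, u)."""
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def tfn_add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def defuzzify(t):
    """Centroid of a triangular fuzzy number."""
    return sum(t) / 3.0

# Customer needs with fuzzy importance weights (linguistic terms mapped to TFNs).
need_weights = {"long life": (0.6, 0.8, 1.0), "few field failures": (0.4, 0.6, 0.8)}

# Fuzzy relationship strengths between needs and design characteristics.
relationships = {
    "component derating": {"long life": (0.5, 0.7, 0.9), "few field failures": (0.3, 0.5, 0.7)},
    "burn-in screening":  {"long life": (0.1, 0.3, 0.5), "few field failures": (0.7, 0.9, 1.0)},
}

for characteristic, rel in relationships.items():
    priority = (0.0, 0.0, 0.0)
    for need, weight in need_weights.items():
        priority = tfn_add(priority, tfn_mul(weight, rel[need]))
    rounded = tuple(round(x, 2) for x in priority)
    print(f"{characteristic}: fuzzy priority {rounded}, crisp score {defuzzify(priority):.2f}")
```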

18.
Despite the recent revolution in statistical thinking and methodology, practical reliability analysis and assessment remains almost exclusively based on a black-box approach employing parametric statistical techniques and significance tests. Such practice, which is largely automatic for the industrial practitioner, implicitly involves a large number of physically unreasonable assumptions that in practice are rarely met. Extensive investigation of reliability source data indicates a variety of differing data structures which contradict the assumptions implicit in the usual methodology. In addition, lack of homogeneity in the data, due, for instance, to multiple failure modes or misdefinition of the environment, is commonly overlooked by the standard methodology. In this paper we argue the case for exploring reliability data. The pattern revealed by such exploration of a data set provides intrinsic information which helps to reinforce and reinterpret the engineering knowledge about the physical nature of the technological system to which the data refer. Employed in this way, the data analyst and the reliability engineer are partners in an iterative process aimed at a greater understanding of the system and the process of failure. Despite current standard practice, the authors believe it to be critical that the structure of the data analysis reflect the structure in the failure data. Although the standard methodology provides an easy and repeatable analysis, the authors' experience indicates that it is rarely an appropriate one. It is ironic that, whereas methods to analyse the data structures commonly found in reliability data have been available for some time, insistence on the standard black-box approach has prevented the identification of such ‘abnormal’ features in reliability data and the application of these approaches. We discuss simple graphical procedures to investigate the structure of reliability data, as well as more formal testing procedures which assist in decision-making. Partial reviews of such methods have appeared previously, and a more detailed development of the exploration approach and of the appropriate analysis it implies will be dealt with elsewhere. Here, our aim is to urge the reliability analyst to LOOK AT THE DATA, and to analyse it accordingly.

19.
In order to enhance the safety of new advanced reactors, optimization of the design of the implemented passive systems is required. Therefore, a reliability-based approach to the design of a thermal–hydraulic passive system is considered, and a limit state function (LSF)-based approach drawn from mechanical reliability is developed. The concept of functional failure, i.e. the possibility that the load will exceed the capacity in a reliability-physics framework, is introduced here in terms of a performance parameter for the reliability evaluation of a natural circulation passive system designed for decay heat removal in innovative light water reactors. The water flow rate circulating through the system is selected as the characteristic performance parameter of the passive system, and the related limit state or performance function is defined. The probability of failure of the system is assessed in terms of the safety margin corresponding to the LSF. The results help the designer to determine the allowable limits, or set the safety margin, for the system operation parameters so as to meet the safety and reliability requirements.
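The functional-failure idea can be sketched as a load/capacity comparison on the performance parameter, as below: delivered flow rate versus the flow rate required to remove decay heat, with the failure probability estimated by Monte Carlo. Both distributions are invented for illustration.

```python
# Sketch: functional failure of a natural-circulation passive system as a
# load/capacity problem on the circulating water flow rate. The limit state is
# G = W_delivered - W_required; failure when G <= 0. Distributions are assumed.
import numpy as np

rng = np.random.default_rng(6)
n = 500_000

# "Capacity": flow rate the natural circulation loop actually delivers (kg/s),
# uncertain through loop resistance, heat losses, non-condensables, etc.
w_delivered = rng.normal(22.0, 2.5, n)

# "Load": flow rate required to remove the decay heat (kg/s).
w_required = rng.normal(15.0, 1.5, n)

G = w_delivered - w_required                 # limit state function (safety margin)
p_failure = np.mean(G <= 0.0)
print(f"Mean safety margin: {G.mean():.2f} kg/s")
print(f"Functional failure probability P(G <= 0) = {p_failure:.2e}")
```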

20.
Technological innovations provide integrated circuits of increased functionality and complexity, and modern design tools facilitate a new multiplicity of products, such as application-specific integrated circuits (ASICs). Traditional qualification procedures cannot keep pace with this evolution with respect to the requirements on product reliability, the ability to qualify the multiplicity of future products, and market demands for saving cost and time. A further development of a reliability assurance concept, which is discussed here, considers design tools, basic product elements, materials, the manufacturing process and controls as a ‘system’, which has to be qualified with respect to the consistency and efficiency of all of the implemented reliability assurance measures. The concept is based on the manufacturer's ‘system’ knowledge and responsibility. It is compatible with the relevant requirements of ISO 9000 and recent military standard proposals. The procedure is applied to commercial products. The main part of this concept is the qualification of the manufacturing technology. The procedure is organized as a continuous process starting at the concept phase of a new technology and its pilot product. The various steps then follow the development, pre-series and series production phases. The reliability aspects concentrate on the physical properties of product elements relevant to their stability and endurance, i.e. the potential failure mechanisms and their root causes as reliability risks. Thus a major part of the reliability testing for the qualification of the pilot product of a new technology can be performed without the use of the final product version. The benefits derivable from this approach are savings in time and cost, as well as the capability to handle future product multiplicity.
