Similar Articles
20 similar articles found (search time: 31 ms)
1.
Reliability assessments of repairable (electronic) equipment are often based on failure data recorded under field conditions. The main objective of the analyses is to provide information that can be used to improve reliability through design changes. For this purpose it is of particular interest to be able to locate ‘trouble-makers’, i.e. components that are particularly likely to fail. In the present context, reliability is measured in terms of the mean cumulative number of failures as a function of time. This function may be considered for the system as a whole, or for stratified data. The stratification is obtained by sorting data according to different factors, such as component positions, production series, etc. The mean cumulative number of failures can then be estimated either nonparametrically, as an average of the observed failures, or parametrically, if a certain model for the lifetimes of the components involved is assumed. As an example we here consider a simple component lifetime model based on the assumption that components are ‘drawn’ randomly from a heterogeneous population, where a small proportion of the components are weak (with a small mean lifetime) and the remainder are standard components (with a large mean lifetime). This model enables formulation of an analytical expression for the mean cumulative number of failures. In both the nonparametric and the parametric case the uncertainty of the estimation may be assessed by computing a confidence interval for the estimated values (a confidence band for the estimated time functions). The determination of confidence bands provides a basis for assessing the significance of the factors underlying the stratification. The methods are illustrated through an industrial case study using field failure data.
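The nonparametric estimate described in this abstract can be sketched in a few lines. The following is a minimal illustration (function and variable names are our own, not from the paper), assuming every system is observed over the whole evaluation window, so no risk-set correction for staggered entry or censoring is needed:

```python
def mcf_estimate(failure_times_per_system, eval_times):
    """Nonparametric mean-cumulative-function (MCF) estimate: at each
    evaluation time, average the number of failures observed so far
    across all systems. Assumes every system is under observation for
    the whole window (no staggered entry or censoring)."""
    n = len(failure_times_per_system)
    return [sum(sum(1 for ft in times if ft <= t)
                for times in failure_times_per_system) / n
            for t in eval_times]
```

For example, three repairable units with failure times [1, 4], [2], and none give MCF values 1/3, 2/3 and 1.0 at t = 1, 2 and 4, respectively.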

2.
ABSTRACT

Warranty data can be used for estimating product reliability, identifying causes of failure and designing warranty policy. Based on two-dimensional warranty data, we utilize an accelerated failure time model to investigate the effect of usage rate on product degradation. The stochastic expectation-maximization algorithm is proposed to estimate parameters of the reliability model, considering both censored data and field data. Extensive simulation studies are used to validate the proposed method and to compare it with the maximum likelihood method. The utility of the results is demonstrated through real warranty data collected from an automobile manufacturer in China.

3.
Fatigue cracking is one of the major types of distress in asphalt mixtures and is caused by the accumulation of damage in pavement sections under repeated load applications. The fatigue endurance limit (EL) concept assumes a specific strain level, below which the damage in hot mix asphalt (HMA) is not cumulative. In other words, if the asphalt layer depth is controlled in a way that keeps the critical HMA flexural strain level below the EL, the fatigue life of the mixture can be extended significantly. This paper uses two common failure criteria, the traditional beam fatigue criterion and the simplified viscoelastic continuum damage model energy-based failure criterion (the so-called GR method), to evaluate the effect of different parameters, such as reclaimed asphalt pavement (RAP) content, binder content, binder modification and warm mix asphalt (WMA) additives, on the EL value. In addition, both failure criteria are employed to investigate the impacts of these parameters in terms of the fatigue life of the study mixtures. According to the findings, unlike an increase in RAP content, which has a negative effect on the mixtures’ fatigue resistance, a higher binder content and/or binder modification can significantly increase the EL value and extend the fatigue life, as previously shown by other researchers, whereas WMA additives do not significantly affect the mixtures’ fatigue behaviour. A comparison of the model simulation results with the field observations indicates that the GR method predicts the field performance more accurately than the traditional method.

4.
While cyber–physical system sciences are developing methods for studying reliability that span domains such as mechanics, electronics and control, there remains a lack of methods for investigating the impact of the environment on the system. External conditions such as flooding, fire or toxic gas may damage equipment and failing to foresee such possibilities will result in invalid worst-case estimates of the safety and reliability of the system. Even if single component failures are anticipated, abnormal environmental conditions may result in common cause failures that cripple the system. This paper proposes a framework for modeling interactions between a cyber–physical system and its environment. The framework is limited to environments consisting of spaces with clear physical boundaries, such as power plants, buildings, mines and urban underground infrastructures. The purpose of the framework is to support simulation-based risk analysis of an initiating event such as an equipment failure or flooding. The functional failure identification and propagation (FFIP) framework is extended for this purpose, so that the simulation is able to detect component failures arising from abnormal environmental conditions and vice versa: flooding could be caused by a failure in a pipe or valve component. As abnormal flow states propagate through the system and its environment, the goal of the simulation is to identify the system-wide cumulative effect of the initiating event and any related common cause failure scenario. FFIP determines this effect in terms of degradation or loss of the functionality of the system. The method is demonstrated with a nuclear reactor’s redundant coolant supply system.

5.
Abstract

In this work, failure loads and failure modes of single lap adhesive joints between composite laminates are investigated. To this aim, a coupled stress and energy criterion is applied and results are compared to numerical reference solutions using cohesive zone modeling and to experimental values from the literature. Possible failure modes are adhesive failure along the adherend/adhesive interface, adherend failure as intralaminar failure in the first ply closest to the adhesive layer, and interlaminar failure between the first and second ply. Suitable failure criteria addressing the different failure modes are implemented within the framework of the coupled criterion. The stress criterion is carried out in a pointwise or an averaged manner, called the point method or the line method, respectively. It is shown that two physically sound failure modes can only be predicted using the stress criterion in an averaged manner, since the pointwise evaluation does not allow the formation of certain types of cracks.

6.
Dependability tools are becoming indispensable for modeling and analyzing (critical) systems. However, the growing complexity of such systems calls for increasing sophistication of these tools. Dependability tools need to not only capture the complex dynamic behavior of the system components, but they must also be easy to use, intuitive, and computationally efficient. In general, current tools have a number of shortcomings including lack of modeling power, incapacity to efficiently handle general component failure distributions, and ineffectiveness in solving large models that exhibit complex dependencies between their components. We propose a novel reliability modeling and analysis framework based on the Bayesian network (BN) formalism. The overall approach is to investigate timed Bayesian networks and to find a suitable reliability framework for dynamic systems. We have applied our methodology to two example systems and preliminary results are promising. We have defined a discrete-time BN reliability formalism and demonstrated its capabilities from a modeling and analysis point of view. This research shows that a BN based reliability formalism is a powerful potential solution to modeling and analyzing various kinds of system component behaviors and interactions. Moreover, being based on the BN formalism, the framework is easy to use and intuitive for non-experts, and provides a basis for more advanced and useful analyses such as system diagnosis.

7.

We examined the effects that different levels of, and changes in, automation reliability have on users' trust of automated diagnostics aids. Participants were presented with a series of testing trials in which they diagnosed the validity of a system failure using only information provided to them by an automated diagnostic aid. The initial reliability of the aid was either 60, 80 or 100% reliable. However, for participants initially provided with the 60%-reliable aid, the accuracy of the aid increased to 80% half way through testing, whereas for those initially provided the 100%-reliable aid, the aid's reliability was reduced to 80%. Aid accuracy remained at 80% throughout testing for participants in the 80%-reliability group. Both subjective measures (i.e. perceived reliability of the aids and subjective confidence ratings) and objective measures of performance (concurrence with the aid's diagnosis and decision times) were examined. Results indicated that users of automated diagnostic aids were sensitive to different levels of aid reliabilities, as well as to subsequent changes in initial aid reliabilities. However, objective performance measures were related to, but not perfectly calibrated with, subjective measures of confidence and reliability estimates. These findings highlight the need to distinguish between automation trust as a psychological construct that can be assessed only through subjective measures and automation reliance that can only be defined in terms of performance data. A conceptual framework for understanding the relationship between trust and reliance is presented.

8.
Product reliability is a very important issue for the competitive strategy of industries. In order to estimate a product's reliability, parametric inferential methods are required to evaluate survival test data, which is a fairly expensive data source. Such costly information usually imposes additional compromises in the product development and new challenges to be overcome throughout the product's life cycle. However, manufacturers also keep field failure data for warranty and maintenance purposes, which can be a low‐cost data source for reliability estimation. Field‐failure data are very difficult to evaluate using parametric inferential methods due to their small and highly censored samples, quite often representing mixed modes of failure. In this paper a method for reliability estimation using field failure data is proposed. The proposal is based on the use of non‐parametric inferential methods, associated with resampling techniques to derive confidence intervals for the reliability estimates. Test results show the adequacy of the proposed method to calculate reliability estimates and their confidence intervals for different populations, including cases with highly right‐censored failure data. The method is shown to be particularly useful when the sampling distribution is not known, which happens to be the case in a large number of practical reliability evaluations. Copyright © 2006 John Wiley & Sons, Ltd.
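The resampling idea described here can be illustrated with a percentile bootstrap around a nonparametric reliability estimate. This is a sketch for the uncensored case only; the function name, the 95% level and the number of bootstrap replicates are our choices, not the paper's:

```python
import random

def bootstrap_reliability_ci(samples, t, n_boot=2000, alpha=0.05, seed=1):
    """Nonparametric reliability estimate R(t) = fraction of units
    surviving past time t, with a percentile-bootstrap confidence
    interval. Illustrative uncensored case only."""
    rng = random.Random(seed)
    n = len(samples)
    point = sum(x > t for x in samples) / n
    boots = []
    for _ in range(n_boot):
        resample = [samples[rng.randrange(n)] for _ in range(n)]
        boots.append(sum(x > t for x in resample) / n)
    boots.sort()
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return point, (lo, hi)
```

With censored field data, the point estimate would instead come from a Kaplan–Meier curve and the resampling would be applied to (time, status) pairs, but the percentile-interval mechanics stay the same.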

9.
The numerical modeling of dynamic failure mechanisms in solids due to fracture based on sharp crack discontinuities suffers in situations with complex crack topologies and demands the formulation of additional branching criteria. This drawback can be overcome by a diffusive crack modeling, which is based on the introduction of a crack phase field. Following our recent works on quasi‐static modeling of phase‐field‐type brittle fracture, we propose in this paper a computational framework for diffusive fracture for dynamic problems that allows the simulation of complex evolving crack topologies. It is based on the introduction of a local history field that contains a maximum reference energy obtained in the deformation history, which may be considered as a measure of the maximum tensile strain in the history. This local variable drives the evolution of the crack phase field. Its introduction provides a very transparent representation of the balance equation that governs the diffusive crack topology. In particular, it allows for the construction of a very robust algorithmic treatment for elastodynamic problems of diffusive fracture. Here, we extend the recently proposed operator split scheme from quasi‐static to dynamic problems. In a typical time step, it successively updates the history field, the crack phase field, and finally the displacement field. We demonstrate the performance of the phase field formulation of fracture by means of representative numerical examples, which show the evolution of complex crack patterns under dynamic loading. Copyright © 2012 John Wiley & Sons, Ltd.

10.
In this paper, a general form of bathtub-shaped hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man–machine interference, and (4) adaptation. The first item refers to the modeling of unpredictable failures as a Poisson process, i.e. it is represented by a constant. Cumulative damage emphasizes failures owing to strength deterioration, so the possibility of the system sustaining the normal operating load decreases with time. It depends on the failure probability, 1−R; this representation denotes the memory characteristics of the second failure cause. Man–machine interference may have a positive effect on the failure rate due to learning and correction, or a negative one resulting from inappropriate operating habits, etc. It is suggested that this item is correlated with the reliability, R, as well as the failure probability. Adaptation concerns continuous adjustment between mating subsystems. When a new system is put on duty, some hidden defects are exposed and eventually disappear; therefore, the reliability decays together with a decreasing failure rate, which is expressed as a power of reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); the overall failure behavior, governed by a number of parameters, is found by fitting the evidence data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and provides simpler and more effective parameter fitting than the usually adopted ‘bathtub’ procedures. Five examples of different types of failure mechanisms are used to validate the proposed model. Satisfactory results are found from the comparisons.
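The additive structure described in this abstract can be given a toy numerical form. The specific term shapes and all coefficients below are hypothetical (one plausible reading per mechanism named in the abstract, not the paper's actual model):

```python
def hazard_from_reliability(R, a=0.01, b=0.5, c=0.1, d=2.0, k=4.0):
    """Illustrative additive hazard h(R), one term per mechanism
    (coefficients and term shapes are hypothetical):
      a            - random failures (constant, Poisson-type)
      b * (1 - R)  - cumulative damage (grows with failure probability)
      c * R*(1-R)  - man-machine interference (depends on R and 1-R)
      d * R**k     - adaptation / early defects (power of reliability,
                     large when the system is new, i.e. R near 1)"""
    return a + b * (1 - R) + c * R * (1 - R) + d * R ** k
```

With these sample coefficients the curve is bathtub-shaped over the life of the system: high when R is near 1 (early life), dipping in mid-life, and rising again as R falls toward 0 (wear-out).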

11.
Reliability-based robust design optimization (RBRDO) is a crucial tool for life-cycle quality improvement. The Gaussian process (GP) model is an effective alternative modeling technique that is widely used in robust parameter design. However, few studies deal with reliability-based design problems using GP models. This article proposes a novel life-cycle RBRDO approach concerning response uncertainty under the framework of the GP modeling technique. First, the hyperparameters of the GP model are estimated by using the Gibbs sampling procedure. Second, the expected partial derivative expression is derived based on the GP modeling technique. Moreover, a novel failure risk cost function is constructed to assess the life-cycle reliability. Then, the quality loss function and confidence interval are constructed from simulated outputs to evaluate the robustness of optimal settings and response uncertainty, respectively. Finally, an optimization model integrating the failure risk cost function, quality loss function, and confidence interval analysis approach is constructed to find reasonable optimal input settings. Two case studies are given to illustrate the performance of the proposed approach. The results show that the proposed approach can make better trade-offs between the quality characteristics and reliability requirements by considering response uncertainty.

12.
Left censoring or left truncation occurs when specific failure information on machines is not available before a certain age. If the number of failures before a certain age is known, but not the actual failure times, we have left censoring. If neither the number of failures nor the times of failure are known, we have left truncation. A datacenter will typically include servers and storage equipment installed on different dates. However, data collection on failures and repairs may not begin on the installation date. Often, the capture of reliability data starts only after the initiation of a service contract on a particular date. Thus, such data may exhibit severe left censoring or truncation, since machines may have operated for considerable time periods without any reliability history being recorded. This situation is quite different from the notion of left censoring in non-repairable systems, which has been dealt with extensively in the literature. Parametric modeling methods are less intuitive when the data has severe left censoring. In contrast, non-parametric methods based on the Mean Cumulative Function (MCF), recurrence rate plots, and calendar time analysis are simple to use and can provide valuable insights into the reliability of repairable systems, even under severe left censoring or truncation. The techniques shown have been successfully applied at a large server manufacturer to quantify the reliability of computer servers at customer sites. In this discussion, the techniques are illustrated with actual field examples.

13.
Most systems experience both random shocks (hard failure) and performance degradation (soft failure) during their service span, and the dependence of the two competing failure processes has become a key issue. In this study, a novel dependent competing failure processes (DCFPs) model with a varying degradation rate is proposed. The comprehensive impact of random shocks, especially the effect of cumulative shock, is reasonably considered. Specifically, a shock will cause an abrupt degradation damage, and when the cumulative shock reaches a predefined threshold, the degradation rate will change. An analytical reliability solution is derived under the concept of first hitting time (FHT). Besides, a one-step maximum likelihood estimation method is established by constructing a comprehensive likelihood function. Finally, the reasonability of the closed-form reliability solution and the feasibility and effectiveness of the proposed DCFPs modeling methodology are demonstrated by a comparative simulation study.

14.
An approach is developed to locally estimate the failure probability of a system under various design values. Although it seems to require numerous reliability analysis runs to locally estimate the failure probability function, which is a function of the design variables, the approach only requires a single reliability analysis run. The approach can be regarded as an extension of that proposed by Au [Au SK. Reliability-based design sensitivity by efficient simulation. Computers and Structures 2005;83(14):1048–61], but it proposes a better framework in estimating the failure probability function. The key idea is to implement the maximum entropy principle in estimating the failure probability function. The resulting local failure probability function estimate is more robust; moreover, it is possible to find the confidence interval of the failure probability function as well as estimate the gradient of the logarithm of that function with respect to the design variables. The use of the new approach is demonstrated with several simulated examples. The results show that the new approach can effectively locally estimate the failure probability function and the confidence interval with one single Subset Simulation run. Moreover, the new approach is applicable when the dimension of the uncertainties is high and when the system is highly nonlinear. The approach should be valuable for reliability-based optimization and reliability sensitivity analysis.

15.
ABSTRACT

Reliability demonstration tests have important applications in reliability assurance activities to demonstrate product quality over time and safeguard companies’ market positions and competitiveness. With greatly increasing global market competition, conventional binomial reliability demonstration tests based on binary test outcomes (success or failure) at a single time point become insufficient for meeting consumers’ diverse requirements. This article proposes multi-state reliability demonstration tests (MSRDTs) for demonstrating reliability at multiple time periods or involving multiple failure modes. The design strategy for MSRDTs employs a Bayesian approach to allow incorporation of prior knowledge, which has the potential to reduce the test sample size. Simultaneous demonstration of multiple objectives can be achieved and critical requirements specified to avoid early/critical failures can be explicitly demonstrated to ensure high customer satisfaction. Two case studies are explored to demonstrate the proposed test plans for different objectives.

16.
This article develops reliability models for systems subject to two dependent competing failure processes, considering the correlation between the additional damage size on degradation in the soft failure process and the stress magnitude of the shock load in the hard failure process, both of which are caused by the same kth random shock. The generalized correlative reliability assessment model based on copulas is proposed, which is then extended to three different shock patterns: (1) δ‐shock, (2) m‐shock, and (3) m‐run shocks. Several statistical tasks are involved in the reliability modeling, including separating the total degradation amount, inferring the distribution of the continuous aging degradation at time t, and fitting a copula to the specific correlation. The developed reliability models are demonstrated on an application example of a micro‐electro‐mechanical system.

17.
In this paper, a bivariate replacement policy (n, T) for a cumulative shock damage process is presented that includes the concept of a cumulative repair cost limit. Arriving shocks are of two kinds. Each type-I shock causes a random amount of damage, and these damages are additive. When the total damage exceeds a failure level, the system goes into serious failure. A type-II shock causes a minor failure, which can be corrected by minimal repair. When a minor failure occurs, the repair cost is evaluated and minimal repair is executed if the accumulated repair cost is less than a predetermined limit L. The system is replaced at scheduled time T, at the n-th minor failure, or at serious failure. The long-term expected cost per unit time is derived using the expected costs as the optimality criterion. The minimum-cost policy is derived, and the existence and uniqueness of the optimal n* and T* are proved. This bivariate optimal replacement policy (n, T) is shown to be better than the optimal T* policy and the optimal n* policy.
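The cost criterion used in such policies can be illustrated numerically by renewal-reward simulation. The sketch below strips the model down to type-I shocks and scheduled replacement only (no type-II shocks, minimal repair, or cost limit L), and all parameter values are hypothetical:

```python
import random

def expected_cost_rate(T, lam=1.0, mean_damage=1.0, D=5.0,
                       c_replace=1.0, c_failure=5.0,
                       n_cycles=20000, seed=7):
    """Renewal-reward estimate of the long-run expected cost per unit
    time for a simplified age-replacement policy under cumulative
    shock damage. A cycle ends either at scheduled replacement time T
    (cost c_replace) or at serious failure, when accumulated damage
    first exceeds the level D (cost c_failure)."""
    rng = random.Random(seed)
    total_cost = total_time = 0.0
    for _ in range(n_cycles):
        t = damage = 0.0
        while True:
            t += rng.expovariate(lam)      # next type-I shock arrival
            if t >= T:                     # scheduled replacement first
                total_cost += c_replace
                total_time += T
                break
            damage += rng.expovariate(1.0 / mean_damage)
            if damage > D:                 # serious failure
                total_cost += c_failure
                total_time += t
                break
    return total_cost / total_time
```

Scanning T over a grid of candidate values and picking the minimizer mimics, in simulation, the analytical optimization of T* carried out in the paper.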

18.
Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life‐cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field‐failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures. Copyright © 2016 John Wiley & Sons, Ltd.

19.
Due to differences in environmental stress and system configuration, as well as the coupling between phases, a phased-mission system (PMS) shows significant disparity in the failure modes and mechanisms of components in different phases. From the perspective of failure mechanisms, an initially independent mechanism will affect, or be affected by, other failure mechanisms during a task, eventually leading to the failure of the system. Therefore, the coupling of failure mechanisms in PMS is considered in this paper, and a recursive quantitative analysis of coupling from the failure mechanism level to the system level and phase level is accomplished by means of the basic rules of reliability physics. Furthermore, taking a core circuit from an aircraft electronic system as an example, reliability modeling and analysis are performed. The results show that after considering the coupling relationship between failure mechanisms within each phase, the reliability and lifetime of components are reduced compared with the mechanism-independence situation; however, the cumulative failure probability of the system is increased. This reveals that it is very important to include the coupling relationship between failure mechanisms in the reliability analysis of PMS, which can improve the accuracy of the prediction.

20.
Reliability is a measure of how well a product will perform under a certain set of conditions for a specified amount of time, especially in field environments. In this paper, a reliability study of a computer numerical control (CNC) system is described. For this analysis, field failure data collected over the course of a year on approximately 20 CNC machine tools during their operating period in a manufacturing factory were analyzed. Based on the field failure data, the two‐parameter exponential distribution was found, using the chi‐squared test, to describe the time between failures of the CNC system best among many candidate distributions, including the Weibull, gamma, two‐parameter exponential, normal, and logistic. We discuss the reliability estimation of the CNC system based on the collected field failure data using the maximum likelihood estimate (MLE) and uniformly minimum variance unbiased estimate (UMVUE) methods. We also discuss the confidence intervals of the mean residual lifetime and the reliability function. The results show that the UMVUE method provides more accurate estimates of the reliability of the CNC system than the MLE. On the one hand, this finding seems obvious, because the UMVUE is not only an unbiased estimator but also a sufficient statistic with the smallest variance; on the other hand, it is not straightforward to obtain the UMVUE of a complex function, which the reliability function is in this case. This finding is important and encouraging because it indicates that the added complexity of the UMVUE parameter estimation is more than compensated by its ability to better evaluate and predict the reliability of the CNC system. Hence, we believe it is worth the effort to derive those parameter functions using the UMVUE method. Copyright © 2015 John Wiley & Sons, Ltd.
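The MLE/UMVUE distinction discussed in this abstract is easiest to see in the simpler one-parameter exponential case, where both estimators of R(t) have classical closed forms (a textbook result; this is not the paper's two-parameter estimator, and the function names are ours):

```python
import math

def reliability_mle(samples, t):
    """MLE of R(t) for a one-parameter exponential lifetime model:
    R_hat(t) = exp(-t / theta_hat), with theta_hat the sample mean."""
    theta_hat = sum(samples) / len(samples)
    return math.exp(-t / theta_hat)

def reliability_umvue(samples, t):
    """UMVUE of R(t) for the same model (classical closed form):
    (1 - t/S)^(n-1) for t < S, where S is the total time on test;
    0 for t >= S."""
    n, S = len(samples), sum(samples)
    return (1 - t / S) ** (n - 1) if t < S else 0.0
```

For small samples the two estimators differ noticeably (the MLE of R(t) is biased), while for large n they converge; the same trade-off motivates the comparison made for the CNC system.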
