Similar Literature
A total of 20 similar documents were found.
1.
Continuum damage mechanics (CDM) models have been employed successfully in the literature to predict the response of laminated composite materials. Some sophisticated models can include the effects of non-linear shear and transverse damage progression, plasticity and shear-transverse damage coupling. However, these models require non-standard test data for calibration that may not always be available to a modeller. In this two-part study, we examine the effect of neglecting plasticity parameters, and also the effect of neglecting both plasticity and shear-transverse coupling parameters, in simplified CDM models for predicting monotonic tensile strength. In Part I, we develop simplified versions of the CDM model and test their ability to accurately predict the failure response of angle-ply laminates. In Part II, we provide details of the experimental test series carried out to determine the input parameters for the models. It was found that neglecting plasticity requires some approximations in the damage development laws, but the resulting model predicts well the response of the angle-ply laminates tested under monotonic loading to failure. Neglecting shear-transverse coupling is acceptable for some materials.

2.
Artificial neural network (ANN)-based methods have been extensively investigated for equipment health condition prediction. However, effective condition-based maintenance (CBM) optimization methods utilizing ANN prediction information are currently not available due to two key challenges: (i) ANN prediction models typically give only a single remaining-life prediction value, and it is hard to quantify the uncertainty associated with the predicted value; (ii) simulation methods are generally used for evaluating the cost of CBM policies, while more accurate and efficient numerical methods, which are critical for performing CBM optimization, are not available. In this paper, we propose a CBM optimization approach based on ANN remaining-life prediction information, in which the above-mentioned key challenges are addressed. The CBM policy is defined by a failure probability threshold value. The remaining-life prediction uncertainty is estimated from the ANN lifetime prediction errors on the test set during the ANN training and testing processes. A numerical method is developed to evaluate the cost of the proposed CBM policy more accurately and efficiently. Optimization can then be performed to find the optimal failure probability threshold value corresponding to the lowest maintenance cost. The effectiveness of the proposed CBM approach is demonstrated using two simulated degradation data sets and a real-world condition monitoring data set collected from pump bearings. The proposed approach is also compared with benchmark maintenance policies and is found to outperform them. The proposed CBM approach can also be adapted to utilize information obtained using other prognostics methods. Copyright © 2012 John Wiley & Sons, Ltd.
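As a rough illustration of such a threshold policy (not the authors' implementation), the sketch below assumes the remaining-life prediction errors are approximately Gaussian, with a standard deviation estimated from the ANN test-set errors; all function names, thresholds and numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def prediction_error_std(true_rul, predicted_rul):
    """Std. dev. of remaining-life prediction errors on the ANN test set
    (Gaussian error assumption; bias ignored for simplicity)."""
    errors = np.asarray(true_rul) - np.asarray(predicted_rul)
    return errors.std(ddof=1)

def failure_probability(predicted_rul, sigma, horizon):
    """P(actual remaining life < horizon) under the Gaussian error assumption."""
    return norm.cdf(horizon, loc=predicted_rul, scale=sigma)

def cbm_decision(predicted_rul, sigma, inspection_interval, threshold):
    """Preventively replace if the probability of failing before the next
    inspection exceeds the policy's failure probability threshold."""
    p_fail = failure_probability(predicted_rul, sigma, inspection_interval)
    return "replace" if p_fail > threshold else "continue"

# Hypothetical numbers
sigma = prediction_error_std([120, 95, 80], [110, 100, 70])
print(cbm_decision(predicted_rul=60.0, sigma=sigma,
                   inspection_interval=20.0, threshold=0.05))
```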

3.
Some life tests are terminated with few or no failures. In such cases, a recent approach is to obtain degradation measurements of product performance that may contain useful information about product reliability. Generally, degradation paths of products are modeled by a nonlinear regression model with random coefficients. If the parameters of this model can be estimated, then the time-to-failure distribution can be estimated as well. In some cases, the patterns of a few degradation paths differ from those of most degradation paths in a test. Therefore, this study develops a weighted method based on a fuzzy clustering procedure for robust estimation of the underlying parameters and the time-to-failure distribution. The method is illustrated with a real data set. Copyright © 2000 John Wiley & Sons, Ltd.

4.
In this study, we propose a new class of flexible bivariate distributions for discrete random variables. The proposed class of distributions is based on the notion of the conditional failure rate for a discrete-type random variable. We derive general formulae for the joint distributions belonging to the proposed class which, unlike other discrete bivariate models already proposed in the literature, such as the well-known and most popular Holgate's bivariate Poisson distribution, can model both positive and negative dependence. We also discuss general statistical properties of the proposed class. Specific families of bivariate distributions can be generated from the general class proposed in this paper simply by specifying the 'baseline distributions'. Furthermore, specific discrete bivariate distributions belonging to the proposed class are applied to analyze three real data sets, and the results are compared with those obtained from conventional models. Copyright © 2017 John Wiley & Sons, Ltd.

5.
Exposure assessment models are deterministic models derived from physical–chemical laws. In real workplace settings, chemical concentration measurements can be noisy and indirectly measured. In addition, inference on important parameters such as generation and ventilation rates is usually of interest, since these are difficult to obtain directly. In this article, we outline a flexible Bayesian framework for parameter inference and exposure prediction. In particular, we devise Bayesian state space models by discretizing the differential equation models and incorporating information from observed measurements and expert prior knowledge. At each time point a new, noisy measurement becomes available, so the physical model and the available measurements are combined to obtain a more accurate state estimate; this step is known as filtering. We consider Monte Carlo sampling methods for parameter estimation and inference under nonlinear and non-Gaussian assumptions. The performance of the different methods is studied on computer-simulated and controlled laboratory-generated data. We consider some commonly used exposure models representing different physical hypotheses. Supplementary materials for this article are available online.
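The abstract does not specify the exposure model; a common choice is the well-mixed one-box model dC/dt = G/V − (Q/V)·C. The sketch below discretizes that model and runs a simple bootstrap particle filter in which each particle also carries static parameter values (G, Q) — a simplification that suffers from particle degeneracy in practice. All parameter values are hypothetical, and this is not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-box exposure model (illustrative assumption):
#   dC/dt = G/V - (Q/V) * C,   y_t = C_t + measurement noise
V, dt, T = 20.0, 1.0, 60             # room volume (m^3), time step (min), horizon
true_G, true_Q = 50.0, 4.0           # generation rate, ventilation rate (hypothetical)

# Simulate a "true" concentration trajectory and noisy observations
C = np.zeros(T)
for t in range(1, T):
    C[t] = C[t-1] + dt * (true_G / V - true_Q / V * C[t-1])
y = C + rng.normal(0.0, 0.3, size=T)

# Bootstrap particle filter with static parameters carried in each particle
N = 5000
G = rng.uniform(10.0, 100.0, N)      # prior on generation rate
Q = rng.uniform(1.0, 10.0, N)        # prior on ventilation rate
x = np.zeros(N)                      # initial concentration per particle
sigma_proc, sigma_obs = 0.05, 0.3

for t in range(1, T):
    # propagate each particle through the discretized ODE plus process noise
    x = x + dt * (G / V - Q / V * x) + rng.normal(0.0, sigma_proc, N)
    # weight by the likelihood of the new measurement (filtering step)
    w = np.exp(-0.5 * ((y[t] - x) / sigma_obs) ** 2)
    w /= w.sum()
    # multinomial resampling
    idx = rng.choice(N, size=N, p=w)
    x, G, Q = x[idx], G[idx], Q[idx]

print("posterior mean G:", G.mean(), " posterior mean Q:", Q.mean())
print("filtered concentration at final time:", x.mean())
```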

6.
Product reliability is a very important issue for the competitive strategy of industries. In order to estimate a product's reliability, parametric inferential methods are required to evaluate survival test data, which is a fairly expensive data source. Such costly information usually imposes additional compromises in product development and new challenges to be overcome throughout the product's life cycle. However, manufacturers also keep field failure data for warranty and maintenance purposes, which can be a low-cost data source for reliability estimation. Field failure data are very difficult to evaluate using parametric inferential methods due to their small and highly censored samples, quite often representing mixed modes of failure. In this paper a method for reliability estimation using field failure data is proposed. The proposal is based on the use of non-parametric inferential methods, associated with resampling techniques, to derive confidence intervals for the reliability estimates. Test results show the adequacy of the proposed method for calculating reliability estimates and their confidence intervals for different populations, including cases with highly right-censored failure data. The method is shown to be particularly useful when the sampling distribution is not known, which is the case in a large number of practical reliability evaluations. Copyright © 2006 John Wiley & Sons, Ltd.
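The abstract does not name the specific non-parametric estimator or resampling scheme; a common combination for right-censored field data is the Kaplan–Meier estimator with a percentile bootstrap, sketched below on hypothetical data.

```python
import numpy as np

rng = np.random.default_rng(1)

def km_survival(times, events, t_eval):
    """Kaplan-Meier estimate of S(t_eval) for right-censored data
    (events: 1 = failure, 0 = censored)."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n_at_risk, surv = len(times), 1.0
    for t, e in zip(times, events):
        if t > t_eval:
            break
        if e == 1:
            surv *= 1.0 - 1.0 / n_at_risk   # one KM factor per observed failure
        n_at_risk -= 1
    return surv

def bootstrap_ci(times, events, t_eval, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for S(t_eval)."""
    times, events = np.asarray(times), np.asarray(events)
    n, estimates = len(times), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)         # resample units with replacement
        estimates.append(km_survival(times[idx], events[idx], t_eval))
    return tuple(np.quantile(estimates, [alpha / 2, 1 - alpha / 2]))

# Hypothetical field data: 1 = failure, 0 = still in service (right-censored)
t = [120, 340, 560, 800, 800, 950, 1200, 1500, 1500, 1500]
e = [1,   1,   0,   1,   0,   1,   0,    0,    0,    0]
print("R(1000 h) ≈", km_survival(t, e, 1000), "CI:", bootstrap_ci(t, e, 1000))
```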

7.
A competing risks phenomenon arises in industrial life tests, where multiple types of failure determine the working duration of a unit. To model dependence among marginal failure times, copula models and frailty models have been developed for competing risks failure time data. In this paper, we propose a frailty-copula model, which is a hybrid model including both a frailty term (for heterogeneity among units) and a copula function (for dependence between failure times). We focus on models that are useful for investigating the reliability of marginal failure times that are Weibull distributed. Furthermore, we develop likelihood-based inference methods for competing risks data, including accelerated failure time models. We also develop a model-diagnostic procedure to assess the adequacy of the proposed model for a given dataset. Simulations are conducted to demonstrate the operational performance of the proposed methods, and a real dataset is analyzed for illustration. We provide an R package, “gammaGumbel”, so that users can apply the suggested statistical methods to their data.
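The following sketch is not the paper's full frailty-copula (gamma–Gumbel) model and does not use its R package; it only illustrates, with hypothetical parameters, how a shared gamma frailty induces dependence between Weibull-distributed competing failure times.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_frailty_competing_risks(n, shape, scale, frailty_var):
    """Simulate dependent competing risks via a shared gamma frailty acting
    multiplicatively on Weibull cause-specific hazards.
    shape, scale: per-cause Weibull parameters; frailty_var: Var(Z), with E[Z] = 1."""
    k = 1.0 / frailty_var
    Z = rng.gamma(k, 1.0 / k, size=n)            # frailty: mean 1, variance frailty_var
    shape, scale = np.asarray(shape), np.asarray(scale)
    # Conditional survival S_j(t | Z) = exp(-Z (t/scale_j)^shape_j)
    # => T_j = scale_j * (E_j / Z)^(1/shape_j) with E_j ~ Exp(1)
    E = rng.exponential(size=(n, len(shape)))
    T = scale * (E / Z[:, None]) ** (1.0 / shape)
    time = T.min(axis=1)                         # observed (system) failure time
    cause = T.argmin(axis=1)                     # which failure mode occurred
    return time, cause

time, cause = simulate_frailty_competing_risks(
    n=1000, shape=[1.5, 2.2], scale=[900.0, 1200.0], frailty_var=0.5)
print("cause shares:", np.bincount(cause) / len(cause))
```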

8.
In this paper, we investigate a joint modeling method for hard failures where both degradation signals and time-to-event data are available. A mixed-effects model is used to model the degradation signals, and an extended hazard model is used for the time-to-event data. The extended hazard model is a general model that includes two well-known hazard rate models, the Cox proportional hazards model and the accelerated failure time model, as special cases. A two-stage estimation approach is used to obtain the model parameters, based on which the remaining useful life of an in-service unit can be predicted. The performance of the method is demonstrated through both simulation studies and a real case study.

9.
The computation of elastic continua is widely used in today's engineering practice, and finite element models yield a reasonable approximation of the physically observed behaviour. In contrast, the failure of materials due to overloading can be seen as a sequence of discontinuous effects finally leading to a system failure. Until now, it has not been possible to sufficiently predict this process with numerical simulations. It has recently been shown that discrete models such as lattice spring models are a promising alternative to finite element models for computing the breakdown of materials due to static overstress and fatigue. In this paper, we address one of the downsides of current lattice spring models: the need for a periodic mesh, which leads to a mesh-induced anisotropy of material failure in simulations. We show how to derive irregular cells that still behave as part of a homogeneous continuum irrespective of their shape and which should be able to eliminate mesh-induced anisotropy. In addition, no restraints concerning the material stiffness tensor are introduced, which allows the simulation of non-isotropic materials. Finally, we compare the elastic response of the presented model with existing lattice spring models using mechanical systems with a known analytical stress field. Copyright © 2016 John Wiley & Sons, Ltd.

10.
For a period of mission time, only zero-failure data can be obtained for high-quality, long-life products. In zero-failure-data reliability assessment, the point estimates and confidence interval estimates of the distribution parameters cannot be obtained simultaneously by current reliability assessment models, and the credibility of the assessment results may be reduced if they are obtained at the same time. A new model is proposed in this paper to address this consistency problem. In the proposed model, the point estimates of reliability are obtained from the lifetime probability distribution derived with the matching distribution curve method, while the confidence interval estimates of reliability are obtained by applying the parametric bootstrap method to new samples generated from that lifetime probability distribution. Analysis of zero-failure data from torque motors after real operation shows that the new model not only meets the requirements of reliability assessment but also improves the accuracy of the reliability interval estimation.
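A minimal sketch of the parametric-bootstrap step only, assuming the point-estimation step (the matching-distribution-curve fit, not reproduced here) has already yielded a Weibull lifetime distribution; the parameter values, sample size and mission time are hypothetical.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(3)

beta_hat, eta_hat = 2.0, 5000.0        # hypothetical fitted Weibull shape and scale (hours)
n, t_mission = 20, 1000.0              # original sample size and mission time

def reliability(beta, eta, t):
    return float(weibull_min.sf(t, c=beta, scale=eta))

# Parametric bootstrap: resample lifetimes from the fitted distribution,
# refit the distribution, and collect the resulting reliability estimates.
boot_R = []
for _ in range(2000):
    sample = eta_hat * (-np.log(rng.uniform(size=n))) ** (1.0 / beta_hat)
    b, _, e = weibull_min.fit(sample, floc=0)      # refit with location fixed at 0
    boot_R.append(reliability(b, e, t_mission))

lo, hi = np.quantile(boot_R, [0.05, 0.95])
print(f"point estimate R({t_mission:.0f} h) = {reliability(beta_hat, eta_hat, t_mission):.4f}")
print(f"90% bootstrap interval: [{lo:.4f}, {hi:.4f}]")
```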

11.
Degradation tests are alternative approaches to lifetime tests and accelerated lifetime tests in reliability studies. Based on the degradation process of a product quality characteristic over time, degradation tests provide enough information to estimate the time-to-failure distribution. Several estimation methods, namely the analytical, the numerical and the approximate method, can be used to obtain the time-to-failure distribution; they are chosen according to the complexity of the degradation model used in the data analysis. An example of the application and analysis of degradation tests is presented in this paper to characterize the durability of a product and to compare the various estimation methods for the time-to-failure distribution. The example refers to a degradation process in an automobile tyre and was carried out to estimate the average distance covered and some percentiles of interest. Copyright © 2004 John Wiley & Sons, Ltd.

12.
In this article, the authors present a general methodology for age-dependent reliability analysis of degrading or ageing components, structures and systems. The methodology is based on Bayesian methods and inference, in particular their ability to incorporate prior information, and on the idea that ageing can be viewed as an age-dependent change of beliefs about reliability parameters (mainly the failure rate): beliefs change not only because new failure data or other information become available with time, but also continuously, as time passes and beliefs evolve. The main objective of this article is to show clearly how practitioners can apply Bayesian methods to risk and reliability analysis when ageing phenomena must be considered. The methodology describes step-by-step failure rate analysis of ageing components, from Bayesian model building to its verification and generalization with Bayesian model averaging, which, as the authors suggest, could serve as an alternative to various goodness-of-fit assessment tools and as a universal tool for coping with various sources of uncertainty. The proposed methodology is able to deal with sparse and rare failure events, as is the case for electrical components, piping systems and various other systems with high reliability. In a case study of electrical instrumentation and control components, the proposed methodology was applied to analyse age-dependent failure rates together with the treatment of uncertainty due to age-dependent model selection. Copyright © 2013 John Wiley & Sons, Ltd.

13.
Degradation testing is an effective tool for evaluating the reliability of highly reliable products. Many data collection methods have been proposed in the literature. Some of these assume that only degradation values are recorded, and some assume that failure times are also available. However, most research has been devoted to proposing parameter estimates or to designing degradation tests for a specific sampling method. The differences between these commonly used methods have rarely been investigated, and the lack of comparisons between sampling methods has made it difficult to select an appropriate means of collecting data. In addition, it remains unclear whether obtaining extra information (e.g., exact failure times) is useful for making statistical inferences. In this paper, we assume that the degradation path of a product follows a Wiener degradation process, and we summarize several data collection methods. Maximum likelihood estimates of the parameters and their variance-covariance matrices are derived for each type of data. Several commonly used optimization criteria for designing a degradation test are used to compare estimation efficiency. Sufficient conditions under which one method can be better than the others are proposed, and upper bounds on estimation efficiency are also investigated. Our results provide useful guidelines for choosing a sampling method, as well as its design variables, to obtain efficient estimation. A simulated example based on real light-emitting diode data is studied to verify our theoretical results under a moderate sample size scenario.
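For the degradation-only sampling case, the maximum likelihood estimates of a Wiener process X(t) = μt + σB(t) have a simple closed form, since the increments satisfy ΔX ~ N(μΔt, σ²Δt). The sketch below computes them from hypothetical inspection data; it does not reproduce the paper's efficiency comparisons.

```python
import numpy as np

def wiener_mle(times, values):
    """Closed-form MLE of drift mu and diffusion sigma^2 for a Wiener degradation
    process X(t) = mu*t + sigma*B(t) observed at the given inspection times.
    `times`, `values`: arrays of shape (n_units, n_inspections), time 0 included."""
    dt = np.diff(times, axis=1).ravel()
    dx = np.diff(values, axis=1).ravel()
    mu_hat = dx.sum() / dt.sum()
    sigma2_hat = np.mean((dx - mu_hat * dt) ** 2 / dt)
    return mu_hat, sigma2_hat

# Hypothetical inspection data for 3 units measured at the same time points (hours)
t = np.tile(np.arange(0.0, 600.0, 100.0), (3, 1))
x = np.array([[0, 1.1, 2.3, 3.2, 4.6, 5.4],
              [0, 0.8, 1.9, 3.1, 3.9, 5.1],
              [0, 1.3, 2.2, 3.5, 4.4, 5.6]])
mu, s2 = wiener_mle(t, x)
print(f"drift = {mu:.4f} per hour, diffusion = {s2:.4f}")

# Mean time to reach a failure threshold D (first-passage time is inverse Gaussian)
D = 10.0
print(f"estimated mean lifetime ≈ {D / mu:.0f} hours")
```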

14.
Degradation experiments are usually used to assess the lifetime distribution of highly reliable products, which are not likely to fail under traditional life tests or accelerated life tests. In such cases, if there exist product characteristics whose degradation over time can be related to reliability, then collecting 'degradation data' can provide information about product reliability. In general, degradation data are modeled by a nonlinear regression model with random coefficients. If the parameters of this model can be estimated, then the failure-time distribution can be estimated. Three basic methods are available for estimating those parameters, namely the analytical, the numerical and the approximate method; they are chosen according to the complexity of the degradation path model used in the analysis. In this paper, the numerical and the approximate methods are compared in a simulation study, assuming a simple linear degradation path model. A comparison with traditional failure-time analysis is also performed. The mean-squared error of the estimated 100pth percentile of the lifetime distribution is evaluated for each of the approaches. The approaches are then applied to a real degradation data set. Copyright © 2008 John Wiley & Sons, Ltd.
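As a rough illustration of the approximate method for a linear degradation path, the sketch below fits a straight line to each unit, extrapolates to the failure threshold to obtain pseudo failure times, and then fits a lifetime distribution to them; the data, threshold and lognormal choice are hypothetical, not from the paper.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(4)

def pseudo_failure_times(times, paths, threshold):
    """Approximate method: fit a straight line to each unit's degradation path
    and extrapolate it to the failure threshold to get a pseudo failure time."""
    pseudo = []
    for t, y in zip(times, paths):
        slope, intercept = np.polyfit(t, y, 1)
        pseudo.append((threshold - intercept) / slope)
    return np.array(pseudo)

# Hypothetical linear degradation data (4 units, failure threshold = 10 units of wear)
t = [np.arange(0.0, 500.0, 100.0)] * 4                    # inspection times per unit
slopes = rng.normal(0.018, 0.002, 4)                      # random unit-to-unit slopes
y = [s * ti + rng.normal(0, 0.2, len(ti)) for s, ti in zip(slopes, t)]
pft = pseudo_failure_times(t, y, threshold=10.0)

# Fit a lognormal lifetime distribution to the pseudo failure times
shape, loc, scale = lognorm.fit(pft, floc=0)
print("pseudo failure times:", np.round(pft, 1))
print("estimated B10 life:", round(lognorm.ppf(0.10, shape, loc, scale), 1))
```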

15.
Reliability growth tests are often used to achieve a target reliability for complex systems via multiple test-fix stages with limited testing resources. Such tests can be sped up via accelerated life testing (ALT), where test units are exposed to harsher-than-normal conditions. In this paper, a Bayesian framework is proposed to analyze ALT data in reliability growth. In particular, a complex system with components that have multiple competing failure modes is considered, and the time to failure of each failure mode is assumed to follow a Weibull distribution. We also assume that the accelerated condition has a fixed time-scaling effect on each of the failure modes. In addition, a corrective action with fixed ineffectiveness can be performed at the end of each stage to reduce the occurrence of each failure mode. Under the Bayesian framework, a general model is developed to handle uncertainty in all model parameters, and several special cases with some parameters known are also studied. A simulation study is conducted to assess the performance of the proposed models in estimating the final reliability of the system and to study the effects of unbiased and biased prior knowledge on the system-level reliability estimates.
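The sketch below only simulates the kind of data-generating assumptions described in the abstract (Weibull competing failure modes, a time-scaling acceleration factor, and a corrective-action effect represented here as a scale inflation); it is not the paper's Bayesian estimation model, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_stage(n_units, shapes, scales, accel, fix_effectiveness):
    """Simulate one test stage of a system with independent Weibull failure modes.
    accel: time-scaling factor of the accelerated condition (test clock runs faster).
    fix_effectiveness: assumed per-mode reduction in occurrence from earlier
    corrective actions, represented as a stretch of the Weibull scale."""
    shapes, scales = np.asarray(shapes), np.asarray(scales)
    eff_scales = scales / (1.0 - np.asarray(fix_effectiveness))
    # Weibull sampling by inverse transform: T = eta * (-ln U)^(1/beta)
    U = rng.uniform(size=(n_units, len(shapes)))
    T_use = eff_scales * (-np.log(U)) ** (1.0 / shapes)
    T_test = T_use / accel                     # map use time to accelerated test time
    return T_test.min(axis=1), T_test.argmin(axis=1)

fail_times, modes = simulate_stage(
    n_units=50, shapes=[1.3, 2.0], scales=[2000.0, 3500.0],
    accel=4.0, fix_effectiveness=[0.3, 0.0])
print("median system failure time in the test:", np.median(fail_times))
print("failure-mode counts:", np.bincount(modes, minlength=2))
```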

16.
Systems designed for high availability and fault tolerance are often configured as a series combination of redundant subsystems. When a unit of a subsystem fails, the system remains operational while the failed unit is repaired; however, if too many units in a subsystem fail concurrently, the system fails. Under conditions usually met in practical situations, we show that the reliability and availability of such systems can be accurately modeled by representing each redundant subsystem with a constant, 'effective' failure rate equal to the inverse of the subsystem mean time to failure (MTTF). The approximation model is surprisingly accurate, with an error on the order of the square of the ratio of mean time to repair to mean time to failure (MTTR/MTTF), and it has wide applicability for commercial, high-availability and fault-tolerant computer systems. The effective subsystem failure rates can be used to: (1) evaluate the system and subsystem reliability and availability; (2) estimate the system MTTF; and (3) provide a basis for the iterative analysis of large complex systems. Some observations from renewal theory suggest that the approximate models can be used even when the unit failure rates are not constant and when the redundant units are not homogeneous. Copyright © 2004 John Wiley & Sons, Ltd.
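The abstract gives the recipe (effective rate = 1/MTTF, error of order (MTTR/MTTF)²) but not the subsystem formulas. The sketch below applies the idea to hypothetical 1-out-of-2 repairable subsystems, using the standard Markov result MTTF = (3λ + μ)/(2λ²) for two identical units with a single repair crew; the rates are made up.

```python
# Effective failure rate of a 1-out-of-2 repairable subsystem, and the series
# combination of several such subsystems. The duplex MTTF formula below is the
# standard Markov result, not necessarily the paper's derivation.

def duplex_effective_failure_rate(lam, mu):
    """lam: unit failure rate (1/h), mu: unit repair rate (1/h)."""
    mttf = (3.0 * lam + mu) / (2.0 * lam ** 2)   # subsystem mean time to failure
    return 1.0 / mttf                            # constant 'effective' failure rate

# Three redundant subsystems in series (hypothetical CPU / storage / network pairs)
unit_rates = [(1e-4, 0.5), (2e-4, 0.25), (5e-5, 0.5)]    # (lambda, mu) per subsystem
lam_eff = [duplex_effective_failure_rate(l, m) for l, m in unit_rates]

system_rate = sum(lam_eff)                       # series combination adds failure rates
print("effective subsystem rates:", [f"{x:.3e}" for x in lam_eff])
print(f"approximate system MTTF: {1.0 / system_rate:.3e} hours")
# The relative error of the constant-rate approximation is on the order of
# (MTTR/MTTF)^2 for each subsystem, i.e. roughly ((1/mu) * lam_eff)^2.
```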

17.
An engineering company manufacturing high-precision sensors had accumulated huge historical databases of information on a type of sensor that had been tested. The aim of the company was not to use this historical data to improve estimation of future individual sensor parameters, but rather to use it to reduce the number of measurements needed per sensor while guaranteeing a required level of accuracy. In this paper, we show how this can be done using Bayesian ideas, and we introduce novel theory for linear regression models that determines how the reduction in individual sensor measurements can be achieved. Specifically, for estimating the parameters of closely related sensors, an estimate can be thought of as comprising a global component, that is, the mean over all sensors, and a local component, which is a shift from that mean. In a Bayesian framework, the historical data provide the global component, and hence all that is needed from an individual sensor is the local component. Non-Bayesian estimation methods require both components, and hence many measurements; with Bayesian methods, only the local fit is needed, and hence fewer measurements per sensor are required. We provide the supporting theory and demonstrate it on a real-life application with real data. Copyright © 2014 John Wiley & Sons, Ltd.
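A minimal sketch of the global-plus-local idea using standard conjugate Bayesian linear regression (not the paper's specific theory): the historical fleet supplies an informative prior on the regression coefficients, and only a few new measurements are needed to obtain the sensor-specific posterior. All numbers are hypothetical.

```python
import numpy as np

def bayes_linreg_posterior(X, y, prior_mean, prior_cov, noise_var):
    """Conjugate Bayesian linear regression: the historical sensors supply the
    'global' prior (prior_mean, prior_cov); a handful of new measurements
    (X, y) supplies the 'local' correction."""
    prior_prec = np.linalg.inv(prior_cov)
    post_cov = np.linalg.inv(prior_prec + X.T @ X / noise_var)
    post_mean = post_cov @ (prior_prec @ prior_mean + X.T @ y / noise_var)
    return post_mean, post_cov

# Hypothetical global prior learned from thousands of historical sensors
prior_mean = np.array([0.02, 1.50])          # [offset, sensitivity]
prior_cov = np.diag([0.01**2, 0.05**2])
noise_var = 0.02**2

# Only three calibration measurements on the new sensor
stimulus = np.array([0.0, 5.0, 10.0])
X = np.column_stack([np.ones_like(stimulus), stimulus])
y = np.array([0.03, 7.82, 15.55])

m, S = bayes_linreg_posterior(X, y, prior_mean, prior_cov, noise_var)
print("posterior offset/sensitivity:", np.round(m, 4))
print("posterior std devs:", np.round(np.sqrt(np.diag(S)), 4))
```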

18.
To keep up with the speed of globalization and growing customer demands for more technology-oriented products, modern systems are becoming increasingly complex. This complexity gives rise to unpredictable failure patterns. While there are a number of well-established failure analysis (physics-of-failure) models for individual components, these models do not hold for complex systems, whose failure behaviors may be quite different. Failure analysis of individual components does consider environmental interactions but is unable to capture the effects of system interactions on failure behavior, because such models are based on the assumption of independent failure mechanisms. Dependency relationships and interactions of components in a complex system may give rise to new types of failures that are not considered during the individual failure analysis of a component. This paper presents a general framework for failure modes and effects analysis (FMEA) to capture and analyze component interaction failures. The advantage of the proposed methodology is that it identifies and analyzes the system failure modes arising from the interactions between components. An example is presented to demonstrate the application of the proposed framework for a specific product architecture (PA) that captures interaction failures between different modules; however, the framework is generic and can also be used with other types of PA. Copyright © 2007 John Wiley & Sons, Ltd.

19.
In many real-life applications, the quality of products from a process is monitored through a functional relationship between a response variable and one or more explanatory variables. In these applications, profile monitoring methodologies are used to check the stability of this relationship over time. In Phase I of profile monitoring, historical data points that can be represented by curves (or profiles) are collected. In this article, two procedures are proposed for detecting outlying profiles in Phase I data by incorporating local linear kernel smoothing within the framework of nonparametric mixed-effects models. We introduce a stepwise algorithm based on a multiple-testing viewpoint. Our simulation results for various linear and nonlinear profiles show the superior efficiency of the proposed monitoring procedures over some existing techniques in the literature. To illustrate the implementation of the proposed methods in Phase I profile monitoring, we apply them to a vertical density profile dataset.
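As background for the smoothing building block only (not the paper's mixed-effects model or its stepwise outlier test), the sketch below implements a local linear kernel smoother with a Gaussian kernel and applies it to one hypothetical noisy profile.

```python
import numpy as np

def local_linear_smooth(x, y, x_grid, bandwidth):
    """Local linear kernel smoother with a Gaussian kernel: at each grid point,
    fit a weighted straight line and return its intercept as the fitted value."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    fitted = np.empty(len(x_grid))
    for j, x0 in enumerate(x_grid):
        u = (x - x0) / bandwidth
        w = np.exp(-0.5 * u ** 2)                    # Gaussian kernel weights
        X = np.column_stack([np.ones_like(x), x - x0])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        fitted[j] = beta[0]                          # intercept = estimate at x0
    return fitted

# Hypothetical profile: one curve observed with noise
rng = np.random.default_rng(7)
x = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.15, x.size)
smooth = local_linear_smooth(x, y, x_grid=x, bandwidth=0.08)
print("residual sum of squares:", round(float(np.sum((y - smooth) ** 2)), 3))
```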

20.
In electrical impedance tomography (EIT), an approximation of the internal resistivity distribution is computed based on knowledge of the injected currents and the voltages measured on the surface of the body. The conventional approach is to inject several different current patterns and use the associated data for the reconstruction of a single distribution. This is an ill-posed inverse problem. In some applications the resistivity changes may be so fast that the target changes between the injections of the current patterns, so the data do not correspond to a single target distribution. In these cases traditional reconstruction methods yield severely blurred resistivity estimates. We have earlier proposed formulating the EIT problem as an augmented system-theoretic state estimation problem; the reconstruction problem can then be solved with Kalman filter and Kalman smoother algorithms. In this paper, we use the so-called fixed-lag smoother to solve the dynamic EIT reconstruction problem. We show that the data storage difficulties associated with the previously used fixed-interval smoother can be avoided using the fixed-lag smoother. The proposed methods are evaluated with simulated measurements and real data. Copyright © 2001 John Wiley & Sons, Ltd.
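The EIT forward model is beyond a short sketch; the code below only illustrates the fixed-lag smoothing idea on a generic linear-Gaussian state-space model, implemented by augmenting the state with its recent past and running an ordinary Kalman filter (one standard construction, not necessarily the authors' algorithm). The scalar "resistivity" example is hypothetical.

```python
import numpy as np

def fixed_lag_smoother(F, H, Q, R, ys, lag, x0, P0):
    """Fixed-lag smoother built by augmenting the state with its `lag` most
    recent past values and running a standard Kalman filter on the result.
    Returns, for each time t >= lag, the smoothed estimate of x_{t-lag}."""
    n = F.shape[0]
    N = n * (lag + 1)
    # Augmented transition: the new head evolves by F, older values shift down
    Fa = np.zeros((N, N)); Fa[:n, :n] = F; Fa[n:, :-n] = np.eye(n * lag)
    Qa = np.zeros((N, N)); Qa[:n, :n] = Q
    Ha = np.zeros((H.shape[0], N)); Ha[:, :n] = H

    z = np.concatenate([x0] * (lag + 1))       # crude initialisation (sketch only)
    P = np.kron(np.eye(lag + 1), P0)
    smoothed = []
    for t, y in enumerate(ys):
        # predict
        z = Fa @ z
        P = Fa @ P @ Fa.T + Qa
        # update with the new measurement
        S = Ha @ P @ Ha.T + R
        K = P @ Ha.T @ np.linalg.inv(S)
        z = z + K @ (y - Ha @ z)
        P = (np.eye(N) - K @ Ha) @ P
        if t >= lag:
            smoothed.append(z[-n:].copy())     # estimate of x_{t-lag} given y_1..y_t
    return np.array(smoothed)

# Tiny scalar example: a random-walk "resistivity" tracked from noisy observations
rng = np.random.default_rng(9)
truth = np.cumsum(rng.normal(0, 0.1, 100))
obs = truth + rng.normal(0, 0.5, 100)
est = fixed_lag_smoother(F=np.eye(1), H=np.eye(1), Q=np.eye(1) * 0.01,
                         R=np.eye(1) * 0.25, ys=obs.reshape(-1, 1),
                         lag=5, x0=np.zeros(1), P0=np.eye(1))
print("smoothed estimates shape:", est.shape)
```

Because only the last `lag + 1` states are carried, the storage requirement stays fixed regardless of the length of the measurement sequence, which is the advantage over a fixed-interval smoother noted in the abstract.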
