Similar Documents
20 similar documents retrieved (search time: 62 ms)
1.
Sensitivity or importance analysis has been widely used for identifying system weaknesses and supporting system improvement and maintenance activities. Despite the rich literature on the sensitivity analysis of many mission-critical and safety-critical systems, no existing work has been devoted to wireless sensor networks (WSN). In this paper, we first analyze link and node importance with respect to the infrastructure communication reliability of WSN systems. Binary decision diagram (BDD)-based algorithms are implemented to evaluate and compare three importance measures: the structural importance measure, Birnbaum's measure, and the criticality importance measure. The effects of node degree, choice of the destination node, data delivery models, as well as mission time on the importance analysis results are investigated through examples. Results from this work can facilitate the design, deployment, and maintenance of reliable WSN for critical applications. Copyright © 2015 John Wiley & Sons, Ltd.
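The three measures named in this abstract have simple definitions that a small script can make concrete. The sketch below is a toy illustration only, not the paper's BDD-based implementation: it uses a hypothetical 3-component structure function and made-up reliabilities, and evaluates Birnbaum's measure as R(1_i, p) - R(0_i, p), the criticality importance in its failure-oriented form I_B(i)*(1 - p_i)/(1 - R), and structural importance as the fraction of states of the remaining components in which component i is critical.

```python
# Toy enumeration sketch (not the paper's BDD algorithm); all values hypothetical.
import math
from itertools import product

def phi(x):                          # x = (x1, x2, x3); 1 = working, 0 = failed
    return int(x[0] and (x[1] or x[2]))

p = [0.9, 0.8, 0.7]                  # hypothetical component reliabilities

def reliability(p):
    return sum(phi(x) * math.prod(q if xi else 1 - q for q, xi in zip(p, x))
               for x in product([0, 1], repeat=3))

R = reliability(p)
for i in range(3):
    r_up = reliability([1.0 if j == i else pj for j, pj in enumerate(p)])
    r_dn = reliability([0.0 if j == i else pj for j, pj in enumerate(p)])
    birnbaum = r_up - r_dn                              # I_B(i) = dR/dp_i
    criticality = birnbaum * (1 - p[i]) / (1 - R)       # failure-oriented form
    # structural importance: fraction of states of the other components in
    # which the state of component i alone decides the system state
    others = [j for j in range(3) if j != i]
    crit = 0
    for y in product([0, 1], repeat=2):
        state = [0, 0, 0]
        for j, yj in zip(others, y):
            state[j] = yj
        up, dn = list(state), list(state)
        up[i], dn[i] = 1, 0
        crit += phi(up) != phi(dn)
    structural = crit / 4.0
    print(f"component {i+1}: I_S={structural:.2f}  I_B={birnbaum:.3f}  I_CR={criticality:.3f}")
```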

2.
This paper proposes an economic model for the selection of time-varying control chart parameters for monitoring on-line the mean and variance of a normally distributed quality characteristic. The process is subject to two independent assignable causes. One cause changes the process mean and the other changes the process variance. The occurrence times of these assignable causes are described by Weibull distributions having increasing failure rates. The paper combines two existing models: (I) the model of Ohta and Rahim (IIE Transactions 1997; 29:481–486) for a dynamic economic design of $\overline{X}$ control charts, where a single assignable cause occurs according to a Weibull distribution and all design parameters are time varying; (II) the model of Costa and Rahim (QRE International 2000; 16:143–156) for the joint economic design of $\overline{X}$ and R control charts, where two assignable causes occur independently according to Weibull distributions, with variable sampling intervals. The advantages of the proposed model over traditional $\overline{X}$ and R control charts with fixed parameters are presented. Copyright © 2002 John Wiley & Sons, Ltd.

3.
Two problems which are of great interest in relation to software reliability are the prediction of future times to failure and the calculation of the optimal release time. An important assumption in software reliability analysis is that the reliability grows whenever bugs are found and removed. In this paper we present a model for software reliability analysis using the Bayesian statistical approach in order to incorporate into the analysis prior assumptions such as the (decreasing) ordering of the assumed constant failure rates over prescribed intervals. As the prior model we use a product of gamma distributions, one for each pair of successive interval failure rates, with the failure rate of the following interval taken as the location parameter for the first. In this way we include the failure rate ordering information. Applying this approach sequentially, we predict the time to the next failure using the information obtained previously. Using the relevant predictive distributions, we also calculate the optimal release time under two different requirements of interest: (a) the probability of an in-service failure within a prescribed time t; (b) the cost associated with one or more failures within a prescribed time t. Finally, a numerical example is presented. Copyright © 2000 John Wiley & Sons, Ltd.
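As a loose illustration of the Bayesian machinery behind such models (and not the paper's ordered product-of-gammas prior), the sketch below shows the conjugate building block for a single interval with a constant failure rate: a Gamma(a, b) prior updated by n failures observed over total exposure T, followed by a predictive draw of the next time to failure. All numbers are hypothetical.

```python
# Conjugate gamma-exponential update for one interval's constant failure rate.
import random

a, b = 2.0, 100.0          # hypothetical prior: mean failure rate a/b = 0.02 per hour
n, T = 3, 400.0            # observed: 3 failures in 400 hours of testing

a_post, b_post = a + n, b + T                 # posterior is Gamma(a + n, b + T)
print("posterior mean failure rate:", a_post / b_post)

# predictive draw of the next time to failure: lambda ~ Gamma, then t ~ Exp(lambda)
draws = [random.expovariate(random.gammavariate(a_post, 1.0 / b_post))
         for _ in range(10000)]
draws.sort()
print("predictive median time to next failure:", draws[len(draws) // 2])
```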

4.
A limitation of the importance measures (IMs) currently used in reliability and risk analyses is that they rank only individual components or basic events, whereas they are not directly applicable to combinations or groups of components or basic events. To partially overcome this limitation, the differential importance measure (DIM) has recently been introduced for use in risk-informed decision making. The DIM is a first-order sensitivity measure that ranks the parameters of the risk model according to the fraction of the total change in the risk that is due to a small change in the parameters' values, taken one at a time. However, it does not account for the effects of interactions among components. In this paper, a second-order extension of the DIM, named DIMII, is proposed to account for the interactions of pairs of components when evaluating the change in system performance due to changes in the reliability parameters of the components. A numerical application is presented in which the informative contents of DIM and DIMII are compared. The results confirm that in certain cases, when second-order interactions among components are accounted for, the importance ranking of the components may differ from that produced by a first-order sensitivity measure.
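A minimal numerical sketch of the first-order DIM is shown below, under the uniform-change criterion so that DIM_i reduces to the fraction of the total gradient attributable to parameter i; the risk function and parameter values are made up, and DIM-II, which the paper introduces, would additionally involve the mixed second derivatives of parameter pairs.

```python
# First-order DIM by finite differences on a made-up risk metric (not the paper's code).
def risk(x):
    # hypothetical risk model: unavailability of a small series-parallel system
    q1, q2, q3 = x
    return 1 - (1 - q1) * (1 - q2 * q3)

x0 = [0.01, 0.05, 0.05]          # made-up basic-event probabilities
h = 1e-6

grads = []
for i in range(3):
    xp = list(x0)
    xp[i] += h
    grads.append((risk(xp) - risk(x0)) / h)

total = sum(grads)
for i, g in enumerate(grads):
    print(f"DIM_{i+1} = {g / total:.3f}")   # the DIM values sum to 1 by construction
```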

5.
As an application of the Internet of Things, smart home systems have received significant attention in recent years due to their unprecedented advantages, e.g., in ensuring efficient electricity transmission and integration with renewable energy. This paper proposes a hierarchical and combinatorial methodology for modeling and evaluating the reliability of a smart home system. Particularly, the proposed methodology encompasses a multi-valued decision diagram-based method for addressing phased-mission, standby sparing, and functional dependence behaviors in the physical layer, and a combinatorial procedure based on the total probability theorem for addressing probabilistic competing failure behavior with random propagation time in the communication layer. The methods are applicable to arbitrary types of time-to-failure and time-to-propagation distributions for system components. A detailed case study of an example smart home system is performed to demonstrate applications of the proposed method and the effects of different component parameters on the system reliability.

6.
Simple series systems of identical components with spare parts are considered. It is shown that the cumulative distribution function of the system failure time tends to a step function as the number of components increases and resources can be shared. An example of 'continuous resources' is also described. The time-sharing strategy for standby systems is investigated. It is proved that an optimal rule for a system of standby components with increasing failure rates is a single switching performed at a = t/2, where t is the mission time.

7.
Phased missions consist of consecutive operational phases where the system logic and failure parameters can change between phases. A component can have different roles in different phases, and the reliability function may have discontinuities at phase boundaries. An earlier method required NOT-gates and negations of events when calculating importance measures for such missions with non-repairable components. This paper suggests an exact method that uses standard fault tree techniques and Boolean algebra without any NOT-gates or negations. The criticalities and other importance measures can be obtained for events and components relevant to a single phase, to a transition between phases, or over the whole mission. The method and importance measures are extended to phased missions with repairable components. Quantification of the reliability, the availability, the failure intensity and the total number of failures is described. New importance indicators defined for repairable systems measure component contributions to the total integrated unavailability, to the mission failure intensity and to the total number of mission failures.

8.
An energy density zone (EDZ) model is developed for the prediction of fatigue life. Microscopic effects can be incorporated in the EDZ model. Three scale transitional functions in the model are utilized to describe the trans-scale behaviours of fatigue failure from the micro-scale to the macro-scale. The fatigue failure behaviours of a low-alloy, ultra-high-strength steel (40CrNi2Si2MoVA steel) are investigated. Two fatigue parameters in the model are determined from the experimental S-N curves for the smooth cylindrical specimens (stress concentration factor, SCF, Kt = 1). Fatigue lives of notched specimens with SCFs Kt = 2 and Kt = 3 are then predicted by the proposed model. The predicted S-N curves are satisfactory in comparison with the experimental results. The scatter of the fatigue test data can be depicted when the microscopic effects are considered. The influences of microscopic effects on the fatigue behaviours are explored by means of numerical simulations.

9.
Risk adjustment, which is used when healthcare outcomes are monitored, involves taking into account measures of the patient condition and how these measures are related to the outcomes. When the outcome is dichotomous, such as survival/death, the modeling involves logistic regression to assess the relationship between the predictor(s) and the outcome. Most risk-adjusted control charts are designed to detect a change in the log-odds of the adverse outcome, but there are a number of possible changes that could occur. For example, there could be an increase in the probability of adverse outcomes for low-risk patients with no change for high-risk patients. We address the problem of risk-adjusted monitoring as a change-point problem with several possible change-point models. For p risk variables, there are 2^(p+1) possible change-point models, because each of the slope parameters or the intercept in the logistic regression model can change. Our approach generalizes previous risk-adjusted charts in that we look for changes in any of the parameters. We take a Bayesian approach and find the posterior distribution for the model (i.e., which coefficients changed), the time of the change, and the values of the parameters for those that changed. All three tasks are accomplished in the context of a single model. We apply reversible jump MCMC to account for the variable size of the parameter space. Copyright © 2016 John Wiley & Sons, Ltd.
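A hedged sketch of the underlying risk-adjustment model (hypothetical coefficients and risk scores, not the paper's fitted values or its RJMCMC sampler): the logistic model maps a patient risk score to the probability of the adverse outcome, and a change in the intercept affects low- and high-risk patients differently from a change in a slope, which is why the monitoring scheme has to consider every combination of changed coefficients.

```python
# Risk-adjusted outcome probabilities under intercept vs slope shifts (illustrative only).
import math

def p_adverse(score, beta0, beta1):
    """Probability of the adverse outcome for a patient with the given risk score."""
    logit = beta0 + beta1 * score
    return 1.0 / (1.0 + math.exp(-logit))

beta0, beta1 = -3.68, 0.077          # hypothetical in-control parameters

for score in (0, 30):                # a low-risk and a high-risk patient
    base        = p_adverse(score, beta0, beta1)
    int_shift   = p_adverse(score, beta0 + 0.5, beta1)     # intercept changed
    slope_shift = p_adverse(score, beta0, beta1 + 0.02)    # slope changed
    print(score, round(base, 3), round(int_shift, 3), round(slope_shift, 3))
```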

10.
The aim of this paper is to investigate degradation modeling and reliability assessment for products under irregular time-varying stresses. Conventional degradation models have been extensively used in the relevant literature to characterize degradation processes under deterministic stresses. However, time-varying stresses, which may affect degradation processes, widely exist in field conditions. This paper extends the general degradation-path model by considering the effects of time-varying stresses. The new degradation-path model captures the influences of varying stresses on performance characteristics. A nonlinear least squares method is used to estimate the unknown parameters of the proposed model. A bootstrap algorithm is adopted for computing the confidence intervals of the mean time to failure and percentiles of the failure-time distribution. Finally, a case study of lithium-ion cells is presented to validate the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
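The estimation step described here can be illustrated with a generic degradation path; the model form, data and threshold below are made up and are not the paper's lithium-ion model. A residual or case bootstrap around the same fit would give the confidence intervals mentioned in the abstract.

```python
# Nonlinear least squares fit of a hypothetical degradation path y(t) = a * t**b.
import numpy as np
from scipy.optimize import curve_fit

def path(t, a, b):
    return a * np.power(t, b)

t = np.array([50, 100, 200, 400, 800.0])          # cycles
y = np.array([0.8, 1.3, 2.1, 3.2, 5.1])           # made-up capacity fade (%)

(a_hat, b_hat), _ = curve_fit(path, t, y, p0=(0.1, 0.5))
threshold = 20.0                                   # failure threshold (%)
ttf = (threshold / a_hat) ** (1.0 / b_hat)         # invert y(t) = threshold
print(f"a={a_hat:.3f}, b={b_hat:.3f}, pseudo time-to-failure = {ttf:.0f} cycles")
```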

11.
This paper presents a design-stage method for assessing the performance reliability of systems with multiple time-variant responses due to component degradation. Herein the system component degradation profiles over time are assumed to be known, and the degradation of the system is related to component degradation using mechanistic models. Selected performance measures (e.g. responses) are related to their critical levels by time-dependent limit-state functions. System failure is defined as the non-conformance of any response, and unions of the multiple failure regions are required. For discrete time, set theory establishes the minimum union size needed to identify a true incremental failure region. A cumulative failure distribution function is built by summing incremental failure probabilities. A practical implementation of the theory is achieved by approximating the probability of the unions by second-order bounds. Further, for numerical efficiency, probabilities are evaluated by first-order reliability methods (FORM). The presented method is quite different from Monte Carlo sampling methods. The proposed method can be used to assess mean and tolerance design through the simultaneous evaluation of quality and performance reliability. The work herein sets the foundation for an optimization method to control both quality and performance reliability and thus, for example, estimate warranty costs and product recall. An example from power engineering shows the details of the proposed method and the potential of the approach. Copyright © 2006 John Wiley & Sons, Ltd.
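The second-order bounds mentioned above can be illustrated with the classical Ditlevsen-type expressions, assuming the individual and pairwise failure probabilities (here made up) have already been obtained, e.g. by FORM; this is a generic sketch, not the paper's formulation.

```python
# Second-order bounds on the probability of a union of failure events.
#   lower: P_1 + sum_i max(0, P_i - sum_{j<i} P_ij)
#   upper: sum_i P_i - sum_i max_{j<i} P_ij
P  = [0.010, 0.008, 0.006]                             # P(F_i), hypothetical
P2 = {(0, 1): 0.002, (0, 2): 0.001, (1, 2): 0.001}     # P(F_i and F_j), hypothetical

lower = P[0] + sum(max(0.0, P[i] - sum(P2[(j, i)] for j in range(i)))
                   for i in range(1, len(P)))
upper = sum(P) - sum(max(P2[(j, i)] for j in range(i))
                     for i in range(1, len(P)))
print(f"{lower:.4f} <= P(union of failure regions) <= {upper:.4f}")
```

The tightness of these bounds depends on the ordering of the events, so in practice the events are usually sorted by decreasing individual probability before the bounds are evaluated.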

12.
In this paper, unconditionally stable higher-order accurate time step integration algorithms suitable for linear first-order differential equations, based on the weighted residual method, are presented. Instead of specifying the weighting functions, weighting parameters are used to control the algorithm characteristics. If the numerical solution is approximated by a polynomial of degree n, the approximation is at least nth-order accurate. By choosing the weighting parameters carefully, the order of accuracy can be improved. The generalized Padé approximations with polynomials of degree n as the numerator and denominator are considered. The weighting parameters are chosen to reproduce the generalized Padé approximations. Once the weighting parameters are known, any set of linearly independent basis functions can be used to construct the corresponding weighting functions. The stabilizing weighting functions for the weighted residual method are then found explicitly. The accuracy of the particular solution due to excitation is also considered. It is shown that additional weighting parameters may be required to maintain the overall accuracy. The corresponding equations are listed and the additional weighting parameters are solved for explicitly. However, it is found that some weighting functions could satisfy the listed equations automatically. Copyright © 1999 John Wiley & Sons, Ltd.
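As a hedged, simplest-case illustration of the Padé idea (not the paper's general nth-order weighted-residual family): the (1,1) Padé approximant of exp(λh) is the trapezoidal-rule amplification factor, which is second-order accurate and unconditionally stable for Re(λ) ≤ 0.

```python
# One-step error of the (1,1) Pade amplification factor for y' = lam * y.
import cmath

lam = -2.0 + 1.0j                          # test eigenvalue, Re(lam) < 0
for h in (0.2, 0.1, 0.05):
    exact  = cmath.exp(lam * h)
    pade11 = (1 + lam * h / 2) / (1 - lam * h / 2)
    print(h, abs(exact - pade11))          # error shrinks by about 8x per halving (O(h^3) per step)
```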

13.
This paper develops measures that identify the contribution to system failure when the system operates a phased mission. The measures developed are the equivalents of Birnbaum's measure of importance and the criticality measure of importance in a conventional analysis. It is assumed that during the mission the system components cannot be repaired. In determining the importance measures, the contribution to phase failure is considered in two aspects: failure during the phase (in-phase importance) and failure on transition to a phase (transition importance). The component importance measures indicate the contribution to phase and overall mission unreliability.

14.
Short-run production is common in manufacturing environments like job shops, which are characterized by a high degree of flexibility and production variety. Owing to the limited number of possible inspections during a short run, the Phase I control chart often cannot be performed, and correct estimates of the population mean and standard deviation are not available. Thus, the hypothesis of known in-control population parameters cannot be assumed, and the usual control chart statistics for monitoring the sample mean are not applicable. t-charts have recently been proposed in the literature to protect against errors in the estimation of the population standard deviation due to the limited number of available sampling measures. In this paper the t-charts are tested for implementation in short production runs to monitor the process mean, and their statistical properties are evaluated. Statistical performance measures properly designed to test chart sensitivity during short runs have been considered to compare the performance of Shewhart and EWMA t-charts. Two initial setup conditions for the short run have been modelled: the population mean fixed exactly at the process target or, alternatively, an initial setup error that influences the distribution of the statistic. The numerical study considers several out-of-control process operating conditions, including one-step shifts in the population mean and/or standard deviation. The obtained results show that the t-charts can be successfully implemented to monitor a short run. Finally, an illustrative example is presented to show the use of the investigated t-charts. Copyright © 2010 John Wiley & Sons, Ltd.
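A minimal sketch of the Shewhart t-chart statistic for a short run (illustrative target, subgroup size and data; not the paper's EWMA design or its performance study): each subgroup supplies its own standard deviation, and the plotted statistic follows a t distribution with n - 1 degrees of freedom when the process is on target.

```python
# Shewhart t-chart statistic and control limits for a short run (illustrative data).
import numpy as np
from scipy import stats

mu0, n, alpha = 10.0, 5, 0.0027             # target, subgroup size, false-alarm rate
limit = stats.t.ppf(1 - alpha / 2, df=n - 1)

rng = np.random.default_rng(1)
for i in range(5):                           # a few hypothetical subgroups
    x = rng.normal(10.2, 1.0, size=n)        # small mean shift, for illustration
    t_i = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
    flag = "  <-- signal" if abs(t_i) > limit else ""
    print(f"subgroup {i+1}: T = {t_i:+.2f}{flag}")
```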

15.
Reliability Model for Electronic Devices under Time Varying Voltage
Present reliability models, which estimate the lifetime of electronic devices, work under the assumption that the voltage level is constant when accelerated life testing is performed. Nevertheless, in a real operational environment, electronic devices are subjected to electrical variations present in the power lines; that is, the voltage has a time-varying behavior, which breaks the assumption of these reliability models. Thus, in this paper a reliability model is presented that describes the lifetime of electronic devices under time-varying voltage via a parametric function. The model is based on the Cumulative Damage Model with random failure rate and the modified Inverse Power Law. To estimate the parameters of the proposed model, the maximum likelihood method was employed. A case study based on the time-varying voltage induced by electrical harmonics when an Alternating Current/Direct Current (AC/DC) transformer is connected to the power line is presented. Copyright © 2015 John Wiley & Sons, Ltd.
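A generic sketch of the ingredients named in the abstract, not the paper's fitted model: a constant-voltage life given by an inverse power law, combined with a Miner-type damage accumulation over a time-varying voltage profile. The constants and the voltage waveform below are made up.

```python
# Inverse power law life plus cumulative damage under a time-varying voltage (illustrative).
# Life at constant voltage: L(V) = A * V**(-n); damage rate is 1 / L(V(t)).
import math

A, n = 3.0e13, 4.0                        # hypothetical inverse-power-law constants

def voltage(t_hours):
    # mains voltage with a slow harmonic-like ripple, purely illustrative
    return 230.0 * (1.0 + 0.05 * math.sin(2 * math.pi * t_hours / 24.0))

dt, t, damage = 1.0, 0.0, 0.0             # 1-hour steps
while damage < 1.0:                       # failure declared when damage reaches 1
    damage += dt * voltage(t) ** n / A
    t += dt
print(f"predicted life = {t:.0f} h (vs {A * 230.0 ** (-n):.0f} h at a constant 230 V)")
```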

16.
This paper presents equations for estimating the crack tip characterizing parameters C(t) and J(t), for general elastic-plastic-creep conditions where the power-law creep and plasticity stress exponents differ, by modifying the plasticity correction term in published equations. The plasticity correction term in the newly proposed equations is given in terms of the initial elastic-plastic and steady-state creep stress fields. The predicted C(t) and J(t) results are validated by comparison with systematic elastic-plastic-creep FE results. Good agreement with the FE results is found.

17.
Creep-fatigue crack growth (C-FCG) rates in a P91 steel at 625°C were correlated as the average time rate of crack growth during hold time, (da/dt)_avg, with (C_t)_avg. At 60-second hold time, the rates were lower than for 600-second hold time. At 600-second hold time, the crack growth rates converged onto the creep crack growth rate (CCGR) trend. Thus, the CCGR trend represents the upper bound for time-dependent crack growth rates in P91 materials. The analytical expressions based on considering just the elastic and secondary creep deformation rates overestimated the magnitudes of (C_t)_avg by as much as a factor of 10 for the 600-second hold time tests. After accounting for the effects of cyclic plasticity during unloading, and accounting for only partial reversal of creep strains accumulated during hold time, the estimates of (C_t)_avg compared well with the measured values. C_R represents the extent of crack tip creep strain reversal, and t_pl is the time required for the crack tip creep zone during the hold time to become equivalent in size to the cyclic plastic zone in terms of the stress carried by that region. Together, these parameters accurately account for the effects of crack tip cyclic plasticity on the magnitude of (C_t)_avg. Both t_pl and C_R depend on material properties, and the latter also depends on the hold time. A parameter ? is introduced that depends only on material properties and from which C_R can be estimated for a given hold time. t_pl and ? can be reported as part of the test results from C-FCG testing.

18.
In this paper, uniaxial compression tests are conducted on fissured red sandstone specimens to predict fracture damage (large-scale events). Acoustic emission (AE) and digital image correlation (DIC) technologies are used to monitor and record the real-time cracking process of the tested specimens. The AE characteristics are analysed during the cracking process. Moreover, three types of b-value methods based on the AE parameters are adopted to predict the occurrence of large-scale events (macro-cracking). The results show that every macro-cracking event leads to a rapid decrease in all three types of b-value. When the fissured specimens reach ultimate failure, all three types of b-value reach their minimum. The b-value based on the AE parameters can therefore be used as a predictor of large-scale events during the cracking process of fissured rocks.
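One common way to compute a b-value from AE data is the maximum-likelihood (Aki) estimator applied to amplitudes converted to magnitudes as M = A_dB/20; the sketch below uses made-up hits and may differ from the three b-value variants compared in the paper.

```python
# Maximum-likelihood (Aki) b-value from AE hit amplitudes (illustrative data).
import math

amplitudes_db = [42, 45, 48, 51, 55, 60, 63, 70, 75, 82]   # made-up AE hits (dB)
magnitudes = [a / 20.0 for a in amplitudes_db]              # AE "magnitude" M = A_dB / 20
m_min = min(magnitudes)                                     # assumed completeness threshold

b = math.log10(math.e) / (sum(magnitudes) / len(magnitudes) - m_min)
print(f"b-value = {b:.2f}")   # a rapid drop of b over successive windows flags macro-cracking
```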

19.
Variable-stress accelerated life testing trials are experiments in which each of the units in a random sample of units of a product is run under increasingly severe conditions to get information quickly on its life distribution. We consider a fatigue failure model in which accumulated decay is governed by a continuous Gaussian process W(y) whose distribution changes at certain stress change points t_0 < t_1 < ... < t_k. Continuously increasing stress is also considered. Failure occurs the first time W(y) crosses a critical boundary ω. The distribution of time to failure for these models can be represented in terms of time-transformed inverse Gaussian distribution functions, and the parameters in models for experiments with censored data can be estimated using maximum likelihood methods. A common approach to the modeling of failure times for experimental units subject to increased stress at certain stress change points is to assume that the failure times follow a distribution that consists of segments of Weibull distributions with the same shape parameter. Our Wiener-process approach gives an alternative, flexible class of time-transformed inverse Gaussian models in which time to failure is modeled in terms of accumulated decay reaching a critical level and in which parametric functions are used to express how higher stresses accelerate the rate of decay and the time to failure. Key parameters such as mean life under normal stress, quantiles of the normal-stress distribution, and decay rate under normal and accelerated stress appear naturally in the model. A variety of possible parameterizations of the decay rate leads to flexible modeling. Model fit can be checked by percentage-percentage plots.
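The first-passage connection used in this class of models can be sketched for the simplest constant-stress case (hypothetical drift, diffusion and boundary, not the paper's stress-change parameterization): the time for a Wiener decay process with drift ν and diffusion σ to first reach the critical level ω is inverse Gaussian with mean ω/ν and shape ω²/σ².

```python
# First-passage time of a Wiener decay process as an inverse Gaussian distribution.
# IG(mean=mu, shape=lam) corresponds to scipy's invgauss(mu/lam, scale=lam).
from scipy.stats import invgauss

omega, nu, sigma = 10.0, 0.5, 1.0            # hypothetical boundary, drift, diffusion
mean, lam = omega / nu, omega**2 / sigma**2

dist = invgauss(mean / lam, scale=lam)
print("P(failure before t = 15):", dist.cdf(15.0))
print("median time to failure:", dist.median())
```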

20.
The discrete element method, developed by Cundall and Strack, typically uses some variation of the central difference numerical integration scheme. However, like all explicit schemes, the scheme is only conditionally stable, with the stability determined by the size of the time-step. The current methods for estimating appropriate discrete element method time-steps are based on many assumptions; therefore, large factors of safety are usually applied to the time-step to ensure stability, which substantially increases the computational cost of a simulation. This work introduces a general framework for estimating critical time-steps for any planar rigid body subject to linear damping and forcing. A numerical investigation of how system damping, coupled with non-collinear impact, affects the critical time-step is also presented. It is shown that the critical time-step is proportional to √(m/k) if a linear contact model is adopted, where m and k represent mass and stiffness, respectively. The term which multiplies this factor is a function of known physical parameters of the system. The stability of a system is independent of the initial conditions. © 2016 The Authors. International Journal for Numerical Methods in Engineering Published by John Wiley & Sons Ltd.
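A quick numerical check of the stated scaling, using the textbook single-degree-of-freedom estimate rather than the paper's general planar-rigid-body framework: for an undamped linear contact the central-difference critical step is 2√(m/k), and viscous damping with ratio ξ shrinks it by the factor √(1 + ξ²) - ξ. The mass and stiffness below are hypothetical.

```python
# Central-difference critical time-step for a single DOF with a linear contact.
import math

m, k = 1.0e-3, 1.0e6          # hypothetical particle mass (kg) and contact stiffness (N/m)
omega = math.sqrt(k / m)      # natural frequency of the contact oscillator

for xi in (0.0, 0.05, 0.2):   # damping ratios
    dt_crit = (2.0 / omega) * (math.sqrt(1.0 + xi**2) - xi)
    print(f"xi = {xi:.2f}: dt_crit = {dt_crit:.2e} s")
```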
