Similar Documents
20 similar documents found (search time: 666 ms)
1.
Reliability assessments of repairable (electronic) equipment are often based on failure data recorded under field conditions. The main objective of the analyses is to provide information that can be used to improve reliability through design changes. For this purpose it is of particular interest to be able to locate ‘trouble-makers’, i.e. components that are particularly likely to fail. In the present context, reliability is measured in terms of the mean cumulative number of failures as a function of time. This function may be considered for the system as a whole, or for stratified data. The stratification is obtained by sorting data according to different factors, such as component positions, production series, etc. The mean cumulative number of failures can then be estimated either nonparametrically, as an average of the observed failures, or parametrically, if a certain model for the lifetimes of the components involved is assumed. As an example we consider a simple component lifetime model based on the assumption that components are ‘drawn’ randomly from a heterogeneous population, in which a small proportion of the components are weak (with a small mean lifetime) and the remainder are standard components (with a large mean lifetime). This model permits an analytical expression for the mean cumulative number of failures. In both the nonparametric and the parametric case the uncertainty of the estimation may be assessed by computing a confidence interval for the estimated values (a confidence band for the estimated time functions). The determination of confidence bands provides a basis for assessing the significance of the factors underlying the stratification. The methods are illustrated through an industrial case study using field failure data.
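The nonparametric estimate of the mean cumulative number of failures can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function name `mcf_estimate` and the toy field data are hypothetical, and all systems are assumed to be observed over the same time window:

```python
def mcf_estimate(failures_per_system, grid):
    """Nonparametric estimate of the mean cumulative number of failures:
    at each grid time t, average the observed failure counts over systems.
    Assumes every system is observed over the whole time window."""
    n = len(failures_per_system)
    return [sum(1 for times in failures_per_system
                  for ft in times if ft <= t) / n
            for t in grid]

# hypothetical field data: recorded failure times (hours) for three systems
data = [[100, 400], [250], [150, 300, 450]]
print(mcf_estimate(data, [200, 500]))
```

A pointwise confidence band could then be attached to these averages, e.g. via a normal approximation to the across-system variance, which is the basis for judging the significance of a stratification factor.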

2.
A general discrete-time, stochastic, dynamic-programming model of opportunistic corrective and preventive replacement policy is presented for multi-component systems with a different life distribution for each component. An approach to optimize the corrective and preventive maintenance policy is proposed, which is based on the introduction of the ‘oldest age’ concept and an ‘array dimensionality reduction technique’. The application of this policy to ball-bearing systems has resulted in considerable savings, as this paper demonstrates.

3.
P. Mertiny, F. Ellyin. Composites Part A 2002;33(12):1615–1622
In this experimental investigation, the influence of the tow tension applied during filament winding on the physical and mechanical properties of glass-fibre reinforced polymeric composite tubulars was studied. Pressure-retaining tubular products used in the transportation/storage of fluids are generally subjected to a variety of loading conditions during their service life; thus tubular specimens were tested under different biaxial loading ratios. The stress/strain response was recorded, and functional and structural failure envelopes were developed. These envelopes indicate the leakage and final failure characteristics of the components, respectively. The mechanical properties were analysed in conjunction with the measured physical properties: ‘fibre volume fraction’ and ‘effective wall thickness’. Experimental findings demonstrate that the component strength depends on the degree of fibre tensioning. Under fibre-dominated loading conditions, higher winding tension leads to an improved resistance against failure of tubular components, whereas under matrix-dominated loading, failure is delayed by reduced fibre tensioning.

4.
A basic concept in this paper is that a group of components under study represents a sample from a common source (e.g. a factory) that has a failure rate distribution, with distribution parameters originally unknown. This concept is used to obtain generalized James-Stein estimators for individual component parameters. Such ‘shrinkage estimators’ are more accurate than conventional point estimates. They are developed here for component failure rate parameters based on Poisson data, and for failure probabilities based on binomial data. Optimal weights and shrinking factors are obtained, as well as formulae for estimating the mean value and the variance of the source population, and the mean values and the variances of individual component estimates. Close connections to empirical Bayes estimation are shown.
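The shrinkage idea can be illustrated with a toy sketch: raw Poisson rates x_i/T_i are pulled toward the pooled rate of the source population. The fixed weight below is an assumption for illustration only; the paper derives optimal, data-driven weights and the accompanying variance formulae:

```python
def shrink_rates(counts, exposures, weight):
    """Illustrative shrinkage estimator: pull each raw failure rate
    x_i / T_i toward the pooled (source-population) rate.  A fixed
    weight is used here purely for illustration; the optimal weight
    would be estimated from the data."""
    pooled = sum(counts) / sum(exposures)
    return [weight * pooled + (1 - weight) * x / t
            for x, t in zip(counts, exposures)]

# hypothetical Poisson data: failures observed over 100 h per component
counts, exposures = [0, 3, 9], [100.0, 100.0, 100.0]
raw = [x / t for x, t in zip(counts, exposures)]   # [0.0, 0.03, 0.09]
est = shrink_rates(counts, exposures, weight=0.5)  # pulled toward 0.04
```

Note how the component with zero observed failures receives a non-zero estimate: the pooled information prevents the implausible point estimate of zero.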

5.
The constrained optimization of resource allocation to minimize the probability of failure of an engineered system relies on a probabilistic risk analysis of that system, and on ‘risk/cost functions’. These functions describe, for each possible improvement of the system's robustness, the corresponding gain of reliability given the considered component or management factor to be upgraded. These improvements can include, for example, the choice of components of different robustness levels (at different costs), addition of redundancies, or changes in operating and maintenance procedures. The optimization model is generally constrained by a maximum budget, a schedule deadline, or a maximum number of qualified personnel. A key question is thus the nature of the risk/cost function linking the costs involved and the corresponding failure-risk reduction. Most of the methods proposed in the past have relied on continuous, convex risk/cost functions reflecting decreasing marginal returns. In reality, the risk/cost functions can be simple step functions (e.g. a discrete choice among possible components), discontinuous functions characterized by continuous segments between points of discontinuity (e.g. a discrete choice among components that can be of continuously increasing levels of robustness), or continuous functions (e.g. exponentially decreasing failure risk with added resources). This paper describes a general method for the optimization of the robustness of a complex engineered system in which all three risk/cost function types may be relevant. We illustrate the method with a satellite design problem. We conclude with a discussion of the complexity of the resolution of this general type of optimization problem given the number and the types of variables involved.
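For the step-function (discrete-choice) case mentioned above, the constrained optimization reduces to a combinatorial search. The sketch below is a hypothetical brute-force illustration for a small series system under a budget constraint, not the paper's method, which also handles discontinuous and continuous risk/cost functions:

```python
from itertools import product

def best_allocation(options, budget):
    """Brute-force search over discrete component choices (a pure
    step-function risk/cost case).  Each subsystem offers a list of
    (cost, failure_probability) options; minimize the failure
    probability of a series system subject to a total budget."""
    best = None
    for choice in product(*options):
        cost = sum(c for c, _ in choice)
        if cost > budget:
            continue  # budget constraint violated
        p_ok = 1.0
        for _, pf in choice:
            p_ok *= 1.0 - pf  # series system: every subsystem must survive
        candidate = (1.0 - p_ok, choice, cost)
        if best is None or candidate[0] < best[0]:
            best = candidate
    return best

# two subsystems, each with a cheap/fragile and a costly/robust option;
# with budget 4 only one subsystem can be upgraded
options = [[(1, 0.10), (3, 0.01)], [(1, 0.10), (3, 0.01)]]
pf_sys, choice, cost = best_allocation(options, budget=4)
```

Exhaustive enumeration is only viable for small systems; the interest of the paper's general method is precisely handling larger mixed discrete/continuous problems.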

6.
Selection of proper materials for different components is one of the most challenging tasks in the design and development of products for diverse engineering applications. Materials play a crucial role during the entire design and manufacturing process. Wrong selection of material often leads to huge cost and ultimately drives towards premature component or product failure. So the designers need to identify and select proper materials with specific functionalities in order to obtain the desired output with minimum cost and specific applicability. This paper attempts to solve the materials selection problem using two widely used multi-criteria decision-making (MCDM) approaches and compares their relative performance for a given material selection application. The first MCDM approach is ‘Vlse Kriterijumska Optimizacija Kompromisno Resenje’ (VIKOR), a compromise ranking method, and the other is ‘ELimination Et Choix Traduisant la REalité’ (ELECTRE), an outranking method. These two methods are used to rank the alternative materials, for which several requirements are considered simultaneously. Two examples are cited in order to demonstrate and validate the effectiveness and flexibility of these two MCDM approaches. In each example, a list of all the possible choices from the best to the worst suitable materials is obtained, taking into account different material selection criteria. The rankings of the selected materials largely agree with those obtained by past researchers.
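A compact sketch of the VIKOR ranking step may make the comparison concrete. This is an illustrative implementation, assuming benefit criteria only (higher raw scores are better) and the conventional compromise weight v = 0.5; the decision matrix is hypothetical:

```python
def vikor(matrix, weights, v=0.5):
    """Minimal VIKOR sketch for benefit criteria only (higher is better).
    Returns the compromise index Q for each alternative (lower = better).
    Assumes the alternatives are not all tied in S and R (otherwise the
    normalizing denominators below would be zero)."""
    cols = list(zip(*matrix))
    best = [max(c) for c in cols]    # ideal value per criterion
    worst = [min(c) for c in cols]   # anti-ideal value per criterion
    S, R = [], []
    for row in matrix:
        terms = [w * (b - x) / (b - wst)
                 for x, w, b, wst in zip(row, weights, best, worst)]
        S.append(sum(terms))   # group utility
        R.append(max(terms))   # individual regret
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    return [v * (s - s_star) / (s_minus - s_star)
            + (1 - v) * (r - r_star) / (r_minus - r_star)
            for s, r in zip(S, R)]

# three candidate materials scored on two benefit criteria
q = vikor([[9, 5], [7, 8], [6, 9]], [0.6, 0.4])
# lower Q ranks higher: here the first material is the compromise choice
```

The full VIKOR procedure also checks ‘acceptable advantage’ and ‘acceptable stability’ conditions before declaring a single compromise solution; those checks are omitted here.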

7.
This paper gives a rigorous proof of something that has been assumed intuitively by workers in seismic PRA; namely, that the ‘mean curve’ of a family of lognormal fragility curves is equal to the so-called ‘composite’ curve. This equality is shown to be equivalent to, and to result from, an obscure but interesting identity property of the standard normal probability curve.
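The claimed equality is easy to check numerically. In the usual seismic-PRA parameterization, each curve in the family is Φ((ln(a/A_m) + ε)/β_r) with ε ~ N(0, β_u²), and the composite curve is a single lognormal with β_c = √(β_r² + β_u²). A Monte Carlo sketch (the parameter values are toy assumptions, not from the paper):

```python
import math, random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_fragility(a, a_m, beta_r, beta_u, n=100_000, seed=1):
    """Average the lognormal fragility family over the epistemic
    uncertainty eps ~ N(0, beta_u^2) on the log-median capacity."""
    rng = random.Random(seed)
    x = math.log(a / a_m)
    return sum(phi((x + rng.gauss(0.0, beta_u)) / beta_r)
               for _ in range(n)) / n

def composite(a, a_m, beta_r, beta_u):
    """Single lognormal curve with beta_c = sqrt(beta_r^2 + beta_u^2)."""
    return phi(math.log(a / a_m) / math.hypot(beta_r, beta_u))

# toy parameters: median capacity 0.9 g, demand 0.6 g
print(mean_fragility(0.6, 0.9, 0.3, 0.4), composite(0.6, 0.9, 0.3, 0.4))
```

The two printed values agree to Monte Carlo accuracy, consistent with the identity the paper proves (that averaging a probit over a normal shift is again a probit with inflated dispersion).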

8.
Repairable systems can be brought to one of three possible states following a repair. These states are: ‘as good as new’, ‘as bad as old’ and ‘better than old but worse than new’. The probabilistic models traditionally used to estimate the expected number of failures account for the first two states, but they do not properly apply to the last one, which is the most realistic in practice. In this paper, a probabilistic model that is applicable to all three after-repair states, called the generalized renewal process (GRP), is applied. In brief, GRP addresses the repair assumption by introducing the concept of virtual age into stochastic point processes, enabling them to represent the full spectrum of repair assumptions. The shape of measured or design life distributions of systems can vary considerably, and therefore frequently cannot be approximated by simple distribution functions. The scope of the paper is to prove that a finite Weibull mixture, with positive component weights only, can be used as the underlying distribution of the time to first failure (TTFF) in the GRP model, on condition that the unknown parameters can be estimated. To support the main idea, three examples are presented. In order to estimate the unknown parameters of the GRP model with an m-fold Weibull mixture, the EM algorithm is applied. The GRP model with m mixture components is compared to the standard GRP model based on the two-parameter Weibull distribution by calculating the expected number of failures. It can be concluded that the suggested GRP model with a Weibull mixture with an arbitrary but finite number of components is suitable for predicting failures based on the past performance of the system.
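The virtual-age idea can be sketched with a Kijima Type-I simulation, one common concretization of GRP. The single two-parameter Weibull and the parameter values below are illustrative only; the paper's estimation instead fits an m-fold Weibull mixture via EM:

```python
import math, random

def simulate_grp(beta, eta, q, horizon, seed=0):
    """Kijima Type-I virtual-age simulation of a generalized renewal
    process with a two-parameter Weibull underlying distribution.
    q = 0 -> 'as good as new', q = 1 -> 'as bad as old',
    0 < q < 1 -> 'better than old but worse than new'.
    Returns the number of failures observed up to the horizon."""
    rng = random.Random(seed)
    t, v, failures = 0.0, 0.0, 0
    while True:
        u = 1.0 - rng.random()  # u in (0, 1], avoids log(0)
        # conditional Weibull inter-arrival given survival to virtual age v
        x = eta * ((v / eta) ** beta - math.log(u)) ** (1 / beta) - v
        t += x
        if t > horizon:
            return failures
        failures += 1
        v += q * x  # repair removes only a fraction (1 - q) of the last interval's age

# wear-out system (beta > 1): 'as bad as old' repair accumulates many
# more failures over the same horizon than 'as good as new' repair
print(simulate_grp(2.0, 100.0, 1.0, 1000.0),
      simulate_grp(2.0, 100.0, 0.0, 1000.0))
```

At q = 0 this reduces to an ordinary renewal process and at q = 1 to a power-law NHPP, bracketing the intermediate repair assumptions the paper targets.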

9.
The issue of information loss in modeling system reliability through conventional load–strength interference analysis is discussed first, and it is demonstrated why a dependent-system reliability model cannot be constructed from component reliability indices alone. Then, an approach to modeling the reliability of a dependent system with common cause failure (CCF) is presented. The approach is based on system-level load–strength interference analysis together with the concept of the ‘conditional failure probability of a component’. On the premise that load randomness is the direct cause of failure dependence, a discrete-type system reliability model is developed via the conditional component failure probability concept. Finally, the model's capabilities to estimate system reliability with CCF effects and to predict high-multiplicity failure probabilities from low-multiplicity failure event data are demonstrated.
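The role of load randomness as the source of dependence can be sketched as follows: conditional on a realized common load, components fail independently, and the failure-multiplicity distribution is obtained by averaging the conditional binomial over the load distribution. The normal load and strength choices below are toy assumptions, not the paper's model:

```python
import math, random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def multiplicity_probs(n, load_draw, strength_cdf, trials=50_000, seed=0):
    """P(exactly k of n identical components fail), k = 0..n, when all
    components share one random load (the source of failure dependence).
    Conditional on the load, failures are independent binomial trials."""
    rng = random.Random(seed)
    probs = [0.0] * (n + 1)
    for _ in range(trials):
        p = strength_cdf(load_draw(rng))  # conditional failure probability
        for k in range(n + 1):
            probs[k] += math.comb(n, k) * p**k * (1.0 - p) ** (n - k)
    return [v / trials for v in probs]

# toy model: common load ~ N(90, 20), component strength ~ N(100, 10)
probs = multiplicity_probs(2, lambda r: r.gauss(90.0, 20.0),
                           lambda load: phi((load - 100.0) / 10.0))
p_single = probs[1] / 2.0 + probs[2]  # marginal failure probability of one component
# shared load induces positive dependence: probs[2] > p_single ** 2
```

The gap between `probs[2]` and `p_single ** 2` is exactly the CCF effect that a model built from component reliability indices alone would miss.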

10.
Given the complexity of power grids, the failure of any component may cause large-scale economic losses. Consequently, the quick recovery of power grids after disasters has become a new research direction. Considering the severity of power grid disasters, an improved power grid resilience measure and its corresponding importance measures are proposed. The recovery priority of failed components after a disaster is determined according to the influence of the failed components on the power grid resilience. Finally, based on data from the 2019 Power Yearbook of each city in Shandong Province, China, the power grid resilience after a disaster is analyzed for two situations, namely, failure of some components and failure of all components. Results show that the recovery priorities of components with different importance measures vary. The resilience evaluations under different repair conditions demonstrate the feasibility of the proposed method.

11.
A prototype condition monitoring and diagnostic system has been developed for compression refrigeration plants, which can be used under variable operational conditions. Based on a combination of causal analysis, expert knowledge and simulated failure modes, a failure mode symptom matrix has been created. Healthy system behaviour is predicted based on a regression analysis model. Using multi-valued (or ‘fuzzy’) logic, real-time recognition of failure modes, at an early stage, proved to be possible. Future developments for improvement of diagnostic systems in compression refrigeration plants are discussed.

12.
In this paper, a repairable circular consecutive-k-out-of-n:F system with one repairman is studied. It is assumed that the working time and the repair time of each component are both exponentially distributed and every component after repair is ‘as good as new’. Each component is classified as either a key component or an ordinary component. Key components have priority in repair when failed. By using the definition of generalized transition probability, the state transition probabilities of the system are derived. Important reliability indices are evaluated for an example.
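A brute-force check of the circular consecutive-k-out-of-n:F failure criterion, plus a crude Monte Carlo reliability estimate, can serve as a sanity reference for analytical indices. This sketch ignores repair dynamics entirely (the paper's model includes a repairman and exponential repair times) and treats component states as i.i.d.:

```python
import random

def circular_fails(states, k):
    """True if at least k consecutive components have failed
    (state 0 = failed), with positions arranged in a circle."""
    n = len(states)
    if all(s == 0 for s in states):
        return n >= k          # degenerate case: the whole ring failed
    run = best = 0
    for s in states + states:  # unrolling the circle once handles wrap-around
        run = run + 1 if s == 0 else 0
        best = max(best, run)
    return best >= k

def reliability(n, k, p_work, trials=20_000, seed=0):
    """Crude Monte Carlo estimate of static system reliability with
    i.i.d. component availability p_work (no repair modelled)."""
    rng = random.Random(seed)
    ok = sum(not circular_fails(
                 [1 if rng.random() < p_work else 0 for _ in range(n)], k)
             for _ in range(trials))
    return ok / trials

# circular 2-out-of-5:F sanity checks
assert circular_fails([0, 1, 1, 1, 0], 2)      # positions 5 and 1 are adjacent
assert not circular_fails([0, 1, 1, 0, 1], 2)  # failed components are isolated
print(reliability(5, 2, 0.9))
```

Note the wrap-around: the first and last positions are neighbours, which is exactly what distinguishes the circular system from the linear one.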

13.
System reliability is an important parameter in the operation of modern utility systems, spacecraft and manufacturing facilities. Over the last several decades researchers have used many different methods to determine complex system reliability. This paper uses novel techniques based on artificial intelligence and expert systems to determine system reliability. The work is based on heuristic search and a symbolic logic system which provides a symbolic representation of overall system reliability when there is a single input and a single output. Pivotal decomposition is used recursively to repeatedly reduce complex systems/subsystems to simpler systems. Eventually, these simpler systems are reduced to easily resolved series-parallel arrangements. The symbolic logic system uses a heuristic ‘hill-climbing’ search algorithm. The algorithm identifies the pivotal or complex component. Once this pivotal component is selected, additional procedures symbolically recognize other components within the resulting subgraphs that are made superfluous by the selection of the pivot component. These superfluous components are removed, and the simplification process continues recursively. Once further recursive analysis is not needed, additional rules are employed to reduce the result to a recognizable series-parallel form. Finally, a ‘symbolic processor’ converts this recognized form into a symbolic sum-of-products equation representing the overall system.
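The pivotal-decomposition identity at the heart of this approach, R = p_i · R(system | i works) + (1 − p_i) · R(system | i fails), can be sketched numerically. The plain recursion below enumerates fully; it omits the paper's heuristic pivot selection, superfluous-component elimination, and symbolic output:

```python
def reliability(structure, probs):
    """Pivotal (factoring) decomposition:
      R = p_i * R(system | i works) + (1 - p_i) * R(system | i fails),
    applied recursively until every component state is fixed."""
    n = len(probs)
    def rec(fixed):
        i = len(fixed)  # pivot on the next unfixed component
        if i == n:
            return 1.0 if structure(fixed) else 0.0
        return (probs[i] * rec(fixed + [1])        # component i works
                + (1 - probs[i]) * rec(fixed + [0]))  # component i fails
    return rec([])

# classic 5-component bridge network (components 0..4):
# minimal paths {0,3}, {1,4}, {0,2,4}, {1,2,3}
def bridge(x):
    return ((x[0] and x[3]) or (x[1] and x[4])
            or (x[0] and x[2] and x[4]) or (x[1] and x[2] and x[3]))

print(reliability(bridge, [0.9] * 5))  # matches 2p^2 + 2p^3 - 5p^4 + 2p^5
```

The bridge is the standard example where pivoting on the ‘bridge’ component (index 2) immediately yields two series-parallel subproblems, which is the reduction the paper performs symbolically.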

14.
This paper defines a type of constrained artificial neural network (ANN) that enables analytical certification arguments whilst retaining valuable performance characteristics. Previous work has defined a safety lifecycle for ANNs without detailing a specific neural model. Building on this previous work, the underpinning of the devised model is an existing neuro-fuzzy system called the fuzzy self-organising map (FSOM). The FSOM is a type of ‘hybrid’ ANN which allows behaviour to be described qualitatively and quantitatively using meaningful expressions. Safety of the FSOM is argued through adherence to safety requirements derived from hazard analysis and expressed using safety constraints. The approach enables the construction of compelling (product-based) arguments for mitigation of potential failure modes associated with the FSOM. The constrained FSOM has been termed a ‘safety critical artificial neural network’ (SCANN). The SCANN can be used for non-linear function approximation and allows certified learning and generalisation for high-criticality roles. A discussion of benefits for real-world applications is also presented.

15.
Nanoindentation has been used to study the hardness changes produced by scratching of aluminium alloy AA2024, with and without a clad layer of pure aluminium. The hardness was mapped around scratches made with diamond tools of different profiles. One tool produced significant plastic damage with associated hardening at the scratch root, whilst the other produced a ‘cleaner’ cut with no hardening. The different behaviours are attributed to whether the tool makes the scratch by a ‘cutting’ or a ‘ploughing’ mechanism. The degree of plastic damage around the scratches has been correlated with peak broadening data obtained using synchrotron X-ray diffraction. There was no change observed in the local hardness around the scratch with fatigue loading.

16.
‘Directional simulation in the load space (DS-LS)’ is a simulation-based technique used to perform reliability analysis of structures subjected to time-invariant or time-variant random loads. To perform DS-LS a location must first be chosen for an ‘origin of simulation’. The origin may be positioned in either the safe or failure region of the load space, and its precise location (with respect to these regions) influences the DS-LS formulation needed to evaluate reliability correctly. The current formulation requires the origin to be positioned in the safe region. However, even for simple structures, the ‘exact’ location of the safe and failure region is not always known explicitly ‘a priori’. Modifications to allow for the possibility of positioning the origin not only in the ‘safe’ region but in the ‘failure’ region are proposed in this paper. Some numerical examples involving one or more stationary continuous Gaussian loads and the simulation of directions by ‘Monte Carlo’ and ‘the hyperspace division method’ are presented to demonstrate the validity of the proposed formulations. Some comments on convergence are made.
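The essence of directional simulation can be sketched for the simplest setting: a 2-D standard-normal load space with the origin of simulation at the (safe) mean point, and a linear limit state, for which the conditional failure probability along each sampled direction is available in closed form. Everything below (the limit state, the value β = 2) is an illustrative assumption, not from the paper:

```python
import math, random

def directional_simulation(beta, n_dirs=100_000, seed=0):
    """Directional simulation in a 2-D standard-normal load space for
    the linear limit state g(u) = beta - u1 (failure when u1 > beta).
    Along each sampled direction the conditional failure probability is
    exact: P(chi-squared with 2 dof exceeds r^2) = exp(-r^2 / 2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_dirs):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        c = math.cos(theta)  # direction component along u1
        if c > 0.0:          # the ray crosses the limit surface
            r = beta / c     # radial distance to the surface
            total += math.exp(-r * r / 2.0)
    return total / n_dirs

p_f = directional_simulation(2.0)
exact = 0.5 * math.erfc(2.0 / math.sqrt(2.0))  # Phi(-2)
print(p_f, exact)
```

Averaging an exact conditional probability over random directions, rather than scoring raw hit/miss samples, is what gives directional simulation its variance advantage over crude Monte Carlo.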

17.
In this study, a mathematical model was developed for failure prediction of critical equipment in the petrochemical industry. The model utilizes three principal measurements, namely temperature, vibration and pressure, in a framework for determining expected failure periods for each component of the critical equipment. The model was validated using data collected from the Warri Refinery and Petrochemical Company carbon black plant, Ekpan–Warri, Nigeria. Condition-monitoring data based on the three principal measurements were generated and used to determine expected failure periods for each component of the critical equipment (a single-stage centrifugal compressor) in the plant. It was observed that, for all the equipment under consideration, the blower casing has the longest life expectancy of all components, while the gear and gear bearing have the shortest. Hence, special attention should be focused on monitoring the condition of the gear and its bearing components. It was also observed from the plotted trends that the deterioration rate of all components is affected by the equipment's operating speed and functionality.

18.
ENEL, a vertically integrated utility for generation, transmission and distribution in Italy, has been a worldwide leader since the 1970s in developing ‘home made’ computing programs for the adequacy evaluation of large composite generation and transmission systems, embedded in the majority of studies needed in power system planning and operation.

Adequacy evaluation has a probabilistic content. Two complementary approaches have been adopted at ENEL, a direct probabilistic approach and Monte Carlo simulation, implemented in a number of computing programs tailored to the Italian situation. Naturally, these approaches have been complemented by deterministic evaluations using the well-known computer programs of power system planning and operation.

The recent changes in the electric supply industry, ‘liberalized’ and privatized to different extents in the various countries, have emphasized the importance of reliability evaluation as a key issue for establishing revised/updated adequacy/security standards aimed at the proper allocation of investments. In Italy the major ‘new’ issues are presently the effect/compatibility of non-utility generators with ENEL's mixed hydrothermal generation system, and the increasing ‘size’ of the transmission network, ever more interconnected with other foreign systems, which calls for suitable ‘adequacy-orientated’ equivalents and ‘new’ tools for the evaluation of concepts such as ‘transmission duty pricing’.

The paper describes how these new trends are faced at ENEL, and the work under way in the field of adequacy assessment at ENEL, now a joint stock company which, in addition to the obligation to supply, has new responsibilities towards potential stockholders.

19.
The reliability of structures subjected to multiple time-varying random loads is considered herein. It is well-known that the reliability of such systems may be evaluated by considering outcrossings of the load process vector out of a safe domain, and the contribution of individual loads to structural failure may be evaluated by considering outcrossings caused by combinations of one or more loads. In this paper the ‘Directional Simulation in the Load Space’ approach to reliability analysis is developed to consider explicitly outcrossings caused by all possible combinations of loads, during analysis of systems comprising stationary continuous Gaussian loads. To do this, the direction of the load process vector is ‘fixed’ at each point of outcrossing (to physically represent the particular combination of loads causing the outcrossing), and, by considering each possible load combination, all loads not causing an outcrossing are then held constant during radial integration (to model correctly that they do not contribute to each outcrossing). A numerical example demonstrating the validity of the proposed formulation is presented.

20.
This paper [Levitin G. Reliability and performance analysis for fault-tolerant programs consisting of versions with different characteristics. Reliab Eng Syst Safety 2004;86:75–81] presents a detailed reliability and performance analysis of fault-tolerant programs. Unfortunately, the treatment is dependent upon a key assumption: ‘Failures of versions for each component are statistically independent…’ (Section 2). Such an assumption is not justified.
