Similar Literature
20 similar records found (search time: 62 ms)
1.
Petri nets are a powerful technique widely used in the modeling and analysis of complex manufacturing systems and processes. Because of their capability for modeling system dynamics, Petri nets have been combined with fault tree analysis techniques to determine the average rate of occurrence of system failures. Current methods that combine Petri nets with fault trees compute this rate by tracking the markings of the Petri net models. Their limitation is that tracking the markings of a Petri net represented by a reachability tree becomes very complicated as the system grows, so these methods offer little flexibility in analyzing sequential failures. To overcome these limitations, this paper expands and extends the concept of counters used in Petri net simulation to perform failure and reliability analysis of complex systems. The presented method allows system failures to be modeled using general Petri nets with inhibitor arcs and loops; it employs fewer variables than existing marking-based methods and substantially accelerates the computations. It can be applied to real system failure analysis where basic events have different failure rates. Copyright © 2003 John Wiley & Sons, Ltd.
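The counter idea can be illustrated with a minimal sketch (my own two-place up/down net with exponential firing delays, not the paper's model): a counter attached to the "fail" transition estimates the average rate of occurrence of failures directly from simulation, with no reachability tree.

```python
import random

def simulate_failure_rate(lam=0.01, mu=0.5, horizon=1e5, seed=1):
    """Count firings of the 'fail' transition of a two-place Petri net
    (Up -> Down with rate lam, Down -> Up with rate mu) and return the
    average rate of occurrence of failures over the horizon."""
    rng = random.Random(seed)
    up = 1          # marking of the 'Up' place (1 token = system working)
    t = 0.0
    fail_count = 0  # counter attached to the 'fail' transition
    while t < horizon:
        rate = lam if up else mu
        t += rng.expovariate(rate)
        if t >= horizon:
            break
        if up:
            fail_count += 1  # the 'fail' transition fired
        up = 1 - up
    return fail_count / horizon
```

For this toy net the estimate should approach the steady-state value λμ/(λ+μ) ≈ 0.0098.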

2.
This paper presents a methodology for evaluating the performance of Programmable Electronic Systems (PES) used for safety applications. The most common PESs used in the industry are identified. Markov modeling techniques are used to develop the reliability model. The major aspects of the analysis address the random hardware failures, the uncertainty associated with these failures, a methodology to propagate these uncertainties in the Markov model, and modeling of common cause failures. The elements of this methodology are applied to an example using a Triple Redundant PES without inter-processor communication. The performance of the PES is quantified in terms of its reliability, probability to fail safe, and probability to fail dangerous within a mission time. The effect of model input parameters (component failure rates, diagnostic coverage), their uncertainties and common cause failures on the performance of the PES is evaluated.
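As an illustration only (a hypothetical three-state model, far simpler than a full PES Markov model with diagnostics and repair), the fail-safe/fail-dangerous split over a mission time can be obtained by stepping the Markov state equations:

```python
def pes_markov(lam_s=1e-4, lam_d=2e-5, mission=8760.0, steps=20000):
    """Discrete-step integration of a 3-state Markov model:
    Operating -> Fail-Safe (rate lam_s), Operating -> Fail-Dangerous (rate lam_d).
    Returns (P_operating, P_fail_safe, P_fail_dangerous) at mission time."""
    dt = mission / steps
    p_ok, p_fs, p_fd = 1.0, 0.0, 0.0
    for _ in range(steps):
        d_ok = p_ok * (lam_s + lam_d) * dt  # probability mass leaving 'Operating'
        p_fs += p_ok * lam_s * dt
        p_fd += p_ok * lam_d * dt
        p_ok -= d_ok
    return p_ok, p_fs, p_fd
```

Probability mass is conserved at every step, and the failure states split the departed mass in the ratio λ_S : λ_D.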

3.
Despite the efforts in developing Petri net models for manufacturing control and scheduling, the generation of Petri net models cannot be automated for agile manufacturing control and scheduling without difficulty. The problems lie in the complexity of Petri net models. First, it is difficult to visualize the basic manufacturing process flow in a complex Petri net model, even for a Petri net modelling expert. Second, using Petri net models for manufacturing system scheduling is itself complex. In this paper, a decomposition methodology for the automatic generation of Petri nets for manufacturing system control and scheduling is developed. The decomposition methodology includes representing a manufacturing process with the Integrated Definition 3 (IDEF3) methodology, decomposing the manufacturing process based on the similarity of resources, transforming the IDEF3 model into a Petri net control model, and aggregating the sub-Petri-net models. Specifically, a sequential cluster identification algorithm is developed to decompose a manufacturing system represented as an IDEF3 model. The methodology is illustrated with a flexible disassembly cell example. The computational experience shows that the methodology reduces the computational time complexity of the scheduling problem without significantly affecting the solution quality obtained by a simulated annealing scheduling algorithm. Its advantages combine the simplicity of the IDEF3 representation of manufacturing processes with the analytical and control properties of Petri net models. The IDEF3 representation of a manufacturing process also enhances the man-machine interface.

4.
A typical flexible manufacturing system, Westland Helicopters' sheet metal detail manufacturing complex, has been analysed for reliability. The techniques of fault tree analysis and event tree analysis are presented and their applicability to this study investigated. Event tree analysis has been found to be a more effective method for analysing manufacturing systems. The failure states of the system have been identified from the construction of an event tree which considers random hardware faults that influence production. Failure rate data have been used to quantify the critical production failure states in terms of machine failures. Estimates are made of the system's MTTF and percentage availability using typical MTTR figures. The probability that a selected production route fails to complete the manufacture of a set of parts is also evaluated. A dependency of systems reliability on the production demand has been discovered, and a possible method for modelling and assessing the reliability of systems capable of producing several products is proposed.
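The MTTF and availability estimates mentioned above follow standard relations; a sketch under the usual constant-failure-rate assumption (the machine data below are illustrative, not Westland's):

```python
def series_mttf(failure_rates):
    """MTTF of a series production route: any machine failure stops production,
    so the route failure rate is the sum of the machine failure rates."""
    return 1.0 / sum(failure_rates)

def availability(mttf, mttr):
    """Steady-state percentage availability from MTTF and a typical MTTR."""
    return 100.0 * mttf / (mttf + mttr)
```

For example, three machines with failure rates 0.001, 0.002 and 0.002 per hour give a route MTTF of 200 h; with an 8 h MTTR the availability is about 96.2%.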

5.
This paper introduces a new development for modelling the time-dependent probability of failure on demand of parallel architectures, and illustrates its application to multi-objective optimization of proof testing policies for safety instrumented systems. The model is based on the mean test cycle, which includes the different evaluation intervals that a module periodically goes through during its time in service: test, repair and time between tests. The model is aimed at evaluating explicitly the effects of different test frequencies and strategies (i.e. simultaneous, sequential and staggered). It includes quantification of both detected and undetected failures, and puts special emphasis on quantifying the contribution of common cause failure to the system probability of failure on demand as an additional component. Subsequently, the paper presents the multi-objective optimization of proof testing policies with genetic algorithms, using this model for quantification of average probability of failure on demand as one of the objectives. The other two objectives are the system spurious trip rate and lifecycle cost. This permits balancing of the most important aspects of safety system implementation. The approach addresses the requirements of the standard IEC 61508. The overall methodology is illustrated through a practical application case of a protective system against high temperature and pressure of a chemical reactor.
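For orientation, the widely used simplified equations for average probability of failure on demand (a textbook approximation in the IEC 61508 tradition, not the paper's mean-test-cycle model, which also treats test and repair intervals explicitly) look like:

```python
def pfd_avg_1oo1(lam_du, ti):
    """Simplified average PFD of a single channel with dangerous undetected
    failure rate lam_du, proof-tested every ti hours."""
    return lam_du * ti / 2.0

def pfd_avg_1oo2(lam_du, ti, beta):
    """Simplified 1oo2 average PFD: independent part plus a beta-factor
    common cause contribution, added as a separate component."""
    lam_ind = (1.0 - beta) * lam_du
    return (lam_ind * ti) ** 2 / 3.0 + beta * lam_du * ti / 2.0
```

Even a small β dominates the 1oo2 result, which is exactly why the abstract stresses the common cause contribution as an additional component.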

6.
The quality of a product depends on both facilities/equipment and manufacturing processes. Any error or disorder in facilities and processes can cause a catastrophic failure. To avoid such failures, a zero-defect manufacturing (ZDM) system is necessary in order to increase the reliability and safety of manufacturing systems and reach zero-defect quality of products. One of the major challenges for ZDM is the analysis of massive raw datasets. This type of analysis needs an automated and self-organized decision-making system. Data mining (DM) is an effective methodology for discovering interesting knowledge within huge datasets, and it plays an important role in developing a ZDM system. The paper presents a general framework of ZDM and explains how to apply DM approaches to manufacture products with zero defects. It also discusses three ongoing projects demonstrating the practice of using DM approaches to reach the goal of ZDM.

7.
With the popularization of big data, an increasing number of discrete event data have been collected and recorded during system operations. These events are usually stored in the form of event logs, which contain rich information about system operations and have potential applications in fault diagnosis and failure prediction. In manufacturing processes, various levels of correlation exist among the events, which can be used to predict the occurrence of failure events. However, two challenges remain for effective reliability analysis and failure prediction: (1) how to leverage the various information in the event log to predict the occurrence of failure events and (2) how to model the effects of multiple correlations on the prediction. To address these issues, this paper proposes a novel reliability model, which integrates Cox proportional hazards (PHs) regression into survival analysis and association rule mining methodology. The model evaluates the probability that a failure event occurs within a certain period of time, conditional on the occurrence history of correlated events. To estimate parameters and predict the occurrence of failure events, an effective algorithm is proposed based on piecewise-constant time axis division, the Cox PHs model, and maximum likelihood estimation. Unlike the existing literature, our model focuses on the interactions among events. The applicability of the proposed model is illustrated through a case study of a manufacturing company, and sensitivity analysis is conducted to demonstrate its effectiveness.
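The piecewise-constant part of the estimation can be sketched as follows (a generic hazard MLE on an illustrative data layout; the paper's full model additionally includes Cox covariates built from correlated events):

```python
def piecewise_hazard(times, observed, cuts):
    """MLE of a piecewise-constant hazard from right-censored data.
    times[i]: failure or censoring time of unit i; observed[i]: True if failure.
    cuts: ascending interval boundaries, e.g. [0, 10, 20].
    In each interval the MLE rate is (failures) / (total exposure time)."""
    rates = []
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        d = sum(1 for t, ev in zip(times, observed) if ev and lo <= t < hi)
        e = sum(min(t, hi) - lo for t in times if t > lo)  # time at risk in [lo, hi)
        rates.append(d / e if e > 0 else 0.0)
    return rates
```

With two failures at t = 5 and t = 15 and one unit censored at t = 20, the interval [0, 10) has 1 failure over 25 unit-hours of exposure, and [10, 20) has 1 failure over 15.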

8.
This paper describes a method for estimating and forecasting reliability from attribute data, using the binomial model, when reliability requirements are very high and test data are limited. Integer data (specifically, numbers of failures) are converted into non-integer data. The rationale is that when engineering corrective action for a failure is implemented, the probability of recurrence of that failure is reduced; therefore, such failures should not be carried as full failures in subsequent reliability estimates. The reduced failure value for each failure mode is the upper limit on the probability of failure based on the number of successes after engineering corrective action has been implemented. Each failure value is less than one and diminishes as test programme successes continue. These numbers replace the integral numbers (of failures) in the binomial estimate. This method of reliability estimation was applied to attribute data from the life history of a previously tested system, and a reliability growth equation was fitted. It was then ‘calibrated’ for a current similar system's ultimate reliability requirements to provide a model for reliability growth over its entire life-cycle. By comparing current estimates of reliability with the expected value computed from the model, the forecast was obtained by extrapolation.
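A plausible reading of the fractional-failure idea (the paper's exact bound is not reproduced here; this sketch uses a standard zero-recurrence upper confidence bound as the "reduced failure value"):

```python
def reduced_failure_value(successes, confidence=0.9):
    """Upper bound on the recurrence probability of a corrected failure mode,
    given `successes` subsequent trials with no recurrence. Always < 1 for
    successes > 0, and it diminishes as test-programme successes continue."""
    if successes == 0:
        return 1.0
    return 1.0 - (1.0 - confidence) ** (1.0 / successes)

def reliability_estimate(trials, failure_values):
    """Binomial point estimate with fractional failure values replacing
    the integer failure counts."""
    return 1.0 - sum(failure_values) / trials
```

The choice of confidence level is an assumption; the key property the abstract describes is only that each value is below one and shrinks with continued success.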

9.
The quantification of a fault tree is difficult without an exact probability value for all of the basic events of the tree. To overcome this difficulty, this paper proposes a methodology which employs ‘hybrid data’ as a tool to analyse the fault tree. The proposed methodology estimates the failure probability of basic events using statistical analysis of field-recorded failures. Where past failure records are absent, the method follows a fuzzy-set-based theoretical evaluation built on the subjective judgement of experts about the failure interval. The proposed methodology has been applied to a conveyor system. The results of the analysis reveal the effectiveness of the proposed methodology and the instrumental role played by the experience of experts in providing reliability-oriented information. Copyright © 2006 John Wiley & Sons, Ltd.
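The fuzzy side of such "hybrid data" analyses is often carried out with triangular fuzzy numbers (a, m, b) for basic-event probabilities; a minimal sketch using a common componentwise approximation for gate arithmetic (illustrative, not the paper's exact procedure):

```python
def tfn_and(p, q):
    """Approximate AND gate on two triangular fuzzy probabilities (a, m, b):
    componentwise product, a standard TFN approximation."""
    return tuple(x * y for x, y in zip(p, q))

def tfn_or(p, q):
    """Approximate OR gate: 1 - (1-p)(1-q) componentwise; valid as an
    approximation because the expression is increasing in both arguments."""
    return tuple(1.0 - (1.0 - x) * (1.0 - y) for x, y in zip(p, q))
```

An expert-elicited interval thus propagates to an interval on the top event rather than a single point.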

10.
Recently, storage reliability has attracted attention because of the increasing demand for high reliability of products in storage in both military and commercial industries. In this paper we study a general storage reliability model for the analysis of storage failure data. It is indicated that the initial failures, which are usually neglected, should be incorporated in the estimation of storage failure probability. Data from the reliability testing before and during the storage should be combined to give more accurate estimates of both initial failure probability and the probability of storage failures. The results are also useful for decision-making concerning the amount of testing to be carried out before storage. A numerical example is also given to illustrate the idea.
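The central point, that initial failures should be added to the time-dependent storage failures, can be written as a one-line model (assuming an exponential storage failure time; the paper's model is more general):

```python
import math

def storage_failure_prob(p0, lam, t):
    """Probability an item has failed by storage time t, including the
    initial failure probability p0 that is often neglected: an item fails
    either initially, or later in storage given it started good."""
    return p0 + (1.0 - p0) * (1.0 - math.exp(-lam * t))
```

Setting p0 = 0 recovers the usual neglect of initial failures; even a 2% initial failure probability shifts every subsequent estimate.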

11.
A common scenario in engineering is that of a system which operates throughout several sequential and distinct periods of time, during which the modes and consequences of failure differ from one another. This type of operation is known as a phased mission, and for the mission to be a success the system must successfully operate throughout all of the phases. Examples include a rocket launch and an aeroplane flight. Component or sub-system failures may occur at any time during the mission, yet not affect the system performance until the phase in which their condition is critical. This may mean that the transition from one phase to the next is a critical event that leads to phase and mission failure, with the root cause being a component failure in a previous phase. A series of phased missions with no maintenance may be considered as a maintenance-free operating period (MFOP). This paper describes the use of a Petri net (PN) to model the reliability of the MFOP and phased missions scenario. The model uses Monte-Carlo simulation to obtain its results, and due to the modelling power of PNs, can consider complexities such as component failure rate interdependencies and mission abandonment. The model operates three different types of PN which interact to provide the overall system reliability modelling. The model is demonstrated and validated by considering two simple examples that can be solved analytically.
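The Monte Carlo idea behind the PN model can be sketched for a toy two-phase mission (phase 1 needs both components, phase 2 needs either; exponential lifetimes, no repair or abandonment). Note how a component lost in phase 1 only matters in phase 2, where its condition becomes critical:

```python
import random

def mission_reliability(lam_a=1e-3, lam_b=1e-3, t1=100.0, t2=200.0,
                        n=200000, seed=2):
    """Two-phase mission: phase 1 (length t1) needs A AND B,
    phase 2 (length t2) needs A OR B. Component lives are exponential."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        ta = rng.expovariate(lam_a)
        tb = rng.expovariate(lam_b)
        # phase 1: both must survive past t1; phase 2: at least one past t1+t2
        if ta > t1 and tb > t1 and max(ta, tb) > t1 + t2:
            ok += 1
    return ok / n
```

This toy case is exactly the kind of analytically solvable example used to validate a simulation model: here R = e^(−0.2) − (e^(−0.1) − e^(−0.3))² ≈ 0.792.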

12.
Safety systems are often characterized by substantial redundancy and diversification in safety critical components. In principle, such redundancy and diversification can bring benefits when compared to single-component systems. However, it has also been recognized that the evaluation of these benefits should take into account that redundancies cannot be founded, in practice, on the assumption of complete independence, so that the resulting risk profile is strongly dominated by dependent failures. It is therefore mandatory that the effects of common cause failures be estimated in any probabilistic safety assessment (PSA). Recently, in the Hughes model for hardware failures and in the Eckhardt and Lee models for software failures, it was proposed that the stressfulness of the operating environment affects the probability that a particular type of component will fail. Thus, dependence of component failure behaviors can arise indirectly through the variability of the environment which can directly affect the success of a redundant configuration. In this paper we investigate the impact of indirect component dependence by means of the introduction of a probability distribution which describes the variability of the environment. We show that the variance of the distribution of the number, or times, of system failures can give an indication of the presence of the environment. Further, the impact of the environment is shown to affect the reliability and the design of redundant configurations.
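The variance signature of a shared environment is easy to exhibit: if the failure rate λ varies with the environment, the count of failures over time t is over-dispersed, with Var(N) = E[λ]t + Var(λ)t² exceeding the Poisson value E[λ]t. A sketch for a discrete environment distribution (illustrative rates):

```python
def mixed_poisson_moments(rates, probs, t):
    """Mean and variance of the failure count N over time t when the rate
    depends on a random environment taking value rates[i] with probs[i].
    Var(N) = E[lam]*t + Var(lam)*t^2 (law of total variance)."""
    mean_lam = sum(r * p for r, p in zip(rates, probs))
    var_lam = sum((r - mean_lam) ** 2 * p for r, p in zip(rates, probs))
    return mean_lam * t, mean_lam * t + var_lam * t ** 2
```

The excess of the variance over the mean is precisely the indication of the environment's presence that the abstract refers to.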

13.
A methodology is developed which uses Petri nets instead of the fault tree methodology and solves for reliability indices using the fuzzy Lambda–Tau method. Fuzzy set theory is used for representing the failure rate and repair time instead of classical (crisp) set theory, because fuzzy numbers allow expert opinions, linguistic variables, operating conditions, uncertainty and imprecision in reliability information to be incorporated into the system model. Petri nets are used because, unlike the fault tree methodology, they allow efficient simultaneous generation of minimal cut and path sets.
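The crisp two-input Lambda–Tau gate expressions underlying the method (the fuzzy version applies these over alpha-cut intervals of the fuzzy λ and τ, which is omitted here) are:

```python
def and_gate(l1, t1, l2, t2):
    """Lambda-Tau expressions for a two-input AND gate: output failure
    rate and repair time from input failure rates l and repair times t."""
    lam = l1 * l2 * (t1 + t2)
    tau = (t1 * t2) / (t1 + t2)
    return lam, tau

def or_gate(l1, t1, l2, t2):
    """Lambda-Tau expressions for a two-input OR gate."""
    lam = l1 + l2
    tau = (l1 * t1 + l2 * t2) / (l1 + l2)
    return lam, tau
```

Applied gate by gate up the Petri net, these yield system-level λ and τ, from which indices such as MTBF = 1/λ + τ follow.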

14.
The reliability of engineering systems is usually improved by the inclusion of redundant components in the design. Often, the redundant components must all contribute actively to the operation of the system. Examples include structures, water and power distribution systems, and communication networks. For these systems, the failure of each successive component defines a different topological configuration for the system. A reliable system should perform adequately in as many of these configurations as possible. Consequently, the reliability of a system with active redundancy depends on two factors: the probability, considering component failures, that a functional system topology is maintained; and the probability of adequate system performance in each functional configuration. To date, no single reliability measure exists which combines both of these factors, but such a measure would be useful for comparison of alternative redundant designs. Current methods for reliability assessment have been tailored to the purposes of individual engineering disciplines and reflect the inherent physical properties of specific types of systems. However, an increasing need for reliability analysis of large, complex, multidisciplinary systems requires a more general and unified approach. In this paper, we propose a unified, model-based methodology for reliability-based design which provides a single, second moment reliability index for systems with active redundancy. The reliability index of a redundant pipe network is calculated as an illustrative example.
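The second-moment reliability index referred to above is conventionally computed from the first two moments of capacity R and demand S (shown here for the independent, normal case; the paper's index aggregates over configurations):

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """Second-moment reliability index for capacity R versus demand S:
    beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2).
    Larger beta means a larger safety margin in standard deviations."""
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)
```

For example, a pipe with mean capacity 100 (sd 10) against mean demand 60 (sd 5) gives β ≈ 3.58, i.e. the margin is about 3.6 standard deviations.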

15.
A new reliability measure is proposed and equations are derived which determine the probability of existence of a specified set of minimum gaps between random variables following a homogeneous Poisson process in a finite interval. Using the derived equations, a method is proposed for specifying the upper bound of the random variables' number density which guarantees that the probability of clustering of two or more random variables in a finite interval remains below a maximum acceptable level. It is demonstrated that even for moderate number densities the probability of clustering is substantial and should not be neglected in reliability calculations.

In the important special case where the random variables are failure times, models have been proposed for determining the upper bound of the hazard rate which guarantees a set of minimum failure-free operating intervals before the random failures, with a specified probability. A model has also been proposed for determining the upper bound of the hazard rate which guarantees a minimum availability target. Using the models proposed, a new strategy, models and reliability tools have been developed for setting quantitative reliability requirements which consist of determining the intersection of the hazard rate envelopes (hazard rate upper bounds) which deliver a minimum failure-free operating period before random failures, a risk of premature failure below a maximum acceptable level and a minimum required availability. It is demonstrated that setting reliability requirements solely based on an availability target does not necessarily mean a low risk of premature failure. Even at a high availability level, the probability of premature failure can be substantial. For industries characterised by a high cost of failure, the reliability requirements should involve a hazard rate envelope limiting the risk of failure below a maximum acceptable level.
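That clustering is substantial even at moderate number densities can be checked by simulation (a Monte Carlo sketch of the probability that some pair of points falls closer than a minimum gap; the paper derives closed-form equations instead):

```python
import random

def clustering_probability(n, length, min_gap, trials=50000, seed=3):
    """Monte Carlo estimate of the probability that at least two of n
    uniformly placed points in [0, length] lie closer than min_gap apart
    (conditioning a homogeneous Poisson process on n arrivals gives
    exactly this uniform placement)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pts = sorted(rng.uniform(0, length) for _ in range(n))
        if any(b - a < min_gap for a, b in zip(pts, pts[1:])):
            hits += 1
    return hits / trials
```

With only 10 points on an interval 100 gaps wide, the chance of a sub-unit gap is already around 60%, illustrating the abstract's warning.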

16.
In this paper, a methodology known as APSRA (Assessment of Passive System ReliAbility) is used to evaluate the reliability of the passive isolation condenser system of the Indian Advanced Heavy Water Reactor (AHWR). As per the APSRA methodology, the passive system reliability evaluation is based on the failure probability of the system to perform the design basis function. The methodology first determines the operational characteristics of the system and the failure conditions based on a predetermined failure criterion. The parameters that could degrade the system performance are identified and considered for analysis, and the different modes of failure and their causes are identified. The failure surface is predicted using a best estimate code, considering deviations of the operating parameters from their nominal states which affect the isolation condenser system performance. Once the failure surface of the system is predicted, the causes of failure are examined through root diagnosis; they occur mainly due to failure of mechanical components. Reliability of the system is evaluated through a classical PSA treatment based on the failure probability of the components, using generic data.

17.
This paper describes a novel approach for modelling offshore pipeline failures. Using data for pipelines in the North Sea, a methodology has been developed for explaining the effects of several factors on the reliability of pipeline systems. Discriminant analysis forms the basis of this methodology, which can accommodate the manifold variables affecting such failures and predict the probability of any pipeline failing. In this respect, the proposed methodology is superior to the conventional approach, which is based on average failure rates.

18.
A circular consecutive-2-out-of-n:F repairable system with one repairman is studied in this paper. When more than one component has failed, priorities are assigned to the failed components. Both the working time and the repair time of each component are assumed to be exponentially distributed, and every component after repair is as good as new. By using the definition of generalized transition probability and the concept of the critical component, we derive the state transition probability matrix of the system. Methodologies are then presented for the derivation of system reliability indexes such as availability, rate of occurrence of failure, mean time between failures, reliability, and mean time to first failure.
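The static structure of the system (ignoring the repair dynamics treated in the paper) is easy to state: a circular consecutive-2-out-of-n:F system fails iff two adjacent components have failed. A brute-force enumeration sketch for small n:

```python
from itertools import product

def circular_c2n_reliability(n, p):
    """Exact structural reliability of a circular consecutive-2-out-of-n:F
    system with i.i.d. component reliability p, by enumerating all states.
    The system works iff no two circularly adjacent components are failed."""
    r = 0.0
    for state in product([0, 1], repeat=n):   # 1 = component working
        if all(state[i] or state[(i + 1) % n] for i in range(n)):
            k = sum(state)
            r += p ** k * (1 - p) ** (n - k)
    return r
```

For n = 3 with p = 0.9 any two failures are adjacent on the circle, so R = p³ + 3p²(1−p) = 0.972; for n = 4 the two non-adjacent failure pairs also survive, giving 0.9639.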

19.
A Petri-net-based fault diagnosis method is applied that integrates knowledge representation and diagnostic reasoning. Places representing fault symptoms serve as the initial places, and places representing the underlying faults serve as the goal places; a depth-first search strategy with maximum certainty factor yields the best path from the set of initial places to the set of goal places. During reasoning, the diagnosis result can be obtained quickly through a series of matrix computations that recursively update the state vector. The conclusions drawn by the Petri net reasoning agree with the facts, so the approach is suitable for condition monitoring and fault diagnosis of complex modern manufacturing processes and systems.
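The state-vector recursion can be sketched with the usual incidence-matrix firing rule M' = M − Pre[t] + Post[t] and a depth-first search from symptom places to a fault place (the certainty-factor ranking is omitted; the tiny net below is illustrative only):

```python
def fire(marking, pre, post, t):
    """One step of the state-vector recursion M' = M - Pre[t] + Post[t];
    returns None if transition t is not enabled under the marking."""
    if any(m < need for m, need in zip(marking, pre[t])):
        return None
    return [m - a + b for m, a, b in zip(marking, pre[t], post[t])]

def diagnose(initial, pre, post, fault_place, max_depth=10):
    """Depth-first search for a firing sequence that puts a token in the
    fault place, starting from the symptom marking."""
    stack = [(initial, [])]
    seen = set()
    while stack:
        m, path = stack.pop()
        if m[fault_place] > 0:
            return path          # firing sequence explaining the fault
        key = tuple(m)
        if key in seen or len(path) >= max_depth:
            continue
        seen.add(key)
        for t in range(len(pre)):
            nxt = fire(m, pre, post, t)
            if nxt is not None:
                stack.append((nxt, path + [t]))
    return None
```

With symptom places p0, p1, an intermediate place p2 and a fault place p3 (t0: p0→p2, t1: p1+p2→p3), both symptoms together imply the fault via the sequence t0, t1.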

20.
Posbist fault tree analysis of coherent systems   (cited by 11: 0 self-citations, 11 by others)
When the failure probability of a system is extremely small or the necessary statistical data from the system are scarce, it is very difficult or impossible to evaluate its reliability and safety with conventional fault tree analysis (FTA) techniques. New techniques are needed to predict and diagnose such a system's failures and evaluate its reliability and safety. In this paper, we first provide a concise overview of FTA. Then, based on the posbist reliability theory, event failure behavior is characterized in the context of possibility measures and the structure function of the posbist fault tree of a coherent system is defined. In addition, we define the AND operator and the OR operator based on the minimal cut of a posbist fault tree. Finally, a model of posbist fault tree analysis (posbist FTA) of coherent systems is presented. The use of the model for quantitative analysis is demonstrated with a real-life safety system.
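In possibility theory the gate operators act on possibility values by min/max rather than by probabilistic arithmetic; a minimal sketch for the simplest non-interactive (independent) case, which is what makes posbist FTA workable when probabilities are unobtainable:

```python
def pos_and(*poss):
    """Possibility of an AND gate output: the minimum of the input
    failure possibilities (non-interactive events)."""
    return min(poss)

def pos_or(*poss):
    """Possibility of an OR gate output: the maximum of the input
    failure possibilities."""
    return max(poss)
```

Unlike probabilities, these values never shrink multiplicatively, so a top-event possibility stays meaningful even when every input is only an expert-judged possibility grade.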


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号