Similar Literature
20 similar documents found.
1.
There is growing interest from both the regulatory authorities and the nuclear industry in stimulating the use of Probabilistic Risk Analysis (PRA) for risk-informed applications at Nuclear Power Plants (NPPs). Nowadays, special attention is being paid to analyzing plant-specific changes to Test Intervals (TIs) within the Technical Specifications (TSs) of NPPs, and there seems to be a consensus on the need to make these requirements more risk-effective and less costly. Resource versus risk-control effectiveness principles formally enter into optimization problems. This paper presents an approach for using PRA models to conduct the constrained optimization of TIs based on a steady-state genetic algorithm (SSGA), where the cost or burden is minimized while the risk or performance is constrained to a given level, or vice versa. The paper begins with the problem formulation, deriving the objective function and constraints that apply in the constrained optimization of TIs based on risk and cost models at the system level. Next, the foundation of the optimizer is given, which is derived by customizing an SSGA to allow optimizing TIs under constraints. A case study performed with this approach shows the benefits of adopting both PRA models and genetic algorithms, in particular for the constrained optimization of TIs, although great benefit is also expected from using this approach to solve other engineering optimization problems. However, as concluded in this paper, care must be taken when using genetic algorithms in constrained optimization problems.
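As a rough illustration of the approach, the sketch below runs a steady-state GA on a toy three-component system; the failure rates, test costs, the first-order unavailability model (lambda*TI/2), and the penalty-based constraint handling are all illustrative assumptions, not the paper's actual PRA models.

```python
import random

# Hypothetical 3-component series system: failure rates (1/h), per-test costs.
LAMBDA = [2e-5, 5e-6, 1e-5]
COST = [800.0, 1200.0, 500.0]
U_MAX = 5e-3                    # risk constraint on mean unavailability
BOUNDS = (168.0, 8760.0)        # allowed test intervals (hours)

def unavailability(tis):
    # First-order approximation for periodically tested standby components:
    # mean unavailability ~ lambda * TI / 2, summed over the series system.
    return sum(l * t / 2.0 for l, t in zip(LAMBDA, tis))

def yearly_cost(tis):
    return sum(c * 8760.0 / t for c, t in zip(COST, tis))

def fitness(tis):
    # Exterior penalty: infeasible solutions pay for the constraint violation.
    penalty = max(0.0, unavailability(tis) - U_MAX) * 1e9
    return yearly_cost(tis) + penalty

def random_individual():
    return [random.uniform(*BOUNDS) for _ in LAMBDA]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.2):
    return [random.uniform(*BOUNDS) if random.random() < rate else g for g in ind]

def ssga(pop_size=40, generations=5000):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Steady-state step: one offspring replaces the current worst member.
        a, b = random.sample(pop, 2)
        child = mutate(crossover(a, b))
        worst = max(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) < fitness(pop[worst]):
            pop[worst] = child
    return min(pop, key=fitness)

best = ssga()
print("TIs (h):", [round(t, 1) for t in best])
print("cost/yr: %.0f  unavailability: %.2e" % (yearly_cost(best), unavailability(best)))
```

The exterior penalty is only one way to enforce the risk constraint; as the abstract cautions, penalty weights and feasibility handling need care in constrained GA work.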

2.
Two methods of constructing confidence intervals for variance components in gauge capability studies are presented in this article. Comparisons are made between the restricted maximum likelihood (REML) method and the modified large sample (MLS) method for constructing confidence intervals for components of repeatability, reproducibility, and total (gauge) variability. The examples considered involve a factorial design and a nonstandard nested design. © 1997 John Wiley & Sons, Ltd.
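A minimal sketch of an MLS-type interval, using the Graybill-Wang construction for a nonnegative linear combination of independent mean squares; the one-way layout and the numerical mean squares are illustrative assumptions, simpler than the factorial and nested designs in the article.

```python
import numpy as np
from scipy import stats

def mls_ci(coefs, ms, dfs, alpha=0.05):
    """Graybill-Wang MLS interval for theta = sum(c_q * MS_q), c_q >= 0."""
    coefs, ms, dfs = map(np.asarray, (coefs, ms, dfs))
    theta = float(np.sum(coefs * ms))
    g = 1.0 - dfs / stats.chi2.ppf(1.0 - alpha / 2.0, dfs)
    h = dfs / stats.chi2.ppf(alpha / 2.0, dfs) - 1.0
    lo = theta - np.sqrt(np.sum((g * coefs * ms) ** 2))
    hi = theta + np.sqrt(np.sum((h * coefs * ms) ** 2))
    return theta, max(lo, 0.0), hi

# One-way random-effects gauge study: p parts, r repeat measurements.
# E[MS_P] = sigma_E^2 + r*sigma_P^2 and E[MS_E] = sigma_E^2, so total
# variability sigma_P^2 + sigma_E^2 = (1/r)*MS_P + (1 - 1/r)*MS_E.
p, r = 20, 3
ms_p, ms_e = 4.85, 0.91          # illustrative mean squares from the ANOVA
print(mls_ci([1.0 / r, 1.0 - 1.0 / r], [ms_p, ms_e], [p - 1, p * (r - 1)]))
```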

3.
The Canadian Nuclear Safety Commission (CNSC) requires that each shutdown system (SDS) of a CANDU plant be available for more than 99.9% of the reactor operating time and be tested periodically. Compliance with the availability requirement must be demonstrated using component failure rate data and the benefits of the tests. Many factors should be considered in determining the surveillance test interval (STI) for the SDSs. These include the desired target availability, the actual unavailability, the probability of spurious trips, the test duration, and side effects such as wear-out, human errors, and economic burdens. In this paper, a Markov process model is developed to study the effect of the test interval on shutdown system number one (SDS1). The model can provide the quantitative data required for selecting the STI. By representing the state transitions in the SDS1 as a time-homogeneous Markov process, the model can quantify the effect of surveillance test durations and intervals on the unavailability and the spurious trip probability. Once combined with the conditional core damage model derived from the event trees and fault trees of the probabilistic safety assessment (PSA) of the nuclear power plant (NPP), the model can also be used to analyze the variation of the core damage probability with respect to changes in the test interval.
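The sketch below conveys the flavor of such an analysis on a deliberately reduced two-state Markov model of a standby channel whose failures are revealed only by tests; the failure rate, test duration, and renew-on-test assumption are illustrative, far simpler than the SDS1 model described above.

```python
import numpy as np
from scipy.linalg import expm

LAM = 1.0e-5        # channel failure rate (1/h), illustrative
TAU = 2.0           # test duration (h); the channel is down while tested
Q = np.array([[-LAM, LAM],     # state 0: available
              [ 0.0, 0.0]])    # state 1: failed, undetected (absorbing)

def cycle_unavailability(T, steps=200):
    """Mean unavailability over one test cycle (interval T plus test TAU).

    Each test reveals any latent failure and restores the channel, so the
    chain restarts in state 0 at the beginning of every cycle."""
    ts = np.linspace(0.0, T, steps)
    p_failed = np.array([expm(Q * t)[0, 1] for t in ts])
    # trapezoidal integral: time spent failed-undetected between tests
    latent = np.sum((p_failed[1:] + p_failed[:-1]) / 2.0 * np.diff(ts))
    return (latent + TAU) / (T + TAU)

for T in (168.0, 720.0, 2190.0, 4380.0):
    print("TI = %6.0f h -> mean unavailability = %.3e" % (T, cycle_unavailability(T)))
```

Even this toy model shows the trade-off the abstract describes: shorter intervals cut latent-failure exposure but add test downtime, so the unavailability has an interior minimum (here near sqrt(2*TAU/LAM), about 630 h).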

4.
In the traditional industrial verification process, when the aim is compliance with assigned specifications, it is difficult to find an affordable statistical method for the purpose. Most data tables in industrial procedures and standards deal with tolerance limits, neglecting the potential need to verify assigned specification limits. A two-sided tolerance interval, combined with a bivariate statistical hypothesis test, can be used to address this problem. The proposed risk-based approach leads to the determination of the minimum sample size with preestablished probabilities of Type I and Type II errors, which are essential elements for estimating the safety and reliability risk. A novel method is proposed for determining the tolerance interval testing factors. This approach calculates the testing factors based on the deviation of the mean and the variance from the null hypothesis when a specified value of the Type II error is reached. The deviations of the mean and variance are determined in such a way that an assigned proportion of the population falls within the specification limits. Additional studies are provided to assess the robustness of the method in nonnormal environments and to compare it with other methods.
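For orientation, here is a minimal sketch of the classical two-sided normal tolerance factor (Howe's approximation) checked against specification limits; this is standard machinery, not the paper's novel testing factors, and the sample data are simulated.

```python
import numpy as np
from scipy import stats

def k_two_sided(n, content=0.99, confidence=0.95):
    """Howe's approximation to the two-sided normal tolerance factor."""
    nu = n - 1
    z = stats.norm.ppf((1.0 + content) / 2.0)
    chi2 = stats.chi2.ppf(1.0 - confidence, nu)   # lower-tail quantile
    return z * np.sqrt(nu * (1.0 + 1.0 / n) / chi2)

# Verification step against specification limits (LSL, USL): accept if
# xbar +/- k*s falls inside the specifications.
rng = np.random.default_rng(1)
x = rng.normal(10.0, 0.05, size=30)               # illustrative sample
k = k_two_sided(len(x))
lo, hi = x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)
print("tolerance interval: (%.3f, %.3f), k = %.3f" % (lo, hi, k))
```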

5.
Many industrial processes for discrete consumable products consist of a series (or set) of sequential process operations (or subsystems) which are de-coupled by means of in-process storage buffers. Each subsystem of such a process contains one or more parallel coupled or uncoupled operating lanes. We describe the use of a discrete-event simulation model for determining the availability of such a process. We likewise define and use a genetic algorithm to determine process designs and operating rules that have high availability. A 65-variable example, consisting of four operating subsystems with at most four lanes per subsystem, is used to illustrate the method. The results for this and similar real-world applications indicate that, by applying this methodology, it is possible to design buffered industrial processes having high availability.
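A minimal time-stepped approximation of such a simulation for a two-subsystem line with one buffer; the failure/repair rates, buffer capacity, and unit processing rate are illustrative assumptions (a production model would use true event scheduling, multiple lanes, and a GA layered above the simulation as in the paper).

```python
import random

MTBF = [40.0, 60.0]      # mean time between failures per subsystem (h)
MTTR = [2.0, 3.0]        # mean time to repair (h)
BUF_CAP = 10.0           # buffer capacity (units); processing rate 1 unit/h

def simulate(hours=100_000.0, dt=0.1):
    up = [True, True]
    t_next = [random.expovariate(1.0 / MTBF[i]) for i in range(2)]
    buf, delivered, t = 0.0, 0.0, 0.0
    while t < hours:
        for i in range(2):
            if t >= t_next[i]:                    # up/down state change
                up[i] = not up[i]
                mean = MTBF[i] if up[i] else MTTR[i]
                t_next[i] = t + random.expovariate(1.0 / mean)
        if up[0]:
            buf = min(BUF_CAP, buf + dt)          # subsystem 1 feeds the buffer
        if up[1]:
            take = min(buf, dt)                   # subsystem 2 draws from it
            buf -= take
            delivered += take
        t += dt
    return delivered / hours                      # throughput availability

print("estimated line availability: %.4f" % simulate())
```

The buffer lets subsystem 2 ride through short outages of subsystem 1, so the estimate lands between the product and the minimum of the two stand-alone availabilities.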

6.
This study investigated the availability and safety concerns of the conventional and prospective propulsion systems for LNG carriers:
dual-fuel steam turbine mechanical (DFSM) propulsion,
dual-fuel diesel electric (DFDE) propulsion,
dual-fuel gas turbine electric (DFGE) propulsion,
dual-fuel diesel mechanical (DFDM) propulsion, and
single-fuel diesel mechanical propulsion with reliquefaction (SFDM+R).
The two prospective candidates, DFDM and DFGE, exhibited availabilities for the design and emergency propulsion loads as high as those of the newly adopted DFDE and SFDM+R, while the DFSM demonstrated the highest. All the propulsion systems achieved a satisfactory level of availability for boil-off gas (BOG) utilization. The newly introduced dual-fuel systems (DFDE, DFDM, and DFGE) bring new hazards because of their need for a pressurized fuel gas supply. The failure modes caused by these hazards were identified, and feasible safeguards were suggested. The hazards of fire and explosion stemming from flammable gas leaks were considered to be acceptably mitigated by the safety requirements of current industrial standards and classification societies.

7.
An algorithm designed to calculate confidence intervals for solutions to ill-posed problems subject to inequality constraints is applied to the calculation of confidence intervals for a high-voltage impulse distorted by a divider system. Applications of the method to measurements made with resistive and capacitive dividers illustrate its value for obtaining useful stochastic error bounds for high-voltage impulse restoration. The technique can be useful for sensors with a directly measurable unit step response. For other dividers, its utility has been adequately demonstrated.
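As a loose analogue of the restoration step, the sketch below recovers an impulse from a convolved, noisy measurement under a non-negativity (inequality) constraint using non-negative least squares; the divider response, waveform, and noise level are invented, and the paper's confidence-interval machinery is not reproduced here.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

# Restore an input impulse x from a measured output y = h * x + noise,
# with x >= 0 standing in for the inequality constraint of the paper.
n = 200
t = np.arange(n) * 0.01
h = np.exp(-t / 0.05); h /= h.sum()             # assumed divider impulse response
x_true = np.clip(np.exp(-t / 0.5) - np.exp(-t / 0.02), 0.0, None)  # double exponential
A = toeplitz(h, np.zeros(n))                    # lower-triangular convolution matrix
rng = np.random.default_rng(0)
y = A @ x_true + rng.normal(0.0, 1e-3, n)

x_hat, resid = nnls(A, y)                       # constrained least squares
print("residual norm: %.4f, max error: %.4f" % (resid, np.abs(x_hat - x_true).max()))
```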

8.
The complexity of modern engineering systems, together with the need for realistic considerations when modeling their availability and reliability, makes analytic methods very difficult to apply. Simulation methods, such as the Monte Carlo technique, which allow modeling the behavior of complex systems under realistic time-dependent operational conditions, are suitable tools for approaching this problem. The scope of this paper is, first, to show the opportunity for using Monte Carlo simulation as an approach for carrying out availability/reliability assessments of complex systems. Second, the paper proposes a general approach to complex-system availability/reliability assessment that integrates the use of continuous-time Monte Carlo simulation. Finally, this approach is exemplified, and to some extent validated, by presenting the resolution of a case study consisting of an availability assessment for two alternative configurations of a cogeneration plant. In the case study, random discrete events are generated in a computer model in order to create a realistic lifetime scenario of the plant, and results of the simulation of the plant's life cycle are produced. The main performance measures are then estimated by treating the results as a series of real experiments and by using statistical inference to reach reasonable confidence intervals. The benefits of the different plant configurations are compared and discussed using the model, according to their fulfillment of the initial availability requirements for the plant.
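A minimal sketch of the core Monte Carlo idea for a single repairable unit: simulate many life histories, then apply statistical inference across the replications to obtain a confidence interval. The exponential failure/repair times and all parameters are illustrative assumptions, far simpler than a cogeneration plant model.

```python
import numpy as np

rng = np.random.default_rng(42)
MTBF, MTTR = 500.0, 24.0          # illustrative exponential parameters (h)
MISSION = 8760.0                   # one year

def one_history():
    """Fraction of the mission during which the unit is up (one life history)."""
    t, up_time, state_up = 0.0, 0.0, True
    while t < MISSION:
        dur = rng.exponential(MTBF if state_up else MTTR)
        dur = min(dur, MISSION - t)
        if state_up:
            up_time += dur
        t += dur
        state_up = not state_up
    return up_time / MISSION

runs = np.array([one_history() for _ in range(2000)])
mean, half = runs.mean(), 1.96 * runs.std(ddof=1) / np.sqrt(len(runs))
print("availability = %.4f +/- %.4f (95%% CI)" % (mean, half))
# sanity check: steady-state value MTBF/(MTBF+MTTR) is about 0.9542
```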

9.
A study of service reliability and availability for distributed systems
Distributed systems are usually designed and developed to provide certain important services, such as computing and communication. In this paper, a general model is presented for a centralized heterogeneous distributed system, a configuration widely used in distributed system design. Based on this model, the distributed service reliability, defined as the probability of successfully providing the service in a distributed environment and an important performance measure for this type of system, is investigated. An application example is used to illustrate the procedure. Furthermore, with the help of the model, various issues such as the release time needed to achieve a service reliability requirement and the sensitivity of the model parameters are studied. This type of analysis is important in the application of such models.

10.
Process capability indices (PCIs) have become popular as unit-less measures of whether a process is capable of reproducing items that meet quality requirements. A reliable approach for testing process capability is to establish an interval estimate for which we can assert, with a reasonable degree of certainty, that it contains the true PCI value. However, the construction of such an interval estimate is not trivial, since the distribution of the commonly used Cpk index involves unknown parameters. In this paper, we adopt the concepts of generalized confidence intervals and generalized pivotal quantities to derive generalized lower confidence bounds that provide critical information on process performance. Two practical applications in the area of process capability are considered: (i) assessing whether a process under investigation is capable and (ii) providing the lowest performance of the manufacturing processes from several production lines or several suppliers for quality assurance. The applicability of the derived results is illustrated with examples. Copyright © 2008 John Wiley & Sons, Ltd.
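A minimal sketch of a generalized pivotal quantity construction for a lower confidence bound on Cpk under normality; the summary statistics and specification limits are illustrative, and the paper should be consulted for the exact derivations.

```python
import numpy as np

def cpk_lower_bound(xbar, s, n, lsl, usl, alpha=0.05, B=100_000, seed=0):
    """Generalized lower confidence bound for Cpk via GPQs:
    sigma_g = s*sqrt((n-1)/W), mu_g = xbar - Z*sigma_g/sqrt(n),
    with Z ~ N(0,1) and W ~ chi2(n-1) independent."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(B)
    w = rng.chisquare(n - 1, B)
    sigma_g = s * np.sqrt((n - 1) / w)
    mu_g = xbar - z * sigma_g / np.sqrt(n)
    cpk_g = np.minimum(usl - mu_g, mu_g - lsl) / (3.0 * sigma_g)
    return np.quantile(cpk_g, alpha)          # 100(1-alpha)% lower bound

# Illustrative: n = 50 observations, sample mean 10.02, sd 0.30, specs (9, 11).
print("Cpk lower bound: %.3f" % cpk_lower_bound(10.02, 0.30, 50, 9.0, 11.0))
```

Comparing this bound with a capability requirement (say 1.33) answers application (i); computing it per supplier and reporting the smallest addresses application (ii).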

11.
The semi-Markov decision model is a powerful tool for analyzing sequential decision processes with random decision epochs. In this paper, we build a semi-Markov decision process (SMDP) model for the maintenance policy optimization of condition-based preventive maintenance problems and present an approach for the joint optimization of the inspection rate and the maintenance policy. Through numerical examples, the improvement of this method over a scheme that optimizes only the inspection rate is demonstrated. We also find that, in the special case where the deterioration rate at each failure stage is the same, the optimal policy obtained by the SMDP algorithm is a dynamic threshold-type scheme whose threshold value depends on the inspection rate.
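A simulation-based stand-in (not the SMDP algorithm itself) that conveys the joint optimization: grid-search an inspection rate and a preventive-maintenance threshold against a simulated long-run cost rate. All stages, rates, and costs are invented, and repairs are assumed instantaneous with self-announcing failures.

```python
import random

MU = 0.1                 # stage transition rate (1/h); stages 0..4, 4 = failure
N_FAIL = 4
C_INSP, C_PM, C_CM = 20.0, 500.0, 5000.0   # inspection, preventive, corrective

def cost_rate(insp_rate, threshold, horizon=300_000.0):
    """Long-run cost rate of: inspect at Poisson rate insp_rate, renew
    preventively when the observed stage >= threshold."""
    t, cost, stage = 0.0, 0.0, 0
    while t < horizon:
        dt_deg = random.expovariate(MU)           # next deterioration jump
        dt_insp = random.expovariate(insp_rate)   # next inspection
        if dt_insp < dt_deg:
            t += dt_insp
            cost += C_INSP
            if stage >= threshold:
                cost += C_PM
                stage = 0                         # preventive renewal
        elif stage + 1 == N_FAIL:
            t += dt_deg
            cost += C_CM
            stage = 0                             # failure, corrective renewal
        else:
            t += dt_deg
            stage += 1
    return cost / t

best = min(((r, m) for r in (0.02, 0.05, 0.1, 0.2) for m in (1, 2, 3)),
           key=lambda rm: cost_rate(*rm))
print("best (inspection rate, PM threshold):", best)
```

The SMDP formulation replaces this brute-force search with value or policy iteration over the embedded decision process, which is what allows the threshold to vary dynamically with the inspection rate.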

12.
Statistical intervals, properly calculated from sample data, are likely to be substantially more informative to decision makers than a point estimate alone, and are often of paramount interest to practitioners and management (they are usually a great deal more meaningful than statistical significance or hypothesis tests). Wolfinger (1998, J Qual Technol 36:162–170) presented a simulation-based approach for determining Bayesian tolerance intervals in a balanced one-way random effects model. In this note, the theory and results of Wolfinger are extended to the balanced two-factor nested random effects model. An example illustrates the flexibility and unique features of the Bayesian simulation method for the construction of tolerance intervals.
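A minimal sketch of the Wolfinger-style Bayesian simulation for the simpler balanced one-way model (the note's two-factor nested extension follows the same pattern); the flat priors, ANOVA summaries, and the (0.95, 0.95) one-sided limit are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Balanced one-way random effects model: y_ij = mu + a_i + e_ij,
# a_i ~ N(0, s2a), e_ij ~ N(0, s2e), i = 1..a batches, j = 1..n repeats.
a, n = 15, 4
ss_a, ss_e, ybar = 120.0, 45.0, 50.0     # illustrative ANOVA summaries
B, content, gamma = 100_000, 0.95, 0.95

rng = np.random.default_rng(3)
# Posterior simulation under flat priors: scaled inverse chi-square draws.
s2e = ss_e / rng.chisquare(a * (n - 1), B)
tot = ss_a / rng.chisquare(a - 1, B)     # draws of s2e + n*s2a
s2a = np.maximum((tot - s2e) / n, 0.0)   # truncate negative draws at zero
mu = ybar + rng.standard_normal(B) * np.sqrt(tot / (a * n))
# One-sided (content, gamma) upper tolerance limit for a new observation:
limit_draws = mu + stats.norm.ppf(content) * np.sqrt(s2a + s2e)
print("Bayesian upper tolerance limit: %.2f" % np.quantile(limit_draws, gamma))
```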

13.
This work deals with repairable systems with unknown failure and repair time distributions. We focus on the estimation of the instantaneous availability, that is, the probability that the system is functioning at a given time, which we consider the most significant measure for evaluating the effectiveness of a repairable system.

The estimation of the availability function is not, in general, an easy task, since analytical techniques are difficult to apply. We propose a smooth estimate of the availability based on kernel estimators of the cumulative distribution functions (CDFs) of the failure and repair times, with bandwidth parameters obtained by bootstrap procedures. The consistency properties of the availability estimator are established using techniques based on the Laplace transform.
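A minimal sketch of the kernel CDF estimator at the heart of the proposal, with a Gaussian kernel and a simple rule-of-thumb bandwidth standing in for the paper's bootstrap bandwidth selection; the Weibull failure-time sample is simulated.

```python
import numpy as np
from scipy import stats

def kernel_cdf(data, grid, h=None):
    """Smooth CDF estimate: F_hat(t) = mean(Phi((t - X_i) / h)),
    i.e. the average of integrated Gaussian kernels."""
    data = np.asarray(data)
    if h is None:                            # rule-of-thumb bandwidth
        h = 1.06 * data.std(ddof=1) * len(data) ** (-0.2)
    return stats.norm.cdf((grid[:, None] - data[None, :]) / h).mean(axis=1)

rng = np.random.default_rng(7)
failures = rng.weibull(1.5, 100) * 1000.0    # sample from unknown failure law
grid = np.linspace(0.0, 3000.0, 7)
print(np.round(kernel_cdf(failures, grid), 3))
```

In the paper, the estimated failure and repair CDFs are then plugged into the renewal-type expression for instantaneous availability, which is where the Laplace-transform analysis enters.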


14.
A new and simple algorithm for the approximate calculation of optimal output-constrained feedback regulators is presented. Computational simplicity is achieved by using properties of weakly coupled linear systems. The algorithm has been shown to be very effective in solving high-dimensional control problems.

15.
It is necessary to measure the attributes of parts in any manufacturing process. It is also important to monitor the measurement system itself, because repeated measurements of an attribute exhibit variability around the target value. This paper considers the variability due to repeated measurements, operators, and the gauge in a measurement system. The measurement system is statistically modeled as a two-factor mixed model with one covariate and interaction; that is, the model employs J operators, randomly chosen to conduct measurements on I randomly selected parts from a manufacturing process, with each operator measuring each part K times. This paper aims to provide engineering practitioners with statistically optimal confidence intervals on the variation due to operators and gauge in the measurement system so modeled. The confidence intervals are based on a modified large-sample method (MLS) and a generalized p-value method (GEN). The proposed confidence intervals can be useful tools for determining whether a measurement system is adequate for monitoring a manufacturing process.

16.
17.
Statistical tolerance intervals are often used during design verification or process validation in diverse applications, such as the manufacturing of medical devices, the construction of nuclear reactors, and the development of protective armor for the military. As with other statistical problems, the determination of a minimum required sample size commonly arises when using tolerance intervals. Under the Faulkenberry-Weeks approach to sample size determination for parametric tolerance intervals, the user must specify two quantities, typically set to rule-of-thumb values, that characterize the desired precision of the tolerance interval. Practical applications of sample size determination for tolerance intervals often have historical data that one expects to closely follow the distribution of the future data to be collected. Moreover, such data are typically required to meet specification limits. We provide a strategy for specifying the precision quantities in the Faulkenberry-Weeks approach that utilizes both historical data and the required specification limits. Our strategy is motivated by a sampling plan problem for the manufacturing of a certain medical device that requires the calculation of normal tolerance intervals. Both classical and Bayesian normal tolerance intervals are considered. Additional numerical studies demonstrate the general applicability of our strategy for setting the precision quantities.
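A minimal sketch of the Faulkenberry-Weeks sample size computation for a two-sided normal tolerance interval, using Howe's factor approximation and the approximate criterion k(n; P, 1-alpha) <= k(n; P', delta); the precision quantities shown are the usual rule-of-thumb values, not the data-driven ones the paper proposes.

```python
import numpy as np
from scipy import stats

def k_two_sided(n, content, confidence):
    """Howe's approximation to the two-sided normal tolerance factor."""
    nu = n - 1
    z = stats.norm.ppf((1.0 + content) / 2.0)
    return z * np.sqrt(nu * (1.0 + 1.0 / n) / stats.chi2.ppf(1.0 - confidence, nu))

def fw_sample_size(content=0.90, confidence=0.95, p_prime=0.97, delta=0.10):
    """Smallest n whose (content, confidence) interval covers more than
    p_prime of the population with probability at most delta."""
    for n in range(3, 10_000):
        if k_two_sided(n, content, confidence) <= k_two_sided(n, p_prime, delta):
            return n
    raise ValueError("no n found in the search range")

print("required n:", fw_sample_size())
```

The paper's contribution is, in effect, a principled way to pick `p_prime` and `delta` from historical data and the specification limits instead of from rules of thumb.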

18.
Crash prediction models still constitute one of the primary tools for estimating traffic safety. These statistical models play a vital role in various types of safety studies. With a few exceptions, they have often been employed to estimate the number of crashes per unit of time for an entire highway segment or intersection, without distinguishing the influence that different sub-groups have on crash risk. The two most important sub-groups identified in the literature are single- and multi-vehicle crashes. Recently, some researchers have noted that developing two distinct models for these two categories of crashes provides better predictive performance than developing a single model combining both crash categories. Thus, there is a need to determine whether a significant difference exists in the computation of confidence intervals when a single model is applied rather than two distinct models for single- and multi-vehicle crashes. Building confidence intervals has many important applications in highway safety. This paper investigates the effect of modeling single- and multi-vehicle (head-on and rear-end only) crashes separately versus modeling them together on the prediction of confidence intervals of Poisson-gamma models. Confidence intervals were calculated for total (all severities) crash models and for fatal and severe injury crash models. The data used for the comparison analysis were collected on Texas multilane undivided highways for the years 1997-2001. This study shows that modeling single- and multi-vehicle crashes separately predicts larger confidence intervals than modeling them together in a single model. This difference is much larger for fatal and injury crash models than for models covering all severity levels. Furthermore, single- and multi-vehicle crashes are found not to be independent. Thus, a joint (bivariate) model that accounts for the correlation between single- and multi-vehicle crashes is developed; it predicts wider confidence intervals than a univariate model for all severities. Finally, simulation results show that separate models predict values closer to the true confidence intervals, supporting previous studies that recommended modeling single- and multi-vehicle crashes separately when analyzing highway segments.
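A minimal sketch of fitting a Poisson-gamma (negative binomial, NB2) crash model and extracting confidence intervals for the predicted mean crash frequency; the simulated AADT/segment-length data, the coefficients, and the fixed dispersion are illustrative assumptions, and statsmodels' GLM is used as one common implementation (in practice the dispersion parameter is estimated, not fixed).

```python
import numpy as np
import statsmodels.api as sm

# Illustrative Poisson-gamma crash model: mean crashes per segment
# mu_i = exp(b0 + b1*ln(AADT_i) + b2*ln(L_i)); NB2 variance mu + a*mu^2.
rng = np.random.default_rng(0)
n = 300
aadt = rng.uniform(2_000, 40_000, n)          # traffic exposure
length = rng.uniform(0.5, 5.0, n)             # segment length (mi)
mu = np.exp(-7.0 + 0.8 * np.log(aadt) + 0.9 * np.log(length))
alpha = 0.5                                   # assumed dispersion
y = rng.negative_binomial(1.0 / alpha, 1.0 / (1.0 + alpha * mu))

X = sm.add_constant(np.column_stack([np.log(aadt), np.log(length)]))
fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=alpha)).fit()
pred = fit.get_prediction(X[:3])
print(pred.conf_int())        # 95% CIs for the mean crash frequency
```

These are intervals for the mean response; the intervals the paper compares across univariate and bivariate model structures also account for the Poisson-gamma prediction uncertainty.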

19.
20.
Application of a Markov chain model in software reliability testing
Existing software reliability analysis methods have kept researchers from studying the reliability of software systems at a simpler level. Based on this observation, and in order to shorten the software testing cycle, this paper proposes a new method for software reliability testing and analysis. The method applies a Markov chain model to software reliability testing and uses a new evaluation criterion to analyze the results obtained during testing. An example demonstrates the practicality and effectiveness of the criterion, thereby making faster assessment of software reliability possible.
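A minimal sketch of usage-model-based test generation with a Markov chain, together with a toy reliability estimate over the generated test cases; the states, transition profile, and failure injection are invented and do not reproduce the paper's evaluation criterion.

```python
import random

# Usage-model testing: states are user-visible operations, transitions
# follow an (assumed) usage profile, and each random walk from "start"
# to "logout" is one generated test case.
P = {
    "start":  [("login", 1.0)],
    "login":  [("query", 0.7), ("update", 0.2), ("logout", 0.1)],
    "query":  [("query", 0.3), ("update", 0.3), ("logout", 0.4)],
    "update": [("query", 0.4), ("logout", 0.6)],
}

def generate_case():
    state, path = "start", ["start"]
    while state != "logout":
        r, acc = random.random(), 0.0
        for nxt, prob in P[state]:
            acc += prob
            if r <= acc:
                state = nxt
                break
        path.append(state)
    return path

FAILURE_PRONE = {"update"}          # pretend these operations sometimes fail

def run_case(path, fail_prob=0.02):
    return all(not (s in FAILURE_PRONE and random.random() < fail_prob)
               for s in path)

results = [run_case(generate_case()) for _ in range(10_000)]
print("estimated per-test-case reliability: %.4f" % (sum(results) / len(results)))
```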
